Search Program
Organizations
Contributors
Presentations
Birds of a Feather
Gaming & Interactive
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Graphics Systems Architecture
Hardware
Image Processing
Rendering
Full Conference
Experience
DescriptionAs SIGGRAPH attendees grapple with aging infrastructure (much of it predating the pandemic), they face a critical technological inflection point: AI and GPU rendering are accelerating rapidly. Should they invest solely in GPU technology going forward, stick with more flexible CPU farms, or strike a hybrid approach? The questions are just as valid whether they are buying new infrastructure or investing in cloud resources. Our session will help attendees understand the pros and cons of each approach, the economic considerations, where we are in the GPU adoption curve, and insights from those who have already made a migration.
Production Session
Arts & Design
Gaming & Interactive
Production & Animation
Not Livestreamed
Not Recorded
Animation
Games
Real-Time
Full Conference
Wednesday
DescriptionThis talk will break down the animation process in South of Midnight across gameplay and cutscenes, and show how a mix of art and tech brings its stop-motion, southern gothic world and characters to life.
Poster
Full Conference
Experience
DescriptionWe propose a grid optimization method with regional control that uses attention mechanisms to prioritize visually significant areas and employs an attention flow mechanism to optimize resource allocation for structural consistency, thereby enhancing mesh reconstruction precision and capturing finer local geometry details.
Poster
Full Conference
Experience
DescriptionDynamic skinning is a new method that extends standard linear blend skinning to support time-delay effects. Its general framework adds oscillatory secondary motion with great artistic control, and it is compatible with existing standard rigged characters.
Poster
Full Conference
Experience
DescriptionAnts exhibit unique abilities to self-assemble into animate, living structures, which display both fluid- and solid-like properties. We present an interactive constraint-based approach for simulating the collective dynamics of ant swarms in various 3D settings. Our method closely imitates real-world behaviors with compelling physical realism.
INVITED TO THE FIRST ROUND OF THE STUDENT RESEARCH COMPETITION
Poster
Full Conference
Experience
DescriptionThis project explores the role of the designer in digital fabrication workflows as digitization leads to higher levels of design automation. As digital technologies are adopted to streamline design to manufacturing workflows, elements of the creative process can become standardized to improve production efficiency at the cost of designer autonomy and product customization. In order to ensure designers’ agency and increase product variation, the Carrara project presents a collaborative tool utilizing agent-based modeling (ABM) to represent designers, fabrication machines, and algorithms as active co-participants in the design process. This co-participatory workflow enables a generative, scalable product line that takes advantage of digital efficiencies while providing the designer with autonomy and control in the creative process.
Poster
Full Conference
Experience
Description"Digitizing Devotion" utilizes advanced oblique photography and AI to create immersive virtual reconstructions of sacred spaces, preserving traditional worship practices for the global diaspora while ensuring cultural continuity across generations and geographical boundaries.
Poster
Full Conference
Experience
DescriptionThis paper introduces Dust in Time, an embodied and tangible interactive installation that transforms physical gestures into audio-visual responses through hourglasses and projected particles, offering a reflective exploration of time and human presence.
Poster
Full Conference
Experience
DescriptionThe animated short film Sensual explores a novel workflow for hand-painted watercolour animation, blending traditional artistic methods with AI-based frame interpolation techniques. By combining compositing with the Real-Time Intermediate Flow Estimation (RIFE) image interpolation network, we significantly reduced production time while maintaining the unique hand-painted aesthetic.
Poster
Full Conference
Experience
DescriptionFoliager is a generative AI-powered pipeline that transforms natural language into biologically plausible 3D forest ecosystems, combining ecological simulation with procedural graphics to support scientific visualization, storytelling, and environmental design.
Poster
Full Conference
Experience
DescriptionExtending Giada Peterle’s concept of auto-cartography, this paper explores Tasmania’s complex and dynamic island identity through an interactive installation powered by a customised generative AI model. By collecting human experience as a training dataset, it reimagines mapping as an embodied, affective process that engages participants to reflect on their relations to place.
Poster
Full Conference
Experience
DescriptionIn this study, we propose an experience inspired by the Anywhere Door concept, in which users transition between multiple life-sized projected virtual spaces by opening, closing, and passing through a physical door.
Poster
Full Conference
Experience
DescriptionA complete traditional puppetry performance requires diverse control interfaces to support a broader range of manipulation techniques. To address this, our work integrates three distinct immersive puppeteering experiences: VR-HMD, MR-HMD, and CAVE systems to enable asymmetric interaction, opening up new possibilities for the future of digital puppetry theater.
Poster
Full Conference
Experience
DescriptionThis study presents a method for designing balancing toys. Through interactive modeling techniques that optimize both shape and mass distribution, this research achieves the challenging feat of locating the center of mass outside the body. The designed balancing toys were successfully fabricated using an FDM 3D printer.
INVITED TO THE FIRST ROUND OF THE STUDENT RESEARCH COMPETITION
Poster
Full Conference
Experience
DescriptionWe present DreamCraft, a VR system that demonstrates the potential of combining panorama with 3D generation techniques to provide users with an intuitive and feature-rich platform for creating interactive 3D scenes. Our pilot study shows that even users with no prior experience can effectively use the system.
Poster
Full Conference
Experience
DescriptionThis paper presents EARSIM, a new approach to auditory localization training through Virtual Reality, utilizing a configurable multi-sensory cue system to enable adaptive and personalized difficulty levels. The proposed system addresses the limitations of conventional localization techniques and demonstrates potential as a flexible platform for future clinical applications in auditory rehabilitation.
INVITED TO THE FIRST ROUND OF THE STUDENT RESEARCH COMPETITION
Poster
Full Conference
Experience
DescriptionWe present a marker-based VR system that simulates real-time water surface flow by tracking ArUco markers on physical water. The system generates FlowMaps from tracked motion to drive fluid effects in Unity. A circular pool with water-jet units provides controllable flow, enabling sensor-driven, immersive fluid simulation.
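The FlowMap step described above can be sketched in isolation: given marker positions from two consecutive frames (the ArUco tracking itself and the Unity integration are omitted), per-marker velocities are splatted into a small grid texture. Function and parameter names here are illustrative assumptions, not the authors' code.

```python
import numpy as np

def flow_map(prev_pts, curr_pts, size=32, dt=1 / 60, scale=1.0):
    """Rasterize per-marker displacements into a size x size x 2 flow map.

    prev_pts, curr_pts: (N, 2) marker positions in [0, 1] x [0, 1].
    Returns per-cell average velocity (u, v); untouched cells stay zero.
    """
    flow = np.zeros((size, size, 2))
    counts = np.zeros((size, size, 1))
    vel = (curr_pts - prev_pts) / dt * scale          # velocity per marker
    cells = np.clip((curr_pts * size).astype(int), 0, size - 1)
    for (cx, cy), v in zip(cells, vel):
        flow[cy, cx] += v                             # accumulate into cell
        counts[cy, cx] += 1
    return flow / np.maximum(counts, 1)               # average where hit

# One marker moving right along the middle of the pool:
fm = flow_map(np.array([[0.40, 0.5]]), np.array([[0.45, 0.5]]))
```

In an engine, the resulting array would be uploaded each frame as a texture that a water shader samples to advect its surface detail.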
Poster
Full Conference
Experience
DescriptionThis paper presents two novel teleportation methods for VR environments that address limitations of conventional parabola-based approaches when navigating varying heights. The SphereBackcast and Penetration methods utilize straight-line specification for intuitive movement to elevated locations. Experiments with 22 participants showed our methods significantly outperformed parabola-based teleportation for height differences above 2m, while maintaining comparable performance on flat terrain. NASA-TLX and SUS evaluations confirmed improved usability and reduced cognitive load, indicating these methods can be readily integrated into existing VR applications.
INVITED TO THE FIRST ROUND OF THE STUDENT RESEARCH COMPETITION
Poster
Full Conference
Experience
DescriptionIn this poster we introduce the INT-ACT project, which aims to investigate the use of immersive mobile eXtended Reality (XR) environments for presenting the emotional, experiential and environmental dimensions of Intangible Cultural Heritage (ICH) associated with tangible cultural heritage sites. We also present a museum exhibition, developed as part of INT-ACT, that focuses on the ICH related to a prehistoric megalithic site in the Alentejo region of Portugal. Visitors to the exhibition can interact with the physical environment using an immersive mobile XR app to access different audio-visual media content presenting the ICH of the selected site.
Poster
Full Conference
Experience
DescriptionCheerleading stunts are group gymnastics performed by multiple people. As the skills involved become more challenging, it is necessary to devise better practice methods. Thus, in this paper, we propose a pretraining support system for cheerleading stunts using Virtual Reality (VR) technology. This system allows the users to experience successfully performing a stunt in the virtual space by adopting the viewpoints of the cheerleaders performing various types of stunts. Our system has the potential to meaningfully augment the established training method of previsualization of stunts.
Poster
Full Conference
Experience
DescriptionThis paper introduces SugART, an MR project that enables users to learn and recreate traditional sugar painting at home. Combining hand tracking, virtual guidance, and real-time feedback, our project supports creative expression and cultural education, lowering barriers to participation in intangible cultural heritage through accessible and interactive digital experiences.
Poster
Full Conference
Experience
DescriptionThe Gesture Lives On transforms traditional Taiwanese glove puppetry into an immersive digital performance through real-time VR gesture tracking and virtual puppet co-performance, offering a novel model for integrating cultural heritage with contemporary performance technologies.
Poster
Full Conference
Experience
DescriptionThis study explores the effective range of a weight illusion induced by AR visual effects. Results show that AR visual effects on the arm, creating a “strong” impression, can make 100–500 g weights feel lighter when lifted with the visually augmented arm.
Poster
Full Conference
Experience
DescriptionYou Can Grow Here is an immersive virtual reality experience that combines theatrical storytelling, improvisational design methods, and evidence-based wellness techniques to guide users through emotional regulation and anxiety relief—successfully exhibited in UIC’s CAVE2 and aligned with the United Nations Sustainable Development Goal of Good Health and Well-Being.
Poster
Full Conference
Experience
DescriptionWe propose an interactive camerawork authoring system for free-viewpoint 3D dance contents that synthesizes and edits the camerawork by retrieving optimal sequences from a database based on user queries of music and pose similarity, and demonstrate its effectiveness quantitatively and qualitatively.
INVITED TO THE FIRST ROUND OF THE STUDENT RESEARCH COMPETITION
Poster
Full Conference
Experience
DescriptionThe uNEEDXR™ system achieves 60,000 nits brightness in micro-OLEDs on silicon, featuring high brightness, high pixel density, low power consumption, high contrast ratio, high color saturation, and tunable energy distribution (including viewing angle, wavelength, and bandwidth).
Poster
Full Conference
Experience
DescriptionWe propose a large-étendue direct-view holographic display using dynamic optical steering with a high-pixel-resolution amplitude-only SLM.
The system expands the eye box in both lateral and depth directions by translating two lenses.
We further extend SGD-based hologram optimization to support dual light sources and amplitude-only SLM, achieving stereoscopic image delivery.
Poster
Full Conference
Experience
DescriptionThis study proposes a novel Maxwellian optical system that combines a transmissive mirror device (TMD) with spherical multi-pinholes. Its effectiveness was verified through 2D and 3D simulations, demonstrating a significantly wider viewing angle than that of conventional systems.
Poster
Full Conference
Experience
DescriptionWe propose a naked-eye stereoscopic display with an ultra-wide viewing zone by applying the display principle of general LCDs. By replacing the polarizer of an LCD with a reflective polarizer and arranging them three-dimensionally, this technology refracts light rays freely and enables an expansion of the viewing zone.
INVITED TO THE FIRST ROUND OF THE STUDENT RESEARCH COMPETITION
Poster
Full Conference
Experience
DescriptionAn "infinity mirror" is an optical novelty that uses facing mirrors to create the appearance of a tunnel of copies of a scene receding into the distance. This poster shows how to use view-dependent appearance to make all copies of the scene appear non-reflected, allowing for "speed tunnel" effects.
Poster
Full Conference
Experience
DescriptionA real-time algorithm for driving multispectral LED lights in a spherical lighting reproduction stage to achieve accurate color rendition for a dynamic scene. This technique drives several thousand multispectral LED lights at video framerate by pre-computing a LUT of the NNLS solutions across the full range of input RGB values.
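The precomputation this description mentions can be illustrated with a toy version: for each entry of a coarse RGB grid, solve a small non-negative least-squares (NNLS) problem and store the resulting LED channel weights, so that runtime lookup replaces per-frame solves. The 3×5 response matrix below is made up for illustration; a real stage would use measured spectral responses and many more channels.

```python
import numpy as np
from scipy.optimize import nnls

def build_lut(A, steps=8):
    """Precompute NNLS LED weights for a coarse grid of RGB values.

    A: (3, K) matrix mapping K LED channel intensities to camera RGB.
    Returns a steps^3 x K table; at runtime a frame's pixels index
    (or interpolate) this table instead of solving NNLS per pixel.
    """
    grid = np.linspace(0.0, 1.0, steps)
    lut = np.zeros((steps, steps, steps, A.shape[1]))
    for i, r in enumerate(grid):
        for j, g in enumerate(grid):
            for k, b in enumerate(grid):
                lut[i, j, k], _ = nnls(A, np.array([r, g, b]))
    return lut

# Toy 5-channel LED stage: columns are each channel's RGB response.
A = np.array([[1.0, 0.1, 0.0, 0.6, 0.2],
              [0.1, 1.0, 0.1, 0.4, 0.2],
              [0.0, 0.2, 1.0, 0.0, 0.2]])
lut = build_lut(A, steps=4)
```

Because every solve is independent, the table can be built offline (or in parallel) and then drives thousands of lights at video framerate with a table lookup per light.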
Poster
Full Conference
Experience
DescriptionWe propose a smartphone-based wide field-of-view HMD using inexpensive mirrors and lenticular lenses. Lenticular lenses on both display edges create multi-view images, which are then redirected by strategically placed mirrors to expand the peripheral field of view, effectively enlarging the display area without increasing the screen size.
Poster
Full Conference
Experience
DescriptionA novel method for automatic colorization of anime line drawings achieves improved accuracy over state-of-the-art segment matching-based approaches by leveraging semantic segmentation and color shuffling processes without relying on flow estimation, effectively addressing challenges posed by large motion gaps and small regions.
Poster
Full Conference
Experience
DescriptionWe evaluate four models using INR and VAE structures for compressing phase-only holograms. Our findings show that the pretrained VAE struggles with this task, while SIREN achieves 40% compression with high-quality 3D images (PSNR = 34.54 dB), highlighting the effectiveness of INRs and VAE limitations.
Poster
Full Conference
Experience
DescriptionWe present the first open-source system for automatic interpretation of Ancient Egyptian texts, combining OCR, transliteration, and translation into a unified pipeline that supports diverse writing styles and improves accessibility for learners and researchers.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Education
Games
Generative AI
Geometry
Modeling
Pipeline Tools and Work
Full Conference
Experience
DescriptionThis discussion is for educators facilitating coursework in a rapidly changing digital world, where software updates, generative artificial intelligence (AI), and online resources are shaping how learning outcomes are defined for students in higher education.
Ensuring that students in higher education are equipped with the methods, procedures, and technical understanding of technology is important for efficient and creative endeavors. This talk will present a pedagogical approach to sequential and non-sequential curricular activities for new and growing 3D Animation programs in higher education.
Industry Session
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Generative AI
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Exhibits Only
Wednesday
Description3D Gaussian splatting has rapidly emerged as a transformative technique for high-fidelity scene reconstruction and real-time rendering. It is also unlocking new capabilities for physical AI development, from robotics, to autonomous vehicles, and more. This session explores recent advances in 3D Gaussian splatting, and introduces NVIDIA NuRec APIs and tools for reconstruction and rendering. Learn how 3D Gaussian splatting accelerates the development of interactive, physically accurate AI environments and paves the way for next-generation simulation pipelines in both research and industry.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Digital Twins
Industry Insight
Virtual Reality
Full Conference
Experience
DescriptionJoin The 3D Artist Community for a panel discussion with 3D artists who’ve successfully transitioned from entertainment into industries like fashion, architecture, product design, and more. Hear firsthand how they adapted their skills, what challenges they faced, and what surprised them most along the way. Whether you're curious about switching industries or just exploring new possibilities, this is your chance to ask questions, get advice, and connect with artists who’ve been there. Learn how the world of 3D is expanding beyond film and games—and where you might fit in.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionGiven a 3D object representing the source content and a reference style image, our method performs 3D stylization with a large pre-trained reconstruction model. This is achieved in a zero-shot manner, with no training or test time optimization required, while delivering superior visual fidelity and efficiency compared to existing approaches.
Birds of a Feather
Arts & Design
Pipeline Tools and Work
Full Conference
Experience
DescriptionUser interface design for 3D graphics/VFX software has its own history, conventions, constraints, and challenges. Come meet other designers working in this exciting niche, and swap information, insights, and war stories.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
Description3D-Fixup enables realistic 3D-aware photo editing by leveraging 3D priors and a novel data pipeline that extracts training pairs from real-world videos. Its feed-forward architecture supports efficient, high-quality edits involving complex 3D transformations while preserving object identity, outperforming prior methods in both edit accuracy and user control.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present 3DGH, a generative model that creates realistic 3D human heads with composable hair and face components. By modeling both the separation and correlation between hair and face in a generative paradigm, it enables high-quality, full-head synthesis and flexible 3D hairstyle editing with strong visual consistency and realism.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionThis paper introduces a nearly second-order convergent training algorithm for 3D Gaussian Splatting that exploits independent kernel attributes and sparse coupling across images. By constructing and solving small Newton systems for parameter groups, it achieves roughly an order of magnitude faster training while maintaining or exceeding SGD-based reconstruction quality.
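The paper's exact parameter grouping and solver are not reproduced here, but the core idea — solving a tiny damped (Gauss–)Newton system per independent parameter group instead of one huge global system — can be sketched on a toy 1D Gaussian fit; all names and the finite-difference Jacobian are illustrative assumptions.

```python
import numpy as np

def newton_step(params, residual_fn, jac_fn, damping=1e-3):
    """One damped Gauss-Newton step for a small, independent parameter group.

    Solving many tiny systems (one per kernel attribute group) is what
    makes second-order updates tractable for millions of Gaussians.
    """
    r = residual_fn(params)            # (m,) residuals over affected pixels
    J = jac_fn(params)                 # (m, n) Jacobian, n is tiny
    H = J.T @ J + damping * np.eye(len(params))   # damped normal equations
    return params - np.linalg.solve(H, J.T @ r)

# Toy example: fit one 1D Gaussian's (mean, log-sigma, amplitude) to samples.
x = np.linspace(-2, 2, 41)
target = 0.8 * np.exp(-0.5 * ((x - 0.3) / 0.5) ** 2)

def model(p):
    mu, log_s, a = p
    return a * np.exp(-0.5 * ((x - mu) / np.exp(log_s)) ** 2)

res = lambda p: model(p) - target

def jac(p, eps=1e-6):                  # finite-difference Jacobian for brevity
    return np.stack([(model(p + eps * e) - model(p)) / eps
                     for e in np.eye(3)], axis=1)

p = np.array([0.0, 0.0, 1.0])
for _ in range(40):
    p = newton_step(p, res, jac)
```

The same structure — a 3×3 or similarly tiny linear solve per group — is what replaces a first-order SGD update in the approach the abstract describes.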
Poster
Full Conference
Experience
DescriptionWe introduce a novel user interface, Manu-grid, to estimate the projection function corresponding to the drawing method used in a background image, in order to obtain a geometrically consistent composite of 3D model images into the background scene.
Poster
Full Conference
Experience
DescriptionThis study proposes a region-wise confidence estimation method for anime-style line drawing colorization. By comparing local patches in the colorized image with training images using normalized cross-correlation, the method highlights uncertain regions. It improves usability by aiding artists in identifying colorization errors efficiently and reliably.
Poster
Full Conference
Experience
DescriptionA 30-parameter, physics-based model transforms digital images into authentically scanned film colour. Trained on a single roll of colour-positive film, it matches LUT accuracy without artefacts and exposes interpretable parameters, offering filmmakers a data-light and production-ready solution to revive and preserve classic film aesthetics.
Poster
Full Conference
Experience
DescriptionWe introduce a modular, open-source pipeline that combines multiple custom-trained LoRA and ControlNet models to disentangle style and identity, enabling fast, visually and narratively consistent AI-generated short films, validated through two award-winning multi-scene productions.
Poster
Full Conference
Experience
DescriptionWe present a compact, handheld holographic video camera that captures full-color holograms in real time under natural lighting, making laser-free holography possible. By integrating advanced optical components and AI-driven super-resolution, it enables high-quality holographic content capture, paving the way for portable, next-generation immersive media and real-world applications of holography.
Poster
Full Conference
Experience
DescriptionTo generate previews with near-final render quality in VFX and enable faster iteration, we propose G-FED, G-Buffer Guided Frame Extrapolation in Video Diffusion Models. G-FED denoises 1spp frames, guided by G-buffer data, to infill masked forward projections and generate high-quality images that are spatially and temporally coherent.
Poster
Full Conference
Experience
DescriptionWe propose a geometry- and illumination-aware 2D graphic compositing pipeline. We use meshes generated by off-the-shelf monocular depth estimation methods to warp the 2D graphic according to the surface geometry. Using intrinsic decomposition, we composite the warped graphic onto the albedo and reconstruct the final result by combining all intrinsic components.
Poster
Full Conference
Experience
DescriptionWe introduce the novel task of predicting flat colors for unintended small regions left unpainted by flood-fill operations—common in anime-style illustrations—and present a U-Net-based method that achieves 62.5% exact-match accuracy on professional data, outperforming naïve baselines and establishing a promising foundation for supporting anime-style colorization workflows.
Poster
Full Conference
Experience
DescriptionQRBTF is an AI QR code generator trained with ControlNet, which can generate scannable QR codes hidden within images based on prompt input.
Poster
Full Conference
Experience
DescriptionThis work presents a pipeline that converts rasterized graphic design posters into multi-layered, editable assets. It decomposes elements, addresses layer ordering using a novel Z-index strategy, and shows high accuracy through evaluations of over 24,000 posters. User feedback confirms its ability to accurately reconstruct posters with excellent fidelity.
Poster
Full Conference
Experience
DescriptionSAWNA tackles layout-sensitive text-to-image generation by treating user-specified empty regions as first-class constraints. Bounding-box masks are blurred and injected as mean-shifted, inert noise into the frozen Stable Diffusion latent, suppressing synthesis inside reserved areas while preserving diversity and quality elsewhere. This simple training-free modification supports workflows that require precise layout fidelity, including advertising (e.g., space for logos or headlines), UI design (e.g., button placement), and animation pre-production (e.g., speech bubbles, subtitles, or motion overlays).
Experiments show that SAWNA outperforms layout-aware baselines like GLIGEN and in-painting pipelines, both of which struggle to maintain truly empty regions without introducing artifacts or incoherence. In contrast, SAWNA yields clean, editable space while producing semantically rich images across the remaining canvas.
This makes it especially suitable for design-critical applications where reserved regions are integral to downstream compositing or storytelling.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe present 4D Gaussian Video (4DGV) for high-quality, low-storage volumetric video reconstruction and real-time streaming. Our method effectively handles complex motion and enables effective motion compression, achieving superior performance in both reconstruction quality and storage efficiency.
Poster
Full Conference
Experience
DescriptionMinecraft to 3D automatically converts any Minecraft world into an engine-ready, fully textured 3D scene—recognising default structures, smoothing voxel terrain, and swapping structures for high-quality models—so educators, indie developers, and artists can transform their in-game prototypes into production environments in minutes.
Poster
Full Conference
Experience
DescriptionWe propose a finetuned conditional latent diffusion model for generating a motion field from user-provided sketches, which is subsequently integrated into a latent video diffusion model via a motion adapter to precisely control the fluid movement.
Poster
Full Conference
Experience
DescriptionWe present a pipeline for designing and detecting subtle code-conveying patterns that can be printed on transparent sticker paper and then applied to real-world surfaces, rendering the modifications imperceptible to the human eye but robustly detectable by our model, with specific emphasis placed on allowing for human error in sticker placement.
Poster
Full Conference
Experience
DescriptionStructInbet is a skeleton-based inbetweening system that achieves controllable, structure-aware interpolation with improved pose clarity and motion alignment to user intent, surpassing prior point-based methods in reducing ambiguity.
Poster
Full Conference
Experience
DescriptionWe introduce an architecture-agnostic super-resolution framework that uses human visual sensitivity to allocate computational resources efficiently, delivering substantial reductions in computational demand without perceptible quality loss, as validated by user studies—offering significant advantages for applications like VR and AR.
Poster
Full Conference
Experience
DescriptionOur training-free method enables photorealistic facade editing by combining hierarchical procedural structure control with diffusion models. Starting from a facade image, we reconstruct, edit, and guide generation to produce high-fidelity, photorealistic variations. The method ensures structural consistency and appearance preservation, demonstrating the power of symbolic modeling for controllable image synthesis.
Poster
55. Train Once, Generate Anywhere: Discretization Agnostic Neural Cellular Automata Using SPH Method
9:00am - 5:30pm PDT Sunday, 10 August 2025 West Building, Level 2, Outside Room 219
Full Conference
Experience
DescriptionWe introduce SPH‑NCA, a discretization-agnostic neural cellular automaton that uses a differentiable SPH method for perception and a stable training scheme, allowing image and texture synthesis on any grid, resolution, or 3D surface while trained on a fixed-resolution 2D image.
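A minimal sketch of the SPH-based perception idea, assuming scattered sample points rather than a grid: each sample "perceives" a smoothing-kernel-weighted average of its neighbours, which is what makes the operation independent of the discretization. The kernel choice and function names are illustrative, not the authors' implementation.

```python
import numpy as np

def poly6(r, h):
    """Standard poly6 SPH smoothing kernel (2D normalization)."""
    w = np.zeros_like(r)
    m = r < h
    w[m] = (4.0 / (np.pi * h**8)) * (h**2 - r[m]**2) ** 3
    return w

def sph_blur(points, values, h=0.15):
    """Discretization-agnostic perception: each sample perceives a
    kernel-weighted average of its neighbours, whatever the sampling."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    w = poly6(d, h)
    w /= w.sum(axis=1, keepdims=True)   # normalize to convex weights
    return w @ values

pts = np.random.default_rng(0).random((200, 2))   # irregular sampling
vals = np.sin(4 * pts[:, 0])
smoothed = sph_blur(pts, vals)
```

In an NCA, this kernel-weighted gathering would replace the fixed 3×3 convolution used for perception on regular grids, so the same trained update rule transfers to any resolution or surface sampling.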
Poster
Full Conference
Experience
DescriptionWe propose a two-stage sketch-guided smoke illustration generation framework using stream function. The input sketch is converted into the stream function through a latent diffusion model, which subsequently drives the velocity field generation. The velocity field serves as a guidance force to drive the smoke simulation.
Poster
Full Conference
Experience
DescriptionWe propose a new game engine module, Capsule, that allows multiple players to efficiently share one engine. We implemented Capsule in O3DE in an application-agnostic way. Our experience with four applications shows that Capsule increases datacenter resource utilization by accommodating up to 2.25x more players, without degrading the player gaming experience.
Poster
Full Conference
Experience
DescriptionThis study examines distance management in combat sports training with haptic feedback. Results show that haptic feedback reduced punch distances and movement, while no significant difference was found in step count or average distance to the opponent. Haptic feedback aids better distance management with less movement.
Poster
Full Conference
Experience
DescriptionThis study presents a gaze entropy–based framework to identify cognitive failures and predict accident risk before a TOR (Take-Over Request) in conditionally autonomous driving. Using a Random Forest model, it enables early risk detection and offers practical insights for driver monitoring.
Poster
Full Conference
Experience
DescriptionWe achieve physically plausible 3D fragment reassembly by framing it as path-verified spectral packing, using FFT correlation and alignment-maximizing ICP refinement against a known target boundary for high-fidelity, collision-free reconstruction.
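The spectral-packing idea — scoring every translation of a fragment at once with an FFT cross-correlation — can be shown in a toy 2D occupancy version. The poster works with 3D fragments, path verification, and ICP refinement against a known boundary, none of which are reproduced here; this sketch only finds a collision-free placement.

```python
import numpy as np

def best_shift(container, fragment):
    """Find the translation maximizing overlap of `fragment` with free
    space in `container`, via FFT cross-correlation (O(n log n) instead
    of testing every shift explicitly)."""
    free = 1.0 - container                       # 1 where space is empty
    # Circular cross-correlation: score[s] = sum of free space under the
    # fragment when the fragment is shifted by s.
    score = np.real(np.fft.ifft2(np.fft.fft2(free) *
                                 np.conj(np.fft.fft2(fragment))))
    return np.unravel_index(np.argmax(score), score.shape)

container = np.zeros((16, 16))
container[:, :8] = 1.0                           # left half already occupied
fragment = np.zeros((16, 16))
fragment[:4, :4] = 1.0                           # 4x4 piece to place
dy, dx = best_shift(container, fragment)
```

The returned shift places the piece entirely in free space; a full reassembly system would then verify an insertion path and refine the pose with ICP, as the abstract describes.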
Poster
Full Conference
Experience
DescriptionThis study demonstrates that proposed interactive posters and trailers breaking the "Fourth Wall" significantly boost movie anticipation and viewing intentions, offering an effective promotional strategy for mobile-based video streaming platforms.
Poster
Full Conference
Experience
DescriptionThis paper proposes PAAP (Performer-Aware Automatic Panning System), the first system to automatically track performer(s) and generate spatial audio panning data integrated with a Digital Audio Workstation (DAW). Real-time processing of PAAP via Open Sound Control (OSC) confirms its readiness for deployment in professional music production.
Poster
Full Conference
Experience
DescriptionThis study presents a multimodal framework integrating human factors (workload, situation awareness), biometrics (heart rate variability, eye-tracking), and spatial complexity to predict Level 2 autonomous driving accidents, achieving 73.7% accuracy via logistic regression, with age and workload as key predictors and elevated cognitive load in complex environments informing real-time adaptive safety interventions.
Poster
Full Conference
Experience
DescriptionSEE-2-SOUND is a zero-shot approach that generates spatial audio for visual content. It decomposes the task into four steps: identifying visual regions of interest, locating them in 3D space, generating mono-audio for each, and integrating them into spatial audio. Our approach can generate realistic spatial-audio from images or videos.
Poster
64. Skylight: Real-Time Projection Mapping for Surgical Navigation Leveraging Skin-Adhered Fiducials
9:00am - 5:30pm PDT Sunday, 10 August 2025 West Building, Level 2, Outside Room 219Full Conference
Experience
DescriptionSkylight is a surgical navigation system that uses skin-mounted fiducials and real-time projection mapping to display high-accuracy, CT-registered guidance directly onto the patient’s body -- eliminating the need for bone-mounted trackers and enhancing surgical precision and usability.
Poster
Full Conference
Experience
DescriptionStroke Imprint is a knitted wearable that simulates affective strokes to comfort young women experiencing anxiety, using pressure sensing and SMA-based actuation. Paired with a digital interface, the glove allows users to record personalized tactile sensations. Through user interviews, design iterations, and user testing, the study demonstrates its therapeutic potential as an anxiety-tracking wearable within a closed biofeedback loop.
Poster
Full Conference
Experience
DescriptionPlay with Earth introduces a novel project addressing the preservation and innovation of ICH, focusing on traditional mud toys from China's Yellow River. Based on comprehensive documentation comprising 15,686 photographs of mud toys and interviews with inheritors, our project delivers an interactive platform combining traditional craftsmanship with AI-assisted creativity.
Poster
Full Conference
Experience
DescriptionRay tracing is a widely used technique for modeling optical systems, involving sequential surface-by-surface computations that can be computationally intensive. We propose Ray2Ray, a novel method that leverages implicit neural representations to model ray tracing through optical systems with greater efficiency and performance, eliminating the need for surface-by-surface computations with a single-pass, end-to-end model. Ray2Ray learns the mapping between rays emitted from a given source and their corresponding rays after passing through a given optical system in a physically accurate manner.
Poster
Full Conference
Experience
DescriptionWe present a comparative analysis of skin tone rendering in MetaHuman avatars using real-world and reference-based color inputs, revealing systematic differences across the Monk Skin Tone scale and highlighting key limitations in current real-time rendering pipelines for darker and intermediate skin tones.
Poster
Full Conference
Experience
DescriptionGBake introduces a raytracing-based technique for creating reflection probes in Gaussian-splatted environments that overcomes inherent EWA splatting errors at cubemap seams, enabling realistic integration of traditional mesh objects into scenes comprised of 3D Gaussians.
INVITED TO THE FIRST ROUND OF THE STUDENT RESEARCH COMPETITION
Poster
Full Conference
Experience
DescriptionWe propose the first framework to enable spatial adaptivity with the closest point method, which provides a more efficient spatial discretization suitable for recent applications of the closest point method in computer graphics, such as fluid simulation [Morgenroth et al. 2020] and geometry processing [King et al. 2024].
INVITED TO THE FIRST ROUND OF THE STUDENT RESEARCH COMPETITION
Poster
Full Conference
Experience
DescriptionThis paper presents a high-fidelity steganography method for 3D Gaussian splatting that requires no additional training. We propose a bit-level embedding technique that leverages the lower bits of the 32-bit floating-point representation. We further introduce an embedding strategy that considers each Gaussian's opacity and incorporates RSA encryption.
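As an illustration of the bit-level embedding idea (the opacity-aware strategy and RSA encryption are separate layers of the paper's method), writing a payload into the low mantissa bits of a float32 can be sketched as follows; the function names are hypothetical:

```python
import struct

def embed_bits(value: float, payload: int, n_bits: int = 8) -> float:
    """Overwrite the n_bits lowest mantissa bits of a float32 with payload."""
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    mask = (1 << n_bits) - 1
    bits = (bits & ~mask) | (payload & mask)
    (out,) = struct.unpack("<f", struct.pack("<I", bits))
    return out

def extract_bits(value: float, n_bits: int = 8) -> int:
    """Read back the n_bits lowest mantissa bits of a float32."""
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    return bits & ((1 << n_bits) - 1)
```

Because only the lowest mantissa bits change, the perturbation is a few hundred ulps at most, which is why such embedding can be visually lossless.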
Poster
Full Conference
Experience
DescriptionHyperParamBRDF uses hypernetworks conditioned on physical parameters to predict nanostructure BRDFs with high fidelity, accelerating appearance evaluation by orders of magnitude compared to simulation and enabling real-time exploration.
Poster
Full Conference
Experience
DescriptionWe develop an object insertion pipeline and interface that enables iterative editing of illumination-aware composite images. Our pipeline leverages off-the-shelf computer vision methods and differentiable rendering to reconstruct a 3D representation of a given scene. Users can add 3D objects and render them with physically accurate lighting effects.
Poster
Full Conference
Experience
DescriptionWe present a high-performance, VFX-inspired workflow that transforms unstructured CFD data into scalable, high-fidelity visualizations using parallel voxelization, OpenVDB export, and CyclesPhi rendering, supporting both batch processing and interactive frame exploration for scientific analysis and visual communication.
Poster
Full Conference
Experience
DescriptionSurfelPlus introduces a real-time global illumination renderer optimized for low-end hardware, achieving dynamic indirect lighting through unified surfel generation, adaptive surfel management, and advanced spatial-temporal filtering, significantly improving performance and visual fidelity without expensive precomputations.
Poster
Full Conference
Experience
DescriptionOur GPU-resident pipeline based on Unreal Engine 5 unifies scene generation, rendering, and processing entirely on the GPU, eliminating CPU–GPU transfers and disk I/O to achieve near-constant per-sample latency, up to 12× speedups, and sustained high-throughput training with effectively infinite synthetic data.
Poster
Full Conference
Experience
DescriptionThis paper proposes a polarization path tracing method that incorporates multiple microfacet reflections and introduces an approximation potentially enabling computationally efficient rendering.
INVITED TO THE FIRST ROUND OF THE STUDENT RESEARCH COMPETITION
Poster
Full Conference
Experience
DescriptionThis paper introduces innovative methods for generating gestures, facial expressions, and exaggerated emotional expressions for non-photorealistic characters using comics-extracted expression data and dialogue-specific semantic gestures for conversational AI, achieving significantly enhanced user satisfaction compared to a state-of-the-art photorealistic method.
Poster
Full Conference
Experience
DescriptionSpeech-driven 3D facial animation based on disentangled phoneme and prosody features, enabling fine-grained and intuitive control over visemes and expressions. It uses a convolutional autoencoder to learn a relative motion prior and a transformer to map these interpretable audio features into latent deformations.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionGeometry processing often requires the solution of PDEs with boundary conditions on the manifold’s interior. However, input manifolds can take many forms, each requiring specialized discretizations. Instead, we develop a unified framework for general manifold representations by extending the closest point method to handle interior boundary conditions.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionA novel deep learning system enhances realistic virtual oculoplastic surgery simulations.
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionYUBI, a novel interface, translates nuanced finger force inputs into continuous, full-body avatar motion, fostering strong embodiment in virtual reality. It enables embodied interactions like object manipulation and navigation using only finger strength, overcoming physical constraints while delivering realistic haptic experiences.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe propose a divide-and-conquer approach for orienting large-scale, non-watertight point clouds. The scene is first segmented into blocks, and normal orientations are estimated independently within each block. These local orientations are then globally unified through a graph-based formulation, solved via 0-1 integer optimization. Experiments demonstrate the robustness of our method.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionThis paper introduces a novel median filtering algorithm, using hierarchical tiling to reduce redundant computations and achieve better complexity than prior sorting-based methods. The paper discusses two implementations, for both small and larger kernel sizes, that outperform the state of the art by up to 5x on modern GPUs.
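For reference, the sorting-based baseline that tiling-based methods improve on can be sketched as a naive per-pixel filter with clamped borders (illustrative only, not the paper's hierarchical algorithm):

```python
def median_filter(img, k):
    """Naive sorting-based k x k median filter over a 2D list, clamped at borders."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Gather the clamped k x k neighborhood and take its median by sorting.
            window = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                      for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            window.sort()
            out[y][x] = window[len(window) // 2]
    return out
```

Each pixel re-sorts its full window, so the cost grows quickly with kernel size; hierarchical tiling avoids re-doing this shared work across neighboring pixels.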
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe introduce a material model for diffuse fluorescence that is compatible with RGB and spectral rendering. This model builds on an analytically integrable Gaussian-based model of the spectral reradiation that is efficient enough to permit real-time rendering and editing of such appearance.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionThis work presents a statistical wave-scattering model for surfaces with nanoscale mixtures in geometry and material. It predicts average appearance (BRDF) and draws realistic speckles directly from surface statistics, without explicit definitions. The proposed model demonstrates various applications including corrosion (natural), particle deposition (man-made) and height-correlated mixture (artistic).
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present an innovative hybrid near-wall model for the multi-resolution lattice Boltzmann solver to effectively enable simulations of high Reynolds number turbulent boundary layer flows. For the first time, it strikes an excellent balance between the precision demanded by industrial computational design and the efficiency required for various visual animations.
Talk
Gaming & Interactive
New Technologies
Production & Animation
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Digital Twins
Dynamics
Games
Generative AI
Modeling
Performance
Real-Time
Rendering
Simulation
Virtual Reality
Full Conference
Virtual Access
Sunday
DescriptionWe propose a novel mobile scanning solution that allows end-users to reconstruct 3D hairstyles with actual per-strand curves from just a phone scan. Using a mixture of deep learning and optimization, we make hair scanning fast and accessible while delivering high-quality assets ready to be used in any 3D software.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present a general spectral-domain simulation framework for optical heterodyne detection (OHD), extending path integral rendering to capture power spectral density of OHD. Unlike existing domain-specific tools, our approach supports diverse scenes and applications. We validate it against real-world data from FMCW lidar, blood flow velocimetry, and wind Doppler lidar.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe propose Neural PLS, a neural particle level-set method for tracking and evolving dynamic neural representations. Oriented particles serve as interface trackers and sampling seeders, enabling efficient evolution on a multi-resolution grid-hash structure. Our approach integrates traditional PLS and implicit neural representations, achieving superior performance in benchmarks and physical simulations.
Spatial Storytelling
Gaming & Interactive
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Augmented Reality
Capture/Scanning
Computer Vision
Ethics and Society
Industry Insight
Performance
Spatial Computing
Full Conference
Experience
DescriptionA case study of the storytelling and technical developments of THE TENT, an AR tabletop narrative built with volumetric video and photogrammetry that premiered at SXSW 2024, toured the world (including the Immersive Pavilion at SIGGRAPH 2024), and was lauded for its use of cinematic and theatrical techniques.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present a platform for creating believable, conversational digital characters that combine conversational AI, speech, animation, memory, personality, and emotions. Demonstrated through Digital Einstein, our system enables interactive, story-driven experiences and generalizes to any character, making immersive, AI-powered character experiences more accessible than ever.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionSpheres that are disjoint from a given union of spheres can be computed by solving a convex hull problem. This can be exploited for contouring discretely sampled signed distance functions.
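The convex-hull formulation is the paper's contribution; the disjointness condition underlying it is simple: the largest sphere centered at a point p that avoids every sphere (c_i, r_i) has radius min_i(|p - c_i| - r_i). A minimal sketch of that condition (illustrative, not the paper's algorithm):

```python
import math

def max_disjoint_radius(p, spheres):
    """Largest radius of a sphere centered at p that stays disjoint from
    every sphere in `spheres`, given as (center, radius) pairs.

    A negative result means p already lies inside the union.
    """
    return min(math.dist(p, c) - r for c, r in spheres)
```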
Industry Session
Arts & Design
Production & Animation
Animation
Full Conference
Experience
Exhibits Only
Tuesday
DescriptionWorking with prim patterns is a fundamental part of unlocking Solaris. In this talk we discuss the basics of prim patterns, powerful auto-collections, and crafting patterns to be efficient. We will also discuss some of the new auto-collections that are part of Houdini 21.
Educator's Forum
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Education
Ethics and Society
Games
Generative AI
Lighting
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Virtual Reality
Full Conference
Virtual Access
Experience
Tuesday
Description"A Sparrow’s Song" is a CG-animated short, following a widowed air raid warden who finds a dying sparrow in WWII. Created as a diploma project at Filmakademie Baden-Württemberg, it blends traditional workflows with modern production technologies. We explore creative problem-solving, sustainability, and innovative techniques that brought the film to life.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionInspired by a true story, a widowed air raid warden in the midst of World War II struggles to overcome grief and rediscover joy in her life—until she finds a dying sparrow she hopes to save.
Industry Session
Production & Animation
Animation
Pipeline Tools and Work
Full Conference
Experience
Exhibits Only
Monday
DescriptionThis session will reflect on the Netflix Animation Studio of today, highlighting the addition of Animal Logic to the brand. We will explore what it means to be part of Netflix, including the unique opportunities it presents, the challenges we’ve navigated, and the responsibility we share in shaping the future of animation. Through the lens of our unified end-to-end (E2E) pipeline, we will celebrate the foundations we've built, honor the teams we were, and look ahead to the possibilities of what we can achieve together.
Talk
Gaming & Interactive
New Technologies
Production & Animation
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Digital Twins
Dynamics
Games
Generative AI
Modeling
Performance
Real-Time
Rendering
Simulation
Virtual Reality
Full Conference
Virtual Access
Sunday
DescriptionDisney Animation makes heavy use of Ptex, which required a texture streaming pipeline. The goal was to create a scalable system that provides a real-time experience even as the number of Ptex textures grows into the thousands. We cap the maximum size of the GPU cache and employ an LRU eviction scheme.
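The capped-cache-with-LRU-eviction policy described above can be sketched host-side with an ordered map; the class and method names below are hypothetical, not Disney's implementation:

```python
from collections import OrderedDict

class TileCache:
    """Capped texture-tile cache with least-recently-used eviction."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._tiles = OrderedDict()  # tile_id -> tile data, oldest first

    def get(self, tile_id):
        if tile_id in self._tiles:
            self._tiles.move_to_end(tile_id)  # mark as most recently used
            return self._tiles[tile_id]
        return None  # cache miss: caller streams the tile in and calls put()

    def put(self, tile_id, data):
        if tile_id in self._tiles:
            self._tiles.move_to_end(tile_id)
        self._tiles[tile_id] = data
        if len(self._tiles) > self.capacity:
            self._tiles.popitem(last=False)  # evict the least recently used tile
```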
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present an implicitly integrated, quaternion-based constrained Rigid Body Dynamics (RBD) solver that guarantees satisfaction of kinematic constraints, unifying the solution strategy for complex mechanical systems with arbitrary kinematic structures by navigating subspaces spanned by constraint forces and torques for systems with redundant constraints, over-actuation, and passive degrees of freedom.
Course
Research & Education
Livestreamed
Recorded
Animation
Education
Modeling
Rendering
Full Conference
Virtual Access
Experience
Sunday
DescriptionFor a beginner, walking into a SIGGRAPH conference is an intimidating experience. There is much to see and much to do, and everyone seems to be speaking an unfamiliar language where they ooh and ahh over things that they appreciate but you don’t. This course is for them!
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionThis paper proposes a scalable framework using Bayesian Neural Networks and a novel 2mD acquisition function to efficiently discover gamut boundaries in performance space. Combining NSGA-II's diversity and Bayesian Optimization's efficiency, the method enables large-batch, parallel optimization, outperforming traditional approaches in real-world engineering and fabrication tasks.
Talk
Arts & Design
Gaming & Interactive
Production & Animation
Livestreamed
Recorded
Not Recorded
Animation
Digital Twins
Industry Insight
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Full Conference
Virtual Access
Sunday
DescriptionHNTB, a leading infrastructure engineering firm, collaborates with Cesium to enhance AEC workflows by integrating 3D geospatial context into runtime engines. This partnership reduces modeling time for large-scale projects and introduces tools for efficient editing. Cesium's platform optimizes and streams 3D data, improving design, visualization, and analysis for infrastructure projects.
Industry Session
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Generative AI
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Exhibits Only
Wednesday
DescriptionSimulation is rapidly redefining how embodied AI systems learn, adapt, and deploy in the real world. In this talk, we’ll share best practices from Lightwheel’s work supporting leading research and industry teams with SimReady assets, scalable synthetic data pipelines, and simulation-first workflows.
We’ll walk through a practical sim-to-real pipeline built on top of NVIDIA Isaac Sim, Isaac Lab, OpenUSD, and GR00T-compatible infrastructure, showcasing how Lightwheel enables teams to fine-tune foundation models and reinforcement learning policies entirely in simulation before successful real-world deployment. We’ll highlight how high-quality, physics-accurate SimReady assets — from rigid to deformable — play a critical role in generalization, and how Lightwheel’s open-source and cloud tooling are helping to democratize access to high-quality simulation for the broader embodied AI community.
From generating realistic demonstrations in simulation—using validated SimReady assets for manipulation and locomotion—to co-training and deploying robot foundation models like Isaac GR00T N1.5, this session distills hard-earned insights into what works—and what doesn’t—when building generalist robots with simulation at the core.
Birds of a Feather
Gaming & Interactive
New Technologies
Artificial Intelligence/Machine Learning
Computer Vision
Display
Dynamics
Graphics Systems Architecture
Image Processing
Modeling
Pipeline Tools and Work
Virtual Reality
Full Conference
Experience
DescriptionEveryone in the ACES community (users, contributors, implementers, and anyone interested in learning more) is invited to take part in this informal, open discussion. We'll cover key topics including the features of ACES 2.0, its current implementation status across both commercial and open-source tools, and its continued evolution.
This BoF is part of a full day of open source BoFs hosted by the Academy Software Foundation as part of Open Source Days. Learn more at aswf.io/opensourcedays.
ACM SIGGRAPH Award Talk
Livestreamed
Recorded
Full Conference
Virtual Access
Experience
Tuesday
DescriptionEach year, ACM SIGGRAPH presents nine awards recognizing exceptional achievements in computer graphics and interactive techniques at the ACM SIGGRAPH Conference.
For a list of the awardees, visit: https://www.siggraph.org/awards/
ACM SIGGRAPH 365 - Community Showcase
Birds of a Feather
Research & Education
Not Livestreamed
Not Recorded
Scientific Visualization
Full Conference
Experience
DescriptionThe ACM SIGGRAPH Cartographic Visualization (Carto) session explores how viewpoints and techniques from the computer graphics community can be effectively applied to cartographic and spatial data sets. Speakers demonstrate their latest tools and application efforts.
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Education
Full Conference
Experience
DescriptionThe ACM SIGGRAPH History Archive is an initiative created by individuals who believe in the free exchange of information as a means of improving society. Knowledge is only possible when information is obtainable. The SIGGRAPH community gathers each year at the annual conference to share knowledge, network with people, and learn from each other. The work included in the SIGGRAPH History online Archive is a testament to the fact that open access to information fuels innovation, creativity, and achievement. This session focuses on major improvements to the archive over the past year, including adding 12,000 new entries, programming new features, developing new pipelines, optimizing the infrastructure, adding new design features, and scanning hundreds of documents. The large team of volunteers and interns works daily to research and enter new entries, fix bugs, and add new functionality.
The physical archive is currently housed at Bowling Green State University (home of the 2nd SIGGRAPH conference) and will be expanding its footprint. Recent acquisition of the Jim Blinn collection of SIGGRAPH artifacts necessitated a redesign of the space. The SIGGRAPH Archive is also involved in a major consortium of new media art archives from around the world and helps lead an initiative to globally connect archives. This session also will solicit audience input regarding the future of the SIGGRAPH History Archive, possible enhancements, integration of new technologies, and its long-term sustainability.
ACM SIGGRAPH 365 - Community Showcase
Research & Education
Not Livestreamed
Not Recorded
Industry Insight
DescriptionYearly Pioneers social gathering with light appetizers and a cash bar. No agenda, just schmoozing with old friends and colleagues. Admission by Pioneer membership as printed on your badge. (This year the Reception does *not* overlap with Real-Time Live!)
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Gaming & Interactive
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Education
Full Conference
Experience
DescriptionThe kickoff of the 2025 SIGGRAPH Educators Program will include an overview of activities and opportunities with, and sponsored by, the ACM SIGGRAPH Education Committee. Additionally, the winning entries from the SpaceTime competition will be shown along with a screening of the show reel for the 2025 double-curated Faculty Submitted Student Work (FSSW) exhibit. Designed as a way for educators to share their project ideas across schools and disciplines, FSSW is an online archive of assignment and project briefs as well as a curated video emphasizing the variety of student work and schools submitted. The annual exhibition video is a taste of the content available on the Education Committee website and this event showcases and celebrates to the greater SIGGRAPH community the best examples of student work done for projects and assignments for the 2024-2025 school year.
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Performance
Real-Time
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionAcroType is an interactive installation where visitors see themselves composed from numerous cut-outs on a video screen. Thousands of programmed and animated ants carry bits and pieces of the live video image and constantly recompose this video feed bit by bit. Visitors watch themselves appearing and disappearing as the laborious ants steadily assemble their portraits. Like leaf-cutter ants, these artificial animals never tire of creating and recomposing the human portraits.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionParth delivers adaptive fill-reducing ordering to accelerate Cholesky solvers in simulations with dynamic sparsity patterns, such as contact modelling, achieving up to 255× ordering speedups. With seamless, three-line integration into popular solvers like MKL and Accelerate, Parth ensures reliable, high-performance computations for applications in computer graphics and scientific computing.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present an algorithm for simulating large-scale, violently turbulent two-phase flows—such as breaking ocean waves, tsunamis, and asteroid impacts—at extreme resolutions of the coupled water-air velocity field. This is achieved by integrating a new multiphase FLIP variant with highly efficient dual particle–grid adaptivity and a novel adaptive Poisson solver.
Talk
Arts & Design
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Computer Vision
Industry Insight
Lighting
Modeling
Pipeline Tools and Work
Real-Time
Rendering
Full Conference
Virtual Access
Sunday
DescriptionWe advertise the use of tetrahedral grids constructed via the longest-edge bisection algorithm for rendering volumetric data with path tracing. Our GPU implementation outperforms regular grids by a speed-up factor of up to 30 and allows production assets to be rendered in real time.
Talk
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Dynamics
Geometry
Industry Insight
Modeling
Pipeline Tools and Work
Rendering
Simulation
Full Conference
Virtual Access
Sunday
DescriptionIn this paper, we discuss solutions we have found to common problems of non-procedural groom workflows, and how they were used to create, animate, and simulate high-fidelity, photo-realistic character grooms for Mufasa: The Lion King.
Educator's Day Session
Production & Animation
Research & Education
Not Livestreamed
Recorded
Artificial Intelligence/Machine Learning
Capture/Scanning
Digital Twins
Games
Hardware
Modeling
Spatial Computing
Full Conference
Virtual Access
Experience
Monday
DescriptionThis technical talk presents recent innovations in hardware and post-processing workflows from Corbel3D (Pixel Light Effects), aimed at advancing mobile photogrammetry and high-fidelity 4D scanning. Key developments include portable head scanners integrating white and UV light capture, multi-mode stacked acquisition methods, and high-speed SLR burst synchronization at 120fps. Improvements in power source buffering significantly reduce on-set energy demands, enabling rapid deployment of full-body mobile scanning systems. We will also explore emerging cross-industry applications in sports performance analysis and ergonomic movement research, highlighting the broader impact of high-resolution volumetric data capture.
Course
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Games
Geometry
Industry Insight
Lighting
Performance
Real-Time
Rendering
Full Conference
DescriptionModern video games employ a variety of sophisticated algorithms to produce groundbreaking 3D rendering pushing the visual boundaries and interactive experience of rich environments. This course brings state-of-the-art and production-proven rendering techniques for fast, interactive rendering of complex and engaging virtual worlds of video games.
This is the course to attend if you are in the game development industry or want to learn the latest and greatest techniques in the real-time rendering domain!
Course
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Recorded
Games
Geometry
Industry Insight
Lighting
Performance
Real-Time
Rendering
Full Conference
Virtual Access
Tuesday
DescriptionModern video games employ a variety of sophisticated algorithms to produce groundbreaking 3D rendering pushing the visual boundaries and interactive experience of rich environments. This course brings state-of-the-art and production-proven rendering techniques for fast, interactive rendering of complex and engaging virtual worlds of video games.
This is the course to attend if you are in the game development industry or want to learn the latest and greatest techniques in the real-time rendering domain!
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present a GPU-optimized IPC framework achieving up to 10× speedup across soft, stiff, and hybrid simulations. Key innovations include a connectivity-enhanced MAS preconditioner, a parallel-friendly inexact strain limiting energy, and a hash-based two-level reduction strategy for fast Hessian assembly and efficient affine-deformable coupling.
Industry Session
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Generative AI
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Exhibits Only
Wednesday
DescriptionDiscover how simulation-first development is advancing humanoid robotics. AEON, Hexagon’s advanced humanoid robot, was engineered entirely in virtual environments using NVIDIA Omniverse, Isaac platform technologies, and OpenUSD. This approach enabled rapid skill acquisition and robust validation, preparing AEON for complex industrial tasks in real-world settings. The session will showcase how 3D simulation empowers robots to master perception, locomotion, and coordination—dramatically accelerating development cycles and reducing deployment risks.
Talk
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Dynamics
Geometry
Pipeline Tools and Work
Rendering
Simulation
Full Conference
Virtual Access
Wednesday
DescriptionThis presentation will provide an in-depth exploration of RodeoFX's implementation of Houdini Solaris as a cornerstone of its VFX pipeline for House of the Dragon Season 2. Aimed at the Houdini community and CG artists, the session will highlight the challenges, solutions, and benefits of leveraging Solaris for large-scale productions.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe present the first scene-update aerial path planning algorithm specifically designed for detecting and updating change areas in urban environments, which paves the way for efficient, scalable, and adaptive UAV-based scene updates in complex urban environments.
Art Paper
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Games
Generative AI
Performance
Physical AI
Real-Time
Robotics
Simulation
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Experience
Monday
DescriptionAfter all of the presentations, attendees are invited to participate in a Q&A.
Educator's Forum
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Education
Ethics and Society
Games
Generative AI
Lighting
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Virtual Reality
Full Conference
Virtual Access
Experience
Tuesday
DescriptionThis paper introduces a case study of an AI & Filmmaking course designed as a sandbox for generative AI experimentation in computer graphics education. It explores students’ collaborative creation of a silent documentary, analyzing learning outcomes, technical and ethical concerns, and AI’s role in reenacting historical events in documentary storytelling.
Educator's Day Session
Research & Education
Not Livestreamed
Recorded
Artificial Intelligence/Machine Learning
Education
Full Conference
Virtual Access
Experience
Monday
DescriptionJoin us for an insightful session where we will unveil NVIDIA’s comprehensive resources to enhance academic programs and drive impactful student research. Whether you are new to the field or an experienced educator, this session will equip you with the tools needed to upskill, design innovative curricula, and mentor groundbreaking projects.
NVIDIA’s Deep Learning Institute (DLI) offers hands-on training in AI, accelerated computing, and data science. Key features include:
- End-to-End Projects: Build and deploy comprehensive projects across various technologies.
- Industry-Standard Tools: Gain experience with widely used software and frameworks.
- Live Workshops: Engage in instructor-led workshops taught by NVIDIA-certified experts.
- Self-Paced Courses: Access online courses anytime to fit your schedule.
- Certifications: Earn NVIDIA course certificates to boost your career.
The NVIDIA University Ambassador Program empowers qualified faculty and researchers to teach workshops to their students at no cost, fostering a global network of AI educators and researchers.
The Teaching Kit Program supports university educators with downloadable courseware in accelerated computing, deep learning, and robotics, including lecture materials, GPU cloud resources, and self-paced courses.
This session will be presented by Will Ramey, senior director at NVIDIA, who leads global teams responsible for the company’s developer programs and the Deep Learning Institute. Since joining NVIDIA in 2003, Will has served in various technical and leadership roles, including as the first product manager for the CUDA parallel programming platform. Prior to NVIDIA, he managed an independent game studio and developed advanced technology for the entertainment industry. Will holds a degree in computer science from Willamette University and completed the Japan Studies Program at Tokyo International University.
With over 500 DLI University Ambassadors at hundreds of institutions globally, NVIDIA is committed to advancing the SIGGRAPH community through cutting-edge education and research. Don’t miss this opportunity to join the forefront of AI and computing innovation.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Artificial Intelligence/Machine Learning
Augmented Reality
Image Processing
Modeling
Real-Time
Spatial Computing
Full Conference
Experience
DescriptionNow in its third year, AI3D gathers creators using generative AI to create 3D—text/image-to-3D, AI-based texturing, scene synthesis, sketch-to-3D, and 3D-controlled generative video and beyond. Unlike traditional procedural tools, these workflows leverage diffusion models, neural fields, and transformers to generate spatial content. This interactive BOF blends Demo Jam and dialectical discussion—bring your work, prototypes, and questions. Whether you’re building tools, pipelines, or speculative interfaces, join us to share ideas and connect with others shaping the future of AI-native 3D creation.
Technical Workshop
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Full Conference
Experience
DescriptionGenerative AI is transforming how we create, edit, and understand visual content, yet a gap remains between researchers building these tools and artists using them. The 7th CVEU workshop at SIGGRAPH 2025 invites researchers, artists, and industry practitioners to bridge this gap and shape the future of creative workflows with GenAI. We will explore generative models for image and video creation, interactive editing, and personalized content generation while addressing practical challenges of latency and scalability. Through keynotes, artist-researcher discussions, and an art gallery, we will highlight emerging tools that empower creators.
More details: https://cveu.github.io/
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Performance
Real-Time
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionThe Encephalartos woodii is a cycad believed to be extinct in the wild. Only one specimen was ever discovered, and it has since been propagated in botanical gardens; all existing specimens are clones of this single plant, and all are male. With the female specimen undiscovered, 'AI in the Sky' joins the search for a female using drone technology and artificial intelligence (AI).
Educator's Forum
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Education
Fabrication
Generative AI
Image Processing
Lighting
Pipeline Tools and Work
Real-Time
Rendering
Virtual Reality
Full Conference
Virtual Access
Experience
Wednesday
DescriptionGenerative AI is transforming architectural design by assisting creative decision-making. The talk presents an architectural design course where students incorporated AI as a co-pilot—a means to break free from creative stagnation, explore different design personas within themselves, and push the boundaries of their architectural thinking while navigating real-world constraints.
Talk
Gaming & Interactive
New Technologies
Production & Animation
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Digital Twins
Dynamics
Games
Generative AI
Modeling
Performance
Real-Time
Rendering
Simulation
Virtual Reality
Full Conference
Virtual Access
Sunday
DescriptionAI-Powered Real-Time VFX for Mobile explores the fusion of Generative Adversarial Networks (GANs) and GPU particle systems to achieve cinematic-quality fire and water effects on mobile devices. The technology optimizes real-time rendering across a wide range of hardware, enhancing mobile gaming and social media experiences while maintaining performance efficiency.
Birds of a Feather
New Technologies
Production & Animation
Artificial Intelligence/Machine Learning
Generative AI
Image Processing
Industry Insight
Lighting
Pipeline Tools and Work
Full Conference
Experience
DescriptionThis session examines how virtual production and AI can enhance creative possibilities and streamline production workflows, particularly for independent teams operating with limited resources.
Using a collaborative case study between Lightcraft and Beeble as a jumping-off point, we’ll examine how real-world shoots, 3D environments, and AI-assisted relighting and keying can be combined in new ways. The session will focus on the creative opportunities unlocked when technical bottlenecks are removed, rather than specific tool demonstrations.
Attendees will gain insights into emerging hybrid workflows that blend traditional filmmaking with AI-enhanced processes, reshaping how we approach storytelling, timelines, and creative decision-making.
Appy Hour
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Computer Vision
Education
Games
Generative AI
Image Processing
Lighting
Pipeline Tools and Work
Real-Time
Spatial Computing
Full Conference
Experience
DescriptionUse AI to intuitively create 3D objects in reality. The user augments rough primitives, gives the AI a prompt, and takes a photo from an angle using the AI3D Easel, which iteratively helps refine the image until it is ready for the image-to-3D process. We also introduce AI3D Render.
Art Paper
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Art
Artificial Intelligence/Machine Learning
Audio
Digital Twins
Ethics and Society
Games
Generative AI
Real-Time
Virtual Reality
Full Conference
Virtual Access
Experience
Tuesday
DescriptionAlgorithmic Miner uses VR to reveal the hidden labor behind AI systems. By immersing participants in data annotation tasks, it critically reflects on exploitation, automation, and techno-capitalism, prompting new discussions on ethical, human-centered design in interactive systems.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionAlignTex is a novel framework for generating high-quality textures from 3D meshes and multi-view artwork. It improves texture generation by ensuring both appearance detail and geometric consistency, outpacing traditional methods in quality and efficiency, making it a valuable tool for 3D asset creation in gaming and film production.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Birds of a Feather
Gaming & Interactive
New Technologies
Animation
Artificial Intelligence/Machine Learning
Computer Vision
Dynamics
Graphics Systems Architecture
Image Processing
Performance
Pipeline Tools and Work
Rendering
Virtual Reality
Full Conference
Experience
DescriptionThe Academy Software Foundation (ASWF) and Alliance for OpenUSD (AOUSD) have a liaison agreement to align community needs with OpenUSD standards development. Join us for key updates from both groups, including progress from the ASWF USD Working Group, developments in AOUSD with a focus on the Core Specification, and joint efforts around lights, materials, and color. We’ll also introduce the new ASWF USD Working Group Collective Project. This BoF is part of a full day of open source Birds of a Feather sessions hosted by ASWF during Open Source Days. Learn more at aswf.io/opensourcedays.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionA group of pigs is raised peacefully in a monastery until, one day, one of them finds out the truth behind their existence. He then decides to free his friends.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionPresenting AMOR, a policy conditioned on context and a linear combination of reward weights, trained using multi-objective reinforcement learning. Once trained, AMOR allows for on-the-fly adjustments of reward weights, unlocking new possibilities in physics-based and robotic character control.
Course
Gaming & Interactive
Livestreamed
Recorded
Generative AI
Hardware
Real-Time
Rendering
Full Conference
Virtual Access
Tuesday
DescriptionThis course teaches the fundamentals of neural shading, wherein traditional graphics algorithms are replaced with simple neural networks. Both theory and practical implementation will be covered, along with hardware acceleration and production deployment. Follow along with the instructors using interactive samples written in Python & Slang!
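To give a flavor of what "replacing a shading algorithm with a simple neural network" means, here is a hypothetical NumPy sketch that fits a tiny one-hidden-layer network to the Lambertian diffuse term max(n·l, 0). It uses random ReLU features with a least-squares output layer; the course's actual Python & Slang samples are not reproduced here.

```python
# Hypothetical neural-shading sketch: approximate Lambertian shading
# max(n.l, 0) with a tiny random-feature ReLU network (fixed hidden layer,
# output weights solved in closed form). NOT the course's actual samples.
import numpy as np

rng = np.random.default_rng(0)

# Training data: cosines of the light angle and the diffuse shading target.
x = rng.uniform(-1.0, 1.0, size=(2048, 1))
y = np.maximum(x, 0.0)                    # Lambert term: max(n.l, 0)

# One hidden layer of 64 random ReLU features.
W = rng.normal(size=(1, 64))
b = rng.uniform(-1.0, 1.0, size=64)
H = np.maximum(x @ W + b, 0.0)

# Solve the output layer by linear least squares.
w_out, *_ = np.linalg.lstsq(H, y, rcond=None)

pred = H @ w_out
mse = float(np.mean((pred - y) ** 2))
print(f"MSE of the neural approximation: {mse:.2e}")
```

In practice such networks are trained end-to-end and evaluated inside the shader, which is where the hardware-acceleration portion of the course comes in.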
Course
New Technologies
Research & Education
Livestreamed
Recorded
Games
Graphics Systems Architecture
Image Processing
Math Foundations and Theory
Rendering
Scientific Visualization
Full Conference
Virtual Access
Monday
DescriptionQuantum computing is a radically new and exciting approach to programming. By exploiting the unusual behavior of quantum objects, this new technology invites us to re-imagine the computer graphics methods we know and love in revolutionary new ways. This course is math-free and requires no technical background.
Talk
Arts & Design
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Artificial Intelligence/Machine Learning
Capture/Scanning
Computer Vision
Generative AI
Geometry
Image Processing
Lighting
Pipeline Tools and Work
Rendering
Full Conference
Virtual Access
Thursday
DescriptionWe propose AniDepth, a novel anime in-betweening method using a video diffusion model enhanced by converting anime illustrations into depth maps. Guided by line-arts, our approach interpolates depth maps and colors to boost fidelity, temporal smoothness, and performance while seamlessly integrating into production pipelines.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionView a sneak peek of the upcoming Netflix feature animation release! In Your Dreams is a comedy adventure about Stevie and her brother Elliot, who journey into the landscape of their own dreams. If the siblings can withstand a snarky stuffed giraffe, zombie breakfast foods, and the queen of nightmares, the Sandman will grant them their ultimate dream: the perfect family.
Join us for a live in-person conversation with Director Alex Woo, Kuku Studios, Sony Pictures Imageworks Visual Effects Supervisor, Nicola Lavender and Head of Character Animation, Sacha Kapijimpanga followed by a Q&A and poster signing!
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionA screening and in-person conversation! Director Amanda Strong, Spotted Fawn Productions, and Eloi Champagne, Head of Technical Direction and Production Technologies, NFB, share how CG was used in stop motion to create this meaningful work, in which Dove, a gender-shifting warrior, uses their Indigenous medicine (Inkwo) to protect their community from an unburied swarm of terrifying creatures. See the short film, participate in the Q&A, and meet the filmmakers!
Two lifetimes from now the world hangs in the balance. Dove, a young, enigmatic, gender-shifting warrior, discovers the gifts and burdens of their Inkwo (medicine) to defend against an army of hungry, ferocious monsters. Dove’s courage, resilience and alliance with the Earth culminates in a battle against these flesh-consuming creatures, who become stronger with each body and soul they devour. Inkwo for When the Starving Return is a call to action to fight and protect against the forces of greed around us.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe derive a nonlinear elastic rod energy, starting from a general 3D volumetric isotropic material. Validated against FEM, we accurately capture rod stretching, bending and twisting, under finite deformations. We also propose how to separately control linear/nonlinear stretchability/bendability/twistability, supporting rod material design for application in computer graphics.
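For orientation, a classical Kirchhoff-type rod energy (a generic textbook form, not the energy derived in this paper) already separates the three deformation modes the abstract mentions:

```latex
E[\mathbf{r}] \;=\; \int_0^L
    \underbrace{\tfrac{k_s}{2}\,\bigl(\lVert \mathbf{r}'(s)\rVert - 1\bigr)^2}_{\text{stretching}}
  \;+\; \underbrace{\tfrac{k_b}{2}\,\kappa(s)^2}_{\text{bending}}
  \;+\; \underbrace{\tfrac{k_t}{2}\,\tau(s)^2}_{\text{twisting}}
  \;\mathrm{d}s
```

Here $\kappa$ is the curvature, $\tau$ the twist rate, and $k_s, k_b, k_t$ the stiffness coefficients. The paper's contribution, per the abstract, is deriving a nonlinear version of such an energy from a 3D volumetric isotropic material and making the three modes separately and nonlinearly controllable.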
Real-Time Live!
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Games
Generative AI
Geometry
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Tuesday
DescriptionGetting any character moving can be done in under 5 minutes. Uthana uses the power of AI to create text-to-motion animation, real time character control, rig-agnostic auto-retargeting, and motion stitching to make this possible. Uthana works for any rig, any movement, and any level of animation experience.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe present the Anymate Dataset, a large-scale dataset of 230K 3D assets paired with expert-crafted rigging and skinning information---70 times larger than existing datasets. Using this dataset, we propose a learning-based auto-rigging framework with three sequential modules for joint, connectivity, and skinning weight prediction.
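Skinning weights like those the dataset pairs with each asset are consumed by linear blend skinning. A minimal sketch of that standard formulation (generic; unrelated to the paper's learned modules):

```python
# Minimal linear blend skinning (LBS): each vertex is deformed by a weighted
# blend of per-bone rigid transforms. Generic textbook formulation.
import numpy as np

def linear_blend_skinning(vertices, weights, rotations, translations):
    """vertices: (V, 3); weights: (V, B), rows summing to 1;
    rotations: (B, 3, 3); translations: (B, 3). Returns deformed (V, 3)."""
    # Transform every vertex by every bone: per_bone has shape (B, V, 3).
    per_bone = np.einsum('bij,vj->bvi', rotations, vertices) + translations[:, None, :]
    # Blend the candidates with the per-vertex weights: sum_b w[v,b] * per_bone[b,v].
    return np.einsum('vb,bvi->vi', weights, per_bone)

# Two bones: identity, and a 90-degree rotation about z plus a translation.
verts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
rots = np.stack([np.eye(3), Rz])
trans = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
w = np.array([[1.0, 0.0],   # vertex 0 follows bone 0 only
              [0.5, 0.5]])  # vertex 1 blends both bones equally
deformed = linear_blend_skinning(verts, w, rots, trans)
```

An auto-rigging pipeline like the one described predicts the joints, their connectivity, and the weight matrix `w` so that a deformation of this kind can be driven directly from an animation.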
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAnyTop generates motion for diverse character skeletons using only skeletal structure as input. This diffusion model incorporates topology information and textual joint descriptions to learn semantic correspondences across different skeletons. It generalizes with minimal training examples and supports joint correspondence, temporal segmentation, and motion editing tasks.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionThis paper introduces an appearance-aware adaptive sampling method using deep reinforcement learning to optimize the reconstruction of spatially-varying BRDFs from minimal images. By modeling the sampling as a sequential decision-making problem, the method identifies the next best view-lighting pair, outperforming heuristic sampling strategies for heterogeneous materials.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present a novel volumetric representation for the aggregated appearance of complex scenes and a pipeline for level-of-detail generation and rendering. Our representation preserves accurate far-field appearance and spatial correlation from scene geometry. Our method faithfully reproduces appearance and achieves higher quality than existing scene filtering methods.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionVenturing through a distant desert, we wander the terrain of our intimate desires and fears, seeking answers. Yet, life never fails to deliver a surprise. When the opportunity arises, do you free yourself?
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionArenite is a novel, physics-based simulation method for generating realistic sandstone structures. It combines fabric interlocking, multi-factor erosion, and particle-based deposition. Our GPU-based implementation produces detailed 3D shapes such as arches, alcoves, hoodoos, and buttes in minutes and provides real-time control.
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionIn a frostbitten frontier world, a legendary mech pilot learns his latest mission might hold the key to the demons that have haunted him for decades.
ACM SIGGRAPH 365 - Community Showcase
Art Gallery
Art Paper
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionA welcome to Art Gallery and Art Papers contributors from the Chairs, announcement of the Best in Show awards for those programs, acknowledgement of the Distinguished Artist Award winner, and recognition of the contributions of the year-round Digital Arts Community (DAC).
Art Gallery
Art Paper
Arts & Design
Not Livestreamed
Not Recorded
Art
Full Conference
Experience
DescriptionIn this informal conversation, SIGGRAPH Lifetime Achievement Award recipients Ernest Edmonds and Manfred Mohr reflect on their pioneering contributions to digital art and share insights into the new works they are presenting in this year’s Art Gallery. Chaired by Francesca Franco, Art Gallery Chair for SIGGRAPH 2025, the dialogue revisits key moments from their decades-long practices while looking ahead to the continued evolution of generative and computational art. A rare opportunity to hear from two artists whose influence continues to shape the field.
Talk
Gaming & Interactive
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Dynamics
Games
Image Processing
Industry Insight
Lighting
Pipeline Tools and Work
Real-Time
Simulation
Full Conference
Virtual Access
Tuesday
DescriptionApplying noise fields to create procedural wind on curves has been an attractive method for its speed, variability, and timeframe independence, but the motion looks artificial. We present techniques to art-direct as well as enhance the realism of procedural curve wind with the addition of collisions, shielding, gusts, and recovery.
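The baseline the talk starts from — a noise field driving curve motion, pinned at the root — can be sketched as follows. A toy sum-of-sines stands in for a real Perlin or curl noise field; this is illustrative, not the presenters' production setup.

```python
# Toy procedural curve wind: offset each curve point by a time-varying
# pseudo-noise field, with amplitude ramped from root (pinned) to tip.
import numpy as np

def wind_offsets(points, t, amplitude=0.1):
    """points: (N, 3) curve samples ordered root to tip; t: time in seconds.
    Returns a displaced copy; the root stays fixed, the tip moves the most."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    ramp = np.linspace(0.0, 1.0, n) ** 2        # quadratic falloff toward the root
    # Pseudo-noise sampled at each point's position and the current time.
    phase = points @ np.array([1.3, 2.1, 0.7])
    noise = (np.sin(2.0 * t + phase) + 0.5 * np.sin(5.0 * t + 2.0 * phase)) / 1.5
    offset = np.zeros_like(points)
    offset[:, 0] = amplitude * ramp * noise     # bend along x, the wind direction
    return points + offset

# A straight vertical strand of 8 samples, blown at t = 0.4 s.
curve = np.stack([np.zeros(8), np.linspace(0.0, 1.0, 8), np.zeros(8)], axis=1)
blown = wind_offsets(curve, t=0.4)
```

Because the displacement is a pure function of position and time, frames can be evaluated independently (the timeframe independence the abstract cites); the talk's additions — collisions, shielding, gusts, recovery — layer art-directable corrections on top of exactly this kind of field.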
Birds of a Feather
Gaming & Interactive
Research & Education
Artificial Intelligence/Machine Learning
Real-Time
Rendering
Full Conference
Experience
DescriptionAs real-time graphics grow increasingly sophisticated, assessing video quality has become critical for optimizing rendering pipelines and cloud gaming experiences. Traditional metrics like PSNR fall short in capturing perceptual artifacts unique to modern techniques such as path tracing and neural supersampling. This Birds of a Feather session invites researchers, developers, and artists to discuss the evolving landscape of video quality assessment in graphics. We'll explore recent advances like Computer Graphics Video Quality Metric (CGVQM), limitations of existing methods, and future directions including semantic-aware, temporally consistent, and no-reference approaches. Join us to help shape the future of perceptual quality in graphics.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAssetDropper is a novel framework for extracting standardized assets from reference images, addressing challenges such as occlusion and distortion. Leveraging both synthetic and real-world datasets, along with a reward-driven feedback mechanism, it achieves state-of-the-art performance in asset extraction and provides designers with a versatile open-world asset palette.
Birds of a Feather
Gaming & Interactive
New Technologies
Animation
Computer Vision
Display
Dynamics
Graphics Systems Architecture
Image Processing
Modeling
Performance
Pipeline Tools and Work
Full Conference
Experience
DescriptionThe Diversity & Inclusion Working Group at the Academy Software Foundation breaks down access barriers and fosters connection across Foundation projects, the broader open source community, and the VFX and animation industries. If you're passionate about entertainment and looking to get involved, we invite you to attend. Our members include engineers, students, faculty, and more. We’ll share updates on current efforts and leave time for open discussion and community input. This BoF is part of a full day of open source Birds of a Feather sessions hosted by the Academy Software Foundation during Open Source Days. Learn more at aswf.io/opensourcedays.
Birds of a Feather
Gaming & Interactive
New Technologies
Animation
Artificial Intelligence/Machine Learning
Computer Vision
Display
Dynamics
Graphics Systems Architecture
Image Processing
Performance
Pipeline Tools and Work
Rendering
Full Conference
Experience
DescriptionMeet and discuss topics of the new Academy Software Foundation (ASWF) Machine Learning Working Group and the initial related ASWF sandbox projects. We expect to update the group on progress of our initial project(s).
This BoF is part of a full day of open source BoFs hosted by the Academy Software Foundation as part of Open Source Days. Learn more at aswf.io/opensourcedays.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionThis paper introduces a novel asymptotic directional stiffness (ADS) metric to analyze the contribution of middle surface geometry on the stiffness of shell lattice metamaterials, focusing on Triply Periodic Minimal Surfaces (TPMS). It provides a theoretical framework and optimization techniques, advancing the understanding of TPMS shell lattices.
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Performance
Real-Time
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionA deep time meditation on the changing elemental composition of the atmosphere of the planet Earth. Atmos Sphaerae was commissioned by Christiane Paul for the DiMoDA 4.0 virtual reality exhibit "Dis/Location," which premiered at Gazelli Art House and was shown at the ZKM Karlsruhe and the Onassis Foundation's ONX Studio NY. The "flat" video version of Atmos Sphaerae has been presented by Gazelli Art House at ART SG in Singapore and developed for multi-screen immersive spaces at ONX Studio NY.
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Education
Ethics and Society
Games
Generative AI
Haptics
Modeling
Performance
Real-Time
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionOur system revolutionizes assembly verification by combining CAD-trained detection, AR guidance, and vision-language models. The system eliminates extensive training data needs while providing natural language feedback. This enables workers of all skill levels to perform complex assemblies accurately, addressing workforce challenges through rapid skill development and reduced reliance on experts.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe extend the Vertex Block Descent method for fast and unconditionally stable physics-based simulation using an Augmented Lagrangian formulation to enable simulating hard constraints with infinite stiffness and systems with high stiffness ratios. This allows simulating complex contact scenarios involving rigid bodies with stacking and friction, and articulated joint constraints.
Industry Session
New Technologies
Production & Animation
Animation
Artificial Intelligence/Machine Learning
Full Conference
Experience
Exhibits Only
Wednesday
DescriptionIn this new era of storytelling and episodic production, the media and entertainment industry has been evolving with computer graphics for over two decades. Today, creators have more tools than ever to bring their visions to life—but with greater creative potential comes increased complexity.
As technology advances, so must our production pipelines. Creative professionals are now challenged to find the right balance between quality, speed, and cost. In this session, Boxel Studio shares how it leveraged Autodesk’s Flow Studio (formerly Wonder Studio) and its trackerless motion capture system to deliver a high volume of creature animation for Superman & Lois Season 4—all on a fast-paced broadcast schedule.
You’ll gain insight into how our team approached this challenge from both a creative and production standpoint, using Flow Production Tracking to manage over 600 VFX shots in just five months. This session highlights how innovative workflows and strategic toolsets can dramatically accelerate production timelines without compromising artistic quality.
Industry Session
New Technologies
Production & Animation
Animation
Artificial Intelligence/Machine Learning
Full Conference
Experience
Exhibits Only
Wednesday
DescriptionDiscover how creators Bad Decisions Studio and JL Mussi are reshaping the digital landscape using Autodesk Flow Studio (formerly Wonder Studio).
They’ll demonstrate how they leverage Flow Studio’s AI-powered 3D toolset to streamline their creative workflows and achieve cinematic results without the blockbuster budgets. You’ll learn how to export mocap data, camera tracking, and more into tools like Maya, Unreal Engine, and Blender, giving you total creative control, frame by frame.
Whether you're an emerging filmmaker, content creator, or 3D freelance artist, you’ll leave this session inspired and equipped to harness AI for faster, more directable storytelling.
Industry Session
Production & Animation
Animation
Full Conference
Experience
Exhibits Only
Tuesday
DescriptionStep into Gotham’s underworld and uncover the technical artistry behind the visual effects of The Penguin. Join Overall VFX Supervisor Johnny Han and Pixomondo’s VFX Supervisor Nathaniel Larouche for an inside look at how over 3,000 shots were brought to life, from on-set shooting strategies to how Autodesk Flow Production Tracking streamlined the VFX pipeline and kept the team in sync across a complex production landscape.
Industry Session
Production & Animation
Animation
Full Conference
Experience
Exhibits Only
Tuesday
DescriptionIn this talk with Claudio Gonzalez, Lead Creatures Technical Director at Wētā FX, you’ll hear about the exciting challenges, creative solutions, and astounding details that went into creating ‘the horde’ in Autodesk Maya for Season 2, Episode 2 of The Last of Us.
Gonzalez will share details on the dynamic wardrobe refitting system, a process that enabled Wētā FX artists to efficiently swap and share wardrobe pieces across a range of body shapes during the costume assembly phase.
Next, he’ll focus on the Loki Solver’s role in achieving seamless integration between cloth and hair simulations. This naturally leads into a technique developed to enhance motion in these simulations—by leveraging root motion to drive the solver, Wētā FX was able to amplify the dynamic response of both hair and cloth.
The talk will also spotlight the canine creatures featured in the attack sequence, including a breakdown of the unique challenges we faced and the creative solutions we implemented.
Finally, Claudio will highlight how Wētā FX’s bodyOpt system allowed the team to quickly and effectively update multiple characters late in production, resulting in more defined and readable arm silhouettes with minimal turnaround time.
Industry Session
Production & Animation
Animation
Artificial Intelligence/Machine Learning
Full Conference
Experience
Exhibits Only
Wednesday
DescriptionThere’s real satisfaction in crafting every subtle move of your character, especially in those hero moments where performance means everything. But building every motion from scratch can drain your time and budget. In this session, learn how to use Maya’s MotionMaker, powered by Autodesk AI. It combines keyframing, motion capture, and machine learning into a single tool. Whether you’re blocking background action or laying the groundwork for hero animation, MotionMaker quickly gives you a strong starting point.
Industry Session
New Technologies
Production & Animation
Animation
Pipeline Tools and Work
Full Conference
Experience
Exhibits Only
Tuesday
DescriptionOpenUSD is transforming the landscape of visual effects, gaming, and immersive storytelling—enabling real-time collaboration, seamless data exchange, and cross-platform connectivity. Join a panel of technical leaders from Pixar, Epic Games, and DigitalFish as they delve into how OpenUSD breaks down traditional barriers, facilitating smooth interoperability across entire creative ecosystems.
Industry Session
Production & Animation
Animation
Full Conference
Experience
Exhibits Only
Wednesday
DescriptionLiz will dive into the intricacies of hero and crowd character development and performance in The Electric State. She will explore the considerations involved in translating Simon Stålenhag's vision from book to screen, including how character design translates into movement, performance, and emotional connection. Gain a deeper understanding of the efforts behind the collaborative act of storytelling through animation.
Industry Session
Production & Animation
Animation
Artificial Intelligence/Machine Learning
Digital Twins
Full Conference
Experience
Exhibits Only
Wednesday
DescriptionRyan Coogler’s Sinners is a haunting supernatural tale set in the 1930s Jim Crow-era Mississippi Delta, following twin brothers—both played by Michael B. Jordan—as they return home to confront something far darker than their past. Join Production VFX Producer James Alexander and Rising Sun Pictures’ VFX Supervisor Guido Wolter as they reveal how the team brought Sinners’ iconic twins, Smoke and Stack, to life. Discover the cutting-edge technology behind the illusion, including RSP’s proprietary REVIZE toolkit. Learn how the team preserved Michael B. Jordan’s unique performance while crafting seamless twinning effects, and how the “Halo Rig” played a pivotal role in the creative process.
Industry Session
Production & Animation
Animation
Full Conference
Experience
Exhibits Only
Tuesday
DescriptionJoin Production VFX Supervisor Michael Ralla and ILM VFX Supervisor Nick Marshall as they reveal how ILM’s most sophisticated work was designed to vanish into the frame. While the twinning of Smoke & Stack stands out, ILM’s VFX work was designed to be indistinguishable from live-action IMAX photography. Working from previs and concept art, the generalist team at ILM Vancouver used 3ds Max to create a fully digital, period-accurate Clarksdale train station with a full Pullman train, and integrated the results seamlessly into 65mm film footage.
To match Autumn Durald Arkapaw’s analog aesthetic, ILM developed a custom digital lens profile workflow, and compositors were trained to work with scanned film, embracing celluloid’s imperfections as intentional rather than flaws. This is invisible VFX at its finest: crafted not to be seen, but to be believed.
Industry Session
Production & Animation
Animation
Full Conference
Experience
Exhibits Only
Wednesday
DescriptionDynamic changes are an everyday reality in stop-motion productions. Clear timelines and workflows are key in productions that require orchestrated collaboration between analog and digital processes. Join Whitney Schmerber, Art Production Supervisor at ShadowMachine, for a deep dive into how ShadowMachine redesigned the world of Tiny Chef using Flow Production Tracking and Flow Capture to streamline communications, maximize data organization, and forecast ahead to flag disruptions to workflows and company capacity.
Industry Session
Production & Animation
Animation
Full Conference
Experience
Exhibits Only
Wednesday
DescriptionTake flight with Framestore for a behind-the-scenes look at how they translated Toothless, and the beloved animated movie world of Berk, into a live-action spectacle. Working hand-in-hand with director Dean DeBlois and production VFX Supervisor Christian Mänz, Framestore tackled everything from early visual development to complex flight choreography, imbuing the Night Fury and supporting dragons with the physicality, emotion, and nuanced interaction with the Viking cast that you would expect in the real world.
In this session, Framestore VFX Supervisor François Lambert will explore the creative and technical journey behind the work: from the challenges of shaping Toothless’ form and personality, to the practical sides of production, including the design of SFX gimbal rigs that connected the actors believably with their animated steeds. He’ll also unpack how all of this work came together in final shots that feel as emotional as they are epic. Whether you’re a fan of the films or simply curious about the craft, discover what it takes to make a dragon, and its rider, truly fly.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present AutoKeyframe, a novel framework that simultaneously accepts dense and sparse control signals for motion generation by generating keyframes directly. Our method reduces manual efforts for keyframing while maintaining precise controllability, using an autoregressive diffusion model and a new skeleton-based gradient guidance method for flexible spatial constraints.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionThis paper introduces an automated scheduling framework to optimize cloth and deformable simulations across heterogeneous computing devices. Using an enhanced HEFT algorithm and asynchronous iteration methods, our approach minimizes communication delays and maximizes parallelism. Our experiments demonstrate superior frame rates over single-unit solutions for real-time and resource-constrained environments.
Industry Session
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Generative AI
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Exhibits Only
Wednesday
DescriptionThis must-see session explores how digital creation and automation can deliver precision marketing at scale. By leveraging NVIDIA’s OpenUSD, Omniverse, and enterprise-grade AI, GRIP presents a robust pipeline that integrates Digital Twins, Responsible Enterprise AI, and rule-based controllers. The result is always on-brand, highly adaptable 3D content that fits seamlessly into existing production workflows. Real-world customer applications of these technologies highlight how brands can reach broader audiences, foster richer interactions, and engage new markets.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Real-Time Live!
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Games
Generative AI
Geometry
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Tuesday
DescriptionDeveloped in partnership with NVIDIA and Inworld AI, Streamlabs’ intelligent streaming assistant is an AI-powered co-host, producer and technical assistant for digital creators. Ashray Urs, Head of Streamlabs, will walk audiences through the streaming assistant’s evolving capabilities, with a focus on customizability and enhanced audience and streamer interactions.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionThis scientific visualization depicts a bacterial molecular landscape and explores the speed of diffusion and biochemical reactions that power life at the molecular scale.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionCelebrating the enduring legacy of Mac Miller, Hornet produced a 24-minute animated film in collaboration with the posthumous official release of the album “Balloonerism”.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionBANG introduces Generative Exploded Dynamics, a novel method that dynamically decomposes 3D objects into meaningful, volumetric parts through smooth, controllable exploded views. Bridging intuitive human understanding and generative AI, it enables precise part-level manipulation, semantic comprehension, and versatile applications in 3D creation, visualization, and printing workflows.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionText-to-image diffusion models struggle with multi-subject generation due to subject leakage. Prior methods impose external layouts that conflict with the model’s prior, harming alignment and natural composition. We introduce a method that leverages the layout encoded in the initial noise, promoting alignment and natural compositions while preserving the model’s diversity.
Spatial Storytelling
Arts & Design
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Animation
Art
Capture/Scanning
Digital Twins
Education
Performance
Real-Time
Rendering
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionStanford Virtual Human Interaction Lab and C.U.T.E. present a demo featuring the interplay between XR and dramaturgy in BEASTS, a live digital puppetry performance drawing on Korean folklore. Blending theatrical and research perspectives, we explore embodiment as the performer creates and becomes shaped by fantastical digital representations.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Art Paper
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Art
Computer Vision
Dynamics
Education
Fabrication
Generative AI
Geometry
Image Processing
Modeling
Simulation
Spatial Computing
Full Conference
Virtual Access
Experience
Monday
Description"Becoming Space" is an installation that explores the agency of AI, discourse, and material intersections through AI-generated forms and 3D printing. Inspired by Ovid's Metamorphoses, it examines human-animal transformations using CLIP-guided diffusion models and stereolithography. The installation reveals limitations in AI's interpretation of physical form---an interpretation dominated by discourse---while demonstrating "intra-action" between language, algorithms, machines, and materials. The work discusses authorship and material agency through the entanglement of matter.
Birds of a Feather
Research & Education
Industry Insight
Full Conference
Experience
DescriptionNew graphics technologies, from advanced ray tracing to virtualized geometry, deliver exciting capabilities for application developers, designers, and users. However, they also introduce critical application performance benchmarking challenges for hardware vendors and system buyers who need to accurately compare different configurations. This talk explores how SPEC addressed these challenges in the new SPECviewperf 15 benchmark, including new graphically intensive workloads using modern graphics APIs. Attendees will gain insight into the expanded applicability of the benchmark, along with trade-offs involved in interpreting benchmark results – enabling more informed business decisions – all without needing to purchase applications to do testing.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe derive vertex position and irradiance bounds for each triangle tuple, introducing a bounding property of rational functions on the Bernstein basis, to significantly reduce the search domain when systematically simulating specular light transport.
Birds of a Feather
Research & Education
Not Livestreamed
Not Recorded
Art
Artificial Intelligence/Machine Learning
Audio
Capture/Scanning
Digital Twins
Display
Dynamics
Ethics and Society
Fabrication
Games
Geometry
Graphics Systems Architecture
Haptics
Image Processing
Industry Insight
Lighting
Math Foundations and Theory
Physical AI
Pipeline Tools and Work
Rendering
Scientific Visualization
Spatial Computing
Full Conference
Experience
DescriptionThe Berthouzoz Lunch Event is an annual networking event organized by WiGRAPH for researchers, faculty, and students. Floraine Berthouzoz started this event as an informal gathering. After her passing in 2015, Floraine’s mentees and colleagues built upon her efforts to create an event that aims to broaden the network of women researchers and provide a friendly and personal environment where graduate students can interact with senior researchers. We will be hosting a panel of women in graphics who will share with us some of their research experiences. The event is open to all researchers, regardless of gender.
ACM SIGGRAPH 365 - Community Showcase
Research & Education
Not Livestreamed
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Digital Twins
Display
Dynamics
Education
Ethics and Society
Fabrication
Games
Graphics Systems Architecture
Haptics
Image Processing
Industry Insight
Lighting
Math Foundations and Theory
Modeling
Performance
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Scientific Visualization
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionThis session will feature a selection of notable research papers from the Eurographics 2025 conference, offering attendees insights into current trends and developments in Computer Graphics and related fields in the European region and beyond. We will also provide an update on Eurographics' activities and opportunities.
Best Paper Award – A unified multi-scale method for simulating immersed bubbles (Joel Wretborn, Alexey Stomakhin, Christopher Batty)
BPA Honorable Mention – Neural Two-Level Monte Carlo Real-Time Rendering (Mikhail Dereviannykh, Dmitrii Klepikov, Johannes Hanika, Carsten Dachsbacher)
BPA Honorable Mention – Lipschitz Pruning: Hierarchical Simplification of Primitive-Based SDFs (Wilhem Barbier, Mathieu Sanchez, Axel Paris, Élie Michel, Thibaud Lambert, Tamy Boubekeur, Mathias Paulin, Théo Thonat)
Young Researcher Awardee – Valentin Deschaintre – A journey through appearance representations: from analytical to generative
Young Researcher Awardee – Sebastian Starke – 7 Years of Bringing Characters to Life with Computer Brains
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Education
Ethics and Society
Games
Generative AI
Haptics
Modeling
Performance
Real-Time
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionBetween Feathers & Footprints is a VR experience where players seamlessly shift between human and bird forms, exploring the world through physics-driven flight and adaptive perception. Through mini-games, players navigate challenges using both grounded interaction and aerial movement, showcasing how different forms shape problem-solving, exploration, and immersive multi-perspective gameplay.
Talk
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Digital Twins
Display
Education
Games
Generative AI
Geometry
Graphics Systems Architecture
Image Processing
Modeling
Pipeline Tools and Work
Real-Time
Rendering
Simulation
Virtual Reality
Full Conference
Virtual Access
Wednesday
DescriptionWe explore 3D Gaussian Splatting for cultural heritage visualization, integrating game engines to create immersive experiences. Using a historical Hakka mansion in Hong Kong as a case study, we examine 3DGS’s limitations and potential, demonstrating how emerging workflows can enhance digital heritage storytelling through interactive, cinematic, and real-time 3D representations.
Industry Session
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Generative AI
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Exhibits Only
Wednesday
DescriptionUnlock new career paths beyond media and entertainment. Join our panel to discover opportunities in fields like digital twins, AI, and robotics. Hear from industry leaders and learn how to transition your 3D skills to these high-demand industries.
Industry Session
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Generative AI
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Exhibits Only
Wednesday
DescriptionAs machines begin to operate autonomously in the physical world — from robotics to autonomous vehicles to industrial systems — they need not only perception but understanding. NVIDIA Cosmos WFMs enable developers to build world models that can predict future states, transfer knowledge across domains, and reason about outcomes. In this hero talk, we dive deep into Cosmos Predict, Cosmos Transfer, and Cosmos Reason.
Frontiers
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionEngage. Reflect. Connect.
New this year, SIGGRAPH 2025 introduces Frontier Breakout Sessions, a series of intimate, speaker-led discussions designed to foster meaningful conversations beyond the Frontiers presentation.
Frontiers
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionEngage. Reflect. Connect.
New this year, SIGGRAPH 2025 introduces Frontier Breakout Sessions, a series of intimate, speaker-led discussions designed to foster meaningful conversations beyond the Frontiers presentation.
Frontiers
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionEngage. Reflect. Connect.
New this year, SIGGRAPH 2025 introduces Frontier Breakout Sessions, a series of intimate, speaker-led discussions designed to foster meaningful conversations beyond the Frontiers presentation.
Talk
Gaming & Interactive
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Dynamics
Games
Image Processing
Industry Insight
Lighting
Pipeline Tools and Work
Real-Time
Simulation
Full Conference
Virtual Access
Tuesday
DescriptionThis session breaks down the visual and technical work behind two full-CG environments in The Sandman Season 2. From stylized depth and scale in the Underworld to intricate lens-matching at the Edge of the Dream World, we share how custom tools shaped the show’s distinctive visual signature.
Talk
Research & Education
Not Livestreamed
Not Recorded
Animation
Artificial Intelligence/Machine Learning
Education
Spatial Computing
Full Conference
DescriptionEngage. Reflect. Connect.
New this year, SIGGRAPH 2025 introduces Roundtables, a series of intimate, speaker-led discussions designed to foster meaningful, in-depth conversations beyond the stage.
Each 90-minute roundtable brings together up to 10 participants per table in a focused, informal setting. Each table centers on a specific topic aligned with a SIGGRAPH 2025 Talk and is facilitated by the speaker who presented it. These sessions are a unique opportunity to share experiences, ask questions, and explore emerging ideas in a small-group format, with peers, contributors, and community members from across disciplines.
Participation is first-come, first-served at each 10-participant table, so we strongly encourage attendees to arrive early to secure a seat at the topic of their choice.
Session Format:
Each roundtable follows a simple and engaging structure:
• A short (3–5 minute) reflection by the speaker to kick off the conversation
• A 60-minute, participant-led discussion on the table’s focus theme
• A closing 10-minute summary of shared insights and takeaways
Roundtable Themes:
Table 1: From Stage to Screen: Performing Characters in Real Time; Exploring how live performance and motion capture bring digital characters to life instantly with Ben Mars
Table 2: Animating the Natural World; Exploring procedural techniques for bringing vegetation, wind, and environment to life in animated storytelling with Arunachalam Somasundaram
Table 3: AI for 3D: Rethinking the Creative Process; How intelligent systems are reshaping design, modeling, and imagination in CG with Ian Huang
Table 4: Immersive Worlds-Storytelling Across Dimensions; Bridging story, interaction, and space through tech with Christopher Panzetta and Julian Humml
Table 5: Teaching the Future-Education, AI & CG; Innovative strategies in CG education and AI literacy with Rochele Gloor, Joe Geigel, Eve Bolotova and Scott Milner
Table 6: The Shading Spectrum-LookDev and Rendering Innovation; Advancing shading and look development across pipelines with Patrick YU Wang and Trent Crow
Table 7: Sustainable Screens-Green Innovation in Graphics; Greener production practices in real-time rendering and careers beyond entertainment with Rulon Raymond and Michael Tanzillo
Table 8: Epic Simulations: Taming the Chaos; Exploring the technical and creative challenges of physically based simulations in production with Eston Schweickart
Table 9: Performance Engineered; Innovation in character creation, from rigs to stages with Leon Sooi and David Gray
Table 10: Cultural Memory & Spatial Media; Expanding 3D content creation with neural techniques and cultural heritage with Lukasz Mirocha and Juan Carlos Olmos Guerra
If you’re looking for creative exchange, bold ideas, and engaging conversation with some of the most innovative minds at SIGGRAPH, this is your table!
Important Note:
Roundtables are not intended for job-seeking, recruiting, or formal presentations. They are designed for open dialogue, shared learning, and genuine peer connection.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Animation
Artificial Intelligence/Machine Learning
Computer Vision
Digital Twins
Education
Games
Geometry
Image Processing
Lighting
Modeling
Pipeline Tools and Work
Scientific Visualization
Virtual Reality
Full Conference
Experience
DescriptionThe Blender Foundation will present an overview of the past year's Blender open source project results and the plans for the next year. Everyone is welcome to give feedback and share experiences.
Talk
Arts & Design
Gaming & Interactive
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Artificial Intelligence/Machine Learning
Augmented Reality
Display
Games
Geometry
Lighting
Pipeline Tools and Work
Real-Time
Full Conference
Virtual Access
Sunday
DescriptionIn this talk, Netflix Animation Studios explores the challenges of implementing HDR technology in animation production workflows through the case study of the internal short film "Sole Mates", highlighting approaches to overcome software and hardware constraints in artist workflows.
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionThis study presents the Blooming Resonant Tea system, integrating cymatics (vibrations that create liquid surface patterns), music, and projections to enhance both flavor and ingredient immersion. Users customize their tea experience by selecting herbal teas, cymatics patterns, and flavor-associated music, creating a unique, immersive, multi-sensory ritual for future tea drinking.
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Education
Ethics and Society
Games
Generative AI
Haptics
Modeling
Performance
Real-Time
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionIn this demo, we present Timeless Blossoms, a VR-AI system that reimagines Traditional Chinese Flower Arrangement, allowing users to create 3D floral compositions in a culturally enriched virtual space while generative AI converts their designs into real-time Xieyi paintings, merging heritage artistry with technology to spark cross-cultural and temporal dialogue.
Talk
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Digital Twins
Dynamics
Geometry
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Simulation
Full Conference
Virtual Access
Thursday
DescriptionIn the realm of high-end visual effects, achieving lifelike character deformations is both an art and a technical challenge. BodyOpt is the latest evolution in WetaFX’s character deformation pipeline, integrating advanced simulation techniques with artist-friendly workflows to enable efficient processing of complex deformations across numerous shots and characters.
Birds of a Feather
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Generative AI
Full Conference
Experience
DescriptionMany disciplines in ACM have advanced to the point where progress and state-of-the-art research occur on a weekly if not daily basis. We want to gauge interest in a new type of conference, which may begin as more of an unconference, where researchers self-present and self-moderate to share the latest research as of one week before the conference.
Modern conference deadlines fall six months before the event, and research that misses one deadline and is published at the next becomes a year out of date. In the bigger picture, learnings from this project and endeavor can be used to help modernize other events as computing evolves.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe propose a novel algorithm for efficient and accurate Boolean operations on B-Rep models by mapping them bijectively to controllable-error triangle meshes. Using conservative intersection detection on the mesh to locate all surface intersection curves and carefully handling degeneration and topology errors ensure that the results are watertight and correct.
Industry Session
New Technologies
Artificial Intelligence/Machine Learning
Digital Twins
Generative AI
Hardware
Physical AI
Robotics
Simulation
Full Conference
Experience
Exhibits Only
Monday
Industry Session
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Generative AI
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Exhibits Only
Wednesday
Industry Session
Production & Animation
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Digital Twins
Games
Generative AI
Geometry
Graphics Systems Architecture
Hardware
Image Processing
Industry Insight
Lighting
Performance
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Simulation
Virtual Reality
Full Conference
Experience
Exhibits Only
Tuesday
Industry Session
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Generative AI
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Exhibits Only
Wednesday
Industry Session
Gaming & Interactive
Games
Graphics Systems Architecture
Hardware
Performance
Real-Time
Rendering
Full Conference
Experience
Exhibits Only
Wednesday
Industry Session
Production & Animation
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Digital Twins
Games
Generative AI
Geometry
Graphics Systems Architecture
Hardware
Image Processing
Industry Insight
Lighting
Performance
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Simulation
Virtual Reality
Full Conference
Experience
Exhibits Only
Tuesday
Industry Session
Production & Animation
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Digital Twins
Games
Generative AI
Geometry
Graphics Systems Architecture
Hardware
Image Processing
Industry Insight
Lighting
Performance
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Simulation
Virtual Reality
Full Conference
Experience
Exhibits Only
Tuesday
Industry Session
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Generative AI
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Exhibits Only
Wednesday
Industry Session
Production & Animation
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Digital Twins
Games
Generative AI
Geometry
Graphics Systems Architecture
Hardware
Image Processing
Industry Insight
Lighting
Performance
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Simulation
Virtual Reality
Full Conference
Experience
Exhibits Only
Tuesday
Industry Session
Gaming & Interactive
Games
Graphics Systems Architecture
Hardware
Performance
Real-Time
Rendering
Full Conference
Experience
Exhibits Only
Wednesday
Industry Session
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Generative AI
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Exhibits Only
Wednesday
Industry Session
Gaming & Interactive
Games
Graphics Systems Architecture
Hardware
Performance
Real-Time
Rendering
Full Conference
Experience
Exhibits Only
Wednesday
Industry Session
New Technologies
Production & Animation
Animation
Art
Digital Twins
Education
Geometry
Industry Insight
Modeling
Performance
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Simulation
Full Conference
Experience
Exhibits Only
Tuesday
Industry Session
Production & Animation
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Digital Twins
Games
Generative AI
Geometry
Graphics Systems Architecture
Hardware
Image Processing
Industry Insight
Lighting
Performance
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Simulation
Virtual Reality
Full Conference
Experience
Exhibits Only
Tuesday
Industry Session
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Generative AI
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Exhibits Only
Wednesday
Industry Session
Gaming & Interactive
Games
Graphics Systems Architecture
Hardware
Performance
Real-Time
Rendering
Full Conference
Experience
Exhibits Only
Wednesday
Industry Session
Production & Animation
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Digital Twins
Games
Generative AI
Geometry
Graphics Systems Architecture
Hardware
Image Processing
Industry Insight
Lighting
Performance
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Simulation
Virtual Reality
Full Conference
Experience
Exhibits Only
Tuesday
Industry Session
New Technologies
Artificial Intelligence/Machine Learning
Digital Twins
Generative AI
Hardware
Physical AI
Robotics
Simulation
Full Conference
Experience
Exhibits Only
Monday
Industry Session
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Generative AI
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Exhibits Only
Wednesday
Industry Session
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Generative AI
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Exhibits Only
Wednesday
Spatial Storytelling
Gaming & Interactive
New Technologies
Not Livestreamed
Not Recorded
Augmented Reality
Games
Pipeline Tools and Work
Real-Time
Rendering
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionILM developed Marvel's “What If...? – An Immersive Story” for Apple Vision Pro, overcoming many technical and design challenges. For telling stories in both mixed and virtual reality scenes, the team combined Unreal Engine with Apple's RealityKit and built many custom technical solutions for materials, skinning, particles, and gesture tracking.
Birds of a Feather
Research & Education
Industry Insight
Full Conference
Experience
DescriptionBreaking into VFX and animation can be exciting and overwhelming. This Birds of a Feather session brings together students and emerging professionals for an open, supportive roundtable. Swap advice, share your journey, and connect with others navigating internships, reels, and first roles across disciplines. No pitches, no pressure: just real talk with peers who get it. Whether you're exploring your next steps or eager to build your network, all backgrounds and skill levels are welcome.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe present BrepDiff, a simple, single-stage diffusion model for generating Boundary Representations (B-reps). Our approach generates B-reps by denoising point-based face samples with a dedicated noise schedule. Unlike multi-stage methods, BrepDiff enables intuitive, editable geometry creation, including completion, merging, and interpolation, while achieving competitive performance on unconditional generation.
Immersive Pavilion
Bridging Physical and Virtual Realms in Mixed Reality: The Co-Presence Experience in LIAN: Re:Vision
10:30am - 5:00pm PDT Monday, 11 August 2025 West Building, Exhibit Hall B
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Education
Ethics and Society
Games
Generative AI
Haptics
Modeling
Performance
Real-Time
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionLIAN: Re:Vision is a Mixed Reality installation that explores love and distance through a co-located, two-player experience. Players navigate narrative-driven challenges, with their avatars' proximity crucial for survival, reflecting on relationship dynamics. Innovative use of pass-through and occlusion features enhances co-presence, while external displays bridge participants and spectators.
Birds of a Feather
Research & Education
Not Livestreamed
Not Recorded
Fabrication
Full Conference
Experience
DescriptionThis event will provide an opportunity for those working on fabrication to meet others and share their experiences. We encourage participants to bring physical objects (fabrication results) to make it a fun event. We plan to invite people working on fabrication widely using our own connections, but anybody interested in this topic is also welcome.
This will be the 4th "Bring Your Bunny (or Something)" BoF event with previous events taking place at SIGGRAPH in 2023 and 2024, and SIGGRAPH Asia in 2024.
Talk
Arts & Design
Production & Animation
Livestreamed
Recorded
Animation
Art
Geometry
Industry Insight
Lighting
Modeling
Pipeline Tools and Work
Rendering
Full Conference
Virtual Access
Wednesday
DescriptionTo convey a vivacious, protopian space station in Pixar's "Elio", a small team of environment artists amplified the traditional pipeline with procedural techniques used in unique ways to develop a vast quantity of various biomes of alien terrains and architectures pulsing with their own internal energy.
Talk
Arts & Design
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Recorded
Animation
Dynamics
Pipeline Tools and Work
Rendering
Simulation
Full Conference
Virtual Access
Wednesday
DescriptionThe Wild Robot's environment is rich in variety, dense in layout, painterly in look, and a character in itself alive with motion. This talk describes the different techniques, tools, and pipeline used to breathe life into this wild environment, along with the challenges faced while dealing with the environment’s complexity.
Talk
Arts & Design
Gaming & Interactive
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Artificial Intelligence/Machine Learning
Augmented Reality
Display
Games
Geometry
Lighting
Pipeline Tools and Work
Real-Time
Full Conference
Virtual Access
Sunday
DescriptionThis talk explores how OpenUSD enables a scalable auto-rigging pipeline for user-generated content (UGC) in an avatar ecosystem. We discuss how OpenUSD supports an efficient procedural workflow, standardizes diverse asset representations through schemas, and optimizes pipeline execution and iteration via a structured task graph.
Industry Session
New Technologies
Production & Animation
Research & Education
Animation
Artificial Intelligence/Machine Learning
Dynamics
Pipeline Tools and Work
Simulation
Full Conference
Experience
Exhibits Only
Tuesday
DescriptionThis presentation shows you how to build supervised ML pipelines in Houdini 21.
We give an overview of many of the ML nodes that are available in Houdini, where special attention is given to nodes and features that have been added with Houdini 21.
These ML nodes are designed to both leverage and enhance Houdini's proceduralism.
Fully automated end-to-end ML pipelines that include data generation, training and inference can be built entirely inside Houdini.
Houdini's ML nodes can be used to perform sub-tasks that are common to a variety of machine learning applications. This can save you a lot of work when you're creating your own machine learning pipeline. Among other things, the ML nodes support example generation, preprocessing, saving & loading data sets, neural-network training, and inference.
We show how the ML nodes were used to build ML applications that are being released as part of Houdini 21. These applications include a volume upresser that can be applied to pyro sims and an improved character deformer.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe propose BuildingBlock, a hybrid approach integrating generative models, PCG, and LLMs for diverse and structured 3D building generation. A Transformer-based diffusion model generates layouts, which LLMs refine into hierarchical designs. PCG then constructs high-quality buildings, achieving state-of-the-art results and enabling scalable architectural workflows.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionC-tubes are 3D tubular structures made of developable strips. We introduce an algorithm to construct C-tubes while guaranteeing exact surface developability and an optimization method for design exploration. Applications span architecture, engineering, and product design. We present prototypes showcasing cost-effective fabrication of complex geometries using different materials.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe propose a fast, single-threaded continuous collision detection (CCD) algorithm for convex shapes under affine motion. By combining conservative advancement with a cone-casting approach, it avoids primitive-level overhead and enables efficient integration into intersection-free simulation methods such as ABD.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe propose a framework for learning on in-the-wild meshes containing non-manifold elements, multiple components, and interior structures. Our approach uses cages and generalized barycentric coordinates to parametrize and learn volumetric functions, demonstrated by segmentation and skinning weights, achieving state-of-the-art results on wild meshes.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Industry Session
Arts & Design
Production & Animation
Art
Geometry
Image Processing
Modeling
Pipeline Tools and Work
Simulation
Full Conference
Experience
Exhibits Only
Tuesday
DescriptionHarness the power and flexibility of COPs to generate terrain inside Houdini.
Learn how to create custom tools that facilitate terrain creation and fine-tuning.
Explore ground-breaking ideas (pun intended) for texture synthesis using the newest COP tools.
Layers upon layers of fun!
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Games
Pipeline Tools and Work
Real-Time
Full Conference
Experience
DescriptionAs digital storytelling evolves, so does the role of the camera. This live, collaborative BoF brings together a cross-industry panel to explore how cinematography principles are upheld or reimagined across games, VFX, animation, virtual production, and hybrid pipelines. Through guided discussion and attendee knowledge exchanges, we’ll dive into the aesthetic, technical, and narrative choices behind virtual cameras. Topics will include interactive storytelling, real-time engines, procedural workflows, and how creators both follow and break classical rules to shape modern visual experiences. Participants will share insights, challenges, and approaches that are redefining cinematic language in today’s digital production landscape.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionCandice, a young girl, is bullied by the other children. One day she finds a dead cat that asks her to fix it. She then sets out to find every dead animal she can in order to create some new friends.
Talk
Gaming & Interactive
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Dynamics
Games
Image Processing
Industry Insight
Lighting
Pipeline Tools and Work
Real-Time
Simulation
Full Conference
Virtual Access
Tuesday
DescriptionDiscover how Image Engine simulated geometric crystals erupting from the floor in Avatar: The Last Airbender to capture the hero. Using custom Houdini tools, collision-aware systems, and physically inspired lighting, the team was able to integrate the plate and characters into beautiful final renders to relay the director's vision.
Educator's Forum
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Education
Ethics and Society
Games
Generative AI
Lighting
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Virtual Reality
Full Conference
Virtual Access
Experience
Tuesday
DescriptionCase Study: In year two of AI education and infrastructure at SCAD, the institution has put in place a robust and accelerated series of new initiatives centred on open-source development and on providing the necessary resources for students to develop and launch AI-driven products, tools, and IP.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe introduce CAST, an innovative method for reconstructing high-quality 3D scenes from a single RGB image. Supporting open-vocabulary reconstruction, CAST excels in managing occlusions, aligning objects accurately, and ensuring physical consistency with the input, unlocking new possibilities in virtual content creation and robotics.
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Education
Ethics and Society
Games
Generative AI
Haptics
Modeling
Performance
Real-Time
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionCathex is an immersive artistic VR experience that unfolds like a virtual poem, guiding players on a philosophical journey of emotional transformation, catharsis, and self-discovery. By engaging in emotion-regulatory movements, players uncover a long-forgotten memory of the universe and their own self.
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Generative AI
Graphics Systems Architecture
Haptics
Image Processing
Industry Insight
Lighting
Math Foundations and Theory
Modeling
Physical AI
Pipeline Tools and Work
Real-Time
Scientific Visualization
Spatial Computing
Full Conference
Experience
DescriptionKick off your SIGGRAPH week in style! Join the ACM SIGGRAPH Chapters Committee from 9 pm-1 am Monday, August 11 at Mansion Nightclub for a night of fun with the world’s greatest pixel wranglers. Your conference badge is your entry ticket; all registration levels are welcome. You must be of legal drinking age. The club is a 14-minute walk from the Vancouver Convention Center at 1161 W Georgia St, Vancouver, BC V6E 0C6. Ask for more information at the ACM SIGGRAPH Village!
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Ethics and Society
Fabrication
Generative AI
Geometry
Graphics Systems Architecture
Hardware
Image Processing
Industry Insight
Math Foundations and Theory
Modeling
Performance
Physical AI
Pipeline Tools and Work
Rendering
Robotics
Scientific Visualization
Spatial Computing
Full Conference
Experience
DescriptionCome to this casual gathering to learn about how to join or start an ACM SIGGRAPH professional or student chapter and to meet and socialize with current chapters.
More information about Chapters is available at https://siggraph.org/chapters
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Digital Twins
Display
Dynamics
Education
Ethics and Society
Fabrication
Generative AI
Graphics Systems Architecture
Haptics
Hardware
Image Processing
Industry Insight
Lighting
Math Foundations and Theory
Performance
Physical AI
Pipeline Tools and Work
Real-Time
Robotics
Scientific Visualization
Simulation
Spatial Computing
Full Conference
Experience
DescriptionHosted by the professional chapter of Bogotá, Colombia, join us for a social hour to connect with other students and professionals who live where you live, and learn how to participate in a year-round community. This event is especially for conference attendees who have traveled to Vancouver from other continents!
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Capture/Scanning
Digital Twins
Display
Dynamics
Ethics and Society
Fabrication
Games
Graphics Systems Architecture
Haptics
Image Processing
Industry Insight
Lighting
Math Foundations and Theory
Modeling
Performance
Physical AI
Pipeline Tools and Work
Scientific Visualization
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionHosted by chapters on the East Coast of North America, join us for a social hour to connect with other students and professionals who live where you live, and learn how to participate in a year-round community.
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Ethics and Society
Fabrication
Generative AI
Geometry
Graphics Systems Architecture
Haptics
Image Processing
Industry Insight
Lighting
Math Foundations and Theory
Modeling
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Scientific Visualization
Spatial Computing
Full Conference
Experience
DescriptionHosted by chapters on the West Coast of North America, join us for a social hour to connect with other students and professionals who live where you live, and learn how to participate in a year-round community.
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Performance
Real-Time
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionChoreoFin is a kinetic dress inspired by aquatic creatures and mythological beings such as mermaids and amphibian humanoids. Its fins, made from nano-metal-coated fabric, shimmer with multicolored textures and are animated by fifteen shape-memory alloy actuators. Each actuator bends in three directions using BioMetal fibers, enabling smooth, expressive motion. The fins respond to nearby viewers—freezing or trembling if approached suddenly, or gently swaying as if breathing beneath the surface. ChoreoFin reflects both biological function and ornamental beauty, subtly merging elements of living forms and wearable design.
Talk
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Dynamics
Geometry
Industry Insight
Modeling
Pipeline Tools and Work
Rendering
Simulation
Full Conference
Virtual Access
Sunday
DescriptionIn Disney’s Moana 2, crafting the intricate hair and cloth motion to support and enhance the complex character performances required a new strategic approach. This involved performance categorization, continuity through visual planning, and iterative refinement, enabling Technical Animation to achieve the highly art-directed shots with consistency, efficiency, and effectiveness.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionA 3D-aware and controllable text-to-video generation method allows users to manipulate objects and camera jointly in 3D space for high-quality cinematic video creation.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe introduce an adaptive octree-based GPU simulator for large-scale fluid simulation. Our hybrid particle-grid flow map advection scheme effectively preserves vortex details, enabling high-resolution and high-quality results. The source code has been made publicly available at: https://wang-mengdi.github.io/proj/25-cirrus/.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe introduce a compact, C2-continuous kernel for MPM that reduces numerical diffusion and improves efficiency—without sacrificing stability. Built on a dual-grid framework and compatible with APIC and MLS, our method enables high-fidelity, large-scale simulations, further pushing the limits of MPM.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe introduce a novel scannable 2D code where the payload is stored in the topology of nested color regions, abandoning traditional matrix-based approaches (e.g., QRCodes). Claycodes can be largely deformed, styled, and animated. We present a mapping between bits and topologies, shape-constrained rendering, and a robust real-time decoding pipeline.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe present a Clebsch PFM fluid solver that accurately transports wave functions using particle flow maps. Key innovations include a new gauge transformation, improved velocity reconstruction on coarse grids, and better fine-scale structure preservation. Benchmarks show superior performance over impulse- or vortex-based methods, especially for small-scale flow features.
Technical Paper
Closed-form Generalized Winding Numbers of Rational Parametric Curves for Robust Containment Queries
10:55am - 11:05am PDT Tuesday, 12 August 2025 West Building, Rooms 211-214
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe derive closed-form expressions for GWNs of rational parametric curves for robust containment queries.
Our closed-form expression enables efficient computation of GWN, even if the query points are located on the rational curve. We also derive the derivatives of GWN for other applications.
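For readers unfamiliar with the baseline this paper generalizes: the classic discrete winding number of a closed polygon sums the signed angle each edge subtends at the query point and divides by 2π (near ±1 inside, near 0 outside). A minimal sketch, not the paper's closed-form rational-curve method; the function name is illustrative:

```python
import math

def winding_number(poly, qx, qy):
    """Classic discrete winding number of a closed polygon around (qx, qy):
    sum the signed angle subtended by each edge, divide by 2*pi.
    Values near +-1 mean inside; near 0 mean outside."""
    total = 0.0
    n = len(poly)
    for i in range(n):
        ax, ay = poly[i][0] - qx, poly[i][1] - qy
        bx, by = poly[(i + 1) % n][0] - qx, poly[(i + 1) % n][1] - qy
        # signed angle from a to b, via atan2 of cross and dot products
        total += math.atan2(ax * by - ay * bx, ax * bx + ay * by)
    return total / (2.0 * math.pi)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
inside = winding_number(square, 0.5, 0.5)   # close to 1.0
outside = winding_number(square, 2.0, 2.0)  # close to 0.0
```

The paper's contribution replaces this per-edge summation with closed-form expressions for rational parametric curves, which stay robust even when the query point lies on the curve.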
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Education
Ethics and Society
Graphics Systems Architecture
Industry Insight
Full Conference
Experience
DescriptionThis panel explores three case studies investigating how cloud-based GPU labs are democratizing access to careers in real-time filmmaking, motion capture, and virtual production by eliminating traditional geographic, economic, and technological barriers. Featuring insights from global educators at Final Pixel and CGPro, alongside industry leaders from Vicon and the Virtual Production Academy, the session will spotlight key lessons learned, challenges overcome, and innovative strategies for building inclusive, accessible creative education programs. Attendees will discover how cloud technology is reshaping talent pipelines and expanding opportunities for emerging artists and storytellers worldwide.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionCLR-Wire is a unified generative framework for 3D curve-based wireframes, jointly modeling geometry and topology in a continuous latent space. Using attention-driven VAEs and flow matching, it enables high-quality, diverse generation from noise, images, or point clouds—advancing CAD design, shape reconstruction, and 3D content creation.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionCMD revolutionizes 3D generation by enabling flexible local editing of 3D models from a single rendering, as well as progressive, interactive creation of complex 3D scenes. At its core, CMD leverages a conditional multiview diffusion model to seamlessly modify/add new components—enhancing control, quality, and efficiency in 3D content creation.
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionAn office worker unravels as he struggles to deal with bullying, psychological violence and harassment at work. At work, psychological health can sometimes be so fragile that it "hangs by a thread." The campaign aims to spark open conversations and encourage proactive solutions.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionCobra is a novel efficient long-context fine-grained ID preservation framework for line art colorization, achieving high precision, efficiency, and flexible usability for comic colorization. By effectively integrating extensive contextual references, it transforms black-and-white line art into vibrant illustrations.
Talk
Production & Animation
Livestreamed
Recorded
Animation
Art
Dynamics
Geometry
Modeling
Pipeline Tools and Work
Rendering
Simulation
Full Conference
Virtual Access
Thursday
DescriptionIn Pixar's Elio, the Universal Users Manual is a sentient alien book “character” created as a unique collaboration between characters and FX, consisting of a stack of pages animated along rigged shaped paths and then processed in Houdini to create the look of individual pages with particle and volume FX.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe introduce a collaborative metalens array comprising over 100-million nanopillars for broadband imaging. The proposed array camera is only a few millimeters flat and employs a non-generative reconstruction method, which performs favorably and without hallucinations, irrespective of the scene illumination spectrum.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe propose a practical method for dental layer biomimicry and multi-spot shade matching using multi-material 3D printing. It integrates seamlessly into workflows combining dental CAD tools and industrial multi-material slicers.
We validated it by printing multiple dentures and teeth with varying inner structures and translucencies to match VITA classical shades.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe propose ColorSurge, a lightweight dual-branch network for end-to-end video colorization. It delivers vivid, accurate, and real-time results from grayscale input, and is easily extensible for high-quality performance at low computational cost.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe reveal that existing online reconstruction of dynamic scenes with 3D Gaussian Splatting produces temporally inconsistent results, led by inevitable noise in real-world recordings. To address this, we decompose the rendered images into the ideal signal and the errors during optimization, achieving temporally consistent results across various baselines.
Course
Arts & Design
New Technologies
Research & Education
Livestreamed
Recorded
Fabrication
Full Conference
Virtual Access
Wednesday
DescriptionThis course will introduce attendees to foundations in computational craft. Computational Craft integrates computational fabrication– the use of computer programming to develop models and machine instructions for digital fabrication– with established craft materials and techniques to fabricate functional and decorative craft artifacts.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionGothic microarchitecture—a prevalent feature of late medieval art—comprises sculptural works that replicate monumental Gothic forms, though its original construction techniques remain historically undocumented. Leveraging insights from 15th-century Basel goldsmith drawings, we present an interactive framework for reconstructing these intricate designs from 2D projections into accurate 3D forms.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Games
Modeling
Rendering
Full Conference
Experience
DescriptionFounded in 2019, the Computer Graphics & Animation Research Projects initiative develops research projects in computer graphics and animation, then seeks out undergraduate students to staff its research project teams. So, if you are a student who would like to learn more about our projects, or a faculty member looking for opportunities for your students, join us for a lively discussion of what it means to engage in research with other undergraduate students worldwide. And, for more info, just visit our website (cgarp.net).
Birds of a Feather
New Technologies
Spatial Computing
Full Conference
Experience
DescriptionThe Computer Graphics History Institute is building a knowledgebase of computer graphics history. The knowledgebase will exist virtually; it will be built using ontologies, living taxonomies, and RDF triple stores. Management of the knowledgebase will be performed almost exclusively through XR, with interface design based on human cognitive abilities.
The purposes of the Institute are reference and research. With a general collection of history, it will be a coordinated place for learning about the history of CG. It will provide research frameworks for digital representations of history, and managing knowledge with XR using an API to the human cognitive system.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present a novel Monte Carlo approach to solve boundary integral equations with Dirichlet boundary conditions in two dimensions. While Walk-on-Spheres uses largest empty circles, which touch the boundary in only one point, we utilize semicircles and circle sectors that share one or two boundary edges resulting in shorter walks.
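For context on the method this paper improves: classic Walk-on-Spheres solves a Laplace problem with Dirichlet data by repeatedly jumping to a uniformly random point on the largest empty circle around the walker until it lands within a tolerance of the boundary. A minimal sketch of that baseline (not the semicircle/circle-sector variant described above), with illustrative function names, on the unit disk:

```python
import math
import random

def walk_on_spheres(x, y, boundary_g, dist_to_boundary, eps=1e-3, max_steps=1000):
    """One classic Walk-on-Spheres sample for the 2D Laplace equation with
    Dirichlet boundary data: jump to a uniform point on the largest empty
    circle until within eps of the boundary, then evaluate the boundary data."""
    for _ in range(max_steps):
        r = dist_to_boundary(x, y)
        if r < eps:
            break
        theta = random.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(theta)
        y += r * math.sin(theta)
    return boundary_g(x, y)

# Toy example: unit disk with harmonic boundary data g(x, y) = x.
# The harmonic extension is u(x, y) = x, so the estimate should be near 0.3.
dist = lambda x, y: 1.0 - math.hypot(x, y)
g = lambda x, y: x
random.seed(0)
n = 20000
estimate = sum(walk_on_spheres(0.3, 0.0, g, dist) for _ in range(n)) / n
```

Each circle touches the boundary at only one point, which is precisely the inefficiency the paper targets with semicircles and circle sectors that share boundary edges.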
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Ethics and Society
Industry Insight
Pipeline Tools and Work
Full Conference
Experience
DescriptionBuilding on last year’s Bird-of-a-Feather and panel, this open forum invites educators, industry professionals, students, and researchers to share experiences, strategies, and questions around neurodiversity and accessibility in computer graphics and interactive technologies. As we work toward more inclusive classrooms and workplaces, we will explore practical solutions, community needs, and ongoing challenges. Whether you are neurodivergent, an advocate, or simply curious, join us in continuing this vital conversation to support belonging, success, and innovation across the SIGGRAPH community. Email contact [email protected]
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present a physics-based method for simulating intricate freezing dynamics on thin films. Our novel Phase Map method integrated with MELP particles reproduces Marangoni freezing dynamics and the "Snow-Globe Effect". The framework captures soap bubble freezing dynamics while ensuring stability in complex scenarios and enabling precise pattern control.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present a tracking-based video frame interpolation method, optionally guided by user inputs. It utilizes sparse point tracks, first estimated using existing point tracking methods and then optionally refined by the user. Without any user input, it already achieves state-of-the-art results, with further significant improvements possible through user interactions.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionCora is a novel diffusion-based image editing method that achieves complex edits, such as object insertion, background changes, and non-rigid transformations, in only four diffusion steps. By leveraging pixel-wise semantic correspondences between source and target, it preserves key elements of the original image’s structure and appearance while introducing new content.
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionOne dancer, one body, one phone. In a time of collective alienation and technological mass control, one woman rediscovers her soul and reclaims her mind.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionMultiple importance sampling (MIS) is vital to most rendering algorithms. MIS computes a weighted sum of samples from different techniques to handle diverse scene types and lighting effects.
We propose a practical weight correction scheme that yields better equal-time performance on bidirectional algorithms and resampled importance sampling for direct illumination.
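The weighted sum the abstract refers to can be made concrete with Veach's balance heuristic, the standard MIS weighting the paper's correction scheme builds on. A minimal 1D sketch, assuming illustrative names and a toy integrand (this is the textbook baseline, not the proposed weight correction):

```python
import random

def balance_weight(p_this, p_other):
    """Balance heuristic for two techniques with equal sample counts."""
    return p_this / (p_this + p_other)

def mis_estimate(f, n=50000, seed=1):
    """Estimate the integral of f over [0, 1] by combining two techniques:
    A samples uniformly (pdf 1); B samples x with pdf 2x (sqrt of a uniform).
    Each sample is weighted by the balance heuristic before summing."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # Technique A: uniform sample.
        xa = rng.random()
        total += balance_weight(1.0, 2.0 * xa) * f(xa) / 1.0
        # Technique B: linear-pdf sample (guard against a zero draw).
        xb = max(rng.random(), 1e-12) ** 0.5
        pb = 2.0 * xb
        total += balance_weight(pb, 1.0) * f(xb) / pb
    return total / n

est = mis_estimate(lambda x: x * x)   # true value is 1/3
```

Because the two balance weights sum to one at every point, the combined estimator stays unbiased while down-weighting whichever technique has high variance there.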
Birds of a Feather
Arts & Design
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Dynamics
Ethics and Society
Full Conference
Experience
DescriptionThis BOF is for attendees interested in discussing issues of racial bias embedded in computer graphics research. It is a follow-on from similarly titled BOFs at SIGGRAPH 2021, 2022, and 2024.
We will celebrate progress over the last four years, discuss setbacks, and brainstorm paths forward. This will be a friendly, collaborative space for mutual, authentic engagement across difference. Attendees can be at any stage, including learning about, taking steps toward, or enacting change in computer graphics research.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionWhen Anna, an Olympic athlete, finds herself behind in the race, she redoubles her efforts, aiming to win the competition and never disappoint again. But as she pushes herself beyond her limits, she burns out...
Art Paper
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Art
Computer Vision
Dynamics
Education
Fabrication
Generative AI
Geometry
Image Processing
Modeling
Simulation
Spatial Computing
Full Conference
Virtual Access
Experience
Monday
DescriptionAfter all of the presentations, attendees are invited to participate in a Q&A.
Talk
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Augmented Reality
Computer Vision
Digital Twins
Education
Fabrication
Games
Geometry
Modeling
Real-Time
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Tuesday
DescriptionThis talk examines the challenges and innovations in designing and rigging non-humanoid alien characters with unconventional anatomies for Pixar’s Elio. The team explored creatures with limbless, multi-segmented, and fluid bodies, requiring novel animation and rigging solutions.
Talk
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Digital Twins
Dynamics
Geometry
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Simulation
Full Conference
Virtual Access
Thursday
DescriptionWe explore Wētā FX's tools and techniques for bringing characters to life through the creation of Malgosha, touching on costumes, shaders, simulation, the motion capture process, and animators’ techniques for creating realistic body and cloth performance.
Real-Time Live!
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Games
Generative AI
Geometry
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Tuesday
DescriptionWe present the fastest physics solver for games and interactive applications. Based on the new Augmented Vertex Block Descent method, it can simulate complex interactions of millions of objects in real time. It is numerically stable and computationally efficient, and it can properly handle frictional contacts, stacking, and articulated chains.
Talk
Arts & Design
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Recorded
Animation
Dynamics
Pipeline Tools and Work
Rendering
Simulation
Full Conference
Virtual Access
Wednesday
DescriptionThe Wild Robot used a painterly style that utilized transparency, soft edges, and smearing of assets. Traditional flat data channels were not able to capture non-binary transparency, making compositing difficult. We present Crypto-Deep data, an extension to Cryptomatte that stores layered geometric data needed to address compositing artifacts with transparency.
Spatial Storytelling
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Animation
Art
Audio
Augmented Reality
Games
Geometry
Industry Insight
Modeling
Pipeline Tools and Work
Real-Time
Rendering
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionStudio Syro will cover the unique pipeline they use to produce immersive animated films, games, and experiences directly in VR using the VR painting software Quill. The session will cover their artist-first pipeline, which blends traditional painting and animation techniques with VR, giving each piece a handmade feel.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe introduce a novel local-domain fluid-solid interaction simulator grounded in a lattice Boltzmann solver. By leveraging an MPC-based domain-tracking approach and an improved convective boundary condition, it offers enhanced stability and efficiency for deriving control policies of virtual agents, holding great promise for applications in both computer animation and robotics.
Talk
Arts & Design
Gaming & Interactive
Production & Animation
Research & Education
Livestreamed
Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Dynamics
Games
Lighting
Math Foundations and Theory
Pipeline Tools and Work
Simulation
Full Conference
Virtual Access
Thursday
DescriptionCollisions are a key problem in generating complex crowds animation. The mudskippers in Walt Disney Animation Studios' "Moana 2" presented a particularly challenging scenario as they pack tightly together to form a towering pile. Our solution introduces an additional simulation step to deform the skinned character meshes to resolve contact.
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Research & Education
Not Livestreamed
Not Recorded
Art
Education
Full Conference
Experience
DescriptionCurious about how to get involved in SIGGRAPH’s creative community? This session introduces the ACM SIGGRAPH Digital Arts Community (DAC), a group dedicated to connecting artists, technologists, and researchers working in digital and computational media arts. You’ll learn how DAC creates opportunities for creative exchange and collaboration across disciplines through programs that are open, inclusive, and designed to spark new ideas.
Come hear about DAC’s signature initiatives like SPARKS Lightning Talks, online exhibitions, and the student digital art competition, plus updates from international partners like ISEA and Expanded Animation. Whether you're a student, first-time attendee, or longtime SIGGRAPH participant, this session is your chance to explore how you can engage, share your work, and join a global network of creative thinkers.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Artificial Intelligence/Machine Learning
Computer Vision
Education
Graphics Systems Architecture
Physical AI
Virtual Reality
Full Conference
Experience
DescriptionJapan’s creativity continues to flourish, serving as a key to engaging the world and driving innovation.
Join us for "Creative Japan," a dynamic session that explores the forefront of computer graphics and interactive technologies with a focus on Japan.
The session features leading researchers, industry experts, and creators from Japan, as well as others around the globe with ties to Japan, who are shaping the future of the field. Discover groundbreaking ideas and innovations through talks by top experts. This inspiring lineup highlights the unique perspectives of Japanese professionals driving progress in both academia and industry.
Join us for "Creative Japan," a dynamic session that explores the forefront of computer graphics and interactive technologies with a focus on Japan.
The session features leading researchers, industry experts, and creators from Japan and around the globe who associated with Japan are shaping the future of the field . Discover groundbreaking ideas and innovations through talks by top experts. This inspiring lineup highlights the unique perspectives of Japanese professionals driving progress in both academia and industry.
Industry Session
Gaming & Interactive
New Technologies
Production & Animation
Animation
Artificial Intelligence/Machine Learning
Computer Vision
Games
Pipeline Tools and Work
Real-Time
Rendering
Full Conference
Experience
Exhibits Only
Wednesday
DescriptionChaos Vantage now supports open standards like USD and MaterialX in real time or near real time, as it continues to evolve far beyond its original scope and limitations, unlocking real-time ray tracing for studios and artists beyond V-Ray and any single renderer ecosystem.
Join Vantage Product Manager Simeon Balabanov as he explores how what began as a real-time viewport companion for V-Ray is now becoming an open, standalone platform built for the most demanding real-time production workflows. Simeon will also highlight how support for rendering Gaussian Splats is transforming previz workflows, and how the addition of OpenVDB volume rendering significantly expands Vantage’s potential in high-end visual effects.
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Performance
Real-Time
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionCryoScapes began during Jiabao Li’s Arctic Circle Artist Residency in Svalbard, inspired by water’s diverse forms—vapor, snow, waves, glaciers, and sea ice. The team developed a 3D ice printing system to create intricate sculptures that evolve with temperature changes. Roaming water droplets freeze on hydrophobic or hydrophilic treated surfaces, forming landscapes that blur the sense of scale, from lunar terrains to microscopic crystalline patterns. A macro camera captures the evolving formations in real time, while AI searches for nature-like patterns and composes them into Haiku-inspired poems. As CryoScapes travels to different cities, it co-creates with local conditions—temperature, humidity, and water mineral content—blurring the line between the artificial and the natural.
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Performance
Real-Time
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionManfred Mohr is a recipient of the ACM SIGGRAPH Distinguished Artist Award for Lifetime Achievement in Digital Art.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionCueTip is an interactive and explainable automated coaching assistant for a variant of pool/billiards. CueTip has a natural-language interface, the ability to perform contextual, physics-aware reasoning, and its explanations are rooted in a set of predetermined guidelines developed by domain experts. CueTip matches SOTA performance, with grounded and reliable explanations.
The Emerging Technologies program has partnered with the Technical Papers program. For a hands-on demonstration of this paper, visit:
Emerging Technologies Demo - CueTip: Interactive and Explainable Physics-aware Pool Assistant
Tuesday, August 12, 12-1 pm
Location: Experience Hall, West Building, Exhibit Hall B
Art Paper
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Art
Artificial Intelligence/Machine Learning
Audio
Digital Twins
Ethics and Society
Games
Generative AI
Real-Time
Virtual Reality
Full Conference
Virtual Access
Experience
Tuesday
DescriptionAfter all of the presentations, attendees are invited to participate in a Q&A.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present a method for the automatic placement of knit singularities based on curl quantization. Our method generates knit graphs that maintain all structural manufacturing constraints as well as any additional user constraints. This approach allows for simulation-free previews of rendered knits and also extends to the popular cut-and-sew setting.
Educator's Forum
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Education
Ethics and Society
Games
Generative AI
Lighting
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Virtual Reality
Full Conference
Virtual Access
Experience
Tuesday
DescriptionThis session presents academic experiences from multiple institutions and courses that share approaches to presentation, implementation and evaluation of virtual production concepts and techniques, including design considerations and technological implementations, that provide students with the opportunity to learn and experience virtual production within a studio classroom environment.
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Art
Education
Games
Full Conference
Experience
DescriptionThis session offers a preview of SIGGRAPH 2025’s creative programming. Chairs and Directors from various conference programs will share highlights of their featured projects, upcoming events, and curatorial visions. Attendees will get an early look at how this year’s conference explores new directions in digital art, encourages cross-disciplinary collaboration, and invites creative experimentation across media and technology.
The session also introduces the ACM SIGGRAPH Digital Arts Community (DAC), which supports global dialogue among artists, designers, and technologists. Through programs like SPARKS Lightning Talks, digital exhibitions, and student competitions, DAC fosters connections between art, computer graphics, and interactive media. Learn more at dac.siggraph.org.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionIn this work, we propose DAM-VSR, an appearance and motion disentanglement framework for video super-resolution. Appearance enhancement is achieved through reference image super-resolution, while motion control is achieved through video ControlNet. Additionally, we propose a motion-aligned bidirectional sampling strategy to support the generation of long videos.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionThis paper introduces DAMO, a Deep solver for Arbitrary Marker configuration in Optical motion capture. DAMO directly infers the relationship between each raw marker point and 3D model joint, without using predefined marker labels and configuration information.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionTo prove himself to his gangster father, Alessandro decides to rob a bar. What he doesn't expect is to meet another side of his father: Lady Victoria, the drag queen.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe introduce a method for discovering novel microscale TPMS structures with high-energy dissipation. By combining a parametric design space, empirical testing, and uncertainty-aware deep ensembles with Bayesian optimization, we efficiently explore and discover structures with extreme energy dissipation capabilities.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionA baby sea turtle needs to reach the ocean. To achieve that, she has to overcome lots of obstacles and predators on her way.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe propose DC-VSR, a novel video super-resolution approach based on a video diffusion prior. DC-VSR leverages Spatial and Temporal Attention Propagation (SAP and TAP) to ensure spatio-temporally consistent results and Detail-Suppression Self-Attention Guidance (DSSAG) to enhance high-frequency details. DC-VSR restores videos with realistic textures while maintaining spatial and temporal coherence.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionThe proposed neural network, DeepMill, can efficiently predict inaccessible and occlusion regions in subtractive manufacturing. By utilizing a cutter-aware dual-head octree-based convolutional architecture, it overcomes the computational inefficiency of traditional geometric methods and is capable of real-time prediction of inaccessible and occlusion regions during the 3D shape design phase.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionDeFillet, the reverse of CAD filleting, is vital for CAE and redesign but challenging with polygon CAD models. Our algorithm uses Voronoi vertices as rolling-ball center candidates to efficiently identify fillets. Sharp features are then reconstructed via quadratic optimization, validated on diverse models.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionDeformable Beta Splatting (DBS) is a novel approach for real-time radiance field rendering that leverages deformable Beta Kernels with adaptive frequency control for both geometry and color encoding. DBS captures complex geometries and lighting with state-of-the-art fidelity, while using 45% fewer parameters and rendering 1.5x faster than 3DGS-MCMC.
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionWe introduce "Multi-Layered Inflatables," a novel class of asymmetric-shaped inflatable structures that are easy to fabricate. These structures consist of multiple pairs of interconnected planar sheets, leveraging multi-layered inflation to create complex curved surfaces while maintaining structural integrity. In this hands-on session, attendees can design and fabricate various types of Multi-Layered Inflatables.
Birds of a Feather
Arts & Design
Animation
Audio
Ethics and Society
Real-Time
Full Conference
Experience
DescriptionThe demoscene is a computer art subculture born in the 1980s and active to this day. Programmers, musicians and graphics artists alike push the boundaries of their platforms as they create audio-visual performances some describe as digital graffiti.
In this session, some of the latest topics, works or experiments will be presented and discussed.
Course
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Art
Artificial Intelligence/Machine Learning
Computer Vision
Generative AI
Lighting
Rendering
Full Conference
DescriptionThis course explores the role of randomness in generative AI, drawing from statistical physics, stochastic differential equations, and computer graphics. It provides a deep understanding of how noise affects generative modeling and introduces advanced techniques and applications in AI.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Ethics and Society
Graphics Systems Architecture
Image Processing
Industry Insight
Lighting
Math Foundations and Theory
Modeling
Physical AI
Pipeline Tools and Work
Real-Time
Robotics
Scientific Visualization
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionExplore the synergy between research and application of radiance fields (NeRF/3DGS). This two-panel session facilitates a dialogue between researchers and industry professionals on applying these techniques for visualization, digital twins, interactive media (XR/3D/web) design, and more. The discussion will delve into critical areas like radiance field compression, web-based hosting, game engine integration, interoperability & standards and dynamic techniques (4DGS). This collaborative forum aims to foster knowledge exchange, identify future directions, and promote new collaborations, connecting the research and user communities within computer graphics and interactive techniques.
Real-Time Live!
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Games
Generative AI
Geometry
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Tuesday
DescriptionDescendant pioneers a heritage conservation workflow through cross-platform real-time tool integration. This hybrid project uses a contemplative VR experience to sample and simulate Teochew embroidery and folk rituals, exploring how digitization reshapes cultural memory practices. By innovating in cultural memory digitization, it interrogates digital environments' impact on ritual preservation.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionOur method proposes a novel computational design framework for designing anisotropic tensor fields. It enables flexible control over scalings without requiring users to specify orientations explicitly. We apply these anisotropic tensor fields to various applications, such as anisotropic meshing, structural mechanics, and fabrication.
Talk
Arts & Design
Gaming & Interactive
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Artificial Intelligence/Machine Learning
Augmented Reality
Display
Games
Geometry
Lighting
Pipeline Tools and Work
Real-Time
Full Conference
Virtual Access
Sunday
DescriptionThe Oscar tablet-based interfaces provide intuitive access to Industrial Light & Magic's StageCraft technology platform, offering filmmakers extensive creative control in a real-time LED virtual production volume. Using an empathetic iterative design approach, the resulting user interfaces led to a successful realization of the filmmaker's vision.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe introduce a pin-pression gripper featuring parallel-jaw fingers with 2D arrays of independently extendable pins, allowing instant shape adaptation to target object geometry and dynamic in-hand re-orientation for enhanced grasp stability. Reinforcement learning with curriculum-based training enables flexible, robust grasping and grasp-while-lift mode, validated by sim-to-real experiments with superior performance.
Stage Session
New Technologies
Artificial Intelligence/Machine Learning
Digital Twins
Generative AI
Pipeline Tools and Work
Full Conference
Experience
Exhibits Only
Wednesday
DescriptionWhat does it take to move a digital human from a flashy demo to real-world customer interactions in under a year? Join WWT’s behind-the-scenes journey—from debuting our first fully interactive avatar live at a major industry event to re-engineering ‘Kendra,’ a next-gen MetaHuman built for global enterprise scale.
We'll share the unfiltered experiences: last-minute hardware hacks, audio challenges, licensing landmines and tackling the uncanny valley head-on. You’ll also learn how we optimized our pipeline for sub-1.5-second latency, tailored deployments for offline, hybrid, and cloud setups, and designed a robust architecture capable of real-world use.
Leave with actionable insights, honest pitfalls and practical guidance for creating digital humans that do more than impress audiences—they deliver measurable enterprise value.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionDesignManager is an AI-powered design support system that functions as an interactive copilot throughout the creative workflow. With node-based visualization of design evolution and conversational interaction modes, it helps designers track, modify, and branch their processes while providing context-aware assistance through an innovative agent framework.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionDetlev has no problem. Why should he have a problem? He's obviously doing great.
Production Session
Production & Animation
Livestreamed
Not Recorded
Animation
Art
Industry Insight
Lighting
Rendering
Full Conference
Virtual Access
Monday
DescriptionThe Wild Robot follows the journey of Roz, a high-tech robot stranded on a remote island. In the film, we accentuate this juxtaposition of nature and technology stylistically–a futuristic machined precision amongst a deconstructed, painterly world. The style of the film serves the story–Roz does not initially belong on the island, stylistically. To that goal, we reimagined every workflow to retain the immediacy and fluidity of an artist's hand.
This production session features the development and technical challenges required to accomplish the film's unique look. Ultimately, our goal throughout every department was to incorporate the endearing qualities of traditional painting and 2D animation, while maintaining the richness and sophistication of a 3D space. These hand-crafted elements give the audience an immersive, imaginative experience and support the emotional intent of the story.
Birds of a Feather
Gaming & Interactive
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Games
Graphics Systems Architecture
Performance
Pipeline Tools and Work
Real-Time
Full Conference
Experience
DescriptionThe Slang shading language and compiler is a proven open-source technology empowering real-time graphics developers with flexible, innovative features that complement existing shading languages, including neural computation inside graphics shaders. Join us for the latest updates from the Slang Working Group and discussions around key shader community topics, including new capabilities and future directions, case studies, tools and techniques.
Birds of a Feather
Production & Animation
Not Livestreamed
Not Recorded
Pipeline Tools and Work
Full Conference
Experience
DescriptionAt software companies, engineers typically make up 40-60% of staff; at movie and cartoon production companies, the figure is closer to 5-10%. Yet production companies still need to maintain a wide range of DCC plugins and operating systems, and multiple versions of in-house tools across multiple projects. The ratio (size of the build matrix / number of engineers) is therefore far larger at production companies than at software companies, because their business model is about pictures, not software. To cope with this ratio, production companies need to maximize their use of DevOps technologies. In this BoF, we'd like to share production-proven knowledge, tips, and tricks to improve the delivery and robustness of technologies for production.
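As a rough illustration of the build-matrix pressure this session describes, the sketch below enumerates a hypothetical DCC x version x OS matrix and divides by team size. All tool names and counts here are illustrative assumptions, not figures from the session.

```python
# Illustrative build-matrix-to-engineer ratio for a studio tools team.
# All numbers and tool names are hypothetical placeholders.
from itertools import product

dccs = ["Maya", "Houdini", "Nuke"]      # DCC applications to support (assumed)
versions = ["2023", "2024", "2025"]     # versions per DCC (simplified)
oses = ["Linux", "Windows"]             # target operating systems

# Every (DCC, version, OS) combination is a build target to maintain.
build_matrix = list(product(dccs, versions, oses))
engineers = 5  # a small studio tools team (assumed)

ratio = len(build_matrix) / engineers
print(f"{len(build_matrix)} build targets / {engineers} engineers = {ratio:.1f} per engineer")
```

Even this toy matrix yields 18 targets for 5 engineers; real studios multiply this further by plugin count and project-pinned tool versions, which is where the DevOps automation the BoF discusses comes in.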
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionThe battle against Hatred has only just begun.
Journey into the new region of Nahantu in search of Neyrelle, who is both suffering the fate of her choice to imprison the Prime Evil Mephisto, and seeking a means to destroy him.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionIntroducing differentiable path tracing for geometric acoustics with an efficient gradient algorithm based on path replay backpropagation. The system computes derivatives of output spectrograms with respect to arbitrary scene parameters (materials, geometry, emitters, microphones) within the framework of acoustic ray tracing, with applications demonstrated in various geometric scenarios.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionMeet Diffuse-CLoC—a powerful unification of intuitive steering in kinematic motion generation and physics-based character control. By guiding diffusion over joint state-action spaces, it enables agile, steerable, and physically realistic motions across diverse downstream tasks—from obstacle avoidance to task-space control and motion in-betweening—all from a single model, with no fine-tuning required.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionDiffusing Winding Gradients (DWG) efficiently reconstructs watertight 3D surfaces from unoriented point clouds. Unlike conventional methods, DWG avoids solving linear systems or optimizing objective functions, enabling simple implementation and parallel execution. Our CUDA implementation on an NVIDIA RTX 4090 GPU runs 30–120x faster than iPSR on large-scale models (10–20 million points).
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionDiffusion as Shader (DaS) is a unified approach for controlled video generation that uses 3D tracking videos to enable versatile editing, including animating mesh-to-video, camera control, motion transfer, and object manipulation, while improving temporal consistency.
Course
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Generative AI
Image Processing
Full Conference
DescriptionThis tutorial focuses on diffusion models, cutting-edge tools for image and video generation. Designed for graphics researchers and practitioners, it offers insights into the theory, practical applications, and real-world use cases. Participants will learn how to effectively leverage diffusion models for creative projects in computer graphics.
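The basic building block such tutorials start from is the forward (noising) process. The sketch below shows the standard closed-form noising step for a scalar sample; the schedule values are illustrative assumptions, not taken from this course or any specific paper.

```python
# Toy forward-diffusion (noising) step:
#   x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps,  eps ~ N(0, 1)
# Schedule values below are illustrative, not from any specific model.
import math
import random

def forward_diffuse(x0, alpha_bar_t, rng=random):
    """Noise a scalar sample x0 given the cumulative schedule value alpha_bar_t."""
    eps = rng.gauss(0.0, 1.0)  # standard Gaussian noise
    return math.sqrt(alpha_bar_t) * x0 + math.sqrt(1.0 - alpha_bar_t) * eps

rng = random.Random(0)
x0 = 1.0
for abar in (0.99, 0.5, 0.01):  # early, middle, late timesteps
    print(abar, forward_diffuse(x0, abar, rng))
```

At alpha_bar_t = 1 the sample is untouched; as alpha_bar_t approaches 0 the output becomes pure Gaussian noise, which is the regime a trained denoiser learns to invert.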
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionPowder-snow avalanches are natural phenomena that result from an instability in the snow cover on a mountain relief. This paper introduces a physically-based framework to simulate powder-snow avalanches under complex terrains, allowing us to animate the turbulent snow cloud dynamics within the avalanche in a visually realistic way.
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Not Livestreamed
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Computer Vision
Ethics and Society
Generative AI
Haptics
Hardware
Image Processing
Performance
Physical AI
Robotics
Virtual Reality
Full Conference
Experience
DescriptionThis session will present the highlights from four SPARKS (Short Presentations for the Kindred Spirit) Sessions. The "Sensing the Body to Expand Possibilities in Art and Performance" session moderated by Elizabeth Jochum, Alan Macy and Bonnie Mitchell explored how artists push creative boundaries utilizing computational tools to augment the body in artistically expressive ways. "AI and Artistic Autonomy", moderated by Mauro Martino, Rebecca Ruige Xu and Gustavo Alfonso Rincon investigated how the dependency on models and algorithms developed by others influences creative practice. "Artistic Interpretation of Digital Cultural Heritage" moderated by Fan Xiang and Victoria Szabo, explores how the choices visual artists make in sourcing, composing, and styling historical imagery affects our understanding of the past. And lastly the "First Nations’ Futures" SPARKS session, moderated by Rewa Wright and Clarissa Ribeiro, discusses how digital art is used as a gateway to share stories about culture, community, and identity.
Art Paper
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Art
Computer Vision
Dynamics
Education
Fabrication
Generative AI
Geometry
Image Processing
Modeling
Simulation
Spatial Computing
Full Conference
Virtual Access
Experience
Monday
DescriptionThis work offers an innovative approach to digitally replicating crazing patterns, the aesthetic networks of fine cracks found on ceramics. By using a quadtree structure, the method captures the time-dependent and interactive aspects of these patterns, providing a novel perspective in digital material design. This contribution is important for the fields of digital arts, computer graphics, and interactive techniques, as it enriches the representation of cultural aesthetics and advances digital texture generation methods.
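To make the quadtree idea concrete, here is a minimal subdivision sketch: cells are split recursively, and the resulting leaves give a spatial scaffold along which crack segments could be placed. The fixed-depth split criterion is a placeholder assumption; the paper's actual subdivision rule (driven by time and user interaction) is not reproduced here.

```python
# Minimal quadtree sketch: recursively subdivide a square region into cells.
# The fixed-depth criterion is a stand-in; a crazing simulator would instead
# subdivide where and when cracks propagate.
class Quad:
    def __init__(self, x, y, size, depth):
        self.x, self.y, self.size = x, y, size
        self.children = []
        if depth > 0:
            half = size / 2
            for dx in (0, half):
                for dy in (0, half):
                    self.children.append(Quad(x + dx, y + dy, half, depth - 1))

    def leaves(self):
        """Return all leaf cells of this subtree."""
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

root = Quad(0.0, 0.0, 1.0, depth=2)
print(len(root.leaves()))  # 4**2 = 16 leaf cells
```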
Art Paper
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Art
Artificial Intelligence/Machine Learning
Audio
Digital Twins
Ethics and Society
Games
Generative AI
Real-Time
Virtual Reality
Full Conference
Virtual Access
Experience
Tuesday
DescriptionBoth a critique and celebration of digital representation, this project offers multiple perspectives beyond technological homogenization. Through exploring digital f(r)ictions and multiplicities, we reject singular viewpoints in favor of interconnected truths. Our work with AI and Colombian art raises questions about bias, agency, and authenticity in cultural production, prompting reflection on AI's influence on collective imaginaries.
Talk
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Augmented Reality
Computer Vision
Digital Twins
Education
Fabrication
Games
Geometry
Modeling
Real-Time
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Tuesday
DescriptionAn R&D initiative within Aardman Animations exploring virtual production (VP) technologies for stop-motion filmmaking. The project takes a holistic view of VP from story development through to delivery, utilising real-time tools, digital twins, and a cross-platform XR sandbox, striving to evolve traditional processes while enhancing creativity, efficiency, and integration across the production pipeline.
Talk
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Dynamics
Geometry
Industry Insight
Modeling
Pipeline Tools and Work
Rendering
Simulation
Full Conference
Virtual Access
Sunday
DescriptionThis work describes a new approach for directing cloth draping that accommodates 3D shaping and 2D pattern making simultaneously. We showcase our results with a series of garment assets and cloth animations from Pixar feature films Inside Out 2 (2024) and Elio (2025).
Birds of a Feather
Production & Animation
Generative AI
Full Conference
Experience
DescriptionWhat happens when cinematic direction becomes as simple as writing a sentence? Jon Finger explores how recent breakthroughs in generative video are enabling creators to visualize and iterate on complex scenes using natural language alone. Join us for a deep dive into the potential unlocked when AI models understand physics, light, and motion—and how this shift is transforming storytelling in games, film, and animation. This is a conversation about tools, focused on the broader implications for artists, designers, and the next generation of visual storytellers.
Organizer - Jon Finger, Luma AI
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Art Paper
Discipline Together With the Self in Kendo: Exploring "Qi" Through Mixed Reality and Autoethnography
3:45pm - 4:05pm PDT Monday, 11 August 2025 West Building, Rooms 109-110
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Games
Generative AI
Performance
Physical AI
Real-Time
Robotics
Simulation
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Experience
Monday
DescriptionThis work explores “qi” in kendo through mixed reality and autoethnography, blending tradition and technology. By animating digital humans with “qi”, it frames martial arts as art. The project invites reflection on selfhood and offers fresh insights at the intersection of culture, embodiment, and digital experience.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAlthough discrete connections are ubiquitous in vector field design, their torsion remains unstudied. We extend the existing toolbox to control the torsion of discrete connections: we introduce a new discrete Levi-Civita connection and define torsion as a measure of deviation from this reference, so torsion becomes a simple linear constraint.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionThe paper proposes a construction algorithm based on a divide-and-conquer strategy to map a disk-topology triangular mesh onto any convex polygon, which supports arbitrary numerical precision and exact arithmetic. Under exact arithmetic, it strictly guarantees a bijection for any mesh and convex polygon.
Real-Time Live!
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Games
Generative AI
Geometry
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Tuesday
DescriptionDJESTHESIA uses tangible interaction to craft real-time audiovisual multimedia, blending sound, visuals, and gestures into a unified live performance. Combining a DJ setup with motion capture and projected visuals, DJESTHESIA composes music and visuals from the DJ’s mixing and gestures, transforming the DJ into both a performer and a performance.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe formalize the path-tracing of volumes composed of anisotropic kernel mixture models. Our work enables computing physically-based light transport on complex volumetric assets efficiently, on tiny memory budgets. We further introduce Epanechnikov kernels as an efficient alternative in kernel-based rendering, and showcase our method in different forward and inverse volume rendering applications including radiance fields.
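For context on the Epanechnikov kernel named above: it is the compactly supported parabola K(u) = 3/4 * (1 - u^2) for |u| <= 1, zero elsewhere. A minimal 1-D kernel density sketch (illustrative only; not the paper's volumetric renderer):

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel: K(u) = 3/4 * (1 - u^2) for |u| <= 1, else 0."""
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u * u), 0.0)

def kde(x, samples, bandwidth):
    """1-D kernel density estimate at query points x from data samples."""
    u = (x[:, None] - samples[None, :]) / bandwidth
    return epanechnikov(u).mean(axis=1) / bandwidth
```

Its compact support is what makes it attractive in kernel-based rendering: each kernel contributes to only a bounded region, unlike a Gaussian.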
Talk
Arts & Design
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Recorded
Animation
Dynamics
Pipeline Tools and Work
Rendering
Simulation
Full Conference
Virtual Access
Wednesday
DescriptionTo meet the ambitiously illustrative design requirements for the FX in The Wild Robot and The Bad Guys 2, we developed a collection of tools, called Doodle, to let artists nimbly blend drawing techniques with simulation to efficiently craft stylized shape and motion in a 3D FX pipeline.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionDYG is a 3D drag-based scene editing method for Gaussian Splatting that enables precise, multi-view consistent geometric edits using 3D masks and control points. It combines implicit triplane representation and a drag-based diffusion model for high-quality, fine-grained results. Visit our project page at https://drag-your-gaussian.github.io/.
Stage Session
Arts & Design
New Technologies
Production & Animation
Art
Artificial Intelligence/Machine Learning
Generative AI
Pipeline Tools and Work
Full Conference
Experience
Exhibits Only
Tuesday
DescriptionGriptape Nodes enhances your creative capabilities by letting you design AI-driven workflows for images, video, scripts, and more — all through an intuitive drag-and-drop interface with integrated Agentic chat. Build advanced, predictable workflows that put you in control while expanding what's possible for your team. Share custom or private node libraries, integrate seamlessly with services like Deadline Cloud and various MCP servers, and orchestrate multi-engine workflows effortlessly. With studio-friendly licensing, Griptape Nodes is built to fit right into your production pipeline — empowering artists and studios to create more, faster, without compromise.
Technical Workshop
Arts & Design
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Art
Artificial Intelligence/Machine Learning
Computer Vision
Generative AI
Image Processing
Robotics
Full Conference
Experience
DescriptionDrawing is a fundamental human activity, used to think, communicate, and create. This workshop aims to deepen our understanding of drawing from both human and computational perspectives.
We will discuss questions such as: How do people draw pictures? What can psychology teach us about drawing behavior? How can the sketching process be modeled computationally? And how can sketching enhance designer control over AI tools?
We hope to inspire new connections and ideas by bringing these topics together in one place, and to inspire new interest in the computer graphics community in these topics.
Real-Time Live!
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Games
Generative AI
Geometry
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Tuesday
DescriptionThis presentation demonstrates my custom system integrating real-time AI generation with laser mapping. Using StreamDiffusion via TouchDesigner, images are created and refined through an interactive feedback loop, then transformed into laser paths. The system maps both projections and laser traces onto surfaces, with Ableton Live enabling synchronized performance with music.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionTo address a lack of generalization to novel classes, we propose DreamMask, which systematically explores data generation in the open-vocabulary setting, and how to train the model with both real and synthetic data. It significantly simplifies the collection of large-scale training data, serving as a plug-and-play enhancement for existing methods.
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionDreamPrinting is a volumetric 3D printing pipeline that transforms generative, radiance-based models into delicate real-world art pieces. By precisely assigning pigments at each voxel, it reveals breathtaking details—like translucent fur and glowing leaves—enabling users to convert imaginative digital scenes into tangible realities with unprecedented color fidelity.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe introduce Dress-1-to-3 to reconstruct physics-plausible, simulation-ready separated garments from an in-the-wild image. Starting with the image, our approach combines a pre-trained image-to-sewing pattern generation model with a pre-trained multi-view diffusion model. The sewing pattern is refined using a differentiable garment simulator based on the generated multi-view images.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe present a neural global illumination method capable of capturing multi-frequency reflections in dynamic scenes by leveraging object-centric feature grids and a novel dual-band fusion module. Our approach produces high-quality, realistic rendering effects and outperforms state-of-the-art techniques in both visual quality and computational efficiency.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionDualMS is a novel framework for designing high-performance heat exchangers by directly optimizing the separation surface of two fluids using dual skeleton optimization and neural implicit functions. It offers greater topological flexibility than TPMS and achieves superior thermal performance with lower pressure drop while maintaining comparable heat exchange rates.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe present a framework for generating music-driven synchronized two-person dance animations with close interactions. Our system represents the two-person motion sequence as a cohesive entity, performs hierarchical encoding of the motion sequence into discrete tokens, and utilizes dual generative masked transformers to generate realistic and coordinated dance motions.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionPersonalizing text-to-video models is challenging because dynamic concepts require capturing both appearance and motion. We propose Set-and-Sequence, a framework that personalizes DiT-based video models by first learning an identity LoRA basis from unordered frames, then fine-tuning coefficients with motion residuals on full videos, enabling superior editability and compositionality for applications.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionIntroducing the first GPU-based system for dynamic triangle mesh processing, delivering order-of-magnitude speedups over CPU solutions across diverse applications. Our system uses a patch-based data structure, speculative conflict handling, and a novel programming model, enabling robust, high-performance, and fully dynamic mesh operations directly on the GPU.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present a revolutionary method for experiencing live sports in stunning 3D, redefining the way games are seen, through immersive, interactive replays. Alongside, we release a large-scale synthetic dataset built to benchmark realism, motion, and human interaction in dynamic scenes, to fuel the next wave of research in 3D streaming.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionThis paper presents Epsilon Difference Gradient Evolution (EDGE), a novel method for accurate flow-map computation on grids without velocity buffers. EDGE enables large-scale, efficient and high-fidelity fluid simulations that capture and preserve complex vorticity structures while significantly reducing memory usage.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe automate video nonlinear editing using a multi-agent system. An Editor agent uses tools to create sequences from clips and instructions, while a Critic agent provides feedback in natural language. Our learning-based approach enhances agent communication. Evaluations with an LLM-as-a-judge metric and user studies show our system’s superior performance.
Educator's Forum
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Education
Full Conference
Experience
DescriptionAfter four days of education-focused presentations, attendees are invited to an invigorating, open discussion on current topics in computer graphics education and interactive techniques. Come to the town hall session to ask questions, share perspectives, and have a good time with the SIGGRAPH education community!
Technical Workshop
New Technologies
Not Livestreamed
Not Recorded
Capture/Scanning
Computer Vision
Generative AI
Modeling
Virtual Reality
Full Conference
Experience
DescriptionThis workshop aims to democratize 3D content creation, including both static and dynamic content by exploring recent advances in 3D reconstruction from real-world inputs of videos and images, as well as in generative AI technologies. We bring together researchers and practitioners to discuss scalable pipelines that lower the barrier to immersive content creation. A key feature of this workshop is a hands-on demonstration: participants will experience 1) volumetric content generation and real-time streaming experience, and 2) VR contents generated from fast 3D methods on VR headsets. The workshop seeks to bridge the gap between cutting-edge reconstruction research and VR applications.
Course
Research & Education
Livestreamed
Recorded
Animation
Artificial Intelligence/Machine Learning
Computer Vision
Dynamics
Geometry
Image Processing
Modeling
Rendering
Simulation
Full Conference
Virtual Access
Sunday
DescriptionLike a semester-long graduate seminar on Eigenanalysis, Singular Value Decompositions, and Principal Component Analysis in Computer Graphics and Interactive Techniques, this course looks at matrix diagonalization and analysis through the lens of 13 technical papers selected by the lecturers.
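As a taste of the topic, principal component analysis via the singular value decomposition fits in a few lines of NumPy (an illustrative sketch, not the course notes):

```python
import numpy as np

def pca_svd(X, k):
    """Top-k PCA of an (n, d) data matrix via the SVD of the centered data."""
    Xc = X - X.mean(axis=0)                       # center each column
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                           # principal directions, (k, d)
    scores = U[:, :k] * S[:k]                     # per-sample projections, (n, k)
    explained = S[:k] ** 2 / (X.shape[0] - 1)     # variance along each direction
    return components, scores, explained
```

The right singular vectors of the centered data are exactly the eigenvectors of its covariance matrix, which is the bridge between eigenanalysis and the SVD that such a seminar would explore.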
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionOur framework enables realistic and interesting elastic body locomotion by determining optimal muscle activations to achieve desired movements. It combines an interior-point method for contact modeling with a novel mixed second-order differentiation algorithm that merges analytic and numerical approaches, allowing Newton's method optimization to create diverse soft body animations.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionElevate3D transforms low-quality 3D models into high-quality assets through iterative texture and geometry refinement. At its core, HFS-SDEdit refines textures generatively while preserving the input’s identity by leveraging high-frequency guidance. The resulting texture then guides geometry refinement, allowing Elevate3D to deliver high-quality results with well-aligned texture and geometry.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionGenerating string instrument performances with intricate movements and complex interactions poses significant challenges. To address these, we present ELGAR—the first diffusion-based framework for whole-body instrument performance motion generation solely from audio. We further contribute innovative losses, metrics, and dataset, marking a novel attempt with promising results for this emerging task.
Talk
Arts & Design
Gaming & Interactive
Production & Animation
Research & Education
Livestreamed
Recorded
Art
Digital Twins
Ethics and Society
Games
Hardware
Industry Insight
Pipeline Tools and Work
Real-Time
Full Conference
Virtual Access
Sunday
DescriptionLearn about the ongoing sustainability journey that Call of Duty® graphics developers have embarked on, including the data used to guide each decision. Several techniques will be surveyed, along with their results, to help inspire developers of any real-time graphics application to reduce their carbon footprints with minimal effort.
Emerging Technologies
Technical Paper
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionThe SIGGRAPH Emerging Technologies program collaborated with the Technical Papers program to introduce an opportunity for authors of selected Technical Papers to present their work as a hands-on demonstration within the Experience Hall.
This Emerging Technologies demo is a partner presentation to the Technical Paper (UltraMeshRenderer: Efficient Structure and Management of GPU Out-of-Core Memory for Real-Time Rendering of Gigantic 3D Meshes) presented on:
Thursday, August 14, 9-10:30 am
Paper Session: Lightning Fast Geometry
Location: West Building, Rooms 220-222
Emerging Technologies
Technical Paper
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionThe SIGGRAPH Emerging Technologies program collaborated with the Technical Papers program to introduce an opportunity for authors of selected Technical Papers to present their work as a hands-on demonstration within the Experience Hall.
This Emerging Technologies demo is a partner presentation to the Technical Paper (A Platform for Interactive AI Character Experiences) presented on:
Monday, August 11, 10:45 am-12:15 pm
Paper Session: Moving, Seeing, Touching & Eating in VR
Location: West Building, Rooms 118-120
Emerging Technologies
Technical Paper
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionThe SIGGRAPH Emerging Technologies program collaborated with the Technical Papers program to introduce an opportunity for authors of selected Technical Papers to present their work as a hands-on demonstration within the Experience Hall.
This Emerging Technologies demo is a partner presentation to the Technical Paper (CueTip: Interactive and Explainable Physics-aware Pool Assistant) presented on:
Wednesday, August 13, 9-10:30 am
Paper Session: Robots in the World
Location: West Building, Rooms 118-120
Emerging Technologies
Technical Paper
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionThe SIGGRAPH Emerging Technologies program collaborated with the Technical Papers program to introduce an opportunity for authors of selected Technical Papers to present their work as a hands-on demonstration within the Experience Hall.
This Emerging Technologies demo is a partner presentation to the Technical Paper (Escher Tile Deformation via Closed-Form Solution) presented on:
Thursday, August 14, 2-3:30 pm
Paper Session: Textures, Tiles & Codes
Location: West Building, Rooms 211-214
Emerging Technologies
Technical Paper
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionThe SIGGRAPH Emerging Technologies program collaborated with the Technical Papers program to introduce an opportunity for authors of selected Technical Papers to present their work as a hands-on demonstration within the Experience Hall.
This Emerging Technologies demo is a partner presentation to the Technical Paper (ForceGrip: Reference-Free Curriculum Learning for Realistic Grip Force Control in VR Hand Manipulation) presented on:
Monday, August 11, 10:45 am-12:15 pm
Paper Session: Moving, Seeing, Touching & Eating in VR
Location: West Building, Rooms 118-120
Emerging Technologies
Technical Paper
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionThe SIGGRAPH Emerging Technologies program collaborated with the Technical Papers program to introduce an opportunity for authors of selected Technical Papers to present their work as a hands-on demonstration within the Experience Hall.
This Emerging Technologies demo is a partner presentation to the Technical Paper (GAIA: Generative Animatable Interactive Avatars with Expression-conditioned Gaussians) presented on:
Thursday, August 14, 9-10:30 am
Paper Session: The Shape of You
Location: West Building, Rooms 211-214
Emerging Technologies
Technical Paper
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionThe SIGGRAPH Emerging Technologies program collaborated with the Technical Papers program to introduce an opportunity for authors of selected Technical Papers to present their work as a hands-on demonstration within the Experience Hall.
This Emerging Technologies demo is a partner presentation to the Technical Paper (GSHeadRelight: Fast Relightability for 3D Gaussian Head Synthesis) presented on:
Wednesday, August 13, 2-3:30 pm
Paper Session: Light & Relight
Location: West Building, Rooms 301-305
Emerging Technologies
Technical Paper
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionThe SIGGRAPH Emerging Technologies program collaborated with the Technical Papers program to introduce an opportunity for authors of selected Technical Papers to present their work as a hands-on demonstration within the Experience Hall.
This Emerging Technologies demo is a partner presentation to the Technical Paper (LAM: Large Avatar Model for One-Shot Animatable Gaussian Head) presented on:
Monday, August 11, 2-3:30 pm
Paper Session: Get a Head
Location: West Building, Rooms 301-305
Emerging Technologies
Technical Paper
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionThe SIGGRAPH Emerging Technologies program collaborated with the Technical Papers program to introduce an opportunity for authors of selected Technical Papers to present their work as a hands-on demonstration within the Experience Hall.
This Emerging Technologies demo is a partner presentation to the Technical Paper (Painting with 3D Gaussian Splat Brushes) presented on:
Monday, August 11, 3:45-5:35 pm
Paper Session: Interactive Reality & Perception
Location: West Building, Rooms 118-120
Emerging Technologies
Technical Paper
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionThe SIGGRAPH Emerging Technologies program collaborated with the Technical Papers program to introduce an opportunity for authors of selected Technical Papers to present their work as a hands-on demonstration within the Experience Hall.
This Emerging Technologies demo is a partner presentation to the Technical Paper (PhysicsFC: Learning User-Controlled Skills for a Physics-Based Football Player Controller) presented on:
Wednesday, August 13, 3:45-5:35 pm
Paper Session: Physics-Based Human Characters
Location: West Building, Rooms 220-222
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionMarker-based optical motion capture (MoCap) is critical for virtual production and movement sciences. We propose a novel framework for MoCap auto-labeling and matching using uniquely coded clusters of reflective markers (AEMCs). Compared to commercial software, our method achieves higher labeling accuracy for heterogeneous targets and unknown marker layouts.
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionA boy navigates a fleeting childhood friendship and discovers his own queerness across three pivotal summers in 1990s southern China.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionDesigning freeform surfaces to reflect or refract light to achieve target light distributions is a challenging inverse problem. We propose an end-to-end optimization strategy using a novel differentiable rendering model driven by image errors, combined with face-based optimal transport initialization and geometric constraints, to achieve high-quality final physical results.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Ethics and Society
Fabrication
Games
Generative AI
Geometry
Graphics Systems Architecture
Image Processing
Industry Insight
Lighting
Math Foundations and Theory
Performance
Physical AI
Pipeline Tools and Work
Rendering
Scientific Visualization
Spatial Computing
Full Conference
Experience
DescriptionAfter the enthusiastic response to our BOF last year, we're back for more!
Entrepreneurs are some of the most inspiring and supportive people to meet and connect with. If you're an established or upcoming business founder and want to meet, connect, and share start-up stories, struggles, and successes with like-minded entrepreneurs, don't miss this opportunity!
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionA real-time deformation method for Escher tiles: interlocking organic forms that seamlessly tessellate the plane following symmetry rules. Rather than treating tiles as mere boundaries, we consider them as textured shapes, ensuring that both the boundary and interior deform simultaneously. The deformation is achieved via a closed-form solution.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionIn this work, we introduce Expressive Virtual Avatars (EVA), an actor-specific, fully controllable and expressive human avatar framework that achieves high-fidelity, lifelike renderings in real-time, while enabling independent control of facial expressions, body movements, and hand gestures.
Birds of a Feather
Gaming & Interactive
New Technologies
Research & Education
Augmented Reality
Games
Graphics Systems Architecture
Industry Insight
Real-Time
Spatial Computing
Full Conference
Experience
DescriptionOpenXR is the royalty-free open standard from Khronos Group, enabling developers to build once and deploy across diverse AR/VR platforms. This session highlights the latest core specification updates, platform interoperability efforts, and how vendor extensions are integrated while maintaining portability. LunarG will also present GFXReconstruct for OpenXR, a powerful tool for analyzing XR application behavior. Join the OpenXR Working Group for updates, insights, and an open discussion on challenges, best practices, and the future of cross-platform XR development.
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Education
Ethics and Society
Games
Generative AI
Haptics
Modeling
Performance
Real-Time
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionThis study proposes a design that enables users to experience the movements of others outside their view in a social VR environment through multi-sensory feedback. Users perceive the distance and movement of invisible users through the VR screen content and chair, using scent, vibration and sound to enhance social presence.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Capture/Scanning
Computer Vision
Digital Twins
Games
Pipeline Tools and Work
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionThis Birds of a Feather session brings together researchers and industry experts to discuss the evolving role of 3D Gaussian Splatting (3DGS) in computer graphics, interactive techniques, and real-world workflows. From academic research to applied use cases, panelists will share insights on how 3DGS is being explored for immersive environments, virtual production, cultural heritage, simulation, and more. Join us for an open exchange on the technical challenges, emerging best practices, and the future potential of this rapidly developing field.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Digital Twins
Graphics Systems Architecture
Industry Insight
Modeling
Performance
Real-Time
Scientific Visualization
Simulation
Full Conference
Experience
DescriptionANARI is a cross-platform API designed to decouple scene description from rendering implementation, making it easier for developers—especially in domains like scientific visualization and simulation—to leverage the latest rendering engines without deep graphics expertise. This session will explore ANARI’s growing capabilities, share real-world applications, and discuss best practices. Join us to see how ANARI is opening doors to scalable, portable, and accessible 3D rendering across platforms.
Production Session
Production & Animation
Not Livestreamed
Not Recorded
Animation
Industry Insight
Real-Time
Full Conference
Sunday
DescriptionCome hear about the creation of Nickelodeon’s hit new animated action/adventure series Max & the Midknights! The show’s Supervising Producer and CG Supervisor, along with production partner Xentrix’s Head of Pipeline and Creative Director, will share a behind-the-scenes look at the challenges they faced creating this ambitious cinematic CG show with its hand-made and stop-motion look and feel.
You will learn how Max’s design-forward pipeline, built upon a custom set of tools developed in Unreal, enabled a new agile story process that blends the best qualities of live-action and animated filmmaking. Unlike the “traditional” storyboard-based TV approach, Max’s visualization artists shoot on final models in 3D, then deliver dailies to editors who leverage their live-action expertise to assemble a full episode for review. Once noted and updated, the visualization pass goes to the storyboard artists, who, thanks to the true 3D nature of the visualization work, can focus exclusively on character performance with the confidence that all of their compositions will be fully reproducible in 3D. This hybridized process obviates the need for a downstream blocking/layout pass because the cameras and animations used to make the animatic are exported from Unreal and delivered along with the next-level animatic to Max’s Bangalore-based production partner Xentrix Studios.
In addition to Max’s unique story pipeline, you will also hear about the tools and techniques developed collaboratively between Nickelodeon and Xentrix to permit character animation in Maya with final rendering/delivery in Unreal, including a seamless Alembic- and FBX-based animation transfer approach and automatic shot creation in Unreal from Maya data. The team will also highlight the creative power of real-time iteration for improving cameras, shading, and lighting later in the process than permitted in the typical pipeline, and how they were able to get broadcast-ready final pixels straight out of Unreal.
Talk
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Augmented Reality
Computer Vision
Digital Twins
Education
Fabrication
Games
Geometry
Modeling
Real-Time
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Tuesday
DescriptionOur groundbreaking groom pipeline for LAIKA's "Wildwood" revolutionizes stop-motion puppet fabrication through CG-assisted silicone casting. By leveraging the VFX team's 3D models and digital grooms, we were able to scale to the needs of this epic feature, while providing anisotropic characteristics and enabling seamless integration between practical and digital elements.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe introduce FaceExpressions-70k, a large-scale dataset comprising 70,500 crowdsourced comparisons of facial expressions collected from over 1,000 participants. It supports the training of perceptual models for expression differences and helps guide decisions on acceptable latency and sampling rates for facial expressions when driving a face avatar.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionGiven a single co-located smartphone video captured in a dim room as the input, our method can reconstruct high-quality facial assets within the distribution modeled by a diffusion prior trained on Light Stage scans, which can be exported to common graphics engines like Blender for photo-realistic rendering.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionOur framework can efficiently synthesize facial microstructure from an unconstrained facial image via differentiable optimization. We propose neural wrinkle simulation for differentiable microstructure parameterization, and direction distribution similarity to align features with blurry image patches. Our framework is also compatible with existing facial reconstruction methods for detail enhancement.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe propose a novel method for normal estimation of unoriented point clouds and VR ribbon sketches that leverages a modeling of the Faraday cage effect. Our method is uniquely robust to the presence of interior structures and artifacts, producing superior surfacing output when combined with Poisson Surface Reconstruction.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionLarry, a man in his thirties, makes one final attempt to save his father from alcoholism, even at the risk of his own downfall.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionFashionComposer is a flexible model for compositional fashion image generation, with a universal framework that handles diverse input modalities such as text, human models, and garment images. It personalizes appearance, pose, and human figure, using subject-binding attention to integrate reference features, enabling applications like virtual try-ons and human album generation.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionThis paper presents a GPU-friendly framework for real-time implicit simulation of hyperelastic materials with frictional contacts. Utilizing a novel splitting strategy and efficient solver, the approach achieves robust, high-performance simulation across various stiffness materials, handling large deformations and precise friction interactions with remarkable efficiency, accuracy, and generality.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionDetecting surface self-intersections is crucial for CAD modeling to prevent issues in simulation and manufacturing. This paper presents an algebraic signature-based algorithm for quickly determining self-intersections of NURBS surfaces. This signature is then recursively cross-used to compute the self-intersection locus, guaranteeing robustness in critical cases including tangency and small loops.
Talk
Gaming & Interactive
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Dynamics
Games
Image Processing
Industry Insight
Lighting
Pipeline Tools and Work
Real-Time
Simulation
Full Conference
Virtual Access
Tuesday
DescriptionPresentation of a production-proven, fast fluid up-resing method that allows high-quality fluid FX to be generated from low-resolution simulations.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionThe median filter is a staple of computational image processing. Existing efficient methods share a common flaw, which is that they use a square kernel, producing visual artifacts. Our method overcomes this limitation, enabling fast and high-quality circular-kernel median filtering, across multiple platforms and image types.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe propose a physics-based modeling system for knots and ties using pipe-like parametric templates, defined by Bézier curves and adaptive radii for flexible, intersection-free shapes. Our method maps cloth regions from UV space into 3D knot forms via a penetration-free initialization and supports quasistatic simulation with efficient collision handling.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe introduce a novel reduced-order fluid simulation technique leveraging Dynamic Mode Decomposition (DMD) to enable fast, memory-efficient, and user-controllable subspace simulation. Combining spatial ROM compression with spectral physical insights, our method excels in animation, real-time interaction, artistic control, and time-reversible fluid effects.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe extend Penner-coordinate-based methods for seamless parametrizations to surfaces with sharp features to which the parametrization needs to be aligned. We describe a two-phase method to efficiently minimize feature constraint residual errors. We demonstrate that the resulting algorithm works robustly on the Thingi10k dataset, completing the quad mesh generation pipeline.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present a unified mesh repair framework using a manifold wrap surface to fix diverse imperfections while preserving sharp features. By optimizing projected samples and leveraging adaptive weighting, our method ensures watertightness, manifoldness, and high geometric fidelity, outperforming existing approaches in both topology correction and feature preservation.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionThis study explores how light color influences the perception of emotion of virtual characters. By analyzing various lighting conditions, including red and blue hues, we reveal how light affects emotion intensity, recognition, and genuineness. Findings show that lighting, realism, and shadows are key factors in enhancing the emotional impact.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionOur approach proposes a novel partition method for reliable feature-aligned quadrangulation. The core insight is that singularity-distant smooth streamlines are more suitable as patch boundaries; our implementation accordingly confines patch boundaries to regions of high field smoothness.
Validated on large-scale datasets, our method generates high-quality quad meshes while preserving reliability.
Talk
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Dynamics
Geometry
Pipeline Tools and Work
Rendering
Simulation
Full Conference
Virtual Access
Wednesday
DescriptionOur talk includes technical information for setting up a similar pipeline, as well as artist stories covering the extensions discovered during production to achieve the directors' artistic vision.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe propose FlexiAct, an image animation framework that transfers actions from a reference video to any target image, enabling variations in layout, viewpoint, and skeletal structure while maintaining identity consistency.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionThis work constructs Green coordinates for cages composed of Bézier patches, which enables flexible deformations with curved boundaries. The high-order structure also allows us to create a compact curved cage for the input models. Additionally, this work proposes a global projection technique for precise linear reproduction.
Stage Session
New Technologies
Production & Animation
Animation
Artificial Intelligence/Machine Learning
Generative AI
Pipeline Tools and Work
Rendering
Full Conference
Experience
Exhibits Only
Wednesday
DescriptionJoin us for a 30-minute deep dive behind the scenes of "Flicker", a 7-minute short film currently in production by FuzzyPixel, that's redefining AI-driven content creation. From the creators of "Picchu," this project showcases how generative AI can help achieve the storyteller's vision. In partnership with Griptape.ai, FuzzyPixel is pioneering new workflows that will shape the future of digital production. Griptape Nodes empowers creators with intuitive AI-driven tools that seamlessly integrate with AWS. By combining Deadline Cloud's scalable rendering capabilities with Bedrock's advanced models, teams can build flexible, reusable pipelines that adapt to their creative needs. Technical Directors can develop custom nodes, ensuring the technology serves the art, not the other way around. All this comes in an open-source, pipeline-friendly package that prioritizes precision and artistic control.
FuzzyPixel brings together award-winning artists and technicians who experiment with our services in real-world production scenarios. They create sophisticated animation that meets industry standards while providing valuable insights.
AWS Deadline Cloud makes render farm management effortless for teams working on films, TV shows, games, and design projects. This fully managed service enables teams to deploy and scale rendering projects within minutes, boosting pipeline efficiency and expanding project capacity.
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionIn this short based on Skydance Animation’s feature Spellbound, Flink sets out with a bit of magic to help rescue the messenger pigeons that have turned to stone.
Art Paper
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Art
Computer Vision
Dynamics
Education
Fabrication
Generative AI
Geometry
Image Processing
Modeling
Simulation
Spatial Computing
Full Conference
Virtual Access
Experience
Monday
DescriptionTransforming 2D Chinese calligraphy into 3D forms deepens how traditional art is understood and experienced, combining cultural heritage with modern technology. This approach adds spatial depth, opening new possibilities for digital art, preservation, and interactive design.
By merging computational modeling with artistic expression, the work explores how technology can reinterpret historical artforms, encouraging cross-disciplinary dialogue and inspiring new ways to preserve and evolve intangible cultural heritage.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionFlexible Level of Detail (FLoD) integrates the concept of LoD into 3DGS using a multi-level representation built with 3D Gaussian scale constraints and level-by-level training strategy. FLoD enables flexible rendering through single-level or selective rendering for optimal image quality under varying GPU VRAM constraints.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe present a unified compressible flow map framework based on Lagrangian path integrals, enabling conservative density-energy transport and flexible pressure treatments. Validated on diverse systems—from shocks to shallow water—it captures complex flow features like vortices and wave interactions, broadening flow-map applicability across compressibility regimes and fluid morphologies.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe present the Vortex Particle Flow Map (VPFM) method, which revitalizes the traditional Vortex-In-Cell approach for computer graphics. By evolving vorticity and higher-order quantities along particle flow maps, our method achieves significantly improved long-term stability and vorticity preservation, enabling high-fidelity simulation of complex vortical fluid motions in 2D and 3D.
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionThis study presents "FluidicSwarm," a swarm robot control system that imitates fluid behavior, extending the user's body. Users manipulate fluid properties with hand movements, adjusting the swarm's shape and flexibility for easy avoidance and transport tasks. This improves the efficiency of swarm robot operations in various environments, including teleoperation.
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Performance
Real-Time
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionFollow the Ship (2025) is a generative video work emerging from Matthew Attard’s project I WILL FOLLOW THE SHIP. It explores the convergence of heritage and digital processes as a form of contemporary digital drawing. The work integrates eye-tracking datasets from historical maritime graffiti found in Malta with algorithmic generative systems, questioning how digital media can reframe cultural memory and visual language. As one of several outcomes from the project, Follow the Ship reflects on themes of heritage, metaphor, the maritime environment, and the digital present, offering a layered meditation on perception, history, and technological reinterpretation.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionForceGrip is a reference-free reinforcement learning-based agent for realistic VR hand manipulation. It uses a progressive curriculum (finger positioning, intention adaptation, dynamic stabilization) and physics simulation to convert VR controller inputs into faithful grip forces. In user studies, participants achieved higher realism and more precise control than with competing methods, ensuring immersive interaction.
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionAn orphaned bear cub finds a home with a fatherly evergreen tree, until his hunger for trash leads him to danger.
Birds of a Feather
Gaming & Interactive
New Technologies
Research & Education
Games
Graphics Systems Architecture
Hardware
Industry Insight
Modeling
Pipeline Tools and Work
Rendering
Full Conference
Experience
DescriptionJoin Khronos and the Vulkan community for an open conversation about the latest developments shaping the future of Vulkan. We’ll explore updates to the Vulkan SDK and growing ecosystem support, and discuss new tooling and capabilities that are making Vulkan more accessible. We’ll also provide the latest on Vulkan SC, the safety-critical Vulkan designed for automotive, aerospace, and industrial systems. Plus, hear how Vulkan is being integrated into professional design and visualization tools. This is your opportunity to share feedback, ask questions, and help guide the direction of Vulkan’s evolution.
Birds of a Feather
Arts & Design
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Art
Pipeline Tools and Work
Full Conference
Experience
Descriptionframe:work is the home of LIVE PIXEL PEOPLE. We bring together the artists, technologists and producers who deliver creative video content for live audiences or generate pixels live for camera & screen. Our community works across film, live entertainment, art installations and web, creating visual experiences that are a creative collaboration across technology and design. Join us for a discussion of projects, tools, best practices and market challenges. We'll be talking about our mission of client education, next generation mentorship and community knowledge sharing.
Educator's Day Session
Research & Education
Not Livestreamed
Not Recorded
Education
Industry Insight
Full Conference
Virtual Access
Experience
Monday
DescriptionJoin us for a look at how Adobe Substance 3D is helping students turn skills into careers. Discover what drives our vision, learn more about how we partnered with the ArtCenter and Rivian to empower student creativity, and witness how digital twins are making the jump from classroom projects to real-world impact. Get a glimpse into our global ambassador program, hear about our current community initiatives, and see what’s next as we grow the pipeline from education to industry.
Talk
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Dynamics
Geometry
Industry Insight
Modeling
Pipeline Tools and Work
Rendering
Simulation
Full Conference
Virtual Access
Sunday
DescriptionThis talk explores the integration of traditional Jacquard weaving techniques into our existing workflow at Netflix Animation Studios. We discuss the merits of fibre-level construction and the lessons learned as we laboured to place the power of a Jacquard loom into the hands of digital artists.
Birds of a Feather
New Technologies
Artificial Intelligence/Machine Learning
Generative AI
Industry Insight
Pipeline Tools and Work
Full Conference
Experience
DescriptionDiscover how AI-powered cloud workflows are transforming creativity for artists, filmmakers, and studios. From Pixels to Possibilities brings together panelists pioneering cloud-driven AI tools in visual effects, AI-powered studio management, design workflows, and the creation of AI digital doubles. This session showcases real-world case studies, including AI relighting for VFX, AI solutions helping studios scale and streamline operations, design tools accelerating creative pipelines, and the emergence of high-fidelity AI replicas reshaping casting, licensing, and talent rights. Panelists will share insights, challenges, and creative breakthroughs, offering an inside look at how cloud AI is turning raw pixels into limitless creative possibilities.
Industry Session
Arts & Design
Animation
Art
Full Conference
Experience
Exhibits Only
Wednesday
DescriptionIn this session, renowned digital portrait artist Ian Spriggs takes us through the evolution of the digital human, from the early days of CG to today’s hyper-realistic characters. Drawing from over a decade of professional and personal work, Ian explores how advancements in technology and a dedication to artistic integrity have shaped his creative process. He’ll share how classical portraiture and emotional storytelling influence his approach, showing how digital humans have evolved from technical feats into expressive canvases of identity, emotion, and human connection. This talk blends technical innovation with artistic inspiration to reveal where digital humans began, where they are now, and where they’re headed.
Talk
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Augmented Reality
Computer Vision
Digital Twins
Education
Fabrication
Games
Geometry
Modeling
Real-Time
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Tuesday
DescriptionImmersion may be ancient, but creating collective, interactive experiences remains a challenge even today. Opaque interfaces, cognitive overload, and co-presence can hinder engagement. Through case studies, this talk explores practical frameworks for leveraging real-time engines, novel HCI, and large-format displays to craft resonant, shared experiences in public spaces.
Industry Session
Arts & Design
New Technologies
Production & Animation
Animation
Artificial Intelligence/Machine Learning
Games
Industry Insight
Pipeline Tools and Work
Real-Time
Rendering
Full Conference
Experience
Exhibits Only
Wednesday
DescriptionThis session explores the latest advancements in Chaos Arena, bringing real-time ray tracing and a unified asset pipeline to virtual production. We’ll show how USD and MaterialX have been integrated to enable a consistent, interoperable workflow, allowing a single asset to move seamlessly from pre-production to virtual production and final VFX.
We’ll also cover the integration of Gaussian Splats into Arena, demonstrating how they enhance efficiency and visual quality in real-time environments. Learn how these innovations streamline production, eliminate redundant steps, and help democratize high-quality virtual production.
In addition to these key developments, we’ll highlight other recent advances in our real-time ray tracing capabilities and how they continue to push the boundaries of what’s possible in virtual production.
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionWe present a compact, handheld holographic video camera that captures full-color holograms in real time under natural lighting, making laser-free holography possible. By integrating advanced optical components and AI-driven super-resolution, it enables high-quality holographic content capture, paving the way for portable, next-generation immersive media and real-world applications of holography.
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Education
Ethics and Society
Games
Generative AI
Haptics
Modeling
Performance
Real-Time
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionFungiSync is a cyberdelic mixed reality participatory ritual that reprotocolizes bodily contact—for example, the handshake—through masquerade-style, mushroom-decorated mixed reality masks, enabling participants to merge or exchange their distinct, audio-reactive augmented reality overlays and, in doing so, dissolve "you" and "me" perspectives inspired by fungal non-dualism interdependence wisdom.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present GAIA (Generative Animatable Interactive Avatars) for high-fidelity 3D head avatar generation. GAIA learns dynamic details with expression-conditioned Gaussians, while being animatable consistently with an underlying morphable model. With a novel two-branch architecture, GAIA disentangles identity and expression. GAIA achieves state-of-the-art realism and supports interactive rendering and animation.
Educator's Forum
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Recorded
Animation
Art
Augmented Reality
Digital Twins
Education
Fabrication
Games
Haptics
Performance
Real-Time
Virtual Reality
Full Conference
Virtual Access
Experience
Wednesday
DescriptionThis talk explores how gamified learning and adaptive game design empower individuals with disabilities. Using case studies from Limbitless Solutions’ interdisciplinary training games, it highlights how computer graphics, interactive techniques, and accessibility-driven design can transform education, fostering engagement, empathy, and innovation in CG classrooms and beyond.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionGarment sewing patterns often rely on vector formats, which struggle with discontinuities and unseen topologies. GarmentImage instead encodes geometry, topology, and placement into multi-channel grids, enabling smooth transitions and better generalization. Using simple CNNs, it works well in pattern exploration, prompt-to-pattern generation, and image-to-pattern prediction.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe propose a Gaussian fitting compression method for light field probes, reducing storage and memory demands in large scenes. Using low-bit adaptive Gaussians and GPU-accelerated decompression, our technique replaces traditional PCA-based approaches, achieving 1:50 compression ratios. Real-time cascaded light field textures eliminate redundant baking, preserving visual quality and rendering speed.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present a grid-free fluid simulator featuring a novel Gaussian spatial representation (GSR) for the velocity field. The advantages of GSR over traditional Lagrangian/Eulerian data structures are fourfold: memory compactness, spatial adaptivity, vorticity preservation, and continuous differentiability. Our method also greatly outperforms similar competitors in terms of quality and performance.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe develop novel and efficient computer-generated holography algorithms, dubbed Gaussian Wave Splatting, that transform Gaussian-based scene representations into holograms. We derive a closed-form 2D Gaussian-to-hologram transform supporting occlusions and alpha blending, along with an efficient, easily parallelizable Fourier-domain approximation of this process, implemented with custom CUDA kernels.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionGaVS: Transform unstable, shaky videos into smooth, professional-quality footage. We design a novel 3D rendering technology that preserves motion intent while eliminating shakes and distortions, with no cropping or warping artifacts, and remains robust under dynamic scenes and intense motion. User studies validate GaVS as superior to prior approaches. Capture life steadily!
Talk
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Digital Twins
Display
Education
Games
Generative AI
Geometry
Graphics Systems Architecture
Image Processing
Modeling
Pipeline Tools and Work
Real-Time
Rendering
Simulation
Virtual Reality
Full Conference
Virtual Access
Wednesday
DescriptionAs part of our GenAI innovation program, Moment Factory collaborated with Third Rail Projects (TRP), a New York-based troupe renowned for its immersive and participatory creations, to co-create a unique exploration blending human creativity with the power of generative artificial intelligence tools.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present GenAnalysis, an implicit shape generation framework enabling joint shape matching and consistent segmentation by enforcing as-affine-as-possible (AAAP) deformations via regularization loss in latent space. It enables shape analysis via extracting and analysing shape variations in the tangent space. Experiments on ShapeNet demonstrate improved performance over existing methods.
Technical Workshop
Research & Education
Not Livestreamed
Not Recorded
Animation
Physical AI
Robotics
Full Conference
Experience
DescriptionLegged robots, particularly humanoids, represent an emerging technology whose widespread acceptance depends on their ability to perform meaningful tasks at the human cadence in the real world. While human motion data can drive progress in this field, it is often sparse and lacks action labels, limiting the effectiveness of supervised learning. Recent advancements in imitation learning, reinforcement learning, and robotic hardware improvements have led to better generalization of natural behaviors in robots. This workshop will bring together leaders in human/animal simulation, control, animation, and robotics to discuss the state-of-the-art techniques for natural motion generation of physics-based characters and robots.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionA framework to generate past and future processes for drawing process videos.
Course
Not Livestreamed
Not Recorded
Full Conference
DescriptionThis course provides an overview of Generative AI concepts and applications, as well as the challenges and opportunities in this exciting field.
This session, offered by NVIDIA Training, can help you prepare for the NVIDIA Certification Exams* taking place on Thursday at SIGGRAPH.
*The on-site NVIDIA Certification Exam opportunity is available to attendees with Experience and Full Conference Registrations only.
Birds of a Feather
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Physical AI
Full Conference
Experience
DescriptionWhat might an AI-powered future look like? The rapid evolution of computing capabilities is transforming the process of scientific discovery and creative production, and inventing new ways to experience and interact with art, music, design, film, literature, theatre, fashion, and every other sphere of cultural production. This BoF seeks to bring together technologists, artists, arts organizations, and researchers to discuss what impact AI and generative technologies may have on our future.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present a tool for enhancing the detail of physically based materials using an off-the-shelf diffusion model and inverse rendering. Our goal is to enhance the visual fidelity of materials with detail that is often tedious to author, by adding signs of wear, aging, weathering, etc.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present the first generative model for neural BTFs, enabling single-shot generation from arbitrary text or image prompts. To achieve this, we introduce a universal neural material basis and train a conditional diffusion model to generate materials in this basis from flash images, natural images and text prompts.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionLimited high-quality ground-truth data hinders traditional video matting's real-world application. This work tackles this by advocating for large-scale training with diverse synthetic segmentation and matting data. A novel generative pipeline is also introduced to predict temporally consistent alpha masks with fine-grained details.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present a systematic derivation of a continuum potential defined for smooth and piecewise smooth surfaces, by identifying a set of natural requirements for contact potentials. Our potential is formulated independently of surface discretization and addresses the shortcomings of existing potential-based methods while retaining their advantages.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Birds of a Feather
Gaming & Interactive
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Pipeline Tools and Work
Full Conference
Experience
DescriptionA casual discussion on production pipeline development. Come talk shop about the latest trends, common concerns, and best practices.
This is part of a linked series of Technical Pipeline BoFs, covering the VFX Reference Platform, Renderfarming, Cloud Native, Pipeline, Studio Storage, and MLOps.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Artificial Intelligence/Machine Learning
Digital Twins
Games
Generative AI
Geometry
Industry Insight
Modeling
Pipeline Tools and Work
Rendering
Virtual Reality
Full Conference
Experience
DescriptionglTF is evolving beyond its original role as the JPEG of 3D. This session will offer insight into the current and future evolution of the format, including a new membership structure that makes it easier than ever to get involved and show the future of 3D. Learn how to access and use updated tooling; get a high-level update on new and emerging interactivity, physics, and audio extensions; and learn about the role of Gaussian Splatting in the future of 3D Asset Creation. We’ll explore the newly updated Asset Creation Guidelines 2.0, an essential best practices guide for content creators.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Augmented Reality
Games
Modeling
Pipeline Tools and Work
Real-Time
Scientific Visualization
Spatial Computing
Full Conference
Experience
DescriptionGet an in-depth look at how glTF is evolving to support richer, more realistic textures and new interactive applications, including gaming and metaverse use cases. We’ll begin with updates on the latest advancements in Physically Based Rendering (PBR), including new extensions and improvements to the Khronos sample viewer. Then, dive into emerging glTF capabilities such as baking interactivity into models, supporting VRM avatars, integrating real-time media with MPEG-I, and assembling multiple assets into complex scenes. We’ll wrap up with a case study from Blender Studio using a glTF-based pipeline for the papercraft game Project Dogwalk. Join the conversation!
Birds of a Feather
Production & Animation
Not Livestreamed
Not Recorded
Pipeline Tools and Work
Full Conference
Experience
DescriptionThis is part of a linked series of Technical Pipeline BoFs, covering the Reference Platform, Renderfarming, Cloud Native, Pipeline, Studio Storage, and MLOps. Join a participant-driven discussion with key representatives from the graphics community, comparing experiences and exploring techniques related to pushing the production pipeline and resources toward the cloud.
This session will focus on peer-to-peer learning, collaboration and creativity, and high-value topics raised during the session will be explored further via The Pipeline Conference's online Speaker Series and in the Cloud Native user group (https://discord.gg/PU8hygUfbf).
Attendees will receive invites to our "Beers of a Feather" event, the same evening.
Industry Session
Production & Animation
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Digital Twins
Games
Generative AI
Geometry
Graphics Systems Architecture
Hardware
Image Processing
Industry Insight
Lighting
Performance
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Simulation
Virtual Reality
Full Conference
Experience
Exhibits Only
Tuesday
DescriptionExplore how Sony Pictures Imageworks, creators of visual effects for over fifty feature films, including A Minecraft Movie and Spider-Man: Across the Spider-Verse, have transformed the SPEAR renderer with GPU acceleration. Discover innovative techniques to address execution coherency and memory organization that enable faster, more interactive artist workflows, all while attempting to maintain feature parity with the CPU production renderer. Find out how the team overcame challenges with complex production shaders and lighting scenarios and achieved significant performance gains using NVIDIA OptiX. Join us for practical advice on how to implement GPU rendering for modern visual effects workflows.
Course
Gaming & Interactive
New Technologies
Research & Education
Livestreamed
Recorded
Education
Games
Geometry
Graphics Systems Architecture
Real-Time
Rendering
Full Conference
Virtual Access
Sunday
DescriptionThis course introduces graphics programmers of all levels to the latest GPU feature, Work Graphs, and how to use it in HLSL and Direct3D 12 in their own projects.
After this class, participants should be able to understand, explain, and apply Work Graphs in their own problem domain.
Industry Session
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Augmented Reality
Generative AI
Graphics Systems Architecture
Hardware
Industry Insight
Lighting
Math Foundations and Theory
Performance
Physical AI
Real-Time
Rendering
Spatial Computing
Full Conference
Experience
Exhibits Only
Sunday
Description"Graphics Computing and World Models" is a full-day Industry Workshop that is open to all SIGGRAPH attendees (space permitting). It will feature talks by renowned researchers and in-depth discussions on cutting-edge advancements in graphics and AI. This workshop is designed to foster open innovation and collaboration, bringing together industry and academic experts to explore the future of technology.
Key themes include:
• The Foundation of Next-Gen Graphics Computing
• Perception for Action: Visual Computing for Physical AI
• Shaping the Future: AI Rendering in Edge Devices
Detailed schedule and presenter information will be published soon; please stay tuned.
Please note: due to space limitation, pre-registration or registration at the door may be required.
Industry Session
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Augmented Reality
Generative AI
Graphics Systems Architecture
Hardware
Industry Insight
Lighting
Math Foundations and Theory
Performance
Physical AI
Real-Time
Rendering
Spatial Computing
Full Conference
Experience
Exhibits Only
Monday
DescriptionHosted by Huawei Canada (an Industry Sponsor of SIGGRAPH 2025), "Graphics Computing and World Models Coffee Chat and Industry Tour" is an Industry Workshop that will take place on August 11 at the Vancouver Convention Center as part of the conference program. It is a workshop that is open to all SIGGRAPH attendees and will feature in-depth discussions on cutting-edge advancements in graphics and AI. This workshop is designed to foster open innovation and collaboration, bringing together industry and academic experts to explore the future of technology.
Register Here
Course
New Technologies
Research & Education
Livestreamed
Recorded
Animation
Artificial Intelligence/Machine Learning
Capture/Scanning
Dynamics
Education
Fabrication
Generative AI
Geometry
Image Processing
Math Foundations and Theory
Modeling
Physical AI
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Full Conference
Virtual Access
Thursday
DescriptionComputer graphics have evolved from a tool for visualization into a driving force behind scientific discovery, shaping advancements in biology, physics, and beyond. This course explores how graphics techniques have revolutionized interdisciplinary research, inspiring new frontiers in science.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionGSHeadRelight enables fast, high-quality relightability for 3D Gaussian head synthesis. A linear light model based on learnable radiance transfer is integrated into the native 3DGS rasterization process and supports colored illumination. Without requiring expensive light stage data, our method achieves 240+ FPS rendering speed and offers state-of-the-art relighting results.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionA guided lens sampling technique that improves Monte Carlo rendering of depth-of-field by projecting a global 3D radiance field into lens space via bipolar-cone projection. This method efficiently targets high-contribution regions, significantly reducing noise and improving convergence for circle-of-confusion effects in production rendering.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWalk on stars (WoSt) has shown its power in being applied to Monte Carlo methods for solving PDEs but the sampling techniques in WoSt are not satisfactory, leading to high variance. Inspired by Monte Carlo rendering, we propose a guiding-based importance sampling method to reduce the variance of WoSt.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe solve an inverse hand-shadow problem: finding poses of left and right hands that together produce a shadow resembling the target 2D input, e.g., animals, letters, and everyday objects. Our three-stage pipeline decouples the anatomical constraints and semantic constraints, and our benchmark provides 210 diverse shadow shapes of varying complexity.
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionHandoid is a novel hand-shaped robotic avatar that switches its morphology between acting as a part of a humanoid robot, and an independent hand-shaped robot avatar physically separated from the humanoid body. Handoid enhances remote user embodiment and expands the operational workspace, opening new horizons for robotic interaction.
Labs
Arts & Design
Gaming & Interactive
Not Livestreamed
Not Recorded
Digital Twins
Games
Real-Time
Simulation
Full Conference
Experience
DescriptionThis class walks participants through how to create global-scale virtual worlds using real-world geospatial data that leverages Cesium and 3D Tiles on the web and in game engines.
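As a companion to the Cesium class description above, the sketch below shows the screen-space-error heuristic that 3D Tiles renderers such as CesiumJS use to decide when a tile should be refined into its children. The function name, parameter values, and the 16-pixel tolerance are illustrative assumptions, not material from the class itself.

```javascript
// Screen-space error (SSE) for a 3D Tiles tile: project the tile's
// geometric error (meters) into pixels given the camera distance,
// viewport height, and vertical field of view.
function screenSpaceError(geometricError, distance, screenHeightPx, fovyRadians) {
  return (geometricError * screenHeightPx) /
         (distance * 2 * Math.tan(fovyRadians / 2));
}

// A renderer refines a tile when its SSE exceeds the viewer's tolerance
// (commonly on the order of 16 pixels).
const sse = screenSpaceError(8, 1000, 1080, Math.PI / 3);
console.log(sse.toFixed(2), sse > 16 ? "refine" : "render as-is");
```

Moving the camera closer (smaller `distance`) raises the SSE, which is why tiles subdivide as you zoom in.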
Labs
Arts & Design
Production & Animation
Not Livestreamed
Not Recorded
Art
Generative AI
Full Conference
Experience
DescriptionThis hands-on class guides participants through generating their own AI visuals (non-real-time) for projection mapping. They will map these creations onto physical surfaces using real-time tools and optimize rendering with interactive shaders. The session emphasizes creativity, ethical AI use, and sustainable design for accessible immersive experiences.
Labs
Research & Education
Not Livestreamed
Not Recorded
Digital Twins
Dynamics
Interactive Classes – Bring Your Own Computer
Scientific Visualization
Simulation
Full Conference
Experience
DescriptionExplore how NVIDIA Earth-2 facilitates efficient weather and climate modeling. Learn how to run a large and growing stack of global AI weather forecasting models and how downscaling models generate super-resolution outputs. Discover use cases and applications benefiting the most from this emerging technology.
Software Needed
None - laptops to be provided by NVIDIA
Labs
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Animation
Full Conference
Experience
DescriptionThis class focuses on how the artist-centered solutions developed at Blender Studio over more than a decade of filmmaking can help non-technical artists work together seamlessly.
Labs
Arts & Design
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Digital Twins
Performance
Rendering
Scientific Visualization
Spatial Computing
Full Conference
Experience
DescriptionThis Labs hands-on session presents the digital twin workflow behind Hammock Tower, an architectural design for Paris in 2100, which leverages NVIDIA Omniverse, SimScale, Cesium, and Autodesk Forma to inform climate-resilient strategies for a projected +4°C future.
Labs
New Technologies
Not Livestreamed
Not Recorded
Augmented Reality
Digital Twins
Real-Time
Rendering
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionCreate an application for the Apple Vision Pro to configure a photoreal 3D asset. We'll develop an application and set it up to communicate with a product configurator built with the Omniverse Kit SDK and OpenUSD, and implement custom Swift UI to interact with the virtual product in real time.
Labs
New Technologies
Not Livestreamed
Not Recorded
Augmented Reality
Digital Twins
Physical AI
Real-Time
Rendering
Robotics
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionLearn how to create captivating spatial experiences with the Apple Vision Pro, leveraging Swift UI and Xcode for front-end development while utilizing NVIDIA Omniverse as a powerful backend server.
Labs
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Digital Twins
Geometry
Interactive Classes – Bring Your Own Computer
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Simulation
Full Conference
Experience
DescriptionDive into the cutting-edge world of digital twin technology for robotics. Learn to create virtual environments with OpenUSD, simulate robots using NVIDIA Isaac Sim, and control them via ROS. This hands-on lab equips you with essential skills for software-in-the-loop testing in industrial robotics applications.
Software Needed
None - laptops to be provided by NVIDIA
Labs
Arts & Design
Research & Education
Not Livestreamed
Not Recorded
Art
Education
Full Conference
Experience
DescriptionIn this lab, we introduce participants to graphics in p5.js by creating an interactive postcard with a mix of 2D and 3D elements. This walkthrough includes an introduction to code-based animation, parameterized visuals, mouse and touch interactivity, and screen reader support in p5.js.
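To illustrate the "parameterized visuals" idea from the p5.js lab above, here is a minimal sketch of the pattern: a pure function maps time and a few tunable knobs to a point, which a p5.js `draw()` loop would plot each frame. The function name and parameters are illustrative, not taken from the lab materials.

```javascript
// Parameterized visual: time t plus a few knobs in, a point out.
// A p5.js draw() loop would call this each frame and render the result.
function lissajousPoint(t, { ampX = 100, ampY = 100, freqX = 3, freqY = 2 } = {}) {
  return {
    x: ampX * Math.sin(freqX * t),
    y: ampY * Math.sin(freqY * t),
  };
}

// At t = 0 the curve passes through the origin for any parameter choice.
console.log(lissajousPoint(0));
```

Exposing the knobs as an options object makes the visual easy to retune interactively, e.g. from mouse position or a slider.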
Labs
Gaming & Interactive
Research & Education
Not Livestreamed
Not Recorded
Education
Games
Lighting
Performance
Real-Time
Rendering
Full Conference
Experience
DescriptionThis Lab will demonstrate a practical, end-to-end workflow that merges rasterization and ray tracing using Vulkan’s latest features. Participants see how helper libraries, modular shaders (via Slang), and recent dynamic rendering extensions create a gentler ramp onto advanced graphics techniques while still exposing the low-level API handles seasoned developers expect.
Labs
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Art
Artificial Intelligence/Machine Learning
Education
Generative AI
Industry Insight
Modeling
Performance
Pipeline Tools and Work
Real-Time
Full Conference
Experience
DescriptionDiscover how to integrate state-of-the-art open-source generative AI into your 3D pipeline to flow from idea to 3D asset. In this 90-minute session, you'll build a ComfyUI workflow that transforms concept art into image arrays in any style, culminating in delivery to AI3D endpoints to generate 3D models.
Labs
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Education
Pipeline Tools and Work
Full Conference
Experience
DescriptionIn this session, we will focus on extending Blender’s functionality through its powerful Python API. Starting from fundamental concepts such as operators, we will gradually increase the complexity of our solution to address a real-world use case.
Labs
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Computer Vision
Education
Generative AI
Interactive Classes – Bring Your Own Computer
Full Conference
Experience
DescriptionIntroduction to Generative AI models such as Transformers, Diffusion models, and NeRFs, and their application to Computer Graphics.
Please note that computers will not be provided, so be sure to bring your own fully charged laptop to fully participate and enjoy the session.
Software Needed
None (all web-based)
Labs
Gaming & Interactive
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Games
Graphics Systems Architecture
Image Processing
Interactive Classes – Bring Your Own Computer
Performance
Pipeline Tools and Work
Real-Time
Rendering
Scientific Visualization
Simulation
Full Conference
Experience
Description: This hands-on lab introduces Slang, an open-source, open governance shading language hosted by Khronos that simplifies graphics development across platforms. Designed to tackle the growing complexity of shader code, Slang offers modern programming constructs while maintaining top performance on current GPUs.
Software Needed
None - laptops to be provided by NVIDIA
Labs
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Digital Twins
Geometry
Interactive Classes – Bring Your Own Computer
Modeling
Performance
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Simulation
Full Conference
Experience
Description: In this course, we'll discuss OpenUSD fundamentals in the domain of robotics, including the benefits of a robot asset structure in OpenUSD and the asset-structure best practices used by the URDF Importer in Isaac Sim, and review optimizations that can be performed on a robot asset.
Software Needed
None - laptops to be provided by NVIDIA
Labs
Arts & Design
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Animation
Art
Geometry
Interactive Classes – Bring Your Own Computer
Modeling
Simulation
Full Conference
Experience
Description: Geometry Nodes are Blender’s powerful and ever-improving visual framework for creating procedural content. This session focuses on the creation of dynamic environment elements, such as space/air traffic in a sci-fi setting. The emphasis is on building a system that is flexible, yet easy enough for a non-technical artist to tweak.
Please note that computers will not be provided, so be sure to bring your own fully charged laptop to fully participate and enjoy the session.
Software Needed
Blender
Labs
Gaming & Interactive
Not Livestreamed
Not Recorded
Interactive Classes – Bring Your Own Computer
Performance
Real-Time
Rendering
Virtual Reality
Full Conference
Experience
Description: Learn to render Gaussian splats in real time on mobile and standalone VR using UnityGaussianSplatting. This intermediate workshop covers its fundamentals, followed by optimizations for mobile GPUs.
Please note that computers will not be provided, so be sure to bring your own fully charged laptop to fully participate and enjoy the session.
Software Needed:
Unity 6
RenderDoc
Visual Studio Code
Sourcetree
Labs
Arts & Design
Research & Education
Not Livestreamed
Not Recorded
Education
Full Conference
Experience
Description: Paper Animatronics is a project-based learning activity where students create characters and stories and bring them to life through papercraft with sound and motion! Like making posters or dioramas, paper animatronics can be used to reinforce learning in almost any subject.
Labs
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Animation
Interactive Classes – Bring Your Own Computer
Pipeline Tools and Work
Rendering
Full Conference
Experience
Description: Flamenco is the open-source render-farm software developed by Blender Studio. It can be used for distributed rendering across multiple machines, but also as a single-machine queue runner. This hands-on class will briefly teach how to install and use it, and then focus on customizing it to your needs.
Please note that computers will not be provided, so be sure to bring your own fully charged laptop to fully participate and enjoy the session.
Software Needed:
Blender
Flamenco
Visual Studio Code
Visual Studio Code extension "Go"
Labs
Arts & Design
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Computer Vision
Education
Graphics Systems Architecture
Image Processing
Math Foundations and Theory
Performance
Scientific Visualization
Simulation
Full Conference
Experience
Description: In this lab, we'll build a pair of quantum circuits that teleport the state of a quantum bit from one place to another. We'll start by running a few small quantum programs to get comfortable with the process and see the probabilistic nature of quantum measurement.
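As a taste of what the lab's circuits do, here is a minimal statevector sketch of quantum teleportation in plain NumPy (no quantum SDK required; the gate construction and qubit layout are our own illustrative choices, not the lab's materials):

```python
import numpy as np

rng = np.random.default_rng(7)

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def on_qubit(gate, q):
    """Lift a 1-qubit gate to the 3-qubit space (qubit 0 = most significant)."""
    ops = [gate if i == q else I2 for i in range(3)]
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

def cnot(control, target):
    """8x8 CNOT built as a permutation of computational basis states."""
    m = np.zeros((8, 8))
    for i in range(8):
        bits = [(i >> (2 - k)) & 1 for k in range(3)]
        if bits[control]:
            bits[target] ^= 1
        m[bits[0] * 4 + bits[1] * 2 + bits[2], i] = 1
    return m

# State to teleport lives on qubit 0; qubits 1 and 2 start in |0>.
alpha, beta = 0.6, 0.8
psi = np.kron(np.array([alpha, beta]), np.array([1.0, 0, 0, 0]))

# Entangle qubits 1 and 2 into a Bell pair, then run the teleport circuit.
for g in (on_qubit(H, 1), cnot(1, 2), cnot(0, 1), on_qubit(H, 0)):
    psi = g @ psi

# Measure qubits 0 and 1 probabilistically (each outcome has probability 1/4).
probs = [np.sum(np.abs(psi[o * 2 : o * 2 + 2]) ** 2) for o in range(4)]
outcome = rng.choice(4, p=probs)
m0, m1 = outcome >> 1, outcome & 1
collapsed = psi[outcome * 2 : outcome * 2 + 2] / np.sqrt(probs[outcome])

# Classically controlled corrections recover the original state on qubit 2.
if m1:
    collapsed = X @ collapsed
if m0:
    collapsed = Z @ collapsed
print(np.round(collapsed, 6))  # -> [0.6 0.8]
```

Whatever the measurement outcome, the corrections leave qubit 2 in exactly the state that qubit 0 started in, which is the whole point of the protocol.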
Labs
Arts & Design
Gaming & Interactive
Not Livestreamed
Not Recorded
Art
Interactive Classes – Bring Your Own Computer
Real-Time
Rendering
Full Conference
Experience
Description: In this course, we create real-time interactive graphics on embedded systems. We also design dynamic visuals with GPU shaders, mapping input data using gestural libraries for control. Finally, we explore embedded system limitations and direct-to-GPU rendering to bridge creative expression and technical implementation for artists, developers, and researchers.
Please note that computers will not be provided, so be sure to bring your own fully charged laptop to fully participate and enjoy the session.
Software Needed
Ossia Score
Labs
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Art
Audio
Display
Education
Games
Haptics
Hardware
Interactive Classes – Bring Your Own Computer
Performance
Physical AI
Robotics
Simulation
Full Conference
Experience
Description: The Scrapyard Challenge is an interactive workshop where participants create unique arcade and console gaming controllers from found materials for classic games like Street Fighter, Pac-Man, Donkey Kong, Mario Kart, and more.
There will be a stash of objects to use but we are encouraging attendees to bring an “object” with them to hack! Examples of good objects are: cast-off plastic toys (squirters, cars/trucks, gear toys, dollhouses), stuffed animals, old / outdated electronics with moving parts, turntable, old furniture, etc.
Software Needed:
None
Labs
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Digital Twins
Education
Haptics
Spatial Computing
Virtual Reality
Full Conference
Experience
Description: Join us in learning how to create 3D and 2D interfaces and graphics with RealityKit, CoreML, and SwiftUI for visionOS applications. Together, we will cover the core design principles of 2D/3D UI and dive into depth perception, spatial awareness, and natural gestures.
Labs
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Art
Interactive Classes – Bring Your Own Computer
Real-Time
Rendering
Full Conference
Experience
Description: This talk covers several methods of stylizing real-time projects built in Unreal Engine for non-photorealistic rendering.
Please note that computers will not be provided, so be sure to bring your own fully charged laptop to fully participate and enjoy the session.
Software Needed
1. Unreal Engine 5.6 (Windows Launcher version)
2. Stylized Comic Shader Pack
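The session's shader pack is its own material, but one staple non-photorealistic technique — quantizing the diffuse term into discrete bands, as a cel/toon ramp does in a material graph — can be sketched outside Unreal in a few lines of NumPy (the `toon_ramp` helper and its parameters are illustrative, not part of the lab):

```python
import numpy as np

def toon_ramp(n_dot_l, bands=4):
    """Quantize a Lambertian N·L term into discrete bands, like a cel-shading ramp."""
    d = np.clip(n_dot_l, 0.0, 1.0)
    return np.floor(d * bands) / bands

# Three surface normals lit from +Z.
light = np.array([0.0, 0.0, 1.0])
normals = np.array([[0.0, 0.0, 1.0],   # facing the light
                    [0.6, 0.0, 0.8],   # at a grazing angle
                    [1.0, 0.0, 0.0]])  # perpendicular to the light
shade = toon_ramp(normals @ light, bands=4)
print(shade)
```

The same flooring trick translates directly into a material node or a one-line shader expression; smooth shading collapses into flat bands, which is the visual signature of comic-style rendering.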
Labs
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Animation
Artificial Intelligence/Machine Learning
Dynamics
Geometry
Interactive Classes – Bring Your Own Computer
Performance
Physical AI
Pipeline Tools and Work
Rendering
Simulation
Full Conference
Experience
Description: This course offers an introduction to the NVIDIA Kaolin library and an in-depth exploration of its physics module. Attendees will learn how to interactively render 3D Gaussian splats, and extend them to physics simulation with collisions using NVIDIA Warp-accelerated features.
Bring your laptop and work hands-on on a cloud GPU, reserved for each attendee by the NVIDIA Deep Learning Institute.
Software Needed
None - laptops to be provided by NVIDIA
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Performance
Real-Time
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Description: Particle Forest uses machine vision to preserve a digital trace of ancient trees of historical, cultural, and ecological significance in order to build awareness of these charismatic megaflora's grandeur and plight.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
Description: We present HexHex, which extracts a hexahedral mesh from a locally injective integer-grid map. Key contributions include a conservative rasterization technique and a novel mesh data structure called propeller. Our algorithm is significantly faster and uses less memory than the previous state-of-the-art method, especially for large hex-to-tet ratios.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
Description: We present SplatDiff, a pixel-splatting-guided diffusion model for single-image novel view synthesis (NVS). Leveraging pixel splatting and video diffusion, SplatDiff generates high-quality novel views with consistent geometry and high-fidelity details. SplatDiff achieves state-of-the-art results in single-view NVS and demonstrates remarkable zero-shot performance on sparse-view NVS and stereo video conversion.
Industry Session
Production & Animation
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Digital Twins
Games
Generative AI
Geometry
Graphics Systems Architecture
Hardware
Image Processing
Industry Insight
Lighting
Performance
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Simulation
Virtual Reality
Full Conference
Experience
Exhibits Only
Tuesday
Description: Building and validating intelligent agents and digital twins for complex real-world operations demands physically accurate, real-time simulation. NVIDIA Omniverse RTX provides this crucial fidelity, fundamentally transforming how Physical AI is developed. This session explores the capabilities of Omniverse RTX, powered by NVIDIA RTX Pro hardware, enabling critical workflows: generating diverse synthetic sensor data for AI training and performing real-time multi-sensor closed-loop simulation (cameras, LiDAR, radar). Discover how these accessible rendering and sensor simulation capabilities empower you to develop cutting-edge intelligent systems.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
Description: This paper presents a CPU-based cloth simulation algorithm that partitions garment models into domains that can be processed by individual CPU cores. Using projective dynamics with domain-level parallelization, this method achieves high performance comparable to GPU methods and runs about an order of magnitude faster than existing CPU approaches.
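The paper's projective-dynamics solver is more involved, but the core idea — partition the garment so each CPU core relaxes its own domain's constraints on independent data, then reconcile the interface between domains — can be sketched with a toy spring chain. The position-based relaxation below is a stand-in for the paper's local/global solve, and all names are illustrative:

```python
import numpy as np

def project_springs(x, springs, rest):
    """PBD-style local step: move each spring's endpoints toward its rest length."""
    for (i, j), r in zip(springs, rest):
        d = x[j] - x[i]
        ln = np.linalg.norm(d)
        corr = 0.5 * (ln - r) * d / ln
        x[i] += corr
        x[j] -= corr

# A 1D chain of 9 points, stretched to 1.5x its rest spacing.
n = 9
x = np.linspace(0.0, 12.0, n).reshape(-1, 1)
springs = [(i, i + 1) for i in range(n - 1)]

# Partition springs into two "core" domains plus the single interface spring.
mid = (n - 1) // 2
domains = [springs[:mid], springs[mid + 1:]]
interface = [springs[mid]]

for _ in range(200):
    # Local step: each domain touches disjoint vertices, so in a real solver
    # these loops could run on separate CPU cores without locking.
    for dom in domains:
        project_springs(x, dom, [1.0] * len(dom))
    # Interface step: reconcile the spring shared between the two domains.
    project_springs(x, interface, [1.0])

lengths = [np.linalg.norm(x[j] - x[i]) for i, j in springs]
print(max(abs(l - 1.0) for l in lengths))  # residual stretch, close to 0
```

The point of the partition is data independence: within a sweep, each domain reads and writes only its own vertices, so the expensive inner loop parallelizes across cores with only the thin interface needing coordination.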
Talk
Arts & Design
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Computer Vision
Industry Insight
Lighting
Modeling
Pipeline Tools and Work
Real-Time
Rendering
Full Conference
Virtual Access
Sunday
Description: Leveraging the GPU in rigging is challenging due to the requirement of proprietary deformers to achieve photorealistic creature work. We present our strategy to accomplish both high-performance and high-quality deformations, giving animators an interactive experience with high visual fidelity in a streamlined asset pipeline.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
Description: This paper introduces stratification into the resampled importance sampling (RIS) technique for real-time photorealistic rendering. It organizes sample candidates into local histograms and then employs quasi-Monte Carlo and antithetic patterns for efficient sampling. This low-overhead approach significantly reduces rendering noise, improving visual quality compared to existing methods.
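Setting the paper's stratification and quasi-Monte Carlo machinery aside, plain resampled importance sampling — the technique being stratified — works as follows: draw candidates from a cheap source distribution, weight them by target/source, pick one proportionally to its weight, and scale by the mean weight. A minimal sketch, with an arbitrary integrand `f` and deliberately imperfect target `g`:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):      # integrand we want to integrate over [0, 1]
    return x * x

def g(x):      # cheap unnormalized target distribution approximating f
    return x

def ris_estimate(m):
    """One RIS estimate of the integral of f using m candidates."""
    xs = rng.random(m)                   # candidates from uniform source, p(x) = 1
    w = g(xs) / 1.0                      # resampling weights: target / source
    y = rng.choice(xs, p=w / w.sum())    # resample one candidate proportional to weight
    return f(y) / g(y) * w.mean()        # unbiased: expectation equals the integral of f

est = np.mean([ris_estimate(8) for _ in range(20000)])
print(est)  # converges to 1/3, the integral of x^2 on [0, 1]
```

The closer `g` tracks `f`, the lower the variance; the paper's contribution is to make the candidate set better distributed (stratified, QMC, antithetic) rather than independently random.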
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
Description: We present HOIGaze – a novel approach for gaze estimation during hand-object interactions in extended reality. HOIGaze features: 1) a novel hierarchical framework that first recognises attended hand and then estimates gaze; 2) a new gaze estimator combining CNN, GCN, and cross-modal Transformers; and 3) a novel eye-head coordination loss.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
Description: We introduce a novel representation for learning and generating Computer-Aided Design (CAD) models in the form of boundary representations (BReps). Our representation unifies the continuous geometric properties of BRep primitives in different orders (e.g., surfaces and curves) and their discrete topological relations in a holistic latent (HoLa) space.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
Description: HoloChrome introduces a novel holographic display method that multiplexes multiple wavelengths and two spatial light modulators to enhance image quality. By moving beyond standard three-color primary systems, it significantly reduces speckle noise while preserving natural depth cues and achieving more accurate color reproduction.
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Education
Ethics and Society
Games
Generative AI
Haptics
Modeling
Performance
Real-Time
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Description: A mixed reality music video that allows you to enjoy music and games at the same time using the pass-through function of Quest 3. VR idol Mikasa will visit your room, sing and dance in front of you, and help you defeat monsters that suddenly appear.
Industry Session
New Technologies
Production & Animation
Animation
Artificial Intelligence/Machine Learning
Dynamics
Simulation
Full Conference
Experience
Exhibits Only
Tuesday
Description: Join us at SIGGRAPH for an in-depth look at the latest advancements in Houdini’s Character FX workflows, featuring Otis—a powerful new GPU-accelerated dynamics solver purpose-built for soft tissue simulation. Otis combines the speed of Vellum with the material accuracy of FEM, delivering unprecedented realism for muscle, fascia, fat, and tissue interactions. Robust volumetric collisions and constraint systems now support fully integrated muscle and tissue simulation, dramatically reducing setup time while improving physical fidelity and anatomical plausibility. We’ll also highlight machine learning–driven deformation using updated models tailored for anatomical characters—enabling faster, high-quality results with minimal manual input. Procedural muscle deintersections, volume adjustments, and dynamic muscle flexing are easier than ever to configure. Quasi-static skin sliding eliminates the need for a separate simulation pass for skin deformation. Together, these improvements make production-ready results not just achievable, but intuitive, efficient, and artist-friendly.
Industry Session
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Generative AI
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Exhibits Only
Wednesday
Description: Experience the next evolution in 3D simulation with Newton, an open-source, GPU-accelerated physics engine developed by NVIDIA in collaboration with Disney Research and DeepMind. With seamless OpenUSD integration and NVIDIA Warp technology, Newton enables real-time simulation of complex interactions—including rigid and soft bodies, contact, friction, and actuators—empowering creators to push the boundaries of modern 3D pipelines.
Stage Session
Gaming & Interactive
New Technologies
Artificial Intelligence/Machine Learning
Games
Graphics Systems Architecture
Image Processing
Performance
Real-Time
Full Conference
Experience
Exhibits Only
Tuesday
Description: Machine learning can now synthesize nearly any image you can imagine. We can't do that in real time (yet!), but today, we can use neural technology in real-time rendering to reconstruct, denoise, and upscale the majority of pixels. Neural technology is already changing the way the computer graphics industry renders images, enabling a huge leap forward in rendering efficiency. The challenge ahead is enabling these technologies on the strict power budgets associated with mobile devices.
In this talk we will explore:
How neural graphics techniques are advancing so rapidly that standards are still catching up.
Viable ways to support heterogeneous workloads, a mix of hardware accelerators, and fragmented APIs.
Ways to harness and deploy experimental techniques in your titles and go beyond using networks trained by others to create networks to suit your needs.
The emergence of hardware, APIs, and real-time neural networks to solve performance and fidelity bottlenecks.
Each year brings new innovations that shape how we approach graphics. We’ll share how you can prepare for future GPU advances that will transform graphics performance on next generation mobile devices.
Talk
Arts & Design
Gaming & Interactive
Production & Animation
Research & Education
Livestreamed
Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Dynamics
Games
Lighting
Math Foundations and Theory
Pipeline Tools and Work
Simulation
Full Conference
Virtual Access
Thursday
Description: Virtual crowd simulation is prevalent in graphics and VFX. We demonstrate that widely popular state-of-the-art algorithms fail in several basic benchmark cases. With the goal of designing more robust algorithms, we propose a workaround, which can be easily integrated into real-time applications that simulate crowds.
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Performance
Real-Time
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Description: How to Find the Soul of a Sailor is an immersive audio/visual artwork about the future of the oceans told from the perspective of the artist’s late father. A sailor of many years, he left a number of journals from his travels as an officer in the Merchant Navy. Collaborating with The New Real Observatory Platform AI tools, Molga used these journals as small data sets to create future stories in the voice of her Dad. This work is a new take on marine art, and also touches upon digital afterlife.
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Gaming & Interactive
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Animation
Art
Games
Industry Insight
Lighting
Full Conference
Experience
Description: The ACM SIGGRAPH Early Career Development Committee’s “Resume and Reel Review” program has long offered students and early career professionals the chance to have their work reviewed by industry experts at SIGGRAPH conferences.
This session will feature a panel of industry professionals who will discuss and review a selection from this year’s program live. This unique opportunity allows attendees to see real-time reviews, learn what makes a great resume and reel, and ask questions directly to experts.
Register for a one-on-one session at https://ecdc.siggraph.org.
Industry Session
Arts & Design
New Technologies
Production & Animation
Artificial Intelligence/Machine Learning
Dynamics
Simulation
Full Conference
Experience
Exhibits Only
Tuesday
Description: An introduction to incorporating machine-learning enhancements into your Houdini pyro workflows. Make your simulations look 3x better by using Houdini 21's procedural machine-learning tools.
Talk
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Digital Twins
Dynamics
Geometry
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Simulation
Full Conference
Virtual Access
Thursday
Description: We present a modular, asset-centric CFX workflow for costumes and hair/fur. It focuses on individual asset-level simulation setups, constructing scenes procedurally by merging assets and solving them together. Shot-specific modifications and overrides are applied hierarchically, allowing automatic updates with upstream changes, reducing manual rework and streamlining iterative processes.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
Description: Existing avatar methods typically require sophisticated dense-view capture and/or time-consuming per-subject optimization processes. HumanRAM proposes a feed-forward approach for generalizable human reconstruction and animation from monocular or sparse human images. Experiments show that HumanRAM achieves state-of-the-art results in terms of reconstruction accuracy, animation fidelity, and generalization performance.
Art Paper
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Digital Twins
Games
Generative AI
Modeling
Performance
Real-Time
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Experience
Tuesday
Description: Hyborg Agency proposes an artistic perspective on AI agents: We can design AI agents that maintain their distinct non-human nature while meaningfully participating in human social contexts.
Presenting AI agents as mechanical deer nurtured by community conversations, this computational ecosystem demonstrates how defamiliarized AI agents can enrich human social experiences while promoting transparency about their artificial nature, contributing to more sustainable and ethical human-AI symbiotic relationship.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Artificial Intelligence/Machine Learning
Digital Twins
Ethics and Society
Pipeline Tools and Work
Full Conference
Experience
Description: This Birds of a Feather brings together communities interested in sandboxes for hybrid performance, from the performing arts to wellness. It follows the Frontiers workshop "Hybrid Dance Xplorations: Artist-Centric XR/AI Sandbox for Co-Creation and Performance," which featured immersive real-time 3D technologies with a virtual camera system and the latest generative AI work in our XR/AI sandbox for hybrid dance, movement, and performance. Our aim is to explore ideas, experiences, collaboration, and work-in-play, especially with those who run other sandboxes.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
Description: We propose Hybrid Tours, a hybrid approach to creating long-take shots by combining short video clips in a virtual interface. We show that Hybrid Tours makes capturing long-take touring shots much easier, and that clip-based authoring and reconstruction lead to higher-fidelity results at lower compute costs.
Spatial Storytelling
Not Livestreamed
Not Recorded
Full Conference
Experience
Description: Located at water level in the Experience Hall, this work fuses technology, nature, and humanity through computer graphics and a sound and lighting installation, presenting water as both a living medium and a poetic metaphor. Bridging the physical and digital, it echoes SIGGRAPH’s innovative spirit, inviting viewers to reflect on water’s evolving essence in art, science, and human experience.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
Description: Ever feel like three dimensions isn't quite enough? We performed the analysis necessary to simulate the motion of deformables in four spatial dimensions! Along the way, we developed techniques for generating simulation-ready hyper-meshes, analyzing hyper-dimensional deformation energies, and detecting and responding to collision scenarios for softbodies in any dimension.
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Description: We developed, to our knowledge, the first virtual reality head-mounted display (HMD) that combines the visual benefits of above-retinal resolution with high brightness (~1000 nits) and high contrast. When showcasing hyperrealistic rendered scenes, it sets a new milestone for how realistic VR experiences can be.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
Description: After the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
Description: After the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
Description: We introduce Image-GS, a content-adaptive image representation based on colored 2D Gaussians. Image-GS achieves remarkable rate-distortion performance across diverse images and textures while supporting hardware-friendly fast random access and flexible quality control through a smooth level-of-detail hierarchy. We demonstrate its versatility with two applications: semantics-aware compression and image restoration.
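Leaving aside Image-GS's anisotropic Gaussians and its optimization loop, the underlying representation — an image expressed as a sum of colored 2D Gaussians — can be sketched with a simple forward accumulation (image size, centers, and colors below are arbitrary illustrative values):

```python
import numpy as np

def render_gaussians(h, w, centers, sigmas, colors, weights):
    """Accumulate colored isotropic 2D Gaussians into an h x w RGB image."""
    ys, xs = np.mgrid[0:h, 0:w]
    img = np.zeros((h, w, 3))
    for (cy, cx), s, col, a in zip(centers, sigmas, colors, weights):
        g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * s * s))
        img += a * g[..., None] * np.asarray(col, float)
    return np.clip(img, 0.0, 1.0)

img = render_gaussians(
    32, 32,
    centers=[(8, 8), (20, 24)],
    sigmas=[4.0, 6.0],
    colors=[(1, 0, 0), (0, 0, 1)],  # a red and a blue blob
    weights=[1.0, 1.0],
)
print(img.shape, img[8, 8])  # peak of the red Gaussian
```

Because each Gaussian has an analytic footprint, any pixel can be evaluated independently from the parameter list — the "fast random access" property the paper exploits, with anisotropy and a level-of-detail hierarchy layered on top.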
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
Description: Our goal is to accelerate inverse rendering by reducing the sampling budget without sacrificing overall performance. We introduce a novel image-space adaptive sampling framework to accelerate inverse rendering by dynamically adjusting pixel sampling probabilities based on gradient variance and contribution to the loss function.
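The paper's exact policy isn't spelled out in this summary, but the general recipe — derive per-pixel sampling probabilities from gradient variance and loss contribution, with a floor so no pixel is starved of samples — might be sketched like this (the additive score, the floor mixing, and all names are our own illustrative choices, not the authors' formulation):

```python
import numpy as np

rng = np.random.default_rng(1)

def sampling_probs(grad_history, loss_contrib, floor=0.05):
    """Per-pixel sample probabilities from gradient variance and loss contribution."""
    var = grad_history.var(axis=0)         # variance of the gradient across iterations
    score = var + loss_contrib             # simple additive importance heuristic
    p = score / score.sum()
    p = (1 - floor) * p + floor / p.size   # floor keeps every pixel alive
    return p / p.sum()

# Hypothetical data: 16 pixels, 10 recorded gradient iterations.
grads = rng.normal(0.0, np.linspace(0.1, 2.0, 16), size=(10, 16))
contrib = np.abs(rng.normal(0.0, 0.5, 16))
p = sampling_probs(grads, contrib)

budget = 256
counts = rng.multinomial(budget, p)  # distribute the fixed sample budget
print(p.sum(), counts.sum())
```

Pixels whose gradients are noisy or which dominate the loss receive proportionally more of the fixed budget, which is the mechanism by which an adaptive scheme cuts total samples without hurting convergence.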
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
Description: This work introduces an efficient image-space collage technique that optimizes geometric layouts using a differential renderer and hierarchical resolution strategy. Our approach simplifies complex shape handling in image-space optimization, offering fixed computational complexity. Experiments show our method is an order of magnitude faster than the state of the art while supporting diverse visual expressions.
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionWe propose the concept of “Imaginary Joints and Muscles” to provide intuitive proprioceptive feedback for extended body parts without reliance on vision. Our system uses skin stretch to simulate torque at virtual joint interfaces, evoking muscle-like sensations that accurately represent posture and motion, thereby enhancing the user’s body awareness.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe propose IMLS-Splatting, an end-to-end multi-view mesh optimization method that leverages point clouds for surface representation. By introducing a splatting-based differentiable IMLS algorithm, our approach efficiently converts point clouds into an SDF and a texture field, enabling multi-view mesh optimization in approximately 11 minutes and achieving state-of-the-art reconstruction performance.
Birds of a Feather
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Digital Twins
Scientific Visualization
Spatial Computing
Full Conference
Experience
DescriptionA tradition since 2014, this session covers immersive visualization systems, software, and tools for science, research, scientific visualization, information visualization, art, design, and digital twins. Invited speakers and panelists discuss the newest initiatives and developments in the immersive space as applied to data exploration, scientific discoveries, and more.
Educator's Forum
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Education
Fabrication
Generative AI
Image Processing
Lighting
Pipeline Tools and Work
Real-Time
Rendering
Virtual Reality
Full Conference
Virtual Access
Experience
Wednesday
DescriptionOver the course of four productions, we demonstrate an incremental USD adoption timeline suitable for small studio and educational contexts, resulting in a workflow that uses USD end to end. We present this process as a case study for how any small studio can implement USD into their pipeline.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe propose a physics-driven approach to IMU-based motion capture, improving global motion estimation with 3D contact modeling and gravity awareness. Our method estimates world-aligned 3D motion, contact points, contact forces, joint torques, and proxy surface interactions using only six IMUs in real time.
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionWhen little Nora's parents split up, the Earth splits in two. She now has to juggle between both hemispheres to visit them. Problem is, the backpack she's carrying is getting heavier and heavier...
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe introduce a novel construction algorithm for 3D Apollonius diagrams designed for GPUs. Our method features fast execution while allowing comprehensive computation. This is made possible by a light data structure, a cell-update procedure, and a spatial exploration strategy, all designed to support the diagram's properties.
Art Paper
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Games
Generative AI
Performance
Physical AI
Real-Time
Robotics
Simulation
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Experience
Monday
DescriptionIn 2025, rumored to be the "year of AI agents," the artwork in(A)n(I)mate envisions a future where AI systems act behind the scenes of objects, providing them agency and performativity, animating them, and bringing them closer to human awareness. By inviting conversations with everyday objects, in(A)n(I)mate challenges us to reconsider agency, interaction, and the boundaries of intelligence.
Real-Time Live!
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Games
Generative AI
Geometry
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Tuesday
DescriptionWe present InfiniteStudio, the first 4D volumetric capture system that meets the visual fidelity requirements for professional-grade video production. Building upon innovations in 4D Gaussian Splatting, InfiniteStudio reduces production time while unlocking unprecedented creative freedom during post-production. It paves the way for the next-generation interactive media and immersive spatial storytelling.
Course
New Technologies
Livestreamed
Recorded
Artificial Intelligence/Machine Learning
Generative AI
Pipeline Tools and Work
Full Conference
Virtual Access
Thursday
DescriptionThis three-hour, hands-on workshop introduces artists, designers, and educators to ComfyUI, a powerful node-based interface for generative AI. Participants will install ComfyUI, then learn workflows for inpainting, outpainting, IP Adapters, ControlNet variants (depth, canny, pose), image-to-3D, and image-to-video, gaining practical skills and creative inspiration with open source models and tools.
Labs
Gaming & Interactive
New Technologies
Artificial Intelligence/Machine Learning
Augmented Reality
Generative AI
Graphics Systems Architecture
Lighting
Real-Time
Full Conference
Experience
DescriptionMoment Factory is a multidisciplinary entertainment studio that sparks human connections in the most innovative ways through Custom Experiences and Original ticketed attractions. Its team combines specializations in video, lighting, architecture, sound, interactivity, and special effects to create remarkable experiences. Headquartered in Montreal, the studio has offices in Tokyo, Paris, New York City, and Singapore. Since its inception, Moment Factory has created more than 550 projects worldwide, including Lumina Night Walks, The Messi Experience, and several projects at Sphere in Las Vegas. Clients include Universal Studios, Billie Eilish, FIFA, Disney, Olivia Rodrigo, Microsoft, Sony, and Changi Airport Group.
Labs
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Digital Twins
Education
Games
Generative AI
Modeling
Pipeline Tools and Work
Real-Time
Scientific Visualization
Simulation
Full Conference
Experience
DescriptionCombining AI text-to-3D generation and peer-to-peer file sharing, MeshTorrent introduces a scalable platform for decentralized creation and exchange of 3D assets. This advancement empowers collaborative generation, rapid previews, and seamless sharing of .glb models, including extensions for 2D sprites and rigged characters, revolutionizing digital content creation for modern AI art.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionINKi enables instance segmentation for scene sketches by adapting image segmentation models with class-agnostic tuning and depth-based refinement. We introduce a new dataset INK-scene with diverse styles and demonstrate layered sketch organization for advanced editing, including inpainting occluded instances—paving the way for robust, editable sketch understanding.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe propose InstanceGen, a new technique for improving Text-to-Image models' ability to generate images for prompts describing multiple objects, attributes, and spatial relationships. InstanceGen requires no training or additional user inputs and achieves state-of-the-art results in terms of both accuracy and visual quality on these highly challenging prompts.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present a novel framework that instantly (< 1 sec) repairs self-intersections in static surface meshes, which commonly occur during the 3D modeling process.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionInstantRestore is a fast, personalized face restoration framework that uses a single-step diffusion model with an extended self-attention mechanism to match low-quality image patches to high-quality reference patches. Leveraging implicit correspondences in the denoising network, we efficiently transfer identity details in one pass, enabling real-time, identity-preserving restoration without per-identity tuning.
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionA three-dimensional dynamic deformable display using novel liquid metal wiring is demonstrated. An integrated strain sensor on the stretchable display calculates its deformation and enables direct interaction between the users and the display device. This unprecedented display type provides a completely novel interactive experience with dynamic deformation.
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionInteractive Impossible Objects transforms illusions into physical VR experiences. By rendering each eye's viewpoint separately and applying redirected walking and hand redirection, users can walk on endless staircases or touch paradoxical forms while preserving the illusion. This approach opens new frontiers for immersive art and perceptual research, bridging illusion and reality.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe introduce a method for interactive design of procedural patterns, allowing users to sketch content incrementally in a level-by-level fashion. Each level, or scaffold, builds on the previous one, making optimization more responsive and controllable. A comprehensive validation demonstrates improved editing experience compared to conventional techniques.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe introduce an automatic tool to retarget artist-designed garments from a standard mannequin to possibly non-human avatars with unrealistic characteristics, which widely appear in games and animations. We preserve the geometric features of the original design, guarantee intersection-free results, and fit the garment adaptively to the avatars.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe introduce a novel interspatial attention (ISA) for diffusion transformers, which maintains identity and ensures motion consistency while allowing precise control of camera and body poses. Combined with a custom video variational autoencoder, our model achieves state-of-the-art performance for photorealistic 4D human video generation.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionA generative workflow for precise image editing using an intrinsic-image latent space. Built on RGB-X diffusion, it enables diverse edits—like relighting, color changes, and object manipulation—while preserving identity and ameliorating intrinsic-channel entanglement. All this is done without extra data or fine-tuning, achieving state-of-the-art results.
Industry Session
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Generative AI
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Exhibits Only
Wednesday
DescriptionSee how NVIDIA and our partners are advancing OpenUSD as the data ecosystem for industrial digital twins and the next era of AI, including physical AI for robotics, industrial digitalization, and autonomous vehicles. We’ll cover workflows for virtual facility layout and operation, sensor simulation, and standardizing assets.
Course
Arts & Design
Gaming & Interactive
Production & Animation
Research & Education
Livestreamed
Recorded
Animation
Audio
Display
Education
Games
Image Processing
Lighting
Math Foundations and Theory
Modeling
Real-Time
Rendering
Scientific Visualization
Full Conference
Virtual Access
Tuesday
DescriptionThe Fourier Transform is fundamental to computer graphics, explaining topics from aliasing and sampling to image compression and filtering. This friendly course explains the principles in words, pictures, and animation, rather than math. The concepts are the important thing. We show that they are comprehensible, useful, and beautiful.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionIn this paper, we present a computational approach for designing Discrete Interlocking Materials (DIM) with desired mechanical properties. We demonstrate the effectiveness of our method by designing discrete interlocking materials with diverse limit profiles for in- and out-of-plane deformation and validate our method on fabricated physical prototypes.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present a computational framework for optimizing shape sequences to achieve user-defined motion objectives in deformable bodies undergoing geometric locomotion. Through a reduced spatiotemporal parameterization of the shape sequences, our method is able to efficiently capture the complex coupling between shape changes and motion in different environments.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionIP-Composer is a novel, training-free method for compositional image generation from multiple reference images. Extending IP-Adapter, it uses natural language to identify concept-specific subspaces in CLIP, projects input images into these subspaces to extract targeted concepts, and fuses them into composite embeddings—enabling fine-grained, controllable generation across diverse visual concepts.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionThis paper presents T-Prompter, a method for visually prompting generative models to enable continuous image generation for specific themes, characters, and scenes. It introduces Dynamic Visual Prompting to enhance generation accuracy and quality, outperforming existing methods in maintaining character identity, style consistency, and text alignment.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionThis paper introduces a gradient combiner that blends unbiased and biased gradients in parameter space using the James-Stein estimator to infer scene parameters (BSDFs and volumes) from images. This approach enhances optimization accuracy compared to relying solely on either unbiased or biased gradients.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionThis paper presents a new GPU simulation algorithm that converges as fast as the global Newton's method and is as efficient as the Jacobi method.
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionWind appears in a park. People fly away.
Birds of a Feather
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionThe Washington Post Video team invites you to join in this Birds of a Feather meet-up for graphics artists in journalism and documentary film.
Creating graphics for documentary or news reporting comes with its own set of unique challenges and creative opportunities. Whether you’re visualizing investigations, crafting illustrations, rendering scientific concepts or conducting research, let’s get together and share our insights while making new connections.
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Education
Ethics and Society
Games
Generative AI
Haptics
Modeling
Performance
Real-Time
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionThis work is a system for experiencing the traditional Japanese painting of flowers and birds. Users can paint pictures on the sliding doors with four different types of brushes. For example, when a branch is added to a cherry tree, flowers bloom around the painted branch.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present a novel shadow method named kernel predicting neural shadow mapping. By modeling soft shadow values as pixelwise local filtering from basic hard shadow values, we trained a neural network to predict local filter weights, achieving accurate and temporally-stable soft shadows with good generalizability.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Computer Vision
Games
Haptics
Pipeline Tools and Work
Real-Time
Rendering
Scientific Visualization
Spatial Computing
Full Conference
Experience
DescriptionDevelopers worldwide rely on Khronos open standards to power high-performance, cross-platform 3D graphics, spatial computing and XR, vision processing, parallel computation, and machine learning acceleration. Khronos is a member-funded consortium that welcomes participation from companies, research institutions, and universities. Join us at SIGGRAPH to hear the latest updates from the working groups defining the open standards driving the next generation of applications and devices.
Birds of a Feather
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Education
Graphics Systems Architecture
Industry Insight
Pipeline Tools and Work
Full Conference
Experience
DescriptionThis BOF will share the results from the Khronos AI Ecosystem Research Project, highlighting current industry trends and gaps. Learn how Khronos standards provide scalable, portable AI acceleration and are enabling new use cases, including neural shaders and on-device mobile inferencing. Discover how Khronos is exploring how to shape open, performant acceleration at the base of the AI stack—and how you can get involved. This session is ideal for AI framework, compiler, and runtime developers, as well as silicon vendors seeking to support and leverage a vibrant, cross-platform AI software ecosystem.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Ethics and Society
Games
Generative AI
Graphics Systems Architecture
Haptics
Image Processing
Industry Insight
Math Foundations and Theory
Modeling
Performance
Physical AI
Pipeline Tools and Work
Real-Time
Robotics
Scientific Visualization
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionWrap up your day of Khronos Group BOFs with an engaging networking reception, sponsored by LunarG, Cesium, and NVIDIA. Join us for drinks, snacks, and great conversation with fellow developers, implementers, and standards contributors from across the 3D, XR, and graphics communities. Whether you're looking to connect with peers, continue the day's discussions, or just unwind with friendly faces, this informal gathering is the perfect place to do it. All are welcome! Come raise a glass and celebrate the power of open standards!
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Gaming & Interactive
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Animation
Art
Education
Games
Graphics Systems Architecture
Industry Insight
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Full Conference
Experience
DescriptionThe ACM SIGGRAPH Early Career Development Committee (ECDC)’s Resume and Reel Review program has long provided students and early-career professionals with valuable feedback from industry experts—both at SIGGRAPH conferences and online.
In this session, committee members will share essential best practices for launching your career in the computer graphics industry. Topics include crafting an impactful resume and demo reel, as well as tips on finding mentorship and professional development opportunities.
Be sure to also check out our Birds of a Feather session: How to Get Your Resume & Reel Noticed.
Register for a one-on-one session at https://ecdc.siggraph.org.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present a simple, but effective framework for kinematically retargeting contact-rich anthropomorphic hand-object manipulations by exploiting contact areas. We reliably retarget contact area data between diverse hands using a novel non-isometric shape matching process and generate high quality results using the retargeted contacts alongside a straightforward IK-based motion synthesis pipeline.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionLAM is an innovative Large Avatar Model for animatable Gaussian head reconstruction from a single image in seconds. Our Gaussian heads are immediately animatable and renderable without additional networks or post-processing. This allows seamless integration into existing rendering pipelines, ensuring real-time animation and rendering across various platforms, including mobile phones.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionThis work introduces a conditional generative framework for large-scale multi-character interaction synthesis. It facilitates natural interactive motions and transitions in which characters coordinate with new interactive partners, proposing a coordinatable multi-character interaction space for interaction synthesis and a transition planning network that plans transitions to achieve scalable, transferable multi-character animations.
Talk
Arts & Design
Gaming & Interactive
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Artificial Intelligence/Machine Learning
Augmented Reality
Display
Games
Geometry
Lighting
Pipeline Tools and Work
Real-Time
Full Conference
Virtual Access
Sunday
DescriptionWe present the Launcher, a software environment configuration tool that has contributed to the success of numerous productions over the course of two decades. We explore the core features that enable us to manage a large number of configurations across numerous departments and projects while balancing stability and flexibility.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe propose LayerFlow, a unified framework for layer-aware video generation, enabling seamless creation of transparent foregrounds, clean backgrounds, and blended scenes. Multi-stage training and LoRA techniques improve layer-wise video quality with limited data, and the framework also supports variants such as video layer decomposition, generating backgrounds for given foregrounds and vice versa.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionLayerPano3D is a novel framework that generates hyper-immersive 3D panoramic scenes from a single text prompt. By decomposing panoramas into multiple layers and optimizing them as 3D Gaussians, it enables full 360°×180° exploration with consistent visual quality, unlocking new possibilities for virtual reality and scene generation.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe present Leapfrog Flow Maps (LFM), a fast hybrid velocity-impulse scheme with leapfrog integration. The computations are further accelerated by a matrix-free AMGPCG solver optimized for GPUs. As a result, LFM achieves high performance and fidelity across diverse examples, including fireballs and wingtip vortices.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe introduce a reinforcement learning framework for assembling structures composed of rigid parts. A pre-trained policy generates alternative assembly plans, enabling rapid adaptation to unexpected disruptions. Our approach supports efficient and robust planning for multi-robot assembly tasks.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present an image-to-image drawing setup capturing eye tracking and stroke data across 156 drawings from 10 artists. Our findings reveal consistent fixation patterns, strong gaze–stroke correlations, and structured drawing sequences, offering new insights into professional observation strategies and observation-guided assistive drawing system design.
Art Paper
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Digital Twins
Games
Generative AI
Modeling
Performance
Real-Time
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Experience
Tuesday
Description"Learning to Move, Learning to Play, Learning to Animate" is an interdisciplinary multimedia performance, merging real-time AI visuals, plant biofeedback, and found object robotics to explore more-than-human intelligence. Challenging anthropocentrism, it envisions co-creative agency among humans, machines, and organic entities, inviting audiences to experience intelligence as relational, embodied, and shared across natural and synthetic forms.
Course
Research & Education
Livestreamed
Recorded
Animation
Dynamics
Education
Geometry
Modeling
Simulation
Full Conference
Virtual Access
Thursday
DescriptionLevel-of-detail (LoD) is a concept we experience in everyday life and an important topic in graphics. This course explores modern LoD techniques beyond rendering, focusing on hierarchical representations and multilevel solvers for geometry processing and simulation. We demonstrate applications showing how LoD enables efficient, accurate and scalable geometric computation.
Talk
Arts & Design
Gaming & Interactive
Production & Animation
Research & Education
Livestreamed
Recorded
Art
Digital Twins
Ethics and Society
Games
Hardware
Industry Insight
Pipeline Tools and Work
Real-Time
Full Conference
Virtual Access
Sunday
DescriptionThis talk explores how 3D Artists in Animation, VFX, and Gaming can transition into industries like fashion, product design, architecture, and more. Learn how to adapt your portfolio, leverage in-demand skills, and bridge knowledge gaps to unlock new career opportunities beyond entertainment in an evolving professional landscape.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe designed a neural field capable of capturing a diverse family of discontinuities, enabling the simulation of cuts in thin-walled deformable structures. By lifting input coordinates using generalized winding numbers, our approach models discontinuities explicitly and flexibly, supporting accurate, real-time simulations with dynamic cut updates and user-interactive cut shape design.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present and analyze a holographic augmented reality display with the bandwidth-preserved guiding method using a light pipe. We propose the use of light pipe to spatially relocate the light engine from the image combiner at the front-module, enabling enhanced weight distribution and obstruction-free view while preserving the wavefront bandwidth.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionLightLab is a diffusion-based method for parametric control over light sources in an image. Leveraging the linearity of light, we create a dataset of controlled illumination changes from a small set of real image pairs and synthetic renders, which is used to fine-tune a model to enable physically plausible edits.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe introduce an inverse-LU preconditioner to solve for the typical asymmetric and dense matrices generated by boundary element methods (BEM). The computational efficiency and low memory requirements of our approach conspire to scale up to millions of degrees of freedom, with orders of magnitude speedups in solving times.
Birds of a Feather
Research & Education
Full Conference
Experience
DescriptionUnderstanding of light, real-world cameras, and lenses is fundamental to computer graphics (CG), visual effects, and games production. This BoF starts with a short presentation exploring two units (courses) as case studies, followed by a participant discussion on how the art, craft, technical, and storytelling aspects of camera and lighting could be fused in academic courses and delivered in a way that not only engages students but also encourages them to experiment and explore these areas further on their own.
The discussion will focus on the sharing of best practices used by participants teaching real-world and virtual camera-based curriculum.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionA story about some people who are like nature.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe propose a simple, parallelizable algorithm inspired by rectified flows to match probability distributions. With linear-time complexity, it approximates optimal transport by employing summed-area tables and direct particle advection. We illustrate our applications in stippling, mesh parameterization, and shape interpolation in 2D, 3D.
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Performance
Real-Time
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Description"Liquid Views - Echoes of Self" (2025) is a video reimagining of the 1992 interactive installation that blends nature, art, and technology in a reflection on digital identity. It draws on the ancient act of looking into water — a symbol of self-awareness — and uses technology to distort the viewer's reflection, creating a dynamic visualization of identity. The video explores the boundaries between the self and the digital persona, raising questions about human interaction, technological influence, and self-perception in the digital age. "Echoes of Self" offers a commentary on human connection in a fragmented world.
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionWe propose a novel technology, LiveGS, a real-time free-viewpoint video (FVV) live-broadcast system capable of generating high-fidelity real-time volumetric human representations and rendering them efficiently on mobile devices with a high degree of freedom, while maintaining low transmission bandwidth.
Birds of a Feather
Research & Education
Artificial Intelligence/Machine Learning
Ethics and Society
Full Conference
Experience
DescriptionShould reviewers be allowed to use LLMs to prepare their reviews? If so, what limits should be imposed on that use? What consequences should reviewers face for violating those limits? We've started to see instances of LLM use by SIGGRAPH reviewers (including to write entire reviews). Unlike some conferences, SIGGRAPH does not yet have a policy governing LLM use in reviewing. The purpose of this BoF is to facilitate discussion on this issue, with the goal of preparing policy recommendations for the SIGGRAPH Papers Advisory Group.
Please share your experience with LLMs in the review process here: https://tinyurl.com/LLMsInReviewing
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Industry Session
Production & Animation
Geometry
Modeling
Pipeline Tools and Work
Rendering
Simulation
Full Conference
Experience
Exhibits Only
Monday
DescriptionThis talk presents an inside look at how Netflix Animation Studios utilizes procedural solutions in Houdini for complex look development. This session will showcase four proprietary tools developed in-house: Alfro, a system for procedural hair grooming; Weave, a system for procedural fabric generation; Spawn, an instancing system for building complex environments; and Spruce, an instancing system for procedural vegetation generation.
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionThis interactive experience showcases generative models that create ambiguous anamorphoses—images that hold meaning both normally and when viewed through specific mirrors and lenses. Participants explore these illusions hands-on, generate their own using a custom UI, and take home a small cylindrical mirror to continue discovering hidden visuals beyond the exhibit.
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Education
Ethics and Society
Games
Generative AI
Haptics
Modeling
Performance
Real-Time
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionLove That Defies Mortality reinterprets The Peony Pavilion using asymmetric VR and cross-media design, creating immersive, non-linear narrative experiences. By combining traditional Chinese theatre with modern technology, it bridges cultural heritage and contemporary audiences, offering innovative approaches in both academic research and practical application.
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Education
Ethics and Society
Games
Generative AI
Haptics
Modeling
Performance
Real-Time
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionLunar Roving Adventure is a VR game that brings NASA's Apollo lunar missions to life. Designed to engage younger generations, it offers an immersive, interactive experience that combines historical facts with educational gameplay.
Spatial Storytelling
Full Conference
Experience
Description"Maamawi: Together Through the Fire" is not only a performance but a communal ritual, a call to action, and a shared vision for the future. It encapsulates the essence of Anishinaabe teachings and the power of storytelling, blending tradition with technology to inspire, educate, and unite.
Talk
Arts & Design
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Artificial Intelligence/Machine Learning
Capture/Scanning
Computer Vision
Generative AI
Geometry
Image Processing
Lighting
Pipeline Tools and Work
Rendering
Full Conference
Virtual Access
Thursday
DescriptionThis paper describes the techniques we use to build the complex light rigs in the Sony Pictures Imageworks lighting pipeline. Our process is semi-automatic and removes manual labour from the artists.
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Education
Ethics and Society
Games
Generative AI
Haptics
Modeling
Performance
Real-Time
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionMagic You is an interactive virtual reality coming-of-age narrative about ADHD, told as a magical journey in a hand-drawn, coloured-pencil aesthetic. The artwork hopes to present the experiences and imagination of people with ADHD to the audience in a poetic way and give the audience a manifestation of neurodiversity.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe introduce MAGNET (Muscle Activation Generation Networks), a scalable framework for reconstructing full-body muscle activations across diverse human movements, which also includes distilled models for solving downstream tasks or generating real-time muscle activations—even on edge devices. The efficacy is demonstrated through examples of daily life and challenging behaviors.
Industry Session
New Technologies
Production & Animation
Animation
Pipeline Tools and Work
Simulation
Full Conference
Experience
Exhibits Only
Tuesday
DescriptionWashi washi! Yep, I also saw those fancy Vellum sims out there, folding a grid into crispy origamis. This is not a presentation about that 🙂 Last year, COPs made its debut as a beta, and we undoubtedly have some new goodies for you to play with. There are also additions to SOP tools that I'll call out, plus getting some final pixels via Solaris too. Wow, has it been 12 months already??
Talk
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Dynamics
Geometry
Pipeline Tools and Work
Rendering
Simulation
Full Conference
Virtual Access
Wednesday
DescriptionThis work discusses the technical solutions developed for making the Stream of Consciousness in Pixar's Inside Out 2, including in-house procedural tools that facilitated the authoring and stylization of velocity fields interacting with various 3D obstacles.
Birds of a Feather
Gaming & Interactive
Research & Education
Artificial Intelligence/Machine Learning
Capture/Scanning
Scientific Visualization
Spatial Computing
Full Conference
Experience
DescriptionThis BOF explores a pipeline that fuses point clouds, natural language object descriptions, and eye gaze data to reconstruct architectural scenes and visualize human attention in 3D. Using SLAM-corrected trajectories, ray-object intersections, and interactive Plotly visualizations, the system reveals where users look and how they interact with spatial environments. Potential topics of discussion: methods for aligning multimodal data, computing gaze-object intersections, and designing interpretable heatmaps.
Contact Info: [email protected]
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe introduce Masked Anchored SpHerical Distances (MASH), a novel multi-view and parametrized representation of 3D shapes. MASH is versatile for multiple applications including surface reconstruction, shape generation, completion, and blending, achieving superior performance thanks to its unique representation encompassing both implicit and explicit features.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionMatCLIP assigns realistic PBR materials to 3D models using shape- and lighting-invariant descriptors derived from images, including LDM outputs and photos. It outperforms prior methods by over 15%, enabling consistent material predictions across varied geometry and lighting, with applications to large-scale 3D datasets like ShapeNet and Objaverse.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionMaterialPicker is a multi-modal material generation model that creates high-quality material maps from images and/or text by fine-tuning a video diffusion model. It robustly extracts materials from real-world photos, even with distortion or occlusion, enhancing fidelity, diversity, and efficiency in material synthesis.
Appy Hour
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Computer Vision
Education
Games
Generative AI
Image Processing
Lighting
Pipeline Tools and Work
Real-Time
Spatial Computing
Full Conference
Experience
DescriptionMeCapture, our mobile augmented reality app, helps users document and visualize long-term body changes through augmented reality guidance. Built as part of our recently published research, Personal Time-Lapse, it shows potential in personal health monitoring and is freely available on the App Store (MeCapture.com).
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Education
Full Conference
Experience
Exhibits Only
DescriptionJoin us for a relaxed and informative mingling event designed for those interested in exploring volunteering opportunities within the ACM SIGGRAPH organisation and in SIGGRAPH Conferences. This is a great chance to meet current key volunteers and leadership, learn about the various roles and committees that you could get involved in, and discover how your skills and passions could make a meaningful impact on our community. Whether you're looking to expand your professional network, gain new experiences, or give back to the community, this event provides a valuable opportunity to get involved with SIGGRAPH!
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionMeschers are a mesh representation for Escheresque geometry. They allow us to solve partial differential equations on the surface of an impossible object, meaning that we can find impossible shortest paths, perform mescher smoothing, and even inverse render a mescher from an image.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionMÉTA INSTRUMENT n°4
The MI4 is the latest generation of computer-assisted musical instrument designed by PUCE MUSE. MI4 is made entirely by 3D printer.
MI4 is a man/machine interface that reconciles the refinement of hand and touch with digital technology. It enables music and image to be played in real time, with a new virtuosity. It offers a major innovation: a refined and highly precise measurement of touch and hand movements to enable a new virtuosity of digital performance. It is designed for anyone — amateur or professional musicians, people with reduced mobility, performers using digital technology in real time...
Talk
Production & Animation
Livestreamed
Recorded
Animation
Art
Dynamics
Geometry
Modeling
Pipeline Tools and Work
Rendering
Simulation
Full Conference
Virtual Access
Thursday
DescriptionWe present a novel character rigging solution developed for OOOOO, a liquid supercomputer in Pixar's Elio. OOOOO is Pixar's first mesh-free character rig, with a hierarchical arrangement of implicit surface primitives and operators that allows for complex transformations and offers unprecedented flexibility in character animation.
Talk
Production & Animation
Livestreamed
Recorded
Animation
Art
Dynamics
Geometry
Modeling
Pipeline Tools and Work
Rendering
Simulation
Full Conference
Virtual Access
Thursday
DescriptionElio's OOOOO is Pixar's first character created as an implicit surface, visualized with GLSL in our animation software, Presto. This talk will go past the model/rig stage and consider look development challenges related to the loss of stable mesh data on a shapeshifting character, resulting in a per-frame process.
Spatial Storytelling
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Animation
Education
Performance
Real-Time
Spatial Computing
Full Conference
Experience
DescriptionDemonstration on how we are designing a hyper-reality adaptation of Shakespeare’s Macbeth featuring life-sized metahuman digital doubles that appear to intelligently interact with live actors in a virtual production volume.
Educator's Day Session
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Animation
Capture/Scanning
Digital Twins
Education
Games
Performance
Pipeline Tools and Work
Real-Time
Full Conference
Virtual Access
Experience
Monday
DescriptionJoin Epic Games Education for an in-depth overview of the latest MetaHuman updates in Unreal Engine 5.6. This session will explore how advancements in high-fidelity digital humans can benefit post-secondary programs across diverse disciplines: film, animation, games, simulation, fashion, and more. The session will also cover the latest education-focused initiatives from Epic's Education, Learning, and Training team, including partnerships, events, and resources for educators and trainers.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionIn high-stiffness, high-resolution simulations, while primal space methods typically fail, the dual-space XPBD method produces unphysical softening artifacts due to convergence stall. We design an innovative Algebraic Multigrid method to enhance XPBD, utilizing lazy-update prolongators and near-kernel optimization. Our approach ensures stability, efficiency, and scalability for high-stiffness, high-resolution deformable models.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Real-Time Live!
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Games
Generative AI
Geometry
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Tuesday
DescriptionI will show the game Miegakure, in which you are a 3D character inside a 4D world. The way we represent the fourth dimension is by taking a 3D slice through the 4D world, similar to how you can take a 2D slice through a 3D world.
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Performance
Real-Time
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Description"Mimosa Pithics" revitalizes the pith paper cultural heritage of Hsinchu, Taiwan by integrating biomimicry, shape memory alloy (SMA) technology, and interactive design. This kinetic artwork features pith petals that open and close in response to viewer interaction, evoking the natural movement of a mimosa plant through infrared sensor activation. Developed in close collaboration with local pith paper artisans, the project highlights the expertise of traditional craftspeople while bridging heritage and contemporary technologies. By reimagining endangered crafts through hybrid methods, "Mimosa Pithics" fosters sustainable dialogues between tradition and innovation.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe introduce MIND, a novel generative framework for inverse-designing diverse, tileable 3D microstructures. Leveraging latent diffusion and our hybrid neural representation, MIND precisely achieves targeted physical properties, ensures geometric validity, and enables seamless boundary compatibility—opening new avenues for advanced metamaterial design and manufacturing applications.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionMany problems in graphics can be formulated as a non-linearly constrained global minimization (MINIMIZE), or solution of a system of non-linear constraints (SOLVE). We introduce MiSo, a domain-specific language and compiler for generating efficient code for low-dimensional MINIMIZE and SOLVE problems, using interval methods to guarantee conservative results.
Birds of a Feather
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Pipeline Tools and Work
Full Conference
Experience
DescriptionThis is part of a linked series of Technical Pipeline BoFs, covering the Reference Platform, Renderfarming, Cloud, Pipeline, Storage, and MLOps. There's been much ado about Generative AI models, but how does one train, deploy, and integrate these into a studio/production pipeline reliably and efficiently?
This session aims to bridge the gap between experimental development of machine learning models and their operational deployment. It will present the state of the art and encourage participant-led discussion to identify challenges and solutions, fostering shared understanding and actionable insights.
Attendees will receive invites to our 'Beers of a Feather' event, the same evening.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionMobius is a novel method to generate seamlessly looping videos directly from text descriptions, without any user annotations, thereby creating new visual material for multimedia presentations.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionModelSeeModelDo presents a speech-driven 3D facial animation method using a latent diffusion model conditioned on a reference clip to capture nuanced performance styles. A novel "style basis" mechanism extracts key poses to guide generation, achieving expressive, temporally coherent animations with accurate lip-sync and strong stylistic fidelity across diverse speech inputs.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionThis work presents a physically-based model for simulating and rendering glow discharge, a luminous plasma effect seen in neon lights and gas discharge lamps. The model captures particle interactions and emission dynamics, integrates into volume rendering systems, and enables realistic, interactive visualizations of complex light phenomena.
Talk
Gaming & Interactive
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Dynamics
Games
Image Processing
Industry Insight
Lighting
Pipeline Tools and Work
Real-Time
Simulation
Full Conference
Virtual Access
Tuesday
DescriptionFrom observations of luminous flames, it is apparent that soot oxidation behaves as an erosion of the flame. Motivated by this, we model soot oxidation with a level set equation combining physics and proceduralism. We demonstrate our method by several examples ranging from small-scale flames to large-scale turbulent fire.
Course
Gaming & Interactive
Livestreamed
Recorded
Games
Performance
Real-Time
Rendering
Full Conference
Virtual Access
Wednesday
DescriptionThis course covers modern Vulkan features, including the latest additions in Vulkan 1.4. Topics include dynamic rendering, synchronization strategies, streamlined subpasses, and bindless techniques. Through practical examples, participants will learn how to implement different rendering techniques.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionMeasures can be compactly represented and approximated using the theory of moments. This work proves that such moment-based representations are differentiable, leading to principled and efficient approaches for approximating transmittance and visibility in differentiable rendering.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionMonetGPT explores using multimodal large language models (MLLMs) for photo retouching by injecting domain knowledge via visual puzzles. These puzzles help MLLMs understand individual operations, visual aesthetics, and generate expert plans. Our procedural pipeline enables explainable edits with detailed reasoning for the plan and individual operations.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe propose a high-quality online reconstruction pipeline for monocular input streams, reconstructing environments with detail across multiple levels while maintaining high speed.
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Education
Ethics and Society
Games
Generative AI
Haptics
Modeling
Performance
Real-Time
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionWe present Monsteroom, a Substitutional Reality -based system integrating virtual reality, movable furniture, and smart appliances to simulate giant virtual pets. It enhances realism through spatial illusions and environmental feedback, enabling immersive, embodied interactions beyond traditional virtual pet systems.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe solve partial differential equations in domains involving complex microparticle geometry that is impractical, or intractable, to model explicitly. Drawing inspiration from volume rendering, we treat the domain as a participating medium with stochastic microparticle geometry and develop a volumetric variant of the Monte Carlo walk on spheres algorithm.
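For readers unfamiliar with the base algorithm the paper extends, here is a minimal sketch of the classic (non-volumetric) walk on spheres estimator for the Laplace equation in 2D. The unit-disk domain and harmonic boundary data are illustrative assumptions, not the paper's stochastic-microgeometry setting.

```python
import math, random

def walk_on_spheres(x, y, boundary_dist, boundary_value, eps=1e-3, max_steps=1000):
    # Estimate u(x, y) for Laplace's equation by one random walk:
    # repeatedly jump to a uniform point on the largest circle centered at
    # the current point that fits inside the domain, until within eps of
    # the boundary, then read off the boundary value there.
    for _ in range(max_steps):
        r = boundary_dist(x, y)
        if r < eps:
            break
        theta = random.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(theta)
        y += r * math.sin(theta)
    return boundary_value(x, y)

def estimate(x, y, boundary_dist, boundary_value, n=2000):
    # Monte Carlo average over n independent walks.
    return sum(walk_on_spheres(x, y, boundary_dist, boundary_value)
               for _ in range(n)) / n
```

On the unit disk with boundary data g(x, y) = x, the harmonic solution is u(x, y) = x, so the estimator at (0.3, 0.2) should return roughly 0.3.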
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionMetric-Aligning Motion Matching (MAMM) is a novel method for controlling motion sequences using sketches, labels, audio, or another motion sequence without requiring training or annotations. By aligning within-domain distances, MAMM provides a flexible and efficient solution for motion control across various control modalities.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe propose Motion Embeddings for video generation, enabling precise motion transfer across diverse scenes and objects. These embeddings disentangle motion from appearance, preserving original dynamics while adapting to new prompts. Experiments show that our method achieves high-quality, prompt-aligned video generation across a wide range of scenarios.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe present a framework to utilize Large Language Models (LLMs) for co-speech gesture generation with motion examples as direct conditions. It enables multi-modal controls over co-speech gesture generation, such as motion clips, a single pose, human video, or even text prompts.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionMotionCanvas enables intuitive cinematic shot design in image-to-video generation by letting users control both camera movements and object motions in a 3D-aware scene. Combining classical graphics with modern diffusion models, it translates motion intentions into spatiotemporal signals—without costly 3D data—empowering creative video synthesis for diverse editing workflows.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionLarge vision-language models often fail to capture spatio-temporal details in text-to-animation tasks. We introduce MoVer, a verification system using first-order logic to check properties like timing and positioning in motion graphics animations. Integrated into an LLM pipeline, MoVer enables iterative refinement, significantly improving animation generation accuracy from 58.8% to 93.6%.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Industry Session
Production & Animation
Rendering
Simulation
Full Conference
Experience
Exhibits Only
Tuesday
DescriptionWe will look into advances made in the MPM solver, new SOP tools and workflows around it, as well as improvements in Karma XPU and Solaris.
The main focus will be on interactions between multiple material types in the MPM solver: two-way interaction and material transitions, with solids and liquids working together, surface tension, plasticity, and emission expansion, all at once.
We will prepare a tasty hot drink from cookies, milk, sugar, coffee, and caramel, showcasing multiple simulation setups, different approaches, and gotchas with different material types in MPM.
Meshing MPM sims is a breeze with the new Neural Surface SOP; we will showcase helpful tips on how to use it to our advantage.
Every shot will be exported to USD and assembled in Solaris for rendering with Karma XPU. We will look into the new Whitewater XPU shader, motion blurred volumes, volumetric specular, new rendering properties, and AOVs.
This presentation goes through the whole spectrum of Houdini tools used to create the final spot.
Art Paper
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Games
Generative AI
Performance
Physical AI
Real-Time
Robotics
Simulation
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Experience
Monday
Description“Symbiosis of Agents” merges AI-driven multi-agent robotics with immersive environments, exploring the delicate balance of machine agency and artist authorship through emergent behaviors in self-organized AI ecologies. Its layered approach—micro-level strategies, meso-level drives, and an LLM-based “faith system”—creates a novel creative apparatus challenging conventional ideas on creativity and responsibility.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe introduce a fast, wave-based procedural noise model enabling precise spectral control in any dimension. Using precomputed wave functions and inverse Fourier transforms, it supports Gaussian and non-Gaussian noises—including Gabor, Phasor, and novel recursive cellular patterns—making it ideal for compact, controllable, and animated solid textures in 2D, 3D, and time.
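The spectral-control idea can be illustrated with a toy version of procedural Gaussian noise: prescribe amplitudes in Fourier space, randomize phases, and inverse-transform back to the spatial domain. The Gaussian annular band, its center frequency, and width below are illustrative choices, not the paper's wave-based formulation.

```python
import numpy as np

def band_noise(n=128, f0=0.15, bandwidth=0.03, seed=0):
    # Gaussian noise with a prescribed isotropic band-pass power spectrum:
    # Gaussian annular amplitude, uniform random phases, inverse FFT.
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n)[:, None]
    fy = np.fft.fftfreq(n)[None, :]
    f = np.hypot(fx, fy)                       # radial frequency
    amplitude = np.exp(-((f - f0) ** 2) / (2.0 * bandwidth ** 2))
    phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))
    field = np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))
    return field / field.std()                 # normalize to unit variance
```

Narrowing `bandwidth` yields a more oscillatory, Gabor-like pattern; widening it approaches broadband noise — the spectrum is the control knob.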
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionGenerate exciting multi-character interactions, such as team fights, with our training-free method! Multi-character interactions can be decomposed into multiple two-person interactions using a directed graph, which enables repurposing large pre-trained two-character motion synthesis models without any multi-character data. You can compose and vary multi-character interactions spatially and temporally!
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe combine the estimates generated in each guiding iteration, leveraging the importance distributions from multiple guiding iterations. We demonstrate that our path-level reweighting makes guiding algorithms less sensitive to noise and overfitting in distributions.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe personalize a pre-trained global aging prior using 50 personal selfies, allowing age regression (de-aging) and age progression (aging) with high fidelity and identity preservation.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe introduce Neural Adjoint Maps, a novel representation for correspondences between 3D shapes. Built on and extending the functional map framework, our approach enables accurate, non-linear refinement of shape matching across meshes and point clouds, setting a new standard in diverse scenarios and applications like graphics and medical imaging.
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionNano-3D is an ultra-compact, metasurface-based neural depth sensor that captures orthogonally polarized image pairs in a single shot and reconstructs metric depth in real-time. Demonstrated at SIGGRAPH 2025, it offers live, robust depth maps for objects while revealing its 700-nm-thick TiO2 metalens.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionNeST enables non-destructive 3D stress analysis of transparent objects using the polarization of light. Traditional 2D methods require destructively slicing the object. Instead, we reconstruct the entire 3D stress field by jointly handling phase unwrapping and tensor tomography using neural implicit representations and inverse rendering, enabling novel 3D stress visualizations.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionThis paper introduces Nested Attention, a mechanism that improves text-to-image personalization by injecting query-dependent subject features into cross-attention layers, achieving strong identity preservation and prompt alignment. The method maintains the model’s prior, enabling multi-subject generation across diverse domains.
Industry Session
Production & Animation
Animation
Art
Industry Insight
Lighting
Modeling
Full Conference
Experience
Exhibits Only
Monday
DescriptionJoin the Netflix Animation Studios’ Talent Team for an exclusive panel to learn more about our studio! Each of our three dynamic locations – Burbank, Sydney, and Vancouver – serve as a hub of innovation and storytelling, contributing uniquely to our diverse slate of animated projects. Learn about what’s new with us, and what roles we’re currently hiring for to meet our project, studio, and end-to-end pipeline needs. Don't miss this opportunity to discover the inner workings of Netflix Animation Studios and the work that brings our stories to life!
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe introduce a reparameterization-based formulation of neural BRDF importance sampling. Compared to previous methods that construct a probability transform to the BRDF through multi-step invertible neural networks, our BRDF sampling runs in a single step without requiring network invertibility, achieving higher inference speed with the best variance reduction.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe present a computational framework that co-optimizes structural topology, curved layers, and fiber orientations for manufacturable, high-strength composites. Using implicit neural fields, our method integrates design and fabrication objectives into a unified optimization process, achieving up to 33.1% improvement in failure load for multi-axis 3D printed fiber-reinforced thermoplastics.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe present a neural approach for estimating spatially varying light selection distributions to improve importance sampling in Monte Carlo rendering. To efficiently manage hundreds or thousands of lights, we integrate our neural approach with light hierarchy techniques, where the network predicts cluster-level distributions and existing methods sample lights within clusters.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Talk
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Digital Twins
Display
Education
Games
Generative AI
Geometry
Graphics Systems Architecture
Image Processing
Modeling
Pipeline Tools and Work
Real-Time
Rendering
Simulation
Virtual Reality
Full Conference
Virtual Access
Wednesday
DescriptionMetaphysic’s Neural Performance Toolset offers photorealistic, AI-driven human performance synthesis, combining advanced neural architectures, identity training, and latent-space manipulation. The system delivers unmatched realism and control for cinematic and real-time productions. Its success in major film and worldwide live events illustrates its capacity to redefine AI-generated content in global entertainment.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe train a network to map signed distance fields to the quadrature points and weights of non-conforming numerical integration rule in a Mixed Finite Element formulation, enabling differentiable elastic simulation over evolving domains. We demonstrate applications to image-guided material and topology optimization.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe propose NeurCross, a self-supervised framework for quadrilateral mesh generation that jointly optimizes principal curvature direction field and cross field by employing an optimizable neural SDF to approximate the input surface. NeurCross outperforms state-of-the-art methods in terms of singular point placement, robustness to noise and geometric variations, and approximation accuracy.
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
New Technologies
Not Livestreamed
Not Recorded
Art
Augmented Reality
Ethics and Society
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionA collaboration with the DAC standing committee, this session presents a dynamic conversation exploring “New Media Architecture(s): Vancouver”, a digital media exhibition that brings augmented reality artworks into dialogue with “Heron’s Dreamscape”, a vibrant public mural by artist Priscilla Yu in Vancouver, BC. Curated by Johannes DeYoung, Gustavo Alfonso Rincon, and Miriam Esquitín, the exhibition reimagines the role of public art through digital augmentation, activating the mural as a living interface between place, community, and technology.
This panel features the curators alongside Priscilla Yu in a discussion that delves into the collaborative process behind the exhibition and the evolving relationship between physical murals and digital interventions. Together, they’ll explore how site-specific digital media can expand the narrative capacity of public artworks, deepen community engagement, and reframe our experience of urban environments.
Through an interdisciplinary lens, the conversation will address the potentials and challenges of blending artistic traditions with emerging technologies — and what it means to co-author public space in the digital age.
Audience members will gain insight into the artistic, curatorial, and technical approaches that shaped False Creek Frequencies, while reflecting on the broader cultural impact of art in augmented urban landscapes.
Industry Session
Production & Animation
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Digital Twins
Games
Generative AI
Geometry
Graphics Systems Architecture
Hardware
Image Processing
Industry Insight
Lighting
Performance
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Simulation
Virtual Reality
Full Conference
Experience
Exhibits Only
Tuesday
DescriptionJoin NVIDIA engineers for an in-depth look at how the NVIDIA OptiX ray tracing toolkit is using DLSS, Mega Geometry, Cooperative Vectors and more to redefine real-time and production rendering. We’ll explore the latest in AI-powered ray tracing techniques, new neural rendering methods, and performance leaps for both interactive and offline graphics. Whether you’re exploring GPU-accelerated rendering for the first time or advancing what’s possible, we invite you to join the conversation about the future of rendering.
Production Session
Arts & Design
Production & Animation
Research & Education
Not Livestreamed
Recorded
Animation
Art
Education
Image Processing
Industry Insight
Modeling
Pipeline Tools and Work
Real-Time
Rendering
Scientific Visualization
Full Conference
Virtual Access
Sunday
DescriptionBringing NASA Data to Life: The Power of Visualizing Science
This production session presents how NASA Earth science reaches global audiences through compelling data-driven visualizations. As scientific data grows increasingly complex and voluminous, the challenge lies in transforming data into meaningful, accessible knowledge. NASA team members bridge this gap by working closely with scientists and mission teams to create innovative visualizations that make intricate Earth phenomena universally understandable. These visualizations transform complex datasets into captivating narratives that advance both science and public understanding.
This production session showcases the making of public-facing NASA exhibits and new works in development, including data dashboards that visualize near real-time extreme events, such as wildfires and other disasters, as they happen. In addition, the session reveals the processes of creating visualizations of atmospheric phenomena using state-of-the-art models that in turn are used as high-quality training data to fuel AI innovation.
A multidisciplinary team of artists, engineers, and data visualization experts demonstrates their process for creating large-scale data-driven media that engages diverse audiences. Building on their expertise working with Earth science data, the team reveals both technical challenges and creative breakthroughs encountered when transforming complex scientific datasets into compelling visual narratives—from initial concept development through final production. Attendees will discover cutting-edge approaches to artistic direction, scientific models, computational techniques, and robust pipeline development. The presentation explores powerful storytelling strategies, technical implementation methods, and emerging research opportunities that advance multi-dimensional storytelling that conveys scientific insights and inspires wonder.
https://svs.gsfc.nasa.gov/
https://earth.gov/
The recording will be available after August 14.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionVideo forensics, which focuses on identifying fake or manipulated video, is becoming increasingly difficult with the development of more advanced video editing techniques. We show how coding near-imperceptible, noise-like modulations into the illumination of a scene can create information asymmetry that favors forensic verification of video captured from that scene.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Course
Labs
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionMaximize your SIGGRAPH experience by earning an industry-recognized NVIDIA certification, offered at no additional charge to Experience and Full conference attendees.
For the first time, the OpenUSD Development certification exam will be proctored live at SIGGRAPH. You can also choose from associate and professional-level exams in Generative AI, Data Science, AI Infrastructure, AI Operations, and AI Networking. Prepare in advance by attending sessions at SIGGRAPH 2025 in the "Learn OpenUSD" series and the "Generative AI Explained" course. NVIDIA also offers official study guides, and you can watch the on-demand certification overview session to understand the exam format, topics, and test-day tips for success.
What to Know Before You Register:
- You must register for an exam session. Seats are limited and filled on a first-come, first-served basis.
- Associate-level exams take 1 hour.
- Professional-level exams take 90 minutes.
Register here
Industry Session
New Technologies
Artificial Intelligence/Machine Learning
Digital Twins
Generative AI
Hardware
Physical AI
Robotics
Simulation
Full Conference
Experience
Exhibits Only
Monday
DescriptionInteractive chatbots are gaining popularity across multiple industries, serving as virtual assistants, customer support agents, sales support agents, and much more. They help users find or input information quickly by instantly responding to requests, eliminating the need for human intervention or manual research. With rapid advancements in Generative AI and Language Models, modern chatbots can provide users with an engaging experience similar to what they would get from humans. This workshop covers important topics to help users understand, build, and deploy Digital Humans, as well as how to customize them for specific use cases.
Industry Session
New Technologies
Artificial Intelligence/Machine Learning
Digital Twins
Generative AI
Hardware
Physical AI
Robotics
Simulation
Full Conference
Experience
Exhibits Only
Monday
DescriptionIn the era of embodied AI, the boundary between digital creation and physical action is vanishing. This session introduces GRID, General Robotics’ low-code AI development platform that enables creators, engineers, and scientists to quickly generate, deploy, and adapt intelligent robot behaviors—bridging the gap between visual concepts and real-world robotics.
We’ll explore a novel agentic workflow for developing robot skills—where creators collaborate with physical agents in real time, iterating with minimal friction. Whether you're designing a robot animation, visualizing a behavior, or prototyping a new product interaction, GRID empowers users to translate those visual and conceptual ideas into executable AI on physical robots.
Participants will get hands-on experience in a workshop using GRID and select from a range of research and commercial-grade robots to program, deploy, and adapt behaviors “on the fly,” watching your ideas come to life in the physical world in minutes—not hours or days.
Industry Session
New Technologies
Artificial Intelligence/Machine Learning
Digital Twins
Generative AI
Hardware
Physical AI
Robotics
Simulation
Full Conference
Experience
Exhibits Only
Monday
DescriptionAgents have been all the rage recently, with many frameworks advertising agentic capabilities, new use cases popping up left and right, and some truly amazing software innovations that keep forcing us to redefine limits and expectations. We'll boil down the agent abstraction to its roots, discuss how the definition is manifesting in modern software, and highlight some key insights that are driving the space forward.
By the end of this course, students will learn:
- What is the definition of an agent, and where is it going?
- How are "agents" used to augment existing applications, and how are they being used to solve new problems?
- What are some key ideas that are helping to drive agent applications further in quality, usefulness, and scope?
Industry Session
Gaming & Interactive
Games
Graphics Systems Architecture
Hardware
Performance
Real-Time
Rendering
Full Conference
Experience
Exhibits Only
Wednesday
DescriptionNVIDIA Nsight Aftermath SDK integrates into D3D12 and Vulkan applications to generate GPU crash reports when an exception or TDR occurs, helping developers track down and debug hard-to-reproduce errors in deployed applications. This workshop will teach the fundamentals of Aftermath by having students integrate the Aftermath SDK into an application, then walk through the process of inspecting crash reports and managing the symbol files required to achieve full source-code attribution for crashes in shaders.
Industry Session
New Technologies
Production & Animation
Animation
Art
Digital Twins
Education
Geometry
Industry Insight
Modeling
Performance
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Simulation
Full Conference
Experience
Exhibits Only
Tuesday
DescriptionIn this hands-on course, learners will explore the principles and practical workflows of asset modularity, instancing, and content reuse in OpenUSD. Through guided exercises and real-world scenarios, participants will build modular asset hierarchies using OpenUSD’s model kinds and composition arcs and leverage scenegraph instances and point instancers to build efficient and scalable 3D scenes.
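As a rough illustration of the instancing workflow this course covers, the `.usda` fragment below references a single asset layer twice with `instanceable = true`, so both prims share one prototype in the scenegraph. The asset path `./assets/crate.usda` and prim names are invented for the sketch:

```usda
#usda 1.0
(
    defaultPrim = "World"
)

def Xform "World"
{
    # Two instanceable references to the same asset layer share one
    # prototype, so the scene pays for the crate geometry only once.
    def Xform "Crate_01" (
        kind = "component"
        instanceable = true
        references = @./assets/crate.usda@</Crate>
    )
    {
        double3 xformOp:translate = (0, 0, 0)
        uniform token[] xformOpOrder = ["xformOp:translate"]
    }

    def Xform "Crate_02" (
        kind = "component"
        instanceable = true
        references = @./assets/crate.usda@</Crate>
    )
    {
        double3 xformOp:translate = (2, 0, 0)
        uniform token[] xformOpOrder = ["xformOp:translate"]
    }
}
```

Marking the asset prims `kind = "component"` is what lets the model hierarchy tooling treat each crate as a leaf asset, which is the "model kinds" half of the workflow described above.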
Industry Session
New Technologies
Production & Animation
Animation
Art
Digital Twins
Education
Geometry
Industry Insight
Modeling
Performance
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Simulation
Full Conference
Experience
Exhibits Only
Tuesday
DescriptionLearn how to prepare 3D assets for simulation and machine learning applications with OpenUSD. Add semantic labels, apply SimReady best practices, and use automated tools for efficient workflows.
Industry Session
Gaming & Interactive
Games
Graphics Systems Architecture
Hardware
Performance
Real-Time
Rendering
Full Conference
Experience
Exhibits Only
Wednesday
DescriptionNVIDIA Nsight Graphics is a standalone tool for designing, debugging, and optimizing games and professional graphics applications. This lab will immerse students in the architecture of an open-source Gaussian splat renderer, which will serve as the canvas for learning Nsight Graphics for both rasterization and ray tracing pipelines. This first session in the series will focus on inspection and debugging of frames to identify and diagnose common rendering bugs and performance blockers. By the end of this lab, students will be able to navigate key tools within Nsight Graphics, including the Graphics Debugger, Ray Tracing Inspector, and Shader Debugger.
Industry Session
Gaming & Interactive
Games
Graphics Systems Architecture
Hardware
Performance
Real-Time
Rendering
Full Conference
Experience
Exhibits Only
Wednesday
DescriptionNVIDIA Nsight Graphics is a standalone tool for designing, debugging, and optimizing games and professional graphics applications. This lab will immerse students in the architecture of an open-source Gaussian splat renderer, which will serve as the canvas for learning Nsight Graphics for both rasterization and ray tracing pipelines. This second session in the series will focus on detailed profiling and optimization of shaders using the GPU Trace Profiler, including coverage of ray-tracing-specific shading bottlenecks. By the end of this lab, students will be able to interpret GPU timeline and pipeline stalls via GPU Trace, and analyze shader performance metrics and occupancy using the Shader Profiler.
This lab will include an introduction covering the state of the NVIDIA Graphics Developer Tools, including Nsight Graphics, Nsight Systems, and Nsight Aftermath. It will cover recent advancements in the tools, and how the tools are being used in production both internally at NVIDIA and across the graphics industry.
Industry Session
New Technologies
Production & Animation
Animation
Art
Digital Twins
Education
Geometry
Industry Insight
Modeling
Performance
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Simulation
Full Conference
Experience
Exhibits Only
Tuesday
DescriptionJoin industry experts in this interactive office hours session to discuss how to optimize your OpenUSD workflows, troubleshoot challenges, and deepen your understanding of OpenUSD best practices. Whether you’re seeking advice on current projects, exploring advanced features, or preparing for the OpenUSD professional certification exam, this is your opportunity to get personalized guidance. All experience levels are welcome—connect, learn, and advance your OpenUSD development journey.
Industry Session
Gaming & Interactive
Games
Graphics Systems Architecture
Hardware
Performance
Real-Time
Rendering
Full Conference
Experience
Exhibits Only
Wednesday
DescriptionThis lab will introduce NVIDIA Nsight Systems for game and graphics development. Nsight Systems provides a holistic view of application performance and utilization of resources across both the CPU and GPU. Hands-on lessons will cover topics such as: VRAM usage visualization, the resource migration tracker, the graphics hotspot analysis recipe, and threading analysis.
Industry Session
Gaming & Interactive
Games
Graphics Systems Architecture
Hardware
Performance
Real-Time
Rendering
Full Conference
Experience
Exhibits Only
Wednesday
DescriptionThis lab will introduce NVIDIA Nsight Systems for game and graphics development. Nsight Systems provides a holistic view of application performance and utilization of resources across both the CPU and GPU. Hands-on lessons will cover topics such as: VRAM usage visualization, the resource migration tracker, the graphics hotspot analysis recipe, and threading analysis.
Industry Session
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Digital Twins
Generative AI
Geometry
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Simulation
Full Conference
Experience
Exhibits Only
Thursday
DescriptionIn this hands-on lab, attendees will learn how to develop a Vulkan-based ray tracing application with OpenXR support for Meta Quest. Additionally, they will learn to integrate DLSS to enhance quality and performance, and to utilize VCR to make XR app development easier.
Industry Session
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Digital Twins
Generative AI
Geometry
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Simulation
Full Conference
Experience
Exhibits Only
Thursday
DescriptionIntelligent robotic systems require training on immense amounts of data. Synthetic data generation (SDG) helps create that training data at a massive scale. In this hands-on lab, you'll learn how to use NVIDIA Isaac Sim to simulate entire worlds, and to generate diverse, physically-accurate synthetic data from those worlds with NVIDIA Cosmos World Foundation Models (WFMs).
These datasets become the training set for AI-driven robotics and computer vision, accelerating your AI training pipeline through domain randomization and multimodal controls.
Whether you're a developer or a researcher, this course provides practical workflows and best practices to supercharge your projects. Join us and experience how Cosmos brings speed, realism, and variety to the training data your robotics systems train on.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionOctGPT is a novel multiscale autoregressive model for 3D shape generation. It introduces hierarchical serialized octree representation, octree-based transformer with 3D RoPE and token-parallel generation schemes. OctGPT significantly accelerates convergence, achieves performance rivaling or surpassing state-of-the-art diffusion models, and supports text/sketch/image-conditioned generation and scene-level synthesis.
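OctGPT's serialized-octree idea can be illustrated in miniature. The sketch below is illustrative only, not the paper's representation: the nested-dict octree and 8-bit occupancy masks are invented for the example. It linearizes an octree breadth-first into one child-occupancy token per node, the kind of sequence an autoregressive transformer can consume.

```python
from collections import deque

def serialize_octree(node):
    """Breadth-first traversal emitting one 8-bit child-occupancy mask per node."""
    tokens, queue = [], deque([node])
    while queue:
        current = queue.popleft()
        mask = 0
        for i, child in enumerate(current.get("children", [None] * 8)):
            if child is not None:
                mask |= 1 << i          # set bit i when octant i is occupied
                queue.append(child)
        tokens.append(mask)
    return tokens

# A root with children in octants 0 and 7; octant 7 has a child in octant 3.
leaf = {}
tree = {"children": [{"children": [None] * 8}, None, None, None,
                     None, None, None,
                     {"children": [None, None, None, leaf,
                                   None, None, None, None]}]}
print(serialize_octree(tree))  # [129, 0, 8, 0]
```

Deeper levels simply append more masks, so a coarse-to-fine (multiscale) generation order falls out of the breadth-first traversal for free.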
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe introduce Offset Geometric Contact (OGC), a groundbreaking method offering "penetration-free for free" simulations of codimensional objects. OGC efficiently constructs offset volumetric shapes to ensure stable, artifact-free collisions. Leveraging parallel GPU computations, it delivers real-time simulations at speeds over 100× faster than previous methods, eliminating costly collision detection and global synchronization.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionLogarithmic metric blending enables smooth interpolation between planar shapes while bounding both conformal and area distortions. By blending symmetric positive definite metrics in the log domain, our method geometrically interpolates distortions. This leads to natural transitions that outperform existing techniques in applications such as shape morphing and animation.
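The core log-domain blend can be sketched in a few lines of NumPy. This is an illustrative reconstruction from the abstract, not the authors' code: it interpolates the matrix logarithms of two SPD metrics and exponentiates back, S(t) = exp((1-t) log A + t log B).

```python
import numpy as np

def spd_log(m):
    """Matrix logarithm of a symmetric positive-definite matrix via eigh."""
    w, v = np.linalg.eigh(m)
    return v @ np.diag(np.log(w)) @ v.T

def spd_exp(m):
    """Matrix exponential of a symmetric matrix via eigh."""
    w, v = np.linalg.eigh(m)
    return v @ np.diag(np.exp(w)) @ v.T

def blend(a, b, t):
    """Log-Euclidean interpolation between SPD metrics a and b at parameter t."""
    return spd_exp((1.0 - t) * spd_log(a) + t * spd_log(b))

A = np.diag([1.0, 4.0])
B = np.diag([9.0, 1.0])
mid = blend(A, B, 0.5)
print(np.round(np.diag(mid), 3))  # diagonal case gives per-axis geometric means: [3. 2.]
```

Because the blend happens in the log domain, eigenvalues stay positive for every t, which is why this style of interpolation can bound distortion where naive linear blending of the metrics cannot.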
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe propose a fast, on-the-fly 3D Gaussian Splatting method that jointly estimates poses and reconstructs scenes. Through fast pose initialization, direct primitive sampling, and scalable clustering and merging, it efficiently handles diverse ordered image sequences of arbitrary length.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionManual 3D rigging is slow. UniRig introduces a unified learning framework for automatic skeletal rigging. Trained on our large, diverse Rig-XL dataset, it uses an autoregressive model and cross-attention to accurately rig various characters and objects, significantly outperforming prior methods and speeding up animation pipelines.
Talk
Arts & Design
Production & Animation
Livestreamed
Recorded
Animation
Art
Geometry
Industry Insight
Lighting
Modeling
Pipeline Tools and Work
Rendering
Full Conference
Virtual Access
Wednesday
DescriptionThe environments of Pixar's Elio (2025) feature dynamic elements that bridge the disciplines of shading, modeling, dressing, and lighting. In this talk we enumerate the challenges of creating sets that move, glow, and change, revealing pipeline innovations that leverage USD, Houdini, and other in-house tools.
Talk
Production & Animation
Livestreamed
Recorded
Animation
Art
Dynamics
Geometry
Modeling
Pipeline Tools and Work
Rendering
Simulation
Full Conference
Virtual Access
Thursday
DescriptionCloning Clay, a space amoeba-like organic cloning matter in Pixar’s Elio (2025), required a suite of procedural FX techniques to land each story beat. In regular collaboration with several departments, these techniques delivered a range of effects, including dynamic hero clay FX, secondary rippling, and full-character transformations.
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Education
Ethics and Society
Graphics Systems Architecture
Scientific Visualization
Full Conference
Experience
DescriptionWhat will the impact of your SIGGRAPH papers be when they are all offered openly to the world? How much more will they be cited? How many of your research results will be reproduced when the related software and datasets are openly available? In the past few months, the ACM Council has three times reaffirmed its intention to open access to all ACM publications and related research artifacts in the ACM Digital Library after January 1, 2026.
True to its mission, which calls for "fostering the open exchange of information", ACM demonstrates its commitment to democratization of knowledge, transparency, and accountability. Having the researcher-author in mind while aiming to achieve sustainability of the Digital Library, ACM has introduced ACM Open. It is an innovative open publishing model that allows (clusters of) institutions or even whole countries to pay a lump sum for papers published by authors affiliated with them, instead of authors paying individually per paper.
This session will explain the details of ACM Open and the changes it brings to publishing and disseminating research results through the ACM DL and answer any questions about it. It will cast this in the general context of Open Science, a new paradigm for navigating the entire research lifecycle in all sciences, which ACM intends to help our community to explore. This session will also highlight some other strategic ACM initiatives aiming at improving the environment of computing scientists and professionals as well as the general public in an ever-changing world.
Birds of a Feather
Gaming & Interactive
Artificial Intelligence/Machine Learning
Computer Vision
Display
Dynamics
Generative AI
Graphics Systems Architecture
Image Processing
Modeling
Performance
Pipeline Tools and Work
Rendering
Full Conference
Experience
DescriptionThe Open Review Initiative is an Academy Software Foundation (ASWF) project focused on collaborative, high-quality review tools and interoperability in media and entertainment. Join teams from Autodesk, Walt Disney Animation Studios, Imagineering, DNEG, Sony Imageworks, Netflix Animation Studios, and more for updates and roadmap discussions. Topics include OTIO-based annotation and sync protocols, RPA API development, and OpenAPV usage with a comparison to HTJ2K. Highlights feature OpenRV and X-Studio town hall recaps, tool showcases, and the X-Studio 1.0 release. This session is part of Open Source Days, a full day of open source BoFs hosted by ASWF. Learn more at aswf.io/opensourcedays.
Birds of a Feather
Gaming & Interactive
New Technologies
Animation
Artificial Intelligence/Machine Learning
Computer Vision
Display
Dynamics
Graphics Systems Architecture
Image Processing
Pipeline Tools and Work
Virtual Reality
Full Conference
Experience
DescriptionOCIO is an open source color management solution hosted at the Academy Software Foundation. Join us for an informal community discussion around current/planned OCIO development, implementation, and workflows. Stick around after the BoF for a group happy hour nearby!
This BoF is part of a full day of open source BoFs hosted by the Academy Software Foundation as part of Open Source Days. Learn more at aswf.io/opensourcedays.
Birds of a Feather
Gaming & Interactive
New Technologies
Artificial Intelligence/Machine Learning
Display
Dynamics
Graphics Systems Architecture
Image Processing
Modeling
Pipeline Tools and Work
Rendering
Virtual Reality
Full Conference
Experience
DescriptionOpenCue is an open source render management system and an Academy Software Foundation project. Come learn about the latest OpenCue updates, including containerized frames, a new web interface, new pip packages, and much more.
This BoF is part of a full day of open source BoFs hosted by the Academy Software Foundation as part of Open Source Days. Learn more at aswf.io/opensourcedays.
Birds of a Feather
Gaming & Interactive
New Technologies
Animation
Artificial Intelligence/Machine Learning
Display
Dynamics
Generative AI
Graphics Systems Architecture
Image Processing
Modeling
Pipeline Tools and Work
Virtual Reality
Full Conference
Experience
DescriptionWe've been hard at work on OpenMoonRay since we last presented in this forum, with many new features and improvements for production rendering, developer support, community engagement, and more. Come join us as we discuss the year in review, introduce new features, and preview the coming roadmap. We'll go over ongoing MaterialX integration and show new, exciting functionality of OpenMoonRay's Light Path Visualizer. We're eager to chat with all rendering enthusiasts!
This BoF is part of a full day of open source BoFs hosted by the Academy Software Foundation as part of Open Source Days. Learn more at aswf.io/opensourcedays.
Birds of a Feather
Gaming & Interactive
New Technologies
Animation
Artificial Intelligence/Machine Learning
Computer Vision
Display
Dynamics
Generative AI
Graphics Systems Architecture
Image Processing
Modeling
Pipeline Tools and Work
Rendering
Virtual Reality
Full Conference
Experience
DescriptionOpenPBR is a state-of-the-art material model being designed under open governance as a MaterialX subproject at the Academy Software Foundation. In this session, hear the latest updates from the main contributors and feedback from the community, and bring questions about integration into your own workflow.
This BoF is part of a full day of open source BoFs hosted by the Academy Software Foundation as part of Open Source Days. Learn more at aswf.io/opensourcedays.
Course
Production & Animation
Research & Education
Not Livestreamed
Recorded
Artificial Intelligence/Machine Learning
Dynamics
Fabrication
Geometry
Modeling
Pipeline Tools and Work
Simulation
Full Conference
Virtual Access
Sunday
DescriptionThis course will cover the latest developments and tools in the open source library OpenVDB. To mention just a few: fVDB (a deep-learning framework based on VDB), improved GPU support in NanoVDB, new tools (e.g., level-set and anisotropic surfacing), new grid types (half-float grids), and production examples.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe propose a coupled mesh-adaptation model and physical simulation algorithm to jointly generate, per timestep, optimal adaptive remeshings and implicit solutions for the simulation of frictionally contacting, large-deformation elastica.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe propose a Generative Order Learner (GOL) that optimizes element ordering for graphic design generation. Our approach learns a content-aware neural order, which can significantly improve graphic generation quality, generalize across different types of generative models and help design generators scale up greatly.
Talk
Arts & Design
Production & Animation
Livestreamed
Recorded
Animation
Art
Geometry
Industry Insight
Lighting
Modeling
Pipeline Tools and Work
Rendering
Full Conference
Virtual Access
Wednesday
DescriptionFor the aliens in Pixar's Elio, we crafted the looks for over 18 species from various inspirations; each needed to be unique and appealing but not too familiar. We explored combining illumination models and animated shading techniques in a collaborative approach between our design artists and shading artists.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionThis work introduces a forward and differentiable rigid-body dynamics framework using Lie-algebra rotation derivatives. The approach offers simplified, compact derivatives, improved conditioning, and higher efficiency compared to traditional methods. Applications include fundamental rigid-body problems and Cosserat rods, showcasing its potential for multi-rigid-body dynamics and incremental-potential formulations.
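As background for the Lie-algebra machinery this paper builds on, here is the standard so(3) exponential map via Rodrigues' formula. This is a generic sketch of the underlying rotation parameterization, not the paper's framework.

```python
import numpy as np

def hat(w):
    """Map an axis-angle vector w to its skew-symmetric matrix [w]_x."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    """Rodrigues: exp([w]_x) = I + sin(t) K + (1 - cos(t)) K^2, K = [w]_x / t."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)  # near the identity, exp([w]_x) ~ I
    K = hat(w) / theta
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# A quarter turn about z maps x-hat to y-hat.
R = exp_so3(np.array([0.0, 0.0, np.pi / 2]))
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 6))  # ~ [0. 1. 0.]
```

Differentiating a pose through this map (rather than through Euler angles or normalized quaternions) is what yields the compact, well-conditioned derivatives the abstract refers to.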
Talk
Arts & Design
Gaming & Interactive
Production & Animation
Livestreamed
Recorded
Not Recorded
Animation
Digital Twins
Industry Insight
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Full Conference
Virtual Access
Sunday
DescriptionFor The Wild Robot, the characters exhibit the sophistication of real fur and feathers in motion. But the final rendered look needed to integrate with a stylized painterly world. In LookDev, brushstrokes of detail are layered like an artist builds up strokes of paint, in response to specific key lights.
Real-Time Live!
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Games
Generative AI
Geometry
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Tuesday
DescriptionWe present the first interactive system for painting with 3D Gaussian splat brushes. With our tool, artists can sample volumetric fragments from real-world Gaussian splat captures and paint with them in real time. Our tool seamlessly deforms sampled splats along painted strokes, introducing realistic transitions between seams with diffusion inpainting.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present the first interactive system for painting with 3D Gaussian splat brushes. With our tool, artists can sample volumetric fragments from real-world Gaussian splat captures and paint with them in real time. Our tool seamlessly deforms sampled splats along painted strokes, introducing realistic transitions between seams with diffusion inpainting.
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Education
Ethics and Society
Games
Generative AI
Haptics
Modeling
Performance
Real-Time
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionRecreate the last unrealized royal banquet of Korea's Joseon Dynasty in an immersive location-based entertainment (LBE) VR journey through history and culture.
Educator's Forum
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Recorded
Animation
Artificial Intelligence/Machine Learning
Augmented Reality
Education
Games
Generative AI
Industry Insight
Pipeline Tools and Work
Virtual Reality
Full Conference
Virtual Access
Experience
Sunday
DescriptionThis panel brings educators and industry practitioners together to identify emerging trends that disrupt education while opening new opportunities for innovation and industry collaboration. Panelists will highlight innovative pedagogical and curricular approaches and discuss how industry perspectives shape academic training to better prepare learners for the evolving workforce.
Educator's Forum
Arts & Design
Research & Education
Livestreamed
Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Education
Ethics and Society
Generative AI
Real-Time
Rendering
Virtual Reality
Full Conference
Virtual Access
Experience
Monday
DescriptionThis panel examines the relevance of traditional VFX techniques in the rapidly evolving industry, discussing how education can integrate these foundational skills with new technologies like AI and real-time rendering. Featuring industry leaders and educators, the session seeks strategies to adapt curricula to prepare students for modern VFX demands.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionHigher-order surfaces enable compact, smooth geometry but require efficient rendering. We introduce PaRas, a GPU-based rasterizer that directly renders parametric surfaces, avoiding costly tessellation. It integrates seamlessly into existing pipelines, outperforming traditional methods for quartic triangular and bicubic rational Bézier patches. Experimental results confirm its superior efficiency and accuracy.
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionThe Parasitic Finger project explores how humans coexist with uncontrollable finger augmentation driven by a shape-memory alloy (SMA) actuator. What would happen if fingers made unrealistic movements on their own? Unlike human fingers, these augmented digits move like tentacles, performing actions such as waving and touching objects. Their vibrations signal alertness or cuteness.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionPARC is a framework that enhances terrain traversal with machine learning and physics-based simulation. By iteratively training a kinematic motion generator and simulated motion tracker, PARC produces a character controller capable of traversing complex environments using highly agile motor skills, overcoming the challenges of limited motion capture data.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe present PartEdit, a novel diffusion-based system enabling precise, text-based edits of object parts without retraining or manual masks. Optimizing part-aware tokens generates localized non-binary attention maps to guide seamless edits. Our novel blending strategy delivers high-quality visual results and outperforms prior techniques in both synthetic and real-world scenarios.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe introduce Patch-Grid, a unified neural implicit representation that efficiently represents complex shapes, preserves sharp features, and handles open boundaries and thin geometric details. By decomposing shapes into patches encapsulated by adaptive feature grids and merging them through localized CSG operations, Patch-Grid demonstrates superior robustness, efficiency, and accuracy.
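The localized CSG idea can be toy-modeled with analytic signed-distance functions: union, intersection, and subtraction reduce to min/max combinations of distances. The circles below are invented purely for illustration; Patch-Grid applies such operations between neural patch representations rather than analytic shapes.

```python
def sdf_circle(cx, cy, r):
    """Signed distance to a circle: negative inside, positive outside."""
    return lambda x, y: ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 - r

# Classic SDF CSG operators.
def csg_union(a, b):     return lambda x, y: min(a(x, y), b(x, y))
def csg_intersect(a, b): return lambda x, y: max(a(x, y), b(x, y))
def csg_subtract(a, b):  return lambda x, y: max(a(x, y), -b(x, y))

left = sdf_circle(-0.5, 0.0, 1.0)
right = sdf_circle(0.5, 0.0, 1.0)

lens = csg_intersect(left, right)  # the lens where both circles overlap
print(lens(0.0, 0.0) < 0.0)        # True: the origin is inside both circles
print(lens(1.4, 0.0) < 0.0)        # False: outside the left circle
```

Note that min/max CSG keeps only a distance *bound* (not an exact distance) away from the surface, one reason localized, per-patch handling of sharp feature regions matters.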
Course
Production & Animation
Research & Education
Livestreamed
Recorded
Graphics Systems Architecture
Industry Insight
Lighting
Rendering
Full Conference
Virtual Access
Tuesday
DescriptionWe will share some nitty-gritty details and challenges when integrating path guiding into production rendering systems.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionPDT is a novel framework that uses diffusion models to transform unstructured point clouds into semantically meaningful and structured distributions, such as keypoints, joints, and feature lines. Exploring complex point distribution transformation, PDT captures fine-grained geometry and semantics, offering a versatile tool for diverse tasks.
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionThis is a scientific data visualization of ocean currents around the world based on the ocean model, Estimating the Circulation and Climate of the Ocean (ECCO). The visualization is a tour of major currents of the world including western boundary currents and includes both surface and deep currents.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionThis paper investigates photorealistic scene reconstruction using videos captured from an egocentric device in high dynamic range. It presents a novel system utilizing visual-inertial bundle adjustment and a physical image formation model that handles camera motion artifacts. The experiments using Project Aria and Quest3 show substantial improvements in visual quality.
Course
Gaming & Interactive
Production & Animation
Research & Education
Livestreamed
Recorded
Artificial Intelligence/Machine Learning
Games
Lighting
Real-Time
Rendering
Full Conference
Virtual Access
Sunday
DescriptionUsing examples from film and games, this course presents advances in physically based shading in both theory and production practices, demonstrating how it enhances realism and leads to more intuitive and faster art creation.
Birds of a Feather
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Digital Twins
Math Foundations and Theory
Simulation
Full Conference
Experience
DescriptionThis CG and Simulation themed BOF focuses on three main topics. The first is high-order accurate computational science that mimics the system being simulated. The second involves human factors in virtual reality for accessible simulation user interfaces. The third focuses on Augmented Intelligence tools and logic, including computer algebra and IoT data acquisition. There will also be forays into special topics such as Fortran 2018 and computational science, parallel data flow for IoT, programming in English (and French), and the impact of augmented intelligence. Lastly, there may be some insights on numerical weather prediction and industrial plumes.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe present a photograph relighting method that enables explicit control over light sources akin to CG pipelines. We achieve this in a pipeline involving mid-level computer vision, physically-based rendering, and neural rendering. We introduce a self-supervised training methodology to train our neural renderer using real-world photograph collections.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe propose a method to estimate optimal cloth mesh resolution based on material stiffness and boundary conditions like shirring or stitching, and dynamic wrinkles from motion-induced collisions. To ensure smooth resolution transitions, we calculate transition distances and generate a mesh sizing map, enhancing realism, efficiency, and versatility for garment design.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionPhysicsFC introduces a breakthrough in interactive football simulation—enabling real-time control of physically simulated players that perform complex skills with smooth transitions. It combines skill-specific learning, physics-informed rewards, latent-guided training, and transition-aware state initialization, achieving agile, lifelike football behaviors in scenarios ranging from 1v1 play to full 11v11 matches.
The Emerging Technologies program has partnered with the Technical Papers program. For a hands-on demonstration of this paper, visit:
Emerging Technologies Demo - PhysicsFC: Learning User-Controlled Skills for a Physics-Based Football Player Controller
Tuesday, August 12, 1-2 pm
Location: Experience Hall, West Building, Exhibit Hall B
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe propose a method to approximate arbitrary freeform surface meshes with piecewise ruled surfaces. Our approach optimizes mesh shape and ruling direction field simultaneously, extracts patch topology, and refines ruling positions and orientations. The technique effectively approximates diverse freeform shapes and has potential applications in architecture and engineering.
Industry Session
Production & Animation
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Digital Twins
Games
Generative AI
Geometry
Graphics Systems Architecture
Hardware
Image Processing
Industry Insight
Lighting
Performance
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Simulation
Virtual Reality
Full Conference
Experience
Exhibits Only
Tuesday
DescriptionFrom Toy Story to today, Pixar’s rendering technology has continuously evolved — and now, with RenderMan XPU, a powerful hybrid CPU and GPU renderer, artists are gaining unprecedented speed and creative freedom.
Learn how XPU is empowering artists and production pipelines with faster iteration, state-of-the-art tools, and stylized rendering — all powered by modern CPU and NVIDIA GPU hardware. Get a first look at how Pixar is reimagining final-frame rendering with XPU and putting artists at the center.
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Performance
Real-Time
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionPlant.play() reimagines the relationship between plants, humans, and digital spaces through an interactive installation where a living plant becomes the central player in a pet simulation game. Environmental sensors and bioelectrical signals translate the plant’s natural processes into caregiving decisions, shaping pets’ growth, personality, and evolution—leading to sixty possible appearances and eight personality types. Displays highlight the plant’s decision-making journey, emphasizing its role as an active agent. By blending plants, technology, and play, Plant.play() explores posthumanist ideas and invites audiences to reflect on the connections between living beings, their environments, and the roles plants can play in our interconnected world.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe introduce a physically-based character animation framework that exploits part-wise latent tokens. The novel structured decomposition enables dynamic exploration to stably adapt to diverse unseen scenarios. Additional refinement networks improve overall motion quality. We show superior performance on multi-body tracking, motion adaptation, and locomotion with damaged body parts.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionPocket Time-Lapse is a system to record, explore and visualize long-term changes in the environment, based on data that a user can capture with the phone they carry. Our contributions include a process to conveniently capture a scene, and novel techniques for registering and visualizing panoramic time-lapse data.
Art Paper
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Games
Generative AI
Performance
Physical AI
Real-Time
Robotics
Simulation
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Experience
Monday
DescriptionPoeSpin is a human-AI cocreating system. By transforming pole dance movements into poetry through AI, we challenge both traditional prejudices against this art form and conventional approaches to human-AI creativity. This work demonstrates how computational systems can preserve the deeply human aspects of artistic expression while creating new possibilities for cross-modal artistic collaboration, suggesting pathways for more inclusive and expressive forms of human-AI co-creation.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe present a new perspective on physics-based character animation. Assuming policies for similar motions should have similar weights, we introduce regularization during RL training to preserve weight similarity. By modeling the weights’ manifold with a diffusion model, we generate a continuum of policies adapting to novel character morphologies and tasks.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe propose polynomial 2D biharmonic coordinates for closed high-order cages containing polynomial curves of any order by extending the classical 2D biharmonic coordinates using high-order BEM. When applying our coordinates to 2D cage-based deformation, users manipulate the Bézier control points to quickly generate the desired conformal deformation.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionpOps is a framework for learning semantic manipulations in CLIP’s image embedding space. Built on a Diffusion Prior model, it enables concept manipulation by training operators directly on image embeddings. This approach enhances semantic control and integrates easily with diffusion models for image generation.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionAccurate modeling of normal distribution functions (NDF) over a high-resolution normal map enables intriguing glinty appearance but is inefficient. We present a manifold-based glint formulation, transferring the glint NDF computation to mesh intersections. This framework accelerates glint rendering, as well as providing a closed-form shadow-masking derivation for normal-mapped diffuse surfaces.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe present a method for designing smooth directional fields on triangle meshes with precise control over singularities. Our approach uses a power-linear polar representation, allowing singularities of any index to be placed anywhere on the mesh. The resulting fields are smooth, robust to mesh quality, and support N-fold symmetry.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionThis work addresses recovering textured materials using inverse rendering. Our Laplacian mipmapping improves the reconstruction of high-resolution textures. We also propose a novel gradient computation that enables efficiently reconstructing textured, path-traced subsurface scattering. The methods are applied to challenging scenes, including reconstructing realistic human face appearance from sparse captures.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe present a practical method for rendering scenes with complex, recursive nonlinear stylization applied to physically based rendering. Our approach introduces nonlinear path filtering (NL-PF) and nonlinear neural radiance caching (NL-NRC), which reduce the exponential sampling cost of stylized rendering to polynomial, enabling rendering of nonlinear stylization with significantly improved efficiency.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionThis paper presents a novel pipeline to digitize physical threads and predict fabric appearance before fabricating cloth samples, addressing a real need in the fashion industry. It enables designers to make more informed material choices, thereby promoting sustainable production, reducing costs, and fostering innovation in fabric design.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present PrimitiveAnything, a novel framework that reformulates shape primitive abstraction as a primitive assembly generation task. PrimitiveAnything can generate 3D high-quality primitive assemblies that better align with human perception while maintaining geometric fidelity across diverse shape categories, which benefits various 3D applications.
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Education
Ethics and Society
Games
Generative AI
Haptics
Modeling
Performance
Real-Time
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionThis study explores the primordial sensory experience of humans by re-living the experience of having the body and mind of a baby. This experience was achieved using virtual reality (VR) and by wearing a special membrane bodysuit.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe propose a general framework, Progressive Dynamics++, for constructing a family of progressive dynamics integration methods that advance physical simulation states forward in both time and spatial resolution. We analyze requirements for stable, continuous, and consistent level-of-detail animation and introduce a novel, stable method that significantly improves temporal continuity.
Industry Session
New Technologies
Production & Animation
Animation
Full Conference
Experience
Exhibits Only
Tuesday
DescriptionIn this presentation, we will be introducing Project Outback as a context in which to explore Houdini's rigging and animation framework: KineFX (powered by APEX).
We will start by briefly recalling what KineFX and APEX are and what was achieved during the first phase of Project Outback.
For the second phase of our APEX exploration, we will be focusing on facial rigging. We will explain the difference between a facial rig built for a realistic character and one built for a stylised/cartoon character.
We will then continue our discussion by examining some facial rigging techniques and how they can be implemented in KineFX.
Some of the techniques that will be covered include:
- slider creation
- types
- xform components
- limits
- normalizing
- control groups
- blendshapes
- generating from operators
- mirroring
- symmetrizing
- splitting
- refining (sculpting)
- fixing interpolation path with inbetweens
- optimizing
- APEX graph building
- skeleton blendshaping
- lattices and weight maps
- custom deformation operators
- rivets
- layered bone deforms
- deltamush
Finally, we will close the discussion by providing some paths to be explored and some tips and tricks to keep in mind.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe propose an iterative prompt-and-select architecture to progressively reconstruct the CAD modeling sequence of a target point cloud. We propose the concept of local geometric guidance and come up with three ways to integrate this guidance into iterative reconstruction. Experiments demonstrate the superiority over the current state of the art.
Spatial Storytelling
Arts & Design
New Technologies
Not Livestreamed
Not Recorded
Art
Hardware
Performance
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionWe propose a live-streamed immersive performance from France featuring a "psychonaut" experiencing the aquatic VR journey "SPACED OUT," using our waterproof MeRCURY headset in a swimming pool. Three simultaneous live camera feeds and real-time user narration create captivating remote immersion, previously validated through two successful public performances, ensuring technical reliability.
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionRobotic systems are increasingly present in intelligent spaces, yet intuitive multi-user control remains challenging. Public Hand enables seamless and intuitive robotic hand manipulation by dynamically adjusting controllability based on proximity-aware approach. This approach enables intuitive and shared interaction without wearable devices, facilitating dynamic and flexible collaboration in robotic rooms.
Spatial Storytelling
Gaming & Interactive
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Animation
Capture/Scanning
Digital Twins
Performance
Real-Time
Full Conference
Experience
DescriptionLive physical whole-puppet performances are used to drive digital animation twins via puppix, a new capture system. As well as an overview of our work, we will demonstrate the practicalities of using the system with a live puppet character in the room with the audience, working with direction and interaction.
Talk
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Digital Twins
Dynamics
Geometry
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Simulation
Full Conference
Virtual Access
Thursday
DescriptionLive physical whole-puppet performances are used to drive digital animation characters and creatures via puppix, a new capture system. The benefits of having a live puppet character in the room with actors, directors and other characters are demonstrated and discussed, as well as the practical processes of capturing non-human physicalities.
Real-Time Live!
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Games
Generative AI
Geometry
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Tuesday
DescriptionLive physical whole-puppet performances are used to drive digital animation characters and creatures in real time via puppix, a new capture system. Experience the effect of interacting with a professionally puppeteered live theatrical puppet character in the room with you as it performs its digital twin in real time.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe identify stable orientations of any rigid shape, and the probability that it will rest at these orientations if randomly dropped on the ground. We use a differentiable inverse version of our method to design and fabricate shapes with target resting behavior, such as dice with target, nonuniform probabilities.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionPhysically based differentiable rendering computes gradients of the rendering equation. The task is made difficult by discontinuities in the integrand at object silhouettes. To address this challenge, we propose a novel edge sampling approach that outperforms the state-of-the-art among unidirectional differentiable renderers.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionThis paper introduces a novel grid structure that extends tall cell methods for efficient deep water simulation. Unlike previous methods, our approach subdivides tall cells horizontally, allowing for more aggressive adaptivity. We demonstrate that this novel form of adaptivity delivers superior performance compared to traditional uniform tall cells.
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Performance
Real-Time
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionQuantum Tango, 2025, connects Vancouver, London, and Milan through a real-time, interactive digital artwork. At each location, a screen and camera setup respond to local audience movement while revealing dynamic images and colour patterns influenced by activity in the other cities. Blending abstract visuals with interlaced images captured at different times, the work creates a shared, evolving aesthetic experience across continents. Building on Edmonds’ earlier Communications Game (1971 on) and Cities Tango (2007 on) projects, this generative networked piece explores urban presence, audience agency, and remote intimacy—transforming public interaction into a cross-cultural dialogue beyond the limits of conventional telepresence. This will be the first time a 3-node version of the work is presented, especially for SIGGRAPH.
Spatial Storytelling
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Games
Generative AI
Performance
Real-Time
Spatial Computing
Full Conference
Experience
DescriptionQuantum Theater takes quantum science as subject and method for playable theater. Phenomena like entanglement, superposition, coherence, and collapse shape the performance in a post-AI exploration of liveness, variability, and improvisation. Multiple realities are layered on stage, where the audience as observer-participant plays an active role in cohering singular narratives.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionThis paper introduces an improved quad-based geometry streaming method for remote rendering that reduces bandwidth demands through temporal compression and supports QoE-driven adaptation. It achieves high-quality visuals, captures disocclusion events, uses 15× less data than SOTA, and reduces bandwidth down to 100 Mbps, enabling real-time, low-latency rendering on lightweight headsets.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present a simple and fast method to reconstruct radiance surfaces by directly supervising the radiance field via image projection. Unlike volumetric approaches, we move alpha blending and ray marching from image formation into loss computation. This simple modification enables high-quality surface reconstruction while preserving baseline efficiency and robustness.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present the first algorithm to automatically compute sewing patterns for upcycling existing garments into new designs. Our algorithm takes as input two garment designs along with their corresponding sewing patterns and determines how to cut one of them to match the other by following garment reuse principles.
ACM SIGGRAPH 365 - Community Showcase
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Ethics and Society
Full Conference
Experience
DescriptionJoin LGBTQIA+ attendees and allies for the Rainbow Meetup, a welcoming space to connect, celebrate identity, and help shape the future of queer community at SIGGRAPH. This year is our first official gathering as the Rainbow Affinity Group. We’ll introduce the group’s mission and invite participation in key leadership roles, including a new Student Chapters Liaison and Fundraising Chair. To ensure a safe, respectful environment, the ACM Code of Conduct will be strictly enforced. Harassment or non-consensual behavior, including unauthorized photography, will not be tolerated. Come help build community, find support, and lead the future of the Rainbow Affinity Group.
Talk
Gaming & Interactive
New Technologies
Production & Animation
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Digital Twins
Dynamics
Games
Generative AI
Modeling
Performance
Real-Time
Rendering
Simulation
Virtual Reality
Full Conference
Virtual Access
Sunday
DescriptionIn this talk, we’ll share our experience working with Vulkan bindless techniques and ray tracing on Android smartphones, overcoming hardware constraints, driver issues, and other mobile platform limitations along the way. We'll also present a performance comparison across several Android devices available in 2025.
Real-Time Live!
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Games
Generative AI
Geometry
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Tuesday
DescriptionNVIDIA’s RTX Mega Geometry is a technology that accelerates Bounding Volume Hierarchy (BVH) building, enabling path tracing of scenes with up to 100x more triangles. For the first time, we can apply global illumination to sub-pixel micro geometry in real time.
Real-Time Live!
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Games
Generative AI
Geometry
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Tuesday
DescriptionDesmos started as a free graphing calculator that’s now used by most students when learning math. With 100M+ users around the world, it has also become a tool for creative exploration, revealing the incredible promise of the next generation of technical designers and mathematical artists. We’ll show off their work.
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionThis system converts 2D video into 3D holograms in real time. The system consists of a holography processor that generates real-time 8-layer, 30 FPS CGH data using HBM, a Linux host that extracts depth information from 2D images and transmits it as packets, and an optical unit that displays the hologram.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionIn this work, we introduce the first real-time framework that integrates yarn-level simulation with fiber-level rendering. The whole system provides real-time performance and has been evaluated through various application scenarios, including knit simulation for small patches and full garments and yarn-level relaxation in the design pipeline.
Real-Time Live!
Livestreamed
Recorded
Full Conference
Virtual Access
Tuesday
DescriptionJoin us to cheer Natalya Tatarchuk as we present her with an award to recognize the 20th Anniversary Advances in Real-Time Rendering in Games.
Real-Time Live!
Livestreamed
Recorded
Full Conference
Virtual Access
Wednesday
DescriptionAfter the main event on Tuesday night, keep the Real-Time Live! excitement going by getting a closer look at some of the real-time projects on Wednesday morning:
Drawing with Light: AI-Driven Visual Synthesis for Real-Time Laser Installations – Miegakure: a Game Where You Explore and Interact With a 4D World – Real-Time Graphics in Desmos, With Just Math and a Browser – Real Time Path-Tracing with NVIDIA RTX MegaGeometry – PUPPIX - Real Time Live Performed Digital Characters Using Physical Puppet Twins
Take advantage of the opportunity to have your questions answered!
Industry Session
Production & Animation
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Digital Twins
Games
Generative AI
Geometry
Graphics Systems Architecture
Hardware
Image Processing
Industry Insight
Lighting
Performance
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Simulation
Virtual Reality
Full Conference
Experience
Exhibits Only
Tuesday
Description: In this session, we’ll explore the key challenges of bringing real-time ray tracing to the Media & Entertainment industry, including multi-machine synchronization, multi-GPU rendering, lens distortion, and more. We’ll show how ray tracing simplifies complex problems and how a connected ecosystem—using standards like USD and MaterialX—enables powerful, flexible workflows. These innovations also unlock new possibilities for virtual production, making it possible to achieve high-quality, real-time visuals. We’ll demonstrate why this is the only real-time solution today that meets the demanding needs of M&E.
Industry Session
Production & Animation
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Digital Twins
Games
Generative AI
Geometry
Graphics Systems Architecture
Hardware
Image Processing
Industry Insight
Lighting
Performance
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Simulation
Virtual Reality
Full Conference
Experience
Exhibits Only
Tuesday
Description: Landis Fields and Shannon Thomas from Industrial Light & Magic (ILM) walk us through their creative development workflow and how RTX rendering in Unreal Engine brings reality as well as efficiencies to storytelling.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
Description: After the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation, such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees with more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
Description: After the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation, such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees with more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
Description: This paper presents a method for mapping curved surfaces to the plane without shear, enabling rectangular parameterizations. It introduces a novel approach for computing integrable, orthogonal frame fields. The method improves mesh quality, supports rich user control, and outperforms existing techniques in simulation, modeling, retopology, and digital fabrication tasks.
Technical Workshop
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Animation
Dynamics
Simulation
Full Conference
Experience
Description: This workshop aims to explore the evolution of subspace methods in physical simulation, tracing their origins from classical engineering formulations to cutting-edge neural techniques. By gathering leading researchers, students, and practitioners, the session will serve as a platform for cross-disciplinary dialogue, education, and community building around model reduction techniques in graphics and simulation.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
Description: Reenact Anything introduces a unified framework for semantic motion transfer, covering applications from full-body and face reenactment to controlling the motion of inanimate objects and the camera. Motions are represented using text/image embeddings of an image-to-video diffusion model and are optimized based on a given motion reference video.
Art Paper
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Games
Generative AI
Performance
Physical AI
Real-Time
Robotics
Simulation
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Experience
Monday
Description: This practice-based project reimagines Beckett’s Not I in virtual reality, marrying minimalist theatre with immersive technology. A lone, disembodied Metahuman mouth exploits VR’s intense presence while subverting customary embodiment and audience agency. Integrating performing avatars, the work probes authenticity, identity, and authorship, demonstrating how “subtractive dramaturgy” thrives in an additive medium. Findings advance performance studies, XR design, and digital humanities by showing how technology reshapes creativity, embodiment, and storytelling.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
Description: The alignment of text, images, and 3D is very challenging, yet it is crucial and beneficial for many tasks. We explore and reveal the characteristics of the native 3D latent space for 3D generation, making it decomposable and low-rank, thereby enabling efficient learning for multimodal local alignment and achieving precise local enhancement and part-level editing of 3D geometry.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
Description: We present the first drivable full-body avatar model that reconstructs perceptually realistic relightable appearance.
Spatial Storytelling
Arts & Design
Research & Education
Not Livestreamed
Not Recorded
Animation
Art
Augmented Reality
Ethics and Society
Performance
Real-Time
Virtual Reality
Full Conference
Experience
Description: Remixing the Flying Words Project is a mixed-reality installation that reimagines an ASL poem through immersive technology. Using motion capture and AI-generated imagery, it enables audiences to experience sign language poetry kinesthetically. Presented in a mixed-reality headset, it transforms linguistic translation into a dynamic, multisensory engagement with spatial storytelling.
Birds of a Feather
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Dynamics
Industry Insight
Full Conference
Experience
Description: This is an annual discussion of large-scale rendering. Render farm infrastructure, whether on-premises or hybrid cloud, suffers software, queue management, reliability, and performance woes. Industry experts and new entrants alike are welcome to share knowledge, experience, and best practices. Mail [email protected] with any questions or comments.
This is part of a linked series of Technical Pipeline BoFs, covering the VFX Reference Platform, Renderfarming, Cloud Native, Pipeline, Studio Storage, and MLOps.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
Description: We present RenderFormer, a neural rendering pipeline that directly renders an image from a triangle-based representation of a scene with full global illumination effects, and that does not require per-scene training or fine-tuning.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
Description: We introduce reservoir splatting, a technique preserving exact primary hits during temporal ReSTIR. This approach makes temporal path resampling more robust under motion, especially for regions with high-frequency detail. We further demonstrate how reservoir splatting naturally enables ReSTIR support for both motion blur and depth of field.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
Description: Redesign spaces effortlessly: ReStyle3D transforms indoor scenes by transferring object-specific styles from a single reference image while preserving 3D coherence. Combining semantic-aware diffusion and depth guidance, it enables photo-realistic virtual staging, faithfully redecorating furniture, textures, and decor. Ideal for interior design, our method outperforms existing approaches in realism, detail fidelity, and cross-view consistency.
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Performance
Real-Time
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Description: This work is a 6DoF XR interactive narrative. The railway network once reshaped the history and visual culture of northern Chinese cities; ironically, it also witnessed the slow depletion of resources across these vast areas. While connecting these towns, the railway tracks also isolated them, in both real space and the virtual algorithmic database. We collected hundreds of videos from four northern Chinese cities and used Gaussian blur and deep learning to create immersive digital scenes. We hope to create an optimistic future archaeological landscape: electronic tracks connect data islands and become distributed virtual narrative spaces.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
Description: To combine deep learning's generalization with traditional methods' interpretability, we propose CustomBF—a hybrid framework that customizes bilateral filter components per point. By addressing key limitations of the classic bilateral filter, CustomBF achieves robust, interpretable, and effective point cloud denoising across diverse scenarios.
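As background for this entry: the classic (non-customized) bilateral filter that CustomBF builds on can be sketched for point clouds as a normal-directed, edge-preserving smoother. The function name and parameters (`sigma_s`, `sigma_r`, `k`) below are illustrative choices, not taken from the paper.

```python
import numpy as np

def bilateral_denoise(points, normals, sigma_s=0.1, sigma_r=0.05, k=16):
    """One pass of classic bilateral point-cloud denoising: each point
    moves along its normal by a weighted average of its neighbors'
    signed offsets, with spatial and range Gaussian weights."""
    denoised = points.copy()
    for i, (p, n) in enumerate(zip(points, normals)):
        d = np.linalg.norm(points - p, axis=1)        # distances to all points
        nbrs = np.argsort(d)[1:k + 1]                 # k nearest neighbors (excluding self)
        offsets = (points[nbrs] - p) @ n              # signed distance along the normal
        w = (np.exp(-d[nbrs] ** 2 / (2 * sigma_s ** 2))      # spatial (closeness) weight
             * np.exp(-offsets ** 2 / (2 * sigma_r ** 2)))   # range (similarity) weight
        if w.sum() > 1e-12:
            denoised[i] = p + n * (w @ offsets) / w.sum()
    return denoised
```

The range weight is what makes the filter edge-preserving: neighbors far from the local tangent plane contribute little, so sharp features are not smoothed away. CustomBF's contribution, per the abstract, is learning to customize these components per point rather than fixing them globally.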
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
Description: RigAnything is a transformer-based model that autoregressively generates 3D rigging without templates. It sequentially predicts joints and skeleton topology while assigning skinning weights, working on objects in any pose. It’s 20× faster than existing methods, completing rigging in under 2 seconds with state-of-the-art quality across diverse object types.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
Description: After the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation, such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees with more interaction with the individual authors.
Industry Session
Production & Animation
Animation
Industry Insight
Performance
Pipeline Tools and Work
Full Conference
Experience
Exhibits Only
Monday
Description: This talk showcases how the Netflix Animation Studios rigging team leverages a close collaboration with R&D to build intuitive character authoring tools and workflows, empowering our artists to keep pushing their creativity in crafting high-end character performances.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
Description: After the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation, such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees with more interaction with the individual authors.
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Education
Ethics and Society
Games
Generative AI
Haptics
Modeling
Performance
Real-Time
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Description: Rising River is an AI-integrated self-analysis VR experience. It began with a question: Can gaming and VR make the therapeutic benefits of Jungian shadow work more accessible to everyone?
The participant will revive a dried-up river by offering words that reflect their inner selves, turning their responses into tangible elements.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
Description: We propose a neural representation for 3D assets with complex shading. We precompute shading and scattering on ground-truth geometry, enabling high-fidelity rendering with full relightability, eliminating complex shading models and multiple scattering paths, offering significant speed-ups and seamless integration into existing rendering pipelines.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
Description: After the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation, such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees with more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
Description: After the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation, such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees with more interaction with the individual authors.
Production Session
Production & Animation
Livestreamed
Recorded
Animation
Industry Insight
Lighting
Pipeline Tools and Work
Full Conference
Virtual Access
Wednesday
Description: Enter the world of Dune: Prophecy and discover how Image Engine helped bring this iconic universe to life through visual effects.
In this session, Image Engine shares how a concept-first approach, streamlined workflows, and technical problem-solving allowed us to deliver some of the series’ most ambitious sequences, ranging from the vast genetic memory library inside a Thinking Machine to massive sand simulations featuring the legendary sandworm.
We’ll break down how our pipeline enabled scalable, collaborative solutions for complex challenges across departments. From the intricacies of animation-driven holograms to the technical demands of simulating cascading sand, attendees will walk away with valuable insights and practical takeaways they can apply to their own work.
Key topics include:
Concept-Driven Approach: How we used early concept art and reference studies to align creative vision with the client, gaining buy-in before production began.
Streamlined Pipeline: See how our FX library card swap pipeline, batchable templates, and attribute-driven setups helped smaller teams work more efficiently without sacrificing quality.
Genetic Library: Explore the design and lighting of Anirul, the vast genetic archive, as well as how our custom tools allowed animation to drive holographic FX that lit the scene and reflected in real time.
Large-Scale Environment FX: Learn how we handled desert and storm simulations at scale, including tricks to manage heavy particle FX while maintaining art direction.
Holograms: Get a look at how we built the complex, multi-camera holographic war table, featuring hundreds of procedurally generated projectors and lights, and aligned outputs across departments for seamless integration.
Whether you're a student, generalist, or pipeline developer, this talk offers an approachable, behind-the-scenes look at how Image Engine tackled complex sequences through early concept alignment, creative problem solving, and efficient pipeline design.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
Description: ScaffoldAvatar presents a novel approach for reconstructing ultra-high fidelity animatable head avatars, which can be rendered in real-time. Our method operates on patch-based local expression features and synthesizes 3D Gaussians dynamically by leveraging tiny scaffold MLPs. We employ color-based densification and progressive training to obtain high-quality results and fast convergence.
Appy Hour
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Computer Vision
Education
Games
Generative AI
Image Processing
Lighting
Pipeline Tools and Work
Real-Time
Spatial Computing
Full Conference
Experience
Description: ScavengeAR is a mobile AR creature-collecting app deployed at SIGGRAPH from 2017 to 2019, with over 1,000 daily active users during the conference run. Now it's 2025, and the core team has refactored the app for a future launch. Let's talk about AR development, then and now.
ACM SIGGRAPH 365 - Community Showcase
Production & Animation
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Generative AI
Industry Insight
Full Conference
Experience
Description: Scott Ross, former general manager of Industrial Light & Magic (ILM), senior vice president of Lucasfilm, and co-founder of Digital Domain, brings unmatched experience in the VFX industry. Under his leadership, ILM transitioned into the world's leading digital VFX company, winning five Academy Awards. Founding Digital Domain with partners James Cameron (Avatar, Titanic) and Stan Winston (Jurassic Park), Ross led the company to three Oscar wins for Titanic, What Dreams May Come, and The Curious Case of Benjamin Button.
Ross will speak extensively about his career, the trials and tribulations of leading ILM through its rebirth, and founding Digital Domain and building it into an Academy Award-winning studio. He will also address the elephant in the room: AI and what it portends for the future of the VFX industry.
Birds of a Feather
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Generative AI
Full Conference
Experience
Description: This BoF session explores how independent filmmakers can use generative AI and LLM orchestration to produce films from script to screen. By coordinating multiple AI tools, such as those for writing, visualization, and editing, small teams can automate tasks and enhance creativity on limited budgets. The session introduces the concept of AI orchestration in filmmaking, showcases emerging practices like chaining scriptwriting and image-generation tools, and opens a discussion on how AI can democratize and empower storytelling without replacing human creativity.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
Description: Secret Level presents 15 original short stories set in classic video game worlds. Platige Image created the "Good Conflict" episode, set in the world of Crossfire. As a storm approaches, two rival mercenary groups collide, each fighting for their vision of the greater good. Their fates hang in the balance.
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Education
Ethics and Society
Games
Generative AI
Haptics
Modeling
Performance
Real-Time
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Description: Seed of Light is an art game where museum visitors and mass online players connect through an intertwined plotline, mediated by the custom 'Seed' controller. It depicts an artist's journey through life's chapters, transcending cultural and identity barriers to foster profound empathy and emotional connections across diverse audiences.
Spatial Storytelling
Arts & Design
Production & Animation
Not Livestreamed
Not Recorded
Performance
Virtual Reality
Full Conference
Experience
Description: Seeing Yourself on Stage explores the evolution of a revolutionary real-time performer monitoring system in VR. While built for motion capture performance, its applications extend beyond theater—enhancing training simulations, squad-based VR gaming, third-person gameplay, VTubing, and virtual production, offering new solutions for immersive self-monitoring in digital environments.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
Description: We introduce a novel segment-based framework for light transport simulation, efficiently assembling paths from disconnected segments. Our method includes innovative segment sampling techniques and corresponding estimation strategies. To demonstrate its strengths, we propose a robust bidirectional path filtering prototype, achieving superior rendering quality and faster convergence than state-of-the-art methods.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
Description: We introduce a novel method that integrates unsupervised style from arbitrary references into a text-driven diffusion model to generate semantically consistent stylized human motion. We leverage text as a mediator to capture the temporal correspondences between motion and style, enabling the seamless integration of temporally dynamic style into motion features.
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Education
Ethics and Society
Games
Generative AI
Haptics
Modeling
Performance
Real-Time
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Description: SentimentVoice is a live VR performance using emotion-tracking AI to amplify immigrant narratives. It transforms surveillance technology into an empathetic storytelling tool, actively listening to immigrant stories, responding to voice and facial expressions. Featuring real stories, interactive VR environments, and AI-driven visuals, the project explores identity, memory, and emotional communication.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
Description: We introduce shape-space eigenanalysis to compute eigenfunctions across continuously-parameterized shape families. These eigenfunctions are obtained by minimizing a variational principle. To handle eigenvalue dominance swaps at points of multiplicity, we incorporate dynamic reordering during optimization. The method is discretization-agnostic and differentiable, enabling applications in sound synthesis, locomotion, and elastodynamic simulation.
Stage Session
New Technologies
Production & Animation
Capture/Scanning
Digital Twins
Image Processing
Rendering
Simulation
Full Conference
Experience
Exhibits Only
Wednesday
Description: Join Sony and Pixomondo (PXO) - a Sony Group Company known for its award-winning work in virtual production, visualization, and VFX - for an exclusive showcase of their latest collaboration. Leveraging PXO’s creative and virtual production expertise alongside Sony’s state-of-the-art technology, the partnership unveils an innovative short film produced using PXO AKIRA, PXO's cutting-edge vehicle processing ecosystem. The production brings together a motion platform, robotic camera crane, an LED volume, and software to control the ecosystem, transforming the way we tell stories when filming vehicles. This pioneering project demonstrates the seamless integration of Sony technology across every stage of the filmmaking process, from pre-production to post, transforming the future of dynamic vehicle storytelling.
Talk
Arts & Design
Gaming & Interactive
Production & Animation
Livestreamed
Recorded
Not Recorded
Animation
Digital Twins
Industry Insight
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Full Conference
Virtual Access
Sunday
Description: DreamWorks has long shared rig templates between different feature productions. Re-use avoids the need to constantly re-invent common behavior. This talk presents a new synchronization paradigm, based on Premo's new integrated versioning, that allows data to be efficiently synchronized between productions.
Art Gallery
Art Paper
Gaming & Interactive
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Experience
Description: Join artists and researchers for engaging discussions on the intersection of art and computer graphics. These interactive roundtables offer a chance to share insights, discuss emerging trends, and explore the creative challenges and opportunities within the SIGGRAPH Arts community.
Topics may include: Connecting Nature, Art & Technology; AI and Art; Digital Art History and Archives; Media Art Documentation; Art+Science Collaborations; and others that arise from the Art Gallery and Art Papers at the conference.
Frontiers
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Art
Capture/Scanning
Generative AI
Pipeline Tools and Work
Real-Time
Virtual Reality
Full Conference
Experience
Description: In this special demonstration with the Frontiers workshop, Hybrid Dance Xplorations: Artist-Centric XR/AI Sandbox for Co-Creation and Performance, we feature the latest real-time generative AI work of artist and scientist Steve DiPaola, work-in-play of spatial artist Juliana Loh (Forest Wild XR), and our collaborators in an XR/AI sandbox with a virtual camera system (PeelDev VCam) and touchless interactions (NZTech HoverTap) for hybrid dance, movement and performance.
Related: Attend our Hybrid Dance Xplorations Workshop on Tuesday, Aug 12, 9:00am-1:00pm.
Forest Wild XR Co-Creators:
Juliana Loh | artistic director & spatial artist
Jonathan Bierman | music/sound & fx designer
Elijah Sam | choreographer & dancer
Skye Higgins (animation), Szeka Tse (shaders), Pouya Salehi (fireball) | art&tech
Clara Xu (fireflies) | dance
Soleil Mousseau, Lauren Butterfield, Ashley Sankaran-Wee | dance
Jerry Wang, Nick Sung (vcam), Jonah Marshall, Trinity Barnes | support
John Geyer, Haley Mills | design & 3D modelling
Derrick Carter | producer - sandbox @ the sawmill
Sang Mah | producer
Forest Wild XR is Juliana's deeply personal work that metaphorically traces her mother's journey with Alzheimer’s, capturing the emotional terrain of a condition that erodes memory, identity, and connection. This first xploration, work-in-play, leads us to the healing power of nature in three emotional beats through art, dance & tech.
This special SIGGRAPH 2025 Frontiers Hybrid Dance Xplorations demonstration is sponsored by Simon Fraser University | School of Interactive Arts & Technology.
Keynote Speaker
Livestreamed
Recorded
Full Conference
Virtual Access
Experience
Exhibits Only
Tuesday
DescriptionTo a distant spacecraft, the richness of our home planet appears as nothing more than a fraction of a pixel. When NASA’s Voyager 1 spacecraft took its famous Pale Blue Dot photo in 1990, the only planets known were those in the Solar System. Since then, nearly 6,000 planets have been discovered around other stars. These exoplanets are wildly different from our own, so the hunt for another Earth is on. To find it, astrophysicists are developing the technology to directly image a habitable exoplanet. This distant pale blue dot may provide the first glimpse of life beyond Earth and answer the question: “Are we alone?” Join us for a behind-the-scenes look at the future of the search for life in the Universe and how it’s reshaping the art and storytelling of Earth and alien worlds.
Dr. Anjali Tripathi is an astrophysicist at NASA’s Jet Propulsion Laboratory (JPL) and NASA’s inaugural Exoplanet Science Ambassador. As an expert in planet formation and evolution, she has contributed to the design of new space missions for NASA. Dr. Tripathi is a leading science communicator, regularly featured by the BBC, PBS, and TED, and a film and television consultant for the National Academy of Sciences. She has served on the NASA Sea Level Change Team, been a Research Associate of the Smithsonian, and led data visualization for the L.A. County Department of Public Health COVID Data and Epidemiology Team. She previously led science policy for the White House Office of Science and Technology Policy and the U.S. Department of Agriculture. She earned degrees in physics and astrophysics from Harvard, Cambridge, and MIT.
Keynote Speaker
Livestreamed
Not Recorded
Full Conference
Virtual Access
Experience
Exhibits Only
Monday
DescriptionSIGGRAPH 2025 Opening Session
The SIGGRAPH 2025 Conference and ACM SIGGRAPH leadership warmly welcome all attendees and will share valuable insights into the future direction of our industry, setting the stage for an engaging and informative event.
SIGGRAPH 2025 Keynote Speaker
Can AI algorithms make art and be considered artists? Within the past decade, the growth of new neural network algorithms has enabled exciting new art forms with considerable public interest. These tools raise recurring questions about their status as creators and their effect on the arts. In this talk, Dr. Aaron Hertzmann will discuss how these developments parallel the development of previous artistic technologies, like oil paint, photography, and traditional computer graphics, with many useful analogies between past and current developments. Dr. Hertzmann argues that art is a social phenomenon — that “AI” algorithms will not have human-level intelligence in the foreseeable future — and so it is extremely unlikely that we will consider algorithms to be artists either. However, they, like past art technologies, will change the way we make and understand art, for better and for worse.
Keynote Speaker
Livestreamed
Recorded
Full Conference
Virtual Access
Experience
Exhibits Only
Monday
DescriptionJoin us for the special address with NVIDIA Research leaders Sanja Fidler, Aaron Lefohn, and Ming-Yu Liu as they chart the next frontier of graphics and simulation.
Today, AI is transforming computer graphics — and computer graphics are driving the next generation of AI.
Sanja, Aaron, and Ming-Yu will showcase the latest breakthroughs in computer graphics technologies, from neural rendering and materials to world foundation models. Plus, explore the new opportunities and applications these capabilities are unlocking across media, automotive, manufacturing, and robotics.
About the Speakers
Sanja Fidler is vice president of AI research at NVIDIA, leading the company’s Spatial Intelligence Lab. She is also an associate professor at the University of Toronto, and an affiliate faculty member at the Vector Institute, which she co-founded. Fidler co-authored over 130 scientific papers in the fields of computer vision, machine learning and NLP. She has served as Area Chair for a variety of conferences, including the Conference on Computer Vision and Pattern Recognition (CVPR), International Conference in Computer Vision (ICCV), the Conference on Empirical Methods in Natural Language Processing (EMNLP), the International Conference on Learning Representations (ICLR), the Conference on Neural Information Processing Systems (NeurIPS), and SIGGRAPH.
Fidler has received the NVIDIA Pioneer of AI Award, Amazon Academic Research Award, Facebook Faculty Award, Early Researcher Award, University of Toronto’s Innovation Award, and the Connaught New Researcher Award. In 2018, she was appointed as a Canadian CIFAR AI Chair. She has also been ranked among the top three most influential AI female researchers in Canada by Re-WORK, was named in The Globe and Mail’s Changemakers in 2025, and was listed among the top people in AI in 2023 by Business Insider. With her co-workers, she received Best Paper Honorable Mention awards at CVPR 2017 and SIGGRAPH 2023, and the Best Paper Award at SIGGRAPH Asia 2023. Her main research interests are in spatial intelligence: 3D content creation, spatial understanding, and simulation for robotics.
Aaron Lefohn is vice president of graphics research at NVIDIA, overseeing teams focused on rendering, AI graphics, and graphics systems. His teams’ inventions have played key roles in bringing path tracing to real-time graphics and pioneering real-time AI computer graphics. Recent NVIDIA products derived from his teams’ inventions include DLSS, RTX Path Tracing, RTX Dynamic Illumination, Neural Shading, the Slang shading language, and more.
Aaron has led real-time rendering and graphics programming model research teams for over 15 years and has productized many inventions into games, professional graphics software, GPU hardware, and GPU graphics APIs.
Aaron holds a Ph.D. in computer science from UC Davis and an M.S. in computer science from the University of Utah.
Ming-Yu Liu is a vice president of research at NVIDIA and a fellow of IEEE. He leads the Deep Imagination Research group at NVIDIA, which currently focuses on Generative AI for Physical AI. His research team has helped create several new product categories for NVIDIA, including NVIDIA Cosmos, a developer-first world foundation model platform for Physical AI; NVIDIA Edify, a family of Generative AI models that powers Getty Images and Shutterstock’s GenAI services; NVIDIA Canvas [GauGAN], a real-time painting tool that uses GANs to turn simple brushstrokes into photorealistic images; and NVIDIA Maxine [LivePortrait], an AI-first cloud-native video streaming platform. His research group publishes scientific papers in top-tier AI conferences regularly, including NeurIPS, ICLR, ICML, CVPR, ICCV, ECCV, and SIGGRAPH. Several of their papers have received prestigious awards.
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Art
Augmented Reality
Education
Full Conference
Experience
DescriptionSponsored by the ACM SIGGRAPH Digital Arts Community, the SIGGRAPH 2025 Art Gallery, and Art Papers, in collaboration with Bentall Centre and Downtown Van, this event brings together artists, designers, and creatives for an evening of connection and inspiration. Join us as we celebrate a variety of arts happenings at SIGGRAPH 2025, as well as the launch of the New Media Architectures: Vancouver exhibition.
Attendees will have the opportunity to experience augmented-reality interventions by seven artists working in response to an immersive public mural created by local mural artist Priscilla Yu. The artworks by Jiwon Ham & Ana María Cárdenas, Joshua Dickinson, Sahar Sajadieh & Manaswi Mishra, Darya Ramezani & Gene Anthony Santiago-Holt can be explored through self-guided discovery or guided tours.
Attendees interested in participating in a guided experience should register for free in advance of the event. Space is limited; secure your spot early.
The experience starts at the reception at Bentall Centre Neighborhood Patio, followed by a guided tour to the AR mural activations and through other Vancouver murals before returning to Bentall Centre; 40 minutes total.
Don't forget to wear your conference badge!
Event Meeting Location: Neighborhood Patio @ Bentall Centre, 595 Burrard St, Vancouver, BC V7X 1L3
Time & Date: Monday, Aug. 11th, 5:30 pm - 7:00 pm
ACM SIGGRAPH 365 - Community Showcase
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Education
Full Conference
Experience
DescriptionSIGGRAPH Asia 2025, the 18th ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia, will take place from 15 – 18 December 2025 at the Hong Kong Convention and Exhibition Centre (HKCEC).
Generative Renaissance: This year’s theme explores how AI is transforming creativity, art, and science, leading to new forms of expression and discovery. The event will highlight how generative AI is reshaping industries and sparking new creative possibilities.
ACM SIGGRAPH 365 - Community Showcase
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Education
Full Conference
Experience
DescriptionNew to SIGGRAPH? Returning after a SIGGRAPH-break? Just curious about what the conference has to offer? If your answer is YES to any of these questions, this session is your friendly introduction to the week ahead.
SIGGRAPH for Beginners will help you make the most of your time at the conference. You'll get practical tips, helpful context, and a chance to ask questions in a relaxed setting.
Topics include:
- How to navigate the conference and its program – Jim Kilmer (Pathfinders)
- Where to find help and support during the week – Student Volunteers Program
- Top 5 sessions you won’t want to miss
- An introduction to ACM SIGGRAPH, the organization behind the event – Mikki Rose (Conference Advisory Group Chair)
Bring your questions and curiosity—and stay until the end for our Meet & Greet, featuring drinks and snacks, to kick off the conference in style!
ACM SIGGRAPH 365 - Community Showcase
Research & Education
Not Livestreamed
Not Recorded
Education
Full Conference
Experience
DescriptionThe SIGGRAPH Thesis Fast Forward is a forum for Ph.D. students in computer graphics to present and broadcast their research in 3 minutes or less. Students from around the world introduce us to a wide variety of topics spanning research over the last five years.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionBertil lives with his parents. One day, a little thumb-sized boy appears under his bed! His name is Nils Karlsson Pussling. The two boys bond right away, and Nils shows Bertil a fascinating magical world right inside his bedroom walls. Neither of them will ever be lonely again.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionA simple and robust modification to triangle mesh reduction bridges the gap for what artists want in quad-dominant mesh reduction, preserving symmetry, topology, and joints without sacrificing geometric quality, and producing high-quality level-of-detail meshes at no additional cost compared to prior methods.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe introduce a novel method for accurate 3D garment reconstruction from single-view images, bridging 2D and 3D representations. Our mapping model creates connections among image pixels, UV coordinates, and 3D geometry, resulting in realistic garments with intricate details and enabling downstream applications like garment retargeting and texture editing.
Industry Session
Production & Animation
Artificial Intelligence/Machine Learning
Capture/Scanning
Computer Vision
Pipeline Tools and Work
Real-Time
Rendering
Full Conference
Experience
Exhibits Only
Wednesday
DescriptionILM VFX Supervisor Nick Marshall and Environment Supervisor Anton Borisov discuss the VFX work on Ryan Coogler's hit horror movie Sinners, and break down how the team built the period Mississippi train station environments.
Birds of a Feather
Research & Education
Full Conference
Experience
DescriptionThis Birds of a Feather invites designers and researchers to explore how games, spatial narratives, and philosophical metaphors—such as Zhuangzi’s Wandering Beyond—can support cultural wayfinding, ecological storytelling, and relational design. Through examples ranging from Indigenous-informed educational games to architecture and XR, we ask: how do we design for becoming, belonging, and freedom?
Discussion Goals / Guiding Questions
How can philosophical ideas like Zhuangzi’s “wandering” inform interactive design?
What does “wayfinding” mean across digital, architectural, and cultural systems?
How can design foreground land-based knowledge and cultural protocols?
How do we design for complexity, multiplicity, and embodied storytelling?
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionThis paper presents Sketch2Anim, the first approach to automatically translate 2D storyboard sketches into high-quality 3D animations through multi-conditional motion generation.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe propose Sketch3DVE, a sketch-based 3D-aware video editing method to enable detailed local manipulation of videos with significant viewpoint changes. Our approach leverages detailed analysis and editing of underlying 3D scene representations, combined with a diffusion model to synthesize realistic and temporally coherent edited videos.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionThis work addresses the challenge of learning robust interaction skills from limited demonstrations. By introducing novel data augmentation techniques for skill transitions and recovery patterns, combined with enhanced reinforcement imitation learning methods, we achieve superior performance in learning interaction skills, demonstrating improved generalization and recovery capabilities across diverse manipulation tasks.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionA man traumatized by his youth spent in a reformatory is devoted to saving kids destined for a life of misery through death. For him, dying young means living forever as the best version of oneself.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionWhen a Lonely Polar Bear Can't Find a Friend... He Makes One!
Set in a rapidly changing world, it tells the story of a polar bear on his quest for companionship.
A hand-drawn 2D film, it's infused with humor and emotional depth in the tradition of classic animated films.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionSOAP awakens the 3D princess from 2D stylized photos. Unlike other works that directly drive the 2D photos, SOAP reconstructs well-rigged 3D avatars, with detailed geometry and all-around texture, from just a single stylized picture.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionIn the context of quasi-Monte Carlo rendering, we introduce a new Sobol' construction and demonstrate that particular pairs of polynomials of the form p and p^2+p+1 in Sobol'-based sampling lead to (1, 2)-sequences. They can be combined to form high-dimensional low discrepancy sequences with good 2D projections.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Animation
Artificial Intelligence/Machine Learning
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Fabrication
Games
Graphics Systems Architecture
Hardware
Image Processing
Industry Insight
Lighting
Math Foundations and Theory
Modeling
Physical AI
Pipeline Tools and Work
Real-Time
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionAfter the enthusiastic response to our BOF in 2024, we're back and everyone is invited to join the conversation!
This year we're changing the format slightly: with the option of joining a few smaller conversations, we hope to create a more welcoming space, especially for those less comfortable speaking in larger groups.
Topics to explore include: Confidence, Conflict, Focus and Productivity, Delegation, Uncertainty, Giving and Receiving Feedback, Power Dynamics, Imposter Syndrome, Resilience, Psychological Safety, Perception of Risk, and more...
All voices, perspectives, experience, insights, questions, and curious minds are very welcome!
Spatial Storytelling
Arts & Design
Gaming & Interactive
Not Livestreamed
Not Recorded
Art
Artificial Intelligence/Machine Learning
Virtual Reality
Full Conference
Experience
DescriptionSpace Echo 2.0 is an immersive multiplayer VR experience exploring the paradox of communication. Inspired by Echo and Narcissus, it challenges participants as speaking pushes them apart. In a shared virtual environment, two users float through space, engaging in dialogue and collaborating on tasks, prompting reflection on connection and isolation.
Spatial Storytelling
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Art
Augmented Reality
Education
Real-Time
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionSpatial p5 is an open-source toolkit that simplifies real-time prototyping for mixed reality. Built on p5.js, it enables artists and designers to create immersive experiences without technical barriers. This talk and demo showcase its development, creative potential, and impact on interactive storytelling through intuitive, browser-based XR experimentation.
Talk
Arts & Design
Research & Education
Livestreamed
Recorded
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Display
Hardware
Image Processing
Physical AI
Real-Time
Scientific Visualization
Spatial Computing
Full Conference
Virtual Access
Sunday
DescriptionMachine-Guided Spatial Sensing presents a novel augmented reality system that integrates active learning and human-in-the-loop interaction to measure environmental fields. Using an HMD and handheld sensor, the method accurately captures flow fields and other quantities, outperforming traditional approaches in speed, efficiency, and ease-of-use.
Real-Time Live!
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Games
Generative AI
Geometry
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Tuesday
DescriptionMachine-Guided Spatial Sensing uses augmented reality and human interaction to efficiently map environmental fields, such as air flows or gas concentrations. Continuous real-time data analysis updates the field map and guides the user's handheld sensor to optimal measurement points, enhancing accuracy.
Spatial Storytelling
Gaming & Interactive
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Virtual Reality
Full Conference
Experience
DescriptionThe Spatial Storybook system automatically converts a monaural audiobook into an immersive binaural presentation. It comprises an LLM prompted to reimagine a passage of text as a stage play, including stage directions and descriptions of rooms and wall materials; this information conditions a custom, real-time scene rendering engine.
Spatial Storytelling
Arts & Design
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Generative AI
Pipeline Tools and Work
Real-Time
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionWe share our process for capturing authentic human performance to create a visceral connection with immersive audiences. Topics covered:
• Capturing real-world and human performance with volumetric recording and Gaussian splatting
• Preserving imperfections
• Dialogue and sound in XR
• Reconfiguring timeless storytelling methods for an evolving medium
• How tech advances shape creation of powerful experiences
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe propose a method for estimating spatiotemporally varying indoor lighting from videos using a continuous light field represented as an MLP. By leveraging 2D diffusion priors fine-tuned to predict lighting jointly at multiple locations, our approach achieves superior performance and zero-shot generalization to in-the-wild scenes.
Computer Animation Festival
Production & Animation
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionSIGGRAPH 2025 will open by honoring a film that forever changed the course of animation, technology, and storytelling. Pixar’s “Toy Story”, the world’s first fully computer-animated feature film, will be celebrated in a special 30th anniversary event that captures the spirit of innovation, perseverance, and creativity that defines both the film and the SIGGRAPH community.
Presented by SIGGRAPH’s Computer Animation Festival in partnership with ACM SIGGRAPH Pioneers, the tribute will be held on Sunday, 10 August 2025, at the Vancouver Convention Centre. The celebration will begin at 12:30 p.m. PDT with a featured introduction by Ed Catmull, co-founder of Pixar and a pioneering figure in computer graphics. In “Pioneers Featured Speaker: Catmull Story: To SIGGRAPH and Beyond”, Catmull will share personal reflections on the breakthroughs, challenges, and triumphs that made “Toy Story” possible.
Following the talk and a live audience Q&A, attendees will enjoy trivia and giveaways before a special screening of “Toy Story”, transporting audiences back to where the magic — and CG revolution — began.
Image credit: “Toy Story”, Copyright © Disney/Pixar
ACM SIGGRAPH 365 - Community Showcase
Computer Animation Festival
Livestreamed
Recorded
Full Conference
Experience
DescriptionSIGGRAPH 2025 will open by honoring a film that forever changed the course of animation, technology, and storytelling. Pixar’s “Toy Story”, the world’s first fully computer-animated feature film, will be celebrated in a special 30th anniversary event that captures the spirit of innovation, perseverance, and creativity that defines both the film and the SIGGRAPH community.
Presented by SIGGRAPH’s Computer Animation Festival in partnership with ACM SIGGRAPH Pioneers, the celebration will begin with a featured introduction by Ed Catmull, co-founder of Pixar and a pioneering figure in computer graphics. In “Pioneers Featured Speaker: Catmull Story: To SIGGRAPH and Beyond”, Catmull will share personal reflections on the breakthroughs, challenges, and triumphs that made “Toy Story” possible.
Following the talk and a live audience Q&A, attendees will enjoy trivia and giveaways before a special screening of “Toy Story”, transporting audiences back to where the magic — and CG revolution — began.
Press release
All of Ed Catmull's contributions to SIGGRAPH
Image credit: “Toy Story”, Copyright © Disney/Pixar
Art Paper
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Art
Artificial Intelligence/Machine Learning
Audio
Digital Twins
Ethics and Society
Games
Generative AI
Real-Time
Virtual Reality
Full Conference
Virtual Access
Experience
Tuesday
DescriptionInstead of dwelling on the concern that AI will displace artists, we emphasise a role for artists in reshaping technology and branching it in new directions: a role that places us not as users of AI technology, waiting for its creative outputs, but as makers of what AI can be, perhaps leading us towards an AI that is as unnatural, occult, and esoteric as it is practical.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe introduce Sphere Carving, a method for automatically computing conservative bounding volumes for implicit surfaces. SDF queries define a set of spheres, from which we extract intersection points used to compute a bounding volume with guarantees. Sphere Carving is conceptually simple and independent of the function representation.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe introduce the spherical harmonics Hessian and solid spherical harmonics, a variant of spherical harmonics, enabling the computer graphics community to compute the spherical harmonics Hessian efficiently and accurately. These mathematical tools are used to develop an analytical representation of the Hessian matrix of spherical harmonics coefficients for spherical lights.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe leverage repetitions in 3D scenes to improve reconstruction in low-quality parts due to poor coverage and occlusions. Our method segments the repetitions, registers them together, and optimizes a shared representation with multi-view information flowing from all repetitions, improving the reconstruction of each individual repetition.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionSplat4D generates high-fidelity 4D content from monocular videos by integrating multi-view rendering, inconsistency identification, a video diffusion model, and asymmetric U-Net refinement. Our framework maintains spatial-temporal consistency while preserving details and following user guidance, achieving state-of-the-art benchmark performance. Applications include text/image-conditioned generation, 4D human modeling, and text-guided content editing.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe combine splines, a classical tool from applied mathematics, with implicit Coordinate Neural Networks to model deformation fields, achieving strong performance across multiple datasets. The explicit regularization from spline interpolation enhances spatial coherency in challenging scenarios. We further introduce a metric based on Moran’s I to quantitatively evaluate spatial coherence.
Keynote Speaker
Livestreamed
Recorded
Full Conference
Virtual Access
Experience
Exhibits Only
Tuesday
DescriptionAutodesk invites you to explore the transformative role of AI in the world of visual effects and animation, across film, TV, and games. Discover how Autodesk is advancing creative workflows with AI, harnessing its potential to empower artists to be their most creative, and opening the door to a broader spectrum of creators.
Acclaimed writer, director, and producer David S. Goyer — known for “The Dark Knight” trilogy, “Blade”, Apple TV+’s “Foundation”, STARZ’ “Da Vinci’s Demons”, and Netflix’s “The Sandman” — will reflect on how technology has shaped his storytelling over the years, and how AI might influence the next generation of creative voices. Goyer will be joined by actor Tye Sheridan and filmmaker Nikola Todorovic, co-founders of Wonder Dynamics (now a part of Autodesk) and the team behind Autodesk Flow Studio (formerly Wonder Studio), an innovative cloud-based platform that accelerates VFX pipelines with cutting-edge AI. Moderated by award-winning journalist and author Carolyn Giardina, the panel will explore how AI is reshaping the creative process, expanding access, and redefining the future of storytelling.
The keynote will also include insights from Autodesk leaders Mike Haley (SVP of Research) and Maurice Patel (VP of Media & Entertainment Strategy), who will share how Autodesk’s legacy of research and cross-industry innovation is driving the development of practical, creator-first AI tools. From MotionMaker in Maya to Autodesk Flow Studio, they’ll highlight how AI is helping artists work faster, iterate more freely, and stay in control of their creative vision.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
Description3D Gaussian Splatting (3DGS) enables fast 3D reconstruction and rendering but struggles with real-world captures due to transient elements and lighting changes. We introduce SpotLessSplats, which leverages semantic features from foundation models and robust optimization to remove transient effects, achieving state-of-the-art qualitative and quantitative reconstruction quality on casual scene captures.
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionWe demonstrate a robotic avatar concept in which pilots can embody a fish-like form, “swimming” through indoor spaces to interact with others remotely. By mimicking the flapping flight of birds, pilots can control the avatar through body movements. This work opens new opportunities for telepresence with a wing-based approach.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionExisting Gaussian Splatting avatars require desktop GPUs, limiting mobile device use. SqueezeMe converts these avatars into a lightweight representation, enabling real-time animation and rendering on mobile devices. By distilling the corrective decoder into an efficient linear model, SqueezeMe achieves 72 FPS on a Meta Quest 3 VR headset.
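As a toy illustration of the distillation step described above, the sketch below fits a linear model to a decoder by least squares. The 1D scalar setup, sample values, and function names are illustrative assumptions, not SqueezeMe's actual pipeline (which distills a corrective decoder over avatar latent codes).

```python
def distill_to_linear(decoder, samples):
    # Least-squares fit of w, b so that w * z + b matches decoder(z) over
    # the sampled latent codes: a 1D sketch of decoder distillation.
    n = len(samples)
    ys = [decoder(z) for z in samples]
    mz = sum(samples) / n
    my = sum(ys) / n
    cov = sum((z - mz) * (y - my) for z, y in zip(samples, ys))
    var = sum((z - mz) ** 2 for z in samples)
    w = cov / var
    b = my - w * mz
    return w, b  # linear replacement for the decoder: w * z + b

# A decoder that is already linear is recovered exactly (up to rounding).
w, b = distill_to_linear(lambda z: 3.0 * z + 0.5, [0.0, 0.5, 1.0, 2.0])
```

In the real system the payoff is that evaluating the linear replacement is cheap enough for mobile GPUs, at some cost in fidelity for strongly nonlinear decoders.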
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Talk
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Dynamics
Geometry
Pipeline Tools and Work
Rendering
Simulation
Full Conference
Virtual Access
Wednesday
DescriptionA suite of techniques from the Loki simulation framework addresses collision instabilities in character effects, offering solutions like proximity-tolerant contact projection, compliant gap expansion, strain limiting, and advanced collider management. These tools enable a stable, intuitive workflow for integrating physically based collisions with complex production animations.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionCosserat rods have become increasingly popular for simulating complex thin elastic rods. However, traditional approaches often encounter significant challenges in robustly and efficiently solving for valid quaternion orientations. We introduce Stable Cosserat rods, which can achieve high accuracy with high stiffness levels and maintain stability under large time steps.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionStable-Makeup is a diffusion-based makeup transfer method. It leverages a Detail-Preserving makeup encoder and content-structure control modules to preserve facial content and structure during transfer. Extensive experiments show that Stable-Makeup outperforms existing methods, offering robust, generalizable performance.
Birds of a Feather
New Technologies
Production & Animation
Animation
Digital Twins
Games
Modeling
Simulation
Full Conference
Experience
DescriptionUpdates on alignment of PBR materials, physics extensions, roundtripping between glTF and USD, interactivity, animation, FBX, and new geometric representations.
• Intro to the Metaverse Standards Forum, Khronos, and AOUSD and update from the 3D Asset Interoperability using USD and glTF Working Group
• glTF, OpenUSD – Tooling Pain Points and FBX Interoperability
• PBR Material Interoperability – MaterialX, OpenPBR, USD, and glTF
• 3D Gaussian Splatting glTF/OpenUSD alignment (glTF extension, proposed USD schema)
Birds of a Feather
Production & Animation
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionWe invite animators, riggers, and engineers to discuss animation tools (third party and proprietary), workflows and technology trends in the industry.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Capture/Scanning
Dynamics
Education
Games
Generative AI
Geometry
Industry Insight
Lighting
Math Foundations and Theory
Physical AI
Pipeline Tools and Work
Scientific Visualization
Simulation
Full Conference
Experience
DescriptionA discussion about the current technological and workflow state of hair in the Visual Effects, Animation, Video Games and VR/AR industries. Covering topics that span the entire pipeline of bringing hair to life.
Birds of a Feather
Production & Animation
Not Livestreamed
Not Recorded
Animation
Full Conference
Experience
DescriptionWe invite animators, riggers, and engineers to discuss rigging tools (third party and proprietary), workflows and technology trends in the industry.
Course
Research & Education
Livestreamed
Recorded
Geometry
Rendering
Simulation
Full Conference
Virtual Access
Sunday
DescriptionWe describe grid-free Monte Carlo methods for solving partial differential equations on complex geometries. These methods offer unique opportunities to accelerate engineering design cycles by being easy to parallelize and output-sensitive like Monte Carlo rendering, while bypassing the need for simulation-ready meshes that are burdensome to generate for conventional solvers.
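As one concrete instance of the grid-free Monte Carlo methods the course covers, here is a minimal walk-on-spheres sketch for the Laplace equation on a disk. The domain, tolerance, and sample counts are illustrative choices, not course material.

```python
import math
import random

def walk_on_spheres(x, y, sdf, boundary_value, eps=1e-3, max_steps=1000):
    # One random walk: jump to a uniform point on the largest sphere that
    # stays inside the domain, until we land in the eps-shell of the boundary.
    for _ in range(max_steps):
        r = sdf(x, y)  # distance from (x, y) to the boundary
        if r < eps:
            break
        theta = random.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(theta)
        y += r * math.sin(theta)
    # If max_steps is exhausted we absorb anyway (a small bias, fine here).
    return boundary_value(x, y)

def solve_laplace(x, y, sdf, boundary_value, n_walks=5000):
    # Monte Carlo estimate of the harmonic function at (x, y).
    return sum(walk_on_spheres(x, y, sdf, boundary_value)
               for _ in range(n_walks)) / n_walks

# Unit disk; the harmonic function u(x, y) = x equals its own boundary data,
# so the estimate at (0.3, 0) should be close to 0.3.
random.seed(0)
disk_sdf = lambda x, y: 1.0 - math.hypot(x, y)
estimate = solve_laplace(0.3, 0.0, disk_sdf, lambda x, y: x)
```

Note the properties the course description highlights: each walk is independent (trivially parallel), the cost depends only on the query point (output-sensitive), and no mesh of the domain is ever built, only distance queries.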
Talk
Arts & Design
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Computer Vision
Industry Insight
Lighting
Modeling
Pipeline Tools and Work
Real-Time
Rendering
Full Conference
Virtual Access
Sunday
DescriptionA new variation on an old classic, Steerable Perlin Noise offers anisotropic noise at little extra cost, offering a new dimension of control to the average artist.
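The talk's exact formulation isn't given here, but one simple way to make gradient noise anisotropic is to pull each hashed lattice gradient toward a steering direction. Everything below (the hash constants, the naive angle blend) is an illustrative sketch, not the presented method; with anisotropy=0 it reduces to classic 2D Perlin noise.

```python
import math

def fade(t):  # Perlin's quintic smoothstep
    return t * t * t * (t * (6 * t - 15) + 10)

def lerp(a, b, t):
    return a + t * (b - a)

def grad(ix, iy, steer_angle, anisotropy):
    # Hash a lattice point to a unit gradient, then pull it toward a
    # preferred direction: the "steering" that makes the noise anisotropic.
    h = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    angle = (h / 0xFFFFFFFF) * 2.0 * math.pi
    # Naive blend (ignores angle wrap-around); acceptable for a sketch.
    angle = lerp(angle, steer_angle, anisotropy)
    return math.cos(angle), math.sin(angle)

def steerable_perlin(x, y, steer_angle=0.0, anisotropy=0.0):
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    def dot_corner(cx, cy):
        gx, gy = grad(ix + cx, iy + cy, steer_angle, anisotropy)
        return gx * (fx - cx) + gy * (fy - cy)
    u, v = fade(fx), fade(fy)
    return lerp(lerp(dot_corner(0, 0), dot_corner(1, 0), u),
                lerp(dot_corner(0, 1), dot_corner(1, 1), u), v)
```

The extra cost over standard Perlin noise is one blend per lattice gradient, which matches the "little extra cost" claim; steer_angle and anisotropy can themselves vary over space for artist control.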
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionStitch-A-Shape introduces a novel framework for generating B-Rep models by directly addressing both topology and geometry. Using a sequential stitching approach, it assembles 3D shapes from vertices through curves to faces, effectively managing topological and geometric complexities. The framework demonstrates superior performance in shape generation, class-conditional generation, and autocompletion tasks.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present a novel stochastic version of the Barnes-Hut approximation. Regarding the level-of-detail (LOD) family of approximations as control variates, we construct an unbiased estimator of the kernel sum being approximated. Through several examples in graphics, we demonstrate that our method outperforms a GPU-optimized implementation of the deterministic Barnes-Hut approximation.
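The control-variate idea can be sketched in a one-level toy version: use a single far-field approximation (the kernel evaluated at the cluster centroid) as the control variate, and correct it with uniformly sampled residuals. The paper works with the full Barnes-Hut level-of-detail hierarchy; the kernel and point set below are illustrative assumptions.

```python
import math
import random

def kernel(x, y):
    # Illustrative inverse-distance kernel on 2D points.
    return 1.0 / math.hypot(x[0] - y[0], x[1] - y[1])

def stochastic_kernel_sum(x, points, centroid, n_samples):
    # Cheap deterministic term: treat the whole cluster as one point mass.
    n = len(points)
    far = kernel(x, centroid)
    coarse = n * far
    # Unbiased stochastic correction: sampled residuals against the
    # coarse approximation (the control variate).
    corr = sum(kernel(x, points[random.randrange(n)]) - far
               for _ in range(n_samples))
    return coarse + (n / n_samples) * corr

random.seed(1)
points = [(random.random(), random.random()) for _ in range(50)]
centroid = (sum(p[0] for p in points) / 50, sum(p[1] for p in points) / 50)
x = (5.0, 5.0)
exact = sum(kernel(x, p) for p in points)
estimate = stochastic_kernel_sum(x, points, centroid, n_samples=8)
```

The expectation of the estimator is exactly the true sum (the sampled residual term has mean `exact - coarse`), so unlike deterministic Barnes-Hut the error shows up as zero-mean noise rather than systematic bias.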
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionStochastic preconditioning adds spatial noise to query locations during neural field optimization; it can be formalized as a stochastic estimate of a blur operator. This simple technique eases optimization and significantly improves quality for neural fields, matching or outperforming custom-designed policies and coarse-to-fine schemes.
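The blur interpretation can be checked with a tiny 1D sketch: querying a function at Gaussian-jittered locations estimates its Gaussian blur in expectation. This stands in for the neural-field training loop, which is not reproduced here; all names below are illustrative.

```python
import random

def stochastic_query(f, x, sigma):
    # Perturb the query location; in expectation this returns the field
    # blurred by a Gaussian of width sigma, which smooths the loss landscape.
    return f(x + random.gauss(0.0, sigma))

# Sanity check of the blur interpretation: for f(x) = x**2 the Gaussian
# blur at x = 0 is exactly sigma**2 (the variance of the noise).
random.seed(0)
n = 20000
avg = sum(stochastic_query(lambda t: t * t, 0.0, 1.0) for _ in range(n)) / n
```

In a training loop the same one-line change applies wherever the field is evaluated at a sample point, with sigma typically annealed toward zero as optimization converges.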
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionTo reduce the high rendering costs and transmission bandwidth requirements of path tracing-based cloud rendering, we propose a novel streaming-aware rendering framework that learns a joint optimal model integrating two path-tracing acceleration techniques (adaptive sampling and denoising) with video compression, aided by G-buffers collected on the client side.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionStreamME takes live-stream video as input to enable rapid 3D head avatar reconstruction. It achieves impressive speed, capturing the basic facial appearance within 10 seconds and reaching high-quality fidelity within 5 minutes. StreamME reconstructs facial features through on-the-fly training, allowing simultaneous recording and modeling without the need for pre-cached data.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionA novel approach for the computational modeling of lignified tissues, such as those found in tree branches and timber, extends strand-based representation to describe biophysical processes at short and long time scales. The computationally fast simulation leverages Cosserat rod physics and enables the interactive exploration of branches and wood breaking.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe present a stroke-based method for transforming dynamic 3D scenes with smoke, fire, or clouds into painterly animations. Learning from user-provided exemplars, our system transfers stroke styles—color, width, length, and orientation—while preserving motion and structure. This enables expressive and coherent renderings of complex volumetric media.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionThe paper presents StructRe, a structure rewriting system for 3D shape modeling. It uses an iterative process to rewrite objects, either upwards to more concise structures or downwards to more detailed ones, generating hierarchies. This localized rewriting approach enables probabilistic modeling of ambiguous structures and robust generalization across object categories.
Birds of a Feather
New Technologies
Production & Animation
Artificial Intelligence/Machine Learning
Pipeline Tools and Work
Full Conference
Experience
DescriptionAs cyber threats rise and remote workflows expand, content security is more critical than ever. Join studio security assessors and executives from the MPA’s Trusted Partner Network as we break down the TPN and studio assessment process and share how the TPN program and MPA Best Practices are driving stronger security across the industry. Gain practical insights, ask questions, and be part of the conversation shaping a safer, more resilient media supply chain.
Birds of a Feather
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Animation
Artificial Intelligence/Machine Learning
Dynamics
Simulation
Full Conference
Experience
DescriptionStudio Storage: storage for VFX and animation has extreme requirements, and only a limited number of proven solutions are available unless you build your own. Industry experts and new entrants are all welcome for a sharing of knowledge, experience, and best practices. Mail [email protected] for any questions or comments.
This is part of a linked series of Technical Pipeline BoFs, covering the VFX Reference Platform, Renderfarming, Cloud Native, Pipeline, Studio Storage, and ML Ops.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe propose a novel text-to-vector pipeline with style customization that disentangles content and style in SVG generation. Our method represents the first feed-forward text-to-vector diffusion model capable of generating SVGs in custom styles.
Talk
Arts & Design
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Recorded
Animation
Dynamics
Pipeline Tools and Work
Rendering
Simulation
Full Conference
Virtual Access
Wednesday
DescriptionIn creating the stylized organic environments of The Wild Robot, the goal for Look Dev was to enable 3D artists to work more like traditional artists, capturing the impromptu and organic techniques of 2D art. This required innovative new tools to overcome the constraints of the 3D medium.
Production Session
Production & Animation
Not Livestreamed
Not Recorded
Animation
Modeling
Pipeline Tools and Work
Simulation
Full Conference
Wednesday
DescriptionSuperman marks the third feature film, and fifth project in total, on which Wētā FX has had the pleasure of working with James Gunn. In this talk you can hear how the team got stuck into bringing Superman's latest fight against an arch nemesis to life.
Four-time Academy Award nominee and Senior VFX Supervisor Guy Williams has captained Wētā FX’s work on all these projects, and contributed to VFX tent poles such as Peter Jackson’s The Lord of the Rings and King Kong, Steven Spielberg’s The BFG and Ang Lee’s Gemini Man. He is joined this time by Visual Effects Supervisor Sean Walker who was nominated for an Oscar for his work on Marvel’s Shang-Chi and the Legend of the Ten Rings and has worked on several Planet of the Apes films, The Hobbit trilogy, and many Marvel films, including Avengers: End Game. Also joining them is Sequence VFX Supervisor Jo Davison, who was CG Supervisor on Guardians of the Galaxy: Vol. 3 which earned her a Visual Effects Society Award, and during her 15 years at Wētā FX has worked on a swathe of high-profile projects including Avatar, The Adventures of Tintin and Shang-Chi and the Legend of the Ten Rings.
These three experienced supervisors will showcase Wētā FX's work on the project, with particular focus on two key sequences, each made up of around 300 VFX shots.
Jo Davison will explore bringing colossal monsters to life, sharing how the team tackled textures at immense scales for maximum visual impact across close-up creature interactions and wider shots. A range of FX simulations were needed for the creature's powers, making scale a central consideration. The Wētā FX team did a huge amount of digital city work for this sequence, and the epic cinematography required for such scale meant the vast majority of the work is fully CG.
The second massive undertaking for the team, supervised by Sean Walker, was a surreal and fantastical universe constructed from crystalline metallic forms, grown mathematically by a custom Houdini script. This sparked an entirely new level of creativity, as nine distinct environments were brought together, and allowed for additional fun across the creatures department. The challenge in this universe was to make an otherworldly, unbelievable realm behave in a way that still felt physically plausible to the audience. Sean will explore the challenges and innovative solutions used across postproduction to bring this fantastical realm to life.
Having both worked with Guy on previous James Gunn projects, the supervisors enjoyed added creativity outside of a strict VFX delivery, with the team helping to flesh out sequences from pre-vis to final composite. The team will draw from their extensive knowledge and share valuable insights into collaborative filmmaking.
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionVarious symbolic objects such as a fish, an apple, an hourglass, a diamond, and a heart, are nicely cooked and displayed on the dining table for a god-like creature. The eternity we yearn for, the meaning we seek, is just one of the daily meals he enjoys.
Birds of a Feather
Arts & Design
Gaming & Interactive
Production & Animation
Research & Education
Industry Insight
Full Conference
Experience
DescriptionYou do not have to be technical to belong in the world of computer graphics. This session invites the marketers, educators, recruiters, community managers, and others whose work supports the CG industry in essential but often overlooked ways. Whether you work in communications, education, outreach, or another non-technical role, join us to share experiences, connect with peers, and talk about how we build community, grow talent pipelines, and shape the future of this field. If you have ever felt like you're adjacent to the action, this meet-up is for you.
Talk
Arts & Design
Gaming & Interactive
Production & Animation
Livestreamed
Recorded
Not Recorded
Animation
Digital Twins
Industry Insight
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Full Conference
Virtual Access
Sunday
DescriptionAnimating bird wing fold poses often requires labor-intensive counter animations. Our innovative surface constraint wing fold system, developed for The Wild Robot, enables animators to maintain folded wings seamlessly during dynamic movements. This system streamlines the process, saving time and allowing animators to focus on compelling performances.
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionThe “Get an Airbnb” campaign compares the hiccups of hotels to the luxuries of an Airbnb. “Surrounded” takes the miniature world we’ve developed for Airbnb in an exciting new direction, positioning the Airbnb into an immersive forest.
Birds of a Feather
Research & Education
Not Livestreamed
Not Recorded
Ethics and Society
Full Conference
Experience
DescriptionFrom climate change to biodiversity loss and resource exhaustion, human activities are impacting Earth’s limits and Computer Graphics is no exception. How can our research practices and organizations evolve to respect these limits?
After the success of last year’s BoF, we continue building a community of people who want to think about the broader impacts of our research and how to collectively work towards a more sustainable future.
This interactive meetup session will allow attendees to share experiences and questions on related topics in small groups, regardless of their current levels of involvement or expertise in sustainable approaches to research.
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Performance
Real-Time
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Description"Sway" reimagines bamboo, an ancient tool of documentation, as a dynamic and symbolic medium for exploring contemporary digital reflections. By integrating the natural movement of bamboo with generative digital processes and dynamic capture techniques, it encapsulates evolving narratives through real-time data and participatory audience interactions. Bamboo transforms into a space where diverse voices converge, offering a profound lens through which to examine the mediated and often polarized perceptions of conflict in the digital age. The work invites viewers to critically reconsider how technology reshapes the ways we record, interpret, and emotionally engage with the complexities of war, memory, and our surroundings.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionSwiftSketch, a diffusion-based model with a transformer decoder, generates high-quality vector sketches from images in under a second. It progressively denoises stroke coordinates sampled from a Gaussian distribution, generalizing effectively across various object classes. Training uses the ControlSketch Dataset, a new collection of synthetic, high-quality image–sketch pairs created by our ControlSketch optimization method.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionThe paper presents a tile-based rendering pipeline for modeling with implicit volumes, using blobtrees and smooth CSG operators. It requires no preprocessing when updating primitives and ensures efficient ray processing with sphere tracing. The method uses a low-resolution A-buffer and bottom-up tree traversal for scalable performance.
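The abstract above relies on sphere tracing to render implicit volumes. As a point of reference only (this is a minimal illustrative sketch, not the paper's tile-based pipeline, and `sdf_sphere` is a stand-in distance field), sphere tracing marches a ray by repeatedly stepping forward by the signed distance value, which safely bounds the distance to the nearest surface:

```python
import math

def sdf_sphere(p, radius=1.0):
    """Signed distance from point p = (x, y, z) to a sphere at the origin."""
    return math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2) - radius

def sphere_trace(origin, direction, sdf, max_steps=128, eps=1e-4, t_max=100.0):
    """Return the hit distance t along a (normalized) ray, or None on a miss."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:      # close enough to the implicit surface: report a hit
            return t
        t += d           # safe step: the SDF bounds the distance to the surface
        if t > t_max:    # ray escaped the scene
            break
    return None

# A ray starting at z = -3, aimed at the origin, hits the unit sphere at t ≈ 2.
t_hit = sphere_trace((0.0, 0.0, -3.0), (0.0, 0.0, 1.0), sdf_sphere)
print(round(t_hit, 3))
```

In a blobtree setting, `sdf` would instead evaluate the CSG tree of smoothly blended primitives; the marching loop itself is unchanged.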
Art Paper
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Digital Twins
Games
Generative AI
Modeling
Performance
Real-Time
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Experience
Tuesday
DescriptionSynedelica challenges traditional approaches to mixed reality by transforming physical environments through a synesthetic experience. This artwork emphasizes the potential for immersive technology to mediate reality itself, fostering social interaction and shared experiences. By reimagining how we perceive and interact with our surroundings, Synedelica opens new perspectives at the intersection between virtual and physical. Our approach encourages the SIGGRAPH community to explore the innovative capacity of intuitive and serendipitous design.
Talk
Arts & Design
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Computer Vision
Industry Insight
Lighting
Modeling
Pipeline Tools and Work
Real-Time
Rendering
Full Conference
Virtual Access
Sunday
DescriptionAdvancements in regression-based computer vision models have automated parts of VFX motion capture. However, complex shots often require manual intervention. By integrating user-specified cues into models, new tools improve tracking accuracy, blending automation with human expertise. This approach streamlines workflows and reduces time required for challenging VFX scenarios.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Frontiers
Research & Education
Livestreamed
Recorded
Animation
Physical AI
Robotics
Full Conference
Virtual Access
Experience
Monday
DescriptionThe rise of Foundation Models, including LLMs and VLMs, has brought us closer to creating AI agents that understand and act in the world. However, challenges remain, particularly in the "act" component, due to the scarcity of human motion data and a lack of action labels. This talk focuses on building intelligence from the ground up. Our main goal has been to achieve zero-shot sim-to-real transfer on hardware with minimal human intervention by integrating reinforcement learning and imitation learning into a streamlined motion imitation framework. We began by focusing on single-task policies and later expanded to multi-task policies capable of generalizing across behaviors. This talk will showcase unprecedented natural behaviors in dynamic tasks performed by the Boston Dynamics e-Atlas robot, marking a major advance in bridging the gap between human characters in graphics and physical humanoid robots in robotics.
Frontiers
Research & Education
Livestreamed
Recorded
Artificial Intelligence/Machine Learning
Computer Vision
Digital Twins
Full Conference
Virtual Access
Experience
DescriptionMachine learning (ML) for Earth Observation (EO) data is revolutionizing the speed and scope at which science and policy can operate — filling critical data gaps across fields such as ecology and development economics. In this talk, I will outline a class of ML for EO models that distill global satellite data into compact, multi-purpose representations of the Earth. I’ll trace the recent evolution of these “Earth embedding” models, from early image embeddings designed to capture the unique characteristics of satellite imagery, to an emerging class of location encoders that serve as implicit neural representations of EO data. After discussing the impact-driven goals and methodological details of these models, I'll conclude by discussing my longer term vision of building Earth embedding models to unlock new scales of science.
Frontiers
Gaming & Interactive
New Technologies
Livestreamed
Recorded
Artificial Intelligence/Machine Learning
Computer Vision
Ethics and Society
Generative AI
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Experience
DescriptionAs XR and AI technologies advance, they are reshaping not just how stories are told, but how they are felt, experienced, and co-created. This session explores the evolving role of interactivity and immersion in storytelling—where audiences are no longer passive, and machines are active collaborators. Drawing on curatorial insights from the Signals Creative Tech Expo, Loc Dao examines how these technologies amplify human expression while raising essential questions about memory, identity, connection, and consent. Attendees will discover how immersive media can engage our senses, simulate empathy, and challenge traditional boundaries between creator and audience. This talk offers a compelling look at storytelling’s future—one where technology deepens our understanding of each other and the world around us.
Frontiers
New Technologies
Livestreamed
Recorded
Display
Graphics Systems Architecture
Virtual Reality
Full Conference
Virtual Access
Experience
DescriptionThis presentation explores the possible futures and emerging trends in computer display technology and their implications for computer graphics. It examines advancements in color fidelity, high dynamic range (HDR), variable refresh rates, and ultra-high pixel densities, as well as emerging form factors like foldable, transparent, and spatial displays. The talk considers how these innovations will impact rendering techniques, color pipelines, and user experience design.
Art Paper
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Art
Artificial Intelligence/Machine Learning
Audio
Digital Twins
Ethics and Society
Games
Generative AI
Real-Time
Virtual Reality
Full Conference
Virtual Access
Experience
Tuesday
DescriptionThis project explores how AI can preserve and reinterpret cultural memory, raising profound questions about the role of technology in connecting past and future. By transforming transient, everyday digital interactions into meaningful archives, it invites reflection on how today’s voices might shape the narratives of tomorrow. This work challenges the SIGGRAPH community to view the ephemeral as a valuable resource for cultural preservation, offering fresh perspectives on the intersection of art and technology.
Birds of a Feather
Arts & Design
Research & Education
Full Conference
Experience
DescriptionWhat do you get when students from four cities across the Americas (Canada, Mexico, and Colombia) collaborate with a client in a fifth city? And what if these students weren't making one solution, but five that span interactivity, motion design and physical space? ... a whole lot of learning, and not just by students.
In this talk, Nour will walk us through her class's journey as they aimed to solve a real client brief. Join us as she shares the outcomes of experimenting with classroom approaches, managing inter-team dependencies, and navigating language and cultural differences within teams and target users.
Appy Hour
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Computer Vision
Education
Games
Generative AI
Image Processing
Lighting
Pipeline Tools and Work
Real-Time
Spatial Computing
Full Conference
Experience
DescriptionThe University of Victoria is developing a project with the Royal BC Museum that will change the way that exhibits engage visitors, both in person and online. Maps, satellite data and museum collections are exposed in eXtended Reality and interacted with using conversational AI. Come play with our Unity Application!
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionBy combining a continuous, UVD tangent space 3DGS model with a UNet deformation network while maintaining adaptive densification, we present a novel high-detail 3D head avatar model that preserves even finer detail like pores and eyelashes at 4K resolution.
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionTeleTouch is a human-to-human teleoperation system enabling direct tactile communication. It uses a fingernail-mounted vibration motor with a 6-DOF sensor for touch sensing and a high-resolution, wearable electro-tactile display for feedback. AR integration synchronizes hand movements for intuitive remote interaction.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionTetWeave is a novel isosurface representation that jointly optimizes a tetrahedral grid and directional distances for gradient-based mesh processing like multi-view 3D reconstruction. It dynamically builds adaptive grids via Delaunay triangulation, ensuring watertight, manifold meshes. By resampling high-error regions and promoting fairness, it achieves high-quality results with minimal memory requirements.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAnimPortrait3D is a novel method for text-based, realistic, animatable 3DGS avatar generation with morphable model alignment. To address ambiguities in diffusion predictions during 3D distillation, we introduce key strategies: initializing a 3D avatar with robust appearance and geometry, and leveraging a ControlNet to ensure accurate alignment with the underlying model.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe develop a method to compress textures and UVs for meshes in a content-aware way. We combine this with overlapping and folding symmetric UV charts, and demonstrate our approach on a dataset from Sketchfab. We outperform prior work in visual similarity to the original mesh.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Stage Session
Gaming & Interactive
Production & Animation
Research & Education
Artificial Intelligence/Machine Learning
Digital Twins
Education
Ethics and Society
Fabrication
Games
Generative AI
Industry Insight
Modeling
Pipeline Tools and Work
Real-Time
Rendering
Virtual Reality
Full Conference
Experience
Exhibits Only
Tuesday
DescriptionBased on marketplace activity from CGTrader, the world’s largest 3D model platform, this session reveals exclusive insights into the evolving 3D content economy. From high-end visuals to 3D printing, digital 3D models have become essential assets across industries. Explore key market trends in CG and 3D print categories, discover the fastest-growing areas of demand, understand how the market is responding to AI, and gain insights into how creators effectively monetize their work.
Talk
Arts & Design
Gaming & Interactive
Production & Animation
Research & Education
Livestreamed
Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Dynamics
Games
Lighting
Math Foundations and Theory
Pipeline Tools and Work
Simulation
Full Conference
Virtual Access
Thursday
DescriptionScenes in which multiple characters come to life to achieve a cohesive performance present a unique set of animation challenges. While the techniques and workflows evolve, there are constant underlying principles. With examples, we present a distillation of the essence of the art of crowds animation through Disney Animation films.
Birds of a Feather
New Technologies
Production & Animation
Lighting
Performance
Pipeline Tools and Work
Full Conference
Experience
DescriptionThis session explores how generative AI is being used in real creative workflows today, supporting visual development, worldbuilding, motion studies, and storytelling across animation and VFX. Through shared examples and open discussion, we’ll look at how artists, producers, and technologists are experimenting with these tools in collaboration with some of the industry’s most forward-thinking directors and studios. What makes this session unique is its focus on creativity in practice, inviting honest dialogue about emerging methods, artistic potential, and open questions. It’s an opportunity to compare notes, exchange ideas, and learn from each other at the intersection of technology and craft.
Talk
Arts & Design
Gaming & Interactive
Production & Animation
Research & Education
Livestreamed
Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Dynamics
Games
Lighting
Math Foundations and Theory
Pipeline Tools and Work
Simulation
Full Conference
Virtual Access
Thursday
DescriptionThe directors of cinematography of “Moana 2” talk through the lighting design and camera language used across three of the songs and how it supports the progression of Moana's physical and emotional journey, touching on the themes of "Heightened Reality", "Chaotic Theatrics", and "Playful Abstraction".
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionA vast ocean at the firmament. Humans suffering in the darkness underneath, longing for the warm light of the Lumathans, giant sea creatures worshiped as gods and the only light source in the world. Will the explorer SINH manage to reach the ocean and steal the Lumathans' light?
Talk
Arts & Design
Production & Animation
Livestreamed
Recorded
Animation
Art
Geometry
Industry Insight
Lighting
Modeling
Pipeline Tools and Work
Rendering
Full Conference
Virtual Access
Wednesday
DescriptionLighting plays a key role in Disney Animation films. For “Moana 2,” we developed a new lighting workflow in Houdini, empowering artists with more control while reducing creative barriers. This talk explores how we enabled new workflows, mirrored successful experiences, and eased the transition with new tools in a legacy system.
Talk
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Digital Twins
Display
Education
Games
Generative AI
Geometry
Graphics Systems Architecture
Image Processing
Modeling
Pipeline Tools and Work
Real-Time
Rendering
Simulation
Virtual Reality
Full Conference
Virtual Access
Wednesday
DescriptionFor the HBO series Dune: Prophecy, we sought to innovate beyond our traditional layouts and previsualization pipeline. We explored a novel approach by integrating Gaussian Splat technology to enhance the quality and efficiency of our CG camera layouts. This talk will delve into the tangible benefits achieved through this methodology.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Graphics Systems Architecture
Pipeline Tools and Work
Real-Time
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionJoin us for an exciting session hosted by the Khronos Group on the WebGL and WebGPU APIs. Get the latest news straight from the working groups driving the future of high-performance graphics and compute on the web. We’ll also feature Toucan, an exciting new language that lets developers write GPU-powered apps in a single source file using WebGPU. Discover what’s next for web graphics.
Industry Session
Production & Animation
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Digital Twins
Games
Generative AI
Geometry
Graphics Systems Architecture
Hardware
Image Processing
Industry Insight
Lighting
Performance
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Simulation
Virtual Reality
Full Conference
Experience
Exhibits Only
Tuesday
DescriptionCovering emerging trends in GPU rendering for motion graphics, film making, product design, VFX, games, virtual production, immersive media and more – with deep dives on the latest frontier AI technologies, workflows and IP provenance tools.
Birds of a Feather
New Technologies
Spatial Computing
Full Conference
Experience
DescriptionMeta, Google, Snap, XREAL and others are on their way to making viable commercial smart glasses that support AR and AI. What do you think (or know) their initial capabilities will be, and what are the real design issues they will face?
What features do consumer glasses have to provide that are different (or more compelling) than smart glasses for industrial applications? What are the differences in their objectives and what are the different design challenges?
How close are we to seeing smart glasses that support readable/binocular AI?
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Education
Ethics and Society
Games
Generative AI
Haptics
Modeling
Performance
Real-Time
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionThe Grinning Man VR experience is an interactive musical performance of the song 'Labyrinth', from the hit London West End stage show, motion captured and animated in virtual reality. The performance is approximately 5 minutes long and uses head and eye-tracking to create the experience of liveness.
Educator's Forum
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Education
Fabrication
Generative AI
Image Processing
Lighting
Pipeline Tools and Work
Real-Time
Rendering
Virtual Reality
Full Conference
Virtual Access
Experience
Wednesday
DescriptionThis paper demonstrates an exercise that exploits a low barrier to entry methodology by introducing photogrammetry, mesh editing, and 3D printing into a studio arts-based curriculum, making visible the influence and impact of each process on the final construct.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present the Mokume dataset for solid wood texturing, comprising nearly 190 samples from various species. Using this dataset, we propose an inverse modeling pipeline to infer volumetric wood textures from surface photographs, employing inverse procedural texturing and neural cellular automata (NCA).
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionThe Mooning is an animated mockumentary that reveals the truth behind the 1969 moon landing.
Art Paper
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Digital Twins
Games
Generative AI
Modeling
Performance
Real-Time
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Experience
Tuesday
DescriptionAfter all the presentations, attendees are encouraged to engage in a Q&A session to discuss the topics presented. This will be followed by an Art Papers wrap-up, summarizing key insights, and a preview by the SIGGRAPH 2026 Art Papers chair, offering a glimpse into the upcoming 2026 Art Papers program.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionThis scientific visualization explores the iconic Pillars of Creation in the Eagle Nebula and the various ways that stars and dust are intertwined within our galaxy, vibrant nebulae, and the birth of individual stars. Data from research papers and several NASA space telescopes underlies and informs the 3D models.
Birds of a Feather
Production & Animation
Not Livestreamed
Not Recorded
Pipeline Tools and Work
Full Conference
Experience
DescriptionThe Pipeline Conference continues to evolve and organize new events. We are always looking for help, and will use this in-person opportunity to discuss future events and the organization as a whole. If you're interested in having a say on how the conference and related events proceed in the future, come be part of the conversation.
Art Paper
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Digital Twins
Games
Generative AI
Modeling
Performance
Real-Time
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Experience
Tuesday
DescriptionHow would our attitudes to death and dying change if we could see how a human body is reabsorbed into the environment? The Posthumous World is a project about death and our relationship with the planet. At its centre will be a new artwork: a poetic meditation on a body’s journey to re-join the ecosphere, which is also a scientifically accurate visual simulation of how a body decays after burial. The body in question will be the artist’s own.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionA midcentury misogynist gets what he deserves when he’s forced to spend a day in heels.
Spatial Storytelling
Arts & Design
New Technologies
Not Livestreamed
Not Recorded
Performance
Robotics
Full Conference
Experience
DescriptionScylla System is a performance-based exploration of human-drone interactivity, where a dancer improvises within a complex choreography of 10 flying drones. Investigating agency, narrative, and the uncanny, the work tests the boundaries of improvisation and control while exploring drones as tools for enhancing dancers' adaptability and spatial awareness.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Birds of a Feather
Production & Animation
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionIt's time to get HtoA, MtoA, C4DtoA, KtoA and MaxtoA plugin users together and get a temperature check on what they love or maybe don't love about their respective Arnold Renderer plugins.
The BoF will be divided into three 20-minute segments.
1) Show-of-hands surveys on plugin usage, ACES usage, CPU vs. GPU, which versions are currently in use, whether Operators are in use, etc.
2) Personal anecdotes about lessons learned with the Arnold plugins.
3) UX and feature requests for Arnold that Solid Angle can listen to.
Birds of a Feather
New Technologies
Production & Animation
Pipeline Tools and Work
Full Conference
Experience
DescriptionJoin us for an insightful panel discussion on the state of pipelines in animation and VFX, featuring results from the latest industry survey. We’ll present key findings from over 200 studios gathered over the past two years, shedding light on current trends, challenges, and best practices. This session encourages active participation and open discussion about the implications of these results, and invites you to help shape the next survey. Your insights will be invaluable in building a trusted resource that raises awareness about pipeline practices across the creative industry.
Talk
Arts & Design
Research & Education
Livestreamed
Recorded
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Display
Hardware
Image Processing
Physical AI
Real-Time
Scientific Visualization
Spatial Computing
Full Conference
Virtual Access
Sunday
DescriptionThis study innovates slit-scan photography with an automated system integrating quadruple exposure controls (aperture, shutter, ISO, slit) and servo-driven mechanisms. Utilizing large-format cameras and adaptive ND filters, it achieves 3D spatiotemporal imaging, merging technical precision with philosophical exploration of time-space continuity, advancing interdisciplinary artistic innovation.
ACM SIGGRAPH 365 - Community Showcase
Research & Education
Not Livestreamed
Not Recorded
Education
Full Conference
Experience
DescriptionWe all understand that networking is critical in our field. And it’s equally important, if not more so, for recent graduates trying to break into the industry. Developing a strong and supportive community via an official ACM SIGGRAPH Student Chapter can achieve that! But having a student chapter at your university doesn’t just help the students directly, it can also indirectly aid in improving your program’s recruitment, retention, and graduation benchmarks. Learn how easy it is to start a student chapter and hear directly from a panel of current and former faculty advisors on how having a student chapter has benefited their students and universities. Though the session is tailored to educators, any attendee will find value in this session.
Production Session
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Animation
Artificial Intelligence/Machine Learning
Lighting
Modeling
Pipeline Tools and Work
Rendering
Simulation
Full Conference
Monday
DescriptionFor the HBO Original Limited Series THE PENGUIN, 3107 VFX shots were created across 8 episodes. In this direct sequel to the feature film THE BATMAN, the city is on its knees, making way for Oswald “Oz” Cobb to rise up as the new kingpin of Gotham City.
On-set filming techniques included groundbreaking interactive lighting for the visceral Oz action sequences. AI-driven workflows were employed to heighten the distress of a flood-ravaged Gotham City. Young Victor witnesses the loss of his entire family in a flood, realized through complex simulations that strike at the emotional core of viewers. Oz engages in a car chase filmed dry, then immersed in torrential rain through a complex mix of classic 2D elements and sophisticated FX animation. New hybrid techniques in facial tracking augmented a progression of 10 years of visible torment for Sofia Falcone, held captive in Arkham Asylum. A car explosion ruptures the streets, creating a massive sinkhole with the help of new procedural destruction tools. All of the VFX were designed to serve the characters and tell their stories, to a degree that truly makes the results not VFX work but storytelling work. Our goal was for the viewer not to see the VFX, but to feel them. Join us as our team of industry-leading VFX supervisors present the VFX of THE PENGUIN.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation, such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees with more direct interaction with the individual authors.
Birds of a Feather
Arts & Design
Research & Education
Not Livestreamed
Not Recorded
Industry Insight
Full Conference
Experience
DescriptionThoughtful3D is hosting an in-person industry meetup. Creative professionals will engage in an informal, lively conversation about the state of the 3D industry. Professionals, students, and anyone curious are all welcome to join us in this conversation!
Thoughtful3D is a mentorship community created by Conor Woodard & Mike 'Cash' Cacciamani. We have over 30 years of combined experience in the animation and visual effects industry. Thoughtful3D Mentorships focus on industry insight, individual growth, and community support.
Spatial Storytelling
Arts & Design
Not Livestreamed
Not Recorded
Art
Haptics
Performance
Real-Time
Virtual Reality
Full Conference
Experience
DescriptionThresholds: stories of our inner selves is a live contemporary dance and extended reality performance, where the audience participates in different overlapping narratives by dancing with both real and virtual actors, immersed in a multisensory experience.
Birds of a Feather
Gaming & Interactive
New Technologies
Production & Animation
Animation
Pipeline Tools and Work
Full Conference
Experience
DescriptionThe creative industry is entering a transformative phase. Amid evolving production models, tighter timelines, and rising expectations, studios are being challenged to reimagine how they operate.
In this forward-looking session, our panel of industry experts leads a strategic discussion on how studios can excel. Topics include:
- Reducing technical overhead through smarter infrastructure
- Improving data visibility to support faster, more informed decisions
- Structuring teams for greater flexibility and efficiency
- Leveraging automation and emerging technologies to unlock targeted savings
- Scaling infrastructure up or down—on-prem, in the cloud, or both—to align with fluctuating production demand
Production Session
Production & Animation
Not Livestreamed
Not Recorded
Animation
Capture/Scanning
Industry Insight
Lighting
Simulation
Full Conference
Wednesday
DescriptionJoin DNEG VFX Supervisor Stephen James, DFX Supervisor Melaina Mace, and FX Supervisor Roberto Rodricks for a behind-the-scenes look at the visual effects that brought the incredible episodic series ‘The Last of Us’ to life!
Following the success of its award-winning work on the first season, DNEG was proud to return as a key visual effects partner for the highly anticipated second season. To kick off the session, the team will take the audience back to the first season where they reimagined, recreated, and brought to life iconic locations and environments from the video game, delivering over 530 visual effects shots across 71 sequences.
Now – in the series’ second season – Stephen, Melaina, and Roberto will discuss the show’s evolution and how the iconic visuals have developed and changed since the first season.
This season, the DNEG team imagined and built a post-apocalyptic Seattle, taking what they had learned from the first season and pushing the environment destruction and vegetation overgrowth even further. For ‘The Last of Us’ season two, DNEG’s artists delivered some of the company’s most complex and detailed environment, FX and compositing work to date.
The season culminates in the series’ most ambitious episode yet, which sees DNEG’s fully CG post-apocalyptic city environment hit by a huge storm, demanding some of the most detailed and complex ocean and water FX work that DNEG has ever delivered.
This talk will offer an in-depth look at the groundbreaking VFX work that created the locations, environments, and FX that brought ‘The Last of Us’ season two to life, cementing its standing as one of the most successful video-game adaptations ever.
Spatial Storytelling
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Education
Performance
Real-Time
Rendering
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionTimbreSpace is an XR music sandbox where users sculpt sound in an immersive spatial environment. Merging AI-driven analysis, interactive sampling, and generative audio, it transforms audio into evolving soundscapes. Our demo showcases intuitive, embodied sound design, enabling playful, expressive music creation beyond traditional DAW interfaces.
Production Session
New Technologies
Production & Animation
Livestreamed
Recorded
Artificial Intelligence/Machine Learning
Generative AI
Full Conference
Virtual Access
Thursday
DescriptionJoin Production Supervisor Kevin Baillie and Metaphysic VFX Supervisor Jo Plaete for a deep dive into how artist-empowering artificial intelligence enabled unprecedented workflows on Robert Zemeckis’s new film, Here. Told from a single perspective that transcends time, Here follows characters played by Tom Hanks, Robin Wright, Paul Bettany, and Kelly Reilly across multiple decades of their lives. While these monumental age spans were crucial to the narrative, the production faced an immense technical challenge: how to maintain the actors’ performances and emotional realism while radically altering their appearances.
Baillie and Plaete will explain how the project began with a competitive screen test. At first, traditional 3D and motion-capture methods were deemed impractical for the required volume of shots and the nuance demanded by the close-up facial performances. Metaphysic’s early proof of concept, transforming the 67-year-old Tom Hanks into his Big-era twenties, demonstrated that an AI-based approach could bridge massive age gaps without sacrificing the authenticity of the actors’ expressions. Once the filmmakers chose this route, the Metaphysic team rapidly scaled, bringing together AI engineers, data scientists, VFX artists, and compositors who refined the technology into a production-ready pipeline that centered on the principle of empowering filmmakers and artists.
A central aspect of this pipeline was real-time on-set face swapping, in which a specialized server equipped with powerful GPUs received a direct feed from the main camera, processed it through neural networks, and returned a de-aged image to the director’s monitor. This setup gave Robert Zemeckis and the actors near-instant feedback, only a few frames of delay, allowing them to adjust on the spot. Tom Hanks and Robin Wright rehearsed in front of a “Youth Mirror” system that let them see their younger faces in real time, helping them modulate posture, eye lines, and subtle expressions. Baillie will recount how the production team integrated these tools into daily shooting schedules, and Plaete will offer insights into the technical hurdles of achieving high-fidelity performance transfer on set, highlighting how the AI’s flexibility liberated artists to iterate and refine with minimal technical friction.
The session also explores how the technology matured in post-production, where higher resolutions and additional refinements were required for final shots. By working with “plate prep” and proprietary compositing workflows, the VFX team preserved the liveliness of each performance, even when rewinding several decades, while avoiding the “uncanny valley.” Attendees will learn how neural networks were trained on massive libraries of archival footage for Tom Hanks, Robin Wright, and other cast members, capturing the shifts in bone structure and skin quality across each stage of life. Plaete will describe the “visual data science” approach his team used to tune these models, emphasizing that successful outputs demanded not just code but also a keen artistic and intuitive understanding of the actors’ faces, underscoring how human creativity remains paramount when wielding artist-empowering AI.
Throughout the session, both speakers will emphasize that while neural networks are powerful, they function best as a tool for artists.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionThe goal of this work is to train lip sync animation models that can run in real-time and on-device. We design a two-stage knowledge distillation framework to distill large, high-quality models. Our results show that we can train small models with low latency and a comparatively small loss in quality.
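The distillation described above hinges on a loss that lets the small on-device model learn from both the ground-truth animation data and the large teacher's outputs. A minimal sketch of such a blended loss — generic regression distillation, where the `alpha` weight and plain MSE terms are illustrative assumptions, not the authors' formulation:

```python
def distillation_loss(student_out, teacher_out, target, alpha=0.5):
    """Blend of a ground-truth fit term and a teacher-imitation term.

    alpha weights how strongly the student imitates the teacher's
    outputs versus fitting the ground-truth lip-sync parameters.
    """
    # Mean squared error against the ground-truth animation parameters
    gt = sum((s - t) ** 2 for s, t in zip(student_out, target)) / len(target)
    # Mean squared error against the (frozen) teacher's predictions
    kd = sum((s - t) ** 2 for s, t in zip(student_out, teacher_out)) / len(teacher_out)
    return (1.0 - alpha) * gt + alpha * kd
```

With `alpha=0` this reduces to ordinary supervised training; with `alpha=1` the student purely imitates the teacher.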
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionTokenVerse extracts complex visual elements from images by identifying semantic directions in per-token modulation space of DiT models for each word in the image caption. It's capable of combining concepts from multiple sources by adding corresponding directions, enabling flexible generation of new combinations including abstract concepts like lighting and poses.
Stage Session
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Computer Vision
Generative AI
Physical AI
Full Conference
Experience
Exhibits Only
Tuesday
DescriptionAs video generation technology rapidly advances, the need for robust evaluation frameworks becomes critical. While automated metrics are mostly sufficient for traditional AI modalities, video generation models require human experts to assess temporal coherence, context, motion realism, visual quality, and aesthetics in ways that automated methods cannot yet easily replicate. Additionally, current benchmarks for visual content are not always actionable or understandable for AI developers, being too subjective and lacking formal criteria.
At Toloka, together with industry professionals we have developed the Mainstream Movies video evaluation toolkit, a comprehensive evaluation framework for cutting-edge VideoGen models to close the gap between creative users and engineers. Our approach combines domain expertise with systematic and detailed evaluation protocols to deliver actionable insights for AI developers.
We will demonstrate how our toolkit addresses the unique challenges of video generation evaluation, showcase evaluation results from leading VideoGen models, and discuss the methodology that enables scalable, professional-grade video assessment.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionTopological Offsets is a method for generating offset surfaces that are topologically equivalent to an offset infinitesimally close to the surface. By construction, the offsets are manifold, watertight, self-intersection-free, and strictly enclose the input. Tested on Thingi10k, it supports applications like manifold extraction, layered offsets, and robust finite offset computation.
Talk
Arts & Design
Gaming & Interactive
Production & Animation
Research & Education
Livestreamed
Recorded
Art
Digital Twins
Ethics and Society
Games
Hardware
Industry Insight
Pipeline Tools and Work
Real-Time
Full Conference
Virtual Access
Sunday
DescriptionWe surveyed 888 SIGGRAPH papers from 2018-2024 and gathered author-reported GPU models. By contextualizing the hardware reported in papers with available data of consumers' hardware, we demonstrate that research is consistently developed and tested on new high-end devices that do not reflect the state of the consumer-level market.
Talk
Arts & Design
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Artificial Intelligence/Machine Learning
Capture/Scanning
Computer Vision
Generative AI
Geometry
Image Processing
Lighting
Pipeline Tools and Work
Rendering
Full Conference
Virtual Access
Thursday
DescriptionHow can generative AI seamlessly integrate into professional 3D workflows? This talk explores how vision-language models (VLMs) can automate tedious editing tasks, generate structured 3D scenes, and perform object placement while preserving editability. I’ll discuss key findings, open challenges, and future applications across industries like robotics, game design, and VFX.
Talk
Arts & Design
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Artificial Intelligence/Machine Learning
Capture/Scanning
Computer Vision
Generative AI
Geometry
Image Processing
Lighting
Pipeline Tools and Work
Rendering
Full Conference
Virtual Access
Thursday
DescriptionWe improve Digital Domain's video-driven animation transfer technique by introducing automatic corrections as a post-process optimization. We minimize the difference between our face swap model output (extended for light invariance) and predicted animation parameters in a differentiable pipeline. Our method is now being integrated into Digital Domain's facial capture workflow.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe take on the challenge of comprehensive neural material representation by thoroughly considering the essential aspects of complete appearance. We introduce an int8-quantized model that keeps high fidelity while achieving an order-of-magnitude speedup compared to previous methods, and a controllable structure-preserving synthesis strategy, along with accurate displacement effects.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe demonstrate that stereoacuity is remarkably resilient to foveated rendering and remains unaffected with up to 2× stronger foveation than commonly used. To this end, we design a psychovisual experiment and derive a simple perceptual model that determines the amount of foveation that does not affect stereoacuity.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionIn a world ravaged by an incurable illness, a devoted husband is forced to make an impossible choice as his wife’s life hangs in the balance.
Industry Session
Gaming & Interactive
New Technologies
Artificial Intelligence/Machine Learning
Games
Performance
Real-Time
Full Conference
Experience
Exhibits Only
Tuesday
DescriptionReal-time graphics solutions demand significant processing power from both CPU and GPU. Whether solutions are intended for PC or mobile, computational efficiency is key. As specialists in graphics, AI, or games, you need to understand emerging neural graphics trends to continue driving innovation.
Join Arm to explore our vision on how to create, train and deploy models, featuring live demos led by experts and open conversations where you can:
Experience next-gen upscaling: Dive into the latest interactive demos from Arm showcasing real-time AI-driven upscaling, built with flexibility and mobile efficiency in mind.
Test-drive our latest developer tools: Learn more about our upscaling technology tools.
Shape the future: Share your workflow, wishlist, and challenges directly with our engineering and product teams.
Grab a snack, stay for a conversation, and learn how neural technology trends and innovation are shaping mobile graphics development. Meet us at the Arm developer lounge, East Building, Room 10.
Industry Session
Gaming & Interactive
New Technologies
Games
Graphics Systems Architecture
Performance
Real-Time
Full Conference
Experience
Exhibits Only
Wednesday
DescriptionReal-time graphics solutions demand significant processing power from both CPU and GPU. Whether solutions are intended for PC or mobile, computational efficiency is key. As specialists in graphics, AI, or games, you need to understand emerging neural graphics trends to continue driving innovation.
Join Arm to explore our vision on how to create, train and deploy models, featuring live demos led by experts and open conversations where you can:
Experience next-gen upscaling: Dive into the latest interactive demos from Arm showcasing real-time AI-driven upscaling, built with flexibility and mobile efficiency in mind.
Test-drive our latest developer tools: Learn more about our upscaling technology tools.
Shape the future: Share your workflow, wishlist, and challenges directly with our engineering and product teams.
Grab lunch on us, stay for a conversation, and learn how neural technology trends and innovation are shaping mobile graphics development. Meet us at the Arm developer lounge, East Building, Room 10
Industry Session
Gaming & Interactive
New Technologies
Games
Performance
Real-Time
Full Conference
Experience
Exhibits Only
Thursday
DescriptionReal-time graphics solutions demand significant processing power from both CPU and GPU. Whether solutions are intended for PC or mobile, computational efficiency is key. As specialists in graphics, AI, or games, you need to understand emerging neural graphics trends to continue driving innovation.
Join Arm to explore our vision on how to create, train and deploy models, featuring live demos led by experts and open conversations where you can:
Experience next-gen upscaling: Dive into the latest interactive demos from Arm showcasing real-time AI-driven upscaling, built with flexibility and mobile efficiency in mind.
Test-drive our latest developer tools: Learn more about our upscaling technology tools.
Shape the future: Share your workflow, wishlist, and challenges directly with our engineering and product teams.
Grab lunch on us, stay for a conversation, and learn how neural technology trends and innovation are shaping mobile graphics development. Meet us at the Arm developer lounge, East Building, Room 10.
Industry Session
Gaming & Interactive
New Technologies
Artificial Intelligence/Machine Learning
Games
Performance
Real-Time
Full Conference
Experience
Exhibits Only
Tuesday
DescriptionReal-time graphics solutions demand significant processing power from both CPU and GPU. Whether solutions are intended for PC or mobile, computational efficiency is key. As specialists in graphics, AI, or games, you need to understand emerging neural graphics trends to continue driving innovation.
Join Arm to explore our vision on how to create, train and deploy models, featuring live demos led by experts and open conversations where you can:
Experience next-gen upscaling: Dive into the latest interactive demos from Arm showcasing real-time AI-driven upscaling, built with flexibility and mobile efficiency in mind.
Test-drive our latest developer tools: Learn more about our upscaling technology tools.
Shape the future: Share your workflow, wishlist, and challenges directly with our engineering and product teams.
Grab lunch on us, stay for a conversation, and learn how neural technology trends and innovation are shaping mobile graphics development. Meet us at the Arm developer lounge, East Building, Room 10.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionThis work proposes a dynamic calibration system for inertial motion capture, which can dynamically remove non-static IMU drift and sensor-body offset during usage, enable user-friendly calibration (without T-pose and IMU heading reset), and ensure long-term robustness.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionRecent methods have been developed to reconstruct 3D hair strand geometry from images. We introduce an inverse hair grooming pipeline to transform these unstructured hair strands into procedural hair grooms controlled by a small set of guide strands and artist-friendly grooming operators, enabling easy editing of hair shape and style.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe propose TransparentGS, a fast inverse rendering pipeline for transparent objects based on 3D-GS. The main contributions are three-fold: efficient transparent Gaussian primitives for specular refraction, GaussProbe to encode ambient light and nearby contents, and the IterQuery algorithm to reduce parallax errors in our probe-based framework.
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionIn a dark alley, a scrawny rat has no choice but to fight with a pigeon for a small slice of pizza. Without a second thought they throw themselves in a vertiginous chase from the top to the bottom of the street.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionIn a near future dominated by the metaverse, AI agents investigate cybercrime. Violet, an AI detective, is assigned to question Mia, a defiant youth. But upon hearing Violet’s name, Mia reveals a shocking truth. In this sealed room, what revelation awaits? A neo-noir suspense short with a bold visual style.
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionFirst flight. No parents. Total panic. A terrified boy just wants to survive takeoff, but the plane—and its deranged passengers—have other plans.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionThis paper presents UltraMeshRenderer, a GPU out-of-core method for real-time rendering of 3D scenes with billions of vertices and triangles. It features a balanced hierarchical mesh, coherence-based LOD selection, and parallel in-place GPU memory management, achieving efficient data transfer and memory use with significant improvements over existing out-of-core techniques.
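The LOD selection in out-of-core rendering comes down to mapping each mesh cluster's geometric error to screen space and picking the coarsest level that still looks right. A minimal sketch of a distance-based variant — a generic heuristic, not UltraMeshRenderer's coherence-based scheme; the doubling-error-per-level assumption and parameter names are illustrative:

```python
def select_lod(distance, base_error, tolerance, max_lod):
    """Return the coarsest LOD whose projected geometric error stays
    within the screen-space tolerance.

    Assumes the finest level (LOD 0) has geometric error `base_error`
    and that error doubles with each coarser level; projected error is
    approximated as geometric error divided by viewing distance.
    """
    lod = 0
    while lod < max_lod and (base_error * 2 ** (lod + 1)) / distance <= tolerance:
        lod += 1  # coarser level still projects under tolerance
    return lod
```

Distant clusters land on coarse levels (fewer triangles to stream), while nearby clusters stay fine, which is what keeps billions of triangles within a fixed GPU memory budget.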
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Education
Ethics and Society
Games
Generative AI
Haptics
Modeling
Performance
Real-Time
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Description“Unbalanced” is a multisensory VR simulation that immerses users in the embodied experience of working mothers managing a crying infant, household chores, and job demands. Through interactive stressors and audio-haptic feedback, it fosters empathy and reflection on gendered labor in education, DEI training, and mental health contexts.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWarped-area reparameterization is a powerful technique to compute differential visibility. The key is constructing a velocity field that is continuous in the domain interior and agrees with defined velocities on boundaries. We present a robust and efficient unbiased estimator for differential visibility, using a fixed-step walk-on-spheres and closest silhouette queries.
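The fixed-step walk-on-spheres primitive at the heart of this estimator is easiest to see on the classic problem it comes from: estimating a harmonic function from boundary data. A minimal sketch — a generic WoS on the unit disk, not the paper's differential-visibility estimator; the `eps` and `max_steps` values are illustrative assumptions:

```python
import math
import random

def walk_on_spheres(p, dist_to_boundary, boundary_value, eps=1e-3, max_steps=64):
    """One walk-on-spheres sample of a harmonic function at point p.

    Repeatedly jumps to a uniform point on the largest empty sphere
    around the current position; stops near the boundary (or after a
    fixed number of steps) and returns the boundary value there.
    """
    x, y = p
    for _ in range(max_steps):
        r = dist_to_boundary((x, y))
        if r < eps:
            break
        theta = random.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(theta)
        y += r * math.sin(theta)
    return boundary_value((x, y))

# Unit disk with boundary data g(x, y) = x; the harmonic solution is u = x,
# so averaging many walks from (0.3, 0.1) should approach 0.3.
dist = lambda q: 1.0 - math.hypot(q[0], q[1])
g = lambda q: q[0]
random.seed(0)
n = 20000
estimate = sum(walk_on_spheres((0.3, 0.1), dist, g) for _ in range(n)) / n
```

Each sample needs only distance queries, no meshing of the interior, which is what makes the primitive attractive for the closest-silhouette queries described above.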
Art Gallery
Arts & Design
Full Conference
Experience
DescriptionUnbound Horizons – Wing Series (2018–2025, glass sculpture, programmed light installation) is an interactive installation composed of 12 glass seagull sculptures. The work evokes the fluid, collective motion of birds in flight through shifting patterns of light that animate the sculptures in a continuous, harmonious rhythm. These light movements, designed to resemble natural murmuration phenomena, create a dynamic visual experience that transforms throughout the day and night. Blending traditional glass craftsmanship with subtle interactivity, the installation invites contemplation of movement, light, and the delicate balance between nature and art.
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Digital Twins
Display
Dynamics
Education
Ethics and Society
Games
Graphics Systems Architecture
Image Processing
Industry Insight
Math Foundations and Theory
Modeling
Physical AI
Pipeline Tools and Work
Rendering
Scientific Visualization
Spatial Computing
Full Conference
Experience
DescriptionUNC Department of Computer Science SIGGRAPH Reunion Luncheon
By invitation only. To receive an invitation email: [email protected]
Hosted by Praneeth Chakravarthula and Henry Fuchs
Hybrid event
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe quantify uncertainty for SVBRDF acquisition from multi-view captures using entropy. The otherwise heavy computation is accelerated in the frequency domain, yielding a practical, efficient method. We apply uncertainty to improve SVBRDF capture by guiding camera placement, inpainting uncertain regions, and sharing information from certain regions on the object.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe present a new SPH approach to replicate the behavior of droplets and other smaller scale fluid bodies. For this, we develop a new implicit surface tension formulation and implement a Coulomb friction force at the fluid-solid interface. A strong coupling between both forces and pressure is achieved through a unified solving mechanism.
Stage Session
Arts & Design
Art
Full Conference
Experience
Exhibits Only
Tuesday
DescriptionJoin Adobe's Creative 3D Evangelists Wes McDermott and Nikie Monteleone for an inspiring session on innovative workflows that elevate your creative projects. Discover their favorite techniques, explore cutting edge tools, and unlock new possibilities.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
Description3D2EP transforms 3D shapes into expressive, editable primitives by extruding 2D profiles along 3D curves. This approach creates compact, interpretable representations that support intuitive editing and flexible redesign. It delivers high fidelity and efficiency, outperforming existing methods across digital design, asset creation, and customization workflows.
Spatial Storytelling
Arts & Design
Gaming & Interactive
Not Livestreamed
Not Recorded
Games
Haptics
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionJoin us for a 30-minute live demo with real-time commentary, as we delve into the creative and technical journey behind First Encounters, a groundbreaking mixed reality (MR) experience that has captivated both the press and the public.
Stage Session
Gaming & Interactive
New Technologies
Artificial Intelligence/Machine Learning
Games
Graphics Systems Architecture
Performance
Real-Time
Full Conference
Experience
Exhibits Only
Tuesday
DescriptionDevelopers will play a key role in shaping the use of powerful neural network methods for producing graphics across the industry, captivating consumers and creating a platform for studio innovation. From frame-rate upscaling to mimicking the natural movement of clothing, the inclusion of neural networks in games and media will soon be as common, and as easy, as adding meshes, textures, or animations.
In this talk we will:
Explore speed improvements and computational efficiency brought by using neural techniques and opportunities for smartphones, handhelds, and other resource-constrained devices.
Propose intuitive development flows for ML tasks.
Showcase our latest real-time upscaling techniques with Arm Unreal Engine 5 plugins, including live examples and demos.
Share our roadmap for enabling open-source software support and scalable tools to ease integration and empower developers to build next-generation experiences.
We highlight the latest Arm resources for experimenting with your own use cases and invite you to tell us what you think.
Course
Production & Animation
Livestreamed
Recorded
Animation
Geometry
Pipeline Tools and Work
Full Conference
Virtual Access
Sunday
DescriptionBased on real production examples, this Universal Scene Description (USD) course will expand upon previously presented best practices for pipeline infrastructure and integration. Presenters will walk through how they are more powerfully leveraging USD, building flexible, context-driven workflows, while balancing optimizations for consumer and author performance.
Birds of a Feather
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Pipeline Tools and Work
Real-Time
Rendering
Full Conference
Experience
DescriptionJoin the discussion with the developers and users of Universal Scene Description (USD), Hydra and OpenSubdiv.
Talk
Arts & Design
Gaming & Interactive
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Artificial Intelligence/Machine Learning
Augmented Reality
Display
Games
Geometry
Lighting
Pipeline Tools and Work
Real-Time
Full Conference
Virtual Access
Sunday
DescriptionFor the past three years, we have used VMs in our computer lab disk image to facilitate a hybrid Linux and Windows environment that is flexible, maintainable, and artist-friendly. We illustrate the potential of VM-enabled, OS-agnostic lab environments for enhancing small studio or educational workflows and maximizing resource utilization.
Birds of a Feather
Gaming & Interactive
Research & Education
Not Livestreamed
Not Recorded
Education
Games
Full Conference
Experience
DescriptionSDL is a popular open-source library used in games and other interactive software. Its latest version includes a new GPU API that provides a cross-platform modern graphics interface with less of a learning curve than Vulkan. This makes SDL GPU a perfect candidate for teaching students a modern approach to interactive graphics.
Join SDL GPU project lead Evan Hemsley alongside professors Sanjay Madhav (USC), Mike Shah (Yale), and Matt Whiting (USC), as they discuss the design of SDL GPU and present first steps towards integrating the library into interactive graphics curricula.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe propose a novel ICP framework that jointly optimizes a shared template and instance-wise deformations. Our approach automatically captures common shape features from input shapes, achieving state-of-the-art accuracy and consistency while eliminating the need to carefully select a preset template mesh.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionThis paper shows how to express variational time integration for a large class of elastic energies as an optimization problem with a “hidden” convex substructure. Our integrator improves the performance of elastic simulation tasks, while conserving physical invariants up to tolerance/numerical precision.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe present analytical formulas for evaluating Green and biharmonic 2D coordinates, together with their gradients and Hessians, for 2D cages made of polynomial arcs. We present results of 2D image deformations by direct interaction with the cage and through variational solvers, and we demonstrate the flexibility of the approach.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe introduce a new surface reconstruction method from points without normals. The method robustly handles undersampled regions and scales to large input sizes.
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionWe present a novel dual-extruder clay printer with a continuously steerable concentric nozzle that produces graded or high-contrast textured surfaces in a single pass before firing. Our design includes a compact implementation featuring a unique plunger system with two concentric reservoirs.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionVariance reduction techniques are widely used to reduce the noise of Monte Carlo integration. However, these techniques are typically designed with the assumption that the integrand is scalar-valued. To address this, we introduce ratio control variates, an estimator that leverages a ratio-based approach instead of the conventional difference-based control variates.
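For context on the baseline the abstract contrasts against: a difference-based control variate subtracts a correlated function with known integral, while a ratio-style estimator divides by it instead. The sketch below illustrates both on a toy 1D integral; it is a generic illustration under assumed names (f, h, H are ours), not the paper's estimator.

```python
import math
import random

random.seed(0)
N = 200_000

# Target integral: I = integral of e^x over [0, 1] = e - 1.
f = math.exp                 # integrand
h = lambda x: 1.0 + x        # control function, correlated with f on [0, 1]
H = 1.5                      # known integral of h over [0, 1]

xs = [random.random() for _ in range(N)]

plain = sum(f(x) for x in xs) / N                # plain Monte Carlo
diff_cv = H + sum(f(x) - h(x) for x in xs) / N   # difference-based control variate
ratio_cv = H * sum(f(x) / h(x) for x in xs) / N  # ratio-style estimator (note: biased
                                                 # in this naive form, unlike diff_cv)
exact = math.e - 1.0
```

The difference form is unbiased and cuts variance whenever f and h are correlated; the naive ratio form trades a small bias for robustness when f and h share multiplicative structure, which is the design space the paper's estimator operates in.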
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionDuring World War I, a family is torn apart by the horrors of war.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Birds of a Feather
New Technologies
Production & Animation
Not Livestreamed
Not Recorded
Animation
Industry Insight
Pipeline Tools and Work
Full Conference
Experience
DescriptionHear the latest on the VFX Reference Platform and discuss software versioning challenges and opportunities with your peers from both studios and software vendors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionWe propose VideoAnydoor, a zero-shot video object insertion framework with high-fidelity detail preservation and precise motion control, in which a pixel warper and an image-video mixed-training strategy are designed to warp pixel details according to the trajectories. VideoAnydoor demonstrates significant superiority over existing methods and naturally supports various downstream applications.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
DescriptionVideoPainter introduces a dual-branch framework for video inpainting with a lightweight context encoder that integrates with pre-trained diffusion transformers. Its ID resampling strategy maintains identity consistency across any-length videos, while VPData and VPBench provide the largest segmentation-mask dataset with captions. The system achieves state-of-the-art performance in video inpainting and editing.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionThis paper presents VirCHEW Reality, a face-worn haptic device for virtual food intake in VR. It uses pneumatic actuation to simulate food textures, enhancing the chewing experience. User studies demonstrated its effectiveness in providing distinct kinesthetic feedback and improving virtual eating experiences, with applications in dining, healthcare, and entertainment.
Birds of a Feather
New Technologies
Production & Animation
Animation
Art
Artificial Intelligence/Machine Learning
Industry Insight
Lighting
Modeling
Rendering
Full Conference
Experience
DescriptionAs VFX and animation studios embrace remote work, Virtual Desktop Infrastructure (VDI) is key. Join our Birds of a Feather session to explore on-prem and cloud-based VDI instances. We'll delve into the evolving opportunities and considerations for technology teams that arise with this infrastructure shift, sharing knowledge, experiences, and best practices for this evolving landscape. This is an open forum; industry experts and newcomers alike are invited to contribute to this vital discussion.
Birds of a Feather
Production & Animation
Research & Education
Education
Full Conference
Experience
DescriptionParticipants will discuss what’s driving virtual production curricula today and industry participation in the classroom, and exchange ideas and feedback on curriculum. Educators teaching virtual production are encouraged to join this discussion, which will include faculty from Drexel University and Texas A&M University, among others, sharing their takeaways on teaching and updating curricula for an industry whose technology is updating at warp speed.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionV3DG achieves real-time rendering of massive 3D Gaussians in large, composed scenes through a novel LOD approach.
Inspired by Nanite, V3DG processes detailed 3D assets into clusters at various granularities offline, and selectively renders 3D Gaussians at runtime—flexibly balancing rendering speed and visual fidelity based on user-defined tolerances.
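As a rough illustration of tolerance-driven LOD selection of the kind described, the sketch below projects each precomputed cluster level's world-space error to screen space and keeps the coarsest level still within the user tolerance. The function and parameter names are illustrative assumptions, not V3DG's actual interface.

```python
def select_lod(cluster_error_per_level, distance, focal_px, tolerance_px):
    """Pick the coarsest LOD level whose projected screen-space error
    stays within the user-defined tolerance (illustrative sketch).

    cluster_error_per_level: world-space simplification error per level,
        level 0 = finest, errors increasing with coarseness.
    """
    chosen = 0
    for level, world_error in enumerate(cluster_error_per_level):
        projected = world_error * focal_px / distance  # pinhole projection
        if projected <= tolerance_px:
            chosen = level   # this coarser level is still acceptable
        else:
            break            # errors grow with level, so stop searching
    return chosen

# Example: errors double per level; at distance 10 with a 1000 px focal
# length and a 1 px tolerance, level 1 is the coarsest acceptable choice.
level = select_lod([0.001, 0.004, 0.016, 0.064], 10.0, 1000.0, 1.0)
print(level)  # -> 1
```

Raising the tolerance lets the renderer pick coarser clusters, which is the speed-versus-fidelity dial the abstract describes.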
Educator's Forum
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Education
Fabrication
Generative AI
Image Processing
Lighting
Pipeline Tools and Work
Real-Time
Rendering
Virtual Reality
Full Conference
Virtual Access
Experience
Wednesday
DescriptionRecent imaging advances have granted CGI enthusiasts unprecedented and affordable access to stereophotogrammetric 3D scanning techniques. This technological democratization has enabled dynamic "gateway" engagements (such as our “Virtualizing the Stanley”), which enable students to explore artistic theory through advanced, but accessible, CGI focused educational activities.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe introduce ViSA (Virtual Stunt Actors), an interactive animation system using deep reinforcement learning to generate realistic ballistic stunt actions. It efficiently produces dynamic scenes commonly seen in films and TV dramas, such as traffic accidents and stairway falls. A novel action space design enables scene generation within minutes.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionThis is a collection of bird sounds reinterpreted into visual sound structures that reflect certain aspects of the subject matter. Each one is meticulously produced in a 3D program called Houdini. The artist hopes to inspire the audience with the beauty of nature and the importance of habitat protection.
Course
Research & Education
Livestreamed
Recorded
Scientific Visualization
Full Conference
Virtual Access
Monday
DescriptionThinking systematically about existing visualization systems provides a good springboard for designing new ones. This course focuses on data and task abstractions and the design choices for visual encoding and interaction; it will not cover algorithms. It encompasses techniques and data types spanning visual analytics, information visualization, and scientific visualization.
Course
New Technologies
Livestreamed
Recorded
Augmented Reality
Performance
Real-Time
Rendering
Virtual Reality
Full Conference
Virtual Access
Thursday
DescriptionA course introducing the challenges of VR graphics and the details of the optimization techniques used in mainstream VR products to tackle those challenges.
Course
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Livestreamed
Recorded
Artificial Intelligence/Machine Learning
Augmented Reality
Digital Twins
Display
Games
Simulation
Virtual Reality
Full Conference
Virtual Access
Sunday
DescriptionCybersickness remains a persistent challenge in VR/XR, hindering user experience and adoption. This course bridges research and industry, equipping developers, designers, and researchers with science-backed strategies to mitigate cybersickness through optimized locomotion, interaction, and environment design. Engage with case studies and activities to create more comfortable, accessible VR/XR experiences.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionVR-Doh is an intuitive VR-based 3D modeling system that lets you sculpt and manipulate soft objects and edit 3D Gaussian Splatting scenes in real time. Combining physics-based simulation and expressive interaction, VR-Doh empowers both novices and experts to create rich, deformable, simulation-ready models with natural hand-based input.
Production Session
Production & Animation
Not Livestreamed
Not Recorded
Animation
Capture/Scanning
Industry Insight
Lighting
Simulation
Full Conference
Wednesday
DescriptionAn exciting and challenging client brief – an infected horde thawing from frozen stasis to overrun the town of Jackson – and a tight timeline for this ambitious work meant our Wētā FX team really had something to sink our teeth into.
VFX Supervisor Nick Epstein and Animation Supervisor Dennis Yoo will take you through how Wētā FX worked with the production VFX team to previs, build, animate, and integrate a thousand-strong horde to the gritty realism of The Last of Us Season 2.
Usually crowd work suggests compromises, but as this talk will show, for this artfully chaotic sequence there were none. Every infected was designed and built to hold up full screen, with full cloth and hair simulation, and a full set of face shapes. Essentially, every crowd character was a hero character, and that presented a lot of challenges.
Taking you onset, Nick will explore how Wētā's Previs was developed with and utilised by the production team to capture some incredibly complex action with confidence that it would translate to a predictable, yet flexible final product.
Extremely variable shooting conditions required innovative solutions, including the development of a robust depth extraction toolkit which enabled varying weather patterns to be inserted into any plate, and the entirely CG horde to easily be intermingled with live action performers.
Flexibility and variation were among the most important design factors to consider in bringing the sequence to life. For this, we required a ‘mix and match’ system that would allow any piece of clothing to be dynamically refitted to any infected, regardless of proportions, and crucially this needed to be represented upfront in animation as well as in renders later.
Dennis will outline the challenging combination of motion capture, keyframe animation, and ragdoll dynamics required to achieve a realistic, but also somewhat inhuman – infected – cadence to horde performances.
This talk will detail the process for building a vast digital assets wardrobe based on client provided scans, and adapting these across the infected horde using a procedural texturing system in lighting, usually utilised earlier in the pipeline by lookdev.
In addition to this, the episode needed extensive cloth and hair simulation at a scale previously not undertaken at Wētā. This was further complicated by the dynamic refitting of garments, as well as the use of ragdoll dynamics, resulting in some (sometimes) comically terrifying situations for our creature team to wrestle through.
We developed a Nuke-based system for ‘weather fixes’ driven by challenging conditions during the Jackson siege shoot. Finally, we’ll talk about how our environment/DMP team handled full replacements - from the snowfields and mountains surrounding Jackson in different weather conditions, to full CG forested snowscapes.
Last but certainly not least, is the return of the iconic Bloater. The new unforgiving environment and intense weather conditions meant that we had to carefully rethink the complex character build, and how to achieve all the menacing details…
Art Paper
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Art
Computer Vision
Dynamics
Education
Fabrication
Generative AI
Geometry
Image Processing
Modeling
Simulation
Spatial Computing
Full Conference
Virtual Access
Experience
Monday
DescriptionThis paper presents a procedural data generation method for Jacquard weaving that uses matrix computations to create textiles with complex shaded patterns and a triple-layer structure. Employing this method, the authors creatively applied noise functions to weave design and produced a textile artwork in collaboration with a traditional craft technique.
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Performance
Real-Time
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionFungi are everywhere, in the air, water, and soil, where they support and mediate between the living and the non-living. We Are Entanglement invites visitors into an immersive, interactive environment in which humans become part of webs of communication running through fungal networks beneath the forest floor. The artwork combines procedural modeling, generative AI, and dynamic simulation of vast numbers of living organisms. It is grounded in an imperative of drawing attention to the importance of nonconscious cognition and interspecies communication in biological and machine senses, as a reminder of the essential broader world around us.
Birds of a Feather
Gaming & Interactive
Animation
Augmented Reality
Digital Twins
Industry Insight
Real-Time
Scientific Visualization
Simulation
Full Conference
Experience
DescriptionThis Birds of a Feather session will focus on recent alliances and technological advancements in the field of 3D graphics and virtual environments. This interactive gathering brings together industry leaders, researchers, and enthusiasts to discuss collaborative efforts and innovative solutions that are shaping the future of web-based 3D content. Attendees can expect engaging presentations and networking opportunities, highlighting the latest trends in interoperability, AI, real-time rendering, and immersive experiences. Together, participants will explore how these developments are enhancing user engagement and fostering a more interconnected digital X3D ecosystem.
Presenters:
Anita Havele
Johannes Behr
Aaron Bergstrom
Mike McCann
Casey Gomez
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionWhen a teenage boy visits his Gramps at a seemingly mundane assisted living facility, he comes to find that they have much more in common than he thought. “Wednesdays with Gramps” is a story about connection, communication, and commonality, without saying a word.
Computer Animation Festival
Not Livestreamed
Not Recorded
DescriptionNexus Studios’ Fx Goby draws a parallel between romantic love and the passion athletes feel for their sports in this launch film for the Olympic Games coverage. Fx and the Nexus Studios team led an artful choreography of 58 shots and 36 athletes in 60 seconds of breathtaking film.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionWe studied preferences for different contrasts and peak luminances in HDR. To do this, we collected a new HDR video dataset, developed tone mappers, and built an HDR haploscope that can reproduce high luminance and contrast. The data were fit to a model that can be used in applications such as display design.
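The study's own tone mappers are not specified in this summary; as background, the classic Reinhard global operator below shows the basic luminance-compression curve that contrast and peak-luminance studies of this kind typically vary. This is a generic textbook operator, not the paper's method.

```python
def reinhard(L, L_white=None):
    """Classic Reinhard global tone-mapping curve for a relative
    luminance value L.  Without L_white, the simple form L / (1 + L)
    asymptotically compresses highlights; with L_white set, the
    extended form maps luminances at or above L_white to exactly 1.0.
    """
    if L_white is None:
        return L / (1.0 + L)
    return L * (1.0 + L / (L_white * L_white)) / (1.0 + L)

print(reinhard(1.0))       # -> 0.5 (mid-scale luminance compressed to half)
print(reinhard(4.0, 4.0))  # -> 1.0 (white point maps to full display output)
```

The white-point parameter is one concrete knob a preference study can sweep, since it trades highlight clipping against overall contrast on a display with a fixed peak luminance.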
Industry Session
Arts & Design
Gaming & Interactive
Production & Animation
Games
Image Processing
Real-Time
Rendering
Simulation
Full Conference
Experience
Exhibits Only
Tuesday
DescriptionCopernicus was released in beta form as part of Houdini 20.5, adding an entirely new context for GPU-accelerated image processing. Since then, the team has been busy addressing existing deficiencies and expanding its functionality to enable new possibilities for artists and tool developers. This talk offers a peek at some of the exciting features coming to Copernicus in Houdini 21. From newly added nodes and interactive workflows to baking and material blending, this presentation will give an overview of the tools available in Copernicus for texture generation and editing. We will also cover other powerful constructs introduced in the upcoming version, such as cables and live simulations, including brand-new solvers implemented within the Copernicus framework. Finally, the presentation will dive into improvements to slap comp, where we will demonstrate how Copernicus and Solaris can work together to help you get the most out of the viewport, or to stylize its look completely.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
DescriptionAfter the summary presentations, attendees will participate in an interactive discussion. Outside the room will be a series of poster boards for authors to gather around with the audience. Authors are invited to bring any material related to their paper that could instigate further conversation such as printouts, posters, demos, or other presentation aids. The interactive discussions provide attendees more interaction with the individual authors.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
DescriptionWe introduce Gaussian-enhanced Surfels (GESs), a bi-scale representation combining opaque surfels and Gaussians for high-fidelity radiance field rendering. GES is entirely sorting free, enabling high-fidelity view-consistent rendering with ultra fast speeds.
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionWe present wide field-of-view VR and passthrough MR headsets with compact form factors that achieve state-of-the-art 180-degree horizontal and 120-degree vertical fields of view.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionOur work is a lightweight static global illumination baking solution that achieves competitive lighting effects while using only approximately 5% of the memory required by mainstream industry techniques. By adopting a vertex-probe structure, we ensure excellent runtime performance, making it suitable for low-end devices.
Course
Arts & Design
Gaming & Interactive
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Animation
Art
Display
Education
Ethics and Society
Games
Generative AI
Modeling
Scientific Visualization
Simulation
Full Conference
DescriptionMisjudging others leads to misjudging the world around us, with unfortunate consequences. When visualizing others, our design choices can reinforce these misbeliefs, or correct them. This course explores the surprising interplay between visual representation and social psychology, and how equity-forward design promotes clear, constructive visualizations of people and social outcomes.
Please note that computers will not be provided, so be sure to bring your own fully charged laptop to fully participate and enjoy the session.
Industry Session
Gaming & Interactive
Production & Animation
Research & Education
Animation
Education
Ethics and Society
Games
Industry Insight
Full Conference
Experience
Exhibits Only
Monday
DescriptionJoin Women of SIGGRAPH Conversations and Netflix for our panel (and solutions-oriented discussion!) with key leaders in tech and entertainment. Through this conversation, we aim to discover how individuals and organizations are igniting career reinvention through their impacts in art, design, science, and tech, all while reshaping the narrative around what it means to be a woman in tech “reinventing her career.”
This session brings together innovators who have navigated rapid technological change to transform challenges into opportunities for growth and advancement. Panelists will share insights on adapting to evolving fields, cultivating resilience, and harnessing new technologies to shape the future of work. Whether you’re seeking inspiration or practical strategies for career reinvention, join us to explore how embracing change can lead to lasting impact and success in a rapidly shifting landscape.
Birds of a Feather
Production & Animation
Animation
Industry Insight
Full Conference
Experience
DescriptionJoin us for a panel featuring Women in Technical/Leadership Roles in the Animation Industry. During this event, panelists will share their career journeys, the challenges they have overcome, how they recharge during tough schedules, resources and tips from their respective areas of expertise, and their thoughts on the future.
Moderator: Heather Brown, Look Development Lead at Weta FX
- Rebecca Bever, Director Production Technology, Walt Disney Animation Studios
- Lisa Connors, Look Development Supervisor, DreamWorks Animation
- Karyn Buczek Monschein, Vice President of Animation Technology at Paramount Animation
- Veronica Costa Orvalho, SIGGRAPH 2025 General Submissions Chair and Founder & CEO of Didimo
Frontiers
Gaming & Interactive
Research & Education
Not Livestreamed
Not Recorded
Games
Rendering
Full Conference
Experience
DescriptionEsports is a unique challenge for rendering research, with players regularly turning off even basic rendering techniques to reduce latency. In this workshop, three esports developers and three competing esports athletes will form an expert panel on esports rendering needs. The workshop will have three parts: a traditional panel session, with questions from a moderator and from the panel itself; an audience discussion session, with groups led by organizers and panel members producing questions and raising issues; and a closing panel session, with the panel addressing the questions raised by the audience.
Frontiers
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Augmented Reality
Generative AI
Physical AI
Robotics
Spatial Computing
Virtual Reality
Full Conference
Experience
DescriptionAs artificial intelligence, mixed reality, and conversational interfaces become deeply embedded in daily life, how are they reshaping human connection and communication? Join human-centered mixed-reality designer Ketki Jadhav and cognitive linguist Aubrie Amstutz for an interactive exploration of communication's transformation. Through speculative dialogue and group activities, we'll examine critical questions: How does mixed reality's partial translation of body language impact group dynamics? Do voice assistants change how we communicate with humans? Can AI avatars fulfill our social needs? This 90-minute workshop combines expert perspectives with hands-on engagement to envision communication's evolving landscape.
Frontiers
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Animation
Art
Capture/Scanning
Generative AI
Pipeline Tools and Work
Real-Time
Virtual Reality
Full Conference
Experience
DescriptionA creator’s play is never done. Our Hybrid Dance Xplorations workshop invites you to play with us as we adventurously explore virtual camera control, motion capture, generative AI, and touchless or gesture-based interaction – in evolving configurations of our XR sandbox for co-creation and performance. We will share previous and current work including three use/play cases for movement-based experiences with contemporary dance and salsa. Presenters include local artists, researchers and technologists; plus "Forest Wild XR" co-creators. Highlights include live PeelDev VCam demos, hands-on interactions and engaging group activities for workshop attendees to contribute to ideas, laughter and ambitions to date.
Frontiers
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Education
Ethics and Society
Generative AI
Pipeline Tools and Work
Rendering
Scientific Visualization
Virtual Reality
Full Conference
Experience
DescriptionThis course introduces the Onboarding Generative AI Canvas to support individuals and teams in organizations to create a road map for understanding how generative AI systems will best support the work that they do, remove obstacles, minimize risks, and accelerate adoption.
Frontiers
Not Livestreamed
Not Recorded
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Digital Twins
Education
Ethics and Society
Generative AI
Image Processing
Industry Insight
Simulation
Virtual Reality
Full Conference
Experience
DescriptionIt’s time for SIGGRAPH to have more impact! The convergence of AI, computer graphics, and immersive media is transforming healthcare. To ensure adoption, these innovations must combine scientific and human-centered perspectives. By bringing together research, technology, and culture, SIGGRAPH should be a core player in this domain.
Join the opening session to hear from leading experts sharing strategies for achieving international impact, a behind-the-scenes look at innovation ecosystems, real case studies, and a human-centered approach. The Consulate General of Switzerland in Vancouver invites you to the networking cocktail, followed by the interactive workshop—help shape SIGGRAPH’s strategy for health innovation!
Frontiers
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Digital Twins
Education
Ethics and Society
Generative AI
Image Processing
Industry Insight
Performance
Physical AI
Real-Time
Scientific Visualization
Simulation
Virtual Reality
Full Conference
Experience
DescriptionHuman creativity has never been more challenged: with the advent of AI-based storytelling and creative tools, new forms of computational creativity are emerging, giving rise to rapid advances across animation, storytelling, and computer graphics. Storyboards can now be created within seconds through AI-based platforms, animations are prompted into existence, and image inputs allow digital doubles to take the lead in feature films. However, risks persist across authorship, accreditation, and royalties, as well as authenticity, individual human expression, and handcrafted digital artistry. Following brief presentations, this workshop invites participants to brainstorm responses to a rapidly evolving field.
Frontiers
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionGaussian splats are a rapidly emerging method for the fast and efficient creation of photorealistic 3D visualizations, and are particularly suitable for real-time applications. Today, a growing number of software solutions support the capture, visualization, editing, and compression of Gaussian splats. However, as different companies adopt varying formats, the risk of ecosystem fragmentation grows.
In this 90-minute Frontier Workshop we will discuss the current technologies, formats, and use cases and investigate the potential for standardization to enable interoperability and sustainable growth.
The workshop will cover:
- A Gaussian Splats 101
- Using Gaussian Splats for Digital Twins
- The state of Gaussian Splats on the Web
- Community engagement via a Panel Discussion
By fostering collaboration among users, tool developers, and engine vendors, this workshop seeks to guide the evolution of Gaussian splat interoperability and help remove pain points, drive adoption, and prevent fragmentation.
Frontiers
New Technologies
Research & Education
Not Livestreamed
Not Recorded
Augmented Reality
Computer Vision
Digital Twins
Industry Insight
Scientific Visualization
Virtual Reality
Full Conference
Experience
DescriptionThis SIGGRAPH Frontiers workshop brings together leading minds in immersive technology and clinical practice to explore what it truly takes to translate XR from lab demos to life-saving tools. Through concise presentations, clinical case studies, and a live, collaborative design session, attendees will engage directly with surgeons and physicians to uncover real-world needs, constraints, and opportunities. Rather than pitching solutions, this session fosters dialogue—inviting developers, researchers, and medical experts to co-design the future of XR in medicine. If you’re interested in meaningful impact, this is where innovative graphics meet practical care.
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
DescriptionThe day the radiation disappears, Simon rushes to the heart of the zone, taking his colleague Agathe with him, in the hope of rediscovering a lost past.
ACM SIGGRAPH 365 - Community Showcase
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Ethics and Society
Full Conference
Experience
DescriptionJoin Women of SIGGRAPH Conversations (WOSC) for breakfast, and an interactive and inspiring session diving into the power of resilience and the art of reinvention in creative and technical careers. Through dynamic ice breakers and collaborative discussions, we’ll share practical strategies for navigating change, the pivotal role communities like WOSC can play, and how tools like AI are opening up new possibilities—from streamlining workflows to sparking innovation. Bring your own stories and questions, and leave with fresh perspective, actionable ideas, and renewed confidence for the transitions and challenges ahead.
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
DescriptionWe introduce xADA, a generative model for creating expressive and realistic animation of the face, tongue, and head directly from speech audio.
The animation maps directly onto MetaHuman compatible rig controls enabling integration into industry-standard content creation pipelines.
xADA generalizes across languages and voice styles, and can animate non-verbal sounds.
Educator's Forum
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Recorded
Animation
Art
Augmented Reality
Digital Twins
Education
Fabrication
Games
Haptics
Performance
Real-Time
Virtual Reality
Full Conference
Virtual Access
Experience
Wednesday
DescriptionXR Performance is a cross-disciplinary course exploring how extended reality (XR) reshapes storytelling, audience interaction, and performance. Through hands-on projects in motion capture, virtual production, and immersive sound design, students develop technical fluency while critically examining XR’s impact on media, expanding the possibilities of digital and physical space.
Educator's Forum
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Recorded
Animation
Art
Augmented Reality
Digital Twins
Education
Fabrication
Games
Haptics
Performance
Real-Time
Virtual Reality
Full Conference
Virtual Access
Experience
Wednesday
DescriptionWe describe the XRLive Project, a cross-discipline, experiential learning opportunity, built upon the Vertically Integrated Project (VIP) approach, that focuses on the production of live musical, theatrical, and dance performances using advanced technologies such as VR/AR/XR and motion capture.
Production Session
Production & Animation
Not Livestreamed
Not Recorded
Animation
Art
Pipeline Tools and Work
Virtual Reality
Full Conference
Tuesday
DescriptionHave you ever wondered what it would be like to fly? To soar among the clouds on an adventure with Peter Pan?
That was the question Walt Disney Imagineering and Walt Disney Animation Studios set out to answer with Peter Pan’s Never Land Adventure, the new attraction which opened in June of 2024 in Tokyo DisneySea’s Fantasy Springs. The development of this major new ride-through adventure took over seven years, with hundreds of artists, technicians, and software developers partnering to get it off the ground.
In this session, our panelists will discuss the collaborative efforts between Walt Disney Imagineering and Walt Disney Animation Studios as they crafted the visually immersive, stereoscopic experience. We’ll go into detail about the visual and story development process, the creation of 3D assets based on the original hand-drawn theatrical film, and the technical innovations created throughout the project. This undertaking was a unique opportunity where Disney Animation artists got to take part in the magic that Walt Disney Imagineering creates every day.
Join us as we discuss the “faith, trust and pixie dust” that ensured a trip to Never Land became a reality.
Sessions
Production Session
Not Livestreamed
Not Recorded
Full Conference
Birds of a Feather
[title will be taken from first slot]
1:00pm - 2:00pm PDT Monday, 11 August 2025
Production & Animation
Industry Insight
Lighting
Modeling
Pipeline Tools and Work
Full Conference
Experience
Birds of a Feather
[title will be taken from first slot]
1:30pm - 4:30pm PDT Tuesday, 12 August 2025 East Building, Room 8
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Computer Vision
Digital Twins
Ethics and Society
Games
Industry Insight
Modeling
Physical AI
Real-Time
Robotics
Virtual Reality
Full Conference
Experience
Production Session
Production & Animation
Not Livestreamed
Not Recorded
Animation
Capture/Scanning
Industry Insight
Lighting
Simulation
Full Conference
Wednesday
Production Session
Not Livestreamed
Not Recorded
Full Conference
Art Paper
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Games
Generative AI
Performance
Physical AI
Real-Time
Robotics
Simulation
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Experience
Monday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
Talk
Production & Animation
Livestreamed
Recorded
Animation
Art
Dynamics
Geometry
Modeling
Pipeline Tools and Work
Rendering
Simulation
Full Conference
Virtual Access
Thursday
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
Computer Animation Festival
Not Livestreamed
Not Recorded
Full Conference
Experience
Appy Hour
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Computer Vision
Education
Games
Generative AI
Image Processing
Lighting
Pipeline Tools and Work
Real-Time
Spatial Computing
Full Conference
Experience
Connection Lounge Meet-Up
Full Conference
Experience
Exhibits Only
Stage Session
Gaming & Interactive
New Technologies
Artificial Intelligence/Machine Learning
Games
Graphics Systems Architecture
Image Processing
Performance
Real-Time
Full Conference
Experience
Exhibits Only
Tuesday
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Performance
Real-Time
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Performance
Real-Time
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Performance
Real-Time
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Art Gallery
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Performance
Real-Time
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
Industry Session
Arts & Design
Production & Animation
Art
Geometry
Image Processing
Modeling
Pipeline Tools and Work
Simulation
Full Conference
Experience
Exhibits Only
Tuesday
Talk
Arts & Design
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Computer Vision
Industry Insight
Lighting
Modeling
Pipeline Tools and Work
Real-Time
Rendering
Full Conference
Virtual Access
Sunday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
Talk
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Dynamics
Geometry
Industry Insight
Modeling
Pipeline Tools and Work
Rendering
Simulation
Full Conference
Virtual Access
Sunday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Games
Modeling
Rendering
Full Conference
Experience
Educator's Forum
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Education
Fabrication
Generative AI
Image Processing
Lighting
Pipeline Tools and Work
Real-Time
Rendering
Virtual Reality
Full Conference
Virtual Access
Experience
Wednesday
Art Paper
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Art
Computer Vision
Dynamics
Education
Fabrication
Generative AI
Geometry
Image Processing
Modeling
Simulation
Spatial Computing
Full Conference
Virtual Access
Experience
Monday
Talk
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Augmented Reality
Computer Vision
Digital Twins
Education
Fabrication
Games
Geometry
Modeling
Real-Time
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Tuesday
Talk
Arts & Design
Gaming & Interactive
Production & Animation
Research & Education
Livestreamed
Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Dynamics
Games
Lighting
Math Foundations and Theory
Pipeline Tools and Work
Simulation
Full Conference
Virtual Access
Thursday
Art Paper
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Art
Artificial Intelligence/Machine Learning
Audio
Digital Twins
Ethics and Society
Games
Generative AI
Real-Time
Virtual Reality
Full Conference
Virtual Access
Experience
Tuesday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
Stage Session
New Technologies
Artificial Intelligence/Machine Learning
Digital Twins
Generative AI
Pipeline Tools and Work
Full Conference
Experience
Exhibits Only
Wednesday
Production Session
Not Livestreamed
Not Recorded
Full Conference
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
Connection Lounge Meet-Up
Full Conference
Experience
Exhibits Only
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
Connection Lounge Meet-Up
Full Conference
Experience
Exhibits Only
Computer Animation Festival
Not Livestreamed
Not Recorded
Computer Animation Festival
Not Livestreamed
Not Recorded
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Emerging Technologies
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Fabrication
Games
Generative AI
Haptics
Hardware
Image Processing
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Talk
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Dynamics
Geometry
Pipeline Tools and Work
Rendering
Simulation
Full Conference
Virtual Access
Wednesday
Talk
Arts & Design
Gaming & Interactive
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Artificial Intelligence/Machine Learning
Augmented Reality
Display
Games
Geometry
Lighting
Pipeline Tools and Work
Real-Time
Full Conference
Virtual Access
Sunday
Connection Lounge Meet-Up
Full Conference
Experience
Exhibits Only
Exhibition
Full Conference
Experience
Exhibits Only
Exhibition
Full Conference
Experience
Exhibits Only
Exhibition
Full Conference
Experience
Exhibits Only
Production Session
Not Livestreamed
Not Recorded
Full Conference
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
Talk
Gaming & Interactive
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Dynamics
Games
Image Processing
Industry Insight
Lighting
Pipeline Tools and Work
Real-Time
Simulation
Full Conference
Virtual Access
Tuesday
Educator's Forum
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Education
Ethics and Society
Games
Generative AI
Lighting
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Virtual Reality
Full Conference
Virtual Access
Experience
Tuesday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
Connection Lounge Meet-Up
Full Conference
Experience
Exhibits Only
Talk
Arts & Design
Gaming & Interactive
New Technologies
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Digital Twins
Display
Education
Games
Generative AI
Geometry
Graphics Systems Architecture
Image Processing
Modeling
Pipeline Tools and Work
Real-Time
Rendering
Simulation
Virtual Reality
Full Conference
Virtual Access
Wednesday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
Birds of a Feather
Arts & Design
Gaming & Interactive
New Technologies
Artificial Intelligence/Machine Learning
Digital Twins
Ethics and Society
Pipeline Tools and Work
Full Conference
Experience
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Education
Ethics and Society
Games
Generative AI
Haptics
Modeling
Performance
Real-Time
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Education
Ethics and Society
Games
Generative AI
Haptics
Modeling
Performance
Real-Time
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Education
Ethics and Society
Games
Generative AI
Haptics
Modeling
Performance
Real-Time
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Immersive Pavilion
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Animation
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Computer Vision
Education
Ethics and Society
Games
Generative AI
Haptics
Modeling
Performance
Real-Time
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
Job Fair
Full Conference
Experience
Exhibits Only
Job Fair
Full Conference
Experience
Exhibits Only
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
Educator's Forum
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Recorded
Animation
Art
Augmented Reality
Digital Twins
Education
Fabrication
Games
Haptics
Performance
Real-Time
Virtual Reality
Full Conference
Virtual Access
Experience
Wednesday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
Talk
Arts & Design
Gaming & Interactive
Production & Animation
Livestreamed
Recorded
Not Recorded
Animation
Digital Twins
Industry Insight
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Full Conference
Virtual Access
Sunday
Industry Session
New Technologies
Production & Animation
Animation
Pipeline Tools and Work
Simulation
Full Conference
Experience
Exhibits Only
Tuesday
ACM SIGGRAPH 365 - Community Showcase
Not Livestreamed
Not Recorded
Full Conference
Experience
Connection Lounge Meet-Up
Full Conference
Experience
Exhibits Only
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
Talk
Arts & Design
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Artificial Intelligence/Machine Learning
Capture/Scanning
Computer Vision
Generative AI
Geometry
Image Processing
Lighting
Pipeline Tools and Work
Rendering
Full Conference
Virtual Access
Thursday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
Industry Session
Production & Animation
Rendering
Simulation
Full Conference
Experience
Exhibits Only
Tuesday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
Talk
Arts & Design
Production & Animation
Livestreamed
Recorded
Animation
Art
Geometry
Industry Insight
Lighting
Modeling
Pipeline Tools and Work
Rendering
Full Conference
Virtual Access
Wednesday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
Industry Session
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Digital Twins
Generative AI
Geometry
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Simulation
Full Conference
Experience
Exhibits Only
Thursday
Industry Session
New Technologies
Production & Animation
Animation
Art
Digital Twins
Education
Geometry
Industry Insight
Modeling
Performance
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Simulation
Full Conference
Experience
Exhibits Only
Tuesday
Industry Session
New Technologies
Artificial Intelligence/Machine Learning
Digital Twins
Generative AI
Hardware
Physical AI
Robotics
Simulation
Full Conference
Experience
Exhibits Only
Monday
Industry Session
Gaming & Interactive
Games
Graphics Systems Architecture
Hardware
Performance
Real-Time
Rendering
Full Conference
Experience
Exhibits Only
Wednesday
Industry Session
New Technologies
Research & Education
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Generative AI
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Experience
Exhibits Only
Wednesday
Industry Session
Production & Animation
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Digital Twins
Games
Generative AI
Geometry
Graphics Systems Architecture
Hardware
Image Processing
Industry Insight
Lighting
Performance
Physical AI
Pipeline Tools and Work
Real-Time
Rendering
Robotics
Simulation
Virtual Reality
Full Conference
Experience
Exhibits Only
Tuesday
Connection Lounge Meet-Up
Full Conference
Experience
Exhibits Only
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
Talk
Arts & Design
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Recorded
Animation
Dynamics
Pipeline Tools and Work
Rendering
Simulation
Full Conference
Virtual Access
Wednesday
Art Paper
Technical Paper
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Fabrication
Games
Generative AI
Geometry
Image Processing
Modeling
Performance
Physical AI
Real-Time
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Experience
Sunday
Pathfinders
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Pathfinders
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Pathfinders
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Pathfinders
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Pathfinders
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Pathfinders
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Poster
Full Conference
Experience
Industry Session
New Technologies
Production & Animation
Animation
Full Conference
Experience
Exhibits Only
Tuesday
Talk
Gaming & Interactive
New Technologies
Production & Animation
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Digital Twins
Dynamics
Games
Generative AI
Modeling
Performance
Real-Time
Rendering
Simulation
Virtual Reality
Full Conference
Virtual Access
Sunday
Real-Time Live!
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Capture/Scanning
Computer Vision
Digital Twins
Display
Dynamics
Education
Ethics and Society
Games
Generative AI
Geometry
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Tuesday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
Talk
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Capture/Scanning
Digital Twins
Dynamics
Geometry
Lighting
Modeling
Performance
Pipeline Tools and Work
Real-Time
Rendering
Simulation
Full Conference
Virtual Access
Thursday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
Connection Lounge Meet-Up
Full Conference
Experience
Exhibits Only
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
Production Session
Not Livestreamed
Not Recorded
Full Conference
Talk
Arts & Design
Research & Education
Livestreamed
Recorded
Art
Artificial Intelligence/Machine Learning
Augmented Reality
Capture/Scanning
Display
Hardware
Image Processing
Physical AI
Real-Time
Scientific Visualization
Spatial Computing
Full Conference
Virtual Access
Sunday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
Stage Session
New Technologies
Production & Animation
Capture/Scanning
Digital Twins
Image Processing
Rendering
Simulation
Full Conference
Experience
Exhibits Only
Wednesday
Technical Paper
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Connection Lounge Meet-Up
Full Conference
Experience
Exhibits Only
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Wednesday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
Production Session
Not Livestreamed
Not Recorded
Full Conference
Talk
Arts & Design
Gaming & Interactive
Production & Animation
Research & Education
Livestreamed
Recorded
Art
Digital Twins
Ethics and Society
Games
Hardware
Industry Insight
Pipeline Tools and Work
Real-Time
Full Conference
Virtual Access
Sunday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
Technical Paper
Technical Papers Closing Session
5:15pm - 5:30pm PDT Thursday, 14 August 2025 West Building, Rooms 211-214
Arts & Design
Production & Animation
Research & Education
Not Livestreamed
Not Recorded
Full Conference
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
Art Paper
Arts & Design
Gaming & Interactive
New Technologies
Production & Animation
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Animation
Art
Artificial Intelligence/Machine Learning
Audio
Augmented Reality
Digital Twins
Games
Generative AI
Modeling
Performance
Real-Time
Robotics
Scientific Visualization
Simulation
Spatial Computing
Virtual Reality
Full Conference
Virtual Access
Experience
Tuesday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
Production Session
Not Livestreamed
Not Recorded
Full Conference
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
Production Session
Not Livestreamed
Not Recorded
Full Conference
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
Stage Session
Gaming & Interactive
New Technologies
Artificial Intelligence/Machine Learning
Games
Graphics Systems Architecture
Performance
Real-Time
Full Conference
Experience
Exhibits Only
Tuesday
Connection Lounge Meet-Up
Full Conference
Experience
Exhibits Only
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Thursday
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Tuesday
Connection Lounge Meet-Up
Full Conference
Experience
Exhibits Only
Technical Paper
Research & Education
Livestreamed
Not Livestreamed
Recorded
Not Recorded
Full Conference
Virtual Access
Monday
Production Session
Not Livestreamed
Not Recorded
Full Conference