1-100 of about 172 matches for site:arxiv.org dynamic
https://arxiv.org/abs/2406.08332
[2406.08332] UDON: Universal Dynamic Online distillatioN for generic image representations
https://arxiv.org/abs/2208.08349
[2208.08349] Open Long-Tailed Recognition in a Dynamic World
https://arxiv.org/abs/2309.04581
[2309.04581] Dynamic Mesh-Aware Radiance Fields
https://arxiv.org/abs/2501.10021
[2501.10021] X-Dyna: Expressive Dynamic Human Image Animation
https://arxiv.org/abs/2412.04463
[2412.04463] MegaSaM: Accurate, Fast, and Robust Structure and Motion from Casual Dynamic Videos
https://arxiv.org/abs/2405.14868
[2405.14868] Generative Camera Dolly: Extreme Monocular Dynamic Novel View Synthesis
https://arxiv.org/abs/2408.11375
[2408.11375] Bootstrapping Dynamic APSP via Sparsification
https://arxiv.org/abs/2408.11368
[2408.11368] A Simple Dynamic Spanner via APSP
https://arxiv.org/abs/2407.17470
[2407.17470] SV4D: Dynamic 3D Content Generation with Multi-Frame and Multi-View Consistency
https://arxiv.org/abs/2302.06548
[2302.06548] Automatic Noise Filtering with Dynamic Sparse Training in Deep Reinforcement Learning
https://arxiv.org/abs/2311.06402
[2311.06402] A Dynamic Shortest Paths Toolbox: Low-Congestion Vertex Sparsifiers and their Applications
https://arxiv.org/abs/2303.05703
[2303.05703] MovingParts: Motion-based 3D Part Discovery in Dynamic Radiance Field
https://arxiv.org/abs/2412.03079
[2412.03079] Align3R: Aligned Monocular Depth Estimation for Dynamic Videos
https://arxiv.org/abs/2304.01159
[2304.01159] DribbleBot: Dynamic Legged Manipulation in the Wild
https://arxiv.org/abs/2404.12379
[2404.12379] Dynamic Gaussians Mesh: Consistent Mesh Reconstruction from Dynamic Scenes
https://arxiv.org/abs/2412.07755
[2412.07755] SAT: Dynamic Spatial Aptitude Training for Multimodal Language Models
https://arxiv.org/abs/2412.15199
[2412.15199] LiDAR-RT: Gaussian-based Ray Tracing for Dynamic LiDAR Re-simulation
https://arxiv.org/abs/2211.11082
[2211.11082] DynIBaR: Neural Dynamic Image-Based Rendering
https://arxiv.org/abs/2404.11483
[2404.11483] AgentKit: Structured LLM Reasoning with Dynamic Graphs
https://arxiv.org/abs/2312.03029
[2312.03029] HHAvatar: Gaussian Head Avatar with Dynamic Hairs
https://arxiv.org/abs/2002.09127
[2002.09127] Learning Dynamic Belief Graphs to Generalize on Text-Based Games
https://arxiv.org/abs/2312.10904
[2312.10904] Dynamic Retrieval Augmented Generation of Ontologies using Artificial Intelligence (DRAGON-AI)
https://arxiv.org/abs/2309.05655
[2309.05655] Dynamic Handover: Throw and Catch with Bimanual Hands
https://arxiv.org/abs/2410.18912
[2410.18912] Dynamic 3D Gaussian Tracking for Graph-Based Neural Dynamics Modeling
https://arxiv.org/abs/2309.16118
[2309.16118] D$^3$Fields: Dynamic 3D Descriptor Fields for Zero-Shot Generalizable Rearrangement
https://arxiv.org/abs/2306.16700
[2306.16700] Dynamic-Resolution Model Learning for Object Pile Manipulation
https://arxiv.org/abs/2504.17788
[2504.17788] Dynamic Camera Poses and Where to Find Them
https://arxiv.org/abs/2302.01607
[2302.01607] dynamite: An R Package for Dynamic Multivariate Panel Models
https://arxiv.org/abs/2008.07012
[2008.07012] DyStaB: Unsupervised Object Segmentation via Dynamic-Static Bootstrapping
https://arxiv.org/abs/2010.00560
[2010.00560] Dynamic Facial Asset and Rig Generation from a Single Scan
https://arxiv.org/abs/1612.07837
[1612.07837] SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
https://arxiv.org/abs/1205.4788
[1205.4788] Dynamic Logics of Dynamical Systems
https://arxiv.org/abs/2112.06904
[2112.06904] HVH: Learning a Hybrid Neural Volumetric Representation for Dynamic Hair Performance Capture
https://arxiv.org/abs/2411.18613
Abstract: We present CAT4D, a method for creating 4D (dynamic 3D) scenes from monocular
https://arxiv.org/abs/2504.13152
the World, by Haiwen Feng and 7 other authors. Abstract: Dynamic 3D reconstruction and
https://arxiv.org/abs/2106.02636
world of static images, allowing models to reason about the dynamic context behind visual scenes
https://arxiv.org/abs/2412.13196
challenge of enabling real-world humanoid robots to perform expressive and dynamic whole-body motions while
https://arxiv.org/abs/2112.03857
object detection tasks, a 1-shot GLIP rivals with a fully-supervised Dynamic Head. Code is released
https://arxiv.org/abs/2308.07903
relightable and animatable neural avatars from sparse-view (or even monocular) videos of dynamic humans under unknown illumination
https://arxiv.org/abs/2310.11448
Abstract: This paper targets high-fidelity and real-time view synthesis of dynamic 3D scenes at 4K
https://arxiv.org/abs/2110.11712
$\Omega(n^{3/2})$ that applied to all previous approaches for partially-dynamic SSSP [STOC'14, SODA
https://arxiv.org/abs/2312.09138
and Konrad Schindler and Iro Armeni. Abstract: Research into dynamic 3D scene understanding has
https://arxiv.org/abs/1904.05160
while acknowledging the novelty of the open world. Our so-called dynamic meta-embedding combines a
https://arxiv.org/abs/2311.02542
densely capture walkable spaces in high fidelity and with multi-view high dynamic range images in
https://arxiv.org/abs/2308.06595
4 reference in just 27% of the comparison. VisIT-Bench is dynamic to participate
https://arxiv.org/abs/2411.00773
based on customizable first-order logic (FOL) for an urban-like environment with multiple dynamic agents. LogiCity models diverse
https://arxiv.org/abs/2304.10530
regarding the latent denoising steps, where bilateral connections can be established upon. Specifically, we propose dynamic diffuser, a meta
https://arxiv.org/abs/2407.10830
computes the flow via a sequence of $m^{1+o(1)}$ dynamic min-ratio cut problems
https://arxiv.org/abs/2107.00773
candidates for navigating challenging environments because of their agile and dynamic designs. This paper presents
https://arxiv.org/abs/2208.14023
environment, one's motion may also be influenced by the motion and dynamic movements of others
https://arxiv.org/abs/2307.07947
turn to language as a source of supervision for dynamic traffic scene generation. Our
https://arxiv.org/abs/2502.09614
and imitation learning to boost the controller's performance in dynamic environments. At the
https://arxiv.org/abs/2106.13228
with unprecedented fidelity, and various recent works have extended NeRF to handle dynamic scenes. A common
https://arxiv.org/abs/2111.11432
the representations from coarse (scene) to fine (object), from static (images) to dynamic (videos), and from
https://arxiv.org/abs/2311.16854
and images. However, the challenging problem of text-to-4D dynamic 3D scene generation with
https://arxiv.org/abs/1403.2805
emphasize the importance of type consistency when using JSON to exchange dynamic data, and illustrate
https://arxiv.org/abs/2403.17920
Abstract: Recent techniques for text-to-4D generation synthesize dynamic 3D scenes using supervision
https://arxiv.org/abs/2311.17984
to-image and text-to-video models to generate dynamic 3D scenes. However, current
https://arxiv.org/abs/2412.09621
Jin and 5 other authors. Abstract: Learning to understand dynamic 3D scenes from imagery
https://arxiv.org/abs/2411.18673
for camera control learning with a curated dataset of 20K diverse, dynamic videos with stationary cameras
https://arxiv.org/abs/2504.05304
mixture flow matching (GMFlow) model: instead of predicting the mean, GMFlow predicts dynamic Gaussian mixture (GM) parameters
https://arxiv.org/abs/2203.05557
each image an input-conditional token (vector). Compared to CoOp's static prompts, our dynamic prompts adapt to
https://arxiv.org/abs/2505.03729
descents, sitting and standing from chairs and benches, as well as other dynamic whole-body skills-all
https://arxiv.org/abs/2409.00138
of cases, even when prompted with privacy-enhancing instructions. We also demonstrate the dynamic nature of PrivacyLens
https://arxiv.org/abs/2303.06624
experiment results reveal their effectiveness and reliability in complex and dynamic environments.
https://arxiv.org/abs/2001.04642
Our approach yields state of the art view synthesis techniques, operates on low dynamic range imagery, and
https://arxiv.org/abs/2110.06648
the system to efficiently and robustly collect trolleys in dynamic and complex