1-53 of about 53 matches for site:arxiv.org capable
https://arxiv.org/abs/2310.07018
[2310.07018] NEWTON: Are Large Language Models Capable of Physical Reasoning?
https://arxiv.org/abs/2312.11805
[2312.11805] Gemini: A Family of Highly Capable
https://arxiv.org/html/2404.14219v1
Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone
https://arxiv.org/abs/2207.10662
no deep features and no NeRF-like volume rendering are needed. Our method is capable of predicting
https://arxiv.org/abs/2506.07643
remains a challenge. We introduce ROBIN: an MLM instruction-tuned with densely annotated relationships capable of constructing
https://arxiv.org/abs/2308.16891
is a long-standing problem in robotics to develop agents capable of executing
https://arxiv.org/abs/2501.10021
enhanced realism in animated scenes. Together, these components form a unified framework capable of learning
https://arxiv.org/abs/2302.06833
2 allows for the generation of new 3D scenes. VQ3D is capable of generating
https://arxiv.org/abs/2312.13216
progress in self-supervised representation learning has resulted in models that are capable of extracting
https://arxiv.org/abs/2312.04547
language as the universal medium to build autonomous 3D characters, who are capable of engaging
https://arxiv.org/abs/2412.16776
less memory than previous approaches. Building on this innovation, we present a reconstruction algorithm capable of generating
https://arxiv.org/abs/1612.00796
the development of artificial intelligence. Neural networks are not, in general, capable of this
https://arxiv.org/abs/2304.02602
input and, even in the presence of ambiguity, is capable of rendering
https://arxiv.org/abs/2312.17173
vacuous generalization bounds for pretrained large language models (LLMs), indicating that language models are capable of discovering
https://arxiv.org/abs/2402.14547
optimization databases in the world, our extensive experiments demonstrate that language models are capable of very
http://arxiv.org/abs/2308.02151
in which large language models (LLMs) are augmented to become autonomous language agents capable of performing
http://arxiv.org/abs/2011.04000
language generation models to generate affective (emotional) text. We posit a model capable of generating
https://arxiv.org/abs/2403.17246
models (LLMs) directly used for inferring plan steps rarely guarantee execution success, but are capable of leveraging
https://arxiv.org/abs/2403.14621
Abstract: We introduce GRM, a large-scale reconstructor capable of recovering
https://arxiv.org/abs/2210.15663
representing 3D data poses significantly greater challenges. Ideally, a robust 3D representation should be capable of accurately
https://arxiv.org/abs/1805.07869
Recurrent Neural Networks, by John Clemens. Abstract: Recurrent neural networks (RNNs) are powerful constructs capable of modeling
https://arxiv.org/abs/2505.03738
we construct a hybrid AMO dataset and train a network capable of robust
https://arxiv.org/abs/2503.16413
By integrating 3D Gaussian Splatting techniques with foundation models, M3 builds a multimodal memory capable of rendering
https://arxiv.org/abs/2206.08655
such, IFA implicitly aligns the feature maps at different levels and is capable of producing
https://arxiv.org/abs/2403.04115
action sequences from different tasks via the diffusion process, the model is capable of distinguishing
https://arxiv.org/abs/2501.12387
and 4 other authors. Abstract: We present a unified framework capable of solving
https://arxiv.org/abs/2403.10518
and 6 other authors. Abstract: We propose Lodge, a network capable of generating
https://arxiv.org/abs/1102.2331
three domains of life. Subsequently, we proceed to recent developments on models capable of more
http://arxiv.org/abs/1001.0036
finds their causal state models (CSMs), the minimal hidden Markov models or stochastic automata capable of generating
https://arxiv.org/abs/2405.15071
to implicitly reason over parametric knowledge, a skill that even the most capable language models struggle with
https://arxiv.org/abs/2209.00579
control parameters directly for task performance by leveraging a design-conditioned controller capable of generalizing
https://arxiv.org/abs/1709.02349
for Learning Algorithms (MILA) for the Amazon Alexa Prize competition. MILABOT is capable of conversing
http://arxiv.org/abs/1107.1286
1.2% of all stars host a planet that may have been capable of supporting
https://arxiv.org/abs/2201.00411
have much larger fields of view and brain circuitry capable of understanding
https://arxiv.org/abs/2404.08144
in the CVE description. When given the CVE description, GPT-4 is capable of exploiting
https://arxiv.org/abs/2502.10090
instruction manuals. This work marks a step forward in advancing robotic systems capable of understanding
https://arxiv.org/abs/2410.10803
by Yanjie Ze and 7 other authors. Abstract: Humanoid robots capable of autonomous
https://arxiv.org/abs/2307.15043
Abstract: Because "out-of-the-box" large language models are capable of generating
https://arxiv.org/abs/2212.08073
Yuntao Bai and 50 other authors. Abstract: As AI systems become more capable, we would like to
https://arxiv.org/abs/2011.12948
Park and 6 other authors. Abstract: We present the first method capable of photorealistically
https://arxiv.org/abs/2306.07970
In this work, we aim to reconstruct a time-varying 3D model, capable of rendering
https://arxiv.org/abs/2006.14769
authors. Abstract: We present the Supermasks in Superposition (SupSup) model, capable of sequentially
https://arxiv.org/abs/2306.11698
on the trustworthiness of GPT models remains limited, practitioners have proposed employing capable GPT models for
https://arxiv.org/abs/2303.08721
and other endeavors. Advancements in artificial intelligence (AI) have produced AI systems that are capable of persuading
https://arxiv.org/abs/2111.02080
In contrast to messy large-scale datasets used to train LMs capable of in
https://arxiv.org/abs/2404.10667
static image and a speech audio clip. Our premiere model, VASA-1, is capable of not
https://arxiv.org/abs/2305.15581
other authors. Abstract: Text-to-image diffusion models are now capable of generating
https://arxiv.org/abs/2301.04183
analyzed by machines, there is demand for a new codec paradigm that is capable of compressing
https://arxiv.org/abs/2401.16437
detection are developed and compared, including a novel deep learning (DL) architecture capable of processing
https://arxiv.org/abs/2207.10456
the matching of low-level features. In contrast, human vision is capable of distinguishing
https://arxiv.org/abs/2202.07785
Large-scale pre-training has recently emerged as a technique for creating capable, general purpose, generative models
https://arxiv.org/abs/1710.03748
the complexity of the environment. This suggests that a highly capable agent requires a
https://arxiv.org/abs/2108.07258v1
understanding of how they work, when they fail, and what they are even capable of due