1-75 of about 75 matches for site:arxiv.org patterns
http://arxiv.org/abs/1510.05434
Journal reference: Discrete Mathematics & Theoretical Computer Science, Vol. 18 no. 2, Permutation Patterns 2015
https://arxiv.org/abs/1501.01990
[1501.01990] Stable localized moving patterns in the 2-D Gray-Scott model
https://arxiv.org/abs/1302.2274
[1302.2274] Quadrant marked mesh patterns in 132-avoiding permutations II
[2212.04801] A primer on twistronics: A massless Dirac fermion's journey to moiré patterns and flat
https://arxiv.org/abs/2212.04801
A primer on twistronics: A massless Dirac fermion's journey to moiré patterns and flat
https://arxiv.org/abs/2105.09377
[2105.09377] Pure Tensor Program Rewriting via Access Patterns (Representation Pearl)
https://arxiv.org/abs/2412.12463
images. By using a pattern analogy -- a pair of simple patterns to demonstrate
https://arxiv.org/abs/2412.03937
AIpparel, a multimodal foundation model for generating and editing sewing patterns. Our model fine-tunes
[2307.13108] An Explainable Geometric-Weighted Graph Attention Network for Identifying Functional Networks Associated with Gait Impairment
https://arxiv.org/abs/2307.13108
functional MRI (rs-fMRI) dataset of individuals with PD, xGW-GAT identifies functional connectivity patterns associated with gait impairment
https://arxiv.org/abs/2503.04036
watermarking in language models injects traceable signals, such as specific token sequences or stylistic patterns, into copyrighted text, allowing
https://arxiv.org/abs/1708.07239
boost a human fact checker's productivity by surfacing relevant facts and patterns to aid
https://arxiv.org/abs/2502.03461
simple tasks such as elementary-level math word problems. Analyzing these failures further reveals previously unidentified patterns of problems
https://arxiv.org/abs/2303.15046
first reflective flare removal dataset called BracketFlare, which contains diverse and realistic reflective flare patterns. We use continuous bracketing
https://arxiv.org/abs/2408.02752
use them to summarize the data by mining for visual patterns. Concretely, we show that
https://arxiv.org/abs/2503.21581
where complex optical distortions are common. Existing methods often rely on pre-rectified images or calibration patterns, which limits their applicability
https://arxiv.org/abs/2312.04966
specific movements as input, our method learns and generalizes the input motion patterns for diverse
https://arxiv.org/abs/2312.13328
probes, and a basis factor (i.e., M) - efficiently encoding internal relationships and patterns within the scene
https://arxiv.org/abs/1707.03501
the construction should offer very significant insights into the internal representation of patterns by deep networks. If
[2312.01429] Transformers are uninterpretable with myopic methods: a case study with bounded Dyck grammars
https://arxiv.org/abs/2312.01429
of the model, such as the weight matrices or the attention patterns. In this
[2404.08634] When Attention Collapses: How Degenerate Layers in LLMs Enable Smaller, Stronger Models
https://arxiv.org/abs/2404.08634
the deeper layers, attention matrices frequently collapse to near rank-one, single-column patterns. We refer to
https://arxiv.org/abs/1805.07457
structural reasoning mostly through structure priors in a cooperative way where co-occurring patterns are encouraged. We, on
https://arxiv.org/abs/2406.07754
with HOI awareness; the model learns to adjust the interaction patterns, such as the
[1512.03044] Enumeration and investigation of acute 0/1-simplices modulo the action of the hyperoctahedral group
https://arxiv.org/abs/1512.03044
generated by our code from a mathematical perspective. One of the patterns observed in the
[2501.00712] Rethinking Addressing in Language Models via Contextualized Equivariant Positional Encoding
https://arxiv.org/abs/2501.00712
diminish the effectiveness of position-based addressing. Many current methods enforce rigid patterns in attention
https://arxiv.org/abs/2412.09626
However, these methods are still prone to producing low-quality visual content with repetitive patterns. The key
https://arxiv.org/abs/2304.03442
fusing large language models with computational, interactive agents, this work introduces architectural and interaction patterns for enabling
https://arxiv.org/abs/2401.06209
similar despite their clear visual differences. With these pairs, we construct the Multimodal Visual Patterns (MMVP) benchmark. MMVP exposes
https://arxiv.org/abs/2308.03620
and supervised learning. Concretely, the former employs contrastive learning to acquire underlying patterns from large-scale unlabeled
https://arxiv.org/abs/2412.07660
to develop design rules. Recent generative models for building creation often overlook these patterns, leading to low
[2403.10518] Lodge: A Coarse to Fine Diffusion Network for Long Dance Generation Guided by the Characteristic Dance Primitives
https://arxiv.org/abs/2403.10518
dance sequences of extremely long length, striking a balance between global choreographic patterns and local
http://arxiv.org/abs/nlin/0307015
complex systems science. As a first step, I distinguish among the broad patterns which recur across complex
http://arxiv.org/abs/1609.01782
the permutations in $\mathfrak{S}_n$ avoiding a set of patterns $\Pi$. For various
https://arxiv.org/abs/2308.10214
in graphs, matroid bases, order ideals and linear extensions in posets, permutation patterns, and the
https://arxiv.org/abs/2111.01674
commonly studied and expressed as a discrete set of gait patterns, like walk, trot, gallop
https://arxiv.org/abs/2307.09553
type of a stream should be able to express complex sequential patterns of events
https://arxiv.org/abs/2111.07832
the state-of-the-art image classification results, we underline emerging local semantic patterns, which helps the
https://arxiv.org/abs/1204.0513
significantly modifying the formation and longevity of the resulting patterns. Contrary to expectations
https://arxiv.org/abs/2310.08810
microscopy. We provide clear experimental signatures in local mapping and quasiparticle interference patterns that visualize the
https://arxiv.org/abs/2506.08010
responsible for concentrating high-norm activations on outlier tokens, leading to irregular attention patterns and degrading
https://arxiv.org/abs/2004.03623
needs to be encouraged to learn about repeating and consistent patterns in data
https://arxiv.org/abs/2306.06070
simplified ones, and 3) a broad spectrum of user interaction patterns. Based on Mind2Web, we
https://arxiv.org/abs/2001.05658
networks of coordinated Twitter accounts by examining their identities, images, hashtag sequences, retweets, or temporal patterns. The proposed
https://arxiv.org/abs/2502.12152
of learning to humanoid locomotion, the getting-up task involves complex contact patterns (which necessitates accurately modeling
https://arxiv.org/abs/2407.19451
previous work that jointly models the global hair structure and local curl patterns, we propose to
[2504.13587] RAG Without the Lag: Interactive Debugging for Retrieval-Augmented Generation Pipelines
https://arxiv.org/abs/2504.13587
contribute the design and implementation of RAGGY, insights into expert debugging patterns through a qualitative
https://arxiv.org/abs/1805.09155
detection approaches generally focus on only one aspect of advertising or tracking (e.g. URL patterns, code structure), making existing
https://arxiv.org/abs/2506.00317
each for a total of 19 -- and extract common patterns regarding their mishandling of
https://arxiv.org/abs/2308.02645
on the spectral type of the donor star; these classes exhibit different patterns of X
https://arxiv.org/abs/2211.07638
camera. The small size of the robot necessitates discovering specialized gait patterns not seen elsewhere. The
https://arxiv.org/abs/2007.11678
estimation tasks on dynamic motions of dancing and sports with complex contact patterns. Comments: ECCV 2020 Subjects
https://arxiv.org/abs/2203.02231
Aware Loss, named OPAL, which successfully extracts and encodes the general occlusion patterns inherent in the
https://arxiv.org/abs/2304.00341
correlations between scene points, regions, or entities -- aiming to capture their mutual co-variation patterns. In contrast
https://arxiv.org/abs/2402.08576
take into consideration the additional information available to each player (e.g. traffic patterns, weather conditions, network congestion
https://arxiv.org/abs/2505.15962
Neural language models are black-boxes -- both linguistic patterns and factual
https://arxiv.org/abs/2308.08155
natural language and computer code can be used to program flexible conversation patterns for different
https://arxiv.org/list/hep-ex/new
for this problem, but costly graph construction, irregular computations, and random memory access patterns substantially limit their throughput
https://arxiv.org/abs/2304.01373
develop and evolve over the course of training? How do these patterns change as models scale