1-34 of about 34 matches for site:arxiv.org adaptation
https://arxiv.org/abs/2308.04399
[2308.04399] Fine-Tuning Games: Bargaining and Adaptation for General-Purpose Models
https://arxiv.org/abs/2310.14034
[2310.14034] Tree Prompting: Efficient Task Adaptation without Fine-Tuning
https://arxiv.org/abs/2210.15909
[2210.15909] Subsidiary Prototype Alignment for Universal Domain Adaptation
https://arxiv.org/abs/2312.02432
[2312.02432] Orthogonal Adaptation for Modular Customization of Diffusion Models
https://arxiv.org/abs/2311.16102
[2311.16102] Diffusion-TTA: Test-time Adaptation of Discriminative Models via Generative Feedback
https://arxiv.org/abs/2112.09343
[2112.09343] Domain Adaptation on Point Clouds via Geometry-Aware Implicits
[2107.14285] ADeLA: Automatic Dense Labeling with Attention for Viewpoint Adaptation in Semantic Segmentation
https://arxiv.org/abs/2107.14285
https://arxiv.org/abs/2408.15239
to produce a video in between two input frames. We accomplish this adaptation through a lightweight
https://arxiv.org/abs/2205.01643
proposed method achieves state-of-the-art performance in three domain adaptation scenarios, especially the
https://arxiv.org/abs/2310.07018
the physics reasoning skills of LLMs. Further, to enable domain-specific adaptation of this
https://arxiv.org/abs/2003.12649
trained on our generated "real" images predict more accurate depth and normals than domain adaptation approaches, suggesting that improving
https://arxiv.org/abs/2109.01349
images and the training images, we propose a self-supervised domain adaptation strategy for real
http://arxiv.org/abs/nlin/0307015
Nonlinear Sciences > Adaptation and Self-Organizing Systems
http://arxiv.org/abs/nlin/0409024
Nonlinear Sciences > Adaptation and Self-Organizing Systems
https://arxiv.org/abs/2505.03738
dataset and train a network capable of robust, on-demand adaptation to potentially
http://arxiv.org/abs/1001.0036
Subjects: Neurons and Cognition (q-bio.NC); Information Theory (cs.IT); Adaptation and Self-Organizing Systems
http://arxiv.org/abs/q-bio/0609008
v2: small clarifications, typo corrections, added reference. Subjects: Neurons and Cognition (q-bio.NC); Adaptation and Self-Organizing Systems
http://arxiv.org/abs/nlin/0008038
small changes made per referee suggestions. Subjects: Cellular Automata and Lattice Gases (nlin.CG); Adaptation and Self-Organizing Systems
https://arxiv.org/abs/2212.07016
adversarial robustness. We first identify two key factors during model adaptation -- training losses and adaptation methods -- that affect the
https://arxiv.org/abs/1511.07122
of-the-art semantic segmentation systems. In addition, we examine the adaptation of image
[cond-mat/9808147] Thermodynamic Depth of Causal States: When Paddling around in Occam's Pool Shallo
http://arxiv.org/abs/cond-mat/9808147
are optimally shallow. Comments: 11 pages, 9 figures, RevTeX. Subjects: Statistical Mechanics (cond-mat.stat-mech); Adaptation and Self-Organizing Systems
[2201.00411] The Introspective Agent: Interdependence of Strategy, Physiology, and Sensing for Embod
https://arxiv.org/abs/2201.00411
complex tasks. Despite this success, biological organisms still hold one large advantage over these simulated agents: adaptation. While both living and
https://arxiv.org/abs/2406.09246
contribution, we show that OpenVLA can be fine-tuned on consumer GPUs via modern low-rank adaptation methods and served
https://arxiv.org/abs/1808.10654
enabling deploying the trained models in real-world without needing further domain adaptation, III. embodiment of
https://arxiv.org/abs/2312.13834
Abstract: In this paper, we introduce Fairy, a minimalist yet robust adaptation of image
https://arxiv.org/abs/1705.00930
datasets. In particular, on CUB-200-2011, we achieve 21.8% CIDEr-D improvement after adaptation. Utilizing critics during inference
[2104.01325] DARCNN: Domain Adaptive Region-based Convolutional Neural Network for Unsupervised Inst
https://arxiv.org/abs/2104.01325
loss, and an augmented pseudo-labelling stage within DARCNN to effectively perform domain adaptation across such large domain
[2007.03511] Estimating Generalization under Distribution Shifts via Domain-Invariant Representation
https://arxiv.org/abs/2007.03511
on the target risk. Empirically, our approach (1) enables self-tuning of domain adaptation models, and (2
https://arxiv.org/abs/2104.11228
and precise controls for each semantic attribute; and 2) cross-domain adaptation that bridges domain discrepancies
https://arxiv.org/abs/2112.05298
our proposed method. Results show that our model successfully learns priors and fast-interactive-adaptation strategies for exploring
https://arxiv.org/abs/1706.02275
that increases as the number of agents grows. We then present an adaptation of actor