1-100 of about 152 matches for site:arxiv.org transfer
https://arxiv.org/abs/2408.03326
[2408.03326] LLaVA-OneVision: Easy Visual Task Transfer
https://arxiv.org/abs/2407.00369
[2407.00369] How to Train Your Fact Verifier: Knowledge Transfer with Multimodal Open Models
https://arxiv.org/abs/2502.10377
[2502.10377] ReStyle3D: Scene-Level Appearance Transfer with Semantic Correspondences
[2303.03374] To Stay or Not to Stay in the Pre-train Basin: Insights on Ensembling in Transfer Learning
https://arxiv.org/abs/2303.03374
https://arxiv.org/abs/2304.02744
[2304.02744] StyleGAN Salon: Multi-View Latent Optimization for Pose-Invariant Hairstyle Transfer
https://arxiv.org/abs/2108.12847
[2108.12847] Non-Parametric Neural Style Transfer
[2205.02841] Understanding Transfer Learning for Chest Radiograph Clinical Report Generation with Modified Transformer Architectures
https://arxiv.org/abs/2205.02841
https://arxiv.org/abs/2303.09665
[2303.09665] LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding
https://arxiv.org/abs/2210.00912
[2210.00912] Federated Domain Generalization for Image Recognition via Cross-Client Style Transfer
https://arxiv.org/abs/2503.09838
[2503.09838] BioSpark: Beyond Analogical Inspiration to LLM-augmented Transfer
[cond-mat/0702502] Spatial Transportation Networks with Transfer Costs: Asymptotic Optimality of Hub and Spoke Models
https://arxiv.org/abs/cond-mat/0702502
https://arxiv.org/abs/2210.13702
[2210.13702] DeXtreme: Transfer of Agile In-hand Manipulation from Simulation to Reality
https://arxiv.org/abs/2206.06522
[2206.06522] LST: Ladder Side-Tuning for Parameter and Memory Efficient Transfer Learning
https://arxiv.org/abs/2112.06825
[2112.06825] VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks
https://arxiv.org/abs/1710.00756
[1710.00756] Progressive Color Transfer with Dense Semantic Correspondences
https://arxiv.org/abs/1910.10683
[1910.10683] Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
https://arxiv.org/abs/2401.09416
Abstract: We present TextureDreamer, a novel image-guided texture synthesis method to transfer relightable textures from a
https://arxiv.org/abs/2304.02168
modules for every new task misses an opportunity for cross-task knowledge transfer. We propose Improvise to
https://arxiv.org/abs/2111.11432
and action recognition. Moreover, Florence demonstrates outstanding performance in many types of transfer learning: fully sampled fine
https://arxiv.org/abs/1707.04175
data efficiency is multitask learning with shared neural network parameters, where efficiency may be improved through transfer across related tasks. In
https://arxiv.org/abs/2204.09222
concepts (or describe new ones) to enable zero-shot and few-shot transfer of the
https://arxiv.org/abs/2202.11094
datasets, and performs competitively to state-of-the-art transfer-learning methods requiring greater
https://arxiv.org/abs/2203.16521
demonstrate the quality of the learned dense correspondences through segmentation mask transfer on multiple datasets. We
https://arxiv.org/abs/2206.09059
methods can help mitigate forgetting during multimodal task learning, but do not enable cross-task knowledge transfer. We envision that CLiMB
https://arxiv.org/abs/2312.09250
opening the door to a notion of generative texture transfer. Comments: CVPR 2024. Code
https://arxiv.org/abs/1804.00168
dual pathway architecture that allows locale-specific features to be encapsulated, while still enabling transfer to multiple
https://arxiv.org/abs/2112.09106
has achieved impressive results on image classification in both zero-shot and transfer learning settings. However, we
https://arxiv.org/abs/2406.08332
multi-teacher distillation, where each teacher is specialized in one domain, to transfer detailed domain-specific knowledge
https://arxiv.org/abs/2501.10021
control module with our model to capture identity-disentangled facial expressions, facilitating accurate expression transfer for enhanced
[2004.01804] Google Landmarks Dataset v2 -- A Large-Scale Benchmark for Instance-Level Recognition and Retrieval
https://arxiv.org/abs/2004.01804
We further demonstrate the suitability of the dataset for transfer learning by showing that
[2305.01618] ContactArt: Learning 3D Interaction Priors for Category-level Articulated Object and Hand Poses Estimation
https://arxiv.org/abs/2305.01618
guiding the hand pose estimation. Such structural and contact priors can easily transfer to real
https://arxiv.org/abs/2405.09546
of images, and training and evaluating simulation-to-real transfer for a
https://arxiv.org/abs/2304.02643
The model is designed and trained to be promptable, so it can transfer zero-shot to
https://arxiv.org/abs/1809.09761
materials in real photos, and employ 3D-2D alignment techniques to transfer materials to different
https://arxiv.org/abs/2210.15909
Abstract: Universal Domain Adaptation (UniDA) deals with the problem of knowledge transfer between two datasets with
https://arxiv.org/abs/2205.01643
can fully exploit unlabeled target domain data in object detection training and transfer knowledge between domains via
https://arxiv.org/abs/2302.06548
margin, while using up to $95\%$ fewer weights. Furthermore, we devise a transfer learning setting for
https://arxiv.org/abs/2309.17002
that while slight noise in pre-training can benefit in-domain (ID) transfer performance, where the
[2304.00553] From Isolated Islands to Pangea: Unifying Semantic Space for Human Action Understanding
https://arxiv.org/abs/2304.00553
Pangea. In extensive experiments, our new system shows significant superiority, especially in transfer learning. Our code and
https://arxiv.org/abs/2310.01361
to-real adaptation, the multitask policies pretrained on GPT4-generated simulation tasks exhibit stronger transfer to unseen
https://arxiv.org/abs/1102.2331
particular, we consider models that invoke processes of gene birth (duplication and transfer) and death
https://arxiv.org/abs/2410.03645
the generated demonstrations and exhibits strong sim-to-real zero-shot transfer. Combining the proposed
https://arxiv.org/abs/2501.18096
text-to-image generation, and even edit prompts for style transfer! Finally, being a
https://arxiv.org/abs/2203.09905
label as supervision. To this end, we devise a cross-view knowledge transfer framework that extracts affordance
https://arxiv.org/abs/2311.18303
motion synthesis is already extensively studied and benchmarked, it remains challenging to transfer this success to
https://arxiv.org/abs/2208.13196
different views. To this end, we devise a cross-view affordance knowledge transfer framework that extracts affordance
https://arxiv.org/abs/2210.14891
contrastive learning, AI alignment, AI capabilities, robotics, out-of-distribution (OOD) generalization, continual learning, transfer learning, uncertainty estimation / calibration
https://arxiv.org/abs/2109.01134
images and texts in a common feature space, which allows zero-shot transfer to a
https://arxiv.org/abs/2210.09276
most methods are currently either limited to specific editing types (e.g., object overlay, style transfer), or apply to
https://arxiv.org/abs/2008.12878
approaches to making the models more knowledge efficient such as multi-task learning, transfer learning, weakly supervised and
https://arxiv.org/abs/2503.14492
Abu Alhaija and 38 other authors. Abstract: We introduce Cosmos-Transfer, a conditional
https://arxiv.org/abs/2501.03847
strong control capabilities across diverse tasks, including mesh-to-video generation, camera control, motion transfer, and object
https://arxiv.org/abs/2411.17188
and golden answers to evaluate models effectively on vision-centric tasks such as style transfer, a challenging
https://arxiv.org/abs/1907.10823
examples, malicious inputs crafted to fool trained models. Adversarial examples often exhibit black-box transfer, meaning that adversarial examples
https://arxiv.org/abs/2210.10362
and 9 other authors. Abstract: Prompt tuning is a new few-shot transfer learning technique that only
http://arxiv.org/abs/1602.05485
may well be from the use of power beaming to transfer energy and accelerate
https://arxiv.org/abs/2104.14559
paper, we present the first framework for one-shot 3D portrait style transfer, which can generate 3D
http://arxiv.org/abs/1108.4494
one of the sets and performs the only possible legal transfer of a
https://arxiv.org/abs/1606.04671
Abstract: Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding
https://arxiv.org/abs/2310.08864
a high-capacity model trained on this data, which we call RT-X, exhibits positive transfer and improves
https://arxiv.org/abs/2108.07258
and ethical considerations). Though foundation models are based on standard deep learning and transfer learning, their scale results
https://arxiv.org/abs/2303.12786
for various downstream tasks. We evaluate FeatureNeRF on tasks of 2D/3D semantic keypoint transfer and 2D
https://arxiv.org/abs/2410.11825
other authors. Abstract: Reinforcement learning combined with sim-to-real transfer offers a general
https://arxiv.org/abs/2403.04436
real-time humanoid motion imitator in simulation using these refined motions and transfer it to the
https://arxiv.org/abs/2002.09505
the benefits of this approach in terms of value function transfer, learning within redundant action
https://arxiv.org/abs/2306.16156
to different problems in machine learning, such as generative modeling and transfer learning. In this
https://arxiv.org/abs/2005.11776
type of covenant transaction that enforces a time-lock on the transfer of control
https://arxiv.org/abs/2503.15406
dataset of 580k paired human images across 100k unique identities. For precise appearance transfer, we introduce a
https://arxiv.org/abs/2403.16967
train both levels of policies in simulation and perform Sim2Real transfer for real
https://arxiv.org/abs/2103.00020
is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the
https://arxiv.org/abs/2303.03378
a variety of observation modalities, on multiple embodiments, and further, exhibits positive transfer: the model
[2502.01143] ASAP: Aligning Simulation and Real-World Physics for Learning Agile Humanoid Whole-Body Skills
https://arxiv.org/abs/2502.01143
the simulator to align effectively with real-world dynamics. We evaluate ASAP across three transfer scenarios: IsaacGym to