1-100 of about 165 matches for site:arxiv.org effective
https://arxiv.org/abs/2306.11695
[2306.11695] A Simple and Effective Pruning Approach for Large Language Models
[1910.04760] A cost-effective method for improving and re-purposing large, pre-trained GANs by fine-
https://arxiv.org/abs/1910.04760
https://arxiv.org/abs/1912.04421
[1912.04421] Basis Prediction Networks for Effective Burst Denoising with Large Kernels
https://arxiv.org/abs/1910.13267
[1910.13267] BPE-Dropout: Simple and Effective Subword Regularization
http://arxiv.org/abs/1503.05884
[1503.05884] Effective equidistribution and property tau
http://arxiv.org/abs/0708.4040
[0708.4040] Effective equidistribution for closed orbits of semisimple groups on homogeneous spaces
https://arxiv.org/abs/2311.01378
[2311.01378] Vision-Language Foundation Models as Effective Robot Imitators
https://arxiv.org/abs/1901.05555
[1901.05555] Class-Balanced Loss Based on Effective Number of Samples
https://arxiv.org/abs/1805.07167
algebraic unit. His proof depends on Duke's Equidistribution Theorem and is hence non-effective. In this
https://arxiv.org/abs/1910.12755
[1910.12755] Two recent p-adic approaches towards the (effective) Mordell conjecture
https://arxiv.org/abs/2104.02244
In this paper, we propose novel approaches for unconditional GAN compression. We first introduce effective channel pruning and
https://arxiv.org/abs/2503.04036
to track and verify training data ownership. Previous data watermarking techniques primarily focus on effective memorization during pretraining, while
https://arxiv.org/abs/2207.00026
to theoretically explain the applicability of the proposed framework. 3) Effective: Comprehensive experimental analysis on
[2412.15211] Generative Multiview Relighting for 3D Reconstruction under Extreme Illumination Variat
https://arxiv.org/abs/2412.15211
reconstructing high-fidelity appearance from images taken under extreme illumination variation. Moreover, our approach is particularly effective at recovering view-dependent
https://arxiv.org/abs/1905.06214
object labels through conditional random fields for collective classification, whereas graph neural networks learn effective object representations for
https://arxiv.org/abs/2404.11018
domain-specific questions. We find that both Reinforced and Unsupervised ICL can be quite effective in the
https://arxiv.org/abs/2408.00771
Learned Discontinuities, by Chenxi Liu and 4 other authors. Abstract: Effective representation of 2D
https://arxiv.org/abs/2001.07685
Sohn and 8 other authors. Abstract: Semi-supervised learning (SSL) provides an effective means of leveraging
https://arxiv.org/abs/2311.17061
fine details or excessive training time. In this paper, we propose an efficient yet effective framework, HumanGaussian, that generates
https://arxiv.org/abs/2407.00369
social media, there is a critical need for systems that can provide effective real-time verification of
https://arxiv.org/abs/2406.08332
specific knowledge into the student universal embedding. UDON's distillation approach is not only effective, but also very efficient
https://arxiv.org/abs/2407.21121
and 3 other authors. Abstract: Sinusoidal neural networks have been shown effective as implicit neural representations
https://arxiv.org/abs/2506.05282
on six benchmarks spanning pairwise registration and shape assembly. Notably, our unified formulation enables effective joint training on diverse
https://arxiv.org/abs/2304.09479
along with a shadow map, inferred using a simple and effective technique, to spatially
https://arxiv.org/abs/2412.13185
Abstract: Generating realistic human videos remains a challenging task, with the most effective methods currently relying on
https://arxiv.org/abs/2309.08250
rank losses and ensures robust training. Secondly, we use a simple yet effective loss function to
https://arxiv.org/abs/2104.05279
the number of images per class, leading to long-tailed distributions. An effective and simple
https://arxiv.org/abs/2111.15121
In this work, we present pyramid adversarial training (PyramidAT), a simple and effective technique to improve
https://arxiv.org/abs/2312.13216
in models that are capable of extracting image features that are not only effective at encoding image level
[2405.13208] Fundamental physics with the Lyman-alpha forest: constraints on the growth of structure and neutrino masses from SDSS with effective field theory
https://arxiv.org/abs/2405.13208
https://arxiv.org/abs/1912.02292
the above phenomena by defining a new complexity measure we call the effective model complexity and
https://arxiv.org/abs/1612.00796
weights important for those tasks. We demonstrate our approach is scalable and effective by solving a
https://arxiv.org/abs/2405.05967
a multi-scale discriminator with a text alignment loss to build an effective conditional GAN-based formulation
[2404.08634] When Attention Collapses: How Degenerate Layers in LLMs Enable Smaller, Stronger Models
https://arxiv.org/abs/2404.08634
inefficient. To address this, we propose Inheritune, a simple and effective training recipe for
[2307.13108] An Explainable Geometric-Weighted Graph Attention Network for Identifying Functional Ne
https://arxiv.org/abs/2307.13108
in better understanding PD motor progression, thus advancing the development of more effective and personalized
https://arxiv.org/abs/2302.05128
the translated goal can be handed to domain-independent AI planners that are very effective at planning. Our empirical
https://arxiv.org/abs/2110.13623
self-supervised learning in several domains. In particular, contrastive methods are most effective where data augmentation can
https://arxiv.org/abs/2403.08137
to reading original paper texts, and the authors viewed our system as an effective way of communicating
https://arxiv.org/abs/2206.06360
faithful brushstrokes, and introduce a nearest neighbor-based loss that is highly effective at capturing style details
http://arxiv.org/abs/2305.08275
not diverse. To address this, we introduce ULIP-2, a simple yet effective tri-modal pre-training
[2303.03374] To Stay or Not to Stay in the Pre-train Basin: Insights on Ensembling in Transfer Learn
https://arxiv.org/abs/2303.03374
on the analysis of existing exploration methods, we propose a more effective modification of the
https://arxiv.org/abs/2406.07407
task of private GM with an excess error guarantee that scales with the effective diameter of the
https://arxiv.org/abs/2304.05866
class in the $\mathcal{W}$ latent space. With NoisyTwins, we first introduce an effective and inexpensive
https://arxiv.org/abs/2312.09168
Phongthawee and 6 other authors. Abstract: We present a simple yet effective technique to estimate
https://arxiv.org/abs/2405.02794
and 4 other authors. Abstract: Physical reasoning is important for effective robot manipulation. Recent work
https://arxiv.org/abs/2003.12649
that improving the visual realism of the images can be more effective than imposing task-specific
https://arxiv.org/abs/2402.08191
Pumacay and 5 other authors. Abstract: To realize effective large-scale, real-world
https://arxiv.org/abs/1605.07681
and testing more challenging. In this work we introduce a simple, yet effective Convolutional Random Walk Network
https://arxiv.org/abs/2210.06642
the identity of the input portrait. Experiments show that our method is more effective in resynthesizing
https://arxiv.org/abs/2311.17034
processing. We show that incorporating this information can markedly enhance semantic correspondence performance with simple but effective solutions in both
[1808.08449] What is an answer? - remarks, results and problems on PIO formulas in combinatorial enu
https://arxiv.org/abs/1808.08449
from N to Z, we define the notion of an effective (or closed) formula. It
http://arxiv.org/abs/0810.3964
This emerges by minimizing the cost of producing a desired effective isotropic radiated power, which
[1607.03154] The clustering of galaxies in the completed SDSS-III Baryon Oscillation Spectroscopic S
http://arxiv.org/abs/1607.03154
a measurement of $D_A(z)/r_d$ and $H(z)r_d$ at nine effective redshifts with the
https://arxiv.org/abs/2406.07480
space that represents photorealistic image neural fields. We propose a simple and effective method, inspired by several
https://arxiv.org/abs/2410.03645
reduce the required human efforts. To utilize such data, we propose an effective multi-task language-conditioned
https://arxiv.org/abs/2306.05410
assumptions, such as a prior pose distribution or coarse pose initialization, making them less effective in a
https://arxiv.org/abs/2306.12423
in terms of generality. Following the most popular and effective paradigm in this
https://arxiv.org/abs/2311.17776
affordance learning. We then propose a vision-language framework with simple and effective designs that boost the
[1308.6617] Calibrations of Atmospheric Parameters Obtained from the First Year of SDSS-III APOGEE O
http://arxiv.org/abs/1308.6617
the accuracy and precision of the derived stellar parameters, considering especially effective temperature, surface gravity, and