1-100 of about 316 matches for site:arxiv.org effective
https://arxiv.org/abs/2306.11695
[2306.11695] A Simple and Effective Pruning Approach for Large Language Models
https://arxiv.org/abs/1910.04760
[1910.04760] A cost-effective method for improving and re-purposing large, pre-trained GANs
https://arxiv.org/abs/1912.04421
[1912.04421] Basis Prediction Networks for Effective Burst Denoising with Large Kernels
https://arxiv.org/abs/1910.13267
[1910.13267] BPE-Dropout: Simple and Effective Subword Regularization
https://arxiv.org/abs/2507.00195
[2507.00195] What Makes Local Updates Effective: The Role of Data Heterogeneity and Smoothness
http://arxiv.org/abs/1503.05884
[1503.05884] Effective equidistribution and property tau
https://arxiv.org/abs/2404.05868
[2404.05868] Negative Preference Optimization: From Catastrophic Collapse to Effective Unlearning
https://arxiv.org/abs/nucl-th/0510023
[nucl-th/0510023] Five lectures on effective field theory
https://arxiv.org/abs/2409.18125
[2409.18125] LLaVA-3D: A Simple yet Effective Pathway to Empowering LMMs with 3D-awareness
http://arxiv.org/abs/0708.4040
[0708.4040] Effective equidistribution for closed orbits of semisimple groups on homogeneous spaces
https://arxiv.org/abs/2311.01378
[2311.01378] Vision-Language Foundation Models as Effective Robot Imitators
https://arxiv.org/abs/1901.05555
[1901.05555] Class-Balanced Loss Based on Effective Number of Samples
https://arxiv.org/abs/2504.21489
[2504.21489] TRIED: Truly Innovative and Effective AI Detection Benchmark, developed by WITNESS
https://arxiv.org/abs/1805.07167
algebraic unit. His proof depends on Duke's Equidistribution Theorem and is hence non-effective. In this
https://arxiv.org/abs/1910.12755
1910.12755] Two recent p-adic approaches towards the (effective) Mordell conjecture Skip to main content We
https://arxiv.org/abs/2303.11156
a critical area of research. AI text detectors have shown to be effective under their specific settings
https://arxiv.org/abs/2104.02244
In this paper, we propose novel approaches for unconditional GAN compression. We first introduce effective channel pruning and
https://arxiv.org/abs/2311.11045
to help the model learn to determine the most effective solution strategy for
https://arxiv.org/abs/2503.04036
to track and verify training data ownership. Previous data watermarking techniques primarily focus on effective memorization during pretraining, while
https://arxiv.org/abs/2309.04470
possible interventions for each individual, making the challenge of taking effective action more acute. Even
https://arxiv.org/abs/2412.15211
reconstructing high-fidelity appearance from images taken under extreme illumination variation. Moreover, our approach is particularly effective at recovering view-dependent
https://arxiv.org/abs/2207.00026
to theoretically explain the applicability of the proposed framework. 3) Effective: Comprehensive experimental analysis on
https://arxiv.org/abs/1905.06214
object labels through conditional random fields for collective classification, whereas graph neural networks learn effective object representations for
https://arxiv.org/abs/1708.07239
spread. Such approaches can be designed to not only be scalable and effective at assessing veracity of
https://arxiv.org/abs/2408.00771
Learned Discontinuities, by Chenxi Liu and 4 other authors. Abstract: Effective representation of 2D
https://arxiv.org/abs/2312.02149
image super-resolution and outpainting, and show that our method is most effective at generating consistent multi
https://arxiv.org/abs/2309.11497
overlook the backbone semantics. Capitalizing on this discovery, we propose a simple yet effective method-termed "FreeU" - that
https://arxiv.org/abs/2312.07537
of the initial noise. Motivated by these observations, we propose a concise yet effective inference sampling strategy, FreeInit
https://arxiv.org/abs/2407.00369
social media, there is a critical need for systems that can provide effective real-time verification of
https://arxiv.org/abs/2001.07685
Sohn and 8 other authors. Abstract: Semi-supervised learning (SSL) provides an effective means of leveraging
https://arxiv.org/abs/2311.17061
fine details or excessive training time. In this paper, we propose an efficient yet effective framework, HumanGaussian, that generates
https://arxiv.org/abs/2304.05669
accurate material and lighting optimization faster than previous work, and is more effective at resolving ambiguities. The
https://arxiv.org/abs/2404.11018
domain-specific questions. We find that both Reinforced and Unsupervised ICL can be quite effective in the
https://arxiv.org/abs/1711.11575
and is easy to embed in existing networks. It is shown effective on improving object recognition
https://arxiv.org/abs/2406.08332
specific knowledge into the student universal embedding. UDON's distillation approach is not only effective, but also very efficient
https://arxiv.org/abs/2407.21121
and 3 other authors. Abstract: Sinusoidal neural networks have been shown effective as implicit neural representations
https://arxiv.org/abs/2506.05282
on six benchmarks spanning pairwise registration and shape assembly. Notably, our unified formulation enables effective joint training on diverse
https://arxiv.org/abs/2306.06344
for a realistic and controllable traffic model backbone, and an effective method to interface
https://arxiv.org/abs/2410.16512
to as Text-Image Pretraining with Spatial awareness (TIPS), leverages two simple and effective insights. First, on textual
https://arxiv.org/abs/2403.07815
17 other authors. Abstract: We introduce Chronos, a simple yet effective framework for pretrained
https://arxiv.org/abs/2412.13185
Abstract: Generating realistic human videos remains a challenging task, with the most effective methods currently relying on
https://arxiv.org/abs/2304.09479
along with a shadow map, inferred using a simple and effective technique, to spatially
https://arxiv.org/abs/2309.08250
rank losses and ensures robust training. Secondly, we use a simple yet effective loss function to
https://arxiv.org/abs/2111.15121
In this work, we present pyramid adversarial training (PyramidAT), a simple and effective technique to improve
https://arxiv.org/abs/2104.05279
the number of images per class, leading to long-tailed distributions. An effective and simple
https://arxiv.org/abs/2405.05967
a multi-scale discriminator with a text alignment loss to build an effective conditional GAN-based formulation