1-70 of about 70 matches for site:arxiv.org private
https://arxiv.org/abs/1905.13229
[1905.13229] Private Hypothesis Selection
https://arxiv.org/abs/2302.07956
[2302.07956] Tight Auditing of Differentially Private Machine Learning
https://arxiv.org/abs/2406.07407
[2406.07407] Private Geometric Median
https://arxiv.org/abs/1704.03024
[1704.03024] Tight Lower Bounds for Differentially Private Selection
https://arxiv.org/abs/2305.13209
Abstract: Differentially private (stochastic) gradient descent is the workhorse of DP machine learning in
https://arxiv.org/abs/2006.10559
[2006.10559] Differentially-private Federated Neural Architecture Search
https://arxiv.org/abs/2303.01256
[2303.01256] Choosing Public Datasets for Private Machine Learning via Gradient Subspace Distance
https://arxiv.org/abs/2405.20405
[2405.20405] Private Mean Estimation with Person-Level Differential Privacy
https://arxiv.org/abs/2505.23682
[2505.23682] Differentially Private Space-Efficient Algorithms for Counting Distinct Elements in the Turnstile Model
https://arxiv.org/abs/2106.13329
[2106.13329] Covariance-Aware Private Mean Estimation Without Private Covariance Estimation
https://arxiv.org/abs/2311.10825
[2311.10825] Pudding: Private User Discovery in Anonymity Networks
https://arxiv.org/abs/2109.10074
[2109.10074] STAR: Secret Sharing for Private Threshold Aggregation Reporting
https://arxiv.org/abs/2409.09676
[2409.09676] Nebula: Efficient, Private and Accurate Histogram Estimation
https://arxiv.org/abs/2109.06153
[2109.06153] Relaxed Marginal Consistency for Differentially Private Query Answering
https://arxiv.org/abs/2106.07153
[2106.07153] Iterative Methods for Private Synthetic Data: Unifying Framework and New Methods
https://arxiv.org/abs/2110.05679
[2110.05679] Large Language Models Can Be Strong Differentially Private Learners
https://arxiv.org/abs/1409.2177
[1409.2177] The Large Margin Mechanism for Differentially Private Maximization
https://arxiv.org/abs/2103.06641
[2103.06641] Differentially Private Query Release Through Adaptive Projection
https://arxiv.org/abs/2010.12603
[2010.12603] Permute-and-Flip: A new mechanism for differentially private selection
https://arxiv.org/abs/2110.03620
Nicolas Papernot and 1 other authors. Abstract: For many differentially private algorithms, such as the
https://arxiv.org/abs/2202.12219
advances in auditing which have been used for estimating lower bounds on differentially private algorithms, here we show
https://arxiv.org/abs/2210.00597
the data of a set of people will still be differentially private as long as each
https://arxiv.org/abs/2106.00001
titled Privately Learning Subspaces, by Vikrant Singhal and 1 other authors. Abstract: Private data analysis suffers a
https://arxiv.org/abs/1907.11692
gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different
https://arxiv.org/abs/2111.12981
use the powerful Sum of Squares method (SoS) to design differentially private algorithms. SoS proofs to
https://arxiv.org/abs/2305.08846
other authors. Abstract: We propose a scheme for auditing differentially private machine learning systems with
https://arxiv.org/abs/1706.05069
a new approach for analyzing the generalization guarantees of differentially private algorithms.
https://arxiv.org/abs/2105.07260
Abstract: The permute-and-flip mechanism is a recently proposed differentially private selection algorithm that was
https://arxiv.org/abs/2004.00010
2 other authors. Abstract: A key tool for building differentially private systems is adding Gaussian
https://arxiv.org/abs/2205.14529
invisible actions taken by moderators on Reddit, using a unique dataset of private moderator logs for
https://arxiv.org/abs/2205.14528
to make apparent volunteer labor's value. Using a novel dataset of private logs generated by moderators
https://arxiv.org/abs/1905.02383
based relaxations, however, have several acknowledged weaknesses, either in handling composition of private algorithms or in
https://arxiv.org/abs/2301.13334
DP) is a rigorous notion of data privacy, used for private statistics. The canonical
http://arxiv.org/abs/1204.3187
ds_demo_code/experiment_toolbox/@myCleanup/myCleanup.m ds_demo_code/experiment_toolbox/experiment_load.m ds_demo_code/experiment_toolbox/experiment_run.m ds_demo_code/experiment_toolbox/private/experiment_base.m ds_demo_code/experiment_toolbox
https://arxiv.org/abs/2306.11698
easily misled to generate toxic and biased outputs and leak private information in both
https://arxiv.org/abs/2107.11839
trust the analyzer, local privacy comes at a price: a locally private protocol is less accurate
https://arxiv.org/abs/2212.05015
give the first black-box reduction from privacy to robustness which can produce private estimators with optimal tradeoffs
https://arxiv.org/abs/2405.20769
of computing tight privacy guarantees for the composition of subsampled differentially private mechanisms. Recent algorithms can
https://arxiv.org/abs/1912.06171
of sensitive attribute data. For each domain, we describe how and when private companies collect or infer
https://arxiv.org/abs/2109.11377
approaches remain a challenge. First, datasets used in existing works are often private and/or
https://arxiv.org/abs/2207.00220
potential for significant harm, particularly from pretraining on biased, obscene, copyrighted, and private information. Emerging ethical approaches
https://arxiv.org/abs/1702.07476
notions have appeared in several recent papers that analyzed composition of differentially private mechanisms. We argue that
https://arxiv.org/abs/2112.06324
used for cross-profile tracking (e.g., linking user behavior across normal and private browsing sessions). Finally, we
https://arxiv.org/abs/2404.05868
and 3 other authors. Abstract: Large Language Models (LLMs) often memorize sensitive, private, or copyrighted data during
https://arxiv.org/abs/2503.18813
of a capability to prevent the exfiltration of private data over unauthorized data
https://arxiv.org/abs/2202.05776
Abstract: We initiate a systematic study of algorithms that are both differentially private and run
https://arxiv.org/abs/2407.07262
big data analysis. Although recent works have shown the existence of differentially private sublinear algorithms for
https://arxiv.org/abs/2110.13239
Privacy-protected microdata are often the desired output of a differentially private algorithm since microdata is
https://arxiv.org/abs/2110.07450
essential to secure the communications and devices of private citizens, businesses, and
https://arxiv.org/abs/2402.03239
a decentralized foundation for public social media. It was launched in private beta in February
https://arxiv.org/abs/2112.04359
performance by social group for LMs. The second focuses on risks from private data leaks or LMs
https://arxiv.org/abs/2308.15309
to study the impact of clicking on search ads on three popular private search engines which have
https://arxiv.org/abs/1212.0297
$x \in \mathbb{R}^N$, we seek to find the differentially private mechanism that has the
https://arxiv.org/abs/2106.02848
give a fast algorithm to optimally compose privacy guarantees of differentially private (DP) algorithms to
https://arxiv.org/abs/2301.13188
and data decisions affect privacy. Overall, our results show that diffusion models are much less private than prior generative models
https://arxiv.org/abs/1607.00133
may be crowdsourced and contain sensitive information. The models should not expose private information in these
https://arxiv.org/a/kleppmann_m_1.html
Data Structures and Algorithms (cs.DS) [5] arXiv:2311.10825 Title: Pudding: Private User Discovery in
https://arxiv.org/html/0901.4016
do the same. Tools that use public-key encryption could encode the public/private key-pairs as proquints
https://arxiv.org/html/2404.08144v2
et al., 2019), performing arbitrary remote code execution (Zheng & Zhang, 2013), and exfiltrating private data (Ullah et al