DiaDSP
Differentially Private Distributed Optimization via State and Direction Perturbation in Multiagent Systems
Source
Prerequisite Knowledge
What is state perturbation ?
- State perturbation refers to the intentional modification of an agent’s current state by introducing randomness or noise.
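A minimal sketch of state perturbation, assuming a scalar state masked with zero-mean Laplace noise (all names and parameter values here are illustrative, not from the paper):

```python
import math
import random

def sample_laplace(scale, rng=random):
    """Draw a zero-mean Laplace(scale) sample via the inverse-CDF transform."""
    u = rng.random() - 0.5                      # Uniform(-0.5, 0.5)
    return math.copysign(-scale * math.log(1.0 - 2.0 * abs(u)), u)

def perturb_state(state, scale, rng=random):
    """State perturbation: mask the true state before it is shared with neighbors."""
    return state + sample_laplace(scale, rng)

rng = random.Random(0)
true_state = 1.0
masked_state = perturb_state(true_state, scale=0.1, rng=rng)
```

Neighbors (and any eavesdropper) only ever see `masked_state`, never `true_state`.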
What is differential privacy ?
- Differential privacy is a formal framework designed to ensure the privacy of individuals' data. Achieving differential privacy requires that a change in any single individual's data causes only a small, bounded change in the statistics (distribution) of the released messages.
What is direction perturbation ?
Direction perturbation involves introducing randomness or noise into the direction of the optimization process in a distributed algorithm. This technique modifies the gradient or update direction that agents use when adjusting their parameters during optimization. By perturbing the direction, the algorithm can maintain privacy by making it more challenging for other agents or observers to infer the true gradient information, thereby protecting individual agents’ objective functions. Additionally, direction perturbation can help enhance the robustness of the optimization process and contribute to convergence properties, particularly in scenarios where privacy is a critical concern.
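A minimal sketch of direction perturbation: noise is added to the gradient before the descent step, so observers see only the noisy direction. The quadratic objective and all parameter values are illustrative:

```python
import math
import random

def sample_laplace(scale, rng):
    """Zero-mean Laplace(scale) sample via the inverse-CDF transform."""
    u = rng.random() - 0.5                      # Uniform(-0.5, 0.5)
    return math.copysign(-scale * math.log(1.0 - 2.0 * abs(u)), u)

def perturbed_step(x, grad, step, scale, rng):
    """Descend along a noisy copy of the true gradient direction."""
    return [xi - step * (gi + sample_laplace(scale, rng))
            for xi, gi in zip(x, grad)]

# Minimize f(x) = ||x||^2 / 2, whose gradient at x is x itself.
rng = random.Random(1)
x = [4.0, -3.0]
for _ in range(200):
    x = perturbed_step(x, grad=x, step=0.1, scale=0.05, rng=rng)
```

With a fixed noise scale the iterate hovers in a small neighborhood of the optimum; DiaDSP instead decays the noise over time so the iterates can settle.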
What is Laplace noise ?
- In simple terms, Laplace noise is a random variable that follows a Laplace distribution. Its probability density function is
$$f(x \mid \mu, b) = \frac{1}{2b} \exp\left( -\frac{|x - \mu|}{b} \right),$$
where μ is the location parameter and b > 0 is the scale parameter.
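A numeric sanity check of the Laplace density f(x | μ, b) = exp(−|x − μ| / b) / (2b): it peaks at μ with height 1/(2b) and integrates to one (the Riemann-sum bounds below are arbitrary):

```python
import math

def laplace_pdf(x, mu=0.0, b=1.0):
    """Laplace density: f(x | mu, b) = exp(-|x - mu| / b) / (2b)."""
    return math.exp(-abs(x - mu) / b) / (2.0 * b)

peak = laplace_pdf(0.0)                  # height at the mode: 1 / (2b) = 0.5
step = 0.001
# Riemann sum over [-20, 20]; the tails beyond that are ~e^{-20}, negligible.
total = sum(laplace_pdf(-20.0 + i * step) * step for i in range(40000))
```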
x_i ∈ ℝ^u represents agent i's local observation.
x ∈ ℝ^{n×u} stacks these vectors; the rows of x are the local observations of the agents i = 1, …, n.
Existence: there exists an optimal point x^{*} satisfying
$$\nabla f(x^{*}) = \sum_{i=1}^{n} \nabla f_{i}(x^{*}) = 0.$$
Abstract
The article focuses on the distributed optimization problem in multiagent systems, where the objective is to minimize the sum of the agents' local cost functions. It demonstrates that achieving both convergence and differential privacy is impossible through state perturbation in existing algorithms. The authors propose a new algorithm called DiaDSP, which uses both state and direction perturbations with decaying Laplace noise to ensure differential privacy. Unlike many prior methods that require diminishing step sizes for convergence, DiaDSP converges in mean and almost surely with a constant step size. The authors establish conditions for linear convergence under strong convexity of the cost functions and R-linear convergence with Lipschitz gradients. They also analyze the trade-off between privacy and convergence accuracy and validate their findings through simulations in a sensor fusion context.
Introduction
Problem Formulation
The objectives of this article are twofold:
- Design a fully distributed update rule h(⋅) and a noise-adding mechanism g(⋅), based only on local information, that solve the DO problem while preserving differential privacy.
- Characterize the trade-off between the accuracy of convergence and the level of differential privacy.
Notes:
- The update rule can be represented in a compact (stacked) form.
- The attacker has access to all the transmitted messages, which are usually masked by noise ζ(k).
- The privacy level of ϵ-differential privacy is generally determined by ϵ. The smaller ϵ is, the higher the privacy level. In other words, it becomes more challenging to distinguish between the two target functions when ϵ is small.
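The standard Laplace mechanism makes this concrete: for a query with sensitivity Δ, adding Laplace noise of scale b = Δ/ϵ yields ϵ-differential privacy, so shrinking ϵ inflates the noise proportionally (a generic sketch of that rule, not the paper's exact mechanism):

```python
def laplace_scale(sensitivity, epsilon):
    """Noise scale b = sensitivity / epsilon for the eps-DP Laplace mechanism."""
    if epsilon <= 0:
        raise ValueError("epsilon must be positive")
    return sensitivity / epsilon

loose = laplace_scale(1.0, epsilon=1.0)    # eps = 1.0: weaker privacy
strict = laplace_scale(1.0, epsilon=0.1)   # eps = 0.1: stronger privacy
# Smaller epsilon -> larger noise scale -> harder to distinguish two inputs.
```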
DiaDSP Algorithm
A. Motivation: An Impossibility Result
Summary: The result shows that convergence and differential privacy cannot be guaranteed simultaneously for any global exact DO algorithm.
Proof of Proposition 1 (sketch): by convergence, each execution x^{(l)}(k) has a limit as k → ∞; denote the convergence point by x^{(l)}_∞. Hence there is a detectable difference between the limits of the two executions, which contradicts ϵ-differential privacy.
B. Development of DiaDSP
We employ the combine-then-adapt strategy for each agent i, which is expressed as
$$x_{i}(k+1) = \sum_{j=1}^{n} a_{ij}\, x_{j}(k) - \eta_{i}\, y_{i}(k),$$
where η_i is the step size. This formula is analogous to the parameter update mechanism in neural networks: x_i(k) can be considered as the transition between different states, and y_i(k) is regarded as the gradient of the cost function. Therefore, y_i(k) is supposed to have the same direction as the true gradient.
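A toy, noise-free sketch of the combine-then-adapt update on three agents with quadratic costs f_i(x) = (x − c_i)²/2 (the mixing weights, step size, and targets are all illustrative; DiaDSP additionally injects decaying Laplace noise into both states and directions):

```python
# Doubly stochastic mixing matrix for a fully connected 3-agent network.
A = [[0.50, 0.25, 0.25],
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]
c = [1.0, 2.0, 6.0]        # local minimizers; global optimum is mean(c) = 3.0
eta = 0.2                  # constant step size
x = [0.0, 0.0, 0.0]

for _ in range(300):
    # Combine: weighted-average the neighbors' states ...
    mixed = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
    # ... then adapt: step along the negative local gradient (mixed_i - c_i).
    x = [mixed[i] - eta * (mixed[i] - c[i]) for i in range(3)]

avg = sum(x) / 3.0         # the network average approaches the optimum 3.0
```

Note that with a plain local gradient and a constant step the individual agents settle near, but not exactly at, the optimum; tracking the global gradient direction with y_i(k) is one motivation for the update above.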
Convergence Analysis of DiaDSP
This section mainly includes three key points:
- Analyze the convergence of DiaDSP
- Prove the R-linear convergence of DiaDSP
- Determine the optimal step size
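Not the paper's DiaDSP analysis, but a standard illustration of linear convergence under a constant step size: on a μ-strongly convex, L-smooth quadratic, gradient descent with the classical step 2/(μ+L) contracts the error by (L−μ)/(L+μ) each iteration:

```python
mu, L = 1.0, 9.0
step = 2.0 / (mu + L)          # classical optimal constant step size
rho = (L - mu) / (L + mu)      # predicted per-step contraction factor: 0.8

# f(x) = 0.5 * mu * x0^2 + 0.5 * L * x1^2; gradient = (mu * x0, L * x1).
x = [1.0, 1.0]
errors = []
for _ in range(10):
    errors.append(max(abs(x[0]), abs(x[1])))
    x = [x[0] - step * mu * x[0], x[1] - step * L * x[1]]
```

The measured ratio of successive errors matches ρ exactly on this quadratic, which is what "linear convergence with a constant step size" means.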
Differential Privacy
Simulations
Conclusion
Questions and Insights
- What is R-linear convergence ?
  - A sequence x(k) converges R-linearly to x^{*} if ‖x(k) − x^{*}‖ ≤ C ρ^{k} for some C > 0 and ρ ∈ (0, 1), i.e., the error is dominated by a geometrically decaying sequence. This is weaker than Q-linear convergence, which requires the ratio of successive errors itself to be bounded by ρ.