
Probability density functions of quantum mechanical observable uncertainties

  • Lin Zhang 1,
  • Jinping Huang 1,
  • Jiamei Wang 2,
  • Shao-Ming Fei 3
  • 1Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018, China
  • 2Department of Mathematics, Anhui University of Technology, Ma Anshan 243032, China
  • 3School of Mathematical Sciences, Capital Normal University, Beijing 100048, China

Received date: 2022-03-02

  Revised date: 2022-04-29

  Accepted date: 2022-04-29

  Online published: 2022-07-01

Copyright

© 2022 Institute of Theoretical Physics CAS, Chinese Physical Society and IOP Publishing

Abstract

We study the uncertainties of quantum mechanical observables, quantified by the standard deviation (square root of variance) in Haar-distributed random pure states. We derive analytically the probability density functions (PDFs) of the uncertainties of arbitrary qubit observables. Based on these PDFs, the uncertainty regions of the observables are characterized by the support of the PDFs. The state-independent uncertainty relations are then transformed into the optimization problems over uncertainty regions, which opens a new vista for studying state-independent uncertainty relations. Our results may be generalized to multiple observable cases in higher dimensional spaces.

Cite this article

Lin Zhang, Jinping Huang, Jiamei Wang, Shao-Ming Fei. Probability density functions of quantum mechanical observable uncertainties[J]. Communications in Theoretical Physics, 2022, 74(7): 075102. DOI: 10.1088/1572-9494/ac6b93

1. Introduction

The uncertainty principle rules out the possibility of obtaining precise measurement outcomes when two incompatible observables are measured simultaneously. Since the first uncertainty relation for position and momentum [1], various uncertainty relations have been extensively investigated [2–7]. On the occasion of the 125th anniversary of the journal ‘Science', the magazine listed 125 challenging scientific problems [8]. The 21st problem asks: do deeper principles underlie quantum uncertainty and nonlocality? As uncertainty relations play significant roles in entanglement detection [9–14], quantum nonlocality [15] and many other topics, it is desirable to explore the mathematical structures and physical implications of uncertainties in more detail from various perspectives.
The state-dependent Robertson–Schrödinger uncertainty relation [16–19] is of the form:
$\left(\Delta_{\rho} \boldsymbol{A}\right)^{2}\left(\Delta_{\rho} \boldsymbol{B}\right)^{2} \geqslant \frac{1}{4}\left[\left(\langle\{\boldsymbol{A}, \boldsymbol{B}\}\rangle_{\rho}-2\langle\boldsymbol{A}\rangle_{\rho}\langle\boldsymbol{B}\rangle_{\rho}\right)^{2}+\left|\langle[\boldsymbol{A}, \boldsymbol{B}]\rangle_{\rho}\right|^{2}\right],$
where $\left\{{\boldsymbol{A}},{\boldsymbol{B}}\right\}:={\boldsymbol{A}}{\boldsymbol{B}}+{\boldsymbol{B}}{\boldsymbol{A}}$, $[{\boldsymbol{A}},{\boldsymbol{B}}]:={\boldsymbol{A}}{\boldsymbol{B}}-{\boldsymbol{B}}{\boldsymbol{A}}$, and ${\left({{\rm{\Delta }}}_{\rho }{\boldsymbol{X}}\right)}^{2}:=\mathrm{Tr}({{\boldsymbol{X}}}^{2}\rho )-{\left[\mathrm{Tr}({\boldsymbol{X}}\rho )\right]}^{2}$ is the variance of X with respect to the state ρ, for X = A, B.
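As a quick numerical illustration (ours, not from the paper), the Robertson–Schrödinger bound can be checked for a randomly drawn pair of observables and a random mixed state; the helper names `rand_herm` and `rand_density` are ad hoc, and only NumPy is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(d):
    # random Hermitian matrix playing the role of a qudit observable
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (X + X.conj().T) / 2

def rand_density(d):
    # random density matrix (Ginibre construction: M M^dagger, normalized)
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = M @ M.conj().T
    return rho / np.trace(rho).real

d = 2
A, B, rho = rand_herm(d), rand_herm(d), rand_density(d)
ev = lambda X: np.trace(X @ rho).real              # <X>_rho
var = lambda X: ev(X @ X) - ev(X) ** 2             # (Delta_rho X)^2

lhs = var(A) * var(B)
anti = np.trace((A @ B + B @ A) @ rho).real        # <{A,B}>_rho
comm = np.trace((A @ B - B @ A) @ rho)             # <[A,B]>_rho (purely imaginary)
rhs = 0.25 * ((anti - 2 * ev(A) * ev(B)) ** 2 + abs(comm) ** 2)
assert lhs >= rhs - 1e-12
```

Changing the seed or the dimension d leaves the assertion intact, as the bound holds for every state and every pair of observables.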
Recently, state-independent uncertainty relations have been investigated [10, 12], which have direct applications to entanglement detection. In order to get state-independent uncertainty relations, one considers the sum of the variances and solves the following optimization problems:
$\begin{eqnarray}\displaystyle \begin{array}{l}{\mathrm{Var}}_{\rho }({\boldsymbol{A}})+{\mathrm{Var}}_{\rho }({\boldsymbol{B}})\\ \quad \geqslant \mathop{\min }\limits_{\rho \in {\rm{D}}({{\mathbb{C}}}^{d})}\left({\mathrm{Var}}_{\rho }({\boldsymbol{A}})+{\mathrm{Var}}_{\rho }({\boldsymbol{B}})\right),\end{array}\end{eqnarray}$
$\begin{eqnarray}{{\rm{\Delta }}}_{\rho }{\boldsymbol{A}}+{{\rm{\Delta }}}_{\rho }{\boldsymbol{B}}\geqslant \mathop{\min }\limits_{\rho \in {\rm{D}}({{\mathbb{C}}}^{d})}\left({{\rm{\Delta }}}_{\rho }{\boldsymbol{A}}+{{\rm{\Delta }}}_{\rho }{\boldsymbol{B}}\right),\end{eqnarray}$
where ${\mathrm{Var}}_{\rho }({\boldsymbol{X}})={\left({{\rm{\Delta }}}_{\rho }{\boldsymbol{X}}\right)}^{2}$ is the variance of the observable X associated with state $\rho \in {\rm{D}}({{\mathbb{C}}}^{d})$.
Efforts have been devoted to providing quantitative bounds for the above inequalities [20]. However, searching for such bounds may not be the best way to obtain new uncertainty relations [21]. Recently, Busch and Reardon-Smith proposed to consider the uncertainty region [20] of two observables A and B, instead of finding bounds based on some particular choice of uncertainty functional, typically the product or sum of uncertainties [22]. Once the structure of the uncertainty region is identified, we can infer specific information about the states of minimal uncertainty in some sense. In view of this, the two optimization problems (1) and (2) become
$\begin{eqnarray*}\displaystyle \begin{array}{l}\mathop{\min }\limits_{\rho \in {\rm{D}}({{\mathbb{C}}}^{d})}\left({\mathrm{Var}}_{\rho }({\boldsymbol{A}})+{\mathrm{Var}}_{\rho }({\boldsymbol{B}})\right)\\ \quad =\,\,\min \{{x}^{2}+{y}^{2}:(x,y)\in {{ \mathcal U }}_{{\rm{\Delta }}{\boldsymbol{A}},{\rm{\Delta }}{\boldsymbol{B}}}^{({\rm{m}})}\},\\ \mathop{\min }\limits_{\rho \in {\rm{D}}({{\mathbb{C}}}^{d})}\left({{\rm{\Delta }}}_{\rho }{\boldsymbol{A}}+{{\rm{\Delta }}}_{\rho }{\boldsymbol{B}}\right)\\ \quad =\,\,\min \{x+y:(x,y)\in {{ \mathcal U }}_{{\rm{\Delta }}{\boldsymbol{A}},{\rm{\Delta }}{\boldsymbol{B}}}^{({\rm{m}})}\},\end{array}\end{eqnarray*}$
where ${{ \mathcal U }}_{{\rm{\Delta }}{\boldsymbol{A}},{\rm{\Delta }}{\boldsymbol{B}}}^{({\rm{m}})}$ is the so-called uncertainty region of two observables A and B defined by
$\begin{eqnarray*}{{ \mathcal U }}_{{\rm{\Delta }}{\boldsymbol{A}},{\rm{\Delta }}{\boldsymbol{B}}}^{({\rm{m}})}=\{({{\rm{\Delta }}}_{\rho }{\boldsymbol{A}},{{\rm{\Delta }}}_{\rho }{\boldsymbol{B}})\in {{\mathbb{R}}}_{+}^{2}:\rho \in {\rm{D}}({{\mathbb{C}}}^{d})\}.\end{eqnarray*}$
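To build intuition for such uncertainty regions, a Monte Carlo sketch (our illustration, not part of the paper) samples the pair (Δ_ψσ_x, Δ_ψσ_z) over Haar-random pure qubit states. The bound x² + y² ≥ 1 asserted at the end follows from Δ²_ψσ_x = 1 − n_x² and Δ²_ψσ_z = 1 − n_z² for a unit Bloch vector n.

```python
import numpy as np

rng = np.random.default_rng(1)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def std_dev(X, v):
    # standard deviation Delta_psi X for the pure state |psi> = v
    m1 = (v.conj() @ X @ v).real
    m2 = (v.conj() @ X @ X @ v).real
    return np.sqrt(max(m2 - m1 ** 2, 0.0))

pts = []
for _ in range(20_000):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    v /= np.linalg.norm(v)                   # Haar-random pure qubit state
    pts.append((std_dev(sx, v), std_dev(sz, v)))
pts = np.asarray(pts)

# all sampled points lie in the bounding square [0,1]^2
# (the eigenvalue gap is 2 for both sigma_x and sigma_z)
assert pts.max() <= 1 + 1e-9
# for sigma_x, sigma_z the pure-state points satisfy x^2 + y^2 >= 1
assert (pts[:, 0] ** 2 + pts[:, 1] ** 2 >= 1 - 1e-9).all()
```

Plotting `pts` would reveal the pure-state uncertainty region as the part of the unit square outside the quarter circle.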
Random matrix theory and probability theory are powerful tools in quantum information theory. Recently, the non-additivity of quantum channel capacity [23] was established via probabilistic tools. The Duistermaat–Heckman measure on the moment polytope has been used to derive the probability distribution density of one-body quantum marginal states of multipartite random quantum states [24, 25], and that of classical probability mixtures of random quantum states [26, 27]. The probability density function (PDF) of the quantum expectation value of an observable, as a function of a random quantum pure state, has also been calculated analytically [28]. Motivated by these works, we investigate the joint PDFs of the uncertainties of observables. From this viewpoint it is not necessary to determine the uncertainty regions of observables directly: it suffices to identify the support of the PDF, since the PDF vanishes exactly outside the uncertainty region. All problems thus reduce to computing the PDFs of the uncertainties of observables, since all information concerning uncertainty regions and state-independent uncertainty relations is encoded in these PDFs. In [29] we studied such PDFs for random mixed quantum state ensembles, where all problems concerning qubit observables were completely solved: analytical formulae for the PDFs of the uncertainties were obtained, and the uncertainty regions over which the optimization for the state-independent lower bound of the sum of variances is performed were characterized. In this paper, we focus on the same problem for random pure quantum state ensembles.
Let δ(x) be the delta function [30], defined by
$\begin{eqnarray*}\delta (x)=\left\{\begin{array}{ll}+\infty , & \mathrm{if}\,x=0;\\ 0, & \mathrm{if}\,x\ne 0.\end{array}\right.\end{eqnarray*}$
One has $\left\langle \delta ,f\right\rangle :={\int }_{{\mathbb{R}}}f(x)\delta (x){\rm{d}}x=f(0)$. Denote ${\delta }_{a}(x):=\delta (x-a)$; then $\left\langle {\delta }_{a},f\right\rangle =f(a)$. Let Z(g):={x ∈ D(g): g(x) = 0} be the zero set of a function g(x) with domain D(g). We will use the following definition.

[31, 32]

If $g:{\mathbb{R}}\to {\mathbb{R}}$ is a smooth function (the first derivative $g^{\prime} $ is a continuous function) such that $Z(g)\cap Z(g^{\prime} )=\varnothing $, then the composite $\delta \circ g$ is defined by:

$\begin{eqnarray*}\delta (g(x))=\sum _{x\in Z(g)}\displaystyle \frac{1}{\left|g^{\prime} (x)\right|}{\delta }_{x}.\end{eqnarray*}$
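This composition rule can be checked numerically by replacing δ with a narrow normalized Gaussian (a standard nascent-delta approximation; the width and tolerance below are our choices). For g(x) = x² − 1, the rule predicts ∫ f(x)δ(g(x))dx = (f(1) + f(−1))/2, since |g′(±1)| = 2.

```python
import numpy as np

def delta_eps(t, eps):
    # nascent delta function: a narrow normalized Gaussian
    return np.exp(-t**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

g = lambda x: x**2 - 1        # Z(g) = {-1, +1}, with |g'(x)| = 2 at both zeros
f = lambda x: x**4 + 3.0      # arbitrary smooth test function

x = np.linspace(-5.0, 5.0, 2_000_001)
lhs = np.sum(f(x) * delta_eps(g(x), 1e-4)) * (x[1] - x[0])  # Riemann sum
rhs = (f(1.0) + f(-1.0)) / 2.0   # sum over Z(g) of f(x0) / |g'(x0)|
assert abs(lhs - rhs) < 1e-3
```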

2. Uncertainty regions of observables

We can extend the notion of the uncertainty region of two observables A and B, put forward in [20], to that of multiple observables.

Let $({{\boldsymbol{A}}}_{1},\ldots ,\,{{\boldsymbol{A}}}_{n})$ be an n-tuple of qudit observables acting on ${{\mathbb{C}}}^{d}$. The uncertainty region of such n-tuple $({{\boldsymbol{A}}}_{1},\ldots ,{{\boldsymbol{A}}}_{n})$, for the mixed quantum state ensemble, is defined by

$\begin{eqnarray*}\displaystyle \begin{array}{l}{{ \mathcal U }}_{{\rm{\Delta }}{{\boldsymbol{A}}}_{1},\ldots ,\,{\rm{\Delta }}{{\boldsymbol{A}}}_{n}}^{({\rm{m}})}:=\left\{({{\rm{\Delta }}}_{\rho }{{\boldsymbol{A}}}_{1},\ldots ,\,{{\rm{\Delta }}}_{\rho }{{\boldsymbol{A}}}_{n})\right.\\ \left.\quad \in \,{{\mathbb{R}}\,}_{+}^{n}:\rho \in {\rm{D}}({{\mathbb{C}}}^{d})\right\}.\end{array}\end{eqnarray*}$
Similarly, the uncertainty region of such n-tuple $({{\boldsymbol{A}}}_{1},\ldots ,{{\boldsymbol{A}}}_{n})$, for the pure quantum state ensemble, is defined by
$\begin{eqnarray*}\displaystyle \begin{array}{l}{{ \mathcal U }}_{{\rm{\Delta }}\,{{\boldsymbol{A}}}_{1},\ldots ,\,{\rm{\Delta }}{{\boldsymbol{A}}}_{n}}^{({\rm{p}})}:\; =\left\{({{\rm{\Delta }}}_{\psi }{{\boldsymbol{A}}}_{1}\,,\ldots ,\,{{\rm{\Delta }}}_{\psi }{{\boldsymbol{A}}}_{n})\right.\\ \quad \left.\in \,{{\mathbb{R}}\,}_{+}^{n}:\left|\psi \right\rangle \in {{\mathbb{C}}}^{d}\right\}.\end{array}\end{eqnarray*}$
Clearly, ${{ \mathcal U }}_{{\rm{\Delta }}{{\boldsymbol{A}}}_{1},\ldots ,\,{\rm{\Delta }}{{\boldsymbol{A}}}_{n}}^{({\rm{p}})}\subset {{ \mathcal U }}_{{\rm{\Delta }}{{\boldsymbol{A}}}_{1},\ldots ,\,{\rm{\Delta }}{{\boldsymbol{A}}}_{n}}^{({\rm{m}})}$.

Note that our definition of the uncertainty region differs from the one given in [2]: in the above definition we use the standard deviation instead of the variance.
Next, we will show that ${{ \mathcal U }}_{{\rm{\Delta }}{{\boldsymbol{A}}}_{1},\ldots ,{\rm{\Delta }}{{\boldsymbol{A}}}_{n}}^{({\rm{m}})}$ is contained in a hypercube in ${{\mathbb{R}}}_{+}^{n}$. To this end, we study the sets ${\mathscr{P}}({\boldsymbol{A}})=\{{\mathrm{Var}}_{\psi }({\boldsymbol{A}}):\left|\psi \right\rangle \in {{\mathbb{C}}}^{d}\}$ and ${\mathscr{M}}({\boldsymbol{A}})=\{{\mathrm{Var}}_{\rho }({\boldsymbol{A}}):\rho \in {\rm{D}}\left({{\mathbb{C}}}^{d}\right)\}$ for a qudit observable A acting on ${{\mathbb{C}}}^{d}$. The relationship between the two sets is summarized in the following proposition.

It holds that

$\begin{eqnarray*}{\mathscr{P}}({\boldsymbol{A}})={\mathscr{M}}({\boldsymbol{A}})=\mathrm{conv}({\mathscr{P}}({\boldsymbol{A}}))\end{eqnarray*}$
is a closed interval $[0,{\max }_{\psi }{\mathrm{Var}}_{\psi }({\boldsymbol{A}})]$.

Note that ${\mathscr{P}}({\boldsymbol{A}})\subset {\mathscr{M}}({\boldsymbol{A}})\subset \mathrm{conv}({\mathscr{P}}({\boldsymbol{A}}))$. The first inclusion is apparent; the second follows immediately from a result obtained in [33]: for any density matrix $\rho \in {\rm{D}}\left({{\mathbb{C}}}^{d}\right)$ and qudit observable ${\boldsymbol{A}}$, there is a pure state ensemble decomposition $\rho ={\sum }_{j}{p}_{j}| {\psi }_{j}\rangle \langle {\psi }_{j}| $ such that

$\begin{eqnarray}{\mathrm{Var}}_{\rho }({\boldsymbol{A}})=\sum _{j}{p}_{j}{\mathrm{Var}}_{{\psi }_{j}}({\boldsymbol{A}}).\end{eqnarray}$
Since all pure states on ${{\mathbb{C}}}^{d}$ can be generated via a fixed ${\psi }_{0}$ and the whole unitary group ${\mathsf{U}}(d)$, it follows that
$\begin{eqnarray*}{\mathscr{P}}({\boldsymbol{A}})=\mathrm{im}({\rm{\Phi }}),\end{eqnarray*}$
where the mapping ${\rm{\Phi }}:{\mathsf{U}}(d)\to {\mathscr{P}}({\boldsymbol{A}})$ is defined by ${\rm{\Phi }}({\boldsymbol{U}})={\mathrm{Var}}_{{\boldsymbol{U}}{\psi }_{0}}({\boldsymbol{A}})$. This mapping Φ is surjective and continuous. Since ${\mathsf{U}}(d)$ is a compact Lie group, Φ attains its maximal and minimal values on ${\mathsf{U}}(d)$. In fact, ${\min }_{{\mathsf{U}}(d)}{\rm{\Phi }}=0$, as can be seen by taking ${\boldsymbol{U}}$ such that ${\boldsymbol{U}}\left|{\psi }_{0}\right\rangle $ is an eigenvector of ${\boldsymbol{A}}$. Since ${\mathsf{U}}(d)$ is also connected, $\mathrm{im}({\rm{\Phi }})={\rm{\Phi }}({\mathsf{U}}(d))$ is connected, and thus $\mathrm{im}({\rm{\Phi }})=[0,{\max }_{{\mathsf{U}}(d)}{\rm{\Phi }}]$. This amounts to saying that ${\mathscr{P}}({\boldsymbol{A}})$ is the closed interval $[0,{\max }_{{\mathsf{U}}(d)}{\rm{\Phi }}]$, hence a compact and convex set, i.e.
$\begin{eqnarray*}{\mathscr{P}}({\boldsymbol{A}})=\mathrm{conv}({\mathscr{P}}({\boldsymbol{A}})).\end{eqnarray*}$
Therefore
$\begin{eqnarray*}\displaystyle \begin{array}{l}{\mathscr{P}}({\boldsymbol{A}})={\mathscr{M}}({\boldsymbol{A}})=\mathrm{conv}({\mathscr{P}}({\boldsymbol{A}}))=[0,\mathop{\max }\limits_{{\mathsf{U}}(d)}{\rm{\Phi }}]\\ \quad =\,\,[0,\mathop{\max }\limits_{\psi }{\mathrm{Var}}_{\psi }({\boldsymbol{A}})].\end{array}\end{eqnarray*}$
This completes the proof.

Next, we determine ${\max }_{\rho \in {\rm{D}}\left({{\mathbb{C}}}^{d}\right)}{\mathrm{Var}}_{\rho }({\boldsymbol{A}})$ for an observable A. To this end, we recall the (d − 1)-dimensional probability simplex, defined by
$\begin{eqnarray*}\displaystyle \begin{array}{l}{{\rm{\Delta }}}_{d-1}:=\left\{{\boldsymbol{p}}=({p}_{1},\ldots ,{p}_{d})\in {{\mathbb{R}}}^{d}:{p}_{k}\right.\\ \quad \left.\geqslant 0(\forall k\in [d]),\sum _{j}{p}_{j}=1\right\}.\end{array}\end{eqnarray*}$
The interior of Δd−1 is denoted by ${{\rm{\Delta }}}_{d-1}^{\circ }$:
$\begin{eqnarray*}\displaystyle \begin{array}{l}{{\rm{\Delta }}}_{d-1}^{\circ }:=\left\{{\boldsymbol{p}}=({p}_{1},\ldots ,{p}_{d})\in {{\mathbb{R}}}^{d}:{p}_{k}\right.\\ \quad \left.\gt 0(\forall k\in [d]),\displaystyle \sum _{j}{p}_{j}=1\right\}.\end{array}\end{eqnarray*}$
Thus a point x on the boundary ∂Δd−1 has at least one vanishing component, i.e. xi = 0 for some i ∈ [d], where $[d]:=\{1,2,\ldots ,d\}$. Now we write the boundary ∂Δd−1 as the union of the subsets:
$\begin{eqnarray*}\partial {{\rm{\Delta }}}_{d-1}=\bigcup _{j=1}^{d}{F}_{j},\end{eqnarray*}$
where Fj:={x ∈ ∂Δd−1: xj = 0}. Although the following result has been known since 1935 [34], we include a proof for completeness.

Assume that ${\boldsymbol{A}}$ is an observable acting on ${{\mathbb{C}}}^{d}$. Denote the vector consisting of eigenvalues of ${\boldsymbol{A}}$ by $\lambda ({\boldsymbol{A}})$ with components being ${\lambda }_{1}({\boldsymbol{A}})\leqslant \cdots \leqslant {\lambda }_{d}({\boldsymbol{A}})$. It holds that

$\begin{eqnarray*}\displaystyle \begin{array}{l}\max \{{\mathrm{Var}}_{\rho }({\boldsymbol{A}}):\rho \in {\rm{D}}\left({{\mathbb{C}}}^{d}\right)\}\\ \quad =\,\displaystyle \frac{1}{4}{\left({\lambda }_{\max }({\boldsymbol{A}})-{\lambda }_{\min }({\boldsymbol{A}})\right)}^{2}.\end{array}\end{eqnarray*}$
Here ${\lambda }_{\min }({\boldsymbol{A}})={\lambda }_{1}({\boldsymbol{A}})$ and ${\lambda }_{\max }({\boldsymbol{A}})={\lambda }_{d}({\boldsymbol{A}})$.

Assume that ${\boldsymbol{a}}:=\lambda ({\boldsymbol{A}})$, where ${a}_{j}:={\lambda }_{j}({\boldsymbol{A}})$. Note that ${\mathrm{Var}}_{\rho }({\boldsymbol{A}})=\mathrm{Tr}({{\boldsymbol{A}}}^{2}\rho )-{\left[\mathrm{Tr}({\boldsymbol{A}}\rho )\right]}^{2}=\left\langle {{\boldsymbol{a}}}^{2},{{\boldsymbol{D}}}_{{\boldsymbol{U}}}\lambda (\rho )\right\rangle -{\left\langle {\boldsymbol{a}},{{\boldsymbol{D}}}_{{\boldsymbol{U}}}\lambda (\rho )\right\rangle }^{2}$, where ${\boldsymbol{a}}={\left({a}_{1},\ldots ,{a}_{d}\right)}^{{\mathsf{T}}}$, ${{\boldsymbol{a}}}^{2}={\left({a}_{1}^{2},\ldots ,{a}_{d}^{2}\right)}^{{\mathsf{T}}}$, ${{\boldsymbol{D}}}_{{\boldsymbol{U}}}=\overline{{\boldsymbol{U}}}\circ {\boldsymbol{U}}$ (here ◦ stands for the Schur, i.e. entrywise, product), and $\lambda (\rho )={\left({\lambda }_{1}(\rho ),\ldots ,{\lambda }_{d}(\rho )\right)}^{{\mathsf{T}}}$. Denote

$\begin{eqnarray*}\displaystyle \begin{array}{l}{\boldsymbol{x}}:={{\boldsymbol{D}}}_{{\boldsymbol{U}}}\lambda (\rho )\in {{\rm{\Delta }}}_{d-1}:=\left\{{\boldsymbol{p}}=({p}_{1},\ldots ,{p}_{d})\right.\\ \quad \left.\in {{\mathbb{R}}}_{+}^{d}:\sum _{j}{p}_{j}=1\right\},\end{array}\end{eqnarray*}$
the $(d-1)$-dimensional probability simplex. Then
$\begin{eqnarray*}\displaystyle \begin{array}{rcl}{\mathrm{Var}}_{\rho }({\boldsymbol{A}}) & = & \left\langle {{\boldsymbol{a}}}^{2},{\boldsymbol{x}}\right\rangle -{\left\langle {\boldsymbol{a}},{\boldsymbol{x}}\right\rangle }^{2}=\sum _{j=1}^{d}{a}_{j}^{2}{x}_{j}\\ & & -{\left(\sum _{j=1}^{d}{a}_{j}{x}_{j}\right)}^{2}=: f({\boldsymbol{x}}).\end{array}\end{eqnarray*}$

(i) If d = 2,

$\begin{eqnarray*}\displaystyle \begin{array}{rcl}f({x}_{1},{x}_{2}) & = & {a}_{1}^{2}{x}_{1}+{a}_{2}^{2}{x}_{2}-{\left({a}_{1}{x}_{1}+{a}_{2}{x}_{2}\right)}^{2}\\ & = & {a}_{1}^{2}{x}_{1}+{a}_{2}^{2}(1-{x}_{1})-{\left({a}_{1}{x}_{1}+{a}_{2}(1-{x}_{1})\right)}^{2}\\ & = & {\left({a}_{2}-{a}_{1}\right)}^{2}\left[\displaystyle \frac{1}{4}-{\left({x}_{1}-\displaystyle \frac{1}{2}\right)}^{2}\right]\leqslant \displaystyle \frac{1}{4}{\left({a}_{2}-{a}_{1}\right)}^{2},\end{array}\end{eqnarray*}$
implying that ${f}_{\max }=\tfrac{1}{4}{\left({a}_{2}-{a}_{1}\right)}^{2}$ when ${x}_{1}={x}_{2}=\tfrac{1}{2}$.


(ii) If $d\geqslant 3$, assume without loss of generality that ${a}_{1}\lt {a}_{2}\lt \cdots \lt {a}_{d}$. We will show that the function f attains its maximal value $\tfrac{1}{4}{\left({a}_{d}-{a}_{1}\right)}^{2}=\tfrac{1}{4}{\left({\lambda }_{\max }({\boldsymbol{A}})-{\lambda }_{\min }({\boldsymbol{A}})\right)}^{2}$ at the point $({x}_{1},{x}_{2},\ldots ,{x}_{d-1},{x}_{d})=(\tfrac{1}{2},0,\ldots ,0,\tfrac{1}{2})$. Using the Lagrange multiplier method, we let

$\begin{eqnarray*}\displaystyle \begin{array}{l}L({x}_{1},{x}_{2},\cdots ,{x}_{d},\lambda )=\sum _{i=1}^{d}{a}_{i}^{2}{x}_{i}\\ \quad -\,{\left(\sum _{i=1}^{d}{a}_{i}{x}_{i}\right)}^{2}+\lambda \left(\sum _{i=1}^{d}{x}_{i}-1\right).\end{array}\end{eqnarray*}$

Thus
$\begin{eqnarray}\displaystyle \begin{array}{rcl}\displaystyle \frac{\partial L}{\partial {x}_{i}} & = & {a}_{i}^{2}-2{a}_{i}\left(\sum _{i=1}^{d}{a}_{i}{x}_{i}\right)\\ & & +\lambda =0\quad (i=1,\ldots ,d),\\ \displaystyle \frac{\partial L}{\partial \lambda } & = & \sum _{i=1}^{d}{x}_{i}-1=0.\end{array}\end{eqnarray}$
Denote $m:={\sum }_{i=1}^{d}{a}_{i}{x}_{i}$. Because (4) holds for all i = 1,…,d, we see that
$\begin{eqnarray*}\lambda =-{a}_{i}^{2}+2{a}_{i}m=-{a}_{j}^{2}+2{a}_{j}m,\end{eqnarray*}$
that is, $m=\tfrac{{a}_{i}+{a}_{j}}{2}$ and $\lambda ={a}_{i}{a}_{j}$ for all distinct indices i and j, which cannot hold simultaneously for all pairs since the ${a}_{i}$ are distinct and $d\geqslant 3$. Hence there is no stationary point in the interior of ${{\rm{\Delta }}}_{d-1}$, and thus fmax is attained on the boundary $\partial {{\rm{\Delta }}}_{d-1}$ of ${{\rm{\Delta }}}_{d-1}$. Suppose, by induction, that the conclusion holds for $d=k\geqslant 2$, i.e. the function f attains its maximal value ${f}_{\max }\,=\,\tfrac{1}{4}{\left({a}_{k}-{a}_{1}\right)}^{2}$ at the point $({x}_{1},{x}_{2},\ldots ,{x}_{k-1},{x}_{k})=(\tfrac{1}{2},0,\ldots ,0,\tfrac{1}{2})\in \partial {{\rm{\Delta }}}_{k-1}$.

Next, we consider the case $d=k+1$, i.e. the extremal values of $f({x}_{1},\ldots ,{x}_{k+1})$ on $\partial {{\rm{\Delta }}}_{k}$. If ${\boldsymbol{x}}\in {F}_{j}\subset \partial {{\rm{\Delta }}}_{k}$, where $j\in \left\{2,\ldots ,k\right\}$, then ${f}_{k+1}({x}_{1},{x}_{2},\ldots ,{x}_{k+1})={f}_{k}({y}_{1},{y}_{2},\ldots ,{y}_{k})={\sum }_{i=1}^{k}{b}_{i}^{2}{y}_{i}-{\left({\sum }_{i=1}^{k}{b}_{i}{y}_{i}\right)}^{2}$, where ${b}_{1}={a}_{1},\ldots ,{b}_{j-1}={a}_{j-1},{b}_{j}={a}_{j+1},\ldots ,{b}_{k}={a}_{k+1}$; obviously ${b}_{1}\lt {b}_{2}\lt \cdots \lt {b}_{k}$. By the induction hypothesis, we have

$\begin{eqnarray*}{f}_{\max }=\displaystyle \frac{1}{4}{\left({b}_{k}-{b}_{1}\right)}^{2}=\displaystyle \frac{1}{4}{\left({a}_{k+1}-{a}_{1}\right)}^{2},\end{eqnarray*}$
and the maximal value is attained at $({x}_{1},{x}_{2},\ldots ,{x}_{k},{x}_{k+1})=\left(\tfrac{1}{2},0,\ldots ,0,\tfrac{1}{2}\right);$ similarly ${f}_{\max }=\tfrac{1}{4}{\left({a}_{k+1}-{a}_{2}\right)}^{2}$ is attained on F1; ${f}_{\max }=\tfrac{1}{4}{\left({a}_{k}-{a}_{1}\right)}^{2}$ is attained on ${F}_{k+1}$.

By comparing these extremal values, we know that

$\begin{eqnarray*}{f}_{\max }=\displaystyle \frac{1}{4}{\left({a}_{k+1}-{a}_{1}\right)}^{2}\end{eqnarray*}$
is attained on $\partial {{\rm{\Delta }}}_{k}$, at the point $({x}_{1},{x}_{2},\ldots ,{x}_{k},{x}_{k+1})=(\tfrac{1}{2},0,\ldots ,0,\tfrac{1}{2})$.

Applying the spectral decomposition to ${\boldsymbol{A}}$, we write ${\boldsymbol{A}}={\sum }_{j=1}^{d}{a}_{j}| {a}_{j}\rangle \langle {a}_{j}| $. Let $\left|\psi \right\rangle =\tfrac{\left|{a}_{1}\right\rangle +\left|{a}_{d}\right\rangle }{\sqrt{2}}$. Then

$\begin{eqnarray*}\displaystyle \begin{array}{rcl}{\mathrm{Var}}_{\psi }({\boldsymbol{A}}) & = & \mathrm{Tr}({{\boldsymbol{A}}}^{2}| \psi \rangle \langle \psi | )-\mathrm{Tr}{\left({\boldsymbol{A}}| \psi \rangle \langle \psi | \right)}^{2}\\ & = & \displaystyle \frac{1}{4}{\left({\lambda }_{\max }({\boldsymbol{A}})-{\lambda }_{\min }({\boldsymbol{A}})\right)}^{2}.\end{array}\end{eqnarray*}$
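Both steps of the proof can be replayed numerically for a hypothetical spectrum (the vector `a` below is our example, not from the paper): uniform sampling of the simplex never exceeds ¼(a_d − a₁)², and the state (|a₁⟩ + |a_d⟩)/√2 attains it.

```python
import numpy as np

rng = np.random.default_rng(2)
a = np.array([-1.0, 0.3, 0.7, 2.0])        # hypothetical spectrum, a1 < ... < ad
f_star = 0.25 * (a[-1] - a[0]) ** 2        # claimed maximal variance

# (1) simplex picture: f(x) = <a^2, x> - <a, x>^2 never exceeds f_star;
#     Dirichlet(1,...,1) samples are uniform on the probability simplex
x = rng.dirichlet(np.ones(len(a)), size=200_000)
f_vals = x @ a**2 - (x @ a) ** 2
assert f_vals.max() <= f_star + 1e-9

x_star = np.array([0.5, 0.0, 0.0, 0.5])    # the claimed maximizer (1/2, 0, ..., 0, 1/2)
assert abs(x_star @ a**2 - (x_star @ a) ** 2 - f_star) < 1e-12

# (2) quantum picture: |psi> = (|a_1> + |a_d>)/sqrt(2) attains the maximum
A = np.diag(a)
psi = np.zeros(len(a)); psi[0] = psi[-1] = 1 / np.sqrt(2)
var = psi @ A @ A @ psi - (psi @ A @ psi) ** 2
assert abs(var - f_star) < 1e-12
```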

Let $({{\boldsymbol{A}}}_{1},\ldots ,{{\boldsymbol{A}}}_{n})$ be an n-tuple of qudit observables acting on ${{\mathbb{C}}}^{d}$. Denote $v({{\boldsymbol{A}}}_{k}):=\tfrac{1}{2}({\lambda }_{\max }({{\boldsymbol{A}}}_{k})\,-{\lambda }_{\min }({{\boldsymbol{A}}}_{k}))$, where k=1, …, n and ${\lambda }_{\max /\min }({{\boldsymbol{A}}}_{k})$ stands for the maximal/minimal eigenvalue of ${{\boldsymbol{A}}}_{k}$. Then

$\begin{eqnarray*}{{ \mathcal U }}_{{\rm{\Delta }}{{\boldsymbol{A}}}_{1},\ldots ,{\rm{\Delta }}{{\boldsymbol{A}}}_{n}}^{({\rm{m}})}\subset \left[0,v({{\boldsymbol{A}}}_{1})\right]\times \cdots \times \left[0,v({{\boldsymbol{A}}}_{n})\right].\end{eqnarray*}$

The proof follows immediately by combining propositions 1 and 2.

3. PDFs of expectation values and uncertainties of qubit observables

Assume A is a non-degenerate positive matrix with eigenvalues λ1(A) < ⋯ < λd(A), denoted by λ(A) = (λ1(A), …, λd(A)). In view of propositions 1 and 2 and the special role of the pure state ensemble, we consider here only the variances of the observable A over pure states; the same problem has been considered very recently for mixed states [29]. The PDF of $\langle {\boldsymbol{A}}{\rangle }_{\psi }:=\left\langle \psi \left|{\boldsymbol{A}}\right|\psi \right\rangle $ is then defined by
$\begin{eqnarray*}{f}_{\langle {\boldsymbol{A}}\rangle }^{(d)}(r):=\int \delta (r-\langle {\boldsymbol{A}}{\rangle }_{\psi }){\rm{d}}\mu (\psi ).\end{eqnarray*}$
Here dμ(ψ) is the so-called uniform probability measure, which is invariant under the unitary rotations, and can be realized in the following way:
$\begin{eqnarray*}{\rm{d}}\mu (\psi )=\displaystyle \frac{{\rm{\Gamma }}(d)}{2{\pi }^{d}}\delta (1-\parallel \psi \parallel )[{\rm{d}}\psi ],\end{eqnarray*}$
where $[{\rm{d}}\psi ]={\prod }_{k=1}^{d}{\rm{d}}{x}_{k}{\rm{d}}{y}_{k}$ for ψk = xk + iyk(k = 1, …, d), and Γ( · ) is the Gamma function. Thus
$\begin{eqnarray*}\displaystyle \begin{array}{rcl}{f}_{\langle {\boldsymbol{A}}\rangle }^{(d)}(r) & = & {\rm{\Gamma }}(d){\displaystyle \int }_{{{\mathbb{R}}}_{+}^{d}}\delta \left(r-\sum _{i=1}^{d}{\lambda }_{i}({\boldsymbol{A}}){r}_{i}\right)\delta \\ & & \times \,\left(1-\sum _{i=1}^{d}{r}_{i}\right)\prod _{i=1}^{d}{\rm{d}}{r}_{i}.\end{array}\end{eqnarray*}$
Although the following result was already obtained in [28], for completeness we give a different proof of it:

For a given quantum observable ${\boldsymbol{A}}$ with simple spectrum $\lambda ({\boldsymbol{A}})=({\lambda }_{1}({\boldsymbol{A}}),\ldots ,{\lambda }_{d}({\boldsymbol{A}}))$, where ${\lambda }_{1}({\boldsymbol{A}})\,\lt \cdots \lt {\lambda }_{d}({\boldsymbol{A}})$, the PDF of $\langle {\boldsymbol{A}}{\rangle }_{\psi }$, where $\left|\psi \right\rangle $ is a Haar-distributed random pure state on ${{\mathbb{C}}}^{d}$, is given by the following:

$\begin{eqnarray}\displaystyle \begin{array}{rcl}{f}_{\langle {\boldsymbol{A}}\rangle }^{(d)}(r) & = & {\left(-1\right)}^{d-1}(d-1)\\ & & \times \,\sum _{i=1}^{d}\,\displaystyle \frac{{\left(r-{\lambda }_{i}({\boldsymbol{A}})\right)}^{d-2}}{\prod _{j\in \hat{i}}({\lambda }_{i}({\boldsymbol{A}})-{\lambda }_{j}({\boldsymbol{A}}))}H(r-{\lambda }_{i}({\boldsymbol{A}})),\end{array}\end{eqnarray}$
where $\hat{i}:=\left\{1,2,\ldots ,d\right\}\setminus \left\{i\right\}$ and H is the so-called Heaviside function, defined by $H(t)=1$ if $t\gt 0$, 0 otherwise. Thus the support of ${f}_{\langle {\boldsymbol{A}}\rangle }^{(d)}(r)$ is the closed interval $[{\lambda }_{1}({\boldsymbol{A}}),{\lambda }_{d}({\boldsymbol{A}})]$. In particular, for d = 2, we have
$\begin{eqnarray*}\displaystyle \begin{array}{rcl}{f}_{\langle {\boldsymbol{A}}\rangle }^{(2)}(r) & = & \displaystyle \frac{1}{{\lambda }_{2}({\boldsymbol{A}})-{\lambda }_{1}({\boldsymbol{A}})}(H(r-{\lambda }_{1}({\boldsymbol{A}}))\\ & & -H(r-{\lambda }_{2}({\boldsymbol{A}}))).\end{array}\end{eqnarray*}$

By performing Laplace transformation $(r\to s)$ of ${f}_{\langle {\boldsymbol{A}}\rangle }^{(d)}(r)$, we get that

$\begin{eqnarray*}\displaystyle \begin{array}{rcl}{\mathscr{L}}({f}_{\langle {\boldsymbol{A}}\rangle }^{(d)})(s) & = & {\rm{\Gamma }}(d)\displaystyle \int \exp \left(-s\sum _{i=1}^{d}{\lambda }_{i}({\boldsymbol{A}}){r}_{i}\right)\delta \\ & & \times \,\left(1-\sum _{i=1}^{d}{r}_{i}\right)\prod _{i=1}^{d}{\rm{d}}{r}_{i}.\end{array}\end{eqnarray*}$
Let
$\begin{eqnarray*}\displaystyle \begin{array}{l}{F}_{s}(t):={\rm{\Gamma }}(d)\displaystyle \int \exp \left(-s\sum _{i=1}^{d}{\lambda }_{i}({\boldsymbol{A}}){r}_{i}\right)\delta \\ \quad \times \,\left(t-\sum _{i=1}^{d}{r}_{i}\right)\prod _{i=1}^{d}{\rm{d}}{r}_{i}.\end{array}\end{eqnarray*}$
Performing a further Laplace transformation $(t\to x)$ of Fs(t):
$\begin{eqnarray*}\displaystyle \begin{array}{l}{\mathscr{L}}({F}_{s})(x)={\rm{\Gamma }}(d)\displaystyle \int \exp \left(-s\sum _{i=1}^{d}{\lambda }_{i}({\boldsymbol{A}}){r}_{i}\right)\\ \,\times \,\exp \left(-x\sum _{i=1}^{d}{r}_{i}\right)\prod _{i=1}^{d}{\rm{d}}{r}_{i}\\ \,=\,{\rm{\Gamma }}(d)\prod _{i=1}^{d}{\displaystyle \int }_{0}^{\infty }\exp \left(-(s{\lambda }_{i}({\boldsymbol{A}})+x){r}_{i}\right){\rm{d}}{r}_{i}\\ \,=\,\displaystyle \frac{{\rm{\Gamma }}(d)}{{\prod }_{i=1}^{d}(s{\lambda }_{i}({\boldsymbol{A}})+x)},\end{array}\end{eqnarray*}$
implying that [35]
$\begin{eqnarray*}{F}_{s}(t)={\rm{\Gamma }}(d)\sum _{i=1}^{d}\displaystyle \frac{\exp \left(-{\lambda }_{i}({\boldsymbol{A}}){st}\right)}{{\left(-s\right)}^{d-1}{\prod }_{j\in \hat{i}}({\lambda }_{i}({\boldsymbol{A}})-{\lambda }_{j}({\boldsymbol{A}}))},\end{eqnarray*}$
where $\hat{i}:=\{1,\ldots ,d\}\setminus \left\{i\right\}$. Thus
$\begin{eqnarray*}\displaystyle \begin{array}{l}{\mathscr{L}}({f}_{\langle {\boldsymbol{A}}\rangle }^{(d)})(s)={F}_{s}(1)\\ \quad =\,{\rm{\Gamma }}(d)\sum _{i=1}^{d}\,\displaystyle \frac{\exp \left(-{\lambda }_{i}({\boldsymbol{A}})s\right)}{{\left(-s\right)}^{d-1}{\prod }_{j\in \hat{i}}({\lambda }_{i}({\boldsymbol{A}})-{\lambda }_{j}({\boldsymbol{A}}))}.\end{array}\end{eqnarray*}$
Therefore, we get that
$\begin{eqnarray*}\displaystyle \begin{array}{l}{f}_{\langle {\boldsymbol{A}}\rangle }^{(d)}(r)={\left(-1\right)}^{d-1}(d-1)\\ \,\times \sum _{i=1}^{d}\,\displaystyle \frac{H(r-{\lambda }_{i}({\boldsymbol{A}})){\left(r-{\lambda }_{i}({\boldsymbol{A}})\right)}^{d-2}}{{\prod }_{j\in \hat{i}}({\lambda }_{i}({\boldsymbol{A}})-{\lambda }_{j}({\boldsymbol{A}}))},\end{array}\end{eqnarray*}$
where $H(r-{\lambda }_{i}({\boldsymbol{A}}))$ is the Heaviside function defined above. The support of this PDF is the closed interval $[l,u]$, where
$\begin{eqnarray*}\displaystyle \begin{array}{rcl}l & = & \min \{{\lambda }_{i}({\boldsymbol{A}}):i=1,\ldots ,d\},\\ u & = & \max \{{\lambda }_{i}({\boldsymbol{A}}):i=1,\ldots ,d\}.\end{array}\end{eqnarray*}$
The normalization of ${f}_{\langle {\boldsymbol{A}}\rangle }^{(d)}(r)$ (i.e. ${\int }_{{\mathbb{R}}}{f}_{\langle {\boldsymbol{A}}\rangle }^{(d)}(r){\rm{d}}r\,=\,1$) can be checked by assuming ${\lambda }_{1}\lt {\lambda }_{2}\lt \cdots \lt {\lambda }_{d}$, in which case $[l,u]=[{\lambda }_{1},{\lambda }_{d}]$, since ${f}_{\langle {\boldsymbol{A}}\rangle }^{(d)}(r)$ is a symmetric function of the ${\lambda }_{i}$'s.
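Equation (5) can also be compared against a histogram of ⟨A⟩_ψ over Haar-random pure states (our illustration; the spectrum, bin count and tolerance below are arbitrary choices). With A diagonal, ⟨A⟩_ψ = Σᵢ λᵢ|ψᵢ|², and the vector of |ψᵢ|² is uniform on the probability simplex.

```python
import numpy as np

rng = np.random.default_rng(3)
lam = np.array([0.0, 0.4, 1.0])    # hypothetical simple spectrum, d = 3
d = len(lam)

def pdf(r):
    # equation (5): (-1)^(d-1) (d-1) sum_i H(r - lam_i) (r - lam_i)^(d-2)
    #               / prod_{j != i} (lam_i - lam_j)
    total = np.zeros_like(r, dtype=float)
    for i in range(d):
        denom = np.prod([lam[i] - lam[j] for j in range(d) if j != i])
        total += (r > lam[i]) * (r - lam[i]) ** (d - 2) / denom
    return (-1) ** (d - 1) * (d - 1) * total

# sample <A>_psi for Haar-random |psi>; with A = diag(lam) this is sum_i lam_i |psi_i|^2
n = 1_000_000
v = rng.normal(size=(n, d)) + 1j * rng.normal(size=(n, d))
v /= np.linalg.norm(v, axis=1, keepdims=True)
samples = (np.abs(v) ** 2) @ lam

hist, edges = np.histogram(samples, bins=40, range=(lam[0], lam[-1]), density=True)
mids = (edges[:-1] + edges[1:]) / 2
assert np.max(np.abs(hist - pdf(mids))) < 0.05   # empirical density matches (5)
```

The empirical histogram reproduces the piecewise-polynomial shape of (5), vanishing outside [λ₁, λ_d] as the support statement requires.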

3.1. The case for one qubit observable

Let us now turn to qubit observables. Any qubit observable A may be parameterized as
$\begin{eqnarray}{\boldsymbol{A}}={a}_{0}{\mathbb{1}}+{\boldsymbol{a}}\cdot {\boldsymbol{\sigma }},\quad ({a}_{0},{\boldsymbol{a}})\in {{\mathbb{R}}}^{4},\end{eqnarray}$
where ${\mathbb{1}}$ is the identity matrix on the qubit Hilbert space ${{\mathbb{C}}}^{2}$, and ${\boldsymbol{\sigma }}=({\sigma }_{1},{\sigma }_{2},{\sigma }_{3})$ is the vector of the standard Pauli matrices:
$\begin{eqnarray*}\displaystyle \begin{array}{rcl}{\sigma }_{1} & = & \left(\begin{array}{cc}0 & 1\\ 1 & 0\end{array}\right),\quad {\sigma }_{2}=\left(\begin{array}{cc}0 & -{\rm{i}}\\ {\rm{i}} & 0\end{array}\right),\\ {\sigma }_{3} & = & \left(\begin{array}{cc}1 & 0\\ 0 & -1\end{array}\right).\end{array}\end{eqnarray*}$
Without loss of generality, we assume that our qubit observables have simple eigenvalues; otherwise the problem is trivial. The two eigenvalues of A are then
$\begin{eqnarray*}{\lambda }_{k}({\boldsymbol{A}})={a}_{0}+{\left(-1\right)}^{k}a,\qquad k=1,2,\end{eqnarray*}$
with $a:=\left|{\boldsymbol{a}}\right|=\sqrt{{a}_{1}^{2}+{a}_{2}^{2}+{a}_{3}^{2}}\gt 0$ being the length of vector ${\boldsymbol{a}}=({a}_{1},{a}_{2},{a}_{3})\in {{\mathbb{R}}}^{3}$. Thus (5) becomes
$\begin{eqnarray*}\displaystyle \begin{array}{l}{f}_{\langle {\boldsymbol{A}}\rangle }^{(2)}(r)=\displaystyle \frac{1}{{\lambda }_{2}({\boldsymbol{A}})-{\lambda }_{1}({\boldsymbol{A}})}\\ \quad \times \,[H(r-{\lambda }_{1}({\boldsymbol{A}}))-H(r-{\lambda }_{2}({\boldsymbol{A}}))].\end{array}\end{eqnarray*}$

For the qubit observable ${\boldsymbol{A}}$ defined by equation (6), the PDF of ${{\rm{\Delta }}}_{\psi }{\boldsymbol{A}}$, where ψ is a Haar-distributed random pure state on ${{\mathbb{C}}}^{2}$, is given by

$\begin{eqnarray*}{f}_{{\rm{\Delta }}{\boldsymbol{A}}}^{(2)}(x)=\displaystyle \frac{x}{\left|{\boldsymbol{a}}\right|\sqrt{{\left|{\boldsymbol{a}}\right|}^{2}-{x}^{2}}},\qquad x\in [0,\left|{\boldsymbol{a}}\right|).\end{eqnarray*}$
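Before the proof, the claimed density can be checked by sampling (our illustration; the coefficients (a₀, a) below are arbitrary). Integrating f gives the CDF F(x) = 1 − √(|a|² − x²)/|a| on [0, |a|), which we compare with the empirical CDF.

```python
import numpy as np

rng = np.random.default_rng(4)
a0, avec = 0.7, np.array([0.3, 0.4, 1.2])     # hypothetical (a0, a) in the parameterization
a = np.linalg.norm(avec)                      # |a| (= 1.3 here)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
A = a0 * np.eye(2) + avec[0] * sx + avec[1] * sy + avec[2] * sz

n = 200_000
v = rng.normal(size=(n, 2)) + 1j * rng.normal(size=(n, 2))
v /= np.linalg.norm(v, axis=1, keepdims=True)           # Haar-random pure states
m1 = np.einsum('ni,ij,nj->n', v.conj(), A, v).real      # <A>_psi
m2 = np.einsum('ni,ij,nj->n', v.conj(), A @ A, v).real  # <A^2>_psi
samples = np.sort(np.sqrt(np.clip(m2 - m1**2, 0, None)))

# CDF of f(x) = x / (|a| sqrt(|a|^2 - x^2)) is F(x) = 1 - sqrt(|a|^2 - x^2)/|a|
xs = np.linspace(0.05 * a, 0.95 * a, 19)
emp = np.searchsorted(samples, xs) / n
ana = 1 - np.sqrt(a**2 - xs**2) / a
assert np.max(np.abs(emp - ana)) < 0.01
```

Note that the result depends only on |a|, not on a₀ or the direction of a, consistent with the proposition.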

Note that

$\begin{eqnarray}\delta ({r}^{2}-{r}_{0}^{2})=\displaystyle \frac{1}{2\left|{r}_{0}\right|}\left[\delta (r-{r}_{0})+\delta (r+{r}_{0})\right].\end{eqnarray}$
For $x\geqslant 0$, because
$\begin{eqnarray*}\displaystyle \begin{array}{rcl}\delta ({x}^{2}-{{\rm{\Delta }}}_{\psi }^{2}{\boldsymbol{A}}) & = & \displaystyle \frac{1}{2x}\left[\delta (x+{{\rm{\Delta }}}_{\psi }{\boldsymbol{A}})+\delta (x-{{\rm{\Delta }}}_{\psi }{\boldsymbol{A}})\right]\\ & = & \displaystyle \frac{1}{2x}\delta (x-{{\rm{\Delta }}}_{\psi }{\boldsymbol{A}}),\end{array}\end{eqnarray*}$
we see that
$\begin{eqnarray*}\displaystyle \begin{array}{rcl}{f}_{{\rm{\Delta }}{\boldsymbol{A}}}^{(2)}(x) & = & \displaystyle \int \delta (x-{{\rm{\Delta }}}_{\psi }{\boldsymbol{A}}){\rm{d}}\mu (\psi )\\ & = & 2x\displaystyle \int \delta \left({x}^{2}-{{\rm{\Delta }}}_{\psi }^{2}{\boldsymbol{A}}\right){\rm{d}}\mu (\psi ).\end{array}\end{eqnarray*}$
For any complex 2 × 2 matrix ${\boldsymbol{A}}$, the Cayley–Hamilton theorem gives ${{\boldsymbol{A}}}^{2}=\mathrm{Tr}({\boldsymbol{A}}){\boldsymbol{A}}-\det ({\boldsymbol{A}}){\mathbb{1}}$. Then ${{\rm{\Delta }}}_{\psi }^{2}{\boldsymbol{A}}=(\langle {\boldsymbol{A}}{\rangle }_{\psi }-{\lambda }_{1}({\boldsymbol{A}}))({\lambda }_{2}({\boldsymbol{A}})-\langle {\boldsymbol{A}}{\rangle }_{\psi })$, so that
$\begin{eqnarray*}\delta \left({x}^{2}-{{\rm{\Delta }}}_{\psi }^{2}{\boldsymbol{A}}\right)=\delta \left({x}^{2}-(\langle {\boldsymbol{A}}{\rangle }_{\psi }-{\lambda }_{1}({\boldsymbol{A}}))({\lambda }_{2}({\boldsymbol{A}})-\langle {\boldsymbol{A}}{\rangle }_{\psi })\right).\end{eqnarray*}$
In particular, we see that
$\begin{eqnarray*}\displaystyle \begin{array}{rcl}{f}_{{\rm{\Delta }}{\boldsymbol{A}}}^{(2)}(x) & = & 2x{\displaystyle \int }_{{\lambda }_{1}({\boldsymbol{A}})}^{{\lambda }_{2}({\boldsymbol{A}})}{\rm{d}}r\delta \left({x}^{2}-(r-{\lambda }_{1}({\boldsymbol{A}}))({\lambda }_{2}({\boldsymbol{A}})-r)\right)\\ & & \times {\displaystyle \int }_{{{\mathbb{C}}}^{2}}\delta (r-\langle {\boldsymbol{A}}{\rangle }_{\psi }){\rm{d}}\mu (\psi )\\ & = & 2x{\displaystyle \int }_{{\lambda }_{1}({\boldsymbol{A}})}^{{\lambda }_{2}({\boldsymbol{A}})}{\rm{d}}{{rf}}_{\langle {\boldsymbol{A}}\rangle }^{(2)}(r)\\ & & \times \,\delta \left({x}^{2}-(r-{\lambda }_{1}({\boldsymbol{A}}))({\lambda }_{2}({\boldsymbol{A}})-r)\right).\end{array}\end{eqnarray*}$
Denote ${f}_{x}(r)={x}^{2}-(r-{\lambda }_{1}({\boldsymbol{A}}))({\lambda }_{2}({\boldsymbol{A}})-r)$. Thus ${\partial }_{r}{f}_{x}(r)=2r-{\lambda }_{1}({\boldsymbol{A}})-{\lambda }_{2}({\boldsymbol{A}})$. Then ${f}_{x}(r)=0$ has two distinct roots in $[{\lambda }_{1}({\boldsymbol{A}}),{\lambda }_{2}({\boldsymbol{A}})]$ if and only if $x\in \left[0,\tfrac{{V}_{2}(\lambda ({\boldsymbol{A}}))}{2}\right)$, where ${V}_{2}(\lambda ({\boldsymbol{A}}))={\lambda }_{2}({\boldsymbol{A}})-{\lambda }_{1}({\boldsymbol{A}})$. Now the roots are given by
$\begin{eqnarray*}{r}_{\pm }(x)=\displaystyle \frac{{\lambda }_{1}({\boldsymbol{A}})+{\lambda }_{2}({\boldsymbol{A}})\pm \sqrt{{V}_{2}{\left(\lambda ({\boldsymbol{A}})\right)}^{2}-4{x}^{2}}}{2}.\end{eqnarray*}$
Thus
$\begin{eqnarray*}\delta \left({f}_{x}(r)\right)=\displaystyle \frac{1}{\left|{\partial }_{r={r}_{+}(x)}{f}_{x}(r)\right|}{\delta }_{{r}_{+}(x)}+\displaystyle \frac{1}{\left|{\partial }_{r={r}_{-}(x)}{f}_{x}(r)\right|}{\delta }_{{r}_{-}(x)},\end{eqnarray*}$
implying that
$\begin{eqnarray*}{f}_{{\rm{\Delta }}{\boldsymbol{A}}}^{(2)}(x)=\displaystyle \frac{4x}{{V}_{2}(\lambda ({\boldsymbol{A}}))\sqrt{{V}_{2}{\left(\lambda ({\boldsymbol{A}})\right)}^{2}-4{x}^{2}}}.\end{eqnarray*}$
Now for ${\boldsymbol{A}}={a}_{0}{\mathbb{1}}+{\boldsymbol{a}}\cdot {\boldsymbol{\sigma }}$, we have ${V}_{2}(\lambda ({\boldsymbol{A}}))=2\left|{\boldsymbol{a}}\right|$. Substituting this into the above expression, we get the desired result:
$\begin{eqnarray*}{f}_{{\rm{\Delta }}{\boldsymbol{A}}}^{(2)}(x)=\displaystyle \frac{x}{\left|{\boldsymbol{a}}\right|\sqrt{{\left|{\boldsymbol{a}}\right|}^{2}-{x}^{2}}},\end{eqnarray*}$
where $x\in [0,\left|{\boldsymbol{a}}\right|)$. This completes the proof.
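As a numerical sanity check of this PDF (a sketch assuming NumPy; the choice ${\boldsymbol{A}}={\sigma }_{1}$ and the sample size are illustrative, not from the text), one may sample Haar-distributed pure states as normalized complex Gaussian vectors and compare the empirical mean of ${{\rm{\Delta }}}_{\psi }{\boldsymbol{A}}$ with the first moment of $f$, which evaluates to $\pi \left|{\boldsymbol{a}}\right|/4$:

```python
import numpy as np

rng = np.random.default_rng(7)

# Haar-distributed pure states on C^2: normalized complex Gaussian 2-vectors
n = 200_000
z = rng.normal(size=(n, 2)) + 1j * rng.normal(size=(n, 2))
psi = z / np.linalg.norm(z, axis=1, keepdims=True)

# illustrative observable A = a . sigma with a = (1, 0, 0), i.e. A = sigma_1, |a| = 1
A = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)

exp_A = np.einsum('ni,ij,nj->n', psi.conj(), A, psi).real      # <A>_psi
exp_A2 = np.einsum('ni,ij,nj->n', psi.conj(), A @ A, psi).real  # <A^2>_psi
delta_A = np.sqrt(np.clip(exp_A2 - exp_A ** 2, 0.0, None))      # Delta_psi A

# f(x) = x / (|a| sqrt(|a|^2 - x^2)) on [0, |a|) implies E[Delta A] = pi |a| / 4
print(delta_A.mean())
```

With this sample size the empirical mean agrees with $\pi /4\approx 0.785$ up to sampling error, and no sampled uncertainty exceeds $\left|{\boldsymbol{a}}\right|$, consistent with the support $[0,\left|{\boldsymbol{a}}\right|)$.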

3.2. The case of a pair of qubit observables

Let ${\boldsymbol{A}}={a}_{0}{\mathbb{1}}+{\boldsymbol{a}}\cdot {\boldsymbol{\sigma }}$ and ${\boldsymbol{B}}={b}_{0}{\mathbb{1}}+{\boldsymbol{b}}\cdot {\boldsymbol{\sigma }}$ be two qubit observables. The joint PDF of their expectation values is
$\begin{eqnarray*}\displaystyle \begin{array}{l}{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle }^{(2)}(r,s)=\displaystyle \int \delta (r-\left\langle \psi \left|{\boldsymbol{A}}\right|\psi \right\rangle )\delta \\ \times \,(s-\left\langle \psi \left|{\boldsymbol{B}}\right|\psi \right\rangle ){\rm{d}}\mu (\psi )\\ \quad =\,\displaystyle \frac{1}{{\left(2\pi \right)}^{2}}{\displaystyle \int }_{{{\mathbb{R}}}^{2}}{\rm{d}}\alpha {\rm{d}}\beta \exp \left({\rm{i}}(r\alpha +s\beta )\right)\\ \quad \times \,\displaystyle \int \exp \left(-{\rm{i}}\left\langle \psi \left|\alpha {\boldsymbol{A}}+\beta {\boldsymbol{B}}\right|\psi \right\rangle \right){\rm{d}}\mu (\psi ),\end{array}\end{eqnarray*}$
where
$\begin{eqnarray*}\displaystyle \begin{array}{l}\displaystyle \int \exp \left(-{\rm{i}}\left\langle \psi \left|\alpha {\boldsymbol{A}}+\beta {\boldsymbol{B}}\right|\psi \right\rangle \right){\rm{d}}\mu (\psi )\\ \quad =\,{\displaystyle \int }_{{\lambda }_{-}(\alpha {\boldsymbol{A}}+\beta {\boldsymbol{B}})}^{{\lambda }_{+}(\alpha {\boldsymbol{A}}+\beta {\boldsymbol{B}})}\exp \left(-{\rm{i}}t\right){f}_{2}(t){\rm{d}}t\\ \quad =\,\displaystyle \frac{1}{2\left|\alpha {\boldsymbol{a}}+\beta {\boldsymbol{b}}\right|}{\displaystyle \int }_{{\lambda }_{-}(\alpha {\boldsymbol{A}}+\beta {\boldsymbol{B}})}^{{\lambda }_{+}(\alpha {\boldsymbol{A}}+\beta {\boldsymbol{B}})}\exp \left(-{\rm{i}}t\right){\rm{d}}t\\ \quad =\,\exp \left(-{\rm{i}}({a}_{0}\alpha +{b}_{0}\beta )\right)\displaystyle \frac{\sin \left|\alpha {\boldsymbol{a}}+\beta {\boldsymbol{b}}\right|}{\left|\alpha {\boldsymbol{a}}+\beta {\boldsymbol{b}}\right|},\end{array}\end{eqnarray*}$
for
$\begin{eqnarray*}{\lambda }_{\pm }(\alpha {\boldsymbol{A}}+\beta {\boldsymbol{B}})=\alpha {a}_{0}+\beta {b}_{0}\pm \left|\alpha {\boldsymbol{a}}+\beta {\boldsymbol{b}}\right|.\end{eqnarray*}$
Since
$\begin{eqnarray*}{f}_{\langle \alpha {\boldsymbol{A}}+\beta {\boldsymbol{B}}\rangle }^{(2)}(r)=\displaystyle \frac{1}{{\lambda }_{2}-{\lambda }_{1}}(H(r-{\lambda }_{1})-H(r-{\lambda }_{2})),\end{eqnarray*}$
it follows that
$\begin{eqnarray*}\displaystyle \begin{array}{l}{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle }^{(2)}(r,s)=\displaystyle \frac{1}{{\left(2\pi \right)}^{2}}{\displaystyle \int }_{{{\mathbb{R}}}^{2}}{\rm{d}}\alpha {\rm{d}}\beta \\ \quad \exp \left({\rm{i}}((r-{a}_{0})\alpha +(s-{b}_{0})\beta )\right)\displaystyle \frac{\sin \left|\alpha {\boldsymbol{a}}+\beta {\boldsymbol{b}}\right|}{\left|\alpha {\boldsymbol{a}}+\beta {\boldsymbol{b}}\right|}.\end{array}\end{eqnarray*}$

(i) If $\left\{{\boldsymbol{a}},{\boldsymbol{b}}\right\}$ is linearly independent, then the Gram matrix ${{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}}}$ defined below is invertible, and we may change variables as

$\begin{eqnarray*}\displaystyle \begin{array}{rcl}\left(\begin{array}{c}\tilde{\alpha }\\ \tilde{\beta }\end{array}\right) & = & {{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}}}^{\tfrac{1}{2}}\left(\begin{array}{c}\alpha \\ \beta \end{array}\right),\\ \mathrm{where}\,{{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}}} & = & \left(\begin{array}{cc}\left\langle {\boldsymbol{a}},{\boldsymbol{a}}\right\rangle & \left\langle {\boldsymbol{a}},{\boldsymbol{b}}\right\rangle \\ \left\langle {\boldsymbol{a}},{\boldsymbol{b}}\right\rangle & \left\langle {\boldsymbol{b}},{\boldsymbol{b}}\right\rangle \end{array}\right).\end{array}\end{eqnarray*}$
Thus we see that
$\begin{eqnarray*}\displaystyle \begin{array}{l}{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle }^{(2)}(r,s)=\displaystyle \frac{1}{{\left(2\pi \right)}^{2}\sqrt{\det ({{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}}})}}{\displaystyle \int }_{{{\mathbb{R}}}^{2}}{\rm{d}}\tilde{\alpha }{\rm{d}}\tilde{\beta }\\ \,\times \,\exp \left({\rm{i}}((\tilde{r}-{\tilde{a}}_{0})\tilde{\alpha }+(\tilde{s}-{\tilde{b}}_{0})\tilde{\beta })\right)\displaystyle \frac{\sin \sqrt{{\tilde{\alpha }}^{2}+{\tilde{\beta }}^{2}}}{\sqrt{{\tilde{\alpha }}^{2}+{\tilde{\beta }}^{2}}}\\ \quad =\,\displaystyle \frac{1}{{\left(2\pi \right)}^{2}\sqrt{\det ({{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}}})}}{\displaystyle \int }_{0}^{\infty }{\rm{d}}t\sin t{\displaystyle \int }_{0}^{2\pi }{\rm{d}}\theta \\ \,\times \,\exp \left({\rm{i}}t((\tilde{r}-{\tilde{a}}_{0})\cos \theta +(\tilde{s}-{\tilde{b}}_{0})\sin \theta )\right)\\ \quad =\,\displaystyle \frac{1}{2\pi \sqrt{\det ({{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}}})}}{\displaystyle \int }_{0}^{\infty }{\rm{d}}t\sin t\,{J}_{0}\left(t\sqrt{{\left(\tilde{r}-{\tilde{a}}_{0}\right)}^{2}+{\left(\tilde{s}-{\tilde{b}}_{0}\right)}^{2}}\right),\end{array}\end{eqnarray*}$
where we used polar coordinates $(\tilde{\alpha },\tilde{\beta })=(t\cos \theta ,t\sin \theta )$ and set $(\tilde{r}-{\tilde{a}}_{0},\tilde{s}-{\tilde{b}}_{0}):=(r-{a}_{0},s-{b}_{0}){{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}}}^{-\tfrac{1}{2}}$. Here ${J}_{0}(z)$ is the Bessel function of the first kind of order zero, defined by
$\begin{eqnarray*}{J}_{0}(z)=\displaystyle \frac{1}{\pi }{\int }_{0}^{\pi }\cos (z\cos \theta ){\rm{d}}\theta .\end{eqnarray*}$
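This integral representation can be cross-checked against the standard power series ${J}_{0}(z)={\sum }_{k\geqslant 0}{(-1)}^{k}{(z/2)}^{2k}/{(k!)}^{2}$ (a known fact, not derived in the text); a minimal sketch assuming NumPy:

```python
import numpy as np

def j0_integral(z, n=200_001):
    # J0(z) = (1/pi) * integral_0^pi cos(z cos(theta)) d(theta), trapezoid rule
    theta = np.linspace(0.0, np.pi, n)
    vals = np.cos(z * np.cos(theta))
    h = theta[1] - theta[0]
    return h * (vals.sum() - 0.5 * (vals[0] + vals[-1])) / np.pi

def j0_series(z, terms=40):
    # power series J0(z) = sum_k (-1)^k (z/2)^(2k) / (k!)^2
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= -(z / 2.0) ** 2 / (k + 1) ** 2
    return total

for z in (0.0, 0.5, 1.0, 3.7):
    print(z, j0_integral(z), j0_series(z))
```

The two evaluations agree to high precision at the sampled points.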
Therefore
$\begin{eqnarray*}{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle }^{(2)}(r,s)=\displaystyle \frac{1}{2\pi \sqrt{\det ({{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}}})}}{\displaystyle \int }_{0}^{+\infty }{\rm{d}}t\sin t\,{J}_{0}\left(t\sqrt{(r-{a}_{0},s-{b}_{0}){{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}}}^{-1}\left(\begin{array}{c}r-{a}_{0}\\ s-{b}_{0}\end{array}\right)}\right).\end{eqnarray*}$
By using the identity
$\begin{eqnarray*}{\int }_{0}^{\infty }{J}_{0}(\lambda t)\sin (t){\rm{d}}t=\displaystyle \frac{1}{\sqrt{1-{\lambda }^{2}}}H(1-\left|\lambda \right|).\end{eqnarray*}$
we arrive at
$\begin{eqnarray*}{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle }^{(2)}(r,s)=\displaystyle \frac{H(1-{\omega }_{{\boldsymbol{A}},{\boldsymbol{B}}}(r,s))}{2\pi \sqrt{\det ({{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}}})(1-{\omega }_{{\boldsymbol{A}},{\boldsymbol{B}}}^{2}(r,s))}},\end{eqnarray*}$
where
$\begin{eqnarray}{\omega }_{{\boldsymbol{A}},{\boldsymbol{B}}}(r,s)=\sqrt{(r-{a}_{0},s-{b}_{0}){{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}}}^{-1}\left(\begin{array}{c}r-{a}_{0}\\ s-{b}_{0}\end{array}\right)}.\end{eqnarray}$
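Two features of this result can be probed numerically: since Bloch vectors of Haar-random pure qubit states are uniform on the unit sphere, $(r-{a}_{0},s-{b}_{0})=(\left\langle {\boldsymbol{u}},{\boldsymbol{a}}\right\rangle ,\left\langle {\boldsymbol{u}},{\boldsymbol{b}}\right\rangle )$ and ${\omega }_{{\boldsymbol{A}},{\boldsymbol{B}}}$ never exceeds 1 on samples, while values arbitrarily close to 1 do occur. A sketch assuming NumPy (the vectors ${\boldsymbol{a}},{\boldsymbol{b}}$ and offsets are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

a0, b0 = 0.2, -0.5
a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 2.0, 0.0])                   # {a, b} linearly independent

T = np.array([[a @ a, a @ b], [a @ b, b @ b]])  # Gram matrix T_{a,b}
Tinv = np.linalg.inv(T)

# Bloch vectors of Haar-random pure qubit states: uniform on the unit sphere
u = rng.normal(size=(100_000, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)

r = a0 + u @ a                                  # <A>_psi
s = b0 + u @ b                                  # <B>_psi
x = np.stack([r - a0, s - b0], axis=1)
omega = np.sqrt(np.einsum('ni,ij,nj->n', x, Tinv, x))

print(omega.max())                              # density is supported on omega <= 1
```

The observed maximum approaches 1 from below, matching the Heaviside cutoff $H(1-{\omega }_{{\boldsymbol{A}},{\boldsymbol{B}}}(r,s))$.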


(ii) If $\left\{{\boldsymbol{a}},{\boldsymbol{b}}\right\}$ is linearly dependent, we may assume without loss of generality that ${\boldsymbol{b}}=\kappa \cdot {\boldsymbol{a}}$ for some $\kappa \ne 0$; then

$\begin{eqnarray*}\displaystyle \begin{array}{l}{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle }^{(2)}(r,s)=\displaystyle \frac{1}{{\left(2\pi \right)}^{2}}{\displaystyle \int }_{{{\mathbb{R}}}^{2}}{\rm{d}}\alpha {\rm{d}}\beta \\ \quad \times \,\exp \left({\rm{i}}((r-{a}_{0})\alpha +(s-{b}_{0})\beta )\right)\\ \quad \times \,\displaystyle \frac{\sin (a\left|\alpha +\beta \kappa \right|)}{a\left|\alpha +\beta \kappa \right|}.\end{array}\end{eqnarray*}$
Here $a=\left|{\boldsymbol{a}}\right|$. We perform the change of variables $(\alpha ,\beta )\to (\alpha ^{\prime} ,\beta ^{\prime} )$, where $\alpha ^{\prime} =\alpha +\kappa \beta $ and $\beta ^{\prime} =\beta $, whose Jacobian is
$\begin{eqnarray*}\det \left(\displaystyle \frac{\partial (\alpha ^{\prime} ,\beta ^{\prime} )}{\partial (\alpha ,\beta )}\right)=\left|\begin{array}{cc}1 & \kappa \\ 0 & 1\end{array}\right|=1\ne 0.\end{eqnarray*}$

Thus
$\begin{eqnarray*}\displaystyle \begin{array}{l}{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle }^{(2)}(r,s)=\displaystyle \frac{1}{{\left(2\pi \right)}^{2}}\iint {\rm{d}}\alpha ^{\prime} {\rm{d}}\beta ^{\prime} \exp ({\rm{i}}((r-{a}_{0})(\alpha ^{\prime} -\kappa \beta ^{\prime} )\\ +\,(s-{b}_{0})\beta ^{\prime} ))\displaystyle \frac{\sin (a\left|\alpha ^{\prime} \right|)}{a\left|\alpha ^{\prime} \right|}\\ \quad =\,\displaystyle \frac{1}{2\pi }\displaystyle \int \exp \left({\rm{i}}((s-{b}_{0})-\kappa (r-{a}_{0}))\beta ^{\prime} \right){\rm{d}}\beta ^{\prime} \\ \quad \times \,\displaystyle \frac{1}{2\pi }\displaystyle \int {\rm{d}}\alpha ^{\prime} \exp \left({\rm{i}}(r-{a}_{0})\alpha ^{\prime} \right)\displaystyle \frac{\sin (a\left|\alpha ^{\prime} \right|)}{a\left|\alpha ^{\prime} \right|}\\ \quad =\,\delta ((s-{b}_{0})-\kappa (r-{a}_{0})){f}_{\langle {\boldsymbol{A}}\rangle }^{(2)}(r).\end{array}\end{eqnarray*}$

Proposition 5. For a pair of qubit observables ${\boldsymbol{A}}={a}_{0}{\mathbb{1}}+{\boldsymbol{a}}\cdot {\boldsymbol{\sigma }}$ and ${\boldsymbol{B}}={b}_{0}{\mathbb{1}}+{\boldsymbol{b}}\cdot {\boldsymbol{\sigma }}$: (i) if $\left\{{\boldsymbol{a}},{\boldsymbol{b}}\right\}$ is linearly independent, then the PDF of $(\langle {\boldsymbol{A}}{\rangle }_{\psi },\langle {\boldsymbol{B}}{\rangle }_{\psi })$, where $\psi \in {{\mathbb{C}}}^{2}$ is a Haar-distributed pure state, is given by

$\begin{eqnarray*}{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle }^{(2)}(r,s)=\displaystyle \frac{H(1-{\omega }_{{\boldsymbol{A}},{\boldsymbol{B}}}(r,s))}{2\pi \sqrt{\det ({{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}}})(1-{\omega }_{{\boldsymbol{A}},{\boldsymbol{B}}}^{2}(r,s))}}.\end{eqnarray*}$

(ii) If $\left\{{\boldsymbol{a}},{\boldsymbol{b}}\right\}$ is linearly dependent, let ${\boldsymbol{b}}=\kappa \cdot {\boldsymbol{a}}$ for some $\kappa \ne 0$ without loss of generality; then

$\begin{eqnarray*}{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle }^{(2)}(r,s)=\delta ((s-{b}_{0})-\kappa (r-{a}_{0})){f}_{\langle {\boldsymbol{A}}\rangle }^{(2)}(r).\end{eqnarray*}$
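The delta factor in case (ii) says the joint distribution collapses onto a line. Sampling Bloch vectors confirms that $(\langle {\boldsymbol{A}}{\rangle }_{\psi },\langle {\boldsymbol{B}}{\rangle }_{\psi })$ satisfies the linear constraint exactly; a sketch assuming NumPy (the values of ${\boldsymbol{a}}$, $\kappa $ and the offsets are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

a0, b0, kappa = 0.1, 0.3, -2.0
a = np.array([0.6, 0.8, 0.0])
b = kappa * a                        # {a, b} linearly dependent

u = rng.normal(size=(50_000, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)

r = a0 + u @ a                       # <A>_psi
s = b0 + u @ b                       # <B>_psi

# all mass sits on the line (s - b0) = kappa (r - a0), as the delta factor asserts
print(np.abs((s - b0) - kappa * (r - a0)).max())
```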

From proposition 5, we can directly infer the results obtained in [36, 37].
We now turn to a pair of qubit observables
$\begin{eqnarray}\displaystyle \begin{array}{l}{\boldsymbol{A}}={a}_{0}{\mathbb{1}}+{\boldsymbol{a}}\cdot {\boldsymbol{\sigma }},\quad {\boldsymbol{B}}={b}_{0}{\mathbb{1}}+{\boldsymbol{b}}\cdot {\boldsymbol{\sigma }},\\ \quad ({a}_{0},{\boldsymbol{a}}),({b}_{0},{\boldsymbol{b}})\in {{\mathbb{R}}}^{4},\end{array}\end{eqnarray}$
whose uncertainty region
$\begin{eqnarray}{{ \mathcal U }}_{{\rm{\Delta }}{\boldsymbol{A}},{\rm{\Delta }}{\boldsymbol{B}}}:=\{({{\rm{\Delta }}}_{\psi }{\boldsymbol{A}},{{\rm{\Delta }}}_{\psi }{\boldsymbol{B}})\in {{\mathbb{R}}}_{+}^{2}:\left|\psi \right\rangle \in {{\mathbb{C}}}^{2}\}\end{eqnarray}$
was proposed by Busch and Reardon-Smith [20] in the mixed state case. We consider the probability distribution density
$\begin{eqnarray*}{f}_{{\rm{\Delta }}{\boldsymbol{A}},{\rm{\Delta }}{\boldsymbol{B}}}^{(2)}(x,y):=\int \delta (x-{{\rm{\Delta }}}_{\psi }{\boldsymbol{A}})\delta (y-{{\rm{\Delta }}}_{\psi }{\boldsymbol{B}}){\rm{d}}\mu (\psi ),\end{eqnarray*}$
on the uncertainty region defined by equation (10). Denote
$\begin{eqnarray*}{{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}}}:=\left(\begin{array}{cc}\left\langle {\boldsymbol{a}},{\boldsymbol{a}}\right\rangle & \left\langle {\boldsymbol{a}},{\boldsymbol{b}}\right\rangle \\ \left\langle {\boldsymbol{b}},{\boldsymbol{a}}\right\rangle & \left\langle {\boldsymbol{b}},{\boldsymbol{b}}\right\rangle \end{array}\right).\end{eqnarray*}$

The joint probability distribution density of the uncertainties $({{\rm{\Delta }}}_{\psi }{\boldsymbol{A}},{{\rm{\Delta }}}_{\psi }{\boldsymbol{B}})$ for a pair of qubit observables defined by equation (9), where ψ is a Haar-distributed random pure state on ${{\mathbb{C}}}^{2}$, is given by

$\begin{eqnarray*}{f}_{{\rm{\Delta }}{\boldsymbol{A}},{\rm{\Delta }}{\boldsymbol{B}}}^{(2)}(x,y)=\displaystyle \frac{2{xy}{\sum }_{j\in \{\pm \}}{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle }^{(2)}({r}_{+}(x),{s}_{j}(y))}{\sqrt{({a}^{2}-{x}^{2})({b}^{2}-{y}^{2})}},\end{eqnarray*}$
where $a=\left|{\boldsymbol{a}}\right|\gt 0,b=\left|{\boldsymbol{b}}\right|\gt 0$, ${r}_{\pm }(x)={a}_{0}\pm \sqrt{{a}^{2}-{x}^{2}}$, ${s}_{\pm }(y)\,=\,{b}_{0}\pm \sqrt{{b}^{2}-{y}^{2}}$.

Note that in the proof of theorem 1, we have already obtained that

$\begin{eqnarray*}\delta ({x}^{2}-{{\rm{\Delta }}}_{\psi }^{2}{\boldsymbol{A}})=\delta \left({x}^{2}-(r-{\lambda }_{1}({\boldsymbol{A}}))({\lambda }_{2}({\boldsymbol{A}})-r)\right)=\delta ({f}_{x}(r)),\end{eqnarray*}$
where ${f}_{x}(r):={x}^{2}-(r-{\lambda }_{1}({\boldsymbol{A}}))({\lambda }_{2}({\boldsymbol{A}})-r)$. Similarly,
$\begin{eqnarray*}\delta ({y}^{2}-{{\rm{\Delta }}}_{\psi }^{2}{\boldsymbol{B}})=\delta ({g}_{y}(s)),\end{eqnarray*}$
where ${g}_{y}(s)={y}^{2}-(s-{\lambda }_{1}({\boldsymbol{B}}))({\lambda }_{2}({\boldsymbol{B}})-s)$.

Again, by using (7), we get that

$\begin{eqnarray*}\displaystyle \begin{array}{rcl}{f}_{{\rm{\Delta }}{\boldsymbol{A}},{\rm{\Delta }}{\boldsymbol{B}}}^{(2)}(x,y) & = & 4{xy}\displaystyle \int \delta ({x}^{2}-{{\rm{\Delta }}}_{\psi }^{2}{\boldsymbol{A}})\delta ({y}^{2}-{{\rm{\Delta }}}_{\psi }^{2}{\boldsymbol{B}}){\rm{d}}\mu (\psi )\\ & = & 4{xy}\iint {\rm{d}}r\,{\rm{d}}s\,{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle }^{(2)}(r,s)\delta ({f}_{x}(r))\delta ({g}_{y}(s)),\end{array}\end{eqnarray*}$
where ${f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle }^{(2)}(r,s)$ is determined by proposition 5. As in the proof of theorem 1,
$\begin{eqnarray*}\displaystyle \begin{array}{rcl}\delta \left({f}_{x}(r)\right) & = & \displaystyle \frac{1}{\left|{\partial }_{r={r}_{+}(x)}{f}_{x}(r)\right|}{\delta }_{{r}_{+}(x)}\\ & & +\displaystyle \frac{1}{\left|{\partial }_{r={r}_{-}(x)}{f}_{x}(r)\right|}{\delta }_{{r}_{-}(x)},\\ \delta \left({g}_{y}(s)\right) & = & \displaystyle \frac{1}{\left|{\partial }_{s={s}_{+}(y)}{g}_{y}(s)\right|}{\delta }_{{s}_{+}(y)}\\ & & +\displaystyle \frac{1}{\left|{\partial }_{s={s}_{-}(y)}{g}_{y}(s)\right|}{\delta }_{{s}_{-}(y)}.\end{array}\end{eqnarray*}$
Since $\left|{\partial }_{r={r}_{\pm }(x)}{f}_{x}(r)\right|=2\sqrt{{a}^{2}-{x}^{2}}$ and $\left|{\partial }_{s={s}_{\pm }(y)}{g}_{y}(s)\right|=2\sqrt{{b}^{2}-{y}^{2}}$, we obtain
$\begin{eqnarray*}\displaystyle \begin{array}{l}\delta ({f}_{x}(r))\delta ({g}_{y}(s))\\ \quad =\,\displaystyle \frac{{\delta }_{({r}_{+},{s}_{+})}+{\delta }_{({r}_{+},{s}_{-})}+{\delta }_{({r}_{-},{s}_{+})}+{\delta }_{({r}_{-},{s}_{-})}}{4\sqrt{({a}^{2}-{x}^{2})({b}^{2}-{y}^{2})}}.\end{array}\end{eqnarray*}$
Based on this observation, we get that
$\begin{eqnarray*}{f}_{{\rm{\Delta }}{\boldsymbol{A}},{\rm{\Delta }}{\boldsymbol{B}}}^{(2)}(x,y)=\displaystyle \frac{{xy}}{\sqrt{({a}^{2}-{x}^{2})({b}^{2}-{y}^{2})}}\sum _{i,j\in \{\pm \}}{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle }^{(2)}({r}_{i}(x),{s}_{j}(y)).\end{eqnarray*}$
It is easily checked that ${\omega }_{{\boldsymbol{A}},{\boldsymbol{B}}}(\cdot ,\cdot )$, defined in (8), satisfies that
$\begin{eqnarray*}\displaystyle \begin{array}{l}{\omega }_{{\boldsymbol{A}},{\boldsymbol{B}}}({r}_{+}(x),{s}_{+}(y))={\omega }_{{\boldsymbol{A}},{\boldsymbol{B}}}({r}_{-}(x),{s}_{-}(y)),\\ \quad {\omega }_{{\boldsymbol{A}},{\boldsymbol{B}}}({r}_{+}(x),{s}_{-}(y))={\omega }_{{\boldsymbol{A}},{\boldsymbol{B}}}({r}_{-}(x),{s}_{+}(y)).\end{array}\end{eqnarray*}$
It follows that
$\begin{eqnarray*}\sum _{i,j\in \{\pm \}}{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle }^{(2)}({r}_{i}(x),{s}_{j}(y))=2\sum _{j\in \{\pm \}}{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle }^{(2)}({r}_{+}(x),{s}_{j}(y)).\end{eqnarray*}$
Therefore
$\begin{eqnarray*}{f}_{{\rm{\Delta }}{\boldsymbol{A}},{\rm{\Delta }}{\boldsymbol{B}}}^{(2)}(x,y)=\displaystyle \frac{2{xy}{\sum }_{j\in \{\pm \}}{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle }^{(2)}({r}_{+}(x),{s}_{j}(y))}{\sqrt{({a}^{2}-{x}^{2})({b}^{2}-{y}^{2})}}.\end{eqnarray*}$
We get the desired result.
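For the illustrative pair ${\boldsymbol{A}}={\sigma }_{1}$, ${\boldsymbol{B}}={\sigma }_{2}$ (an assumption made here for concreteness), the support of this joint density can be explored by sampling. Using ${{\rm{\Delta }}}_{\psi }^{2}{\boldsymbol{A}}=(\langle {\boldsymbol{A}}{\rangle }_{\psi }-{\lambda }_{1})({\lambda }_{2}-\langle {\boldsymbol{A}}{\rangle }_{\psi })={\left|{\boldsymbol{a}}\right|}^{2}-{\left\langle {\boldsymbol{u}},{\boldsymbol{a}}\right\rangle }^{2}$ in the Bloch representation, one finds ${{\rm{\Delta }}}_{\psi }^{2}{\boldsymbol{A}}+{{\rm{\Delta }}}_{\psi }^{2}{\boldsymbol{B}}=1+{u}_{3}^{2}\geqslant 1$, so the uncertainty region excludes the interior of the unit disc. A sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(3)

# A = sigma_1, B = sigma_2 (a = e1, b = e2, a0 = b0 = 0), so |a| = |b| = 1
u = rng.normal(size=(100_000, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)

# for a qubit, Delta_psi^2 A = |a|^2 - <u, a>^2
dA2 = 1.0 - u[:, 0] ** 2
dB2 = 1.0 - u[:, 1] ** 2

print((dA2 + dB2).min())   # sampled uncertainty pairs avoid dA^2 + dB^2 < 1
```

This is consistent with the support of ${f}_{{\rm{\Delta }}{\boldsymbol{A}},{\rm{\Delta }}{\boldsymbol{B}}}^{(2)}$: the factor ${f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle }^{(2)}$ vanishes wherever ${\omega }_{{\boldsymbol{A}},{\boldsymbol{B}}}\gt 1$.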

3.3. The case of a triple of qubit observables

We now turn to the case of three qubit observables
$\begin{eqnarray}\displaystyle \begin{array}{rcl}{\boldsymbol{A}} & = & {a}_{0}{\mathbb{1}}+{\boldsymbol{a}}\cdot {\boldsymbol{\sigma }},\quad {\boldsymbol{B}}={b}_{0}{\mathbb{1}}+{\boldsymbol{b}}\cdot {\boldsymbol{\sigma }},\\ {\boldsymbol{C}} & = & {c}_{0}{\mathbb{1}}+{\boldsymbol{c}}\cdot {\boldsymbol{\sigma }}\quad ({a}_{0},{\boldsymbol{a}}),({b}_{0},{\boldsymbol{b}}),({c}_{0},{\boldsymbol{c}})\in {{\mathbb{R}}}^{4},\end{array}\end{eqnarray}$
whose uncertainty region
$\begin{eqnarray}{{ \mathcal U }}_{{\rm{\Delta }}{\boldsymbol{A}},{\rm{\Delta }}{\boldsymbol{B}},{\rm{\Delta }}{\boldsymbol{C}}}:=\{({{\rm{\Delta }}}_{\psi }{\boldsymbol{A}},{{\rm{\Delta }}}_{\psi }{\boldsymbol{B}},{{\rm{\Delta }}}_{\psi }{\boldsymbol{C}})\in {{\mathbb{R}}}_{+}^{3}:\left|\psi \right\rangle \in {{\mathbb{C}}}^{2}\}.\end{eqnarray}$
We define the probability distribution density
$\begin{eqnarray*}\displaystyle \begin{array}{l}{f}_{{\rm{\Delta }}{\boldsymbol{A}},{\rm{\Delta }}{\boldsymbol{B}},{\rm{\Delta }}{\boldsymbol{C}}}^{(2)}(x,y,z)\\ \quad :=\displaystyle \int \delta (x-{{\rm{\Delta }}}_{\psi }{\boldsymbol{A}})\delta (y-{{\rm{\Delta }}}_{\psi }{\boldsymbol{B}})\delta (z-{{\rm{\Delta }}}_{\psi }{\boldsymbol{C}}){\rm{d}}\mu (\psi ),\end{array}\end{eqnarray*}$
on the uncertainty region defined by equation (12). Denote
$\begin{eqnarray*}{{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}}:=\left(\begin{array}{ccc}\left\langle {\boldsymbol{a}},{\boldsymbol{a}}\right\rangle & \left\langle {\boldsymbol{a}},{\boldsymbol{b}}\right\rangle & \left\langle {\boldsymbol{a}},{\boldsymbol{c}}\right\rangle \\ \left\langle {\boldsymbol{b}},{\boldsymbol{a}}\right\rangle & \left\langle {\boldsymbol{b}},{\boldsymbol{b}}\right\rangle & \left\langle {\boldsymbol{b}},{\boldsymbol{c}}\right\rangle \\ \left\langle {\boldsymbol{c}},{\boldsymbol{a}}\right\rangle & \left\langle {\boldsymbol{c}},{\boldsymbol{b}}\right\rangle & \left\langle {\boldsymbol{c}},{\boldsymbol{c}}\right\rangle \end{array}\right).\end{eqnarray*}$
Again note that ${{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}}$ is positive semidefinite with $\mathrm{rank}({{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}})\leqslant 3$; three cases are possible: $\mathrm{rank}({{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}})=1,2,3$. Moreover, ${{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}}$ is invertible (i.e. $\mathrm{rank}({{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}})=3$) if and only if $\left\{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}\right\}$ is linearly independent. In this case, we write
$\begin{eqnarray*}\displaystyle \begin{array}{l}{\omega }_{{\boldsymbol{A}},{\boldsymbol{B}},{\boldsymbol{C}}}(r,s,t)\\ \quad :=\sqrt{(r-{a}_{0},s-{b}_{0},t-{c}_{0}){{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}}^{-1}{\left(r-{a}_{0},s-{b}_{0},t-{c}_{0}\right)}^{{\mathsf{T}}}}.\end{array}\end{eqnarray*}$
In order to calculate ${f}_{{\rm{\Delta }}{\boldsymbol{A}},{\rm{\Delta }}{\boldsymbol{B}},{\rm{\Delta }}{\boldsymbol{C}}}^{(2)}$, we essentially need to derive the joint probability distribution density of $(\langle {\boldsymbol{A}}{\rangle }_{\psi },\langle {\boldsymbol{B}}{\rangle }_{\psi },\langle {\boldsymbol{C}}{\rangle }_{\psi })$, which is defined by
$\begin{eqnarray*}\displaystyle \begin{array}{l}{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle ,\langle {\boldsymbol{C}}\rangle }^{(2)}(r,s,t)\\ \quad :=\displaystyle \int \delta (r-\langle {\boldsymbol{A}}{\rangle }_{\psi })\delta (s-\langle {\boldsymbol{B}}{\rangle }_{\psi })\delta (t-\langle {\boldsymbol{C}}{\rangle }_{\psi }){\rm{d}}\mu (\psi ).\end{array}\end{eqnarray*}$
We have the following result:

For the three qubit observables given by equation (11): (i) if $\mathrm{rank}({{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}})=3$, i.e. $\left\{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}\right\}$ is linearly independent, then the joint probability distribution density of $(\langle {\boldsymbol{A}}{\rangle }_{\psi },\langle {\boldsymbol{B}}{\rangle }_{\psi },\langle {\boldsymbol{C}}{\rangle }_{\psi })$, where ψ is a Haar-distributed random pure state on ${{\mathbb{C}}}^{2}$, is given by the following:

$\begin{eqnarray}\displaystyle \begin{array}{l}{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle ,\langle {\boldsymbol{C}}\rangle }^{(2)}(r,s,t)\\ \quad =\,\displaystyle \frac{1}{4\pi \sqrt{\det ({{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}})}}\delta (1-{\omega }_{{\boldsymbol{A}},{\boldsymbol{B}},{\boldsymbol{C}}}(r,s,t)).\end{array}\end{eqnarray}$

(ii) If $\mathrm{rank}({{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}})=2$, i.e. $\left\{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}\right\}$ is linearly dependent, without loss of generality, we assume that $\left\{{\boldsymbol{a}},{\boldsymbol{b}}\right\}$ is linearly independent and ${\boldsymbol{c}}={\kappa }_{{\boldsymbol{a}}}\cdot {\boldsymbol{a}}+{\kappa }_{{\boldsymbol{b}}}\cdot {\boldsymbol{b}}$ for some ${\kappa }_{{\boldsymbol{a}}}$ and ${\kappa }_{{\boldsymbol{b}}}$ with ${\kappa }_{{\boldsymbol{a}}}{\kappa }_{{\boldsymbol{b}}}\ne 0$, then

$\begin{eqnarray*}\displaystyle \begin{array}{l}{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle ,\langle {\boldsymbol{C}}\rangle }^{(2)}(r,s,t)=\delta ((t-{c}_{0})\\ \quad -\,{\kappa }_{{\boldsymbol{a}}}(r-{a}_{0})-{\kappa }_{{\boldsymbol{b}}}(s-{b}_{0})){f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle }^{(2)}(r,s).\end{array}\end{eqnarray*}$


(iii) If $\mathrm{rank}({{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}})=1$, i.e. $\left\{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}\right\}$ is linearly dependent, without loss of generality, we assume that ${\boldsymbol{a}}\ne 0$ and ${\boldsymbol{b}}={\kappa }_{{\boldsymbol{b}}{\boldsymbol{a}}}\cdot {\boldsymbol{a}},{\boldsymbol{c}}={\kappa }_{{\boldsymbol{c}}{\boldsymbol{a}}}\cdot {\boldsymbol{a}}$ for some ${\kappa }_{{\boldsymbol{b}}{\boldsymbol{a}}}$ and ${\kappa }_{{\boldsymbol{c}}{\boldsymbol{a}}}$ with ${\kappa }_{{\boldsymbol{b}}{\boldsymbol{a}}}{\kappa }_{{\boldsymbol{c}}{\boldsymbol{a}}}\ne 0$, then

$\begin{eqnarray*}\displaystyle \begin{array}{l}{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle ,\langle {\boldsymbol{C}}\rangle }^{(2)}(r,s,t)=\delta ((s-{b}_{0})\\ \quad -\,{\kappa }_{{\boldsymbol{b}}{\boldsymbol{a}}}(r-{a}_{0}))\delta ((t-{c}_{0})-{\kappa }_{{\boldsymbol{c}}{\boldsymbol{a}}}(r-{a}_{0})){f}_{\langle {\boldsymbol{A}}\rangle }^{(2)}(r).\end{array}\end{eqnarray*}$


(i) If $\mathrm{rank}({{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}})=3$, then ${{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}}$ is invertible. By using the Bloch representation, $| \psi \rangle \langle \psi | =\tfrac{1}{2}({{\mathbb{1}}}_{2}+{\boldsymbol{u}}\cdot {\boldsymbol{\sigma }})$, where $\left|{\boldsymbol{u}}\right|=1$. Then for $(r,s,t)=(\langle {\boldsymbol{A}}{\rangle }_{\psi },\langle {\boldsymbol{B}}{\rangle }_{\psi },\langle {\boldsymbol{C}}{\rangle }_{\psi })$ $=\,({a}_{0}+\left\langle {\boldsymbol{u}},{\boldsymbol{a}}\right\rangle ,{b}_{0}+\left\langle {\boldsymbol{u}},{\boldsymbol{b}}\right\rangle ,{c}_{0}+\left\langle {\boldsymbol{u}},{\boldsymbol{c}}\right\rangle )$, we see that

$\begin{eqnarray*}(r-{a}_{0},s-{b}_{0},t-{c}_{0})=(\left\langle {\boldsymbol{u}},{\boldsymbol{a}}\right\rangle ,\left\langle {\boldsymbol{u}},{\boldsymbol{b}}\right\rangle ,\left\langle {\boldsymbol{u}},{\boldsymbol{c}}\right\rangle ).\end{eqnarray*}$
Denote ${\boldsymbol{Q}}:=({\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}})$, which is a 3 × 3 invertible real matrix due to the fact that $\left\{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}\right\}$ is linearly independent. Then ${{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}}={{\boldsymbol{Q}}}^{{\mathsf{T}}}{\boldsymbol{Q}}$ and $(r-{a}_{0},s-{b}_{0},t-{c}_{0})=\left\langle {\boldsymbol{u}}\right|{\boldsymbol{Q}}$, which means that
$\begin{eqnarray*}{\omega }_{{\boldsymbol{A}},{\boldsymbol{B}},{\boldsymbol{C}}}(r,s,t)=\sqrt{\left\langle {\boldsymbol{u}}\right|{\boldsymbol{Q}}{\left({{\boldsymbol{Q}}}^{{\mathsf{T}}}{\boldsymbol{Q}}\right)}^{-1}{{\boldsymbol{Q}}}^{{\mathsf{T}}}\left|{\boldsymbol{u}}\right\rangle }=\left|{\boldsymbol{u}}\right|=1.\end{eqnarray*}$
This shows the interesting fact that $(\langle {\boldsymbol{A}}{\rangle }_{\psi },\langle {\boldsymbol{B}}{\rangle }_{\psi },\langle {\boldsymbol{C}}{\rangle }_{\psi })$ lies on the boundary of the ellipsoid ${\omega }_{{\boldsymbol{A}},{\boldsymbol{B}},{\boldsymbol{C}}}(r,s,t)\leqslant 1$, i.e. ${\omega }_{{\boldsymbol{A}},{\boldsymbol{B}},{\boldsymbol{C}}}(r,s,t)=1$. It follows that the PDF of $(\langle {\boldsymbol{A}}{\rangle }_{\psi },\langle {\boldsymbol{B}}{\rangle }_{\psi },\langle {\boldsymbol{C}}{\rangle }_{\psi })$ satisfies
$\begin{eqnarray*}{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle ,\langle {\boldsymbol{C}}\rangle }^{(2)}(r,s,t)\propto \delta (1-{\omega }_{{\boldsymbol{A}},{\boldsymbol{B}},{\boldsymbol{C}}}(r,s,t)).\end{eqnarray*}$
Next, we calculate the following integral:
$\begin{eqnarray*}\displaystyle \begin{array}{l}{\displaystyle \int }_{{{\mathbb{R}}}^{3}}\delta (1-{\omega }_{{\boldsymbol{A}},{\boldsymbol{B}},{\boldsymbol{C}}}(r,s,t)){\rm{d}}r{\rm{d}}s{\rm{d}}t\\ \quad =\,4\pi \sqrt{\det ({{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}})}.\end{array}\end{eqnarray*}$
Indeed,
$\begin{eqnarray*}\displaystyle \begin{array}{l}{\displaystyle \int }_{{{\mathbb{R}}}^{3}}\delta (1-{\omega }_{{\boldsymbol{A}},{\boldsymbol{B}},{\boldsymbol{C}}}(r,s,t)){\rm{d}}r{\rm{d}}s{\rm{d}}t\\ \quad =\,{\displaystyle \int }_{{{\mathbb{R}}}^{3}}\delta \left(1-\sqrt{\left\langle {\boldsymbol{x}}\left|{{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}}^{-1}\right|{\boldsymbol{x}}\right\rangle }\right)[{\rm{d}}{\boldsymbol{x}}].\end{array}\end{eqnarray*}$
Here ${\boldsymbol{x}}=(r-{a}_{0},s-{b}_{0},t-{c}_{0})$ and $[{\rm{d}}{\boldsymbol{x}}]={\rm{d}}r{\rm{d}}s{\rm{d}}t$. By the spectral decomposition of the real symmetric matrix ${{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}}$, there is an orthogonal matrix ${\boldsymbol{O}}\in {\rm{O}}(3)$ such that ${{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}}={{\boldsymbol{O}}}^{{\mathsf{T}}}\mathrm{diag}({\lambda }_{1},{\lambda }_{2},{\lambda }_{3}){\boldsymbol{O}}$, where ${\lambda }_{k}\gt 0$ $(k=1,2,3)$. Thus
$\begin{eqnarray*}\displaystyle \begin{array}{l}{\omega }_{{\boldsymbol{A}},{\boldsymbol{B}},{\boldsymbol{C}}}^{2}(r,s,t)=\left\langle {\boldsymbol{O}}{\boldsymbol{x}}\left|\mathrm{diag}({\lambda }_{1}^{-1},{\lambda }_{2}^{-1},{\lambda }_{3}^{-1})\right|{\boldsymbol{O}}{\boldsymbol{x}}\right\rangle \\ \quad =\,\left\langle {\boldsymbol{y}}\left|\mathrm{diag}({\lambda }_{1}^{-1},{\lambda }_{2}^{-1},{\lambda }_{3}^{-1})\right|{\boldsymbol{y}}\right\rangle ,\end{array}\end{eqnarray*}$
where ${\boldsymbol{y}}={\boldsymbol{O}}{\boldsymbol{x}}$. Thus
$\begin{eqnarray*}\displaystyle \begin{array}{l}{\displaystyle \int }_{{{\mathbb{R}}}^{3}}\delta (1-{\omega }_{{\boldsymbol{A}},{\boldsymbol{B}},{\boldsymbol{C}}}(r,s,t)){\rm{d}}r{\rm{d}}s{\rm{d}}t\\ \quad =\,{\displaystyle \int }_{{{\mathbb{R}}}^{3}}\delta \left(1-\sqrt{\left\langle {\boldsymbol{y}}\left|\mathrm{diag}({\lambda }_{1}^{-1},{\lambda }_{2}^{-1},{\lambda }_{3}^{-1})\right|{\boldsymbol{y}}\right\rangle }\right)[{\rm{d}}{\boldsymbol{y}}].\end{array}\end{eqnarray*}$
Let ${\boldsymbol{z}}=\mathrm{diag}({\lambda }_{1}^{-1/2},{\lambda }_{2}^{-1/2},{\lambda }_{3}^{-1/2}){\boldsymbol{y}}$. Then $[{\rm{d}}{\boldsymbol{z}}]\,=\tfrac{1}{\sqrt{{\lambda }_{1}{\lambda }_{2}{\lambda }_{3}}}[{\rm{d}}{\boldsymbol{y}}]=\tfrac{1}{\sqrt{\det ({{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}})}}[{\rm{d}}{\boldsymbol{y}}]$ and
$\begin{eqnarray*}\displaystyle \begin{array}{l}{\displaystyle \int }_{{{\mathbb{R}}}^{3}}\delta (1-{\omega }_{{\boldsymbol{A}},{\boldsymbol{B}},{\boldsymbol{C}}}(r,s,t)){\rm{d}}r{\rm{d}}s{\rm{d}}t\\ \quad =\,\sqrt{\det ({{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}})}{\displaystyle \int }_{{{\mathbb{R}}}^{3}}\delta \left(1-\left|{\boldsymbol{z}}\right|\right)[{\rm{d}}{\boldsymbol{z}}]=4\pi \sqrt{\det ({{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}})}.\end{array}\end{eqnarray*}$
Finally, we get that
$\begin{eqnarray*}\displaystyle \begin{array}{l}{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle ,\langle {\boldsymbol{C}}\rangle }^{(2)}(r,s,t)\\ \quad =\,\displaystyle \frac{1}{4\pi \sqrt{\det ({{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}})}}\delta (1-{\omega }_{{\boldsymbol{A}},{\boldsymbol{B}},{\boldsymbol{C}}}(r,s,t)).\end{array}\end{eqnarray*}$
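The key identity used above, ${\omega }_{{\boldsymbol{A}},{\boldsymbol{B}},{\boldsymbol{C}}}(r,s,t)=1$ for every pure state when $\left\{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}\right\}$ is linearly independent, can be confirmed numerically; a sketch assuming NumPy (the particular vectors are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 1.0, 0.0])
c = np.array([0.0, 1.0, 2.0])            # {a, b, c} linearly independent
Q = np.column_stack([a, b, c])           # Q = (a, b, c)
Tinv = np.linalg.inv(Q.T @ Q)            # T_{a,b,c}^{-1} = (Q^T Q)^{-1}

u = rng.normal(size=(50_000, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)

x = u @ Q                                # (r - a0, s - b0, t - c0) = <u| Q
omega = np.sqrt(np.einsum('ni,ij,nj->n', x, Tinv, x))

print(omega.min(), omega.max())          # identically 1: points lie on the ellipsoid
```

Up to floating-point error, every sample gives $\omega =1$, so the expectation triple is confined to the ellipsoid surface, as the delta factor in the PDF asserts.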


(ii) If $\mathrm{rank}({{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}})=2$, then $\left\{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}\right\}$ is linearly dependent. Without loss of generality, we assume that $\left\{{\boldsymbol{a}},{\boldsymbol{b}}\right\}$ is linearly independent. Now ${\boldsymbol{c}}={\kappa }_{{\boldsymbol{a}}}{\boldsymbol{a}}+{\kappa }_{{\boldsymbol{b}}}{\boldsymbol{b}}$ for some ${\kappa }_{{\boldsymbol{a}}},{\kappa }_{{\boldsymbol{b}}}\,\in {\mathbb{R}}$ with ${\kappa }_{{\boldsymbol{a}}}{\kappa }_{{\boldsymbol{b}}}\ne 0$. Thus

$\begin{eqnarray*}\displaystyle \begin{array}{rcl}t-{c}_{0} & = & \langle {\boldsymbol{C}}{\rangle }_{\psi }-{c}_{0}=\left\langle \psi \left|{\boldsymbol{c}}\cdot {\boldsymbol{\sigma }}\right|\psi \right\rangle \\ & = & {\kappa }_{{\boldsymbol{a}}}\left\langle \psi \left|{\boldsymbol{a}}\cdot {\boldsymbol{\sigma }}\right|\psi \right\rangle +{\kappa }_{{\boldsymbol{b}}}\left\langle \psi \left|{\boldsymbol{b}}\cdot {\boldsymbol{\sigma }}\right|\psi \right\rangle \\ & = & {\kappa }_{{\boldsymbol{a}}}(r-{a}_{0})+{\kappa }_{{\boldsymbol{b}}}(s-{b}_{0}).\end{array}\end{eqnarray*}$
Therefore we get that
$\begin{eqnarray*}\displaystyle \begin{array}{l}{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle ,\langle {\boldsymbol{C}}\rangle }^{(2)}(r,s,t)=\delta ((t-{c}_{0})\\ \quad -\,{\kappa }_{{\boldsymbol{a}}}(r-{a}_{0})-{\kappa }_{{\boldsymbol{b}}}(s-{b}_{0})){f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle }^{(2)}(r,s).\end{array}\end{eqnarray*}$


(iii) If $\mathrm{rank}({{\boldsymbol{T}}}_{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}})=1$, then any two of $\left\{{\boldsymbol{a}},{\boldsymbol{b}},{\boldsymbol{c}}\right\}$ are linearly dependent. Without loss of generality, we assume that ${\boldsymbol{a}}\ne 0$ and ${\boldsymbol{b}}={\kappa }_{{\boldsymbol{b}}{\boldsymbol{a}}}{\boldsymbol{a}},{\boldsymbol{c}}={\kappa }_{{\boldsymbol{c}}{\boldsymbol{a}}}{\boldsymbol{a}}$ for some ${\kappa }_{{\boldsymbol{b}}{\boldsymbol{a}}}$ and ${\kappa }_{{\boldsymbol{c}}{\boldsymbol{a}}}$ with ${\kappa }_{{\boldsymbol{b}}{\boldsymbol{a}}}{\kappa }_{{\boldsymbol{c}}{\boldsymbol{a}}}\ne 0$. Then we get the desired result by mimicking the proof in (ii).

The joint probability distribution density of $({{\rm{\Delta }}}_{\psi }{\boldsymbol{A}},{{\rm{\Delta }}}_{\psi }{\boldsymbol{B}},{{\rm{\Delta }}}_{\psi }{\boldsymbol{C}})$ for a triple of qubit observables defined by equation (11), where $\left|\psi \right\rangle $ is a Haar-distributed random pure state on ${{\mathbb{C}}}^{2}$, is given by

$\begin{eqnarray*}\displaystyle \begin{array}{l}{f}_{{\rm{\Delta }}{\boldsymbol{A}},{\rm{\Delta }}{\boldsymbol{B}},{\rm{\Delta }}{\boldsymbol{C}}}^{(2)}(x,y,z)=\displaystyle \frac{2{xyz}}{\sqrt{({a}^{2}-{x}^{2})({b}^{2}-{y}^{2})({c}^{2}-{z}^{2})}}\\ \quad \times \,\sum _{j,k\in \{\pm \}}{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle ,\langle {\boldsymbol{C}}\rangle }^{(2)}({r}_{+}(x),{s}_{j}(y),{t}_{k}(z)),\end{array}\end{eqnarray*}$
where ${f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle ,\langle {\boldsymbol{C}}\rangle }^{(2)}(r,s,t)$ is the joint probability distribution density of the expectation values $(\langle {\boldsymbol{A}}{\rangle }_{\psi },\langle {\boldsymbol{B}}{\rangle }_{\psi },\langle {\boldsymbol{C}}{\rangle }_{\psi })$, which is determined by equation (13) in proposition 6; and
$\begin{eqnarray*}\displaystyle \begin{array}{l}{r}_{\pm }(x):={a}_{0}\pm \sqrt{{a}^{2}-{x}^{2}},\\ \quad {s}_{\pm }(y):={b}_{0}\pm \sqrt{{b}^{2}-{y}^{2}},\\ \quad {t}_{\pm }(z):={c}_{0}\pm \sqrt{{c}^{2}-{z}^{2}}.\end{array}\end{eqnarray*}$

Note that

$\begin{eqnarray*}\displaystyle \begin{array}{l}{f}_{{\rm{\Delta }}{\boldsymbol{A}},{\rm{\Delta }}{\boldsymbol{B}},{\rm{\Delta }}{\boldsymbol{C}}}^{(2)}(x,y,z)\\ \quad =\,\displaystyle \int \delta (x-{{\rm{\Delta }}}_{\psi }{\boldsymbol{A}})\delta (y-{{\rm{\Delta }}}_{\psi }{\boldsymbol{B}})\delta (z-{{\rm{\Delta }}}_{\psi }{\boldsymbol{C}}){\rm{d}}\mu (\psi )\\ \quad =\,8{xyz}\displaystyle \int \delta \left({x}^{2}-{{\rm{\Delta }}}_{\psi }^{2}{\boldsymbol{A}}\right)\cdot \delta \left({y}^{2}-{{\rm{\Delta }}}_{\psi }^{2}{\boldsymbol{B}}\right)\\ \quad \cdot \delta \left({z}^{2}-{{\rm{\Delta }}}_{\psi }^{2}{\boldsymbol{C}}\right){\rm{d}}\mu (\psi ).\end{array}\end{eqnarray*}$
Using the method in the proof of theorem 1 again, we obtain
$\begin{eqnarray*}\displaystyle \begin{array}{l}\delta \left({x}^{2}-{{\rm{\Delta }}}_{\psi }^{2}{\boldsymbol{A}}\right)\cdot \delta \left({y}^{2}-{{\rm{\Delta }}}_{\psi }^{2}{\boldsymbol{B}}\right)\cdot \delta \left({z}^{2}-{{\rm{\Delta }}}_{\psi }^{2}{\boldsymbol{C}}\right)\\ \quad =\,\delta ({f}_{x}(r))\delta ({g}_{y}(s))\delta ({h}_{z}(t)),\end{array}\end{eqnarray*}$
where
$\begin{eqnarray*}\displaystyle \begin{array}{rcl}{f}_{x}(r) & := & {x}^{2}-(r-{\lambda }_{1}({\boldsymbol{A}}))({\lambda }_{2}({\boldsymbol{A}})-r),\\ {g}_{y}(s) & := & {y}^{2}-(s-{\lambda }_{1}({\boldsymbol{B}}))({\lambda }_{2}({\boldsymbol{B}})-s),\\ {h}_{z}(t) & := & {z}^{2}-(t-{\lambda }_{1}({\boldsymbol{C}}))({\lambda }_{2}({\boldsymbol{C}})-t).\end{array}\end{eqnarray*}$
Then
$\begin{eqnarray*}\displaystyle \begin{array}{l}{f}_{{\rm{\Delta }}{\boldsymbol{A}},{\rm{\Delta }}{\boldsymbol{B}},{\rm{\Delta }}{\boldsymbol{C}}}^{(2)}(x,y,z)=8{xyz}\\ \quad \times \,\iiint \delta ({f}_{x}(r))\delta ({g}_{y}(s))\delta ({h}_{z}(t)){f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle ,\langle {\boldsymbol{C}}\rangle }^{(2)}(r,s,t){\rm{d}}r{\rm{d}}s{\rm{d}}t.\end{array}\end{eqnarray*}$
Furthermore, we have
$\begin{eqnarray*}\displaystyle \begin{array}{l}\delta ({f}_{x}(r))\delta ({g}_{y}(s))\delta ({h}_{z}(t))\\ \quad =\,\displaystyle \frac{\sum _{i,j,k\in \{\pm \}}{\delta }_{({r}_{i}(x),{s}_{j}(y),{t}_{k}(z))}}{8\sqrt{({a}^{2}-{x}^{2})({b}^{2}-{y}^{2})({c}^{2}-{z}^{2})}}.\end{array}\end{eqnarray*}$
Based on this observation, we get that
$\begin{eqnarray*}\displaystyle \begin{array}{l}{f}_{{\rm{\Delta }}{\boldsymbol{A}},{\rm{\Delta }}{\boldsymbol{B}},{\rm{\Delta }}{\boldsymbol{C}}}^{(2)}(x,y,z)\\ \quad =\,\displaystyle \frac{{xyz}}{\sqrt{({a}^{2}-{x}^{2})({b}^{2}-{y}^{2})({c}^{2}-{z}^{2})}}\\ \quad \times \,\sum _{i,j,k\in \{\pm \}}\left\langle {\delta }_{({r}_{i}(x),{s}_{j}(y),{t}_{k}(z))},{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle ,\langle {\boldsymbol{C}}\rangle }^{(2)}\right\rangle .\end{array}\end{eqnarray*}$
Thus
$\begin{eqnarray*}\displaystyle \begin{array}{l}{f}_{{\rm{\Delta }}{\boldsymbol{A}},{\rm{\Delta }}{\boldsymbol{B}},{\rm{\Delta }}{\boldsymbol{C}}}^{(2)}(x,y,z)\\ \quad =\,\displaystyle \frac{{xyz}\sum _{i,j,k\in \{\pm \}}{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle ,\langle {\boldsymbol{C}}\rangle }^{(2)}({r}_{i}(x),{s}_{j}(y),{t}_{k}(z))}{\sqrt{({a}^{2}-{x}^{2})({b}^{2}-{y}^{2})({c}^{2}-{z}^{2})}}.\end{array}\end{eqnarray*}$
It is easily seen that
$\begin{eqnarray*}\displaystyle \begin{array}{l}{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle ,\langle {\boldsymbol{C}}\rangle }^{(2)}({r}_{+}(x),{s}_{+}(y),{t}_{+}(z))\\ \quad =\,{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle ,\langle {\boldsymbol{C}}\rangle }^{(2)}({r}_{-}(x),{s}_{-}(y),{t}_{-}(z)),\\ {f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle ,\langle {\boldsymbol{C}}\rangle }^{(2)}({r}_{+}(x),{s}_{+}(y),{t}_{-}(z))\\ \quad =\,{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle ,\langle {\boldsymbol{C}}\rangle }^{(2)}({r}_{-}(x),{s}_{-}(y),{t}_{+}(z)),\\ {f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle ,\langle {\boldsymbol{C}}\rangle }^{(2)}({r}_{+}(x),{s}_{-}(y),{t}_{+}(z))\\ \quad =\,{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle ,\langle {\boldsymbol{C}}\rangle }^{(2)}({r}_{-}(x),{s}_{+}(y),{t}_{-}(z)),\\ {f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle ,\langle {\boldsymbol{C}}\rangle }^{(2)}({r}_{+}(x),{s}_{-}(y),{t}_{-}(z))\\ \quad =\,{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle ,\langle {\boldsymbol{C}}\rangle }^{(2)}({r}_{-}(x),{s}_{+}(y),{t}_{+}(z)).\end{array}\end{eqnarray*}$
From these observations, we can reduce the above expression to the following:
$\begin{eqnarray*}\displaystyle \begin{array}{l}{f}_{{\rm{\Delta }}{\boldsymbol{A}},{\rm{\Delta }}{\boldsymbol{B}},{\rm{\Delta }}{\boldsymbol{C}}}^{(2)}(x,y,z)\\ \quad =\,\displaystyle \frac{2{xyz}{\sum }_{j,k\in \{\pm \}}{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {\boldsymbol{B}}\rangle ,\langle {\boldsymbol{C}}\rangle }^{(2)}({r}_{+}(x),{s}_{j}(y),{t}_{k}(z))}{\sqrt{({a}^{2}-{x}^{2})({b}^{2}-{y}^{2})({c}^{2}-{z}^{2})}}.\end{array}\end{eqnarray*}$
The desired result is obtained.
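The substitution used above, namely that the uncertainty $x={{\rm{\Delta }}}_{\psi }{\boldsymbol{A}}$ determines the expectation $r=\langle {\boldsymbol{A}}{\rangle }_{\psi }$ only up to the two branches ${r}_{\pm }(x)={a}_{0}\pm \sqrt{{a}^{2}-{x}^{2}}$, can be checked directly, since ${x}^{2}=(r-{\lambda }_{1}({\boldsymbol{A}}))({\lambda }_{2}({\boldsymbol{A}})-r)$ with ${\lambda }_{1,2}({\boldsymbol{A}})={a}_{0}\mp |{\boldsymbol{a}}|$. A minimal numerical sketch, with an illustrative qubit observable ${\boldsymbol{A}}={a}_{0}{\mathbb{1}}+{\boldsymbol{a}}\cdot {\boldsymbol{\sigma }}$:

```python
import numpy as np

rng = np.random.default_rng(1)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# illustrative qubit observable A = a0*I + a . sigma
a0 = 0.7
avec = np.array([0.6, -0.3, 0.8])
a = np.linalg.norm(avec)                      # half spectral gap |a|
A = a0 * np.eye(2, dtype=complex) + avec[0] * sx + avec[1] * sy + avec[2] * sz

ok = True
for _ in range(500):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)   # Haar-random pure state
    psi /= np.linalg.norm(psi)
    r = np.vdot(psi, A @ psi).real                       # <A>_psi
    var = np.vdot(psi, A @ A @ psi).real - r ** 2        # (Delta_psi A)^2
    x = np.sqrt(max(var, 0.0))
    root = np.sqrt(max(a ** 2 - x ** 2, 0.0))
    # r must hit one of the two branches r_pm(x) = a0 +/- sqrt(a^2 - x^2)
    ok = ok and (np.isclose(r, a0 + root) or np.isclose(r, a0 - root))
```

Note that $x\leqslant |{\boldsymbol{a}}|$ always holds, so the square root is real for every sampled state.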

Note that the PDFs of the uncertainties of more than three qubit observables reduce to the three situations above, as shown in [29]. We omit the details here.

4. PDF of uncertainty of a single qudit observable

Assume that ${\boldsymbol{A}}$ is a non-degenerate positive matrix with eigenvalues ${\lambda }_{k}({\boldsymbol{A}})={a}_{k}$ (k = 1, …, d), where ${a}_{d}\gt \cdots \gt {a}_{1}$, and denote ${V}_{d}({\boldsymbol{a}})=\prod _{1\leqslant i\lt j\leqslant d}({a}_{j}-{a}_{i})$. Due to the relation ${\left({{\rm{\Delta }}}_{\psi }{\boldsymbol{A}}\right)}^{2}=\left\langle \psi \left|{{\boldsymbol{A}}}^{2}\right|\psi \right\rangle -{\left\langle \psi \left|{\boldsymbol{A}}\right|\psi \right\rangle }^{2}$, the variance of ${\boldsymbol{A}}$ is a function of $r=\left\langle \psi \left|{\boldsymbol{A}}\right|\psi \right\rangle $ and $s=\left\langle \psi \left|{{\boldsymbol{A}}}^{2}\right|\psi \right\rangle $, where $\left|\psi \right\rangle $ is a Haar-distributed pure state. Thus we first derive the joint PDF of $(\left\langle \psi \left|{\boldsymbol{A}}\right|\psi \right\rangle ,\left\langle \psi \left|{{\boldsymbol{A}}}^{2}\right|\psi \right\rangle )$, defined by
$\begin{eqnarray*}{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {{\boldsymbol{A}}}^{2}\rangle }^{(d)}(r,s):=\displaystyle \int \delta (r-\left\langle \psi \left|{\boldsymbol{A}}\right|\psi \right\rangle )\delta (s-\left\langle \psi \left|{{\boldsymbol{A}}}^{2}\right|\psi \right\rangle ){\rm{d}}\mu (\psi ).\end{eqnarray*}$
Applying the Laplace transformation $(r,s)\to (\alpha ,\beta )$ to ${f}_{\langle {\boldsymbol{A}}\rangle ,\langle {{\boldsymbol{A}}}^{2}\rangle }^{(d)}(r,s)$, we get that
$\begin{eqnarray*}\displaystyle \begin{array}{l}{\mathscr{L}}\left({f}_{\langle {\boldsymbol{A}}\rangle ,\langle {{\boldsymbol{A}}}^{2}\rangle }^{(d)}\right)(\alpha ,\beta )\\ \quad =\,\displaystyle \int \exp \left(-\left\langle \psi \left|\alpha {\boldsymbol{A}}+\beta {{\boldsymbol{A}}}^{2}\right|\psi \right\rangle \right){\rm{d}}\mu (\psi )\\ \quad =\,\displaystyle \int \exp \left(-z\right){f}_{\langle \alpha {\boldsymbol{A}}+\beta {{\boldsymbol{A}}}^{2}\rangle }^{(d)}(z){\rm{d}}z,\end{array}\end{eqnarray*}$
where ${f}_{\langle \alpha {\boldsymbol{A}}+\beta {{\boldsymbol{A}}}^{2}\rangle }^{(d)}(z)$ is determined by proposition 4:
$\begin{eqnarray*}\displaystyle \begin{array}{l}{f}_{\langle \alpha {\boldsymbol{A}}+\beta {{\boldsymbol{A}}}^{2}\rangle }^{(d)}(z)={\left(-1\right)}^{d-1}(d-1)\\ \quad \times \,\sum _{i=1}^{d}\,\displaystyle \frac{{\left(z-(\alpha {a}_{i}+\beta {a}_{i}^{2})\right)}^{d-2}}{{\prod }_{j\in \hat{i}}({a}_{i}-{a}_{j})(\alpha +\beta ({a}_{i}+{a}_{j}))}H(z-(\alpha {a}_{i}+\beta {a}_{i}^{2})).\end{array}\end{eqnarray*}$
In principle, we can calculate the above integral for any finite dimension d; here we focus on the cases d = 3, 4 and use Mathematica for this tedious job. By simplifying the results obtained via the Laplace transformation/inverse Laplace transformation in Mathematica, we get the following results, whose derivations are omitted.

For a given qutrit observable ${\boldsymbol{A}}$, acting on ${{\mathbb{C}}}^{3}$, with eigenvalues ${a}_{1}\lt {a}_{2}\lt {a}_{3}$, the joint PDF of $(\langle {\boldsymbol{A}}{\rangle }_{\psi },\langle {{\boldsymbol{A}}}^{2}{\rangle }_{\psi })$, where $\left|\psi \right\rangle \in {{\mathbb{C}}}^{3}$, is given by

$\begin{eqnarray*}{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {{\boldsymbol{A}}}^{2}\rangle }^{(3)}(r,s)=\displaystyle \frac{{\rm{\Gamma }}(3)}{{V}_{3}({\boldsymbol{a}})},\end{eqnarray*}$
on $D={D}_{1}\cup {D}_{2}$ for
$\begin{eqnarray*}\displaystyle \begin{array}{rcl}{D}_{1} & = & \left\{(r,s):{a}_{1}\leqslant r\leqslant {a}_{2},({a}_{1}+{a}_{2})r\right.\\ & & \left.-{a}_{1}{a}_{2}\leqslant s\leqslant ({a}_{1}+{a}_{3})r-{a}_{1}{a}_{3}\right\},\\ {D}_{2} & = & \left\{(r,s):{a}_{2}\leqslant r\leqslant {a}_{3},({a}_{2}+{a}_{3})r\right.\\ & & \left.-{a}_{2}{a}_{3}\leqslant s\leqslant ({a}_{1}+{a}_{3})r-{a}_{1}{a}_{3}\right\};\end{array}\end{eqnarray*}$
and ${f}_{\langle {\boldsymbol{A}}\rangle ,\langle {{\boldsymbol{A}}}^{2}\rangle }^{(3)}(r,s)=0$ otherwise. Thus
$\begin{eqnarray*}{f}_{\langle {\boldsymbol{A}}\rangle ,{\rm{\Delta }}{\boldsymbol{A}}}^{(3)}(r,x)=\displaystyle \frac{2{\rm{\Gamma }}(3)}{{V}_{3}({\boldsymbol{a}})}x\quad (\forall (r,x)\in R),\end{eqnarray*}$
where $R={R}_{1}\cup {R}_{2}$ with
$\begin{eqnarray*}\displaystyle \begin{array}{rcl}{R}_{1} & = & \left\{(r,x):{a}_{1}\leqslant r\leqslant {a}_{2},\sqrt{({a}_{2}-r)(r-{a}_{1})}\right.\\ & & \left.\leqslant x\leqslant \sqrt{({a}_{3}-r)(r-{a}_{1})}\right\},\\ {R}_{2} & = & \left\{(r,x):{a}_{2}\leqslant r\leqslant {a}_{3},\sqrt{({a}_{3}-r)(r-{a}_{2})}\right.\\ & & \left.\leqslant x\leqslant \sqrt{({a}_{3}-r)(r-{a}_{1})}\right\},\end{array}\end{eqnarray*}$
and ${f}_{\langle {\boldsymbol{A}}\rangle ,{\rm{\Delta }}{\boldsymbol{A}}}^{(3)}(r,x)=0$ otherwise. Moreover, we get that
$\begin{eqnarray*}\displaystyle \begin{array}{l}{f}_{{\rm{\Delta }}{\boldsymbol{A}}}^{(3)}(x)=\displaystyle \frac{4{\rm{\Gamma }}(3)}{{V}_{3}({\boldsymbol{a}})}x\left({\chi }_{\left[0,\tfrac{{a}_{3}-{a}_{1}}{2}\right]}(x){\varepsilon }_{31}(x)\right.\\ \quad \left.-{\chi }_{\left[0,\tfrac{{a}_{3}-{a}_{2}}{2}\right]}(x){\varepsilon }_{32}(x)-{\chi }_{\left[0,\tfrac{{a}_{2}-{a}_{1}}{2}\right]}(x){\varepsilon }_{21}(x)\right),\end{array}\end{eqnarray*}$
where $x\in \left[0,\tfrac{{a}_{3}-{a}_{1}}{2}\right]$ and ${\chi }_{S}(x)$ is the indicator of the set S, i.e. ${\chi }_{S}(x)=1$ if $x\in S$, and ${\chi }_{S}(x)=0$ if $x\notin S;$
$\begin{eqnarray}{\varepsilon }_{{ij}}(x):=\sqrt{{\left(\displaystyle \frac{{a}_{i}-{a}_{j}}{2}\right)}^{2}-{x}^{2}}.\end{eqnarray}$
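As a sanity check, ${f}_{{\rm{\Delta }}{\boldsymbol{A}}}^{(3)}$ integrates to 1 over $[0,({a}_{3}-{a}_{1})/2]$: using $\int _{0}^{m}x\sqrt{{m}^{2}-{x}^{2}}{\rm{d}}x={m}^{3}/3$, the normalization reduces to the identity ${({a}_{3}-{a}_{1})}^{3}-{({a}_{3}-{a}_{2})}^{3}-{({a}_{2}-{a}_{1})}^{3}=3{V}_{3}({\boldsymbol{a}})$. A minimal numerical sketch, with an illustrative qutrit spectrum:

```python
import numpy as np

a1, a2, a3 = 0.0, 1.0, 3.0                      # illustrative qutrit spectrum
V3 = (a2 - a1) * (a3 - a1) * (a3 - a2)          # Vandermonde V_3(a)

def eps(ai, aj, x):
    """eps_ij(x) = sqrt(((a_i - a_j)/2)^2 - x^2), clipped to its support."""
    return np.sqrt(np.clip(((ai - aj) / 2) ** 2 - x ** 2, 0.0, None))

def chi(m, x):
    """Indicator function of the interval [0, m]."""
    return (x <= m).astype(float)

def f3(x):
    # density f_{Delta A}^{(3)}; prefactor 4*Gamma(3)/V3 = 8/V3
    return (8.0 / V3) * x * (chi((a3 - a1) / 2, x) * eps(a3, a1, x)
                             - chi((a3 - a2) / 2, x) * eps(a3, a2, x)
                             - chi((a2 - a1) / 2, x) * eps(a2, a1, x))

x = np.linspace(0.0, (a3 - a1) / 2, 200001)
y = f3(x)
total = np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2   # trapezoidal rule
```

The same check passes for any spectrum ${a}_{1}\lt {a}_{2}\lt {a}_{3}$, since the cubic identity above holds for all real gaps.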

For a given qudit observable ${\boldsymbol{A}}$, acting on ${{\mathbb{C}}}^{4}$, with eigenvalues ${a}_{1}\lt {a}_{2}\lt {a}_{3}\lt {a}_{4}$, the joint PDF of $(\langle {\boldsymbol{A}}{\rangle }_{\psi },\langle {{\boldsymbol{A}}}^{2}{\rangle }_{\psi })$, where $\left|\psi \right\rangle \in {{\mathbb{C}}}^{4}$, is given by

$\begin{eqnarray*}{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {{\boldsymbol{A}}}^{2}\rangle }^{(4)}(r,s)=\displaystyle \frac{{\rm{\Gamma }}(4)}{{V}_{4}({\boldsymbol{a}})}g(r,s),\end{eqnarray*}$
where $g(r,s)$ is defined in the following way: via ${\varphi }_{i,j}(r):=({a}_{i}+{a}_{j})r-{a}_{i}{a}_{j}$,
$\begin{eqnarray*}\displaystyle \begin{array}{l}g(r,s)\\ =\,\left\{\begin{array}{ll}({a}_{4}-{a}_{3})(s-{\varphi }_{\mathrm{1,2}}(r)), & \mathrm{if}(r,s)\in {D}_{1}^{(1)}\cup {D}_{2}^{(2)};\\ ({a}_{4}-{a}_{1})(s-{\varphi }_{\mathrm{2,3}}(r)), & \mathrm{if}(r,s)\in {D}_{1}^{(2)}\cup {D}_{4}^{(2)};\\ ({a}_{2}-{a}_{1})(s-{\varphi }_{\mathrm{3,4}}(r)), & \mathrm{if}(r,s)\in {D}_{5}^{(2)}\cup {D}_{1}^{(3)};\\ ({a}_{2}-{a}_{3})(s-{\varphi }_{\mathrm{1,4}}(r)), & \mathrm{if}(r,s)\in {D}_{2}^{(1)}\cup {D}_{3}^{(2)}\cup {D}_{6}^{(2)}\cup {D}_{2}^{(3)};\end{array}\right.\end{array}\end{eqnarray*}$
on $D={\bigcup }_{i=1}^{2}{D}_{i}^{(1)}\cup {\bigcup }_{j=1}^{6}{D}_{j}^{(2)}\cup {\bigcup }_{k=1}^{2}{D}_{k}^{(3)}$ for
$\begin{eqnarray*}\displaystyle \begin{array}{rcl}{D}_{1}^{(1)} & = & \left\{(r,s):{a}_{1}\leqslant r\leqslant {a}_{2},{\varphi }_{\mathrm{1,2}}(r)\right.\\ & & \left.\leqslant s\leqslant {\varphi }_{\mathrm{1,3}}(r)\right\},\\ {D}_{2}^{(1)} & = & \left\{(r,s):{a}_{1}\leqslant r\leqslant {a}_{2},{\varphi }_{\mathrm{1,3}}(r)\right.\\ & & \left.\leqslant s\leqslant {\varphi }_{\mathrm{1,4}}(r)\right\},\end{array}\end{eqnarray*}$
and
$\begin{eqnarray*}\displaystyle \begin{array}{l}{D}_{1}^{(2)}=\left\{(r,s):{a}_{2}\leqslant r\leqslant \displaystyle \frac{{a}_{2}{a}_{4}-{a}_{1}{a}_{3}}{{a}_{2}+{a}_{4}-{a}_{1}-{a}_{3}},\right.\\ \left.{\varphi }_{\mathrm{2,3}}(r)\leqslant s\leqslant {\varphi }_{\mathrm{2,4}}(r)\right\},\\ \quad {D}_{2}^{(2)}=\left\{(r,s):{a}_{2}\leqslant r\leqslant \displaystyle \frac{{a}_{2}{a}_{4}-{a}_{1}{a}_{3}}{{a}_{2}+{a}_{4}-{a}_{1}-{a}_{3}},\right.\\ \left.{\varphi }_{\mathrm{2,4}}(r)\leqslant s\leqslant {\varphi }_{\mathrm{1,3}}(r)\right\},\\ {D}_{3}^{(2)}=\left\{(r,s):{a}_{2}\leqslant r\leqslant \displaystyle \frac{{a}_{2}{a}_{4}-{a}_{1}{a}_{3}}{{a}_{2}+{a}_{4}-{a}_{1}-{a}_{3}},\right.\\ \quad \left.{\varphi }_{\mathrm{1,3}}(r)\leqslant s\leqslant {\varphi }_{\mathrm{1,4}}(r)\right\},\\ {D}_{4}^{(2)}=\left\{(r,s):\displaystyle \frac{{a}_{2}{a}_{4}-{a}_{1}{a}_{3}}{{a}_{2}+{a}_{4}-{a}_{1}-{a}_{3}}\leqslant r\leqslant {a}_{3},\right.\\ \quad \left.{\varphi }_{\mathrm{2,3}}(r)\leqslant s\leqslant {\varphi }_{\mathrm{1,3}}(r)\right\},\\ {D}_{5}^{(2)}=\left\{(r,s):\displaystyle \frac{{a}_{2}{a}_{4}-{a}_{1}{a}_{3}}{{a}_{2}+{a}_{4}-{a}_{1}-{a}_{3}}\leqslant r\leqslant {a}_{3},\right.\\ \quad \left.{\varphi }_{\mathrm{1,3}}(r)\leqslant s\leqslant {\varphi }_{\mathrm{2,4}}(r)\right\},\\ {D}_{6}^{(2)}=\left\{(r,s):\displaystyle \frac{{a}_{2}{a}_{4}-{a}_{1}{a}_{3}}{{a}_{2}+{a}_{4}-{a}_{1}-{a}_{3}}\leqslant r\leqslant {a}_{3},\right.\\ \quad \left.{\varphi }_{\mathrm{2,4}}(r)\leqslant s\leqslant {\varphi }_{\mathrm{1,4}}(r)\right\},\end{array}\end{eqnarray*}$
and
$\begin{eqnarray*}\displaystyle \begin{array}{rcl}{D}_{1}^{(3)} & = & \left\{(r,s):{a}_{3}\leqslant r\leqslant {a}_{4},{\varphi }_{\mathrm{3,4}}(r)\leqslant s\leqslant {\varphi }_{\mathrm{2,4}}(r)\right\},\\ {D}_{2}^{(3)} & = & \left\{(r,s):{a}_{3}\leqslant r\leqslant {a}_{4},{\varphi }_{\mathrm{2,4}}(r)\leqslant s\leqslant {\varphi }_{\mathrm{1,4}}(r)\right\},\end{array}\end{eqnarray*}$
and ${f}_{\langle {\boldsymbol{A}}\rangle ,\langle {{\boldsymbol{A}}}^{2}\rangle }^{(4)}(r,s)=0$ otherwise.

Denote ${D}_{1}={\bigcup }_{i=1}^{2}{D}_{i}^{(1)}$, ${D}_{2}={\bigcup }_{j=1}^{6}{D}_{j}^{(2)}$, and ${D}_{3}={\bigcup }_{k=1}^{2}{D}_{k}^{(3)}$. Then

$\begin{eqnarray*}\displaystyle \begin{array}{l}{D}_{1}=\left\{(r,s):{a}_{1}\leqslant r\leqslant {a}_{2},{\varphi }_{\mathrm{1,2}}(r)\right.\\ \quad \left.\leqslant s\leqslant {\varphi }_{\mathrm{1,4}}(r)\right\},\\ {D}_{2}=\left\{(r,s):{a}_{2}\leqslant r\leqslant {a}_{3},{\varphi }_{\mathrm{2,3}}(r)\right.\\ \quad \left.\leqslant s\leqslant {\varphi }_{\mathrm{1,4}}(r)\right\},\\ {D}_{3}=\left\{(r,s):{a}_{3}\leqslant r\leqslant {a}_{4},{\varphi }_{\mathrm{3,4}}(r)\right.\\ \quad \left.\leqslant s\leqslant {\varphi }_{\mathrm{1,4}}(r)\right\}.\end{array}\end{eqnarray*}$
This implies that $\mathrm{supp}({f}_{\langle {\boldsymbol{A}}\rangle ,\langle {{\boldsymbol{A}}}^{2}\rangle }^{(4)})={D}_{1}\cup {D}_{2}\cup {D}_{3}$.

For a given qudit observable ${\boldsymbol{A}}$, acting on ${{\mathbb{C}}}^{4}$, with eigenvalues ${a}_{1}\lt {a}_{2}\lt {a}_{3}\lt {a}_{4}$, the joint PDF of $(\langle {\boldsymbol{A}}{\rangle }_{\psi },{{\rm{\Delta }}}_{\psi }{\boldsymbol{A}})$, where $\left|\psi \right\rangle \in {{\mathbb{C}}}^{4}$, is given by

$\begin{eqnarray*}{f}_{\langle {\boldsymbol{A}}\rangle ,{\rm{\Delta }}{\boldsymbol{A}}}^{(4)}(r,x)=\displaystyle \frac{2{\rm{\Gamma }}(4)}{{V}_{4}({\boldsymbol{a}})}{xg}(r,{x}^{2}+{r}^{2}).\end{eqnarray*}$
Moreover, via ${\varphi }_{i,j}(r):=({a}_{i}+{a}_{j})r-{a}_{i}{a}_{j}$,
$\begin{eqnarray*}\displaystyle \begin{array}{l}{f}_{\langle {\boldsymbol{A}}\rangle ,{\rm{\Delta }}{\boldsymbol{A}}}^{(4)}(r,x)=\displaystyle \frac{2{\rm{\Gamma }}(4)}{{V}_{4}({\boldsymbol{a}})}x\\ \times \left\{\begin{array}{ll}({a}_{4}-{a}_{3})({x}^{2}+{r}^{2}-{\varphi }_{\mathrm{1,2}}(r)), & \mathrm{if}(r,x)\in {R}_{1}^{(1)}\cup {R}_{2}^{(2)};\\ ({a}_{4}-{a}_{1})({x}^{2}+{r}^{2}-{\varphi }_{\mathrm{2,3}}(r)), & \mathrm{if}(r,x)\in {R}_{1}^{(2)}\cup {R}_{4}^{(2)};\\ ({a}_{2}-{a}_{1})({x}^{2}+{r}^{2}-{\varphi }_{\mathrm{3,4}}(r)), & \mathrm{if}(r,x)\in {R}_{5}^{(2)}\cup {R}_{1}^{(3)};\\ ({a}_{2}-{a}_{3})({x}^{2}+{r}^{2}-{\varphi }_{\mathrm{1,4}}(r)), & \mathrm{if}(r,x)\in {R}_{2}^{(1)}\cup {R}_{3}^{(2)}\cup {R}_{6}^{(2)}\cup {R}_{2}^{(3)};\end{array}\right.\end{array}\end{eqnarray*}$
on $R={\bigcup }_{i=1}^{2}{R}_{i}^{(1)}\cup {\bigcup }_{j=1}^{6}{R}_{j}^{(2)}\cup {\bigcup }_{k=1}^{2}{R}_{k}^{(3)}$ for
$\begin{eqnarray*}\displaystyle \begin{array}{rcl}{R}_{1}^{(1)} & = & \left\{(r,x):{a}_{1}\leqslant r\leqslant {a}_{2},\sqrt{({a}_{2}-r)(r-{a}_{1})}\right.\\ & & \left.\leqslant x\leqslant \sqrt{({a}_{3}-r)(r-{a}_{1})}\right\},\\ {R}_{2}^{(1)} & = & \left\{(r,x):{a}_{1}\leqslant r\leqslant {a}_{2},\sqrt{({a}_{3}-r)(r-{a}_{1})}\right.\\ & & \left.\leqslant x\leqslant \sqrt{({a}_{4}-r)(r-{a}_{1})}\right\},\end{array}\end{eqnarray*}$
and
$\begin{eqnarray*}\displaystyle \begin{array}{l}{R}_{1}^{(2)}=\left\{(r,x):{a}_{2}\leqslant r\leqslant \displaystyle \frac{{a}_{2}{a}_{4}-{a}_{1}{a}_{3}}{{a}_{2}+{a}_{4}-{a}_{1}-{a}_{3}},\right.\\ \quad \left.\sqrt{({a}_{3}-r)(r-{a}_{2})}\leqslant x\leqslant \sqrt{({a}_{4}-r)(r-{a}_{2})}\right\},\\ {R}_{2}^{(2)}=\left\{(r,x):{a}_{2}\leqslant r\leqslant \displaystyle \frac{{a}_{2}{a}_{4}-{a}_{1}{a}_{3}}{{a}_{2}+{a}_{4}-{a}_{1}-{a}_{3}},\right.\\ \quad \left.\sqrt{({a}_{4}-r)(r-{a}_{2})}\leqslant x\leqslant \sqrt{({a}_{3}-r)(r-{a}_{1})}\right\},\\ {R}_{3}^{(2)}=\left\{(r,x):{a}_{2}\leqslant r\leqslant \displaystyle \frac{{a}_{2}{a}_{4}-{a}_{1}{a}_{3}}{{a}_{2}+{a}_{4}-{a}_{1}-{a}_{3}},\right.\\ \quad \left.\sqrt{({a}_{3}-r)(r-{a}_{1})}\leqslant x\leqslant \sqrt{({a}_{4}-r)(r-{a}_{1})}\right\},\\ {R}_{4}^{(2)}=\left\{(r,x):\displaystyle \frac{{a}_{2}{a}_{4}-{a}_{1}{a}_{3}}{{a}_{2}+{a}_{4}-{a}_{1}-{a}_{3}}\leqslant r\leqslant {a}_{3},\right.\\ \quad \left.\sqrt{({a}_{3}-r)(r-{a}_{2})}\leqslant x\leqslant \sqrt{({a}_{3}-r)(r-{a}_{1})}\right\},\\ {R}_{5}^{(2)}=\left\{(r,x):\displaystyle \frac{{a}_{2}{a}_{4}-{a}_{1}{a}_{3}}{{a}_{2}+{a}_{4}-{a}_{1}-{a}_{3}}\leqslant r\leqslant {a}_{3},\right.\\ \quad \left.\sqrt{({a}_{3}-r)(r-{a}_{1})}\leqslant x\leqslant \sqrt{({a}_{4}-r)(r-{a}_{2})}\right\},\\ {R}_{6}^{(2)}=\left\{(r,x):\displaystyle \frac{{a}_{2}{a}_{4}-{a}_{1}{a}_{3}}{{a}_{2}+{a}_{4}-{a}_{1}-{a}_{3}}\leqslant r\leqslant {a}_{3},\right.\\ \quad \left.\sqrt{({a}_{4}-r)(r-{a}_{2})}\leqslant x\leqslant \sqrt{({a}_{4}-r)(r-{a}_{1})}\right\},\end{array}\end{eqnarray*}$
and
$\begin{eqnarray*}\displaystyle \begin{array}{rcl}{R}_{1}^{(3)} & = & \left\{(r,x):{a}_{3}\leqslant r\leqslant {a}_{4},\sqrt{({a}_{4}-r)(r-{a}_{3})}\right.\\ & & \left.\leqslant x\leqslant \sqrt{({a}_{4}-r)(r-{a}_{2})}\right\},\\ {R}_{2}^{(3)} & = & \left\{(r,x):{a}_{3}\leqslant r\leqslant {a}_{4},\sqrt{({a}_{4}-r)(r-{a}_{2})}\right.\\ & & \left.\leqslant x\leqslant \sqrt{({a}_{4}-r)(r-{a}_{1})}\right\},\end{array}\end{eqnarray*}$
and ${f}_{\langle {\boldsymbol{A}}\rangle ,{\rm{\Delta }}{\boldsymbol{A}}}^{(4)}(r,x)=0$ otherwise. Moreover ${f}_{{\rm{\Delta }}{\boldsymbol{A}}}^{(4)}(x)$ can be identified as
$\begin{eqnarray*}\displaystyle \begin{array}{rcl}{f}_{{\rm{\Delta }}{\boldsymbol{A}}}^{(4)}(x) & = & \displaystyle \frac{{4}^{2}}{{V}_{4}({\boldsymbol{a}})}x\left[({a}_{4}-{a}_{3}){\chi }_{\left[0,\tfrac{{a}_{2}-{a}_{1}}{2}\right]}(x){\varepsilon }_{12}^{3}(x)\right.\\ & & +({a}_{4}-{a}_{1}){\chi }_{\left[0,\tfrac{{a}_{3}-{a}_{2}}{2}\right]}(x){\varepsilon }_{23}^{3}(x)\\ & & -({a}_{4}-{a}_{2}){\chi }_{\left[0,\tfrac{{a}_{3}-{a}_{1}}{2}\right]}(x){\varepsilon }_{13}^{3}(x)\\ & & +({a}_{2}-{a}_{1}){\chi }_{\left[0,\tfrac{{a}_{4}-{a}_{3}}{2}\right]}(x){\varepsilon }_{34}^{3}(x)\\ & & -({a}_{3}-{a}_{1}){\chi }_{\left[0,\tfrac{{a}_{4}-{a}_{2}}{2}\right]}(x){\varepsilon }_{24}^{3}(x)\\ & & \left.+({a}_{3}-{a}_{2}){\chi }_{\left[0,\tfrac{{a}_{4}-{a}_{1}}{2}\right]}(x){\varepsilon }_{14}^{3}(x)\right].\end{array}\end{eqnarray*}$
Here the meanings of the notations χ and ${\varepsilon }_{{ij}}$ can be found in theorem 4.

Denote ${R}_{1}={\bigcup }_{i=1}^{2}{R}_{i}^{(1)}$, ${R}_{2}={\bigcup }_{j=1}^{6}{R}_{j}^{(2)}$, and ${R}_{3}={\bigcup }_{k=1}^{2}{R}_{k}^{(3)}$. Then

$\begin{eqnarray*}\displaystyle \begin{array}{rcl}{R}_{1} & = & \left\{(r,x):{a}_{1}\leqslant r\leqslant {a}_{2},\sqrt{({a}_{2}-r)(r-{a}_{1})}\right.\\ & & \left.\leqslant x\leqslant \sqrt{({a}_{4}-r)(r-{a}_{1})}\right\},\\ {R}_{2} & = & \left\{(r,x):{a}_{2}\leqslant r\leqslant {a}_{3},\sqrt{({a}_{3}-r)(r-{a}_{2})}\right.\\ & & \left.\leqslant x\leqslant \sqrt{({a}_{4}-r)(r-{a}_{1})}\right\},\\ {R}_{3} & = & \left\{(r,x):{a}_{3}\leqslant r\leqslant {a}_{4},\sqrt{({a}_{4}-r)(r-{a}_{3})}\right.\\ & & \left.\leqslant x\leqslant \sqrt{({a}_{4}-r)(r-{a}_{1})}\right\}.\end{array}\end{eqnarray*}$
This implies that $\mathrm{supp}({f}_{\langle {\boldsymbol{A}}\rangle ,{\rm{\Delta }}{\boldsymbol{A}}}^{(4)})={R}_{1}\cup {R}_{2}\cup {R}_{3}$. The supports of ${f}_{\langle {\boldsymbol{A}}\rangle ,{\rm{\Delta }}{\boldsymbol{A}}}^{(d)}(r,x)$ for d = 3, 4 are plotted in figure 1:

Figure 1. Plots of the supports of ${f}_{\langle {\boldsymbol{A}}\rangle ,{\rm{\Delta }}{\boldsymbol{A}}}^{(d)}(r,x)$ for qudit observables A.
Based on corollary 1, we can derive ${f}_{{\rm{\Delta }}{\boldsymbol{A}}}^{(4)}(x)$ in the same way as ${f}_{{\rm{\Delta }}{\boldsymbol{A}}}^{(3)}(x)$. As an illustration, we present a specific example where the eigenvalues of A are given by λ(A) = (1, 3, 9, 27). In fact, this approach works for any qudit observable, albeit with considerable computational complexity. In addition, deriving the joint PDF ${f}_{{\rm{\Delta }}{\boldsymbol{A}},{\rm{\Delta }}{\boldsymbol{B}}}^{(d)}$ of the two uncertainties (ΔA, ΔB) of two qudit observables A and B is very complicated. This is not the goal of the present paper.

Consider a qudit observable ${\boldsymbol{A}}$, acting on ${{\mathbb{C}}}^{4}$, with eigenvalues $\lambda ({\boldsymbol{A}})=(1,3,9,27)$. Still employing the notation in (14), we have:

$\begin{eqnarray*}\displaystyle \begin{array}{rcl}{\varepsilon }_{12}(x) & = & \sqrt{1-{x}^{2}},\quad {\varepsilon }_{13}(x)=\sqrt{{4}^{2}-{x}^{2}},\\ {\varepsilon }_{14}(x) & = & \sqrt{{13}^{2}-{x}^{2}},\quad {\varepsilon }_{23}(x)=\sqrt{{3}^{2}-{x}^{2}},\\ {\varepsilon }_{24}(x) & = & \sqrt{{12}^{2}-{x}^{2}},\quad {\varepsilon }_{34}(x)=\sqrt{{9}^{2}-{x}^{2}}.\end{array}\end{eqnarray*}$
Then from corollary 1, by marginal integration, we derive the PDF of ${{\rm{\Delta }}}_{\psi }{\boldsymbol{A}}$ as follows.

(i) If $x\in [0,1]$, then

$\begin{eqnarray*}\displaystyle \begin{array}{l}{f}_{{\rm{\Delta }}{\boldsymbol{A}}}^{(4)}(x)=\displaystyle \frac{x}{33696}\left(9{\varepsilon }_{12}^{3}(x)+13{\varepsilon }_{23}^{3}(x)\right.\\ \quad \left.-12{\varepsilon }_{13}^{3}(x)+{\varepsilon }_{34}^{3}(x)-4{\varepsilon }_{24}^{3}(x)+3{\varepsilon }_{14}^{3}(x)\right).\end{array}\end{eqnarray*}$


(ii) If $x\in [1,3]$, then

$\begin{eqnarray*}\displaystyle \begin{array}{l}{f}_{{\rm{\Delta }}{\boldsymbol{A}}}^{(4)}(x)=\displaystyle \frac{x}{33696}\left(13{\varepsilon }_{23}^{3}(x)-12{\varepsilon }_{13}^{3}(x)\right.\\ \quad \left.+{\varepsilon }_{34}^{3}(x)-4{\varepsilon }_{24}^{3}(x)+3{\varepsilon }_{14}^{3}(x)\right).\end{array}\end{eqnarray*}$


(iii) If $x\in [3,4]$, then

$\begin{eqnarray*}\displaystyle \begin{array}{l}{f}_{{\rm{\Delta }}{\boldsymbol{A}}}^{(4)}(x)=\displaystyle \frac{x}{33696}\left(-12{\varepsilon }_{13}^{3}(x)\right.\\ \quad \left.+{\varepsilon }_{34}^{3}(x)-4{\varepsilon }_{24}^{3}(x)+3{\varepsilon }_{14}^{3}(x)\right).\end{array}\end{eqnarray*}$


(iv) If $x\in [4,9]$, then

$\begin{eqnarray*}{f}_{{\rm{\Delta }}{\boldsymbol{A}}}^{(4)}(x)=\displaystyle \frac{x}{33696}\left({\varepsilon }_{34}^{3}(x)-4{\varepsilon }_{24}^{3}(x)+3{\varepsilon }_{14}^{3}(x)\right).\end{eqnarray*}$


(v) If $x\in [9,12]$, then

$\begin{eqnarray*}{f}_{{\rm{\Delta }}{\boldsymbol{A}}}^{(4)}(x)=\displaystyle \frac{x}{33696}\left(-4{\varepsilon }_{24}^{3}(x)+3{\varepsilon }_{14}^{3}(x)\right).\end{eqnarray*}$


(vi) If $x\in [12,13]$, then

$\begin{eqnarray*}{f}_{{\rm{\Delta }}{\boldsymbol{A}}}^{(4)}(x)=\displaystyle \frac{x}{11232}{\varepsilon }_{14}^{3}(x).\end{eqnarray*}$
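The piecewise expressions (i)-(vi) can be cross-checked numerically. Since each ${\varepsilon }_{{ij}}^{3}$ term is active exactly where its argument is nonnegative, the six cases collapse into one formula with cutoffs, and the resulting density should be nonnegative and integrate to 1 over [0, 13]. A minimal sketch:

```python
import numpy as np

def eps3(m, x):
    """eps_ij(x)^3 with half-gap m = (a_i - a_j)/2; zero outside [0, m]."""
    return np.clip(m ** 2 - x ** 2, 0.0, None) ** 1.5

def f4(x):
    # cases (i)-(vi) merged: for lambda(A) = (1, 3, 9, 27) the half-gaps are
    # m12 = 1, m23 = 3, m13 = 4, m34 = 9, m24 = 12, m14 = 13
    terms = (9 * eps3(1, x) + 13 * eps3(3, x) - 12 * eps3(4, x)
             + eps3(9, x) - 4 * eps3(12, x) + 3 * eps3(13, x))
    return x * terms / 33696.0

x = np.linspace(0.0, 13.0, 400001)
y = f4(x)
total = np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2   # trapezoidal rule
```

Using $\int _{0}^{m}x{({m}^{2}-{x}^{2})}^{3/2}{\rm{d}}x={m}^{5}/5$, the normalization can also be verified in closed form for this spectrum.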

For illustration, the PDFs ${f}_{{\rm{\Delta }}{\boldsymbol{A}}}^{(d)}(x)$ for qudit observables ${\boldsymbol{A}}$, where d = 3, 4, are plotted in figure 2:

Figure 2. Plots of the PDFs ${f}_{{\rm{\Delta }}{\boldsymbol{A}}}^{(d)}(x)$ for qudit observables A.
Finally, we identify the supports of ${f}_{\langle {\boldsymbol{A}}\rangle ,\langle {{\boldsymbol{A}}}^{2}\rangle }^{(d)}(r,s)$ and ${f}_{\langle {\boldsymbol{A}}\rangle ,{\rm{\Delta }}{\boldsymbol{A}}}^{(d)}(r,x)$, where
$\begin{eqnarray*}\displaystyle \begin{array}{rcl}{f}_{\langle {\boldsymbol{A}}\rangle ,\langle {{\boldsymbol{A}}}^{2}\rangle }^{(d)}(r,s) & = & \displaystyle \int \delta (r-\langle {\boldsymbol{A}}{\rangle }_{\psi })\delta (s-\langle {{\boldsymbol{A}}}^{2}{\rangle }_{\psi }){\rm{d}}\mu (\psi ),\\ {f}_{\langle {\boldsymbol{A}}\rangle ,{\rm{\Delta }}{\boldsymbol{A}}}^{(d)}(r,x) & = & 2{{xf}}_{\langle {\boldsymbol{A}}\rangle ,\langle {{\boldsymbol{A}}}^{2}\rangle }^{(d)}\left(r,{r}^{2}+{x}^{2}\right).\end{array}\end{eqnarray*}$

For a qudit observable ${\boldsymbol{A}}$, acting on ${{\mathbb{C}}}^{d}(d\gt 1)$, with eigenvalues $\lambda ({\boldsymbol{A}})=({a}_{1},\ldots ,{a}_{d})$, where ${a}_{1}\lt \cdots \lt {a}_{d}$, the supports of ${f}_{\langle {\boldsymbol{A}}\rangle ,\langle {{\boldsymbol{A}}}^{2}\rangle }^{(d)}(r,s)$ and ${f}_{\langle {\boldsymbol{A}}\rangle ,{\rm{\Delta }}{\boldsymbol{A}}}^{(d)}(r,x)$ are given, respectively, by the following:

$\begin{eqnarray*}\mathrm{supp}\left({f}_{\langle {\boldsymbol{A}}\rangle ,\langle {{\boldsymbol{A}}}^{2}\rangle }^{(d)}\right)=\bigcup _{k=1}^{d-1}{F}_{k,k+1},\end{eqnarray*}$
where
$\begin{eqnarray*}\displaystyle \begin{array}{l}{F}_{k,k+1}:=\left\{(r,s):{a}_{k}\leqslant r\leqslant {a}_{k+1},\right.\\ \left.{\varphi }_{k,k+1}(r)\leqslant s\leqslant {\varphi }_{1,d}(r)\right\};\end{array}\end{eqnarray*}$
$\begin{eqnarray*}\mathrm{supp}\left({f}_{\langle {\boldsymbol{A}}\rangle ,{\rm{\Delta }}{\boldsymbol{A}}}^{(d)}\right)=\bigcup _{k=1}^{d-1}{V}_{k,k+1},\end{eqnarray*}$
where
$\begin{eqnarray*}\displaystyle \begin{array}{l}{V}_{k,k+1}:=\left\{(r,x):{a}_{k}\leqslant r\leqslant {a}_{k+1},\right.\\ \quad \left.\sqrt{({a}_{k+1}-r)(r-{a}_{k})}\leqslant x\leqslant \sqrt{({a}_{d}-r)(r-{a}_{1})}\right\}.\end{array}\end{eqnarray*}$

Without loss of generality, we assume that ${\boldsymbol{A}}\,=\mathrm{diag}({a}_{1},\ldots ,{a}_{d})$ with ${a}_{1}\lt \cdots \lt {a}_{d}$. Let $\left|\psi \right\rangle ={\left({\psi }_{1},\ldots ,{\psi }_{d}\right)}^{{\mathsf{T}}}\in {{\mathbb{C}}}^{d}$ be a pure state and ${r}_{k}={\left|{\psi }_{k}\right|}^{2}$. Thus $({r}_{1},\ldots ,{r}_{d})$ is a d-dimensional probability vector. Then

$\begin{eqnarray*}r=\langle {\boldsymbol{A}}\rangle =\sum _{k=1}^{d}{a}_{k}{r}_{k}\quad \mathrm{and}\quad s=\langle {{\boldsymbol{A}}}^{2}\rangle =\sum _{k=1}^{d}{a}_{k}^{2}{r}_{k}.\end{eqnarray*}$
Thus, for each $k\in \left\{1,\ldots ,d-1\right\}$,
$\begin{eqnarray*}\displaystyle \begin{array}{l}s-{\varphi }_{k,k+1}(r)=\sum _{i=1}^{d}{a}_{i}^{2}{r}_{i}-({a}_{k}+{a}_{k+1})\sum _{i=1}^{d}{a}_{i}{r}_{i}+{a}_{k}{a}_{k+1}\\ \quad =\,\sum _{i=1}^{d}{a}_{i}^{2}{r}_{i}-({a}_{k}+{a}_{k+1})\sum _{i=1}^{d}{a}_{i}{r}_{i}+{a}_{k}{a}_{k+1}\sum _{i=1}^{d}{r}_{i}\\ \quad =\,\sum _{i=1}^{d}({a}_{i}-{a}_{k})({a}_{i}-{a}_{k+1}){r}_{i}.\end{array}\end{eqnarray*}$
Note that ${a}_{1}\lt \cdots \lt {a}_{d}$ and ${r}_{i}\geqslant 0$ for each i = 1, …, d. We see that, when $i=k,k+1$, $({a}_{i}-{a}_{k})({a}_{i}-{a}_{k+1}){r}_{i}=0$, and $({a}_{i}-{a}_{k})({a}_{i}-{a}_{k+1}){r}_{i}\geqslant 0$ otherwise. This means that $s\,\geqslant {\varphi }_{k,k+1}(r);$ and this inequality is saturated if $({r}_{k},{r}_{k+1})\,=(t,1-t)$ for $t\in (0,1)$ and ri = 0 for $i\ne k,k+1$. Similarly, we can easily get that $s\leqslant {\varphi }_{1,d}(r)$. Denote
$\begin{eqnarray*}\displaystyle \begin{array}{l}{F}_{k,k+1}=\left\{(r,s):{a}_{k}\leqslant r\leqslant {a}_{k+1},\right.\\ \quad \left.{\varphi }_{k,k+1}(r)\leqslant s\leqslant {\varphi }_{1,d}(r)\right\}.\end{array}\end{eqnarray*}$
Hence
$\begin{eqnarray*}\mathrm{supp}\left({f}_{\langle {\boldsymbol{A}}\rangle ,\langle {{\boldsymbol{A}}}^{2}\rangle }^{(d)}\right)=\bigcup _{k=1}^{d-1}{F}_{k,k+1}.\end{eqnarray*}$
Now let $x={{\rm{\Delta }}}_{\psi }{\boldsymbol{A}}$. Since ${\left({{\rm{\Delta }}}_{\psi }{\boldsymbol{A}}\right)}^{2}=\langle {{\boldsymbol{A}}}^{2}\rangle -{\langle {\boldsymbol{A}}\rangle }^{2}$, we have $s={r}^{2}+{x}^{2}$. By employing the support of ${f}_{\langle {\boldsymbol{A}}\rangle ,\langle {{\boldsymbol{A}}}^{2}\rangle }^{(d)}$, we can derive the support of ${f}_{\langle {\boldsymbol{A}}\rangle ,{\rm{\Delta }}{\boldsymbol{A}}}^{(d)}(r,x)$ as follows. Denote
$\begin{eqnarray*}\displaystyle \begin{array}{l}{V}_{k,k+1}=\left\{(r,x):{a}_{k}\leqslant r\leqslant {a}_{k+1},\sqrt{({a}_{k+1}-r)(r-{a}_{k})}\right.\\ \quad \left.\leqslant x\leqslant \sqrt{({a}_{d}-r)(r-{a}_{1})}\right\},\end{array}\end{eqnarray*}$
then
$\begin{eqnarray*}\displaystyle \begin{array}{l}{\varphi }_{k,k+1}(r)\leqslant s={r}^{2}+{x}^{2}\leqslant {\varphi }_{1,d}(r)\\ \quad \Longleftrightarrow (r,x)\in {V}_{k,k+1}.\end{array}\end{eqnarray*}$
Therefore the support of ${f}_{\langle {\boldsymbol{A}}\rangle ,{\rm{\Delta }}{\boldsymbol{A}}}^{(d)}(r,x)$ is given by
$\begin{eqnarray*}\mathrm{supp}\left({f}_{\langle {\boldsymbol{A}}\rangle ,{\rm{\Delta }}{\boldsymbol{A}}}^{(d)}\right)=\bigcup _{k=1}^{d-1}{V}_{k,k+1}.\end{eqnarray*}$
This completes the proof.
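The support characterization just proved can be checked numerically. The following Python sketch is illustrative only (it is not part of the paper): the spectrum `a` and the sample size are arbitrary choices. It samples Haar-distributed pure states on ${{\mathbb{C}}}^{d}$, computes $(\langle {\boldsymbol{A}}\rangle ,{{\rm{\Delta }}}_{\psi }{\boldsymbol{A}})$, and verifies membership in $\bigcup_{k}{V}_{k,k+1}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative spectrum for A = diag(a_1, ..., a_d); any a_1 < ... < a_d will do.
a = np.array([0.0, 1.0, 3.0])
d = len(a)

def haar_pure_state(d, rng):
    """Draw a Haar-distributed pure state on C^d (normalized complex Gaussian vector)."""
    psi = rng.normal(size=d) + 1j * rng.normal(size=d)
    return psi / np.linalg.norm(psi)

def in_support(r, x, a, tol=1e-9):
    """Check whether (r, x) lies in the union of the regions V_{k,k+1}."""
    upper = np.sqrt(max((a[-1] - r) * (r - a[0]), 0.0))
    for k in range(len(a) - 1):
        if a[k] - tol <= r <= a[k + 1] + tol:
            lower = np.sqrt(max((a[k + 1] - r) * (r - a[k]), 0.0))
            if lower - tol <= x <= upper + tol:
                return True
    return False

for _ in range(10000):
    psi = haar_pure_state(d, rng)
    p = np.abs(psi) ** 2                # r_k = |psi_k|^2, a probability vector
    r = np.dot(a, p)                    # r = <A>
    s = np.dot(a ** 2, p)               # s = <A^2>
    x = np.sqrt(max(s - r ** 2, 0.0))   # x = Delta_psi A
    assert in_support(r, x, a)
print("all sampled (<A>, Delta A) pairs lie in the union of the V_{k,k+1}")
```

The bound $x\leqslant \sqrt{({a}_{d}-r)(r-{a}_{1})}$ is the same for every $k$, which is why `upper` is computed once outside the loop.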

For the joint PDF of uncertainties of multiple qudit observables acting on ${{\mathbb{C}}}^{d}$ ($d\geqslant 3$), say a pair of qudit observables (A, B), deriving the joint PDF ${f}_{{\rm{\Delta }}{\boldsymbol{A}},{\rm{\Delta }}{\boldsymbol{B}}}^{(d)}(x,y)$ is much more involved, owing to the difficulty of computing the Laplace transform and its inverse for ${f}_{{\rm{\Delta }}{\boldsymbol{A}},{\rm{\Delta }}{\boldsymbol{B}}}^{(d)}(x,y)$. The underlying obstacle is that the relationship between λk(αA + βB) and λk(A), λk(B), as $(\alpha ,\beta )$ varies over ${{\mathbb{R}}}^{2}$, is still unknown. A new method for this problem remains to be discovered.

5. Discussion and concluding remarks

Recall that the support supp(f) of a function f is the closure of the set of points at which f does not vanish. From theorem 1, we see that the support of ${f}_{{\rm{\Delta }}{\boldsymbol{A}}}^{(2)}$ is the closed interval $[0,\left|{\boldsymbol{a}}\right|]$. This is consistent with the fact that ΔψA ∈ [0, v(A)] for any pure state $\left|\psi \right\rangle $, where $v({\boldsymbol{A}}):=\tfrac{1}{2}({\lambda }_{\max }({\boldsymbol{A}})-{\lambda }_{\min }({\boldsymbol{A}}))$.
From proposition 5 and theorem 2, we can infer that, for d = 2, the set ${{ \mathcal U }}_{{\rm{\Delta }}{\boldsymbol{A}},{\rm{\Delta }}{\boldsymbol{B}}}^{({\rm{p}})}$ is exactly the solution set of the following inequality:
$\begin{eqnarray*}\displaystyle \begin{array}{l}{\left|{\boldsymbol{b}}\right|}^{2}{x}^{2}+{\left|{\boldsymbol{a}}\right|}^{2}{y}^{2}+2\left|\left\langle {\boldsymbol{a}},{\boldsymbol{b}}\right\rangle \right|\\ \quad \times \,\sqrt{({\left|{\boldsymbol{a}}\right|}^{2}-{x}^{2})({\left|{\boldsymbol{b}}\right|}^{2}-{y}^{2})}\\ \quad \geqslant {\left|{\boldsymbol{a}}\right|}^{2}{\left|{\boldsymbol{b}}\right|}^{2}+{\left|\left\langle {\boldsymbol{a}},{\boldsymbol{b}}\right\rangle \right|}^{2},\end{array}\end{eqnarray*}$
which is exactly the inequality obtained in [29] for mixed states. This indicates that, in the qubit situation, we have the following:

For a pair of qubit observables $({\boldsymbol{A}},{\boldsymbol{B}})$ acting on ${{\mathbb{C}}}^{2}$, it holds that

$\begin{eqnarray*}{{ \mathcal U }}_{{\rm{\Delta }}{\boldsymbol{A}},{\rm{\Delta }}{\boldsymbol{B}}}^{({\rm{p}})}={{ \mathcal U }}_{{\rm{\Delta }}{\boldsymbol{A}},{\rm{\Delta }}{\boldsymbol{B}}}^{({\rm{m}})}.\end{eqnarray*}$
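As an illustration, the qubit inequality above can be probed by Monte Carlo sampling over pure states. The following Python sketch is not part of the paper: the Bloch vectors `a` and `b` are arbitrary choices, and it relies on the standard qubit facts that, for ${\boldsymbol{A}}={a}_{0}{\mathbb{1}}+{\boldsymbol{a}}\cdot \sigma $ and a pure state with Bloch vector u, ${\left({{\rm{\Delta }}}_{\psi }{\boldsymbol{A}}\right)}^{2}={\left|{\boldsymbol{a}}\right|}^{2}-{({\boldsymbol{a}}\cdot u)}^{2}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative traceless parts a, b of qubit observables A = a0*I + a.sigma, B = b0*I + b.sigma.
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.6, 0.8, 0.0])

def deviation(u, v):
    """Delta_psi = sqrt(|v|^2 - (v.u)^2) for a pure state with Bloch vector u (|u| = 1)."""
    return np.sqrt(max(np.dot(v, v) - np.dot(v, u) ** 2, 0.0))

na2, nb2, ab = np.dot(a, a), np.dot(b, b), abs(np.dot(a, b))

for _ in range(10000):
    u = rng.normal(size=3)
    u /= np.linalg.norm(u)                     # uniform point on the Bloch sphere
    x, y = deviation(u, a), deviation(u, b)    # (Delta A, Delta B)
    lhs = nb2 * x ** 2 + na2 * y ** 2 \
        + 2 * ab * np.sqrt(max((na2 - x ** 2) * (nb2 - y ** 2), 0.0))
    assert lhs >= na2 * nb2 + ab ** 2 - 1e-9   # the region inequality
print("the region inequality holds for every sampled pure qubit state")
```

Bloch vectors lying in the plane spanned by a and b saturate the inequality, while out-of-plane vectors land in the interior, in line with ${{ \mathcal U }}^{({\rm{p}})}$ filling the whole region for qubits.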

One may wonder if this identity holds for general d ≥ 2, as the variance of A with respect to a mixed state can always be decomposed as a convex combination of some variances of A associated with pure states, see equation (3).
For multiple qudit observables Ak (k = 1, …, n) acting on ${{\mathbb{C}}}^{d}$, comparing the set ${{ \mathcal U }}_{{\rm{\Delta }}{{\boldsymbol{A}}}_{1},\ldots ,{\rm{\Delta }}{{\boldsymbol{A}}}_{n}}^{({\rm{p}})}$ with the set ${{ \mathcal U }}_{{\rm{\Delta }}{{\boldsymbol{A}}}_{1},\ldots ,{\rm{\Delta }}{{\boldsymbol{A}}}_{n}}^{({\rm{m}})}$ is an interesting problem. Unfortunately, theorem 3, together with the result obtained in [29], indicates that ${{ \mathcal U }}_{{\rm{\Delta }}{{\boldsymbol{A}}}_{1},{\rm{\Delta }}{{\boldsymbol{A}}}_{2},{\rm{\Delta }}{{\boldsymbol{A}}}_{3}}^{({\rm{m}})}={{ \mathcal U }}_{{\rm{\Delta }}{{\boldsymbol{A}}}_{1},{\rm{\Delta }}{{\boldsymbol{A}}}_{2},{\rm{\Delta }}{{\boldsymbol{A}}}_{3}}^{({\rm{p}})}$ does not hold in general. In fact, in the qubit situation $\partial {{ \mathcal U }}_{{\rm{\Delta }}{{\boldsymbol{A}}}_{1},{\rm{\Delta }}{{\boldsymbol{A}}}_{2},{\rm{\Delta }}{{\boldsymbol{A}}}_{3}}^{({\rm{m}})}={{ \mathcal U }}_{{\rm{\Delta }}{{\boldsymbol{A}}}_{1},{\rm{\Delta }}{{\boldsymbol{A}}}_{2},{\rm{\Delta }}{{\boldsymbol{A}}}_{3}}^{({\rm{p}})}$, that is, the boundary surface of ${{ \mathcal U }}_{{\rm{\Delta }}{{\boldsymbol{A}}}_{1},{\rm{\Delta }}{{\boldsymbol{A}}}_{2},{\rm{\Delta }}{{\boldsymbol{A}}}_{3}}^{({\rm{m}})}$ coincides with the pure-state uncertainty region. This also indicates that the following inclusion is proper in general for multiple observables,
$\begin{eqnarray*}{{ \mathcal U }}_{{\rm{\Delta }}{{\boldsymbol{A}}}_{1},\ldots ,{\rm{\Delta }}{{\boldsymbol{A}}}_{n}}^{({\rm{p}})}\subsetneq {{ \mathcal U }}_{{\rm{\Delta }}{{\boldsymbol{A}}}_{1},\ldots ,{\rm{\Delta }}{{\boldsymbol{A}}}_{n}}^{({\rm{m}})}.\end{eqnarray*}$
Based on this, it would be worthwhile to characterize the two extreme cases ${{ \mathcal U }}_{{\rm{\Delta }}{{\boldsymbol{A}}}_{1},\ldots ,{\rm{\Delta }}{{\boldsymbol{A}}}_{n}}^{({\rm{p}})}={{ \mathcal U }}_{{\rm{\Delta }}{{\boldsymbol{A}}}_{1},\ldots ,{\rm{\Delta }}{{\boldsymbol{A}}}_{n}}^{({\rm{m}})}$ and ${{ \mathcal U }}_{{\rm{\Delta }}{{\boldsymbol{A}}}_{1},\ldots ,{\rm{\Delta }}{{\boldsymbol{A}}}_{n}}^{({\rm{p}})}=\partial {{ \mathcal U }}_{{\rm{\Delta }}{{\boldsymbol{A}}}_{1},\ldots ,{\rm{\Delta }}{{\boldsymbol{A}}}_{n}}^{({\rm{m}})}$.
In addition, we also see that once we obtain the uncertainty regions for observables Ak, we can infer additive uncertainty relations such as
$\begin{eqnarray*}\displaystyle \begin{array}{l}\sum _{k=1}^{n}{\left({{\rm{\Delta }}}_{\rho }{{\boldsymbol{A}}}_{k}\right)}^{2}\geqslant \mathop{\min }\limits_{\rho \in {\rm{D}}\left({{\mathbb{C}}}^{d}\right)}\sum _{k=1}^{n}{\left({{\rm{\Delta }}}_{\rho }{{\boldsymbol{A}}}_{k}\right)}^{2}\\ \quad =\,\min \left\{\sum _{k=1}^{n}{x}_{k}^{2}:({x}_{1},\ldots ,{x}_{n})\in {{ \mathcal U }}_{{\rm{\Delta }}{{\boldsymbol{A}}}_{1},\ldots ,{\rm{\Delta }}{{\boldsymbol{A}}}_{n}}^{({\rm{m}})}\right\},\end{array}\end{eqnarray*}$
or
$\begin{eqnarray*}\displaystyle \begin{array}{l}\sum _{k=1}^{n}{{\rm{\Delta }}}_{\rho }{{\boldsymbol{A}}}_{k}\geqslant \mathop{\min }\limits_{\rho \in {\rm{D}}\left({{\mathbb{C}}}^{d}\right)}\sum _{k=1}^{n}{{\rm{\Delta }}}_{\rho }{{\boldsymbol{A}}}_{k}\\ \quad =\,\min \left\{\sum _{k=1}^{n}{x}_{k}:({x}_{1},\ldots ,{x}_{n})\in {{ \mathcal U }}_{{\rm{\Delta }}{{\boldsymbol{A}}}_{1},\ldots ,{\rm{\Delta }}{{\boldsymbol{A}}}_{n}}^{({\rm{m}})}\right\}.\end{array}\end{eqnarray*}$
Analogous optimization problems can also be considered for ${{ \mathcal U }}_{{\rm{\Delta }}{{\boldsymbol{A}}}_{1},\ldots ,{\rm{\Delta }}{{\boldsymbol{A}}}_{n}}^{({\rm{p}})}$. These results can be used to detect entanglement [10, 12]. The present results, together with those in [29], give a complete solution to the uncertainty regions and uncertainty relations for qubit observables.
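Such optimization-based lower bounds can also be estimated by direct sampling. The following Python sketch is purely illustrative (it is not the paper's method): it uses the Pauli observables, for which the sum of the three variances equals 2 on every pure qubit state, so the sampled minimum should recover that known additive bound.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pauli observables as a worked example; the same sampling applies to any tuple A_1, ..., A_n.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
obs = [X, Y, Z]

def variance(A, psi):
    """(Delta_psi A)^2 = <A^2> - <A>^2 for a normalized pure state psi."""
    mean = np.vdot(psi, A @ psi).real
    mean_sq = np.vdot(psi, (A @ A) @ psi).real
    return mean_sq - mean ** 2

best = np.inf
for _ in range(20000):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)      # Haar-distributed pure qubit state
    best = min(best, sum(variance(A, psi) for A in obs))

# For the Paulis the summed variance is 2 on every pure qubit state,
# so the sampled minimum should sit at 2 up to floating-point error.
print(f"estimated additive lower bound: {best:.6f}")
```

Sampling gives only an upper estimate of the true minimum; certifying a state-independent bound still requires the analytic characterization of the uncertainty region.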
We hope the results obtained in the present paper can shed new light on the related problems in quantum information theory. Our approach may be applied to the study of PDFs in higher dimensional spaces. It would also be interesting to apply PDFs to measurement and/or quantum channel uncertainty relations.

This work is supported by the NSF of China under Grant Nos. 11971140, 12075159, and 12171044, Beijing Natural Science Foundation (Z190005), the Academician Innovation Platform of Hainan Province, and Academy for Multidisciplinary Studies, Capital Normal University. LZ is also funded by Natural Science Foundations of Hubei Province Grant No. 2020CFB538.

[1] Heisenberg W 1927 Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik Z. Phys. 43 172–198
[2] Dammeier L, Schwonnek R, Werner R F 2015 Uncertainty relations for angular momentum New J. Phys. 17 093046
[3] Li J L, Qiao C F 2015 Reformulating the quantum uncertainty relation Sci. Rep. 5 12708
[4] de Guise H, Maccone L, Sanders B C, Shukla N 2018 State-independent uncertainty relations Phys. Rev. A 98 042121
[5] Giorda P, Maccone L, Riccardi A 2019 State-independent uncertainty relations from eigenvalue minimization Phys. Rev. A 99 052121
[6] Xiao Y, Guo C, Meng F, Jing N, Yung M-H 2019 Incompatibility of observables as state-independent bound of uncertainty relations Phys. Rev. A 100 032118
[7] Sponar S, Danner A, Obigane K, Hack S, Hasegawa Y 2020 Experimental test of tight state-independent preparation uncertainty relations for qubits Phys. Rev. A 102 042204
[8] Seife C 2005 Do deeper principles underlie quantum uncertainty and nonlocality? Science 309 98
[9] Hofmann H F, Takeuchi S 2003 Violation of local uncertainty relations as a signature of entanglement Phys. Rev. A 68 032103
[10] Gühne O 2004 Characterizing entanglement via uncertainty relations Phys. Rev. Lett. 92 117903
[11] Gühne O, Tóth G 2009 Entanglement detection Phys. Rep. 474 1
[12] Schwonnek R, Dammeier L, Werner R F 2017 State-independent uncertainty relations and entanglement detection in noisy systems Phys. Rev. Lett. 119 170404
[13] Qian C, Li J-L, Qiao C-F 2018 State-independent uncertainty relations and entanglement detection Quantum Inf. Process. 17 84
[14] Zhao Y-Y, Xiang G Y, Hu X M, Liu B H, Li C F, Guo G C, Schwonnek R, Wolf R 2019 Entanglement detection by violations of noisy uncertainty relations: a proof of principle Phys. Rev. Lett. 122 220401
[15] Oppenheim J, Wehner S 2010 The uncertainty principle determines the nonlocality of quantum mechanics Science 330 1072–1074
[16] Kennard E H 1927 Zur Quantenmechanik einfacher Bewegungstypen Z. Phys. 44 326–352
[17] Weyl H 1928 Gruppentheorie und Quantenmechanik (Leipzig: Hirzel)
[18] Robertson H P 1929 The uncertainty principle Phys. Rev. 34 163–164
[19] Schrödinger E 1930 Zum Heisenbergschen Unschärfeprinzip Sitzungsberichte der Preussischen Akademie der Wissenschaften, Physikalisch-Mathematische Klasse 14 296–303
[20] Busch P, Reardon-Smith O 2019 On quantum uncertainty relations and uncertainty regions arXiv:1901.03695
[21] Zhang L, Wang J 2018 Average of uncertainty product for bounded observables Open Syst. Inf. Dyn. 25 1850008
[22] Maccone L, Pati A K 2014 Stronger uncertainty relations for all incompatible observables Phys. Rev. Lett. 113 260401
[23] Hastings M B 2009 Superadditivity of communication capacity using entangled inputs Nat. Phys. 5 255–257
[24] Christandl M, Doran B, Kousidis S, Walter M 2014 Eigenvalue distributions of reduced density matrices Commun. Math. Phys. 332 1–52
[25] Dartois S, Lionni L, Nechita I 2020 The joint distribution of the marginals of multipartite random quantum states Random Matrices: Theory Appl. 9 2050010
[26] Zhang L, Wang J, Chen Z H 2018 Spectral density of mixtures of random density matrices for qubits Phys. Lett. A 382 1516–1523
[27] Zhang L, Jiang Y X, Wu J D 2019 Duistermaat–Heckman measure and the mixture of quantum states J. Phys. A: Math. Theor. 52 495203
[28] Venuti L C, Zanardi P 2013 Probability density of quantum expectation values Phys. Lett. A 377 1854–1861
[29] Zhang L, Luo S, Fei S-M, Wu J 2021 Uncertainty regions of observables and state-independent uncertainty relations Quantum Inf. Process. 20 357
[30] Hoskins R F 2009 Delta Functions (Amsterdam: Elsevier)
[31] Zhang L 2021 Dirac delta function of matrix argument Int. J. Theor. Phys. 60 2445–2472
[32] Bauer M, Zuber J-B 2020 On products of delta distributions and resultants SIGMA 16 083
[33] Petz D, Tóth G 2012 Matrix variances with projections Acta Sci. Math. 78 683–688
[34] Bhatia R, Davis C 2000 A better bound on the variance Am. Math. Mon. 107 353–357
[35] Zhang L, Ma Z, Chen Z, Fei S-M 2018 Coherence generating power of unitary transformations via probabilistic average Quantum Inf. Process. 17 186
[36] Gutkin E, Życzkowski K 2013 Joint numerical ranges, quantum maps, and joint numerical shadows Linear Algebr. Appl. 438 2394–2404
[37] Gallay T, Serre D 2012 The numerical measure of a complex matrix Commun. Pure Appl. Math. 65 287–336
