Abstract
Several applications in medical imaging and nondestructive material testing lead to inverse elliptic coefficient problems, where an unknown coefficient function in an elliptic PDE is to be determined from partial knowledge of its solutions. This is usually a highly nonlinear, ill-posed inverse problem, for which unique reconstructability results, stability estimates and globally convergent numerical methods are very hard to achieve. The aim of this note is to point out a new connection between inverse coefficient problems and semidefinite programming that may help to address these challenges. We show that an inverse elliptic Robin transmission problem with finitely many measurements can be equivalently rewritten as a uniquely solvable convex nonlinear semidefinite optimization problem. This makes it possible to explicitly estimate the number of measurements required to achieve a desired resolution, to derive an error estimate for noisy data, and to overcome the problem of local minima that usually arises in optimization-based approaches to inverse coefficient problems.
Introduction
Inverse elliptic coefficient problems arise in a number of applications in medical imaging and nondestructive material testing. Arguably the most prominent example is the Calderón problem [5, 6], which models electrical impedance tomography (EIT), where the electrical conductivity distribution inside a patient is to be determined from current/voltage measurements on the patient's surface, cf. [1] for an overview. Theoretical uniqueness questions for inverse elliptic coefficient problems have mostly been studied in the idealized infinite-dimensional setting where (intuitively speaking) the unknown coefficient function is to be determined with infinite resolution from infinitely many measurements, cf., e.g., [7, 12, 15]. Lipschitz stability results have been obtained for finitely many unknowns and infinitely many measurements in, e.g., [3, 4, 11]. Recently there has been progress on the practically very relevant case of finitely many unknowns and measurements, cf., e.g., [2, 8, 14]. But little is known yet about explicitly characterizing the number of measurements required for a given desired resolution.
Practical reconstruction algorithms for inverse coefficient problems are usually based on regularized data fitting, which formulates the inverse problem as a minimization problem for a residual functional together with a regularization term. As the residual formulation is typically nonconvex, this approach suffers severely from the problem of local minima. Convexification approaches for inverse coefficient problems have been studied in, e.g., [13]. But, to the knowledge of the author, no equivalent convex reformulations of inverse coefficient problems with finitely many measurements have been found yet.
The aim of this work is to show that a uniquely solvable convex reformulation of an inverse coefficient problem is indeed possible if enough measurements are taken, and that the required number of measurements can be explicitly characterized. More precisely, we state a criterion that guarantees unique solvability and ensures that the solution is the unique minimizer of a linear cost functional under a convex nonlinear semidefinite constraint. For a given desired resolution and a given number of measurements, the criterion can be explicitly checked by calculating finitely many forward solutions. The criterion is fulfilled if sufficiently many measurements are taken. Thus, the required number of measurements can be found by starting with a low number and incrementally increasing it until the criterion is fulfilled. The criterion also yields explicit error estimates for noisy data.
This work is closely related to [10], which gives an explicit construction of special measurements that uniquely determine the same number of unknowns in an inverse elliptic coefficient problem by a globally convergent Newton root-finding method. We also formulate our result for the same inverse Robin transmission problem as in [10], which is motivated by EIT-based corrosion detection and may be considered a simpler variant of the Calderón problem. Our main advance in this work is the step from Newton root-finding to a convex semidefinite program. This allows utilizing a redundant set of given measurements and eliminates the need for specially constructed measurements. It also simplifies the underlying theory, as it no longer requires simultaneously localized potentials, and allows the criterion to be written using the Loewner order, which arises very naturally in elliptic inverse coefficient problems with finite resolution and finitely many measurements [9]. Also, to the knowledge of the author, this work establishes the first connection between the emerging research fields of semidefinite optimization and inverse coefficient problems, which might bring new inspiration to both fields.
Inverse problems for convex monotone functions
Let “\(\le \)” denote the entrywise order on \(\mathbb {R}^n\), and “\(\preceq \)” denote the Loewner order on the space of symmetric matrices \(\mathbb {S}_m\subseteq \mathbb {R}^{m\times m}\), with \(n,m\in \mathbb {N}\). For \(A\in \mathbb {S}_m\) the largest eigenvalue is denoted by \(\lambda _{\max }(A)\).
Given a priori bounds \(b>a>0\), we consider the inverse problem to
where \(F:\ \mathbb {R}^n_+\rightarrow \mathbb {S}_m\) is assumed to be a continuously differentiable, convex and monotonically nonincreasing matrix-valued function, i.e., for all \(x,x^{(0)}\in \mathbb {R}^n_+\), and all \(0\le d\in \mathbb {R}^n\),
Such problems naturally arise in inverse coefficient problems in elliptic PDEs with finite resolution and finitely many measurements [9].
Note that, here and in the following, we write the derivative of F at a point \(x\in \mathbb {R}^n_+\) as \(F'(x)\in \mathcal {L}(\mathbb {R}^n,\mathbb {S}_m)\), so that \(F'(x)d\) is a symmetric \(m\times m\) matrix for all \(d\in \mathbb {R}^n\). Also note that a continuously differentiable function \(F:\ \mathbb {R}^n_+\rightarrow \mathbb {S}_m\) fulfills (2) and (3) if and only if F fulfills
cf., e.g., [9, Lemma 2] for the only-if part. The if part follows immediately from writing the directional derivative as a limit of difference quotients.
In this section we derive a sufficient criterion for unique solvability of the finite-dimensional inverse problem (1) and for reformulating it as a convex optimization problem. Our criterion may appear technical at first glance, but we stress that it only requires finitely many evaluations of directional derivatives of F, so that it can easily be checked in practice. Moreover, for the inverse Robin transmission problem considered in Sect. 3, we will show that the criterion is always fulfilled if sufficiently many measurements are taken. Hence, the criterion allows one to constructively determine the number of measurements required for a certain resolution and for convex reformulation, by simply increasing the number of measurements until the criterion is fulfilled.
To formulate our result, let \(e_j\in \mathbb {R}^n\) denote the jth unit vector, let \(\mathbb {1}\in \mathbb {R}^n\) denote the vector of ones, and let \(e_j':=\mathbb {1}-e_j\) be the vector containing zero in the jth component and ones in all others. For a matrix \(A\in \mathbb {R}^{n\times n}\), \(\Vert A\Vert _2\) denotes the spectral norm, and for a number \(\lambda \in \mathbb {R}\), \(\lceil \lambda \rceil \) denotes the ceiling function, i.e., the least integer greater than or equal to \(\lambda \).
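The notation above, together with the constant \(K\) appearing in Theorem 1 below, translates directly into a few numerical helpers. The following sketch is our own illustration (the function names are ours); it assumes the formula \(K=\lceil 4(n-1)b/a\rceil -4n+5\) stated in Theorem 1.

```python
import math

import numpy as np

def unit(j, n):
    """The jth standard unit vector e_j of R^n (j is 1-based, as in the text)."""
    e = np.zeros(n)
    e[j - 1] = 1.0
    return e

def unit_complement(j, n):
    """e_j' = 1 - e_j: zero in the jth component and ones in all others."""
    return np.ones(n) - unit(j, n)

def K_of(n, a, b):
    """The constant K = ceil(4(n-1)b/a) - 4n + 5 from Theorem 1."""
    return math.ceil(4 * (n - 1) * b / a) - 4 * n + 5

# the spectral norm of a matrix A is np.linalg.norm(A, 2)
```

For instance, for \(n=4\) unknowns and a priori bounds \(a=1\), \(b=2\), this gives \(K=13\).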
Theorem 1
Let \(F:\ \mathbb {R}^n_+\rightarrow \mathbb {S}_m\), \(n,m\ge 2\), be continuously differentiable, convex and monotonically nonincreasing, and \(b\ge a>0\). If
where
and \(K:=\lceil \frac{4(n-1)b}{a}\rceil -4n+5\in \mathbb {N}\), then the following holds:

(a)
\(\hat{x}\in [a,b]^n\) is uniquely determined by knowledge of \(\hat{Y}:=F(\hat{x})\). \(\hat{x}\) is the unique minimizer of the convex optimization problem
$$\begin{aligned} \texttt {minimize} \quad \ \Vert x\Vert _1=\sum _{j=1}^n x_j \ \texttt { subject to } \ x\in [a,b]^n,\ F(x)\preceq \hat{Y}. \end{aligned}$$(4) 
(b)
For \(\hat{x}\in [a,b]^n\), \(\hat{Y}:=F(\hat{x})\), \(\delta >0\), and \(Y^\delta \in \mathbb {S}_m\) with \(\Vert \hat{Y}-Y^\delta \Vert _2\le \delta \), the convex optimization problem
$$\begin{aligned} \texttt {minimize } \quad \ \Vert x\Vert _1=\sum _{j=1}^n x_j \ \texttt { subject to } \ x\in [a,b]^n, \ F(x)\preceq Y^\delta +\delta I \end{aligned}$$(5)possesses a minimum, and every such minimum \(x^\delta \) fulfills
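The structure of Theorem 1 can be checked on a small synthetic example. The toy forward map \(F(x)=\sum_j A_j/x_j\) with rank-one positive semidefinite matrices \(A_j=a_ja_j^T\) is convex and monotonically nonincreasing in the sense of (2) and (3); the map, the dimensions and the numbers below are our own illustrative choices, not taken from the results above. If the vectors \(a_j\) are linearly independent, the constraint \(F(x)\preceq \hat{Y}\) forces \(x\ge \hat{x}\) entrywise, so the \(\ell^1\)-minimal feasible point is exactly \(\hat{x}\), and \(\hat{x}\) remains feasible for the noisy constraint in (5):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 5                                  # n unknowns, m (redundant) measurements
vecs = rng.standard_normal((n, m))
A = [np.outer(v, v) for v in vecs]           # PSD rank-one "sensitivity" matrices

def F(x):
    """Toy forward map F(x) = sum_j A_j / x_j: convex and monotonically
    nonincreasing with respect to the Loewner order for x > 0."""
    return sum(Aj / xj for Aj, xj in zip(A, x))

def lam_max(M):
    return np.linalg.eigvalsh(M)[-1]         # largest eigenvalue of a symmetric matrix

x_hat = np.array([1.2, 1.7, 1.4])            # "true" coefficient vector
Y_hat = F(x_hat)                             # exact data

# x_hat itself satisfies the semidefinite constraint F(x) <= Y_hat ...
assert lam_max(F(x_hat) - Y_hat) <= 1e-10

# ... while decreasing any single component (which lowers the l1-objective)
# leaves the constraint set: for linearly independent a_j, the constraint
# F(x) <= Y_hat forces x >= x_hat entrywise, so x_hat is the minimizer.
for j in range(n):
    x = x_hat.copy()
    x[j] -= 0.05
    assert lam_max(F(x) - Y_hat) > 0         # infeasible

# noisy data as in part (b): x_hat stays feasible for F(x) <= Y_delta + delta*I
delta = 0.05
E = rng.standard_normal((m, m)); E = (E + E.T) / 2
Y_delta = Y_hat + delta * E / np.linalg.norm(E, 2)   # ||Y_hat - Y_delta||_2 <= delta
assert lam_max(F(x_hat) - Y_delta - delta * np.eye(m)) <= 1e-10
```

The eigenvalue checks mirror the role of the Loewner order: \(F(x)\preceq \hat{Y}\) holds if and only if \(\lambda_{\max }(F(x)-\hat{Y})\le 0\).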
To prove Theorem 1 we will show the following lemmas.
Lemma 1
Let \(F:\ \mathbb {R}^n_+\rightarrow \mathbb {S}_m\), \(n,m\ge 2\), be continuously differentiable, convex and monotonically nonincreasing. If, for some \(x\in \mathbb {R}^n_+\),
then, for all \(d\in \mathbb {R}^n\), and \(y\in \mathbb {R}^n_+\),
with \(\lambda :=\min _{j=1,\ldots ,n} \lambda _\text {max}(F'(x)((n-1)e_j'-e_j))\).
Proof
We will show that, for all \(d\in \mathbb {R}^n\),
which clearly implies (7). (8) then follows from (7) by the convexity property \(F'(x)(y-x)\preceq F(y)-F(x)\).
We prove (9) by contraposition and assume that there exists an index \(k\in \{1,\ldots ,n\}\) with
We have that either
and in both cases it follows that
Hence, by (6) and monotonicity,
which yields that
so that (9) is proven. \(\square \)
Remark 1
Lemma 1 can be considered a converse monotonicity result, as it yields that, for all \(y\in \mathbb {R}^n_+\), with \(y\ne x\),
Lemma 2
Let \(F:\ \mathbb {R}^n_+\rightarrow \mathbb {S}_m\), \(n,m\ge 2\), be continuously differentiable, convex and monotonically nonincreasing, and \(b\ge a>0\). If
where \(z_{j,k}\in \mathbb {R}_+^n\), \(d_j\in \mathbb {R}^n\), and \(K\in \mathbb {N}\) are defined as in Theorem 1, then
Proof
We will show that for all \(j\in \{1,\ldots ,n\}\) and \(x\in [a,b]^n\), there exists \(t\in [a+\frac{a}{2(n-1)},\ b+\frac{a}{2(n-1)}]\subset \mathbb {R}\), so that, for all \(0\le \delta \le \frac{a}{4(n-1)}\),
Since \(a+K\frac{a}{4(n-1)}\ge b+\frac{a}{4(n-1)}\), we have that for every \(t\in [a+\frac{a}{2(n-1)},\ b+\frac{a}{2(n-1)}]\), there exists \(k\in \{2,\ldots ,K\}\), so that
Hence, if (10) is proven, then
so that the assertion follows.
To prove (10), let \(j\in \{1,\ldots ,n\}\), and \(x\in [a,b]^n\). We define \( t:=x_j+\frac{a}{2(n-1)}. \) Then, for all \(0\le \delta \le \frac{a}{4(n-1)}\)
and
so that we obtain from monotonicity (2) and convexity (3)
which proves (10) and thus the assertion. \(\square \)
Proof of Theorem 1
Under the assumptions of Theorem 1, it follows from Lemma 2 that the assumptions of Lemma 1 are fulfilled, so that (8) holds for all \(x,y\in [a,b]^n\). In particular, this yields that \(\hat{Y}:=F(\hat{x})\) uniquely determines \(\hat{x}\in [a,b]^n\). Moreover, for every \(x\in [a,b]^n\) with \(x\ne \hat{x}\) and \(F(x)\preceq \hat{Y}=F(\hat{x})\), we obtain from Remark 1 that
which shows that \(\hat{x}\) is the unique minimizer of (4). This proves Theorem 1(a). To prove Theorem 1(b), we note that the set of all \(x\in [a,b]^n\) with \(F(x)\preceq Y^\delta +\delta I\) is compact and nonempty since it contains \(\hat{x}\). Hence, at least one minimizer of (5) exists. Every minimizer \(x^\delta \in [a,b]^n\) fulfills
If \(2\delta < \frac{\lambda \Vert x^\delta -\hat{x}\Vert _\infty }{n-1}\), then (8) would imply that
which contradicts the minimality of \(x^\delta \). Hence \(\Vert x^\delta -\hat{x}\Vert _\infty \le \frac{2\delta (n-1)}{\lambda }\). \(\square \)
Application to an inverse elliptic coefficient problem
We now study the problem of determining a Robin transmission coefficient in an elliptic PDE from finitely many measurements. Using Theorem 1 we will show that this inverse coefficient problem can be rewritten as a uniquely solvable convex nonlinear semidefinite optimization problem if enough measurements are used. This also gives a constructive criterion for deciding whether a certain number of measurements suffices to determine the Robin parameter with a given desired resolution by convex optimization, and yields an error estimate for noisy data.
The infinite-dimensional inverse Robin transmission problem
Let \(\varOmega \subset \mathbb {R}^d\) (\(d\ge 2\)) be a bounded domain and \(D\subset \varOmega \) be an open subset with \(\overline{D}\subset \varOmega \). \(\varOmega \) and D are assumed to have Lipschitz boundaries, \(\partial \varOmega \) and \(\varGamma :=\partial D\), and \(\varOmega {\setminus }D\) is assumed to be connected. \(L^\infty _+(\varGamma )\) denotes the subset of functions in \(L^\infty (\varGamma )\) with positive essential infima.
We consider the inverse problem of recovering the coefficient \(\gamma \in L^\infty _+(\varGamma )\) in the elliptic Robin transmission problem
from the Neumann–Dirichlet operator
Using the Lax–Milgram theorem and the compactness of the trace operator from \(H^{1}(\varOmega )\) to \(L^2(\partial \varOmega )\), it easily follows that (11)–(14) is uniquely solvable, and that \(\varLambda (\gamma )\in \mathcal {L}(L^2(\partial \varOmega ))\) is self-adjoint and compact.
We summarize and reformulate some known results on the Neumann–Dirichlet operator that motivate why the corresponding finite-dimensional inverse problem can be treated with the methods from Sect. 2. In the following theorem, “\(\le \)” is to be understood pointwise almost everywhere for \(L^\infty \)-functions, and “\(\succeq \)” is the Loewner order on the space of self-adjoint operators.
Theorem 2
\(\varLambda :\ L^\infty _+(\varGamma )\rightarrow \mathcal {L}(L^2(\partial \varOmega ))\) is Fréchet differentiable. Moreover,

(a)
\(\varLambda \) is monotonically nonincreasing and convex, i.e.,
$$\begin{aligned} \varLambda '(\gamma )\delta&\preceq 0 \quad&\text {for all }\gamma \in L^\infty _+(\varGamma ),\ 0\le \delta \in L^\infty (\varGamma ),\\ \varLambda (\gamma )-\varLambda (\gamma ^{(0)})&\succeq \varLambda '(\gamma ^{(0)}) (\gamma -\gamma ^{(0)}) \quad&\text {for all }\gamma ,\gamma ^{(0)}\in L^\infty _+(\varGamma ). \end{aligned}$$ 
(b)
For all \(\gamma \in L^\infty _+(\varGamma )\), \(C>0\), and \(M\subseteq \varGamma \) measurable with positive measure,
$$\begin{aligned} \varLambda '(\gamma )(C\chi _{\varGamma {\setminus } M}-\chi _M)\not \preceq 0. \end{aligned}$$ 
(c)
For all \(\gamma _1,\gamma _2\in L^\infty _+(\varGamma )\),
$$\begin{aligned} \gamma _1\le \gamma _2 \quad \text { if and only if } \quad \varLambda (\gamma _1)\succeq \varLambda (\gamma _2). \end{aligned}$$In particular, \(\varLambda (\gamma )\) uniquely determines \(\gamma \).
Proof
Fréchet differentiability, monotonicity and convexity of \(\varLambda \) are shown in [10, Lemma 5], cf. also [11, Lemma 4.1]. (b) follows from the localized potentials result in [11, Lemma 4.3]. The “only if” part in (c) follows from (a), and the “if” part in (c) follows easily from combining (b) with (a). \(\square \)
The inverse problem with finitely many measurements
We now consider the inverse Robin transmission problem with finite resolution and finitely many measurements as in [10]. We assume that the unknown coefficient function \(\gamma \in L^\infty _+(\varGamma )\) is piecewise constant on an a priori known partition of \(\varGamma \), i.e.
where \(\varGamma _1,\ldots ,\varGamma _n\), \(n\ge 2\), are pairwise disjoint measurable subsets of \(\varGamma \). For ease of notation, we identify a piecewise constant function \(\gamma \in L^\infty (\varGamma )\) with the vector \(\gamma =(\gamma _1,\ldots ,\gamma _n)^T\in \mathbb {R}^n\) in the following. We also assume that we know a priori bounds \(b>a>0\), so that \(\gamma \in [a,b]^n\).
We aim to reconstruct \(\gamma \in [a,b]^n\) from finitely many measurements of \(\varLambda (\gamma )\). More precisely, we assume that \((g_j)_{j\in \mathbb {N}}\subseteq L^2(\partial \varOmega )\) has dense span in \(L^2(\partial \varOmega )\), and that we can measure
for some number \(m\in \mathbb {N}\). Thus, the question of whether a certain number of measurements determines the unknown coefficient with a certain resolution can be written as the problem to
Using our results from Sect. 2 we can now show that this inverse problem is uniquely solvable if sufficiently many measurements are used, and that it can be equivalently reformulated as a convex semidefinite program.
Theorem 3

(a)
If \(m\in \mathbb {N}\) is sufficiently large, then \(\hat{Y}:=F(\hat{\gamma })\in \mathbb {S}_m\) uniquely determines \(\hat{\gamma }\in [a,b]^n\), and \(\hat{\gamma }\) is the unique minimizer of the convex semidefinite optimization problem
$$\begin{aligned} \texttt {minimize } \quad \ \Vert \gamma \Vert _1=\sum _{j=1}^n \gamma _j \ \texttt { subject to } \ \gamma \in [a,b]^n,\ F(\gamma )\preceq \hat{Y}. \end{aligned}$$ 
(b)
The assertion in (a) holds if all matrices \(F'(z_{j,k})d_j\in \mathbb {S}_m\) (with \(z_{j,k}\in \mathbb {R}_+^n\), \(d_j\in \mathbb {R}^n\), and \(K\in \mathbb {N}\) as in Theorem 1) possess at least one positive eigenvalue. This criterion is fulfilled for sufficiently large \(m\in \mathbb {N}\). Moreover, for \(\delta >0\), and \(Y^\delta \in \mathbb {S}_m\) with \(\Vert \hat{Y}-Y^\delta \Vert _2\le \delta \), the convex optimization problem
$$\begin{aligned} \texttt {minimize } \quad \ \Vert \gamma \Vert _1=\sum _{j=1}^n \gamma _j \ \texttt { subject to } \ \gamma \in [a,b]^n,\ F(\gamma )\preceq Y^\delta +\delta I \end{aligned}$$possesses a minimum, and every such minimum \(\gamma ^\delta \) fulfills
Proof
\(F(\gamma )\) is a symmetric matrix since \(\varLambda (\gamma )\) is self-adjoint. Fréchet differentiability, monotonicity and convexity of \(F:\ \mathbb {R}^n_+\rightarrow \mathbb {S}_m\) immediately follow from the corresponding properties of \(\varLambda \) in Theorem 2. For all \(j=1,\ldots ,n\), \(k=2,\ldots ,K\), we have that \(\varLambda '(z_{j,k})d_j\not \preceq 0\) by Theorem 2(b). By density, it follows that for all \(j=1,\ldots ,n\), \(k=2,\ldots ,K\) there exists \(m\in \mathbb {N}\) so that \(F'(z_{j,k})d_j\not \preceq 0\), and since there are only finitely many such combinations of j and k, there exists \(m\in \mathbb {N}\) so that all these matrices possess a positive eigenvalue. Hence, the assertions follow from Theorem 1. \(\square \)
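As a practical matter, the criterion in Theorem 3(b) amounts to finitely many eigenvalue checks that can be wrapped in a loop over the number of measurements m. The sketch below assumes a user-supplied callable `Fprime(z, d)` that assembles the matrix \(F'(z)d\) from forward PDE solves (not implemented here); the stub used at the end is purely hypothetical and only demonstrates the loop.

```python
import numpy as np

def criterion_holds(Fprime, points_and_dirs):
    """Theorem 3(b) criterion: every matrix F'(z_{j,k}) d_j must possess at
    least one positive eigenvalue (i.e., F'(z_{j,k}) d_j is not Loewner <= 0)."""
    return all(np.linalg.eigvalsh(Fprime(z, d))[-1] > 0
               for z, d in points_and_dirs)

def required_measurements(make_Fprime, points_and_dirs, m_start=2, m_max=64):
    """Increase the number of measurements m until the criterion holds.
    make_Fprime(m) should return the derivative map of the discretized forward
    operator built from the first m boundary currents g_1, ..., g_m.
    Returns None if the criterion is not reached within the tested range."""
    for m in range(m_start, m_max + 1):
        if criterion_holds(make_Fprime(m), points_and_dirs):
            return m
    return None

# hypothetical stub standing in for the PDE-based derivative: its largest
# eigenvalue becomes positive once m >= 5, so the loop returns 5
stub = lambda m: (lambda z, d: (m - 4.5) * np.eye(3))
pairs = [(None, None)]
assert required_measurements(stub, pairs) == 5
```

In a real computation, `points_and_dirs` would enumerate the pairs \((z_{j,k}, d_j)\) from Theorem 1, and each call to `Fprime` would require one batch of forward solutions, matching the remark that the criterion only needs finitely many forward solves.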
References
 1.
Adler, A., Gaburro, R., Lionheart, W.: Electrical impedance tomography. In: O. Scherzer (ed.) Handbook of Mathematical Methods in Imaging, pp. 701–762. Springer (2015)
 2.
Alberti, G.S., Santacesaria, M.: Calderón’s inverse problem with a finite number of measurements. Forum Math. Sigma 7, e35 (2019)
 3.
Alessandrini, G., Vessella, S.: Lipschitz stability for the inverse conductivity problem. Adv. Appl. Math. 35(2), 207–241 (2005)
 4.
Beretta, E., de Hoop, M.V., Francini, E., Vessella, S.: Stable determination of polyhedral interfaces from boundary data for the Helmholtz equation. Comm. Partial Differential Equations 40(7), 1365–1392 (2015)
 5.
Calderón, A.P.: On an inverse boundary value problem. In: W.H. Meyer, M.A. Raupp (eds.) Seminar on Numerical Analysis and its Application to Continuum Physics, pp. 65–73. Brasil. Math. Soc., Rio de Janeiro (1980)
 6.
Calderón, A.P.: On an inverse boundary value problem. Comput. Appl. Math. 25(2–3), 133–138 (2006)
 7.
Harrach, B.: On uniqueness in diffuse optical tomography. Inverse Problems 25(5), 055010 (2009)
 8.
Harrach, B.: Uniqueness and Lipschitz stability in electrical impedance tomography with finitely many electrodes. Inverse Problems 35(2), 024005 (2019)
 9.
Harrach, B.: An introduction to finite element methods for inverse coefficient problems in elliptic PDEs. Jahresber. Dtsch. Math. Ver. 123(3), 183–210 (2021)
 10.
Harrach, B.: Uniqueness, stability and global convergence for a discrete inverse elliptic Robin transmission problem. Numer. Math. 147, 29–70 (2021)
 11.
Harrach, B., Meftahi, H.: Global uniqueness and Lipschitz stability for the inverse Robin transmission problem. SIAM J. Appl. Math. 79(2), 525–550 (2019)
 12.
Kenig, C., Salo, M.: Recent progress in the Calderón problem with partial data. Contemp. Math 615, 193–222 (2014)
 13.
Klibanov, M.V., Li, J., Zhang, W.: Convexification of electrical impedance tomography with restricted Dirichlet-to-Neumann map data. Inverse Problems 35(3), 035005 (2019)
 14.
Rüland, A., Sincich, E.: Lipschitz stability for the finite dimensional fractional Calderón problem with finite Cauchy data. Inverse Probl. Imaging 13(5), 1023–1044 (2019)
 15.
Uhlmann, G.: Electrical impedance tomography and Calderón’s problem. Inverse Problems 25(12), 123011 (2009)
Harrach, B. Solving an inverse elliptic coefficient problem by convex nonlinear semidefinite programming. Optim Lett (2021). https://doi.org/10.1007/s11590-021-01802-4
Keywords
 Inverse Problem
 Finitely many measurements
 Monotonicity
 Convexity
 Loewner order
Mathematics Subject Classification
 35R30
 90C22