2. Cone
Definition: A set $\mathcal{K}$ is called a cone if for all $x\in \mathcal{K}$ and $\theta\geq 0$, it holds that \begin{gather*} \theta x\in \mathcal{K}. \end{gather*}
Definition: Let $\mathcal{K}$ be a cone. The dual cone $\mathcal{K}^*$ is \begin{gather*} \mathcal{K}^*:= \left\{y\mid \langle x,y\rangle \geq 0,\ \forall x\in\mathcal{K}\right\}. \end{gather*}
Definition: Let $\mathcal{K}$ be a cone. The polar cone $\mathcal{K}^o$ is \begin{gather*} \mathcal{K}^o:= \left\{y\mid \langle x,y\rangle \leq 0,\ \forall x\in\mathcal{K}\right\}. \end{gather*}
Clearly, by definition, $\mathcal{K}^o=-\mathcal{K}^*$.
Definition: Let $C$ be a set. For any $x\in C$, the normal cone $\mathcal{N}_C(x)$ at this point is: \begin{gather*} \mathcal{N}_C(x):= \left\{y\mid \langle y,c-x\rangle\leq 0,\ \forall c\in C\right\}. \end{gather*}
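As a concrete example of these definitions (a standard one, added here for illustration), take $\mathcal{K}=\mathbb{R}^n_+$, the nonnegative orthant. Testing against the standard basis vectors $e_i\in\mathcal{K}$ forces $y_i\geq 0$, and conversely any $y\geq 0$ satisfies $\langle x,y\rangle\geq 0$ for all $x\geq 0$, so
\begin{gather*}
\left(\mathbb{R}^n_+\right)^*=\mathbb{R}^n_+,\qquad \left(\mathbb{R}^n_+\right)^o=-\mathbb{R}^n_+,
\end{gather*}
i.e., the nonnegative orthant is self-dual.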
Proof
Using the definition, it is easy to verify that $\mathcal{K}\subseteq \mathcal{K}^{**}$. To show that they are actually the same set (here $\mathcal{K}$ is closed and convex, which the separation step requires), suppose there exists $x\in\mathcal{K}^{**}$ with $x\notin \mathcal{K}$. By the hyperplane separation theorem, there exists $c$ such that
\begin{gather*}
\mathcal{K}\subseteq H^+=\left\{z\mid c^Tz\geq 0\right\},\\
c^Tx < 0.
\end{gather*}
From the first relation, it holds that $c\in \mathcal{K}^*$. However, the second relation then indicates that $x\notin \mathcal{K}^{**}$, which contradicts the assumption. Therefore $\mathcal{K}^{**}=\mathcal{K}$.
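Closedness matters in the separation step above. A standard example (added for illustration): for the non-closed cone $\mathcal{K}=\{0\}\cup\{x\in\mathbb{R}^2\mid x_1>0\}$, one finds $\mathcal{K}^*=\{y\mid y_1\geq 0,\ y_2=0\}$, and hence $\mathcal{K}^{**}=\{x\mid x_1\geq 0\}\supsetneq\mathcal{K}$.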
Proof
We only need to show that there does not exist $y\in\mathcal{N}_{\mathcal{K}}(x)$ such that $\langle y,x\rangle < 0$. Suppose such a $y$ exists. Then, for $x\in \mathcal{K}$,
$$
\langle y, \frac{1}{2}x - x\rangle=\langle y, -\frac{1}{2}x \rangle=-\frac{1}{2}\langle y,x\rangle > 0
$$
which contradicts the definition of the normal cone, as $\frac{1}{2}x\in \mathcal{K}$.
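As an illustration (a standard example, not part of the original argument), take $\mathcal{K}=\mathbb{R}^n_+$ and $x\in\mathcal{K}$. Then
\begin{gather*}
\mathcal{N}_{\mathcal{K}}(x)=\left\{y\leq 0\mid y_ix_i=0,\ \forall i\right\},
\end{gather*}
so every $y$ in the normal cone indeed satisfies $\langle y,x\rangle=\sum_i y_ix_i=0$.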
3. Positive Definite Matrix
Proof of the Lemma. $\Rightarrow$: This direction is trivial: apply the trace operation to $AB=0$. $\Leftarrow$: Let $A=PP^T$, $B=QQ^T$, where $P,Q$ can be obtained, e.g., from the Cholesky decomposition or the square root of the corresponding matrices. It holds that
\begin{gather*}
tr(AB)=tr(PP^TQQ^T)=tr(Q^TPP^TQ)=tr\left((P^TQ)^T(P^TQ)\right)=\|P^TQ\|_F^2=0\\
\Rightarrow P^TQ=0 \Rightarrow AB=PP^TQQ^T=P(P^TQ)Q^T=P\,0\,Q^T=0.
\end{gather*}
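A quick numerical illustration of this proof (a sketch assuming numpy is available; the particular construction of $P$ and $Q$ with $P^TQ=0$ is my own choice for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Split a random orthonormal basis into two blocks, so that P^T Q = 0.
U, _ = np.linalg.qr(rng.standard_normal((6, 6)))
P, Q = U[:, :3], U[:, 3:]

A = P @ P.T  # positive semidefinite
B = Q @ Q.T  # positive semidefinite

print(np.trace(A @ B))        # ~0, since tr(AB) = ||P^T Q||_F^2
print(np.linalg.norm(A @ B))  # ~0, confirming AB = 0
```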
3.1. Linear Matrix Inequality (LMI) and Semidefinite Programming (SDP)
Definition: A square matrix $A\in\mathbb{R}^{n\times n}$ is Hurwitz-stable iff \begin{gather*} Re[\lambda_i(A)]<0,\ \forall i=1,\cdots,n. \end{gather*}
Theorem: $A$ is Hurwitz-stable if and only if there exists $P\succ 0$ such that
\begin{gather}
PA+A^TP\prec 0.
\label{eq:continuous-time Lyapunov inequality}
\end{gather}
This theorem originated in control theory; I would like to present an algebraic viewpoint.
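Before the proof, here is a numerical sanity check of the theorem (a sketch assuming numpy and scipy are available; the random construction of a Hurwitz $A$ is my own): solving the Lyapunov equation $A^TP+PA=-I$ for a Hurwitz $A$ yields $P\succ 0$ satisfying \eqref{eq:continuous-time Lyapunov inequality}.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)

# Make a random matrix Hurwitz by shifting its spectrum into the left half-plane.
M = rng.standard_normal((4, 4))
A = M - (np.linalg.eigvals(M).real.max() + 1.0) * np.eye(4)
assert np.linalg.eigvals(A).real.max() < 0  # A is Hurwitz

# Solve A^T P + P A = -I, a Lyapunov equation whose right-hand side is -I < 0.
P = solve_continuous_lyapunov(A.T, -np.eye(4))

print(np.linalg.eigvalsh(P).min() > 0)              # True: P is positive definite
print(np.linalg.norm(A.T @ P + P @ A + np.eye(4)))  # ~0: the equation holds
```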
Proof of the Theorem. Necessary condition '$\Leftarrow$':
This is a common result in control theory, which one can prove by contradiction. Suppose there exist $P\succ 0$ such that \eqref{eq:continuous-time Lyapunov inequality} holds and an index $1\leq i\leq n$ with $Re[\lambda_i(A)]\geq 0$. Consider the following dynamics:
\begin{gather*}
\dot{x}(t)=Ax(t), x(0)=x_0.
\end{gather*}
Since $Re[\lambda_i(A)]\geq 0$, there exists $x_0\neq 0$ (e.g., along the corresponding eigenvector) such that $\displaystyle\lim_{t\to\infty}x(t)\neq 0$. However, consider the Lyapunov function $V(x)=x^TPx$. From \eqref{eq:continuous-time Lyapunov inequality}, it holds that:
\begin{gather*}
\frac{d}{dt}{V}(x)=\dot{x}^TPx+x^TP\dot{x}=x^T\left(PA+A^TP\right)x<0,\ \forall x\neq 0,\\
V(x)> 0,\ \forall x\neq 0.
\end{gather*}
By LaSalle's invariance principle, it holds that $x(t)\to 0$ for every initial condition, which contradicts the choice of $x_0$ above.
Sufficient condition '$\Rightarrow$':
Since $A$ is real, from linear algebra, there always exists an invertible $Q\in\mathbb{R}^{n\times n}$ such that $A=Q^{-1}JQ$ (the real Jordan form), where
\begin{gather}
\label{eq: similar transfrom for J}
\begin{array}{c}
J=diag\left\{
J_1,\cdots, J_m
\right\}, J_i=\begin{bmatrix}
C_i&I&0&0\\
0&C_i&I&0\\
0&0&\ddots&\vdots\\
0&0&\cdots&C_i
\end{bmatrix},\\
C_i=\lambda_i\in\mathbb{R}, \text{ or } C_i=
\begin{bmatrix}
a_i&b_i\\
-b_i&a_i
\end{bmatrix}.
\end{array}
\end{gather}
These $J_i$ are called real Jordan blocks. Applying the congruence transformation with $Q^{-1}$ to \eqref{eq:continuous-time Lyapunov inequality} (multiplying by $Q^{-T}$ on the left and $Q^{-1}$ on the right), one finds:
\begin{gather*}
Q^{-T}PQ^{-1}J+J^TQ^{-T}PQ^{-1}\prec0.
\end{gather*}
Denoting $\hat{P}:=Q^{-T}PQ^{-1}$ (which is positive definite iff $P$ is), this leads to
\begin{gather}
\hat{P}J+J^T\hat{P}\prec0.
\label{eq: transoformed continuous-time Lyapunov equation}
\end{gather}
If we construct $\hat{P}$ to be block diagonal with correspondingly-sized blocks, $\hat{P}=diag\{\hat{P}_1,\cdots,\hat{P}_m\}$, then the left-hand side of \eqref{eq: transoformed continuous-time Lyapunov equation} becomes block diagonal. Therefore, it suffices to prove the case where $J$ is a single Jordan block, i.e., $m=1$.
When $C$ is real: if $J$ is a scalar, then we are done, because any $\hat{P}=\hat{p}>0$ satisfies \eqref{eq: transoformed continuous-time Lyapunov equation} (it reduces to $2\hat{p}\lambda<0$). What we are really dealing with is the case where $\ell=\dim(J)\geq 2$. To show there exists $\hat{P}$ such that \eqref{eq: transoformed continuous-time Lyapunov equation} holds, construct $\hat{P}=diag\{\hat{p}_1,\cdots,\hat{p}_\ell\}$. With the proposed $\hat{P}$, \eqref{eq: transoformed continuous-time Lyapunov equation} becomes:
\begin{gather}
\label{eq: expended continuous-time Lyapunov function}
\begin{bmatrix}
2\hat{p}_1\lambda&\hat{p}_1&&&\\
\hat{p}_1&2\hat{p}_2\lambda&\hat{p}_2&&\\
&\hat{p}_2&\ddots&\ddots&\\
&&\ddots&\ddots&\hat{p}_{\ell-1}\\
&&&\hat{p}_{\ell-1}&2\hat{p}_\ell\lambda
\end{bmatrix}\prec0,\quad \lambda<0.
\end{gather}
Finite induction is used to prove that such $\hat{p}_i$ exist: pick any $\hat{p}_\ell>0$, so that the trailing $1\times 1$ block $2\hat{p}_\ell\lambda$ is already negative.
For the case $\ell =2$, \eqref{eq: transoformed continuous-time Lyapunov equation} becomes:
\begin{gather*}
\begin{bmatrix}
2\hat{p}_1\lambda&\hat{p}_1\\
\hat{p}_1&2\hat{p}_2\lambda
\end{bmatrix}\prec0\Leftrightarrow 2\hat{p}_2\lambda-\frac{1}{2}\hat{p}_1\lambda^{-1}<0\Leftrightarrow 4\hat{p}_2\lambda^2>\hat{p}_1,
\end{gather*}
where such $\hat{p}_1$ always exists (any $0<\hat{p}_1<4\hat{p}_2\lambda^2$ works).
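For instance (a concrete instance added for illustration), with $\lambda=-1$ and $\hat{p}_2=1$, any $0<\hat{p}_1<4$ works; taking $\hat{p}_1=1$ gives
\begin{gather*}
\begin{bmatrix}
-2&1\\
1&-2
\end{bmatrix}\prec 0,
\end{gather*}
whose eigenvalues are $-1$ and $-3$.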
For the case $\ell>2$, suppose $\hat{p}_\ell,\cdots,\hat{p}_{2}$ have been chosen such that the trailing $(\ell-1)\times(\ell-1)$ block is negative definite; then \eqref{eq: transoformed continuous-time Lyapunov equation} looks like:
\begin{gather*}
\begin{bmatrix}
\begin{array}{c|ccccc}
2\hat{p_1}\lambda&\hat{p}_1&&\\
\hline
\hat{p}_1& 2\hat{p}_2\lambda&\hat{p}_2&&\\
&\hat{p}_2&\ddots&\ddots&\\
&&\ddots&\ddots&\hat{p}_{\ell-1}\\
&&&\hat{p}_{\ell-1}&2\hat{p}_\ell\lambda
\end{array}
\end{bmatrix}\prec0
\end{gather*}
By the Schur complement (the trailing block is negative definite by the induction hypothesis), it is equivalent to
\begin{gather*}
\begin{bmatrix}
2\hat{p_1}\lambda&\hat{p}_1\\
\hat{p}_1&\alpha
\end{bmatrix}\prec 0,
\end{gather*}
for some $\alpha<0$ determined by $\hat{p}_2,\cdots,\hat{p}_\ell$; as in the case $\ell=2$, such $\hat{p}_1$ always exists (any sufficiently small $\hat{p}_1>0$ works).
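A small numerical check of this construction (a sketch assuming numpy; the values $\lambda=-1$, $\ell=4$, and the geometric choice of $\hat{p}_i$ are mine and happen to work here; in general the induction picks them):

```python
import numpy as np

lam, ell = -1.0, 4                      # one real Jordan block, lambda < 0
J = lam * np.eye(ell) + np.diag(np.ones(ell - 1), k=1)

# Rapidly growing diagonal entries keep each leading extension negative definite.
p = 4.0 ** np.arange(ell)               # p_1, ..., p_4 = 1, 4, 16, 64
P_hat = np.diag(p)

M = P_hat @ J + J.T @ P_hat             # the tridiagonal matrix displayed above
print(np.linalg.eigvalsh(M).max() < 0)  # True: M is negative definite
```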
When $C=
\begin{bmatrix}
a&b\\
-b&a
\end{bmatrix}$, I would like to reduce it back to the previous case. Note that $C+C^T=2aI$, where $a<0$ is the real part of the corresponding eigenvalues. Consider $\hat{P}=diag\{\hat{p}_1,\hat{p}_1,\hat{p}_2,\hat{p}_2,\cdots,\hat{p}_\ell,\hat{p}_\ell\}$. Then \eqref{eq: transoformed continuous-time Lyapunov equation} becomes:
\begin{gather*}
\begin{bmatrix}
2\hat{p}_1a&0&\hat{p}_1&0 &&&&\\
0&2\hat{p}_1a&0&\hat{p}_1 &&&&\\
\hat{p}_1&0&2\hat{p}_2a&0&\hat{p}_2&0\\
0&\hat{p}_1&0&2\hat{p}_2a&0&\hat{p}_2 &&\\
&& \hat{p}_2&0&\ddots&\ddots&&\\
&&0&\hat{p}_2&\ddots&\ddots&\ddots&\\
&&&&\ddots&\ddots&\ddots&\ddots&\\
&&&&&&2\hat{p}_{\ell-1}a&0&\hat{p}_{\ell-1}&0\\
&&&&&&0&2\hat{p}_{\ell-1}a&0&\hat{p}_{\ell-1}\\
&&&&&&\hat{p}_{\ell-1}&0&2\hat{p}_\ell a&0\\
&&&&&&0&\hat{p}_{\ell-1}&0&2\hat{p}_\ell a
\end{bmatrix}\prec 0.
\end{gather*}
By grouping the odd rows and columns and the even rows and columns (a permutation similarity, which preserves definiteness), one finds that this matrix splits into two identical copies of the expression in \eqref{eq: expended continuous-time Lyapunov function}, with $\lambda$ replaced by $a<0$; the previous case then applies.
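As with the real case, this can be checked numerically (a sketch assuming numpy; the block $C$ with $a=-1$, $b=2$, $\ell=2$, and the values of $\hat{p}_i$ are an arbitrary choice of mine):

```python
import numpy as np

a, b = -1.0, 2.0                        # one complex-conjugate pair, Re = a < 0
C = np.array([[a, b], [-b, a]])

# Real Jordan block J = [[C, I], [0, C]] for ell = 2.
J = np.block([[C, np.eye(2)], [np.zeros((2, 2)), C]])

p = np.array([1.0, 4.0])                # p_1, p_2 as in the real case
P_hat = np.diag(np.repeat(p, 2))        # diag{p_1, p_1, p_2, p_2}

M = P_hat @ J + J.T @ P_hat
print(np.linalg.eigvalsh(M).max() < 0)  # True: the construction still works
```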