
Applications of Markov spectra for the weighted partition network by the substitution rule

  • Mei-Feng Dai 1,
  • Ting-Ting Ju 1,
  • Yong-Bo Hou 1,
  • Fang Huang 1,
  • Dong-Lei Tang 2,
  • Wei-Yi Su 3
  • 1Institute of Applied System Analysis, Jiangsu University, Zhenjiang, 212013, China
  • 2School of Mathematics and Statistics, Nanjing Audit University, Nanjing, 211815, China
  • 3Department of Mathematics, Nanjing University, Nanjing, 210093, China

Received date: 2020-01-07

  Revised date: 2020-03-02

  Accepted date: 2020-03-03

  Online published: 2020-05-09

Copyright

© 2020 Chinese Physical Society and IOP Publishing Ltd

Abstract

The weighted self-similar network is introduced in an iterative way. In order to understand the topological properties of this self-similar network, we study its spectra. Firstly, according to the symmetry of the self-similar network, we deduce the recursive relationship between the eigenvalues of the transition-weighted matrix at two successive generations. Then, we obtain the eigenvalues of the Laplacian matrix from these two successive generations. Finally, we calculate accurate expressions for the eigentime identity and the Kirchhoff index from the spectrum of the Laplacian matrix.

Cite this article

Mei-Feng Dai , Ting-Ting Ju , Yong-Bo Hou , Fang Huang , Dong-Lei Tang , Wei-Yi Su . Applications of Markov spectra for the weighted partition network by the substitution rule[J]. Communications in Theoretical Physics, 2020 , 72(5) : 055602 . DOI: 10.1088/1572-9494/ab7ed3

1. Introduction

In recent years, the research field of complex network theory has penetrated many subjects and become a hot topic for scholars and scientists [1-7]. Owing to the rapid development of computer data processing and computing power, scientists' knowledge of complex networks keeps growing. Extensive research has shown that complex networks possess many characteristic properties, such as the small-world effect [8], average path length [9], clustering coefficient [10], betweenness [11] and so on. Using these properties, the behavior of a network system can be predicted from its known structural characteristics and formation rules. Recently there has been considerable interest in the study of complex networks, and their theory has been applied in many fields, for example neural networks [12], the internet [13], traffic networks [14] and communication networks [15]. Understanding the theory behind complex networks is also conducive to a deeper understanding of the real world.
A fractal structure is formed after numerous iterations, and it is this continuous iteration that gives a fractal graph its self-similarity: each part of a self-similar graph is similar to the whole. The Koch curve is a classic example of self-similarity. More and more scholars have taken up the study of self-similar graphs. Lee and Loh [16] studied the self-similarity of graphs, and Qi et al [17] carried out a self-similarity analysis of the eubacteria genome based on weighted graphs.
The calculation of Laplacian spectra of networks has attracted much attention, and Laplacian spectra are widely used [18-24]. The minimum eigenvalue of the Laplace matrix of a complex network corresponds to the diameter of that network [25]. Besides, the ratio of the maximum and minimum eigenvalues of the Laplace matrix of a complex network reflects the network's capability for synchronization [26]. From the Laplacian spectrum we can obtain expressions for the eigentime identity and Kirchhoff index, which reflect topological properties and dynamical system properties of networks.
Complex networks have also attracted the attention of many scholars in applied settings. Xia and Xi [27] studied the modeling and robust design of remanufacturing logistics networks based on the design of experiments. Dai et al [28] studied eigentime identities for weighted polymer networks. Shan et al [29] studied the domination number and minimum dominating sets in a pseudofractal scale-free web and the Sierpinski graph. To study the dynamical properties of different network models, one needs to calculate indices such as the Laplacian spectrum [30], eigentime identity [31] and Kirchhoff index [32].
The eigenvalues of the Laplacian matrix of a graph can reflect information from graph theory. We deduce these eigenvalues from two successive generations of a transition-weighted matrix so that we can obtain an accurate expression for the Kirchhoff index from the spectrum of the Laplacian matrix.
Kirchhoff discovered that Laplace matrices have some important applications for circuit networks. So, a Laplace matrix is also called a Kirchhoff matrix. In physics, the Kirchhoff index characterizes the average power consumed by a resistance network when current is arbitrarily injected into it. If the Kirchhoff index is small, the electrical energy consumed by the resistor network per unit time is small. In addition, the Kirchhoff index can be used as an indicator of network robustness.
In this paper, according to the symmetry feature of the self-similar network by the substitution rule, we can obtain its global properties by understanding its local properties. Firstly, by using some elementary matrix transformations, we deduce the recursive relationship of its eigenvalues at two successive generations of the Markov operator matrix. Then, according to the relationship between the Laplacian matrix and Markov operator matrix, we obtain eigenvalues of the Laplacian matrix. Finally, in order to understand the topological properties of the self-similar network, we make use of Vieta’s formulas to calculate accurate expressions for the eigentime identity and Kirchhoff index from the spectrum of the Laplacian matrix.

2. The weighted self-similar network by the substitution rule

In this section, we will construct a weighted self-similar network by the substitution rule in an iterative way [33].
Let r (0 < r ≤ 1) be a positive real number. Let Gt be the network generated after t (t ≥ 0) steps.
For t = 0, G0 is one edge with unit weight connecting the two nodes.
For t ≥ 1, Gt is obtained from Gt−1 by performing the following operations.
For each existing edge in Gt−1, with its two endpoints and edge weight w, one substitutes the connected cluster on the right-hand side of the arrow in figure 1 for the edge on the left-hand side, as described below.
Figure 1. Iterative construction method on every edge at two successive generations.
(i) The connected cluster is obtained by inserting two paths, one of length 2 and the other of length 3. The endpoints of both paths are the endpoints of the original edge, and all new nodes in the connected cluster are connected to each other.
(ii) Every edge weight in the connected cluster is scaled as shown in figure 1: the weights of the four edges linking the new and old nodes in Gt are equal to the weight of the original edge in Gt−1. The weights of the remaining three edges linking any two new nodes in Gt are scaled by a weight factor r. We show the substitution rule of Gt from t = 0 to t = 2 in figure 2.
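The substitution rule can be made concrete with a short script. The following is a minimal sketch (Python is not used in the paper, and the helper name `substitute` is ours): each edge spawns three new nodes, four edges of the original weight touching the old endpoints, and three edges among the new nodes scaled by r. It verifies the node and edge counts Nt = (7^t + 3)/2 and |Et| = 7^t quoted below.

```python
def substitute(edges, r):
    """Apply the substitution rule once: each edge (u, v, w) is replaced
    by the 7-edge cluster of figure 1, introducing 3 new nodes."""
    new_edges = []
    next_node = max(max(u, v) for u, v, _ in edges) + 1
    for u, v, w in edges:
        a, b, c = next_node, next_node + 1, next_node + 2  # new nodes
        next_node += 3
        # path of length 2 (u-a-v) and path of length 3 (u-b-c-v):
        # the 4 edges touching old nodes keep the original weight w
        new_edges += [(u, a, w), (a, v, w), (u, b, w), (c, v, w)]
        # the 3 edges among the new nodes are scaled by the factor r
        new_edges += [(a, b, r * w), (a, c, r * w), (b, c, r * w)]
    return new_edges

edges = [(0, 1, 1.0)]            # G0: a single edge of unit weight
for t in range(1, 4):
    edges = substitute(edges, r=0.5)
    nodes = {u for u, v, _ in edges} | {v for u, v, _ in edges}
    assert len(edges) == 7 ** t              # |E_t| = 7^t
    assert len(nodes) == (7 ** t + 3) // 2   # N_t = (7^t + 3)/2
```

The recursion Nt = Nt−1 + 3·7^{t−1} with N0 = 2 sums to (7^t + 3)/2, matching the assertions.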
Let the generalized adjacency matrix (weight matrix) of Gt be Wt, which describes the connections among the nodes of Gt as follows: Wt(i, j) = wij(t) if nodes i and j are adjacent in Gt, and Wt(i, j) = 0 otherwise, where wij(t) is the weight of the edge between i and j. The strength matrix St of Gt is diagonal and given as St = diag(s1(t), s2(t), ⋯, ${s}_{{N}_{t}}(t)$), where si(t) is the strength of node i in Gt. Nt is the total number of nodes at generation t, with ${N}_{t}=\tfrac{{7}^{t}+3}{2}$, and Et is the set of edges at generation t, with $| {E}_{t}| ={7}^{t}$. The sum of all weights in Gt is Qt. The initial connected graph G0 has two nodes, labeled Node 1 and Node 2. The strength of Node 1 and Node 2 is 2 in G1.
Then, the Markov operator matrix Pt of Gt is defined as ${P}_{t}={S}_{t}^{-1}{W}_{t}$. The element at row i and column j of Pt is ${P}_{i,j}(t)={{s}_{i}}^{-1}{w}_{{ij}}(i\ne j)$. Then, we introduce Laplacian matrix Lt = It − Pt, where It is the identity matrix with the same order as Pt.
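These definitions can be checked numerically for G1. Below is an illustrative sketch (our own labeling of the five nodes of G1, not from the paper): it builds W1, S1 and P1 = S1^{-1}W1 and confirms that P1 is row-stochastic with a real spectrum, as needed for the spectral analysis that follows.

```python
import numpy as np

# G1 (figure 2): old nodes 0,1; new node 2 on the length-2 path,
# new nodes 3,4 on the length-3 path
r = 0.5
edges = [(0, 2, 1.0), (2, 1, 1.0), (0, 3, 1.0), (4, 1, 1.0),
         (2, 3, r), (2, 4, r), (3, 4, r)]
N = 5
W = np.zeros((N, N))
for u, v, w in edges:
    W[u, v] = W[v, u] = w

S = np.diag(W.sum(axis=1))            # strength matrix S_t
P = np.linalg.inv(S) @ W              # Markov operator P_t = S_t^{-1} W_t
L = np.eye(N) - P                     # Laplacian L_t = I_t - P_t

assert np.allclose(P.sum(axis=1), 1)  # each row of P sums to 1
# eigenvalues of P are real, since P is similar to the symmetric
# matrix S^{1/2} P S^{-1/2}
assert np.allclose(np.linalg.eigvals(P).imag, 0, atol=1e-9)
```

Note that the strengths of the two old nodes come out as 2, consistent with the statement above.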
Let α represent the set of old nodes and β represent the set of new nodes in Gt. So, the Markov operator matrix Pt is divided into several blocks as follows:
$\begin{eqnarray}{P}_{t}=\left(\begin{array}{cc}{P}_{\alpha ,\alpha } & {P}_{\alpha ,\beta }\\ {P}_{\beta ,\alpha } & {P}_{\beta ,\beta }\end{array}\right)\equiv \left(\begin{array}{cc}0 & {s}_{\alpha }^{-1}{w}_{\alpha ,\beta }\\ {s}_{\beta }^{-1}{w}_{\beta ,\alpha } & {s}_{\beta }^{-1}{w}_{\beta ,\beta }\end{array}\right),\end{eqnarray}$
where Pα,β is the submatrix whose entries are the quotients of the weights and the strength of old nodes, while Pβ,α is the submatrix whose entries are the quotients of the weights and the strength of new nodes.

2.1. The recursive relationship of eigenvalues for Pt and Pt+1

In this section, we will deduce the recursive relationship between the eigenvalues of the Markov operator matrices Pt and Pt+1 for the self-similar network generated by the substitution rule.
The set of edges is ${E}_{t}=\{{e}_{1},{e}_{2},\ \cdots ,\ {e}_{{7}^{t}}\}$. Let ${\beta }_{{e}_{i}}$ be the set of new nodes generated by an edge ei, i = 1, ⋯, 7t. We rewrite the Markov operator matrix Pt+1 as
$\begin{eqnarray*}{P}_{t+1}=\left(\begin{array}{ccccc}0 & {P}_{\alpha ,{\beta }_{{e}_{1}}} & {P}_{\alpha ,{\beta }_{{e}_{2}}} & \cdots & {P}_{\alpha ,{\beta }_{{e}_{{7}^{t}}}}\\ {P}_{{\beta }_{{e}_{1}},\alpha } & {P}_{{\beta }_{{e}_{1}},{\beta }_{{e}_{1}}} & 0 & \cdots & 0\\ {P}_{{\beta }_{{e}_{2}},\alpha } & 0 & {P}_{{\beta }_{{e}_{2}},{\beta }_{{e}_{2}}} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots & \vdots \\ {P}_{{\beta }_{{e}_{{7}^{t}}},\alpha } & 0 & 0 & \cdots & {P}_{{\beta }_{{e}_{{7}^{t}}},{\beta }_{{e}_{{7}^{t}}}}\end{array}\right),\end{eqnarray*}$
where
$\begin{eqnarray}\begin{array}{l}({P}_{\alpha ,{\beta }_{{e}_{1}}},{P}_{\alpha ,{\beta }_{{e}_{2}}},\ \cdots ,\ {P}_{\alpha ,{\beta }_{{e}_{{7}^{t}}}})\\ \quad \ =({{s}_{\alpha }}^{-1}{w}_{\alpha ,{\beta }_{{e}_{1}}},{{s}_{\alpha }}^{-1}{w}_{\alpha ,{\beta }_{{e}_{2}}},\ \cdots ,\ {{s}_{\alpha }}^{-1}{w}_{\alpha ,{\beta }_{{e}_{{7}^{t}}}})\\ \quad \ =\ \ {P}_{\alpha ,\beta },\end{array}\end{eqnarray}$
and
$\begin{eqnarray}\begin{array}{l}{\left({P}_{{\beta }_{{e}_{1}},\alpha },{P}_{{\beta }_{{e}_{2}},\alpha },\cdots ,{P}_{{\beta }_{{e}_{{7}^{t}}},\alpha }\right)}^{\top }\\ \quad \ =\ {\left({s}_{{\beta }_{{e}_{1}}}^{-1}{w}_{{\beta }_{{e}_{1}},\alpha },{s}_{{\beta }_{{e}_{2}}}^{-1}{w}_{{\beta }_{{e}_{2}},\alpha },\cdots ,{s}_{{\beta }_{{e}_{{7}^{t}}}}^{-1}{w}_{{\beta }_{{e}_{{7}^{t}}},\alpha }\right)}^{\top }\\ \quad \ =\ {P}_{\beta ,\alpha }.\end{array}\end{eqnarray}$
In order to obtain the recursive relationship of eigenvalues of the Markov operator matrices Pt and Pt+1, we need to calculate
$\begin{eqnarray}\begin{array}{c}\begin{array}{c}{\rm{\det }}({P}_{t+1}-{xI})={\rm{\det }}\left(\begin{array}{cc}-{xI} & {B}_{n}\\ {D}_{n} & {C}_{n}\end{array}\right),\end{array}\end{array}\end{eqnarray}$
where ${B}_{n}=({P}_{\alpha ,{\beta }_{{e}_{1}}}$ ${P}_{\alpha ,{\beta }_{{e}_{2}}}\cdots {P}_{\alpha ,{\beta }_{{e}_{{7}^{t}}}})$, ${C}_{n}={\rm{diag}}({P}_{{\beta }_{{e}_{1}},{\beta }_{{e}_{1}}}$ − ${xI},{P}_{{\beta }_{{e}_{2}},{\beta }_{{e}_{2}}}-{xI},\ \cdots ,\ {P}_{{\beta }_{{e}_{{7}^{t}}},{\beta }_{{e}_{{7}^{t}}}}-{xI})$ and ${D}_{n}\,={\left({P}_{{\beta }_{{e}_{1}},\alpha },{P}_{{\beta }_{{e}_{2}},\alpha },\cdots ,{P}_{{\beta }_{{e}_{{7}^{t}}},\alpha }\right)}^{\top }$.
For all but finitely many $x\in {\mathbb{R}}$, the matrix ${P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI}(i=1,2,\ \cdots ,\ {7}^{t})$ is invertible. Let
$\begin{eqnarray*}\begin{array}{rcl}{R}_{1} & = & \left(\begin{array}{ccccc}I & 0 & 0 & \cdots & 0\\ -{\left({P}_{{\beta }_{{e}_{1}},{\beta }_{{e}_{1}}}-{xI}\right)}^{-1}{P}_{{\beta }_{{e}_{1}},\alpha } & I & 0 & \cdots & 0\\ 0 & 0 & I & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & I\end{array}\right),\\ {R}_{2} & = & \left(\begin{array}{ccccc}I & 0 & 0 & \cdots & 0\\ 0\, & I & 0 & \cdots & 0\\ -{\left({P}_{{\beta }_{{e}_{2}},{\beta }_{{e}_{2}}}-{xI}\right)}^{-1}{P}_{{\beta }_{{e}_{2}},\alpha } & 0 & I & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & I\end{array}\right),\\ & & \vdots \\ {R}_{{7}^{t}} & = & \left(\begin{array}{ccccc}I & 0 & 0 & \cdots & 0\\ 0 & I & 0 & \cdots & 0\\ 0\, & 0 & I & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ -\ {\left({P}_{{\beta }_{{e}_{{7}^{t}}},{\beta }_{{e}_{{7}^{t}}}}-{xI}\right)}^{-1}{P}_{{\beta }_{{e}_{{7}^{t}}},\alpha } & 0 & 0 & \cdots & I\end{array}\right).\end{array}\end{eqnarray*}$
So, we can obtain that
$\begin{eqnarray*}\begin{array}{lcl}{\rm{\det }}({P}_{t+1}-{xI}) & = & {\rm{\det }}\left(({P}_{t+1}-{xI})({R}_{1}{R}_{2}\cdots {R}_{{7}^{t}})\right)\\ & = & {\rm{\det }}\left(\begin{array}{cc}M & {B}_{n}\\ 0 & {C}_{n}\end{array}\right)\\ & = & {\rm{\det }}(M)\displaystyle \prod _{i=1}^{{7}^{t}}\left(det({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI})\right),\end{array}\end{eqnarray*}$
where $M=-{xI}-{\sum }_{i=1}^{{7}^{t}}{P}_{\alpha ,{\beta }_{{e}_{i}}}{({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI})}^{-1}{P}_{{\beta }_{{e}_{i}},\alpha }$.
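The column operations R1, …, R7t implement block Gaussian elimination, and the resulting determinant identity is the standard Schur-complement formula. As a sanity check, the identity can be verified on random blocks of arbitrary sizes (a generic numerical sketch, not the paper's specific matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
k, m, blocks = 2, 3, 4          # illustrative block sizes
x = 0.7
B = [rng.standard_normal((k, m)) for _ in range(blocks)]
D = [rng.standard_normal((m, k)) for _ in range(blocks)]
C = [rng.standard_normal((m, m)) for _ in range(blocks)]

# assemble the arrow-shaped matrix (cf. the structure of P_{t+1} - xI)
n = k + blocks * m
A = np.zeros((n, n))
A[:k, :k] = -x * np.eye(k)
for i in range(blocks):
    s = k + i * m
    A[:k, s:s + m] = B[i]
    A[s:s + m, :k] = D[i]
    A[s:s + m, s:s + m] = C[i]

# det(A) = det(M) * prod_i det(C_i), M the Schur complement of the C blocks
M = -x * np.eye(k) - sum(Bi @ np.linalg.inv(Ci) @ Di
                         for Bi, Ci, Di in zip(B, C, D))
lhs = np.linalg.det(A)
rhs = np.linalg.det(M) * np.prod([np.linalg.det(Ci) for Ci in C])
assert np.isclose(lhs, rhs)
```

Here the random blocks C_i stand in for the matrices ${P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI}$; the identity holds whenever they are invertible.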
Then, the following result can be obtained (see appendix A for detail):
$\begin{eqnarray}\begin{array}{l}\det ({P}_{t+1}-xI)={\left(\displaystyle \frac{1}{2}\right)}^{{\textstyle \tfrac{{7}^{t}+3}{2}}}\displaystyle \prod _{i=1}^{{\textstyle \tfrac{{7}^{t}+3}{2}}}\left[(16{r}^{3}+32{r}^{2}+20r+4){x}^{4}\right.\\ \,\,-\,(8{r}^{3}+16{r}^{2}+10r+3+4{r}^{3}{\lambda }_{t}+4r{\lambda }_{t}+{\lambda }_{t}){x}^{2}\\ \,\,-\,\left.(4{r}^{3}+4{r}^{2}+2r+6{r}^{2}{\lambda }_{t}+4r{\lambda }_{t})x-2{r}^{2}{\lambda }_{t}\right]\\ \,\,\times \,{\left[\left(x-\displaystyle \frac{r+r\sqrt{(9r+5)(r+1)}+{r}^{2}}{2(2{r}^{2}+3r+1)}\right)\left(x-\displaystyle \frac{r-r\sqrt{(9r+5)(r+1)}+{r}^{2}}{2(2{r}^{2}+3r+1)}\right)\left(x+\displaystyle \frac{r}{2r+1}\right)\right]}^{{\textstyle \tfrac{{7}^{t}-3}{2}}},\end{array}\end{eqnarray}$
where λt are eigenvalues of the Markov operator matrix Pt.

3. Eigentime identity and Kirchhoff index

In this section, we calculate invariants related to the structure of the weighted self-similar network Gt generated by the substitution rule. From the above, we have the relationship between the eigenvalues of Pt and Pt+1. We can then obtain exact closed-form expressions for the eigentime identity and the Kirchhoff index from two successive generations of the Laplacian matrix of the weighted self-similar network.

3.1. Eigentime identity

The first-passage time hi,j is the expected time for a walker to go from node i to node j; it can also be thought of as the trapping time of a trapping problem. The stationary distribution of Gt is $\pi ={({\pi }_{1},{\pi }_{2},\cdots ,{\pi }_{{N}_{t}})}^{\top }$, where ${\pi }_{i}=\tfrac{{s}_{i}(t)}{2{Q}_{t}}$ and Qt is the sum of all weights in Gt; it satisfies ${\sum }_{i=1}^{{N}_{t}}{\pi }_{i}=1$ and πPt = π. The eigentime identity, denoted Ht, is defined as the expected time needed by a walker to travel from a node i to a target node j chosen randomly from all nodes according to the stationary distribution, that is,
$\begin{eqnarray*}{H}_{t}=\sum _{j=1}^{{N}_{t}}{\pi }_{j}{h}_{i,j}(t).\end{eqnarray*}$
It is useful to encode much information about trapping in Gt. Suppose that ${L}_{t}=\{{\sigma }_{1},{\sigma }_{2},\ \cdots \ {\sigma }_{{N}_{t}}\}$ is the set of all the eigenvalues of the matrix Lt at generation t. Referring to σi = 1 − λi, we have [28]
$\begin{eqnarray}{H}_{t}=\sum _{i=1}^{{N}_{t}}\displaystyle \frac{1}{{\sigma }_{i}}=\sum _{i=1}^{{N}_{t}}\displaystyle \frac{1}{1-{\lambda }_{i}}.\end{eqnarray}$
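The identity between the π-averaged hitting times and the reciprocal Laplacian eigenvalues (the random-target form of the eigentime identity, with the trivial eigenvalue σ = 0 of Lt excluded from the sum) can be verified numerically. The sketch below uses a small weighted graph of our own choosing, not the paper's Gt:

```python
import numpy as np

# small weighted graph standing in for G_t (illustrative only)
W = np.array([[0, 2.0, 1.0],
              [2.0, 0, 3.0],
              [1.0, 3.0, 0]])
s = W.sum(axis=1)
P = W / s[:, None]              # Markov operator S^{-1} W
pi = s / s.sum()                # stationary distribution pi_i = s_i / (2 Q)

# hitting times h[i][j]: solve (I - P) restricted away from the target j
N = len(W)
H = np.zeros((N, N))
for j in range(N):
    idx = [i for i in range(N) if i != j]
    A = np.eye(N - 1) - P[np.ix_(idx, idx)]
    H[idx, j] = np.linalg.solve(A, np.ones(N - 1))

# eigentime identity: sum_j pi_j h_{i,j} equals the sum of 1/sigma over
# the nonzero eigenvalues sigma of I - P, independently of the start i
sigma = np.sort(np.linalg.eigvals(np.eye(N) - P).real)[1:]  # drop sigma = 0
for i in range(N):
    assert np.isclose(H[i] @ pi, (1 / sigma).sum())
```

The loop confirms that the π-weighted average is the same from every starting node, which is why Ht is a single number.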
From equation (5), we can divide Lt into two disjoint subsets ${L}_{t}^{(1)}$ and ${L}_{t}^{(2)}$.
The first factor of equation (5) means that each eigenvalue λi(t) of Pt generates four eigenvalues λi,1(t + 1), λi,2(t + 1), λi,3(t + 1) and λi,4(t + 1) $\left(i=1,2,\ \cdots ,\ \tfrac{{7}^{t}+3}{2}\right)$, which make up ${L}_{t}^{(1)}$. ${L}_{t}^{(2)}$ is the set of roots of the polynomial ${(Q(x))}^{{E}_{t}-{N}_{t}}$.
Now, we will calculate the eigentime identity Ht from the eigenvalues of Pt.
$\begin{eqnarray}\begin{array}{l}{H}_{t}\,=\,\displaystyle \sum _{i=1}^{{N}_{t}}\displaystyle \frac{1}{{\sigma }_{i}}=\sum _{i=1}^{{N}_{t}}\displaystyle \frac{1}{1-{\lambda }_{i}}\\ \,=\,\displaystyle \sum _{{\lambda }_{i}(t)\in {L}_{t}^{(1)}}\displaystyle \frac{1}{1-{\lambda }_{i}(t)}+\sum _{{\lambda }_{i}(t)\in {L}_{t}^{(2)}}\displaystyle \frac{1}{1-{\lambda }_{i}(t)}\\ \,=\,\displaystyle \sum _{{\lambda }_{i}(t)\in {L}_{t}^{(1)}}\left(\displaystyle \frac{1}{1-{\lambda }_{i,1}(t)}+\displaystyle \frac{1}{1-{\lambda }_{i,2}(t)}+\displaystyle \frac{1}{1-{\lambda }_{i,3}(t)}\right.\\ \,+\,\left.\displaystyle \frac{1}{1-{\lambda }_{i,4}(t)}\right)+\displaystyle \sum _{{\lambda }_{i}(t)\in {L}_{t}^{(2)}}\displaystyle \frac{1}{1-{\lambda }_{i}(t)}.\end{array}\end{eqnarray}$
Then, Ht can be simplified as (see appendix B for detail)
$\begin{eqnarray*}\begin{array}{lcl}{H}_{t} & = & \displaystyle \frac{1}{2}{\left(\displaystyle \frac{44{r}^{3}+78{r}^{2}+46r+8}{22{r}^{2}+8r+1}\right)}^{t}\\ & & +\,\displaystyle \frac{7\left({\textstyle \tfrac{24{r}^{2}+12r+2}{22{r}^{2}+8r+1}}+{\textstyle \tfrac{9{r}^{3}+22{r}^{2}+15r+3}{6{r}^{2}+5r+1}}\right)}{7-{\textstyle \tfrac{44{r}^{3}+78{r}^{2}+46r+8}{22{r}^{2}+8r+1}}}\\ & & \times \,\left({7}^{t}-{\left(\displaystyle \frac{44{r}^{3}+78{r}^{2}+46r+8}{22{r}^{2}+8r+1}\right)}^{t}\right)\\ & & +\,\displaystyle \frac{{\textstyle \tfrac{24{r}^{2}+12r+2}{22{r}^{2}+8r+1}}-{\textstyle \tfrac{9{r}^{3}+22{r}^{2}+15r+3}{6{r}^{2}+5r+1}}}{1-{\textstyle \tfrac{44{r}^{3}+78{r}^{2}+46r+8}{22{r}^{2}+8r+1}}}\\ & & \times \,\left(1-{\left(\displaystyle \frac{44{r}^{3}+78{r}^{2}+46r+8}{22{r}^{2}+8r+1}\right)}^{t}\right).\end{array}\end{eqnarray*}$
Let $C(r)=\tfrac{44{r}^{3}+78{r}^{2}+46r+8}{22{r}^{2}+8r+1}$, then
$\begin{eqnarray}\begin{array}{lcl}{H}_{t} & = & \displaystyle \frac{1}{2}{\left(C(r)\right)}^{t}+\displaystyle \frac{7\left({\textstyle \tfrac{24{r}^{2}+12r+2}{22{r}^{2}+8r+1}}+{\textstyle \tfrac{9{r}^{3}+22{r}^{2}+15r+3}{6{r}^{2}+5r+1}}\right)}{7-C(r)}\\ & & \times \,\left({7}^{t}-{\left(C(r)\right)}^{t}\right)\\ & & +\,\displaystyle \frac{{\textstyle \tfrac{24{r}^{2}+12r+2}{22{r}^{2}+8r+1}}-{\textstyle \tfrac{9{r}^{3}+22{r}^{2}+15r+3}{6{r}^{2}+5r+1}}}{1-C(r)}\times \left(1-{\left(C(r)\right)}^{t}\right).\end{array}\end{eqnarray}$
The function C(r) is continuous on the interval (0, 1], with ${\mathrm{lim}}_{r\to {0}^{+}}C(r)=8$ and $C(1)=\tfrac{176}{31}\lt 7$. By the intermediate value theorem, there exists a number r0 ∈ (0, 1] such that C(r0) = 7; figure 3 shows that this r0 is unique.
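The crossover value r0 is easy to locate numerically. The following sketch (our own bisection, using only the closed form of C(r) given above) brackets the root of C(r) = 7 on (0, 1]:

```python
def C(r):
    """C(r) = (44r^3 + 78r^2 + 46r + 8) / (22r^2 + 8r + 1)."""
    return (44*r**3 + 78*r**2 + 46*r + 8) / (22*r**2 + 8*r + 1)

# bisection for the root of C(r) = 7 on (0, 1]
lo, hi = 1e-9, 1.0
assert (C(lo) - 7) * (C(hi) - 7) < 0   # sign change: C(0+) = 8, C(1) = 176/31
for _ in range(60):
    mid = (lo + hi) / 2
    if (C(mid) - 7) * (C(lo) - 7) > 0:
        lo = mid
    else:
        hi = mid
r0 = (lo + hi) / 2                     # r0 is approximately 0.067
assert abs(C(r0) - 7) < 1e-9
```

Equivalently, r0 is the root of 44r³ − 76r² − 10r + 1 = 0 lying in (0, 1].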
For very large networks (i.e. ${N}_{t}\to \infty $), using equation (8), the leading term of Ht obeys
$\begin{eqnarray}{H}_{t}\sim \left\{\begin{array}{ll}{N}_{t}^{{\mathrm{log}}_{7}C(r)}, & {\rm{if}}\ 0\lt r\lt {r}_{0},\ {\rm{or}}\ {r}_{0}\lt r\leqslant 1,\\ {N}_{t}, & {\rm{if}}\ r={r}_{0}.\end{array}\right.\end{eqnarray}$

3.2. Kirchhoff index

Each edge in the complex network is replaced by an effective resistor reflecting its weight, which yields the electrical network H* associated with Gt. The resistance distance rij between vertices i and j of Gt is equal to the effective resistance between the two corresponding vertices of H*. Compared with traditional measures of network robustness, the Kirchhoff index better reflects the stability and connectivity of the network. The Kirchhoff index is the following weighted sum of all resistance distances:
$\begin{eqnarray*}{Kf}^{\ast }(H)=\displaystyle \sum _{i\lt j}{s}_{i}{s}_{j}{r}_{ij},\,i,j=1,2,\,\cdots ,\,{N}_{t}.\end{eqnarray*}$
We will calculate the Kirchhoff index from eigenvalues of Pt. The Kirchhoff index Kf*(Gt) can be expressed in terms of the spectrum of the Laplacian matrix of Gt as follows:
$\begin{eqnarray}{{Kf}}^{\ast }({G}_{t})=2{E}_{t}\displaystyle \sum _{i=1}^{{N}_{t}}\displaystyle \frac{1}{{\sigma }_{i}}=2{E}_{t}\sum _{i=1}^{{N}_{t}}\displaystyle \frac{1}{1-{\lambda }_{i}}=2{E}_{t}{H}_{t}.\end{eqnarray}$
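The spectral formula above can be cross-checked against a direct computation of resistance distances. The sketch below uses a small unweighted graph of our own choosing (so that the strengths are degrees and 2Et counts edge endpoints), obtaining rij from the Moore-Penrose pseudoinverse of the combinatorial Laplacian:

```python
import numpy as np

# small unweighted example graph (illustrative, not the paper's G_t)
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
d = W.sum(axis=1)
N = len(W)
E = W.sum() / 2

# resistance distances r_ij = L+_ii + L+_jj - 2 L+_ij from the
# pseudoinverse of the combinatorial Laplacian
Lc = np.diag(d) - W
Li = np.linalg.pinv(Lc)
R = np.add.outer(np.diag(Li), np.diag(Li)) - 2 * Li

kf_resist = sum(d[i] * d[j] * R[i, j]
                for i in range(N) for j in range(i + 1, N))

# spectral form: 2 E_t times the sum of 1/sigma over the nonzero
# eigenvalues sigma of I - P
P = W / d[:, None]
sigma = np.sort(np.linalg.eigvals(np.eye(N) - P).real)[1:]
kf_spec = 2 * E * (1 / sigma).sum()
assert np.isclose(kf_resist, kf_spec)
```

Both routes give the same number, which is the content of equation (10).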
For very large networks (i.e. ${N}_{t}\to \infty $), using equations (8) and (9), the leading term of Kf*(Gt) obeys
$\begin{eqnarray}{{Kf}}^{\ast }({G}_{t})\sim \left\{\begin{array}{ll}{N}_{t}^{1+{\mathrm{log}}_{7}C(r)}, & {\rm{if}}\ 0\lt r\lt {r}_{0},\ {\rm{or}}\ {r}_{0}\lt r\leqslant 1,\\ {N}_{t}^{2}, & {\rm{if}}\ r={r}_{0}.\end{array}\right.\end{eqnarray}$
Equations (9) and (11) give the following results: in the limit of large t, if r = r0 then the eigentime identity grows linearly with the network order while the Kirchhoff index grows quadratically. If 0 < r < r0, or r0 < r ≤ 1, then the eigentime identity and Kirchhoff index grow as power-law functions of the network order with exponents ${\mathrm{log}}_{7}C(r)$ and $1+{\mathrm{log}}_{7}C(r)$, respectively.

4. Conclusions

In conclusion, we first calculated the eigenvalues of ${P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}$. Then, based on the symmetry of the weighted self-similar network, we deduced the iterative relationship between the eigenvalues (and their multiplicities) of the transition matrix at two successive generations. Finally, we deduced accurate expressions for the eigentime identity and the Kirchhoff index.
To obtain the topological and dynamical properties of a network, one needs the eigenvalues of its Laplacian matrix. However, it is very difficult to obtain the Laplacian spectrum for general models, and many problems in this field remain open for exploration.

Acknowledgments

The authors express their gratitude to the referee for valuable comments. This research is supported by the National Natural Science Foundation of China (No. 11671172).

Appendix A

The problem of determining det(Pt+1xI) is reduced to finding ${\sum }_{i=1}^{{7}^{t}}{P}_{\alpha ,{\beta }_{{e}_{i}}}{({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI})}^{-1}{P}_{{\beta }_{{e}_{i}},\alpha }$. Firstly, given ${w}_{\alpha ,{\beta }_{{e}_{i}}}$, there exists a positive real number w such that
$\begin{eqnarray*}\displaystyle \frac{{w}_{\alpha ,{\beta }_{{e}_{i}}}}{w}=\left(\begin{array}{ccc}1 & 1 & 0\\ 1 & 0 & 1\end{array}\right).\end{eqnarray*}$
Based on the symmetric property of the self-similar network by the substitution rule (see figure 2) and equations (2) and (3), we have
$\begin{eqnarray}\begin{array}{l}\displaystyle \frac{{w}_{\alpha ,{\beta }_{{e}_{i}}}}{w}{\left({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI}\right)}^{-1}({s}_{{\beta }_{{e}_{i}}}^{-1}{w}_{{\beta }_{{e}_{i}},\alpha })\\ \quad \ =\ \displaystyle \frac{{w}_{\alpha ,{\beta }_{{e}_{i}}}}{w}{\left({s}_{{\beta }_{{e}_{i}}}^{-1}{w}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI}\right)}^{-1}({s}_{{\beta }_{{e}_{i}}}^{-1}{w}_{\alpha ,{\beta }_{{e}_{i}}}^{\top })\\ \quad \ =\ \displaystyle \frac{{w}_{\alpha ,{\beta }_{{e}_{i}}}}{w}{\left({w}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{{xs}}_{{\beta }_{{e}_{i}}}\right)}^{-1}{w}_{\alpha ,{\beta }_{{e}_{i}}}^{\top }.\end{array}\end{eqnarray}$
Here ${w}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{{xs}}_{{\beta }_{{e}_{i}}}$ is symmetric and thus ${\left({w}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{{xs}}_{{\beta }_{{e}_{i}}}\right)}^{-1}$ is symmetric. So we can obtain $\tfrac{{w}_{\alpha ,{\beta }_{{e}_{i}}}}{w}{\left({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI}\right)}^{-1}({s}_{{\beta }_{{e}_{i}}}^{-1}{w}_{{\beta }_{{e}_{i}},\alpha })$ = $\tfrac{{w}_{\alpha ,{\beta }_{{e}_{i}}}}{w}{\left({w}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{{xs}}_{{\beta }_{{e}_{i}}}\right)}^{-1}{w}_{\alpha ,{\beta }_{{e}_{i}}}^{\top }$ is symmetric.
Figure 2. Iterative construction method for the weighted self-similar network Gt by the substitution rule from t = 0 to t = 2.
Figure 3. Plot of the function C(r) in the interval (0, 1].
Suppose ${P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}$ has a characteristic polynomial Q(x), which can be written Q(x) = det $({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI})$. So, we can obtain that
$\begin{eqnarray*}{\left({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI}\right)}^{-1}=\displaystyle \frac{{\left({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI}\right)}^{\ast }}{\det \left({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI}\right)}=\displaystyle \frac{{\left({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI}\right)}^{\ast }}{Q(x)},\end{eqnarray*}$
where ${({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI})}^{* }$ is the adjoint matrix of ${P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI}$. Every element of ${({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI})}^{* }$ is a polynomial. Because the matrix is symmetric, we have
$\begin{eqnarray*}\displaystyle \frac{{w}_{\alpha ,{\beta }_{{e}_{i}}}}{w}{\left({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI}\right)}^{-1}({s}_{{\beta }_{{e}_{i}}}^{-1}{w}_{{\beta }_{{e}_{i}},\alpha })=\left(\begin{array}{cc}{f}_{1}(x) & g(x)\\ g(x) & {f}_{2}(x)\end{array}\right).\end{eqnarray*}$
There is a permutation $\sigma =\left(\begin{array}{cc}0 & 1\\ 1 & 0\end{array}\right)$ that satisfies
$\begin{eqnarray*}\begin{array}{l}\sigma \left(\displaystyle \frac{{w}_{\alpha ,{\beta }_{{e}_{i}}}}{w}{\left({w}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{{xs}}_{{\beta }_{{e}_{i}}}\right)}^{-1}{w}_{\alpha ,{\beta }_{{e}_{i}}}^{\top }\right)\sigma \\ \quad \ =\ \displaystyle \frac{{w}_{\alpha ,{\beta }_{{e}_{i}}}}{w}{\left({w}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{{xs}}_{{\beta }_{{e}_{i}}}\right)}^{-1}{w}_{\alpha ,{\beta }_{{e}_{i}}}^{\top }.\end{array}\end{eqnarray*}$
Thus, the polynomials f1(x) and f2(x) are equal, so we deduce that
$\begin{eqnarray}\displaystyle \frac{{w}_{\alpha ,{\beta }_{{e}_{i}}}}{w}{\left({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI}\right)}^{-1}({s}_{{\beta }_{{e}_{i}}}^{-1}{w}_{{\beta }_{{e}_{i}},\alpha })=\left(\begin{array}{cc}f(x) & g(x)\\ g(x) & f(x)\end{array}\right).\end{eqnarray}$
Secondly, we will calculate the characteristic polynomials Q(x), f(x) and g(x). Because of the symmetric property of the self-similar network by the substitution rule, we have
$\begin{eqnarray*}\begin{array}{rcl}\displaystyle \frac{{w}_{\alpha ,{\beta }_{{e}_{i}}}}{w} & = & \left(\begin{array}{ccc}1 & 1 & 0\\ 1 & 0 & 1\end{array}\right),\ {s}_{{\beta }_{{e}_{i}}}^{-1}{w}_{{\beta }_{{e}_{i}},\alpha }=\left(\begin{array}{cc}\displaystyle \frac{1}{2r+2} & \displaystyle \frac{1}{2r+2}\\ \displaystyle \frac{1}{2r+1} & 0\\ 0 & \displaystyle \frac{1}{2r+1}\end{array}\right),\\ {P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}} & = & \left(\begin{array}{ccc}0 & \displaystyle \frac{r}{2r+2} & \displaystyle \frac{r}{2r+2}\\ \displaystyle \frac{r}{2r+1} & 0 & \displaystyle \frac{r}{2r+1}\\ \displaystyle \frac{r}{2r+1} & \displaystyle \frac{r}{2r+1} & 0\end{array}\right).\end{array}\end{eqnarray*}$
So, we can obtain
$\begin{eqnarray}\begin{array}{l}Q(x)=\det \left({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-xI\right)\\ \,\,=\,\det \left(\begin{array}{ccc}-x & \displaystyle \frac{r}{2r+2} & \displaystyle \frac{r}{2r+2}\\ \displaystyle \frac{r}{2r+1} & -x & \displaystyle \frac{r}{2r+1}\\ \displaystyle \frac{r}{2r+1} & \displaystyle \frac{r}{2r+1} & -x\end{array}\right)\\ \,\,=\,2(4{r}^{3}{x}^{3}-3{r}^{3}x-{r}^{3}+8{r}^{2}{x}^{3}-2{r}^{2}x+5{rx}^{3}+{x}^{3}).\end{array}\end{eqnarray}$
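As a quick numerical cross-check (a sketch of our own, in units w = 1), the cubic above agrees exactly with $\det (x{s}_{{\beta }_{{e}_{i}}}-{w}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}})$, i.e. with the characteristic polynomial of ${P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}$ up to a constant (in x) normalizing factor:

```python
import numpy as np

r, x = 0.3, 0.9               # arbitrary sample values for the check
# weights among the 3 new nodes (w = 1) and their strengths
W_beta = np.array([[0, r, r],
                   [r, 0, r],
                   [r, r, 0]])
S_beta = np.diag([2*r + 2, 2*r + 1, 2*r + 1])

# the cubic Q(x) as stated in the text
Q = 2 * (4*r**3*x**3 - 3*r**3*x - r**3 + 8*r**2*x**3
         - 2*r**2*x + 5*r*x**3 + x**3)
assert np.isclose(Q, np.linalg.det(x * S_beta - W_beta))
```

The agreement holds identically in r and x, since both sides are the same polynomial.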
Now, we will calculate f(x) and g(x) as follows:
$\begin{eqnarray*}\begin{array}{l}\displaystyle \frac{{w}_{\alpha ,{\beta }_{{e}_{i}}}}{w}{\left({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI}\right)}^{-1}({s}_{{\beta }_{{e}_{i}}}^{-1}{w}_{{\beta }_{{e}_{i}},\alpha })\,=\,\left(\begin{array}{ccc}1 & 1 & 0\\ 1 & 0 & 1\end{array}\right){\left(\begin{array}{ccc}-x & \displaystyle \frac{r}{2r+2} & \displaystyle \frac{r}{2r+2}\\ \displaystyle \frac{r}{2r+1} & -x & \displaystyle \frac{r}{2r+1}\\ \displaystyle \frac{r}{2r+1} & \displaystyle \frac{r}{2r+1} & -x\end{array}\right)}^{-1}\,\times \,\left(\begin{array}{cc}\displaystyle \frac{1}{2r+2} & \displaystyle \frac{1}{2r+2}\\ \displaystyle \frac{1}{2r+1} & 0\\ 0 & \displaystyle \frac{1}{2r+1}\end{array}\right)\\ \ \ =\ \left(\begin{array}{cc}\displaystyle \frac{-x(2r+1)(2r+3x+4{rx})}{2(4{r}^{3}{x}^{3}-3{r}^{3}x-{r}^{3}+8{r}^{2}{x}^{3}-2{r}^{2}x+5{{rx}}^{3}+{x}^{3})} & \displaystyle \frac{-(4{r}^{2}{x}^{2}+6{r}^{2}x+2{r}^{2}+4{{rx}}^{2}+4{rx}+{x}^{2})}{2(4{r}^{3}{x}^{3}-3{r}^{3}x-{r}^{3}+8{r}^{2}{x}^{3}-2{r}^{2}x+5{{rx}}^{3}+{x}^{3})}\\ \displaystyle \frac{-(4{r}^{2}{x}^{2}+6{r}^{2}x+2{r}^{2}+4{{rx}}^{2}+4{rx}+{x}^{2})}{2(4{r}^{3}{x}^{3}-3{r}^{3}x-{r}^{3}+8{r}^{2}{x}^{3}-2{r}^{2}x+5{{rx}}^{3}+{x}^{3})} & \displaystyle \frac{-x(2r+1)(2r+3x+4{rx})}{2(4{r}^{3}{x}^{3}-3{r}^{3}x-{r}^{3}+8{r}^{2}{x}^{3}-2{r}^{2}x+5{{rx}}^{3}+{x}^{3})}\end{array}\right).\end{array}\end{eqnarray*}$
So, we have f(x) = −x(2r + 1)(2r + 3x + 4rx), g(x) = −(4r2x2 + 6r2x + 2r2 + 4rx2 + 4rx + x2).
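The formulas for f(x) and g(x) can be verified directly by evaluating the matrix product above at sample values of r and x. In this sketch (our own check, in units w = 1, with the strength of the length-2-path node equal to 2r + 2 and those of the length-3-path nodes equal to 2r + 1):

```python
import numpy as np

r, x = 0.4, 0.8               # arbitrary sample values for the check

# matrices from the substitution rule, in units of the edge weight w
A = np.array([[1.0, 1, 0], [1, 0, 1]])              # w_{alpha,beta}/w
P_bb = np.array([[0, r/(2*r + 2), r/(2*r + 2)],
                 [r/(2*r + 1), 0, r/(2*r + 1)],
                 [r/(2*r + 1), r/(2*r + 1), 0]])
SinvW = np.array([[1/(2*r + 2), 1/(2*r + 2)],
                  [1/(2*r + 1), 0],
                  [0, 1/(2*r + 1)]])                # s_beta^{-1} w_{beta,alpha}

T = A @ np.linalg.inv(P_bb - x * np.eye(3)) @ SinvW

Q = 2 * (4*r**3*x**3 - 3*r**3*x - r**3 + 8*r**2*x**3
         - 2*r**2*x + 5*r*x**3 + x**3)
f = -x * (2*r + 1) * (2*r + 3*x + 4*r*x)
g = -(4*r**2*x**2 + 6*r**2*x + 2*r**2 + 4*r*x**2 + 4*r*x + x**2)
assert np.allclose(T, np.array([[f, g], [g, f]]) / Q)
```

The product indeed has the symmetric form of equation (13), with f(x) on the diagonal and g(x) off it.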
Thirdly, it is difficult to calculate the eigenvalues of the matrix M directly, so we distinguish the following three cases.
Case 1. In Gt+1, nodes ${x}_{1},{x}_{2}\in \alpha ({x}_{1}\ne {x}_{2})$, and nodes x1, x2 are adjacent in Gt.
The corresponding entry of ${\sum }_{i=1}^{{7}^{t}}{P}_{\alpha ,{\beta }_{{e}_{i}}}$ ${({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI})}^{-1}{P}_{{\beta }_{{e}_{i}},\alpha }$ is as follows.
$\begin{eqnarray*}\begin{array}{l}{\left(\sum _{i=1}^{{7}^{t}}{P}_{\alpha ,{\beta }_{{e}_{i}}}{\left({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI}\right)}^{-1}{P}_{{\beta }_{{e}_{i}},\alpha }\right)}_{{x}_{1},{x}_{2}}\\ =\ {\left(\left(\begin{array}{cc}\displaystyle \frac{w}{{s}_{{x}_{1}}(t+1)} & 0\\ 0 & \displaystyle \frac{w}{{s}_{{x}_{2}}(t+1)}\end{array}\right)\displaystyle \frac{{w}_{\alpha ,{\beta }_{{e}_{i}}}}{w}{\left({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI}\right)}^{-1}{P}_{{\beta }_{{e}_{i}},\alpha }\right)}_{{x}_{1},{x}_{2}}\\ \ \ =\ \left(\left(\begin{array}{cc}\displaystyle \frac{w}{{s}_{{x}_{1}}(t+1)} & 0\\ 0 & \displaystyle \frac{w}{{s}_{{x}_{2}}(t+1)}\end{array}\right)\displaystyle \frac{{w}_{\alpha ,{\beta }_{{e}_{i}}}}{w}\right.\\ \ \ \ \ \ \ {\left.\times {\left({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI}\right)}^{-1}({s}_{{\beta }_{{e}_{i}}}^{-1}{w}_{{\beta }_{{e}_{i}},\alpha }\right)}_{{x}_{1},{x}_{2}}\end{array}\end{eqnarray*}$
$\begin{eqnarray*}\begin{array}{l}\ \ =\ {\left(\left(\begin{array}{cc}\displaystyle \frac{w}{{s}_{{x}_{1}}(t+1)} & 0\\ 0 & \displaystyle \frac{w}{{s}_{{x}_{2}}(t+1)}\end{array}\right)\left(\begin{array}{cc}\displaystyle \frac{f(x)}{Q(x)} & \displaystyle \frac{g(x)}{Q(x)}\\ \displaystyle \frac{g(x)}{Q(x)} & \displaystyle \frac{f(x)}{Q(x)}\end{array}\right)\right)}_{{x}_{1},{x}_{2}}\\ \ \ =\ {\left(\begin{array}{cc}\displaystyle \frac{w}{{s}_{{x}_{1}}(t+1)}\displaystyle \frac{f(x)}{Q(x)} & \displaystyle \frac{w}{{s}_{{x}_{1}}(t+1)}\displaystyle \frac{g(x)}{Q(x)}\\ \displaystyle \frac{w}{{s}_{{x}_{2}}(t+1)}\displaystyle \frac{g(x)}{Q(x)} & \displaystyle \frac{w}{{s}_{{x}_{2}}(t+1)}\displaystyle \frac{f(x)}{Q(x)}\end{array}\right)}_{{x}_{1},{x}_{2}}\\ \ \ =\ \displaystyle \frac{w}{{s}_{{x}_{1}}(t+1)}\displaystyle \frac{g(x)}{Q(x)}\\ \ \ =\ \displaystyle \frac{w}{2{s}_{{x}_{1}}(t)}\displaystyle \frac{g(x)}{Q(x)}\\ \ \ =\ \displaystyle \frac{1}{2}\displaystyle \frac{g(x)}{Q(x)}{\left({P}_{t}\right)}_{{x}_{1},{x}_{2}}.\end{array}\end{eqnarray*}$
Case 2. In Gt+1, nodes x1, x2 ∈ α, and x1 = x2 in Gt (the diagonal entries).
The corresponding entry of ${\sum }_{i=1}^{{7}^{t}}{P}_{\alpha ,{\beta }_{{e}_{i}}}{({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI})}^{-1}{P}_{{\beta }_{{e}_{i}},\alpha }$ is as follows.
$\begin{eqnarray*}\begin{array}{l}{\left({\sum }_{i=1}^{{7}^{t}}{P}_{\alpha ,{\beta }_{{e}_{i}}}{\left({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI}\right)}^{-1}{P}_{{\beta }_{{e}_{i}},\alpha }\right)}_{{x}_{1}}\\ \quad \ =\ {\left(\displaystyle \frac{1}{{s}_{{x}_{1}}(t+1)}\sum \displaystyle \frac{f(x)}{Q(x)}\right)}_{{x}_{1}}\\ \quad \ =\ {\left(\displaystyle \frac{1}{{s}_{{x}_{1}}(t+1)}{s}_{{x}_{1}}(t)\displaystyle \frac{f(x)}{Q(x)}\right)}_{{x}_{1}}\\ \quad \ =\ \displaystyle \frac{1}{2}\displaystyle \frac{f(x)}{Q(x)}.\end{array}\end{eqnarray*}$
Case 3. In Gt+1, nodes ${x}_{1},{x}_{2}\in \alpha ({x}_{1}\ne {x}_{2})$, and nodes x1, x2 are not adjacent in Gt.
In this case ${P}_{\alpha ,{\beta }_{{e}_{i}}}=0$ for every $i$, so
$\begin{eqnarray*}{\left(\sum _{i=1}^{{7}^{t}}{P}_{\alpha ,{\beta }_{{e}_{i}}}{\left({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-{xI}\right)}^{-1}{P}_{{\beta }_{{e}_{i}},\alpha }\right)}_{{x}_{1},{x}_{2}}=0.\end{eqnarray*}$
Finally, from the above three cases, we can obtain that
$\begin{eqnarray}\,\begin{array}{l}\det ({P}_{t+1}-xI)=\det \left(-\displaystyle \frac{1}{2}\displaystyle \frac{f(x)}{Q(x)}I-\displaystyle \frac{1}{2}\displaystyle \frac{g(x)}{Q(x)}{P}_{t}-xI\right)\displaystyle \prod _{i=1}^{{7}^{t}}\det \left({P}_{{\beta }_{{e}_{i}},{\beta }_{{e}_{i}}}-xI\right)\\ \,\,=\,\det \left(\displaystyle \frac{{\textstyle \tfrac{g(x)}{2}}{P}_{t}+\left({\textstyle \tfrac{f(x)}{2}}+xQ(x)\right)I}{-Q(x)}\right){\left(Q(x)\right)}^{{7}^{t}}\\ \,\,=\,\det \left(\displaystyle \frac{g(x)}{2}{P}_{t}+\left(\displaystyle \frac{f(x)}{2}+xQ(x)\right)I\right){\left(Q(x)\right)}^{{7}^{t}-{N}_{t}}\\ \,\,=\,\displaystyle \prod _{i=1}^{{N}_{t}}\left(\displaystyle \frac{g(x)}{2}{\lambda }_{t}+\displaystyle \frac{f(x)}{2}+xQ(x)\right){\left(Q(x)\right)}^{{7}^{t}-{N}_{t}}\\ \,\,=\,{\left(\displaystyle \frac{1}{2}\right)}^{{N}_{t}}\displaystyle \prod _{i=1}^{{N}_{t}}\left[(16{r}^{3}+32{r}^{2}+20r+4){x}^{4}-(8{r}^{3}+16{r}^{2}+10r+3+4{r}^{3}{\lambda }_{t}+4r{\lambda }_{t}+{\lambda }_{t}){x}^{2}\right.\\ \,\,\left.-\,(4{r}^{3}+4{r}^{2}+2r+6{r}^{2}{\lambda }_{t}+4r{\lambda }_{t})x-2{r}^{2}{\lambda }_{t}\right]{\left[2(4{r}^{3}{x}^{3}-3{r}^{3}x-{r}^{3}+8{r}^{2}{x}^{3}-2{r}^{2}x+5{rx}^{3}+{x}^{3})\right]}^{{7}^{t}-{N}_{t}}\\ \,\,=\,{\left(\displaystyle \frac{1}{2}\right)}^{{\textstyle \tfrac{{7}^{t}+3}{2}}}\displaystyle \prod _{i=1}^{{\textstyle \tfrac{{7}^{t}+3}{2}}}\left[(16{r}^{3}+32{r}^{2}+20r+4){x}^{4}-(8{r}^{3}+16{r}^{2}+10r+3+4{r}^{3}{\lambda }_{t}+4r{\lambda }_{t}+{\lambda }_{t}){x}^{2}\right.\\ \,\,\left.-\,(4{r}^{3}+4{r}^{2}+2r+6{r}^{2}{\lambda }_{t}+4r{\lambda }_{t})x-2{r}^{2}{\lambda }_{t}\right]{\left[2(4{r}^{3}{x}^{3}-3{r}^{3}x-{r}^{3}+8{r}^{2}{x}^{3}-2{r}^{2}x+5{rx}^{3}+{x}^{3})\right]}^{{\textstyle \tfrac{{7}^{t}-3}{2}}}\\ \,\,=\,{\left(\displaystyle \frac{1}{2}\right)}^{{\textstyle \tfrac{{7}^{t}+3}{2}}}\displaystyle \prod _{i=1}^{{\textstyle \tfrac{{7}^{t}+3}{2}}}\left[(16{r}^{3}+32{r}^{2}+20r+4){x}^{4}-(8{r}^{3}+16{r}^{2}+10r+3+4{r}^{3}{\lambda }_{t}+4r{\lambda }_{t}+{\lambda }_{t}){x}^{2}\right.\\ \,\,\left.-\,(4{r}^{3}+4{r}^{2}+2r+6{r}^{2}{\lambda }_{t}+4r{\lambda }_{t})x-2{r}^{2}{\lambda }_{t}\right]\left[\left(x-\displaystyle \frac{r+r\sqrt{(9r+5)(r+1)}+{r}^{2}}{2(2{r}^{2}+3r+1)}\right)\right.\\ \,\,{\left.\times \,\left(x-\displaystyle \frac{r-r\sqrt{(9r+5)(r+1)}+{r}^{2}}{2(2{r}^{2}+3r+1)}\right)\left(x+\displaystyle \frac{r}{2r+1}\right)\right]}^{{\textstyle \tfrac{{7}^{t}-3}{2}}},\end{array}\,\end{eqnarray}$
where λt are eigenvalues of the Markov operator matrix Pt.
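As a sanity check on the factorization above, the three generation-independent eigenvalues appearing in the last line can be verified to be roots of the cubic factor $Q(x)=2(4{r}^{3}{x}^{3}-3{r}^{3}x-{r}^{3}+8{r}^{2}{x}^{3}-2{r}^{2}x+5{rx}^{3}+{x}^{3})$. The sketch below is our own numerical verification (the function names are ours, not the paper's):

```python
import math

def Q(x, r):
    # cubic factor from the determinant factorization above:
    # Q(x) = 2(4r^3 x^3 - 3r^3 x - r^3 + 8r^2 x^3 - 2r^2 x + 5r x^3 + x^3)
    return 2 * ((4*r**3 + 8*r**2 + 5*r + 1) * x**3
                - (3*r**3 + 2*r**2) * x - r**3)

def fixed_roots(r):
    # the three closed-form roots from the last line of the factorization
    s = r * math.sqrt((9*r + 5) * (r + 1))
    d = 2 * (2*r**2 + 3*r + 1)
    return [(r + r**2 + s) / d, (r + r**2 - s) / d, -r / (2*r + 1)]

# each closed-form root annihilates Q(x) for several sample weights r
for r in (0.3, 1.0, 2.5):
    for x in fixed_roots(r):
        assert abs(Q(x, r)) < 1e-9, (r, x)
```

For instance, at r = 1 the cubic reduces to 2(18x³ − 5x − 1), and x = −1/3 is visibly a root.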

Appendix B

The analytical expression for Ht is straightforward to derive. Applying Vieta’s formulas to the quartic factor in equation (5), we obtain the following relations:
$\begin{eqnarray}{\lambda }_{i,1}(t)+{\lambda }_{i,2}(t)+{\lambda }_{i,3}(t)+{\lambda }_{i,4}(t)=0,\,\end{eqnarray}$
$\begin{eqnarray}\begin{array}{l}{\lambda }_{i,1}(t){\lambda }_{i,2}(t)+{\lambda }_{i,1}(t){\lambda }_{i,3}(t)+{\lambda }_{i,1}(t){\lambda }_{i,4}(t)+{\lambda }_{i,2}(t){\lambda }_{i,3}(t)\\ \qquad \ +\ {\lambda }_{i,2}(t){\lambda }_{i,4}(t)+{\lambda }_{i,3}(t){\lambda }_{i,4}(t)\\ \quad \ =\ \displaystyle \frac{-(8{r}^{3}+16{r}^{2}+10r+3+4{r}^{3}{\lambda }_{t}+4r{\lambda }_{t}+{\lambda }_{t})}{16{r}^{3}+32{r}^{2}+20r+4},\end{array}\end{eqnarray}$
$\begin{eqnarray}\begin{array}{l}{\lambda }_{i,1}(t){\lambda }_{i,2}(t){\lambda }_{i,3}(t)+{\lambda }_{i,1}(t){\lambda }_{i,2}(t){\lambda }_{i,4}(t)\\ \qquad \ +\ {\lambda }_{i,1}(t){\lambda }_{i,3}(t){\lambda }_{i,4}(t)+{\lambda }_{i,2}(t){\lambda }_{i,3}(t){\lambda }_{i,4}(t)\\ \quad \ =\ \displaystyle \frac{4{r}^{3}+4{r}^{2}+2r+6{r}^{2}{\lambda }_{t}+4r{\lambda }_{t}}{16{r}^{3}+32{r}^{2}+20r+4},\end{array}\end{eqnarray}$
$\begin{eqnarray}{\lambda }_{i,1}(t){\lambda }_{i,2}(t){\lambda }_{i,3}(t){\lambda }_{i,4}(t)=\displaystyle \frac{-2{r}^{2}{\lambda }_{t}}{16{r}^{3}+32{r}^{2}+20r+4}.\,\end{eqnarray}$
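These four relations can be cross-checked numerically: the roots of the quartic factor in equation (5) must reproduce the elementary symmetric functions dictated by its coefficients. A short sketch of our own check (assumes numpy is available; names are ours):

```python
import numpy as np
from itertools import combinations

def quartic_coeffs(r, lam):
    # coefficients of the quartic factor in the determinant factorization,
    # ordered from x^4 down to the constant term
    a4 = 16*r**3 + 32*r**2 + 20*r + 4
    a3 = 0.0  # no cubic term, hence the sum of the roots vanishes
    a2 = -(8*r**3 + 16*r**2 + 10*r + 3 + 4*r**3*lam + 4*r*lam + lam)
    a1 = -(4*r**3 + 4*r**2 + 2*r + 6*r**2*lam + 4*r*lam)
    a0 = -2*r**2*lam
    return [a4, a3, a2, a1, a0]

def sym(roots, k):
    # k-th elementary symmetric function of the roots
    return sum(np.prod(c) for c in combinations(roots, k))

r, lam = 0.8, 0.4  # sample weight parameter and parent eigenvalue
a4, a3, a2, a1, a0 = quartic_coeffs(r, lam)
roots = np.roots([a4, a3, a2, a1, a0])
# Vieta: e1 = -a3/a4 = 0, e2 = a2/a4, e3 = -a1/a4, e4 = a0/a4
assert abs(sym(roots, 1)) < 1e-8
assert abs(sym(roots, 2) - a2/a4) < 1e-8
assert abs(sym(roots, 3) + a1/a4) < 1e-8
assert abs(sym(roots, 4) - a0/a4) < 1e-8
```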
From equations (16)–(19) we have
$\begin{eqnarray}\begin{array}{l}\sum _{{\lambda }_{i}(t)\in {L}_{t}^{(1)}}\left(\displaystyle \frac{1}{1-{\lambda }_{i,1}(t)}+\displaystyle \frac{1}{1-{\lambda }_{i,2}(t)}\right.\\ \qquad \ +\ \left.\displaystyle \frac{1}{1-{\lambda }_{i,3}(t)}+\displaystyle \frac{1}{1-{\lambda }_{i,4}(t)}\right)\\ \quad \ =\ \sum _{{\lambda }_{i}(t)\in {L}_{t}^{(1)}}\displaystyle \frac{(1-{\lambda }_{t})(24{r}^{2}+12r+2)+44{r}^{3}+78{r}^{2}+46r+8}{(1-{\lambda }_{t})(22{r}^{2}+8r+1)}\\ \quad \ =\ \sum _{{\lambda }_{i}(t)\in {L}_{t}^{(1)}}\displaystyle \frac{24{r}^{2}+12r+2}{22{r}^{2}+8r+1}\\ \qquad \ +\ \sum _{{\lambda }_{i}(t)\in {L}_{t}^{(1)}}\displaystyle \frac{44{r}^{3}+78{r}^{2}+46r+8}{(1-{\lambda }_{t})(22{r}^{2}+8r+1)}\\ \quad \ =\ \displaystyle \frac{24{r}^{2}+12r+2}{22{r}^{2}+8r+1}\cdot \displaystyle \frac{{7}^{t}+3}{2}\\ \qquad \ +\ \displaystyle \frac{44{r}^{3}+78{r}^{2}+46r+8}{22{r}^{2}+8r+1}{H}_{t-1},\end{array}\end{eqnarray}$
and
$\begin{eqnarray}\begin{array}{l}\sum _{{\lambda }_{i}(t)\in {L}_{t}^{(2)}}\displaystyle \frac{1}{1-{\lambda }_{i}(t)}=\sum _{{\lambda }_{i}(t)\in {L}_{t}^{(2)}}\left(\displaystyle \frac{1}{1-\tfrac{r+r\sqrt{(9r+5)(r+1)}+{r}^{2}}{2(2{r}^{2}+3r+1)}}\right.\\ \qquad \ +\ \left.\displaystyle \frac{1}{1-\tfrac{r-r\sqrt{(9r+5)(r+1)}+{r}^{2}}{2(2{r}^{2}+3r+1)}}+\displaystyle \frac{1}{1+\tfrac{r}{2r+1}}\right)\\ \quad \ =\ \sum _{{\lambda }_{i}(t)\in {L}_{t}^{(2)}}\displaystyle \frac{9{r}^{3}+22{r}^{2}+15r+3}{6{r}^{2}+5r+1}\\ \quad \ =\ \displaystyle \frac{9{r}^{3}+22{r}^{2}+15r+3}{6{r}^{2}+5r+1}\cdot \displaystyle \frac{{7}^{t}-3}{2}.\end{array}\end{eqnarray}$
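The constant obtained in equation (21) can be checked directly: for any r > 0, summing 1/(1 − λ) over the three generation-independent eigenvalues should reproduce (9r³ + 22r² + 15r + 3)/(6r² + 5r + 1). A minimal numerical sketch of our own:

```python
import math

def fixed_sum(r):
    # sum of 1/(1 - lambda) over the three generation-independent eigenvalues
    s = r * math.sqrt((9*r + 5) * (r + 1))
    d = 2 * (2*r**2 + 3*r + 1)
    lams = [(r + r**2 + s) / d, (r + r**2 - s) / d, -r / (2*r + 1)]
    return sum(1.0 / (1.0 - lam) for lam in lams)

for r in (0.5, 1.0, 3.0):
    target = (9*r**3 + 22*r**2 + 15*r + 3) / (6*r**2 + 5*r + 1)
    assert abs(fixed_sum(r) - target) < 1e-9
```

At r = 1, for example, the three eigenvalues are (2 ± √28)/12 and −1/3, and both sides equal 49/12.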
Substituting equations (20) and (21) into equation (7), we obtain
$\begin{eqnarray}\begin{array}{rcl}{H}_{t} & = & \sum _{i=1}^{{N}_{t}}\displaystyle \frac{1}{{\sigma }_{i}}=\sum _{i=1}^{{N}_{t}}\displaystyle \frac{1}{1-{\lambda }_{i}}\\ & = & \ \sum _{{\lambda }_{i}(t)\in {L}_{t}^{(1)}}\left(\displaystyle \frac{1}{1-{\lambda }_{i,1}(t)}+\displaystyle \frac{1}{1-{\lambda }_{i,2}(t)}\right.\\ & & +\ \left.\displaystyle \frac{1}{1-{\lambda }_{i,3}(t)}+\displaystyle \frac{1}{1-{\lambda }_{i,4}(t)}\right)+\sum _{{\lambda }_{i}(t)\in {L}_{t}^{(2)}}\displaystyle \frac{1}{1-{\lambda }_{i}(t)}\\ & = & \ \displaystyle \frac{24{r}^{2}+12r+2}{22{r}^{2}+8r+1}\cdot \displaystyle \frac{{7}^{t}+3}{2}\\ & & +\ \displaystyle \frac{44{r}^{3}+78{r}^{2}+46r+8}{22{r}^{2}+8r+1}{H}_{t-1}\\ & & +\ \displaystyle \frac{9{r}^{3}+22{r}^{2}+15r+3}{6{r}^{2}+5r+1}\cdot \displaystyle \frac{{7}^{t}-3}{2}\\ & = & \ \displaystyle \frac{44{r}^{3}+78{r}^{2}+46r+8}{22{r}^{2}+8r+1}{H}_{t-1}+\left(\displaystyle \frac{24{r}^{2}+12r+2}{22{r}^{2}+8r+1}\right.\\ & & +\ \left.\displaystyle \frac{9{r}^{3}+22{r}^{2}+15r+3}{6{r}^{2}+5r+1}\right)\displaystyle \frac{{7}^{t}}{2}\\ & & +\ \left(\displaystyle \frac{24{r}^{2}+12r+2}{22{r}^{2}+8r+1}-\displaystyle \frac{9{r}^{3}+22{r}^{2}+15r+3}{6{r}^{2}+5r+1}\right)\displaystyle \frac{3}{2}.\end{array}\end{eqnarray}$
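The recursion in equation (22) is easy to iterate numerically. The sketch below is our own illustration of its coefficient structure; the starting value H_{t−1} passed in is a placeholder, since the initial condition is fixed elsewhere in the paper:

```python
from fractions import Fraction as F

def next_H(H_prev, t, r):
    """One step of the recursion from equation (22):

    H_t = A(r) H_{t-1} + B(r) (7^t + 3)/2 + C(r) (7^t - 3)/2.

    Here r is taken as an integer so that exact rational
    arithmetic applies; H_prev is a placeholder input.
    """
    A = F(44*r**3 + 78*r**2 + 46*r + 8, 22*r**2 + 8*r + 1)
    B = F(24*r**2 + 12*r + 2, 22*r**2 + 8*r + 1)
    C = F(9*r**3 + 22*r**2 + 15*r + 3, 6*r**2 + 5*r + 1)
    return A * H_prev + B * F(7**t + 3, 2) + C * F(7**t - 3, 2)
```

For r = 1 the homogeneous coefficient is A = 176/31 ≈ 5.68 < 7, so the inhomogeneous 7^t terms dominate the iteration, which suggests H_t grows like 7^t for this parameter value.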
References

[1] Albert R and Barabási A 2002 Rev. Mod. Phys. 74 47
[2] Newman M 2003 SIAM Rev. 45 167–256
[3] Albert R, Jeong H and Barabási A 2000 Nature 406 378–382
[4] Rubinov M and Sporns O 2010 NeuroImage 52 1059–1069
[5] Motter A, Zhou C and Kurths J 2004 Vaccine 22 1820–1825
[6] Chen Y F, Dai M F and Wang X Q 2018 Fractals 26 1850017
[7] Dai M F, Wang X Q and Zong Y 2017 Fractals 25 1750049
[8] Kuperman M and Abramson G 2001 Phys. Rev. Lett. 86 2909–2912
[9] Fronczak A and Fronczak P 2004 Phys. Rev. E 70 056110
[10] Zhang P, Wang J and Li X 2012 Physica A 387 6869–6875
[11] Barthélemy M 2004 Eur. Phys. J. B 38 163–168
[12] Cowley M, Smart J and Rubinstein M 2001 Nature 411 480–484
[13] Albert R, Jeong H and Barabási A 1999 Nature 401 130–131
[14] Yang H and Huang H J 2004 Transp. Res. Part B 38 1–15
[15] Bolton P and Dewatripont M 1994 Q. J. Econ. 109 809–839
[16] Lee C, Loh P S and Sudakov B 2013 SIAM J. Discrete Math. 27 959–972
[17] Qi Z H, Li L and Zhang Z M 2011 J. Theor. Biol. 280 10–18
[18] Dai M F, Ju T T and Liu J Y 2018 Int. J. Mod. Phys. B 32 1850353
[19] Dai M F, Wang X Q and Chen Y F 2018 Physica A 505 1–8
[20] Dai M F, He J J and Zong Y 2018 Chaos 28 043110
[21] Xie P, Zhang Z Z and Comellas F 2015 Appl. Math. Comput. 273 1123–1129
[22] Julaiti A, Wu B and Zhang Z 2013 J. Chem. Phys. 138 204116
[23] Jia Q and Tang W 2017 IEEE Trans. Circuits Syst. I 65 723–732
[24] Jia Q and Tang W 2018 IEEE Trans. Circuits Syst. II 65 1969–1973
[25] Li H B, Huang T Z and Shen S Q 2007 Linear Algebra Appl. 420 235–247
[26] Pucci C 1966 Proc. Am. Math. Soc. 17 788–795
[27] Xia S C and Xi L F 2004 Chin. J. Mech. Eng. 11 405–410
[28] Dai M F, Tang H L and Zou J H 2017 Int. J. Mod. Phys. B 32 1850021
[29] Shan L, Li H and Zhang Z 2017 Theor. Comput. Sci. 677 12–30
[30] Chen H and Zhang F 2007 Discrete Appl. Math. 155 654–661
[31] Mao Y 2004 J. Appl. Probab. 41 1071–1080
[32] Gao X, Luo Y and Liu W 2012 Discrete Appl. Math. 160 560–565
[33] Li T T, Xi L F and Ye Q Q 2018 Fractals 26 1850064