
Generation function for one-loop tensor reduction

  • Bo Feng
  • Beijing Computational Science Research Center, Beijing 100084, China
  • Peng Huanwu Center for Fundamental Theory, Hefei, Anhui, 230026, China

Received date: 2022-10-04

  Revised date: 2022-11-11

  Accepted date: 2022-11-14

  Online published: 2023-02-06

Copyright

© 2023 Institute of Theoretical Physics CAS, Chinese Physical Society and IOP Publishing

Abstract

For loop integrals, reduction is the standard method. Having an efficient way to find reduction coefficients is an important topic in scattering amplitudes. In this paper, we present the generation functions of reduction coefficients for general one-loop integrals with an arbitrary tensor rank in their numerator.

Cite this article

Bo Feng. Generation function for one-loop tensor reduction[J]. Communications in Theoretical Physics, 2023, 75(2): 025203. DOI: 10.1088/1572-9494/aca253

1. Motivation

As the bridge connecting the theoretical framework and experimental data, the scattering amplitude has always been one of the central concepts in quantum field theory. Its efficient computation (including higher-order contributions in perturbation theory) has become necessary, especially with the advent of the LHC experiment [1]. The on-shell program1(1 There are many works on this topic. For an introduction, please see the following two books [2, 3]. Some early works with the on-shell concept are [4-9].) in scattering amplitudes, developed in response to this challenge, has made the calculation of one-loop amplitudes straightforward [10].
In general, a loop computation can be divided into two parts: the construction of the integrand and then the integration itself. Although the construction of integrands using Feynman diagrams is well established, it is sometimes not the most economical way to proceed. Looking for better ways to construct integrands is one of the current research directions2(2For example, the unitarity cut method proposed in [4, 5, 7] uses on-shell tree-level amplitudes as input.). For the second part, i.e. doing the integration, a very useful method is reduction. It has been shown that any integral can be written as a linear combination of some basis integrals (called the master integrals) with coefficients that are rational functions of the external momenta, polarization vectors, masses and the spacetime dimension. Using the idea of reduction, loop integration can be separated into two parallel tasks: the computation of the master integrals and an algorithm to efficiently extract the reduction coefficients. Progress in either task enables us to carry out more and more complicated integrations (for a nice introduction to recent developments, see [11]).
Reduction can be classified into two categories: reduction at the integrand level and reduction at the integral level. Reduction at the integrand level can be systematically solved using computational algebraic geometry [12-15]. For reduction at the integral level, the first proposal is the famous Passarino-Veltman reduction (PV-reduction) method [16]. There are other proposals, such as the integration-by-parts (IBP) method [17-23], the unitarity cut method [4, 5, 7, 24-30], and intersection numbers [31-36]. Although there have been many developments in reduction at the integral level, it is still desirable to improve these methods, given the complexity of current computations.
In recent papers [37-41] we have introduced the auxiliary vector R to improve the traditional PV-reduction method. Using R we can construct differential operators and then establish algebraic recurrence relations that determine the reduction coefficients analytically. This method has also been generalized to the two-loop sunset diagram (see [40]), where the original PV-reduction method is hard to apply. When the auxiliary vector R is used in the IBP method, the efficiency of reduction is also improved, as shown in [42, 43].
Although the advantage of using the auxiliary vector R has been demonstrated from various aspects, the algebraic recursive structure still makes it hard to gain a general understanding of the reduction coefficients for higher and higher tensor ranks in the numerators of integrands. Could we learn more about the analytical structure of the reduction coefficients with this method? As we will show in this paper, we can indeed learn more if we probe the reduction problem from a new angle. The key idea is the concept of the generation function. In fact, the generation function is well known in physics and mathematics. Sometimes the coefficients of a series are hard to guess, but the series itself is easy to write down. For example, the Hermite polynomials Hn(x) can be read off from the generation function
$\begin{eqnarray}{{\rm{e}}}^{2{tx}-{t}^{2}}=\displaystyle \sum _{n=0}^{\infty }{{\rm{H}}}_{n}(x)\frac{{t}^{n}}{n!}.\,\,\,\,\end{eqnarray}$
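As a small illustration of how a generation function encodes its coefficients (a sketch added here for readers' convenience, not part of the original derivation; the truncation order is an arbitrary choice), one can check this expansion directly with sympy:

```python
# Illustrative check (not from the paper): the Taylor coefficients of
# exp(2*t*x - t**2) in t are the Hermite polynomials H_n(x) divided by n!.
import sympy as sp

t, x = sp.symbols('t x')
series = sp.series(sp.exp(2*t*x - t**2), t, 0, 6).removeO()

for n in range(6):
    coeff = series.coeff(t, n)
    assert sp.simplify(coeff - sp.hermite(n, x)/sp.factorial(n)) == 0
print("coefficients of t^n reproduce H_n(x)/n! for n < 6")
```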
Thus we can ask: if we sum the reduction coefficients of different tensor ranks together, could we get a simpler answer? For the reduction problem, the numerator of tensor rank k is given by ${\left(2{\ell }\cdot R\right)}^{k}$ in our method, and we need to see how to combine these contributions. There are many ways to do this. Two typical ones are
$\begin{eqnarray}\begin{array}{rcl}{\psi }_{1}(t) & = & \displaystyle \sum _{n=0}^{\infty }{t}^{n}{\left(2\ell \cdot R\right)}^{n}=\frac{1}{1-t(2\ell \cdot R)},\\ {\psi }_{2}(t) & = & \displaystyle \sum _{n=0}^{\infty }\frac{{\left(2\ell \cdot R\right)}^{n}{t}^{n}}{n!}={{\rm{e}}}^{t(2\ell \cdot R)}.\end{array}\end{eqnarray}$
In this paper, we will focus on the generation function of the type ψ2(t) because it is invariant under the differential action, i.e. $\frac{{\mathrm{de}}^{x}}{{\rm{d}}x}={{\rm{e}}}^{x}$. We will see that the generation functions satisfy simple differential equations, which can be solved analytically.
The plan of the paper is as follows. In section two, we present the generation function of reduction coefficients for the tadpole integral. The tadpole example is a little bit trivial, thus in section three we discuss carefully how to find generation functions for the bubble integral, which is the simplest nontrivial example. With the experience obtained for the bubble, we present the solution for general one-loop integrals in section four. To demonstrate the framework established in section four, we briefly discuss the triangle example in section five. Finally, a brief summary and discussion are given in section six. Some technical details are collected in the appendix: in appendix A the solution of two typical differential equations is presented, while the solution of the recursion relation for the bubble is explained in appendix B.

2. Tadpole

With the above brief discussion, we start from the simplest case, i.e. the tadpole topology, to discuss the generation function. Summing over all tensor ranks properly we have3(3The mass dimension of the parameter t is −2.)
$\begin{eqnarray}\begin{array}{l}{I}_{\mathrm{tad}}(t,R)\equiv \int {\rm{d}}\ell \frac{{{\rm{e}}}^{t(2\ell \cdot R)}}{{\ell }^{2}-{M}^{2}}\\ \quad ={c}_{1\to 1}(t,R,M)\int {\rm{d}}\ell \frac{1}{{\ell }^{2}-{M}^{2}},\,\,\,\end{array}\end{eqnarray}$
where c1→1(t, R, M) is the generation function of reduction coefficients and, for simplicity, we have defined $\int {\rm{d}}{\ell }_{i}(\bullet )\equiv \int \frac{{{\rm{d}}}^{D}{\ell }_{i}}{{\rm{i}}{\pi }^{D/2}}(\bullet )$. To find a closed analytic expression for c1→1(t, R, M) we establish the corresponding differential equation. Acting with ∂R · ∂R we have
$\begin{eqnarray}\begin{array}{l}\frac{\partial }{\partial R}\cdot \frac{\partial }{\partial R}{I}_{\mathrm{tad}}(t,R)=\int {\rm{d}}\ell \frac{4{t}^{2}{\ell }^{2}{{\rm{e}}}^{t(2\ell \cdot R)}}{{\ell }^{2}-{M}^{2}}\\ \quad =4{t}^{2}{M}^{2}{I}_{\mathrm{tad}}(t,R)\end{array}\end{eqnarray}$
at one side, and $\left(\displaystyle \frac{\partial }{\partial R}\cdot \displaystyle \frac{\partial }{\partial R}{c}_{1\to 1}(t,R,M)\right)\int {\rm{d}}{\ell }\displaystyle \frac{1}{{{\ell }}^{2}-{M}^{2}}$ at the other side, thus we get
$\begin{eqnarray}\displaystyle \frac{\partial }{\partial R}\cdot \displaystyle \frac{\partial }{\partial R}{c}_{1\to 1}(t,R,M)=4{t}^{2}{M}^{2}{c}_{1\to 1}(t,R,M).\,\,\,\end{eqnarray}$
By Lorentz invariance, c1→1(t, R, M) is a function of r = R · R only, i.e. c1→1(t, R, M) = f(r). It is easy to see that the differential equation (2.3) then becomes
$\begin{eqnarray}\boxed{4{rf}^{\prime\prime} +2{Df}^{\prime} -4{t}^{2}{M}^{2}f\,=\,0},\,\,\,\end{eqnarray}$
which is of the form (A.14) studied in appendix A. This second-order differential equation has two singular points, r = 0 and r = ∞, where the singular point r = 0 is canonical. The solution has been given in (A.33). Putting A = 4, B = 2D, C = −4t2M2 in (A.29) and imposing the boundary condition c0 = 1, we immediately get
$\begin{eqnarray}\begin{array}{l}{c}_{1\to 1}(t,R,M)=\sum _{n=0}^{\infty }\displaystyle \frac{{\left({t}^{2}{M}^{2}r\right)}^{n}}{n!{\left(\tfrac{D}{2}\right)}_{n}}\\ \quad ={\,}_{0}{F}_{1}\left(\varnothing ;\displaystyle \frac{D}{2};{t}^{2}{M}^{2}r\right).\,\,\,\end{array}\end{eqnarray}$
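For readers who want to check this solution explicitly, the following sympy sketch (added here only as an illustration; the truncation order N = 8 is an arbitrary choice) verifies order by order in r that the truncated series (2.5) satisfies the differential equation (2.4):

```python
# Sketch (not from the paper): check that c_{1->1} = sum_n (t^2 M^2 r)^n / (n! (D/2)_n)
# satisfies 4 r f'' + 2 D f' - 4 t^2 M^2 f = 0 order by order in r.
import sympy as sp

r, t, M, D = sp.symbols('r t M D', positive=True)
N = 8  # truncation order (arbitrary)

f = sum((t**2*M**2*r)**n / (sp.factorial(n)*sp.rf(D/2, n)) for n in range(N + 1))
residual = sp.expand(4*r*sp.diff(f, r, 2) + 2*D*sp.diff(f, r) - 4*t**2*M**2*f)

# only the truncation term at order r^N survives; all lower orders must cancel
assert all(sp.simplify(residual.coeff(r, k)) == 0 for k in range(N))
print("(2.4) is satisfied order by order up to r^%d" % (N - 1))
```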
Before ending this section, let us mention that when we do the reduction for other topologies, we will meet the reduction of $\int {\rm{d}}\ell \frac{{{\rm{e}}}^{t(2\ell \cdot R)}}{{\left(\ell -K\right)}^{2}-{M}^{2}}$. Using a momentum shift, it is easy to see that
$\begin{eqnarray}\begin{array}{l}\int {\rm{d}}\ell \frac{{{\rm{e}}}^{t(2\ell \cdot R)}}{{\left(\ell -K\right)}^{2}-{M}^{2}}=\int {\rm{d}}\ell \frac{{{\rm{e}}}^{t(2(\ell +K)\cdot R)}}{{\ell }^{2}-{M}^{2}}\\ \quad ={{\rm{e}}}^{2t(K\cdot R)}{c}_{1\to 1}(t,R,M)\int {\rm{d}}\ell \frac{1}{{\ell }^{2}-{M}^{2}}.\,\,\,\end{array}\end{eqnarray}$
This result shows the advantage of using the generation function in exponential form.

3. Bubble

Having found the generation function of the tadpole reduction, we move to the first nontrivial example, i.e. the generation function of the bubble reduction, which is defined through
$\begin{eqnarray}\begin{array}{l}{I}_{\mathrm{bub}}(t,R)\equiv \int {\rm{d}}\ell \frac{{{\rm{e}}}^{t(2\ell \cdot R)}}{({\ell }^{2}-{M}_{0}^{2})({\left(\ell -K\right)}^{2}-{M}_{1}^{2})}\\ \quad ={c}_{2\to 2}\int {\rm{d}}\ell \frac{1}{({\ell }^{2}-{M}_{0}^{2})({\left(\ell -K\right)}^{2}-{M}_{1}^{2})}\\ \quad +{c}_{2\to 1;\hat{1}}\int {\rm{d}}\ell \frac{1}{({\ell }^{2}-{M}_{0}^{2})}+{c}_{2\to 1;\hat{0}}\\ \quad \times \int {\rm{d}}\ell \frac{1}{({\left(\ell -K\right)}^{2}-{M}_{1}^{2})}.\,\,\,\end{array}\end{eqnarray}$
For simplicity, we have not written down the arguments of the reduction coefficients explicitly. Written out, they are c(t, R, K; M0, M1), or c(t, R2, K · R, K2; M0, M1) when using the Lorentz-contracted form.
The generation form (3.1) can easily produce some nontrivial relations among these generation functions of reduction coefficients. Noticing that4(4Compared to the shifting symmetry discussed in (3.2), one can also consider the symmetry ℓ → −ℓ. For this one, we have R → −R and K → −K. If using the variables R2, K · R, it is invariant. In other words, the symmetry ℓ → −ℓ is trivial.)
$\begin{eqnarray}\begin{array}{l}{I}_{\mathrm{bub}}(t,R)\equiv \int {\rm{d}}\ell \frac{{{\rm{e}}}^{t(2\ell \cdot R)}}{({\ell }^{2}-{M}_{0}^{2})({\left(\ell -K\right)}^{2}-{M}_{1}^{2})}\\ \quad ={{\rm{e}}}^{t(2K\cdot R)}\int {\rm{d}}\tilde{\ell }\frac{{{\rm{e}}}^{t(2\tilde{\ell }\cdot R)}}{({\left(\tilde{\ell }+K\right)}^{2}-{M}_{0}^{2})({\tilde{\ell }}^{2}-{M}_{1}^{2})}\\ \quad ={{\rm{e}}}^{t(2K\cdot R)}\left\{{c}_{2\to 2}(t,{R}^{2},-K\cdot R,{K}^{2};{M}_{1},{M}_{0})\right.\\ \quad \times \int {\rm{d}}\ell \frac{1}{({\ell }^{2}-{M}_{0}^{2})({\left(\ell -K\right)}^{2}-{M}_{1}^{2})}\\ \quad +{c}_{2\to 1;\hat{0}}(t,{R}^{2},-K\cdot R,{K}^{2};{M}_{1},{M}_{0})\int {\rm{d}}\ell \frac{1}{({\ell }^{2}-{M}_{0}^{2})}\\ \quad +{c}_{2\to 1;\hat{1}}(t,{R}^{2},-K\cdot R,{K}^{2};{M}_{1},{M}_{0})\\ \quad \left.\times \int {\rm{d}}\ell \frac{1}{({\left(\ell -K\right)}^{2}-{M}_{1}^{2})}\right\},\end{array}\end{eqnarray}$
we have
$\begin{eqnarray}\begin{array}{l}{c}_{2\to 2}(t,{R}^{2},K\cdot R,{K}^{2};{M}_{0},{M}_{1})\\ \quad ={{\rm{e}}}^{t(2K\cdot R)}{c}_{2\to 2}(t,{R}^{2},-K\cdot R,{K}^{2};{M}_{1},{M}_{0}),\,\,\,\end{array}\end{eqnarray}$
$\begin{eqnarray}\begin{array}{l}{c}_{2\to 1;\hat{1}}(t,{R}^{2},K\cdot R,{K}^{2};{M}_{0},{M}_{1})\\ \quad ={{\rm{e}}}^{t(2K\cdot R)}{c}_{2\to 1;\hat{0}}(t,{R}^{2},-K\cdot R,{K}^{2};{M}_{1},{M}_{0}),\,\,\,\end{array}\end{eqnarray}$
$\begin{eqnarray}\begin{array}{l}{c}_{2\to 1;\hat{0}}(t,{R}^{2},K\cdot R,{K}^{2};{M}_{0},{M}_{1})\\ \quad ={{\rm{e}}}^{t(2K\cdot R)}{c}_{2\to 1;\hat{1}}(t,{R}^{2},-K\cdot R,{K}^{2};{M}_{1},{M}_{0})\,\,\,\,,\end{array}\end{eqnarray}$
by comparing (3.2) with (3.1). The first relation (3.3) serves as a consistency check for c2→2, while the second relation (3.4) and the third relation (3.5) tell us that we need to compute only one of the ${c}_{2\to 1;\widehat{i}}$ functions. Another useful check is the mass dimension. Since the mass dimension of t is (−2), we have
$\begin{eqnarray}[{c}_{2\to 2}]=0,\,\,\,\,[{c}_{2\to 1}]=-2.\,\end{eqnarray}$

3.1. Differential equations

Now we write down the differential equations for these generation functions. Acting with ∂R · ∂R on both sides of (3.1) we have
$\begin{eqnarray}\begin{array}{l}{\partial }_{R}\cdot {\partial }_{R}{I}_{\mathrm{bub}}(t,R)=\int {\rm{d}}\ell \frac{4{t}^{2}{\ell }^{2}{{\rm{e}}}^{t(2\ell \cdot R)}}{({\ell }^{2}-{M}_{0}^{2})({\left(\ell -K\right)}^{2}-{M}_{1}^{2})}\\ \quad =4{t}^{2}{M}_{0}^{2}{I}_{\mathrm{bub}}(t,R)+4{t}^{2}{{\rm{e}}}^{t(2K\cdot R)}\int {\rm{d}}\ell \frac{{{\rm{e}}}^{t(2\ell \cdot R)}}{({\ell }^{2}-{M}_{1}^{2})},\,\,\,\end{array}\end{eqnarray}$
thus we derive
$\begin{eqnarray}\begin{array}{l}{\partial }_{R}\cdot {\partial }_{R}{c}_{2\to 2}(t,{R}^{2},K\cdot R,{K}^{2};{M}_{0},{M}_{1})\\ \quad =4{t}^{2}{M}_{0}^{2}{c}_{2\to 2}(t,{R}^{2},K\cdot R,{K}^{2};{M}_{0},{M}_{1}),\,\,\,\end{array}\end{eqnarray}$
$\begin{eqnarray}\begin{array}{l}{\partial }_{R}\cdot {\partial }_{R}{c}_{2\to 1;\widehat{1}}(t,{R}^{2},K\cdot R,{K}^{2};{M}_{0},{M}_{1})\\ \quad =4{t}^{2}{M}_{0}^{2}{c}_{2\to 1;\widehat{1}}(t,{R}^{2},K\cdot R,{K}^{2};{M}_{0},{M}_{1}),\,\,\,\end{array}\end{eqnarray}$
$\begin{eqnarray}\begin{array}{l}{\partial }_{R}\cdot {\partial }_{R}{c}_{2\to 1;\hat{0}}(t,{R}^{2},K\cdot R,{K}^{2};{M}_{0},{M}_{1})\\ \quad =4{t}^{2}{M}_{0}^{2}{c}_{2\to 1;\hat{0}}(t,{R}^{2},K\cdot R,{K}^{2};{M}_{0},{M}_{1})\\ \quad +4{t}^{2}{{\rm{e}}}^{t(2K\cdot R)}{c}_{1\to 1}(t,{R}^{2},{M}_{1}).\,\,\,\end{array}\end{eqnarray}$
Acting with K · ∂R we have
$\begin{eqnarray}\begin{array}{l}K\cdot {\partial }_{R}{I}_{\mathrm{bub}}(t,R)=\int {\rm{d}}\ell \frac{t(2K\cdot \ell ){{\rm{e}}}^{t(2\ell \cdot R)}}{({\ell }^{2}-{M}_{0}^{2})({\left(\ell -K\right)}^{2}-{M}_{1}^{2})}\\ \quad =\int {\rm{d}}\ell \frac{t({D}_{0}-{D}_{1}+f){{\rm{e}}}^{t(2\ell \cdot R)}}{({\ell }^{2}-{M}_{0}^{2})({\left(\ell -K\right)}^{2}-{M}_{1}^{2})}\\ \quad ={{tfI}}_{\mathrm{bub}}(t,R)-t\int {\rm{d}}\ell \frac{{{\rm{e}}}^{t(2\ell \cdot R)}}{({\ell }^{2}-{M}_{0}^{2})}\\ \quad +t{{\rm{e}}}^{t(2K\cdot R)}\int {\rm{d}}\ell \frac{{{\rm{e}}}^{t(2\ell \cdot R)}}{({\ell }^{2}-{M}_{1}^{2})},\,\,\,\end{array}\end{eqnarray}$
where $f={K}^{2}-{M}_{1}^{2}+{M}_{0}^{2}$, thus we derive
$\begin{eqnarray}\begin{array}{l}K\cdot {\partial }_{R}{c}_{2\to 2}(t,{R}^{2},K\cdot R,{K}^{2};{M}_{0},{M}_{1})\\ \quad ={{tfc}}_{2\to 2}(t,{R}^{2},K\cdot R,{K}^{2};{M}_{0},{M}_{1}),\,\,\,\end{array}\end{eqnarray}$
$\begin{eqnarray}\begin{array}{l}K\cdot {\partial }_{R}{c}_{2\to 1;\widehat{1}}(t,{R}^{2},K\cdot R,{K}^{2};{M}_{0},{M}_{1})\\ \quad ={{tfc}}_{2\to 1;\widehat{1}}(t,{R}^{2},K\cdot R,{K}^{2};{M}_{0},{M}_{1})\\ \quad -{{tc}}_{1\to 1}(t,{R}^{2},{M}_{0}),\,\,\,\end{array}\end{eqnarray}$
$\begin{eqnarray}\begin{array}{l}K\cdot {\partial }_{R}{c}_{2\to 1;\hat{0}}(t,{R}^{2},K\cdot R,{K}^{2};{M}_{0},{M}_{1})\\ \quad ={{tfc}}_{2\to 1;\hat{0}}(t,{R}^{2},K\cdot R,{K}^{2};{M}_{0},{M}_{1})\\ \quad +t{{\rm{e}}}^{t(2K\cdot R)}{c}_{1\to 1}(t,{R}^{2},{M}_{1}).\,\,\,\,\,\end{array}\end{eqnarray}$
The above two groups of differential equations can be uniformly written as
$\begin{eqnarray}{\partial }_{R}\cdot {\partial }_{R}{c}_{T}=4{t}^{2}{M}_{0}^{2}{c}_{T}+4{t}^{2}{\xi }_{R}{h}_{T},\,\,\end{eqnarray}$
$\begin{eqnarray}K\cdot {\partial }_{R}{c}_{T}={{tfc}}_{T}+t{\xi }_{K}{h}_{T},\,\,\end{eqnarray}$
where hT is the possible non-homogeneous contribution coming from the lower topology (tadpole). For the different types T we have
$\begin{eqnarray}T=\{2\to 2\}:\qquad {h}_{T}=0\qquad \mathrm{or}\qquad {\xi }_{R}={\xi }_{K}=0,\,\,\end{eqnarray}$
$\begin{eqnarray}\begin{array}{rcl}T & = & \{2\to 1;\widehat{1}\}:\qquad {h}_{T}={c}_{1\to 1}(t,{R}^{2},{M}_{0}),\\ {\xi }_{R} & = & 0,\qquad {\xi }_{K}=-1,\,\,\end{array}\end{eqnarray}$
$\begin{eqnarray}\begin{array}{rcl}T & = & \{2\to 1;\widehat{0}\}:\qquad {h}_{T}={e}^{t(2K\cdot R)}{c}_{1\to 1}(t,{R}^{2},{M}_{1}),\\ {\xi }_{R} & = & 1,\quad {\xi }_{K}=+1.\,\,\end{array}\end{eqnarray}$
The generation functions are functions f(r, p) of r = R2 and p = K · R. It is easy to work out that
$\begin{eqnarray}\begin{array}{rcl}\displaystyle \frac{\partial }{\partial {R}^{\mu }} & = & 2{\eta }_{\rho \mu }{R}^{\rho }{\partial }_{r}+{K}_{\mu }{\partial }_{p},\\ K\cdot \displaystyle \frac{\partial }{\partial R} & = & 2p{\partial }_{r}+{K}^{2}{\partial }_{p},\,\,\,\end{array}\end{eqnarray}$
$\begin{eqnarray}{\eta }^{\mu \nu }\displaystyle \frac{\partial }{\partial {R}^{\nu }}\displaystyle \frac{\partial }{\partial {R}^{\mu }}=(4r{\partial }_{r}^{2}+4p{\partial }_{p}{\partial }_{r}+{K}^{2}{\partial }_{p}^{2}+2D{\partial }_{r}),\,\,\,\end{eqnarray}$
thus (3.15) and (3.16) can be written as
$\begin{eqnarray}\begin{array}{l}(4r{\partial }_{r}^{2}+4p{\partial }_{p}{\partial }_{r}+{K}^{2}{\partial }_{p}^{2}+2D{\partial }_{r}-4{t}^{2}{M}_{0}^{2}){c}_{T}\\ \quad =4{t}^{2}{\xi }_{R}{h}_{T},\,\,\end{array}\end{eqnarray}$
$\begin{eqnarray}(2p{\partial }_{r}+{K}^{2}{\partial }_{p}-{tf}){c}_{T}=t{\xi }_{K}{h}_{T}.\,\,\end{eqnarray}$
Equations (3.22) and (3.23) are the differential equations we need to solve. We will present two ways to solve them. One is a series expansion in the naive variables r, p. This is the method used in [37, 38]. However, as we will show, with the idea of the generation function the powers of r and p are independent of each other, thus the recursion relations become simpler and can be solved explicitly. The other method is to solve the differential equations directly and obtain a more compact analytical expression. An important lesson from the second method is that the right variables for the series expansion are not r, p but a proper combination of them.

3.2. The series expansion

In this subsection, we present the solution in the form of a series expansion in r, p. Writing5(5Since we consider the generation functions, the indices n, m are free, while in previous works [37, 38] with fixed tensor rank k they are constrained by 2n + m = k. One can see that many manipulations are simplified using the idea of generation functions.)
$\begin{eqnarray}c=\sum _{n,m=0}^{\infty }{c}_{n,m}{r}^{n}{p}^{m},\,\,\,\,\,{h}_{c}=\sum _{n,m=0}^{\infty }{h}_{n,m}{r}^{n}{p}^{m},\,\,\end{eqnarray}$
and substituting them into (3.22) and (3.23), we get the following equations
$\begin{eqnarray}\begin{array}{rcl}0 & = & 2(n+1)(2n+2m+D){c}_{n+1,m}\\ & & +{K}^{2}(m+2)(m+1){c}_{n,m+2}\\ & &-4{t}^{2}{M}_{0}^{2}{c}_{n,m}-4{t}^{2}{\xi }_{R}{h}_{n,m}\,\,n,m\geqslant 0,\end{array}\end{eqnarray}$
$\begin{eqnarray}\begin{array}{rcl}0 & = & 2(n+1){c}_{n+1,m}+{K}^{2}(m+2){c}_{n,m+2}-{{tfc}}_{n,m+1}\\ & & -t{\xi }_{K}{h}_{n,m+1}\,\,\,\,m,n\geqslant 0,\,\,\end{array}\end{eqnarray}$
$\begin{eqnarray}0\,=\,{K}^{2}{c}_{n,1}-{{tfc}}_{n,0}-t{\xi }_{K}{h}_{n,0}\,\,\,\,\,n\geqslant 0.\,\,\end{eqnarray}$
Using (3.25) and (3.26) we can solve
$\begin{eqnarray}\begin{array}{rcl}{c}_{n+1,m} & = & \displaystyle \frac{4{M}_{0}^{2}{t}^{2}}{2(n+1)(D+2n+m-1)}{c}_{n,m}\\ & & -\displaystyle \frac{{tf}(m+1)}{2(n+1)(D+2n+m-1)}{c}_{n,m+1}\\ & & +\displaystyle \frac{4{t}^{2}{\xi }_{R}{h}_{n,m}-t{\xi }_{K}(m+1){h}_{n,m+1}}{2(n+1)(D+2n+m-1)},\end{array}\end{eqnarray}$
$\begin{eqnarray}\begin{array}{rcl}{c}_{n,m+2} & = & \displaystyle \frac{-4{M}_{0}^{2}{t}^{2}}{(m+2)(D+2n+m-1){K}^{2}}{c}_{n,m}\\ & & +\displaystyle \frac{{tf}(D+2n+2m)}{(m+2)(D+2n+m-1){K}^{2}}{c}_{n,m+1}\\ & & +\displaystyle \frac{-4{t}^{2}{\xi }_{R}{h}_{n,m}+t{\xi }_{K}(D+2n+2m){h}_{n,m+1}}{2(n+1)(D+2n+m-1)}.\end{array}\end{eqnarray}$
Now, setting m = 0 in (3.28) and combining with (3.27), we can solve for
$\begin{eqnarray}\begin{array}{rcl}{c}_{n+\mathrm{1,0}} & = & \displaystyle \frac{(-{t}^{2}{f}^{2}+4{M}_{0}^{2}{t}^{2}{K}^{2})}{2(n+1)(D+2n-1){K}^{2}}{c}_{n,0}\\ & & -\displaystyle \frac{{t}^{2}({\xi }_{K}f-4{\xi }_{R}{K}^{2}){h}_{n,0}+t{\xi }_{K}{K}^{2}{h}_{n,1}}{2(n+1)(D+2n-1){K}^{2}},\end{array}\end{eqnarray}$
$\begin{eqnarray}{c}_{n,1}=\displaystyle \frac{{tf}}{{K}^{2}}{c}_{n,0}+\displaystyle \frac{t{\xi }_{K}{h}_{n,0}}{{K}^{2}}.\,\,\,\,\,\end{eqnarray}$
Using equation (3.30) we can recursively solve for all cn,0, starting from the boundary condition c0,0 = 1 for c2→2 or c0,0 = 0 for c2→1. Knowing all cn,0 we can use (3.31) to get all cn,1. To solve for all cn,m, we use (3.25) and (3.26) again, but now solve for
$\begin{eqnarray}\begin{array}{rcl}{c}_{n,m+1} & = & \displaystyle \frac{4{M}_{0}^{2}t}{f(m+1)}{c}_{n,m}\\ & & -\displaystyle \frac{2(n+1)(D+2n+m-1)}{{tf}(m+1)}{c}_{n+1,m}\\ & & +4t{\xi }_{R}{h}_{n,m}-{\xi }_{K}(m+1){h}_{n,m+1},\end{array}\end{eqnarray}$
$\begin{eqnarray}\begin{array}{rcl}{c}_{n,m+2} & = & \displaystyle \frac{-2(n+1)(D+2n+2m)}{(m+1)(m+2){K}^{2}}{c}_{n+1,m}\\ & & +\displaystyle \frac{4{t}^{2}{M}_{0}^{2}}{(m+1)(m+2){K}^{2}}{c}_{n,m}\\ & & +\displaystyle \frac{4{t}^{2}{\xi }_{R}{h}_{n,m}}{(m+1)(m+2){K}^{2}}.\end{array}\end{eqnarray}$
Both equations can be used recursively to solve for cn,m. After using one of them to get all cn,m, the other becomes a nontrivial consistency check. Of the two, (3.32) is better, since it determines the coefficients at order (m + 1) from those at order m.
Using the above algorithm, we present the first few terms of the generation functions for comparison. For c2→2 we have
$\begin{eqnarray}\begin{array}{rcl}{c}_{2\to 2} & = & 1+\displaystyle \frac{{ft}}{{K}^{2}}p+\displaystyle \frac{({{Df}}^{2}-4{K}^{2}{M}_{0}^{2}){t}^{2}}{2(D-1){\left({K}^{2}\right)}^{2}}{p}^{2}\\ & & +\displaystyle \frac{(4{K}^{2}{M}_{0}^{2}-{f}^{2}){t}^{2}}{2(D-1){K}^{2}}r\\ & & +\displaystyle \frac{f((2+D){f}^{2}-12{K}^{2}{M}_{0}^{2}){t}^{3}}{6(D-1){\left({K}^{2}\right)}^{3}}{p}^{3}\\ & & -\displaystyle \frac{f({f}^{2}-4{K}^{2}{M}_{0}^{2}){t}^{3}}{2(D-1){\left({K}^{2}\right)}^{2}}{rp}+...\end{array}\end{eqnarray}$
and
$\begin{eqnarray}\begin{array}{rcl}{c}_{2\to 1;\widehat{1}} & = & 0-\displaystyle \frac{t}{{K}^{2}}p-\displaystyle \frac{{{Dft}}^{2}}{2(D-1){\left({K}^{2}\right)}^{2}}{p}^{2}+\displaystyle \frac{{{ft}}^{2}}{2(D-1){K}^{2}}r\\ & & -\displaystyle \frac{(D(2+D){f}^{2}-8(D-1){K}^{2}{M}_{0}^{2}){t}^{3}}{6(D-1)D{\left({K}^{2}\right)}^{3}}{p}^{3}\\ & & +\displaystyle \frac{({{Df}}^{2}-4(D-1){K}^{2}{M}_{0}^{2}){t}^{3}}{2(D-1)D{\left({K}^{2}\right)}^{2}}{rp}+\ldots \end{array}\end{eqnarray}$
and
$\begin{eqnarray}\begin{array}{l}{c}_{2\to 1;\widehat{0}}=0+\displaystyle \frac{t}{{K}^{2}}p+\displaystyle \frac{(-D\widetilde{f}+4(D-1){K}^{2}){t}^{2}}{2(D-1){\left({K}^{2}\right)}^{2}}{p}^{2}\\ \quad +\displaystyle \frac{\widetilde{f}{t}^{2}}{2(D-1){K}^{2}}r+\displaystyle \frac{({Df}\widetilde{f}+4(D-1){K}^{2}{M}_{1}^{2}){t}^{3}}{2(D-1)D{\left({K}^{2}\right)}^{2}}{rp}\\ \quad +\displaystyle \frac{(6{{DK}}^{2}({K}^{2}(D-2)+{{DM}}_{0}^{2})-2(D+2)(3D-2){K}^{2}{M}_{1}^{2}+D(D+2){\widetilde{f}}^{2}){t}^{3}}{6(D-1)D{\left({K}^{2}\right)}^{3}}{p}^{3}+...,\,\,\,\end{array}\end{eqnarray}$
where we have defined $\widetilde{f}={K}^{2}-{M}_{0}^{2}+{M}_{1}^{2}=-f+2{K}^{2}$.
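To make the algorithm concrete, the following sympy sketch (an illustration we add; it treats only the homogeneous case hT = 0, i.e. c2→2 with boundary condition c0,0 = 1, and the symbols K2 and M0sq are shorthands chosen here for K2 and M02) implements the recursions (3.30)-(3.32) and reproduces the first terms of (3.34):

```python
# Sketch (not from the paper's code): the recursion (3.30)-(3.32) for the homogeneous
# case h_T = 0 (i.e. c_{2->2}, boundary c_{0,0} = 1). K2 and M0sq denote K^2 and M_0^2.
import sympy as sp

t, D, K2, M0sq, f = sp.symbols('t D K2 M0sq f')

N = 4                        # fill the table c_{n,m} up to n <= N
c = {(0, 0): sp.Integer(1)}  # boundary condition for c_{2->2}

for n in range(N):           # (3.30): c_{n+1,0} from c_{n,0}
    c[(n + 1, 0)] = t**2*(4*K2*M0sq - f**2)/(2*(n + 1)*(D + 2*n - 1)*K2)*c[(n, 0)]
for n in range(N + 1):       # (3.31): c_{n,1} from c_{n,0}
    c[(n, 1)] = t*f/K2*c[(n, 0)]
for m in range(1, N):        # (3.32): c_{n,m+1} from c_{n,m} and c_{n+1,m}
    for n in range(N - m):
        c[(n, m + 1)] = 4*M0sq*t/(f*(m + 1))*c[(n, m)] \
            - 2*(n + 1)*(D + 2*n + m - 1)/(t*f*(m + 1))*c[(n + 1, m)]

# compare a few entries with the expansion (3.34)
assert sp.simplify(c[(0, 2)] - (D*f**2 - 4*K2*M0sq)*t**2/(2*(D - 1)*K2**2)) == 0
assert sp.simplify(c[(1, 1)] + f*(f**2 - 4*K2*M0sq)*t**3/(2*(D - 1)*K2**2)) == 0
print("recursion reproduces the p^2 and r p terms of (3.34)")
```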
Here we have presented the general recursive algorithm. In appendix B, we will show that these recursion relations can be solved explicitly, i.e. we find explicit expressions for all coefficients cn,m.

3.3. The analytic solution

In the previous subsection, we presented the solution as a series expansion. In this subsection, we solve the two differential equations (3.22) and (3.23) directly.
Let us start with (3.23). To solve it, we define the new variables
$\begin{eqnarray}x={K}^{2}r-{p}^{2},\,\,\,\,\,y\,=\,p,\,\,\,\end{eqnarray}$
then (3.23) becomes
$\begin{eqnarray}\begin{array}{l}(2p{\partial }_{r}+{K}^{2}{\partial }_{p}-{tf})c(r,p)=({K}^{2}{\partial }_{y}-{tf})c(x,y)\\ \quad =t{\xi }_{K}{h}_{T},\,\,\end{array}\end{eqnarray}$
where we have used
$\begin{eqnarray}\begin{array}{rcl}p & = & y,\,\,\,\,\,\,r=\displaystyle \frac{x+{y}^{2}}{{K}^{2}},\,\,\,\,{\partial }_{r}={K}^{2}{\partial }_{x},\\ {\partial }_{p} & = & -2y{\partial }_{x}+{\partial }_{y}.\,\,\end{array}\end{eqnarray}$
The differential equation (3.38) can be solved as (see the discussion in the appendix, for example, (A.5))
$\begin{eqnarray}\begin{array}{rcl}c(x,y) & = & \frac{1}{{K}^{2}}{{\rm{e}}}^{\frac{{tf}}{{K}^{2}}y}\left(\Space{0ex}{2.35ex}{0ex}G(x)\right.\\ & & \left.+{\int }_{0}^{y}{\rm{d}}w{{\rm{e}}}^{-\frac{{tf}}{{K}^{2}}w}t{\xi }_{K}{h}_{T}(x,w)\right),\,\,\end{array}\end{eqnarray}$
where the function G depends only on x, while hT ≡ hT(x, y) is a function of both x and y.
Now we consider the equation (3.22). The first step is to simplify it by writing
$\begin{eqnarray}\begin{array}{l}\left(4r{\partial }_{r}^{2}+4p{\partial }_{p}{\partial }_{r}+{K}^{2}{\partial }_{p}^{2}+2D{\partial }_{r}\right)\\ \quad =\displaystyle \frac{1}{{K}^{2}}(2p{\partial }_{r}+{K}^{2}{\partial }_{p})(2p{\partial }_{r}+{K}^{2}{\partial }_{p})\\ \quad +\left(4r-\displaystyle \frac{4{p}^{2}}{{K}^{2}}\right){\partial }_{r}^{2}+2(D-1){\partial }_{r}.\,\,\,\,\,\,\,\end{array}\end{eqnarray}$
Thus using (3.23), (3.22) becomes
$\begin{eqnarray}\begin{array}{l}\left(4{{xK}}^{2}{\partial }_{x}^{2}+2(D-1){K}^{2}{\partial }_{x}+\displaystyle \frac{{t}^{2}({f}^{2}-4{K}^{2}{M}_{0}^{2})}{{K}^{2}}\right)c\\ \quad =\left(-t{\xi }_{K}{\partial }_{y}+\displaystyle \frac{4{t}^{2}{\xi }_{R}{K}^{2}-{t}^{2}{\xi }_{K}f}{{K}^{2}}\right){h}_{T}.\,\,\,\end{array}\end{eqnarray}$
Putting (3.40) to (3.42) and simplifying we get
$\begin{eqnarray}\begin{array}{l}\left(4{{xK}}^{2}{\partial }_{x}^{2}+2(D-1){K}^{2}{\partial }_{x}+\frac{{t}^{2}({f}^{2}-4{K}^{2}{M}_{0}^{2})}{{K}^{2}}\right)G(x)\\ \quad ={K}^{2}{{\rm{e}}}^{-\frac{{tf}}{{K}^{2}}y}\left(-t{\xi }_{K}{\partial }_{y}+\frac{4{t}^{2}{\xi }_{R}{K}^{2}-{t}^{2}{\xi }_{K}f}{{K}^{2}}\right){h}_{T}(x,y)\\ \quad -\left(4{{xK}}^{2}{\partial }_{x}^{2}+2(D-1){K}^{2}{\partial }_{x}+\frac{{t}^{2}({f}^{2}-4{K}^{2}{M}_{0}^{2})}{{K}^{2}}\right)\\ \quad \times {\int }_{0}^{y}{\rm{d}}w{{\rm{e}}}^{-\frac{{tf}}{{K}^{2}}w}t{\xi }_{K}{h}_{T}(x,w).\,\,\,\end{array}\end{eqnarray}$
Equation (3.43) is of the form (A.14), which has been discussed in the appendix. One interesting point is that since the left-hand side is independent of y, the right-hand side should vanish under the action of ∂y. One can check that this is indeed true.
Having laid out the framework, we can use it to solve for the various generation functions.

3.3.1. The generation function c2→2

For this case, we have hT = 0, thus using the result (A.29) we can immediately write down
$\begin{eqnarray}\begin{array}{l}{c}_{2\to 2}(t,r,p,{K}^{2};{M}_{0},{M}_{1})\\ \quad ={\left.{\,}_{0}{F}_{1}\left(\varnothing ;\displaystyle \frac{D-1}{2};\displaystyle \frac{{t}^{2}(4{K}^{2}{M}_{0}^{2}-{f}^{2})x}{4{\left({K}^{2}\right)}^{2}}\right){{\rm{e}}}^{\frac{{tf}}{{K}^{2}}y}\right|}_{x\to {K}^{2}r-{p}^{2},\,y\to p}.\,\,\end{array}\end{eqnarray}$
One can check it against the series expansion (B.3) given in appendix B. Compared with that, the result (3.44) is very simple and compact. This shows the power of using the generation function. Also, the differential equations (3.22) and (3.23) tell us that the right variables for the series expansion are x, y rather than the naive variables r, p.
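A direct way to see the agreement is to re-expand (3.44) in t. The following sympy sketch (an illustration added here; K2 and M0sq are shorthands chosen for K2 and M02, and the 0F1 series is truncated by hand) reproduces the first terms of (3.34):

```python
# Sketch (not from the paper's code): expand (3.44) in t and compare with (3.34).
import sympy as sp

t, D, K2, M0sq, f, r, p = sp.symbols('t D K2 M0sq f r p')

x, y = K2*r - p**2, p
z = t**2*(4*K2*M0sq - f**2)*x/(4*K2**2)

# truncated 0F1(; (D-1)/2; z); z = O(t^2), so three terms suffice through O(t^3)
F01 = sum(z**k/(sp.factorial(k)*sp.rf((D - 1)/2, k)) for k in range(3))
c22 = sp.series(F01*sp.exp(t*f*y/K2), t, 0, 4).removeO()

assert sp.simplify(c22.coeff(t, 1) - f*p/K2) == 0
order2 = sp.expand(c22.coeff(t, 2))
assert sp.simplify(order2.coeff(r, 1) - (4*K2*M0sq - f**2)/(2*(D - 1)*K2)) == 0
assert sp.simplify(order2.coeff(p, 2) - (D*f**2 - 4*K2*M0sq)/(2*(D - 1)*K2**2)) == 0
print("(3.44) reproduces the first terms of (3.34)")
```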

3.3.2. The generation function ${c}_{2 \rightarrow 1;\hat{1}}$

For this case, we have ξR = 0, ξK = −1 and
$\begin{eqnarray}\begin{array}{l}{h}_{T}(r)={c}_{1\to 1}\left(t,r=\displaystyle \frac{x+{y}^{2}}{{K}^{2}},{M}_{0}\right)\\ \quad =\sum _{n=0}^{\infty }\displaystyle \frac{{\left({t}^{2}{M}_{0}^{2}\right)}^{n}{\left(\tfrac{x+{y}^{2}}{{K}^{2}}\right)}^{n}}{n!{\left(\tfrac{D}{2}\right)}_{n}},\,\,\end{array}\end{eqnarray}$
which satisfies the differential equation (2.4). Then (3.43) becomes
$\begin{eqnarray}\begin{array}{l}\left(4{{xK}}^{2}{\partial }_{x}^{2}+2(D-1){K}^{2}{\partial }_{x}+\frac{{t}^{2}({f}^{2}-4{K}^{2}{M}_{0}^{2})}{{K}^{2}}\right)G(x)\\ \quad ={K}^{2}{{\rm{e}}}^{-\frac{{tf}}{{K}^{2}}y}t\left({\partial }_{y}+\frac{{tf}}{{K}^{2}}\right){h}_{T}(x,y)\\ \quad +t\left(4{{xK}}^{2}{\partial }_{x}^{2}+2(D-1){K}^{2}{\partial }_{x}+\frac{{t}^{2}({f}^{2}-4{K}^{2}{M}_{0}^{2})}{{K}^{2}}\right)\\ \quad \times {\int }_{0}^{y}{\rm{d}}w{{\rm{e}}}^{-\frac{{tf}}{{K}^{2}}w}{h}_{T}(x,w).\,\,\end{array}\end{eqnarray}$
The first important check is that the right-hand side of (3.46) is y-independent. Acting with $\displaystyle \frac{\partial }{\partial y}$ on the right-hand side we get
$\begin{eqnarray}\begin{array}{l}{{te}}^{-\displaystyle \frac{{tf}}{{K}^{2}}y}\left\{-{tf}\left({\partial }_{y}+\displaystyle \frac{{tf}}{{K}^{2}}\right){h}_{T}(x,y)\right.\\ \quad +{K}^{2}\left({\partial }_{y}^{2}+\displaystyle \frac{{tf}}{{K}^{2}}{\partial }_{y}\right){h}_{T}(x,y)\\ \quad +\left(\Space{0ex}{3.25ex}{0ex}4{{xK}}^{2}{\partial }_{x}^{2}+2(D-1){K}^{2}{\partial }_{x}\right.\\ \quad \left.\left.+\displaystyle \frac{{t}^{2}({f}^{2}-4{K}^{2}{M}_{0}^{2})}{{K}^{2}}\right){h}_{T}(x,y)\right\}.\,\,\end{array}\end{eqnarray}$
Using
$\begin{eqnarray}\begin{array}{rcl}{\partial }_{x}{h}_{T}(r) & = & \displaystyle \frac{\partial \tfrac{x+{y}^{2}}{{K}^{2}}}{\partial x}{\partial }_{r}{h}_{T}=\displaystyle \frac{1}{{K}^{2}}{\partial }_{r}{h}_{T},\\ {\partial }_{y}{h}_{T}(r) & = & \displaystyle \frac{\partial \tfrac{x+{y}^{2}}{{K}^{2}}}{\partial y}{\partial }_{r}{h}_{T}=\displaystyle \frac{2y}{{K}^{2}}{\partial }_{r}{h}_{T},\end{array}\end{eqnarray}$
one can check that (3.47) reduces to the differential equation (2.4), thus we have proved the y-independence of (3.46).
Setting y = 0 in (3.46) we get
$\begin{eqnarray}\begin{array}{l}\left(4{{xK}}^{2}{\partial }_{x}^{2}+2(D-1){K}^{2}{\partial }_{x}+\displaystyle \frac{{t}^{2}({f}^{2}-4{K}^{2}{M}_{0}^{2})}{{K}^{2}}\right)G(x)\\ \quad ={t}^{2}{{fh}}_{T}(x,y=0),\,\,\end{array}\end{eqnarray}$
where we have used ${\left.{\partial }_{y}{h}_{T}(x,y)\right|}_{y=0}={\left.\displaystyle \frac{2y}{{K}^{2}}{\partial }_{r}{h}_{T}\right|}_{y=0}=0$. The differential equation (3.49) is of the form (A.14) and we get the solution
$\begin{eqnarray}\begin{array}{l}G(x)=\frac{{t}^{2}f}{4{K}^{2}}{G}_{0}(x){\int }_{0}^{x}{\rm{d}}{{ww}}^{-\frac{(D-1)}{2}}{{\rm{e}}}^{-2{G}_{0}(w)}\\ \quad \times {\int }_{0}^{w}{\rm{d}}\xi {h}_{T}(\xi ,y=0){G}_{0}^{-1}(\xi ){{\rm{e}}}^{2{G}_{0}(\xi )}{\xi }^{\frac{(D-1)}{2}-1},\,\,\end{array}\end{eqnarray}$
where
$\begin{eqnarray}{G}_{0}(x)={\,}_{0}{F}_{1}(\varnothing ;\displaystyle \frac{(D-1)}{2};\displaystyle \frac{{t}^{2}(4{K}^{2}{M}_{0}^{2}-{f}^{2})}{4{\left({K}^{2}\right)}^{2}}x).\,\,\end{eqnarray}$
Putting it all together we finally have
$\begin{eqnarray}\begin{array}{l}{c}_{2\to 1;\hat{1}}(x,y)=\frac{1}{{K}^{2}}{{\rm{e}}}^{\frac{{tf}}{{K}^{2}}y}\\ \quad \times \left(G(x)-t{\int }_{0}^{y}{\rm{d}}w{{\rm{e}}}^{-\frac{{tf}}{{K}^{2}}w}{h}_{T}(x,w)\right).\,\,\end{array}\end{eqnarray}$
Although we have a very compact expression (3.52) for the generation function, in practice it is often more desirable to have the series expansion form. In appendix A, we have introduced three ways to obtain it. Here we work out the expansion by direct integration. Using (3.45) we have
$\begin{eqnarray}\begin{array}{l}{\int }_{0}^{y}{\rm{d}}w{{\rm{e}}}^{-\frac{{tf}}{{K}^{2}}w}{h}_{T}(x,w)=\displaystyle \sum _{n=0}^{\infty }\frac{{\left({t}^{2}{M}_{0}^{2}\right)}^{n}}{n!{\left(\frac{D}{2}\right)}_{n}}\\ \quad \times {\int }_{0}^{y}{\rm{d}}w{{\rm{e}}}^{-\frac{{tf}}{{K}^{2}}w}{\left(\frac{x+{w}^{2}}{{K}^{2}}\right)}^{n}.\,\,\end{array}\end{eqnarray}$
To work out the integration, we see that
$\begin{eqnarray}R(\alpha )\equiv {\int }_{0}^{T}{\rm{d}}u{{\rm{e}}}^{\alpha u}=\frac{1}{\alpha }{{\rm{e}}}^{\alpha u}{| }_{0}^{T}=\frac{{{\rm{e}}}^{\alpha T}-1}{\alpha },\,\,\,\end{eqnarray}$
thus
$\begin{eqnarray}\begin{array}{rcl}\frac{{{\rm{d}}}^{n}}{{\rm{d}}{\alpha }^{n}}R(\alpha ) & = & {\int }_{0}^{T}{\rm{d}}u{{\rm{e}}}^{\alpha u}{u}^{n}=\frac{{\left(-\right)}^{n}n!}{{\alpha }^{n+1}}\left({{\rm{e}}}^{\alpha T}\lfloor {{\rm{e}}}^{-\alpha T}{\rfloor }_{{\alpha }^{n}}-1\right)\\ & = & \frac{{\left(-\right)}^{n}n!}{{\alpha }^{n+1}}\left({{\rm{e}}}^{\alpha T}\left({{\rm{e}}}^{-\alpha T}-\displaystyle \sum _{i=n+1}^{\infty }\frac{{\left(-\alpha T\right)}^{i}}{i!}\right)-1\right)\\ & = & \frac{-{\left(-\right)}^{n}n!}{{\alpha }^{n+1}}{{\rm{e}}}^{\alpha T}\displaystyle \sum _{i=n+1}^{\infty }\frac{{\left(-\alpha T\right)}^{i}}{i!},\,\,\,\end{array}\end{eqnarray}$
where the symbol $\lfloor Y(x){\rfloor }_{{x}^{n-1}}$ means keeping the Taylor expansion of Y(x) up to order ${x}^{n-1}$. Using (3.55) we have
$\begin{eqnarray}\begin{array}{l}{\int }_{0}^{T}{\rm{d}}u{{\rm{e}}}^{\alpha u}{\left(\beta +\gamma {u}^{2}\right)}^{N}\\ \quad =\displaystyle \sum _{i=0}^{N}\frac{N!}{i!(N-i)!}{\beta }^{N-i}{\gamma }^{i}{\int }_{0}^{T}{\rm{d}}u{{\rm{e}}}^{\alpha u}{u}^{2i}\\ \quad =\displaystyle \sum _{i=0}^{N}\frac{N!}{i!(N-i)!}{\beta }^{N-i}{\gamma }^{i}\frac{-(2i)!}{{\alpha }^{2i+1}}{{\rm{e}}}^{\alpha T}\\ \quad \times \displaystyle \sum _{j\,=\,2i+1}^{\infty }\frac{{\left(-\alpha T\right)}^{j}}{j!}.\,\,\,\end{array}\end{eqnarray}$
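These elementary integrals are simple enough to verify independently. The following sympy sketch (an added check, not from the paper; n = 3 and the tail truncation at 60 terms are arbitrary choices made only for this test) confirms (3.55) both as a derivative identity and in its tail-sum form:

```python
# Sketch (an added check): verify (3.55) for n = 3, i.e. that
# d^n/dalpha^n [(exp(alpha*T) - 1)/alpha] = int_0^T u^n exp(alpha*u) du,
# and that the tail-sum form agrees numerically (tail truncated at 60 terms).
import sympy as sp

u, a, T = sp.symbols('u a T', positive=True)
n = 3

deriv = sp.diff((sp.exp(a*T) - 1)/a, a, n)
integ = sp.integrate(u**n*sp.exp(a*u), (u, 0, T))
assert sp.simplify(deriv - integ) == 0

tail = sum((-a*T)**i/sp.factorial(i) for i in range(n + 1, 60))
tail_form = -(-1)**n*sp.factorial(n)/a**(n + 1)*sp.exp(a*T)*tail
vals = {a: sp.Rational(3, 7), T: sp.Rational(5, 4)}
assert abs(sp.N((integ - tail_form).subs(vals), 50)) < 1e-40
print("(3.55) checked for n = 3")
```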
Using (3.56) we can evaluate (3.53) as
$\begin{eqnarray}\begin{array}{l}{\int }_{0}^{y}{\rm{d}}w{{\rm{e}}}^{-\frac{{tf}}{{K}^{2}}w}{h}_{T}(x,w)=y{{\rm{e}}}^{-\frac{{tf}}{{K}^{2}}y}\displaystyle \sum _{n=0}^{\infty }\displaystyle \sum _{j\,=\,0}^{\infty }\displaystyle \sum _{i=0}^{n}\\ \quad \times \frac{{\left(\frac{{t}^{2}{M}_{0}^{2}}{{K}^{2}}\right)}^{n}}{{\left(\frac{D}{2}\right)}_{n}}\frac{(2i)!{x}^{n-i}{y}^{2i}}{i!(n-i)!}\frac{{\left(\frac{{tfy}}{{K}^{2}}\right)}^{j}}{(j+2i+1)!}.\,\,\end{array}\end{eqnarray}$
The evaluation of G(x) can be found in (A.23) as
$\begin{eqnarray}\begin{array}{l}G(x)=\sum _{n=0}^{\infty }\sum _{i=0}^{n-1}\displaystyle \frac{{\left(\tfrac{D-1}{2}\right)}_{i}}{n!{\left(\tfrac{D-1}{2}\right)}_{n}{\left(\tfrac{D}{2}\right)}_{i}}\\ \quad \times \displaystyle \frac{f{\left({M}_{0}^{2}\right)}^{i}{\left({t}^{2}x\right)}^{n}{\left(4{K}^{2}{M}_{0}^{2}-{f}^{2}\right)}^{n-i-1}}{{4}^{n-i}{\left({K}^{2}\right)}^{2n-i-1}}.\,\,\end{array}\end{eqnarray}$
Collecting all pieces together, we finally have
$\begin{eqnarray}\begin{array}{l}{c}_{2\to 1;\hat{1}}(t,r,p,{K}^{2};{M}_{0},{M}_{1})=\frac{1}{{K}^{2}}{{\rm{e}}}^{\frac{{tf}}{{K}^{2}}y}\displaystyle \sum _{n=0}^{\infty }\displaystyle \sum _{i=0}^{n-1}\\ \quad \times \frac{{\left(\frac{D-1}{2}\right)}_{i}}{n!{\left(\frac{D-1}{2}\right)}_{n}{\left(\frac{D}{2}\right)}_{i}}\frac{f{\left({M}_{0}^{2}\right)}^{i}{\left({t}^{2}x\right)}^{n}{\left(4{K}^{2}{M}_{0}^{2}-{f}^{2}\right)}^{n-i-1}}{{4}^{n-i}{\left({K}^{2}\right)}^{2n-i-1}}\\ \quad -\frac{{ty}}{{K}^{2}}\displaystyle \sum _{n=0}^{\infty }\displaystyle \sum _{j\,=\,0}^{\infty }\displaystyle \sum _{i=0}^{n}\frac{{\left(\frac{{t}^{2}{M}_{0}^{2}}{{K}^{2}}\right)}^{n}}{{\left(\frac{D}{2}\right)}_{n}}\\ \quad \times \frac{(2i)!{x}^{n-i}{y}^{2i}}{i!(n-i)!}\frac{{\left(\frac{{tfy}}{{K}^{2}}\right)}^{j}}{(j+2i+1)!}.\,\,\end{array}\end{eqnarray}$
This result can be checked against the one given in (B.14). One can see that the formula (3.59) is much more compact and makes various analytic structures manifest.

4. The general frame

Having carried out the detailed computations for the bubble, in this section we set up the general framework for finding generation functions of general one-loop integrals with (n + 1) propagators. The system has n external momenta Ki, i = 1,…,n and (n + 1) masses ${M}_{j}^{2},j=0,1,\ldots ,n$. Using the auxiliary vector R we have (n + 1) auxiliary scalar products (ASP): r = R · R and pi = Ki · R, i = 1,…,n. From the experience with the bubble, we know that these ASPs are not good variables for solving the differential equations produced by ∂R · ∂R and Ki · ∂R, i = 1,…,n. Thus we discuss how to find good variables in the first subsection. We then discuss the differential equations in these new variables in the second subsection and, finally, their solutions in the third subsection.

4.1. Finding good variables

We will denote the good variables by x and yi, i = 1,…,n. To simplify the differential equations, we need to impose the following conditions
$\begin{eqnarray}\begin{array}{rcl}({K}_{i}\cdot {\partial }_{R})x & = & 0,\,\,\,\,({K}_{i}\cdot {\partial }_{R}){y}_{j}\sim {\delta }_{{ij}},\\ \forall i,j & = & 1,\ldots ,n.\,\,\end{array}\end{eqnarray}$
To see if there is indeed a solution for (4.1), let us define the Gram matrix G and the row vector PT as
$\begin{eqnarray}{G}_{{ij}}={K}_{i}\cdot {K}_{j},\,\,\,\,{\left({P}^{T}\right)}_{i}={K}_{i}\cdot R.\,\,\,\end{eqnarray}$
Putting yj = ∑tβjtpt to (4.1), it is easy to see that the condition becomes
$\begin{eqnarray}{K}_{i}\cdot {\partial }_{R}{y}_{j}=\sum _{t}{\beta }_{{jt}}({K}_{t}\cdot {K}_{i})\sim {\delta }_{{ij}},\,\,\,\end{eqnarray}$
thus the matrix β can be solved as
$\begin{eqnarray}\beta =| G| {G}^{-1},\,\,\end{eqnarray}$
where ∣G∣ is the Gram determinant. For x, let us assume
$\begin{eqnarray}x=| G| r+{P}^{T}{AP},\,\,\,\,{A}^{T}\,=\,A.\,\,\end{eqnarray}$
Since
$\begin{eqnarray}{K}_{i}\cdot {\partial }_{R}x=| G| 2{p}_{i}+2{\left({P}^{T}\right)}_{R\to {K}_{i}}{AP},\,\,\end{eqnarray}$
where ${P}_{R\to {K}_{i}}$ means replacing the vector R by the vector Ki. Collecting all i together, the right-hand side of (4.6) is just (2∣G∣I + 2GA)P, thus we have the solution
$\begin{eqnarray}A=-| G| {G}^{-1}.\,\,\,\end{eqnarray}$
Putting everything together, we finally have
$\begin{eqnarray}x=| G| (r-{P}^{T}{G}^{-1}P),\qquad Y=| G| {G}^{-1}P,\,\,\end{eqnarray}$
where the mass dimensions of various quantities are
$\begin{eqnarray}\begin{array}{rcl}\left[| G| \right] & = & 2n,\,\,\,[{\left({G}^{-1}\right)}_{{ij}}]=-2,\,\,\,[{A}_{{ij}}]=2(n-1),\\ \left[x\right] & = & 2(n+1),\,\,\,[{y}_{i}]=2n.\,\,\end{array}\end{eqnarray}$
From (4.8) we can solve
$\begin{eqnarray}P=\displaystyle \frac{1}{| G| }{GY},\,\,\,\,r=\displaystyle \frac{| G| x+{Y}^{T}{GY}}{| G{| }^{2}}.\,\,\,\end{eqnarray}$
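As a sketch of how these variables work in practice (a numerical illustration added here; the momenta K1, K2 and the dimension D = 4 are arbitrary choices made only for the check), one can verify the defining property (4.1), in the normalization (Ki · ∂R)yj = ∣G∣δij implied by (4.4), directly:

```python
# Sketch (an added numerical illustration, not from the paper): check (4.1) for the
# variables (4.8) with two arbitrarily chosen momenta in D = 4, signature (+,-,-,-).
import sympy as sp

eta = sp.diag(1, -1, -1, -1)
def dot(a, b):                      # Minkowski scalar product
    return (a.T*eta*b)[0, 0]

R = sp.Matrix(sp.symbols('R0 R1 R2 R3'))
Ks = [sp.Matrix([3, 1, 0, 0]), sp.Matrix([2, 0, 1, 0])]   # arbitrary K_1, K_2
n = len(Ks)

G = sp.Matrix(n, n, lambda i, j: dot(Ks[i], Ks[j]))        # Gram matrix
P = sp.Matrix(n, 1, lambda i, _: dot(Ks[i], R))
detG = G.det()

x = detG*(dot(R, R) - (P.T*G.inv()*P)[0, 0])               # (4.8)
Y = detG*G.inv()*P

def K_dR(Ki, expr):                 # the operator K_i . d/dR acting on a scalar
    return sum(Ki[mu]*sp.diff(expr, R[mu]) for mu in range(4))

for i in range(n):
    assert sp.simplify(K_dR(Ks[i], x)) == 0                # (K_i . dR) x = 0
    for j in range(n):
        assert sp.simplify(K_dR(Ks[i], Y[j])) == (detG if i == j else 0)
print("(K_i . dR) x = 0 and (K_i . dR) y_j = |G| delta_ij hold")
```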

4.2. The differential equations

Having found the good variables, we express differential operators ∂R · ∂R and Ki · ∂R, i = 1,…,n using them. The first step is to use (4.8) to write
$\begin{eqnarray}\begin{array}{rcl}\displaystyle \frac{\partial }{\partial {R}^{\mu }} & = & \left(2| G| {R}_{\mu }-2| G| {{ \mathcal K }}_{\mu }^{T}{G}^{-1}P\right){\partial }_{x}\\ & & +| G| {{ \mathcal K }}_{\mu }^{T}{G}^{-1}{\partial }_{Y},\,\,\end{array}\end{eqnarray}$
where we have defined ${{ \mathcal K }}^{T}=({K}_{1},\ldots ,{K}_{n})$ and ${\partial }_{Y}^{T}\,=({\partial }_{{y}_{1}},\ldots ,{\partial }_{{y}_{n}})$. Thus we find
$\begin{eqnarray}\begin{array}{l}{ \mathcal K }\cdot \displaystyle \frac{\partial }{\partial {R}^{\mu }}=| G| {\partial }_{Y},\,\,\,\,\,\,{\partial }_{R}\cdot {\partial }_{R}=2| G| (D-n){\partial }_{x}\\ \quad +4| G| x{\partial }_{x}^{2}+| G{| }^{2}{\partial }_{Y}^{T}{G}^{-1}{\partial }_{Y}.\,\,\end{array}\end{eqnarray}$
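These operator identities can also be checked directly. The sketch below (an added illustration with the same arbitrary momenta as above, D = 4 and n = 2, applied to an arbitrary test function F(x, y1, y2); none of these choices come from the paper) verifies the second identity in (4.12):

```python
# Sketch (added illustration): verify dR.dR = 2|G|(D-n) dx + 4|G| x dx^2 + |G|^2 dY^T G^-1 dY
# on an arbitrary test function, using D = 4, n = 2 and arbitrary momenta.
import sympy as sp

eta = sp.diag(1, -1, -1, -1)
def dot(a, b):
    return (a.T*eta*b)[0, 0]

R = sp.Matrix(sp.symbols('R0 R1 R2 R3'))
Ks = [sp.Matrix([3, 1, 0, 0]), sp.Matrix([2, 0, 1, 0])]
n, D = len(Ks), 4

G = sp.Matrix(n, n, lambda i, j: dot(Ks[i], Ks[j]))
Ginv = G.inv()
P = sp.Matrix(n, 1, lambda i, _: dot(Ks[i], R))
detG = G.det()
xR = detG*(dot(R, R) - (P.T*Ginv*P)[0, 0])          # x(R) from (4.8)
YR = detG*Ginv*P                                     # y_i(R) from (4.8)

xs, y1, y2 = sp.symbols('xs y1 y2')
ys = [y1, y2]
F = xs**2 + xs*y1 + y1*y2**2                         # arbitrary test function
subs = {xs: xR, y1: YR[0], y2: YR[1]}

# left-hand side: dR . dR acting on F(x(R), Y(R))
lhs = sum(eta[mu, mu]*sp.diff(F.subs(subs), R[mu], 2) for mu in range(4))
# right-hand side of (4.12)
hess = sum(Ginv[i, j]*sp.diff(F, ys[i], ys[j]) for i in range(n) for j in range(n))
rhs = (2*detG*(D - n)*sp.diff(F, xs) + 4*detG*xs*sp.diff(F, xs, 2) + detG**2*hess).subs(subs)

assert sp.expand(lhs - rhs) == 0
print("the second identity in (4.12) holds for the chosen momenta and test function")
```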
The differential equations for cT, where T denotes different types of generation functions, have the following pattern
$\begin{eqnarray}{K}_{i}\cdot {\partial }_{R}{c}_{T}={\alpha }_{i}{c}_{T}+{H}_{T;i},\,\,i\,=\,1,2,\ldots ,n,\end{eqnarray}$
$\begin{eqnarray}{\partial }_{R}\cdot {\partial }_{R}{c}_{T}={\alpha }_{R}{c}_{T}+{H}_{T;R},\,\,\end{eqnarray}$
where αR, αi are constants (independent of T) and HT;R, HT;i are known functions coming from lower topologies. Using the results (4.12) and (4.13), we find
$\begin{eqnarray}\begin{array}{l}| G{| }^{2}{\partial }_{Y}^{T}{G}^{-1}{\partial }_{Y}{c}_{T}={\alpha }_{{ \mathcal K }}^{T}{G}^{-1}{\alpha }_{{ \mathcal K }}{c}_{T}\\ \quad +{H}_{T;{ \mathcal K }}^{T}{G}^{-1}{\alpha }_{{ \mathcal K }}+| G| {\partial }_{Y}^{T}{G}^{-1}{H}_{T;{ \mathcal K }},\,\,\end{array}\end{eqnarray}$
where ${\alpha }_{{ \mathcal K }}^{T}=({\alpha }_{1},\ldots ,{\alpha }_{n})$ and ${H}_{T;{ \mathcal K }}^{T}=({H}_{T;1},\ldots ,{H}_{T;n})$. Thus the differential equations (4.13) can be written as
$\begin{eqnarray}\left({\partial }_{{y}_{i}}-\displaystyle \frac{{\alpha }_{i}}{| G| }\right){c}_{T}=\displaystyle \frac{1}{| G| }{H}_{T;i}\,\,i\,=\,1,2,\ldots ,n,\end{eqnarray}$
while (4.14) becomes
$\begin{eqnarray}\left(4| G| x{\partial }_{x}^{2}+2| G| (D-n){\partial }_{x}+{\widetilde{\alpha }}_{R}\right){c}_{T}={{ \mathcal H }}_{T;R}\,\,\,\end{eqnarray}$
with
$\begin{eqnarray}\begin{array}{rcl}{\widetilde{\alpha }}_{R} & = & {\alpha }_{{ \mathcal K }}^{T}{G}^{-1}{\alpha }_{{ \mathcal K }}-{\alpha }_{R},\\ {{ \mathcal H }}_{T;R} & = & -{H}_{T;{ \mathcal K }}^{T}{G}^{-1}{\alpha }_{{ \mathcal K }}-| G| {\partial }_{Y}^{T}{G}^{-1}{H}_{T;{ \mathcal K }}+{H}_{T;R}.\,\,\end{array}\end{eqnarray}$
Having given the differential equations (4.16) and (4.17), there is an important point to be mentioned. For (4.16) and (4.17) to have a solution, functions H are not arbitrary but must satisfy the integrability conditions, which are
$\begin{eqnarray}\begin{array}{rcl}\left({\partial }_{{y}_{j}}-\displaystyle \frac{{\alpha }_{j}}{| G| }\right){H}_{T;i} & = & \left({\partial }_{{y}_{i}}-\displaystyle \frac{{\alpha }_{i}}{| G| }\right){H}_{T;j},\\ \forall i,j & = & 1,2,\ldots ,n,\,\,\end{array}\end{eqnarray}$
and
$\begin{eqnarray}\begin{array}{l}\left(4| G| x{\partial }_{x}^{2}+2| G| (D-n){\partial }_{x}+{\widetilde{\alpha }}_{R}\right)\displaystyle \frac{1}{| G| }{H}_{T;i}\\ \quad =\left({\partial }_{{y}_{i}}-\displaystyle \frac{{\alpha }_{i}}{| G| }\right){{ \mathcal H }}_{T;R},\,\,\,\forall i\,=\,1,2,\ldots ,n.\,\,\end{array}\end{eqnarray}$
The differential equations (4.16) and (4.17) are of the types (A.1) and (A.14), respectively, discussed in appendix A, for which the solutions have been presented. In the next subsection, we solve them analytically.

4.3. Analytic solution

In this part, we will present the necessary steps for solving the above differential equations (4.16) and (4.17). Let us solve them one by one. For differential equation (4.16) with i = 1, using the result (A.5) in appendix A, we have
$\begin{eqnarray}\begin{array}{l}{c}_{T}(x,y)={{\rm{e}}}^{\frac{{\alpha }_{1}}{| G| }{y}_{1}}\left({F}_{T}(x,{y}_{2},\ldots ,{y}_{n})+\frac{1}{| G| }\right.\\ \quad \left.\times {\int }_{0}^{{y}_{1}}{\rm{d}}{w}_{1}{{\rm{e}}}^{-\frac{{\alpha }_{1}}{| G| }{w}_{1}}{H}_{T;1}(x,{w}_{1},{y}_{2},\ldots ,{y}_{n})\right),\,\,\end{array}\end{eqnarray}$
where FT(x, y2,…,yn) does not depend on the variable y1. Now we act with $\left({\partial }_{{y}_{2}}-\displaystyle \frac{{\alpha }_{2}}{| G| }\right)$ on both sides of (4.21) to get the differential equation
$\begin{eqnarray}\begin{array}{l}\left({\partial }_{{y}_{2}}-\frac{{\alpha }_{2}}{| G| }\right){F}_{T}(x,{y}_{2},\ldots ,{y}_{n})=-\frac{1}{| G| }\\ \quad \times {\int }_{0}^{{y}_{1}}{\rm{d}}{w}_{1}{{\rm{e}}}^{-\frac{{\alpha }_{1}}{| G| }{w}_{1}}\left({\partial }_{{y}_{2}}-\frac{{\alpha }_{2}}{| G| }\right){H}_{T;1}(x,{w}_{1},{y}_{2},\ldots ,{y}_{n})\\ \quad +{{\rm{e}}}^{-\frac{{\alpha }_{1}}{| G| }{y}_{1}}\frac{1}{| G| }{H}_{T;2}(x,{y}_{1},{y}_{2},\ldots ,{y}_{n}).\,\,\,\end{array}\end{eqnarray}$
Using (A.5) we find
$\begin{eqnarray}\begin{array}{l}{F}_{T}(x,{y}_{2},\ldots ,{y}_{n})={{\rm{e}}}^{\frac{{\alpha }_{2}}{| G| }{y}_{2}}{F}_{T}(x,{y}_{3},\ldots ,{y}_{n})\\ \quad +{{\rm{e}}}^{\frac{{\alpha }_{2}}{| G| }{y}_{2}}{{\rm{e}}}^{-\frac{{\alpha }_{1}}{| G| }{y}_{1}}\frac{1}{| G| }\\ \quad \times {\int }_{0}^{{y}_{2}}{\rm{d}}{w}_{2}{{\rm{e}}}^{-\frac{{\alpha }_{2}}{| G| }{w}_{2}}{H}_{T;2}(x,{y}_{1},{w}_{2},\ldots ,{y}_{n})\\ \quad -{{\rm{e}}}^{\frac{{\alpha }_{2}}{| G| }{y}_{2}}\frac{1}{| G| }{\int }_{0}^{{y}_{1}}{\rm{d}}{w}_{1}{{\rm{e}}}^{-\frac{{\alpha }_{1}}{| G| }{w}_{1}}\left({{\rm{e}}}^{-\frac{{\alpha }_{2}}{| G| }{y}_{2}}{H}_{T;1}(x,{w}_{1},{y}_{2},\ldots ,{y}_{n})\right.\\ \quad \left.-{H}_{T;1}(x,{w}_{1},{y}_{2}=0,\ldots ,{y}_{n})\right).\,\,\,\end{array}\end{eqnarray}$
Putting (4.23) back to (4.21) and doing some algebraic manipulations, we get
$\begin{eqnarray}\begin{array}{l}{c}_{T}(x,y)={{\rm{e}}}^{\frac{{\alpha }_{1}}{| G| }{y}_{1}}{{\rm{e}}}^{\frac{{\alpha }_{2}}{| G| }{y}_{2}}{F}_{T}(x,{y}_{3},\ldots ,{y}_{n})\\ \quad +{{\rm{e}}}^{\frac{{\alpha }_{2}}{| G| }{y}_{2}}\frac{1}{| G| }{\int }_{0}^{{y}_{2}}{\rm{d}}{w}_{2}{{\rm{e}}}^{-\frac{{\alpha }_{2}}{| G| }{w}_{2}}{H}_{T;2}(x,{y}_{1},{w}_{2},\ldots ,{y}_{n})\\ \quad +{{\rm{e}}}^{\frac{{\alpha }_{1}}{| G| }{y}_{1}}{{\rm{e}}}^{\frac{{\alpha }_{2}}{| G| }{y}_{2}}\frac{1}{| G| }{\int }_{0}^{{y}_{1}}{\rm{d}}{w}_{1}{{\rm{e}}}^{-\frac{{\alpha }_{1}}{| G| }{w}_{1}}{H}_{T;1}\\ \quad \times (x,{w}_{1},{y}_{2}=0,\ldots ,{y}_{n}).\,\,\end{array}\end{eqnarray}$
Repeating the above procedure with the action of $\left({\partial }_{{y}_{3}}-\displaystyle \frac{{\alpha }_{3}}{| G| }\right)$ we can solve for FT(x, y3,…,yn) and then find
$\begin{eqnarray}\begin{array}{l}{c}_{T}(x,y)={{\rm{e}}}^{\frac{{\alpha }_{1}}{| G| }{y}_{1}}{{\rm{e}}}^{\frac{{\alpha }_{2}}{| G| }{y}_{2}}{{\rm{e}}}^{\frac{{\alpha }_{3}}{| G| }{y}_{3}}{F}_{T}(x,{y}_{4},\ldots ,{y}_{n})\\ \quad +{{\rm{e}}}^{\frac{{\alpha }_{3}}{| G| }{y}_{3}}\frac{1}{| G| }{\int }_{0}^{{y}_{3}}{\rm{d}}{w}_{3}{{\rm{e}}}^{-\frac{{\alpha }_{3}}{| G| }{w}_{3}}{H}_{T;3}(x,{y}_{1},{y}_{2},{w}_{3},\ldots ,{y}_{n})\\ \quad +{{\rm{e}}}^{\frac{{\alpha }_{2}}{| G| }{y}_{2}}{{\rm{e}}}^{\frac{{\alpha }_{3}}{| G| }{y}_{3}}\frac{1}{| G| }{\int }_{0}^{{y}_{2}}{\rm{d}}{w}_{2}{{\rm{e}}}^{-\frac{{\alpha }_{2}}{| G| }{w}_{2}}{H}_{T;2}\\ \quad \times (x,{y}_{1},{w}_{2},{y}_{3}=0,\ldots ,{y}_{n})\\ \quad +{{\rm{e}}}^{\frac{{\alpha }_{1}}{| G| }{y}_{1}}{{\rm{e}}}^{\frac{{\alpha }_{2}}{| G| }{y}_{2}}{{\rm{e}}}^{\frac{{\alpha }_{3}}{| G| }{y}_{3}}\frac{1}{| G| }{\int }_{0}^{{y}_{1}}{\rm{d}}{w}_{1}{{\rm{e}}}^{-\frac{{\alpha }_{1}}{| G| }{w}_{1}}{H}_{T;1}\\ \quad \times (x,{w}_{1},{y}_{2}=0,{y}_{3}=0,\ldots ,{y}_{n}).\,\,\end{array}\end{eqnarray}$
By comparing (4.24) and (4.25) we can see that after solving the n first-order differential equations (4.16) we get
$\begin{eqnarray}{c}_{T}(x,{y}_{1},\ldots ,{y}_{n})={{\rm{e}}}^{\frac{{\sum \ }_{i=1}^{n}\ {\alpha }_{i}{y}_{i}}{| G| }}F(x)+{{ \mathcal H }}_{T;K}\,\,\,\,\,\,\end{eqnarray}$
with
$\begin{eqnarray}\begin{array}{l}{{ \mathcal H }}_{T;K}=\frac{1}{| G| }{\tilde{\sum \ }}_{i=n}^{1}{{\rm{e}}}^{\frac{{\tilde{\sum \ }}_{j=n}^{i}\ {\alpha }_{j}{y}_{j}}{| G| }}\\ \times {\int }_{0}^{{y}_{i}}{\rm{d}}{w}_{i}{{\rm{e}}}^{\frac{-{\alpha }_{i}{w}_{i}}{| G| }}{H}_{T;i}(x,{y}_{1},\ldots ,{y}_{i-1},{w}_{i},0,\ldots ,0),\,\,\,\,\,\end{array}\end{eqnarray}$
where for simplicity we have defined the sum ${\widetilde{\sum \ }}_{i=a}^{b}$ to mean the sum over (a, a − 1, a − 2,…,b) with a ⩾ b.
Before going on to solve for the only unknown function F(x), let us check that the form (4.26) does satisfy the differential equations (4.16). When acting with $\left({\partial }_{{y}_{k}}-\displaystyle \frac{{\alpha }_{k}}{| G| }\right)$ on both sides, it is easy to see that the first term on the right-hand side of (4.26) and the terms in ${{ \mathcal H }}_{T;K}$ with i < k give zero contributions, since they depend on yk only through the factor ${{\rm{e}}}^{\frac{{\alpha }_{k}{y}_{k}}{| G| }}$. For the term i = k in ${{ \mathcal H }}_{T;K}$, the action gives
$\begin{eqnarray}\frac{1}{| G| }{{\rm{e}}}^{\frac{{\tilde{\sum \ }}_{j=n}^{k+1}\ {\alpha }_{j}{y}_{j}}{| G| }}{H}_{T;k}(x,{y}_{1},\ldots ,{y}_{k-1},{y}_{k},0,\ldots ,0).\,\,\,\,\,\end{eqnarray}$
For the term i = k + 1 in ${{ \mathcal H }}_{T;K}$, the action gives
$\begin{eqnarray}\begin{array}{l}\frac{1}{| G| }{{\rm{e}}}^{\frac{{\tilde{\sum \ }} _{j=n} ^{k+1}\ {\alpha}_{j}{y}_{j}}{| G| }} {\int }_{0}^{{y}_{k+1}}{\rm{d}}{w}_{k+1}{{\rm{e}}}^{\frac{-{\alpha }_{k+1}\ {w}_{k+1}}{| G| }}\\ \quad \times \left({\partial }_{{y}_{k}}-\frac{{\alpha }_{k}}{| G| }\right){H}_{T;k+1}(x,{y}_{1},\ldots ,{y}_{k},{w}_{k+1},0,\ldots ,0)\\ \quad =\frac{1}{| G| }{{\rm{e}}}^{\frac{{\tilde{\sum \ }}_{j=n}^{k+1}\ {\alpha }_{j}{y}_{j}}{| G| }}{\int }_{0}^{{y}_{k+1}}{\rm{d}}{w}_{k+1}{{\rm{e}}}^{\frac{-{\alpha }_{k+1}\ {w}_{k+1}}{| G| }}\\ \quad \times \left({\partial }_{{w}_{k+1}}-\frac{{\alpha }_{k+1}}{| G| }\right){H}_{T;k}(x,{y}_{1},\ldots ,{y}_{k},{w}_{k+1},0,\ldots ,0),\,\,\,\,\,\end{array}\end{eqnarray}$
where in the second line we have used the integrability condition (4.19). After partial integration we get
$\begin{eqnarray}\begin{array}{l}-\frac{1}{| G| }{{\rm{e}}}^{\frac{{\tilde{\sum \ }}_{j=n}^{k+1}\ {\alpha }_{j}{y}_{j}}{| G| }}{H}_{T;k}(x,{y}_{1},\ldots ,{y}_{k-1},{y}_{k},0,\ldots ,0)\\ \quad +\frac{1}{| G| }{{\rm{e}}}^{\frac{{\tilde{\sum \ }}_{j=n}^{k+2}\ {\alpha }_{j}{y}_{j}}{| G| }}{H}_{T;k}(x,{y}_{1},\ldots ,{y}_{k},{y}_{k+1},0,\ldots ,0).\,\,\,\,\,\,\end{array}\end{eqnarray}$
The first term in (4.30) cancels the term in (4.28) and we are left with the second term in (4.30). Now the pattern is clear. The i = k + 2 term in ${{ \mathcal H }}_{T;K}$ produces two terms after using the integrability condition and partial integration: the first cancels the second term in (4.30), while the second is of the form
$\begin{eqnarray}\frac{1}{| G| }{{\rm{e}}}^{\frac{{\tilde{\sum \ }}_{j=n}^{k+3}\ {\alpha }_{j}{y}_{j}}{| G| }}{H}_{T;k}(x,{y}_{1},\ldots ,{y}_{k},{y}_{k+1},{y}_{k+2},0,\ldots ,0).\,\,\,\,\,\end{eqnarray}$
Continuing to the term i = n in ${{ \mathcal H }}_{T;K}$ we will be left with $\displaystyle \frac{1}{| G| }{H}_{T;k}(x,{y}_{1},\ldots ,{y}_{n})$, thus we have proved that (4.26) does satisfy the differential equations (4.16).
Now we consider the differential equation (4.17). Using the form (4.26), we derive
$\begin{eqnarray}\begin{array}{l}\left(4| G| x{\partial }_{x}^{2}+2| G| (D-n){\partial }_{x}+{\tilde{\alpha }}_{R}\right)F(x)\\ \quad ={{\rm{e}}}^{\frac{-{\sum \ }_{i=1}^{n}\ {\alpha }_{i}{y}_{i}}{| G| }}\\ \quad \times \left({{ \mathcal H }}_{T;R}-\left(4| G| x{\partial }_{x}^{2}+2| G| (D-n){\partial }_{x}+{\tilde{\alpha }}_{R}\right){{ \mathcal H }}_{T;K}\right).\,\,\,\,\,\,\,\end{array}\end{eqnarray}$
One important point about (4.32) is that the right-hand side must be yi-independent. To check this point, we act with ${\partial }_{{y}_{k}}$ on the right-hand side to get
$\begin{eqnarray}\begin{array}{l}-\frac{{\alpha }_{k}}{| G| }{{\rm{e}}}^{\frac{-{\sum \ }_{i=1}^{n}\ {\alpha }_{i}{y}_{i}}{| G| }}\left({{ \mathcal H }}_{T;R}-\left(4| G| x{\partial }_{x}^{2}\right.\right.\\ \quad \left.\left.+2| G| (D-n){\partial }_{x}+{\tilde{\alpha }}_{R}\right){{ \mathcal H }}_{T;K}\right)\\ \quad +{{\rm{e}}}^{\frac{-{\sum \ }_{i=1}^{n}\ {\alpha }_{i}{y}_{i}}{| G| }}\left({\partial }_{{y}_{k}}{{ \mathcal H }}_{T;R}-\left(4| G| x{\partial }_{x}^{2}\right.\right.\\ \quad \left.\left.+2| G| (D-n){\partial }_{x}+{\tilde{\alpha }}_{R}\right){\partial }_{{y}_{k}}{{ \mathcal H }}_{T;K}\right).\,\,\,\,\,\,\,\,\end{array}\end{eqnarray}$
Since we have proved
$\begin{eqnarray}\left({\partial }_{{y}_{k}}-\displaystyle \frac{{\alpha }_{k}}{| G| }\right){{ \mathcal H }}_{T;K}=\displaystyle \frac{1}{| G| }{H}_{T;k},\,\,\,\,\,\,\,\end{eqnarray}$
(4.33) is simplified to
$\begin{eqnarray}\begin{array}{l}-\frac{{\alpha }_{k}}{| G| }{{\rm{e}}}^{\frac{-{\sum \ }_{i=1}^{n}\ {\alpha }_{i}{y}_{i}}{| G| }}{{ \mathcal H }}_{T;R}+{{\rm{e}}}^{\frac{-{\sum \ }_{i=1}^{n}\ {\alpha }_{i}{y}_{i}}{| G| }}\\ \quad \times \left({\partial }_{{y}_{k}}{{ \mathcal H }}_{T;R}-\left(4| G| x{\partial }_{x}^{2}+2| G| (D-n){\partial }_{x}+{\tilde{\alpha }}_{R}\right)\right.\\ \quad \left.\times \frac{1}{| G| }{H}_{T;k}\right).\,\,\,\,\,\,\,\end{array}\end{eqnarray}$
Using the integrability condition (4.20) we get
$\begin{eqnarray}\begin{array}{l}-\frac{{\alpha }_{k}}{| G| }{{\rm{e}}}^{\frac{-{\sum\ }_{i=1}^{n}\ {\alpha }_{i}{y}_{i}}{| G| }}{{ \mathcal H }}_{T;R}+{{\rm{e}}}^{\frac{-{\sum \ }_{i=1}^{n}\ {\alpha }_{i}{y}_{i}}{| G| }}\\ \quad \times \left({\partial }_{{y}_{k}}{{ \mathcal H }}_{T;R}-\left({\partial }_{{y}_{k}}-\frac{{\alpha }_{k}}{| G| }\right){{ \mathcal H }}_{T;R}\right)=0.\,\,\,\,\,\,\,\end{array}\end{eqnarray}$
Having checked the y-independence, we can take yi to be any values on the right-hand side of (4.32). From the expression (4.27) one can see that if we take y1 = y2 = ... = yn = 0, we have ${{ \mathcal H }}_{T;K}=0$, thus (4.32) simplifies to
$\begin{eqnarray}\begin{array}{l}\left(4| G| x{\partial }_{x}^{2}+2| G| (D-n){\partial }_{x}+{\widetilde{\alpha }}_{R}\right)F(x)\\ \quad ={{ \mathcal H }}_{T;R}(x,0,0,\ldots ,0).\,\,\,\,\,\,\,\,\end{array}\end{eqnarray}$
The above differential equation is the form of (A.14) in appendix A and we can write down the solution immediately (see (A.33))
$\begin{eqnarray}\begin{array}{l}F(x)={F}_{0}(x)\left({f}_{0}+{\int }_{0}^{x}{\rm{d}}{{ww}}^{-\frac{(D-n)}{2}}{{\rm{e}}}^{-2{F}_{0}(w)}\right.\\ \quad \left.\times {\int }_{0}^{w}{\rm{d}}\xi \frac{1}{A}{{ \mathcal H }}_{T;R}(\xi ,0,0,\ldots ,0){F}_{0}^{-1}(\xi ){{\rm{e}}}^{2{F}_{0}(\xi )}{\xi }^{\frac{(D-n)}{2}-1}\right)\,\,\,\,\,\,\,\,,\end{array}\end{eqnarray}$
where
$\begin{eqnarray}{F}_{0}(x)={\,}_{0}{F}_{1}(\varnothing ;\displaystyle \frac{(D-n)}{2};\displaystyle \frac{-{\widetilde{\alpha }}_{R}x}{4| G| }).\,\,\,\,\,\,\,\end{eqnarray}$
Putting (4.38) back into (4.26) we get the final analytic expression for the generation functions
$\begin{eqnarray}\begin{array}{l}{c}_{T}(x,{y}_{1},\ldots ,{y}_{n})={{ \mathcal H }}_{T;K}+{{\rm{e}}}^{\frac{{\sum \ }_{i=1}^{n}\ {\alpha }_{i}{y}_{i}}{| G| }}{F}_{0}(x)\\ \quad \times \left({f}_{0}+{\int }_{0}^{x}{\rm{d}}{{ww}}^{-\frac{(D-n)}{2}}{{\rm{e}}}^{-2{F}_{0}(w)}{\int }_{0}^{w}{\rm{d}}\xi \right.\\ \quad \left.\times \frac{1}{A}{{ \mathcal H }}_{T;R}(\xi ,0,0,\ldots ,0){F}_{0}{\left(\xi \right)}^{-1}{{\rm{e}}}^{2{F}_{0}(\xi )}{\xi }^{\frac{(D-n)}{2}-1}\right).\,\,\,\,\,\end{array}\end{eqnarray}$
When we consider the generation functions of reduction coefficients of one-loop integrals with (n + 1) propagators, there is a special case, where all HT;i, HT;R are zero. For this case, we can write down immediately the generation function
$\begin{eqnarray}\begin{array}{l}{c}_{n+1\to n+1}(R,{K}_{1},\ldots ,{K}_{n})={\,}_{0}{F}_{1}\\ \quad \times \left(\varnothing ;\frac{(D-n)}{2};\frac{-{\tilde{\alpha }}_{R}x}{4| G| }\right){{\rm{e}}}^{\frac{{\sum \ }_{i=1}^{n}\ {\alpha }_{i}{y}_{i}}{| G| }}.\,\,\,\,\end{array}\end{eqnarray}$
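As a consistency check (a remark added here for orientation), for n = 1 one has $| G| ={K}^{2}$, ${y}_{1}=p$, ${\alpha }_{1}={tf}$ and ${\alpha }_{R}=4{t}^{2}{M}_{0}^{2}$, so that ${\widetilde{\alpha }}_{R}={t}^{2}({f}^{2}-4{K}^{2}{M}_{0}^{2})/{K}^{2}$ and $-{\widetilde{\alpha }}_{R}x/(4| G| )={t}^{2}(4{K}^{2}{M}_{0}^{2}-{f}^{2})x/(4{\left({K}^{2}\right)}^{2})$; with $(D-n)/2=(D-1)/2$, the formula (4.41) then reduces precisely to the bubble result (3.44).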

5. Triangle

In this part we present another example, i.e. the triangle, to demonstrate the general framework laid down in the previous section. The seven generation functions are defined by
$\begin{eqnarray}\begin{array}{l}{I}_{{\rm{tri}}}(t,R)\\ \quad \equiv \int {\rm{d}}\ell \frac{{{\rm{e}}}^{t(2\ell \cdot R)}}{({\ell }^{2}-{M}_{0}^{2})({\left(\ell -{K}_{1}\right)}^{2}-{M}_{1}^{2})({\left(\ell -{K}_{2}\right)}^{2}-{M}_{2}^{2})}\\ \quad \equiv \int {\rm{d}}\ell \frac{{{\rm{e}}}^{t(2\ell \cdot R)}}{{D}_{0}{D}_{1}{D}_{2}}\\ \quad ={c}_{3\to 3}\int {\rm{d}}\ell \frac{1}{{D}_{0}{D}_{1}{D}_{2}}+\displaystyle \sum _{i=0}^{2}{c}_{3\to 2;\hat{i}}\int {\rm{d}}\ell \frac{1}{{\prod }_{j=0,j\ne i}^{2}{D}_{j}}\\ \quad +\displaystyle \sum _{i=0}^{2}{c}_{3\to 1;i}\int {\rm{d}}\ell \frac{1}{{D}_{i}}.\,\,\,\,\end{array}\end{eqnarray}$
Using the permutation symmetry and the shifting of loop momentum we can find nontrivial relations among these seven generation functions. The first group of relations is
$\begin{eqnarray}\begin{array}{l}{c}_{3\to 3}(t,R;{M}_{0};{K}_{1},{M}_{1};{K}_{2},{M}_{2})\\ \quad ={c}_{3\to 3}(t,R;{M}_{0};{K}_{2},{M}_{2};{K}_{1},{M}_{1}),\\ {c}_{3\to 2;\widehat{0}}(t,R;{M}_{0};{K}_{1},{M}_{1};{K}_{2},{M}_{2})\\ \quad ={c}_{3\to 2;\widehat{0}}(t,R;{M}_{0};{K}_{2},{M}_{2};{K}_{1},{M}_{1}),\\ {c}_{3\to 2;\widehat{1}}(t,R;{M}_{0};{K}_{1},{M}_{1};{K}_{2},{M}_{2})\\ \quad ={c}_{3\to 2;\widehat{2}}(t,R;{M}_{0};{K}_{2},{M}_{2};{K}_{1},{M}_{1}),\\ {c}_{3\to 2;\widehat{2}}(t,R;{M}_{0};{K}_{1},{M}_{1};{K}_{2},{M}_{2})\\ \quad ={c}_{3\to 2;\widehat{1}}(t,R;{M}_{0};{K}_{2},{M}_{2};{K}_{1},{M}_{1}),\\ {c}_{3\to 1;0}(t,R;{M}_{0};{K}_{1},{M}_{1};{K}_{2},{M}_{2})\\ \quad ={c}_{3\to 1;0}(t,R;{M}_{0};{K}_{2},{M}_{2};{K}_{1},{M}_{1}),\\ {c}_{3\to 1;1}(t,R;{M}_{0};{K}_{1},{M}_{1};{K}_{2},{M}_{2})\\ \quad ={c}_{3\to 1;2}(t,R;{M}_{0};{K}_{2},{M}_{2};{K}_{1},{M}_{1}),\\ {c}_{3\to 1;2}(t,R;{M}_{0};{K}_{1},{M}_{1};{K}_{2},{M}_{2})\\ \quad ={c}_{3\to 1;1}(t,R;{M}_{0};{K}_{2},{M}_{2};{K}_{1},{M}_{1}).\,\,\,\,\end{array}\end{eqnarray}$
The second group of relations is
$\begin{eqnarray}\begin{array}{l}{c}_{3\to 3}(t,R;{M}_{0};{K}_{1},{M}_{1};{K}_{2},{M}_{2})\\ \quad ={{\rm{e}}}^{2{K}_{1}\cdot R}{c}_{3\to 3}(t,R;{M}_{1};-{K}_{1},{M}_{0};{K}_{2}-{K}_{1},{M}_{2}),\\ {c}_{3\to 2;\hat{0}}(t,R;{M}_{0};{K}_{1},{M}_{1};{K}_{2},{M}_{2})\\ \quad ={{\rm{e}}}^{2{K}_{1}\cdot R}{c}_{3\to 2;\hat{1}}(t,R;{M}_{1};-{K}_{1},{M}_{0};{K}_{2}-{K}_{1},{M}_{2}),\\ {c}_{3\to 2;\hat{1}}(t,R;{M}_{0};{K}_{1},{M}_{1};{K}_{2},{M}_{2})\\ \quad ={{\rm{e}}}^{2{K}_{1}\cdot R}{c}_{3\to 2;\hat{0}}(t,R;{M}_{1};-{K}_{1},{M}_{0};{K}_{2}-{K}_{1},{M}_{2}),\\ {c}_{3\to 2;\hat{2}}(t,R;{M}_{0};{K}_{1},{M}_{1};{K}_{2},{M}_{2})\\ \quad ={{\rm{e}}}^{2{K}_{1}\cdot R}{c}_{3\to 2;\hat{1}}(t,R;{M}_{1};-{K}_{1},{M}_{0};{K}_{2}-{K}_{1},{M}_{2}),\\ {c}_{3\to 1;0}(t,R;{M}_{0};{K}_{1},{M}_{1};{K}_{2},{M}_{2})\\ \quad ={{\rm{e}}}^{2{K}_{1}\cdot R}{c}_{3\to 1;1}(t,R;{M}_{1};-{K}_{1},{M}_{0};{K}_{2}-{K}_{1},{M}_{2}),\\ {c}_{3\to 1;1}(t,R;{M}_{0};{K}_{1},{M}_{1};{K}_{2},{M}_{2})\\ \quad ={{\rm{e}}}^{2{K}_{1}\cdot R}{c}_{3\to 1;0}(t,R;{M}_{1};-{K}_{1},{M}_{0};{K}_{2}-{K}_{1},{M}_{2}),\\ {c}_{3\to 1;2}(t,R;{M}_{0};{K}_{1},{M}_{1};{K}_{2},{M}_{2})\\ \quad ={{\rm{e}}}^{2{K}_{1}\cdot R}{c}_{3\to 1;1}(t,R;{M}_{1};-{K}_{1},{M}_{0};{K}_{2}-{K}_{1},{M}_{2}).\,\,\,\,\end{array}\end{eqnarray}$
The third group of relations is
$\begin{eqnarray}\begin{array}{l}{c}_{3\to 3}(t,R;{M}_{0};{K}_{1},{M}_{1};{K}_{2},{M}_{2})\\ \quad ={{\rm{e}}}^{2{K}_{2}\cdot R}{c}_{3\to 3}(t,R;{M}_{2};{K}_{1}-{K}_{2},{M}_{1};-{K}_{2},{M}_{0}),\\ {c}_{3\to 2;\hat{0}}(t,R;{M}_{0};{K}_{1},{M}_{1};{K}_{2},{M}_{2})\\ \quad ={{\rm{e}}}^{2{K}_{2}\cdot R}{c}_{3\to 2;\hat{2}}(t,R;{M}_{2};{K}_{1}-{K}_{2},{M}_{1};-{K}_{2},{M}_{0}),\\ {c}_{3\to 2;\hat{1}}(t,R;{M}_{0};{K}_{1},{M}_{1};{K}_{2},{M}_{2})\\ \quad ={{\rm{e}}}^{2{K}_{2}\cdot R}{c}_{3\to 2;\hat{1}}(t,R;{M}_{2};{K}_{1}-{K}_{2},{M}_{1};-{K}_{2},{M}_{0}),\\ {c}_{3\to 2;\hat{2}}(t,R;{M}_{0};{K}_{1},{M}_{1};{K}_{2},{M}_{2})\\ \quad ={{\rm{e}}}^{2{K}_{2}\cdot R}{c}_{3\to 2;\hat{0}}(t,R;{M}_{2};{K}_{1}-{K}_{2},{M}_{1};-{K}_{2},{M}_{0}),\\ {c}_{3\to 1;0}(t,R;{M}_{0};{K}_{1},{M}_{1};{K}_{2},{M}_{2})\\ \quad ={{\rm{e}}}^{2{K}_{2}\cdot R}{c}_{3\to 1;2}(t,R;{M}_{2};{K}_{1}-{K}_{2},{M}_{1};-{K}_{2},{M}_{0}),\\ {c}_{3\to 1;1}(t,R;{M}_{0};{K}_{1},{M}_{1};{K}_{2},{M}_{2})\\ \quad ={{\rm{e}}}^{2{K}_{2}\cdot R}{c}_{3\to 1;1}(t,R;{M}_{2};{K}_{1}-{K}_{2},{M}_{1};-{K}_{2},{M}_{0}),\\ {c}_{3\to 1;2}(t,R;{M}_{0};{K}_{1},{M}_{1};{K}_{2},{M}_{2})\\ \quad ={{\rm{e}}}^{2{K}_{2}\cdot R}{c}_{3\to 1;0}(t,R;{M}_{2};{K}_{1}-{K}_{2},{M}_{1};-{K}_{2},{M}_{0}).\,\,\,\,\end{array}\end{eqnarray}$
Using these three groups of relations, we just need to compute three generation functions, for example, $c_{3\to 3}$, ${c}_{3\to 2;\widehat{1}}$ and $c_{3\to 1;0}$. Their mass dimensions are
$\begin{eqnarray}\begin{array}{rcl}[t] & = & -2,\,\,\,\,\,\,[{c}_{3\to 3}]=0,\,\,\,\,[{c}_{3\to 2;\widehat{i}}]=-2,\\ \,[{c}_{3\to 1;i}] & = & -4.\,\,\,\end{array}\end{eqnarray}$

5.1. The differential equations

Since we have given enough details in the bubble section, here we will be brief. Using the operators ${\partial }_{R}\cdot {\partial }_{R}$, ${K}_{1}\cdot {\partial }_{R}$ and ${K}_{2}\cdot {\partial }_{R}$, we can find
$\begin{eqnarray}\begin{array}{l}{\partial }_{R}\cdot {\partial }_{R}{c}_{T}(t,R;{M}_{0};{K}_{1},{M}_{1};{K}_{2},{M}_{2})\\ \quad =4{t}^{2}{M}_{0}^{2}{c}_{T}(t,R;{M}_{0};{K}_{1},{M}_{1};{K}_{2},{M}_{2})+4{t}^{2}{\xi }_{T}{h}_{T},\end{array}\end{eqnarray}$
$\begin{eqnarray}\begin{array}{l}{K}_{1}\cdot {\partial }_{R}{c}_{T}(t,R;{M}_{0};{K}_{1},{M}_{1};{K}_{2},{M}_{2})\\ \quad ={{tf}}_{1}{c}_{T}(t,R;{M}_{0};{K}_{1},{M}_{1};{K}_{2},{M}_{2})-t{\widetilde{\xi }}_{T}{\widetilde{h}}_{T}+t{\xi }_{T}{h}_{T},\,\,\,\end{array}\end{eqnarray}$
$\begin{eqnarray}\begin{array}{l}{K}_{2}\cdot {\partial }_{R}{c}_{T}(t,R;{M}_{0};{K}_{1},{M}_{1};{K}_{2},{M}_{2})\\ \quad ={{tf}}_{2}{c}_{T}(t,R;{M}_{0};{K}_{1},{M}_{1};{K}_{2},{M}_{2})-t{\widehat{\xi }}_{T}{\widehat{h}}_{T}+t{\xi }_{T}{h}_{T},\,\,\,\end{array}\end{eqnarray}$
where T is the index labelling the different generation functions, and ${f}_{1}={K}_{1}^{2}-{M}_{1}^{2}+{M}_{0}^{2}$, ${f}_{2}={K}_{2}^{2}-{M}_{2}^{2}+{M}_{0}^{2}$. The various constants $\xi ,\widetilde{\xi },\widehat{\xi }$ are given in table (5.9), while the $h_T$ are given by
$\begin{eqnarray}\begin{array}{l}{h}_{3\to 2;\hat{0}}={{\rm{e}}}^{2{K}_{1}\cdot R}{c}_{2\to 2}(t,{R}^{2},({K}_{2}-{K}_{1})\cdot R,\\ \quad {\left({K}_{2}-{K}_{1}\right)}^{2};{M}_{1},{M}_{2}),\\ {h}_{3\to 1;1}={{\rm{e}}}^{2{K}_{1}\cdot R}{c}_{2\to 2;\hat{1}}(t,{R}^{2},({K}_{2}-{K}_{1})\cdot R,\\ \quad {\left({K}_{2}-{K}_{1}\right)}^{2};{M}_{1},{M}_{2}),\\ {h}_{3\to 1;2}={{\rm{e}}}^{2{K}_{1}\cdot R}{c}_{2\to 2;\hat{0}}(t,{R}^{2},({K}_{2}-{K}_{1})\cdot R,\\ \quad {\left({K}_{2}-{K}_{1}\right)}^{2};{M}_{1},{M}_{2}),\end{array}\end{eqnarray}$
and ${\widetilde{h}}_{T}$ are given by
$\begin{eqnarray}\begin{array}{rcl}{\widetilde{h}}_{3\to 2;\widehat{1}} & = & {c}_{2\to 2}(t,{R}^{2},{K}_{2}\cdot R,{K}_{2}^{2};{M}_{0},{M}_{2}),\\ {\widetilde{h}}_{3\to 1;0} & = & {c}_{2\to 1;\widehat{1}}(t,{R}^{2},{K}_{2}\cdot R,{K}_{2}^{2};{M}_{0},{M}_{2}),\\ {\widetilde{h}}_{3\to 1;2} & = & {c}_{2\to 1;\widehat{0}}(t,{R}^{2},{K}_{2}\cdot R,{K}_{2}^{2};{M}_{0},{M}_{2}),\,\,\,\,\end{array}\end{eqnarray}$
and ${\widehat{h}}_{T}$ are given by
$\begin{eqnarray}\begin{array}{rcl}{\widehat{h}}_{3\to 2;\widehat{2}} & = & {c}_{2\to 2}(t,{R}^{2},{K}_{1}\cdot R,{K}_{1}^{2};{M}_{0},{M}_{1}),\\ {\widehat{h}}_{3\to 1;0} & = & {c}_{2\to 1;\widehat{1}}(t,{R}^{2},{K}_{1}\cdot R,{K}_{1}^{2};{M}_{0},{M}_{1}),\\ {\widehat{h}}_{3\to 1;1} & = & {c}_{2\to 1;\widehat{0}}(t,{R}^{2},{K}_{1}\cdot R,{K}_{1}^{2};{M}_{0},{M}_{1}).\end{array}\end{eqnarray}$
The differential equations (5.6), (5.7) and (5.8) are indeed of the form (4.13) and (4.14) given in the previous section.
For the triangle, the natural variables for the generation functions are
$\begin{eqnarray}\begin{array}{rcl}r & = & R\cdot R,\,\,\,{p}_{1}={K}_{1}\cdot R,\\ {p}_{2} & = & {K}_{2}\cdot R.\,\,\,\end{array}\end{eqnarray}$
However, the good variables for the differential equations (5.6), (5.7) and (5.8) are $x,{y}_{1},{y}_{2}$, defined as
$\begin{eqnarray}\begin{array}{rcl}x & = & [{K}_{1}^{2}{K}_{2}^{2}-{\left({K}_{1}\cdot {K}_{2}\right)}^{2}]r-{K}_{2}^{2}{p}_{1}^{2}-{K}_{1}^{2}{p}_{2}^{2}\\ & & +(2{K}_{1}\cdot {K}_{2}){p}_{1}{p}_{2}\\ & = & | G| r-{p}_{1}{y}_{1}-{p}_{2}{y}_{2},\\ {y}_{2} & = & -({K}_{1}\cdot {K}_{2}){p}_{1}+{K}_{1}^{2}{p}_{2},\\ {y}_{1} & = & -({K}_{1}\cdot {K}_{2}){p}_{2}+{K}_{2}^{2}{p}_{1},\,\,\,\end{array}\end{eqnarray}$
which are defined in (4.8) with Gram matrix
$\begin{eqnarray}\begin{array}{rcl}G({K}_{1},{K}_{2}) & \equiv & \left(\begin{array}{cc}{K}_{1}^{2} & {K}_{1}\cdot {K}_{2}\\ {K}_{1}\cdot {K}_{2} & {K}_{2}^{2}\end{array}\right),\\ {G}^{-1} & = & \displaystyle \frac{1}{| G| }\left(\begin{array}{cc}{K}_{2}^{2} & -{K}_{1}\cdot {K}_{2}\\ -{K}_{1}\cdot {K}_{2} & {K}_{1}^{2}\end{array}\right).\,\,\,\end{array}\end{eqnarray}$
Using the new variables, the differential equations are
$\begin{eqnarray}\begin{array}{l}\left({\partial }_{{y}_{1}}-\displaystyle \frac{{\alpha }_{1}}{| G| }\right){c}_{T}=\displaystyle \frac{1}{| G| }{H}_{T;1},\\ \quad \left({\partial }_{{y}_{2}}-\displaystyle \frac{{\alpha }_{2}}{| G| }\right){c}_{T}=\displaystyle \frac{1}{| G| }{H}_{T;2},\,\,\,\end{array}\end{eqnarray}$
$\begin{eqnarray}\left(4| G| x{\partial }_{x}^{2}+2(D-2)| G| {\partial }_{x}+{\widetilde{\alpha }}_{R}\right){c}_{T}={{ \mathcal H }}_{T;R},\end{eqnarray}$
where
$\begin{eqnarray}\begin{array}{rcl}{\alpha }_{1} & = & {{tf}}_{1},\,\,\,{\alpha }_{2}={{tf}}_{2},\\ {\widetilde{\alpha }}_{R} & = & \displaystyle \frac{{t}^{2}{f}_{1}^{2}{K}_{2}^{2}+{t}^{2}{f}_{2}^{2}{K}_{1}^{2}-2{t}^{2}{f}_{1}{f}_{2}({K}_{1}\cdot {K}_{2})}{| G| }\\ & & -4{t}^{2}{M}_{0}^{2},\,\,\,\end{array}\end{eqnarray}$
and
$\begin{eqnarray}\begin{array}{rcl}{{ \mathcal H }}_{T;R} & = & -\left({K}_{2}^{2}{\partial }_{{y}_{1}}-({K}_{1}\cdot {K}_{2}){\partial }_{{y}_{2}}\right.\\ & & \left.+\displaystyle \frac{{\alpha }_{1}{K}_{2}^{2}-{\alpha }_{2}({K}_{1}\cdot {K}_{2})}{| G| }\right){H}_{T;1}\\ & & -\left({K}_{1}^{2}{\partial }_{{y}_{2}}-({K}_{1}\cdot {K}_{2}){\partial }_{{y}_{1}}\right.\\ & & \left.+\displaystyle \frac{{\alpha }_{2}{K}_{1}^{2}-{\alpha }_{1}({K}_{1}\cdot {K}_{2})}{| G| }\right){H}_{T;2}+{H}_{T;R}.\,\,\,\,\,\,\end{array}\end{eqnarray}$
The solution is given by
$\begin{eqnarray}\begin{array}{l}{c}_{T}(x,{y}_{1},{y}_{2})={{\rm{e}}}^{\frac{{\alpha }_{1}}{| G| }{y}_{1}}{{\rm{e}}}^{\frac{{\alpha }_{2}}{| G| }{y}_{2}}{F}_{T}^{{tri}}(x)+{{\rm{e}}}^{\frac{{\alpha }_{2}}{| G| }{y}_{2}}\frac{1}{| G| }\\ \quad \times {\int }_{0}^{{y}_{2}}{\rm{d}}{w}_{2}{{\rm{e}}}^{-\frac{{\alpha }_{2}}{| G| }{w}_{2}}{H}_{T;2}(x,{y}_{1},{w}_{2})\\ \quad +{{\rm{e}}}^{\frac{{\alpha }_{1}}{| G| }{y}_{1}}{{\rm{e}}}^{\frac{{\alpha }_{2}}{| G| }{y}_{2}}\frac{1}{| G| }{\int }_{0}^{{y}_{1}}{\rm{d}}{w}_{1}{{\rm{e}}}^{-\frac{{\alpha }_{1}}{| G| }{w}_{1}}{H}_{T;1}(x,{w}_{1},0),\,\,\,\,\,\,\end{array}\end{eqnarray}$
where
$\begin{eqnarray}{F}_{0}^{{\rm{tri}}}(x)={\,}_{0}{F}_{1}\left(\varnothing ;\displaystyle \frac{(D-2)}{2};\displaystyle \frac{-{\widetilde{\alpha }}_{R}x}{4| G| }\right)\,\,\,\,\,\,\,\,\,\end{eqnarray}$
and
$\begin{eqnarray}\begin{array}{l}{F}_{T}^{{\rm{tri}}}(x)={F}_{0}^{{\rm{tri}}}(x)\left({f}_{0}+{\int }_{0}^{x}{\rm{d}}{{ww}}^{\frac{-(D-2)}{2}}{{\rm{e}}}^{-2{F}_{0}^{{\rm{tri}}}(w)}\right.\\ \quad \left.\times {\int }_{0}^{w}{\rm{d}}\xi \frac{1}{A}{{ \mathcal H }}_{T;R}(x,0,\ldots ,0){F}_{0}^{{\rm{tri}}}{\left(\xi \right)}^{-1}{{\rm{e}}}^{2{F}_{0}^{{\rm{tri}}}(\xi )}{\xi }^{\frac{(D-2)}{2}-1}\right).\,\,\,\,\,\,\,\,\,\end{array}\end{eqnarray}$
The generation function $c_{3\to 3}$: for this one, from table (5.9), we see that $H_{T;1}=H_{T;2}=H_{T;R}=0$, thus we can write down immediately
$\begin{eqnarray}{c}_{3\to 3}={{\rm{e}}}^{\frac{{\alpha }_{1}}{| G| }{y}_{1}}{{\rm{e}}}^{\frac{{\alpha }_{2}}{| G| }{y}_{2}}{\,}_{0}{F}_{1}\left(\varnothing ;\frac{(D-2)}{2};\frac{-{\tilde{\alpha }}_{R}x}{4| G| }\right).\,\,\,\,\,\,\,\,\,\end{eqnarray}$
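To make the triangle result concrete, here is a hedged numerical sketch (all kinematic inputs are illustrative placeholders, not values from the paper) that assembles $|G|$, $f_i$, $\alpha_i$, ${\widetilde{\alpha }}_{R}$ and the variables $x,{y}_{1},{y}_{2}$ from chosen $r,{p}_{1},{p}_{2}$, and then evaluates $c_{3\to 3}$:
```python
# Sketch: evaluating c_{3->3} above for placeholder triangle kinematics.
# All numeric inputs are illustrative assumptions, not values from the paper.
from mpmath import mp, hyp0f1, exp

mp.dps = 30
D = 4.5
t = 0.7
K1sq, K2sq, K1K2 = 2.0, 3.0, 0.5           # K_1^2, K_2^2, K_1.K_2
M0sq, M1sq, M2sq = 1.0, 1.2, 0.8            # masses squared
r, p1, p2 = 0.4, 0.3, -0.2                  # R.R, K_1.R, K_2.R

detG = K1sq * K2sq - K1K2**2                # |G|
f1 = K1sq - M1sq + M0sq
f2 = K2sq - M2sq + M0sq
a1, a2 = t * f1, t * f2                     # alpha_1, alpha_2
at_R = (t**2 * f1**2 * K2sq + t**2 * f2**2 * K1sq
        - 2 * t**2 * f1 * f2 * K1K2) / detG - 4 * t**2 * M0sq

# change of variables x, y1, y2 defined above
y1 = K2sq * p1 - K1K2 * p2
y2 = K1sq * p2 - K1K2 * p1
x = detG * r - p1 * y1 - p2 * y2

c33 = exp(a1 * y1 / detG) * exp(a2 * y2 / detG) \
      * hyp0f1((D - 2) / 2, -at_R * x / (4 * detG))
print(c33)
```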
The generation function $c_{3\to 2}$: there are three generation functions ${c}_{3\to 2;\widehat{i}}$. We want to choose the one with the simplest $H_{1}$, $H_{2}$, $H_{R}$. Checking table (5.9), we see that if we consider ${c}_{3\to 2;\widehat{1}}$, we will have $H_{R}=0$, $H_{2}=0$ and
$\begin{eqnarray}\begin{array}{l}{H}_{3\to 2;\hat{1};1}(x,{y}_{1},{y}_{2})=-{c}_{2\to 2}(t,{R}^{2},{K}_{2}\cdot R,{K}_{2}^{2};{M}_{0},{M}_{2})\\ \quad ={-}_{0}{F}_{1}\left(\varnothing ;\frac{D-1}{2};\right.\\ \quad \left.\left(\frac{{t}^{2}(4{K}_{2}^{2}{M}_{0}^{2}-{f}_{2}^{2})({K}_{2}^{2}r-{p}_{2}^{2})}{4{\left({K}_{2}^{2}\right)}^{2}}\right)\right){{\rm{e}}}^{\frac{{{tf}}_{2}}{{K}_{2}^{2}}{p}_{2}},\,\,\,\,\,\,\,\,\,\end{array}\end{eqnarray}$
where we have used the result (3.44) and expressed $r,\,{p}_{1},\,{p}_{2}$ using (4.10)
$\begin{eqnarray}\begin{array}{rcl}{p}_{1} & = & \displaystyle \frac{{K}_{1}^{2}{y}_{1}+({K}_{1}\cdot {K}_{2}){y}_{2}}{| G| },\,\,\,\,{p}_{2}=\displaystyle \frac{{K}_{2}^{2}{y}_{2}+({K}_{1}\cdot {K}_{2}){y}_{1}}{| G| },\\ r & = & \displaystyle \frac{| G| x+{K}_{1}^{2}{y}_{1}^{2}+{K}_{2}^{2}{y}_{2}^{2}+2({K}_{1}\cdot {K}_{2}){y}_{1}{y}_{2}}{| G{| }^{2}}.\,\,\,\,\,\,\,\,\,\end{array}\end{eqnarray}$
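As a quick consistency check (not part of the paper), the fact that these expressions invert the definitions of $x,{y}_{1},{y}_{2}$ above can be verified symbolically, e.g. with sympy:
```python
# Sketch: symbolic check that (r, p1, p2) -> (x, y1, y2) and the inverse
# expressions quoted above are consistent with each other.
import sympy as sp

K1sq, K2sq, K1K2, r, p1, p2 = sp.symbols('K1sq K2sq K1K2 r p1 p2')
detG = K1sq * K2sq - K1K2**2

# forward map
y1 = K2sq * p1 - K1K2 * p2
y2 = K1sq * p2 - K1K2 * p1
x = detG * r - p1 * y1 - p2 * y2

# inverse expressions
p1_back = (K1sq * y1 + K1K2 * y2) / detG
p2_back = (K2sq * y2 + K1K2 * y1) / detG
r_back = (detG * x + K1sq * y1**2 + K2sq * y2**2 + 2 * K1K2 * y1 * y2) / detG**2

print(sp.simplify(p1_back - p1))   # 0
print(sp.simplify(p2_back - p2))   # 0
print(sp.simplify(r_back - r))     # 0
```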
Thus by (5.19) we can find
$\begin{eqnarray}\begin{array}{l}{{ \mathcal H }}_{3\to 2;\hat{1};R}=\left(\frac{{\alpha }_{1}{K}_{2}^{2}-{\alpha }_{2}({K}_{1}\cdot {K}_{2})}{| G| }\right){{\rm{e}}}^{\frac{{{tf}}_{2}}{{K}_{2}^{2}}{p}_{2}}{\,}_{0}{F}_{1}\\ \quad \times \left(\varnothing ;\frac{D-1}{2};\left(\frac{{t}^{2}(4{K}_{2}^{2}{M}_{0}^{2}-{f}_{2}^{2})({K}_{2}^{2}r-{p}_{2}^{2})}{4{\left({K}_{2}^{2}\right)}^{2}}\right)\right)\\ \quad +\frac{{t}^{2}(4{K}_{2}^{2}{M}_{0}^{2}-{f}_{2}^{2})}{4{\left({K}_{2}^{2}\right)}^{2}}\frac{2{K}_{2}^{2}{y}_{1}}{| G| }{{\rm{e}}}^{\frac{{{tf}}_{2}}{{K}_{2}^{2}}{p}_{2}}{\,}_{0}{F}_{1}^{(1)}\\ \quad \times \left(\varnothing ;\frac{D-1}{2};\left(\frac{{t}^{2}(4{K}_{2}^{2}{M}_{0}^{2}-{f}_{2}^{2})({K}_{2}^{2}r-{p}_{2}^{2})}{4{\left({K}_{2}^{2}\right)}^{2}}\right)\right),\end{array}\end{eqnarray}$
where we have defined
$\begin{eqnarray}\begin{array}{l}{\,}_{A}{F}_{B}^{(n)}({a}_{1},\ldots ,{a}_{A};{b}_{1},\ldots ,{b}_{B};x)\\ \quad =\frac{{{\rm{d}}}^{n}}{{\rm{d}}{x}^{n}}{\,}_{A}{F}_{B}({a}_{1},\ldots ,{a}_{A};{b}_{1},\ldots ,{b}_{B};x).\end{array}\end{eqnarray}$
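In an implementation, the derivative ${}_{0}{F}_{1}^{(1)}$ appearing above can be obtained either by numerical differentiation or from the standard contiguous relation $\tfrac{{\rm{d}}}{{\rm{d}}x}{}_{0}{F}_{1}(\varnothing ;b;x)={}_{0}{F}_{1}(\varnothing ;b+1;x)/b$. A minimal mpmath sketch comparing the two (parameter values are placeholders):
```python
# Sketch: two ways to obtain the derivative 0F1^(1)(; b; x) used above.
# The contiguous relation is standard; the numbers below are placeholders.
from mpmath import mp, hyp0f1, diff

mp.dps = 30
b, x0 = 2.3, 0.45

d_numeric = diff(lambda x: hyp0f1(b, x), x0)   # numerical differentiation
d_identity = hyp0f1(b + 1, x0) / b             # contiguous-relation identity

print(d_numeric, d_identity)                   # the two agree to working precision
```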
Putting them back into (5.20), we can find the analytic expression. Using it, we can get the explicit series expansion as discussed in appendix A.
For the generation functions $c_{3\to 1;i}$, we can do similar calculations. The key is to find $H_{T;i}$ and $H_{T;R}$, which are the generation functions of the topologies one order lower. Thus we see the recursive structure of generation functions, building up from lower topologies to higher topologies. The logic is clear, although working out the details takes some effort.

6. Conclusion

In this paper, we have introduced the concept of the generation function for reduction coefficients of loop integrals. For one-loop integrals, using the recent proposal of the auxiliary vector R, we can construct two types of differential operators, $\tfrac{\partial }{\partial R}\cdot \tfrac{\partial }{\partial R}$ and ${K}_{i}\cdot \tfrac{\partial }{\partial R}$. Using these operators, we can establish the corresponding differential equations for the generation functions. By a proper change of variables, these differential equations can be written in a decoupled form, so one can solve them one by one analytically. Obviously, one could try to apply the same idea to the reduction problem for two- and higher-loop integrals. However, with the appearance of irreducible scalar products, the problem becomes harder. One can try to use the IBP relations in the Baikov representation [44], as shown in [42, 43].

Acknowledgments

We would like to thank Jiaqi Chen, Tingfei Li and Rui Yu for their useful discussions and early participation. This work is supported by Chinese NSF funding under Grant Nos. 11935013, 11947301, 12047502 (Peng Huanwu Center).

Appendix A. Solving differential equations

As shown in the paper, differential equations for generation functions can be reduced to two typical types. In this appendix, we present details of solving these typical differential equations.
The first-order differential equation: the first typical case is the following first-order differential equation
$\begin{eqnarray}\boxed{\left(A\frac{{\rm{d}}}{{\rm{d}}x}+B\right)F(x)=H(x)},\,\,\,\end{eqnarray}$
where A, B are independent of x and H(x) is a known function of x. To solve it, we first solve the homogeneous part
$\begin{eqnarray}\begin{array}{l}\left(A\frac{{\rm{d}}}{{\rm{d}}x}+B\right){F}_{0}(x)=0,\\ \quad \Longrightarrow \,\,\,{F}_{0}(x)={{\rm{e}}}^{-\frac{B}{A}x}.\,\,\,\,\end{array}\end{eqnarray}$
Then we write F(x) = F0(x)F1(x) in (A.1) to get the differential equation for F1(x) as
$\begin{eqnarray}A\frac{{\rm{d}}}{{\rm{d}}x}{F}_{1}(x)={F}_{0}^{-1}H(x),\,\,\,\end{eqnarray}$
thus we have
$\begin{eqnarray}{F}_{1}(x)={F}_{1}(x=0)+{\int }_{0}^{x}{\rm{d}}t\frac{1}{A}{{\rm{e}}}^{\frac{B}{A}t}H(t).\,\,\,\end{eqnarray}$
Knowing the particular solution of F(x) from (A.4), the general solution of (A.1) is given by
$\begin{eqnarray}\boxed{F(x)={F}_{0}(x){F}_{1}(x)+\tilde{\alpha }{F}_{0}(x)={{\rm{e}}}^{-\frac{B}{A}x}\left(\alpha +{\int }_{0}^{x}{\rm{d}}w\frac{1}{A}{{\rm{e}}}^{\frac{B}{A}w}H(w)\right)},\,\,\,\end{eqnarray}$
where the constant α is determined by the boundary condition.
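A minimal sympy sketch, assuming purely for illustration a linear inhomogeneity $H(x)={h}_{0}+{h}_{1}x$, confirms that the boxed solution (A.5) indeed satisfies (A.1):
```python
# Sketch: check with sympy that the solution (A.5) satisfies (A.1)
# for a sample inhomogeneity H(x) = h0 + h1*x (an illustrative assumption).
import sympy as sp

x, w, A, B, alpha, h0, h1 = sp.symbols('x w A B alpha h0 h1', positive=True)
H = lambda z: h0 + h1 * z

F = sp.exp(-B / A * x) * (alpha
      + sp.integrate(sp.exp(B / A * w) * H(w) / A, (w, 0, x)))

residual = sp.simplify(A * sp.diff(F, x) + B * F - H(x))
print(residual)   # -> 0
```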
If we want the series expansion in x, there are several ways to obtain it. The first one is to carry out the integration
$\begin{eqnarray}\begin{array}{l}{\int }_{0}^{x}{\rm{d}}t\frac{1}{A}{{\rm{e}}}^{\frac{B}{A}t}H(t)=\frac{x}{A}\displaystyle \sum _{n=0}^{\infty }\displaystyle \sum _{a=0}^{\infty }{h}_{n}{x}^{n}\\ \quad \times \frac{1}{(n+a+1)}\frac{{\left(\frac{{Bx}}{A}\right)}^{a}}{a!},\,\,\,\end{array}\end{eqnarray}$
where we have used the expansion $H(x)={\sum }_{n=0}^{\infty }{h}_{n}{x}^{n}$. The second way is to insert $F(x)={\sum }_{n=0}^{\infty }{f}_{n}{x}^{n}$ directly into (A.1) to arrive at the recursion relation
$\begin{eqnarray}\begin{array}{rcl}{f}_{n+1} & = & \displaystyle \frac{-B}{(n+1)A}{f}_{n}+\displaystyle \frac{{h}_{n}}{(n+1)A}\\ & \equiv & \gamma (n){f}_{n}+\rho (n),\,\,\,\,n\geqslant 0.\,\,\,\end{array}\end{eqnarray}$
The recursive relation (A.7) can be solved as
$\begin{eqnarray}{f}_{n+1}={f}_{0}\prod _{i=0}^{n}\gamma (i)+\sum _{i=0}^{n}\rho (i)\prod _{j=i+1}^{n}\gamma (j).\,\,\,\end{eqnarray}$
It is easy to compute
$\begin{eqnarray}{\rm{\Xi }}[n+1]=\prod _{i=0}^{n}\gamma (i)=\displaystyle \frac{{\left(-B\right)}^{n+1}}{(n+1)!{A}^{n+1}}\,\,\,\,\,\end{eqnarray}$
and
$\begin{eqnarray}\begin{array}{l}\rho (i)\prod _{j=i+1}^{n-1}\gamma (j)=\displaystyle \frac{{h}_{i}}{(i+1)A}\prod _{j=i+1}^{n-1}\gamma (j)\\ \quad =\displaystyle \frac{{h}_{i}}{-B}\prod _{j=i}^{n-1}\gamma (j)\\ \quad =\displaystyle \frac{{h}_{i}}{-B}\displaystyle \frac{{\rm{\Xi }}[n]}{{\rm{\Xi }}[i]}=\displaystyle \frac{{h}_{i}}{-B}\displaystyle \frac{i!{\left(-B\right)}^{n-i}}{n!{A}^{n-i}},\,\,\,\,\end{array}\end{eqnarray}$
thus we have
$\begin{eqnarray}F(x)=\sum _{n=0}\left({f}_{0}\displaystyle \frac{{\left(-B\right)}^{n}}{n!{A}^{n}}+\sum _{i=0}^{n-1}\displaystyle \frac{{h}_{i}}{A}\displaystyle \frac{i!{\left(-B\right)}^{n-1-i}}{n!{A}^{n-1-i}}\right){x}^{n}.\,\,\,\,\end{eqnarray}$
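The agreement between the recursion (A.7) and this closed-form series can be checked directly with exact rational arithmetic; in the sketch below the values of A, B, $f_0$ and the coefficients $h_n$ are sample placeholders:
```python
# Sketch: compare the recursion (A.7) with the closed-form coefficients above
# for sample values of A, B and a sample series H(x); all numbers are placeholders.
from fractions import Fraction
from math import factorial

A, B, f0 = Fraction(2), Fraction(3), Fraction(1)
h = [Fraction(1), Fraction(-2), Fraction(5), Fraction(0), Fraction(7)]  # h_0..h_4

# recursion (A.7): f_{n+1} = -B/((n+1)A) f_n + h_n/((n+1)A)
f_rec = [f0]
for n in range(len(h)):
    f_rec.append(-B / ((n + 1) * A) * f_rec[n] + h[n] / ((n + 1) * A))

# closed-form coefficients
def f_closed(n):
    val = f0 * (-B) ** n / (factorial(n) * A ** n)
    for i in range(n):
        val += h[i] / A * factorial(i) * (-B) ** (n - 1 - i) / (factorial(n) * A ** (n - 1 - i))
    return val

print(all(f_rec[n] == f_closed(n) for n in range(len(f_rec))))   # True
```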
The third way is to use the analytic expression (A.5). We need to compute $\frac{{{\rm{d}}}^{n}F(x)}{{\rm{d}}{x}^{n}}$ and then set x = 0. One can see that, for example,
$\begin{eqnarray}\begin{array}{l}\frac{{\rm{d}}F(x)}{{\rm{d}}x}=-\frac{B}{A}{{\rm{e}}}^{-\frac{B}{A}x}\left(\alpha +{\int }_{0}^{x}{\rm{d}}w\frac{1}{A}{{\rm{e}}}^{\frac{B}{A}w}H(w)\right)\\ \quad +\frac{1}{A}H(x),\,\,\,\end{array}\end{eqnarray}$
thus
$\begin{eqnarray}\frac{{\rm{d}}F(x)}{{\rm{d}}x}{| }_{x=0}=-\frac{B}{A}\alpha +\frac{1}{A}H(x=0).\,\,\,\end{eqnarray}$
The important point is that when we set x = 0 at the end of the differentiation, the integration ${\int }_{0}^{x}{\rm{d}}w...=0$; thus we have gotten rid of the integration, and all we need to do is differentiate ${{\rm{e}}}^{\frac{B}{A}x}$ and H(x).
The second-order differential equation: the second typical differential equation encountered in this paper is the following second-order equation
$\begin{eqnarray}\boxed{\left({Ax}\frac{{{\rm{d}}}^{2}}{{\rm{d}}{x}^{2}}+B\frac{{\rm{d}}}{{\rm{d}}x}+C\right)F(x)=H(x)},\,\,\,\,\end{eqnarray}$
where A, B, and C are independent of x and H(x) is a known function of x. Let us solve it using the series expansion. Writing
$\begin{eqnarray}F(x)=\sum _{n=0}^{\infty }{f}_{n}{x}^{n},\qquad \qquad H(x)=\sum _{n=0}^{\infty }{h}_{n}{x}^{n},\,\,\,\,\end{eqnarray}$
and putting these back into (A.14) we have
$\begin{eqnarray}\begin{array}{l}\sum _{n=0}^{\infty }{h}_{n}{x}^{n}=\sum _{n=0}^{\infty }({An}(n+1){f}_{n+1}{x}^{n}\\ \quad +B(n+1){f}_{n+1}{x}^{n}+{{Cf}}_{n}{x}^{n}),\,\,\,\,\end{array}\end{eqnarray}$
thus we have the relation
$\begin{eqnarray}{h}_{n}=(n+1)(B+{An}){f}_{n+1}+{{Cf}}_{n},\,\,\,n\geqslant 0.\,\,\,\,\end{eqnarray}$
Using it, we can solve6(6 An important point is that although (A.14) is a second-order differential equation, because x multiplies $\displaystyle \frac{{d}^{2}}{{{dx}}^{2}}$, around x = 0 it is essentially a first-order differential equation. This explains why, using only $f_0$ and the known H(x), we can determine F(x) using (A.18).)
$\begin{eqnarray}\begin{array}{rcl}{f}_{n+1} & = & \displaystyle \frac{{h}_{n}}{(n+1)(B+{An})}-\displaystyle \frac{C}{(n+1)(B+{An})}{f}_{n}\\ & \equiv & \gamma (n){f}_{n}+\rho (n),\,\,\,\,n\geqslant 0.\,\,\,\,\end{array}\end{eqnarray}$
The recursive relation (A.18) can be solved as
$\begin{eqnarray}{f}_{n+1}={f}_{0}\prod _{i=0}^{n}\gamma (i)+\sum _{i=0}^{n}\rho (i)\prod _{j=i+1}^{n}\gamma (j).\,\,\,\,\end{eqnarray}$
It is easy to compute
$\begin{eqnarray}\begin{array}{rcl}{\rm{\Xi }}[n+1] & = & \prod _{i=0}^{n}\gamma (i)=\displaystyle \frac{{\left(-C\right)}^{n+1}}{(n+1)!{A}^{n+1}\prod _{i=0}^{n}(B/A+i)}\\ & = & \displaystyle \frac{{\left(-C\right)}^{n+1}}{(n+1)!{A}^{n+1}{\left(\tfrac{B}{A}\right)}_{n+1}},\,\,\,\,\end{array}\end{eqnarray}$
where we have used the Pochhammer symbol to simplify the expression7(7From the definition one can see that ${\left(x\right)}_{n=0}=1,\forall x$.)
$\begin{eqnarray}{\left(x\right)}_{n}=\displaystyle \frac{{\rm{\Gamma }}(x+n)}{{\rm{\Gamma }}(x)}=\prod _{i=1}^{n}(x+(i-1)).\,\,\,\end{eqnarray}$
Using
$\begin{eqnarray}\begin{array}{rcl}\rho (i)\prod _{j=i+1}^{n-1}\gamma (j) & = & \displaystyle \frac{{h}_{i}}{(i+1)(B+{Ai})}\prod _{j=i+1}^{n-1}\gamma (j)\\ & = & \displaystyle \frac{-{h}_{i}}{C}\prod _{j=i}^{n-1}\gamma (j)=\displaystyle \frac{-{h}_{i}}{C}\displaystyle \frac{{\rm{\Xi }}[n]}{{\rm{\Xi }}[i]}\,\,\,\,\,\end{array}\end{eqnarray}$
we have
$\begin{eqnarray}F(x)=\sum _{n=0}^{\infty }{f}_{n}{x}^{n}=\sum _{n=0}^{\infty }{x}^{n}{\rm{\Xi }}[n]\left\{{f}_{0}+\sum _{i=0}^{n-1}\displaystyle \frac{-{h}_{i}}{C{\rm{\Xi }}[i]}\right\}.\,\,\,\,\,\end{eqnarray}$
Let us define the following function
$\begin{eqnarray}\begin{array}{l}{H}_{A,B,C}(x)=\sum _{i=0}^{\infty }\displaystyle \frac{-{h}_{i}{x}^{i}}{C{\rm{\Xi }}[i]}\\ \quad =\sum _{i=0}^{\infty }\displaystyle \frac{i!{A}^{i}}{{\left(-C\right)}^{i+1}}{\left(\displaystyle \frac{B}{A}\right)}_{i}{h}_{i}{x}^{i},\,\,\,\,\,\end{array}\end{eqnarray}$
which can be considered as the ‘dual function’ of H(x) corresponding to the differential equation (A.14). Then we can write
$\begin{eqnarray}\sum _{i=0}^{n-1}\displaystyle \frac{-{h}_{i}}{C{\rm{\Xi }}[i]}=\lfloor {H}_{A,B,C}(x=1){\rfloor }_{{x}^{n-1}},\,\,\,\,\,\end{eqnarray}$
where the symbol $\lfloor Y(x){\rfloor }_{{x}^{n-1}}$ means to keep the Taylor series up to the order of ${x}^{n-1}$. Thus F(x) can be written compactly as
$\begin{eqnarray}\begin{array}{rcl}F(x) & = & \sum _{n=0}^{\infty }{f}_{n}{x}^{n}=\sum _{n=0}^{\infty }{x}^{n}{\rm{\Xi }}[n]\\ & & \times \left\{{f}_{0}+\lfloor {H}_{A,B,C}(x=1){\rfloor }_{{x}^{n-1}}\right\}.\,\,\,\,\,\end{array}\end{eqnarray}$
For the special case H(x) = 0, it is easy to see that
$\begin{eqnarray}F(x)={f}_{0}{F}_{0},\,\,\,\,\,\,{F}_{0}(x)\equiv \sum _{n=0}^{\infty }\displaystyle \frac{{\left(-C\right)}^{n}{x}^{n}}{n!{A}^{n}{\left(\tfrac{B}{A}\right)}_{n}}.\,\,\,\,\,\end{eqnarray}$
The expression (A.27) is nothing but a special case of the generalized hypergeometric function (see (C.2) of [11]), which is defined as
$\begin{eqnarray}\begin{array}{l}{\,}_{A}{F}_{B}({a}_{1},\ldots ,{a}_{A};{b}_{1},\ldots ,{b}_{B};x)\\ \quad =\sum _{n=0}^{\infty }\displaystyle \frac{{\left({a}_{1}\right)}_{n}...{\left({a}_{A}\right)}_{n}}{{\left({b}_{1}\right)}_{n}...{\left({b}_{B}\right)}_{n}}\displaystyle \frac{{x}^{n}}{n!},\,\,\,\end{array}\end{eqnarray}$
thus we have
$\begin{eqnarray}F(x)={f}_{0}{F}_{0}(x),\,\,\,\,\,\,{F}_{0}={\,}_{0}{F}_{1}\left(\varnothing ;\displaystyle \frac{B}{A};\displaystyle \frac{-{Cx}}{A}\right).\,\,\,\,\,\end{eqnarray}$
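The following sympy sketch (with placeholder values of A, B, C, $f_0$ and a sample H(x)) builds the coefficients from the recursion (A.18), checks that the truncated series satisfies (A.14) below the truncation order, and confirms that the homogeneous coefficients reproduce the ${}_{0}{F}_{1}$ series above:
```python
# Sketch: series solution of (A.14) via the recursion (A.18).
# A, B, C, f0 and the h_n below are placeholder sample values.
import sympy as sp

x = sp.symbols('x')
A, B, C, f0 = sp.Rational(2), sp.Rational(3), sp.Rational(-1), sp.Rational(1)
h = [sp.Rational(1), sp.Rational(4), sp.Rational(-2), sp.Rational(0), sp.Rational(3)]

# recursion (A.18): f_{n+1} = (h_n - C f_n) / ((n+1)(B + A n))
f = [f0]
for n in range(len(h)):
    f.append((h[n] - C * f[n]) / ((n + 1) * (B + A * n)))
N = len(f)

F = sum(f[n] * x**n for n in range(N))
H = sum(h[n] * x**n for n in range(len(h)))
residual = sp.expand(A * x * sp.diff(F, x, 2) + B * sp.diff(F, x) + C * F - H)
print(sp.series(residual, x, 0, N - 1).removeO())    # 0: the ODE holds below truncation order

# homogeneous case (H = 0, f0 = 1): recursion reproduces the 0F1 coefficients above
f_hom = [sp.Integer(1)]
for n in range(N - 1):
    f_hom.append(-C * f_hom[n] / ((n + 1) * (B + A * n)))
F0_coeffs = [(-C)**n / (sp.factorial(n) * A**n * sp.rf(B / A, n)) for n in range(N)]
print(f_hom == F0_coeffs)                            # True
```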
The solution in (A.23) is given as a series expansion. We can also write it in analytic form. Writing F(x) = F0(x)F1(x), we can find the differential equation for F1(x) as
$\begin{eqnarray}H(x)={F}_{0}(x)\left({Ax}\frac{{{\rm{d}}}^{2}}{{\rm{d}}{x}^{2}}+\left(B+2{Ax}\frac{{\rm{d}}{F}_{0}(x)}{{dx}}\right)\frac{{\rm{d}}}{{\rm{d}}x}\right){F}_{1}(x),\,\,\,\,\,\end{eqnarray}$
which is a first-order differential equation for $\frac{{\rm{d}}{F}_{1}(x)}{{\rm{d}}x}=U(x)$. Using a method similar to that for the differential equation (A.1), we can solve
$\begin{eqnarray}\begin{array}{l}U(x)={x}^{-\frac{B}{A}}{{\rm{e}}}^{-2{F}_{0}(x)}\left({\alpha }_{1}\right.\\ \quad \left.+{\int }_{0}^{x}{\rm{d}}w\frac{1}{A}H(w){F}_{0}^{-1}(w){{\rm{e}}}^{2{F}_{0}(w)}{w}^{\frac{B}{A}-1}\right).\,\,\,\,\,\end{array}\end{eqnarray}$
Integrating U(x) to get F1(x) we obtain
$\begin{eqnarray}\begin{array}{l}F(x)={F}_{0}(x)\left({\alpha }_{2}+{\int }_{0}^{x}{\rm{d}}{{ww}}^{-\frac{B}{A}}{{\rm{e}}}^{-2{F}_{0}(w)}\right.\\ \quad \left.\times \left({\alpha }_{1}+{\int }_{0}^{w}{\rm{d}}\xi \frac{1}{A}H(\xi ){F}_{0}^{-1}(\xi ){{\rm{e}}}^{2{F}_{0}(\xi )}{\xi }^{\frac{B}{A}-1}\right)\right),\,\,\,\,\,\end{array}\end{eqnarray}$
where α1, α2 can be determined using the initial conditions F(x = 0) and $\frac{{\rm{d}}F(x=0)}{{\rm{d}}x}$. Using the expansion of F(x), we see that α2 = f0 and α1 = 0, thus we have
$\begin{eqnarray}\boxed{F(x)={F}_{0}(x)\left({f}_{0}+{\int }_{0}^{x}{\rm{d}}{{ww}}^{-\frac{B}{A}}{{\rm{e}}}^{-2{F}_{0}(w)}{\int }_{0}^{w}{\rm{d}}\xi \frac{1}{A}H(\xi ){F}_{0}^{-1}(\xi ){{\rm{e}}}^{2{F}_{0}(\xi )}{\xi }^{\frac{B}{A}-1}\right)}.\,\,\,\,\,\end{eqnarray}$
The analytic form (A.33) is very compact, but it is hard to carry out the integrations in general. However, we can use it to get the series expansion just as before. One can see that
$\begin{eqnarray}\begin{array}{l}\frac{{\rm{d}}F(x)}{{\rm{d}}x}=\frac{{\rm{d}}{F}_{0}(x)}{{\rm{d}}x}\left({f}_{0}+{\int }_{0}^{x}{\rm{d}}{{ww}}^{-\frac{B}{A}}{{\rm{e}}}^{-2{F}_{0}(w)}\right.\\ \quad \left.{\int }_{0}^{w}{\rm{d}}t\frac{1}{A}H(t){F}_{0}{\left(t\right)}^{-1}{{\rm{e}}}^{2{F}_{0}(t)}{t}^{\frac{B}{A}-1}\right)\\ \quad +{F}_{0}(x){x}^{-\frac{B}{A}}{{\rm{e}}}^{-2{F}_{0}(x)}\\ \quad \times {\int }_{0}^{x}{\rm{d}}t\frac{1}{A}H(t){F}_{0}{\left(t\right)}^{-1}{{\rm{e}}}^{2{F}_{0}(t)}{t}^{\frac{B}{A}-1}.\,\,\,\,\end{array}\end{eqnarray}$
When taking the value at x = 0, we need to be careful with the second term and evaluate it as
$\begin{eqnarray}\begin{array}{l}{\int }_{0}^{x}{\rm{d}}t\frac{1}{A}H(t){F}_{0}{\left(t\right)}^{-1}{{\rm{e}}}^{2{F}_{0}(t)}{t}^{\frac{B}{A}-1}\\ \quad \to \frac{1}{A}H(x=0){F}_{0}^{-1}(x=0){{\rm{e}}}^{2{F}_{0}(x=0)}{\int }_{0}^{x}{\rm{d}}{{tt}}^{\frac{B}{A}-1}\\ \quad =\frac{1}{A}H(x=0){F}_{0}^{-1}(x=0){{\rm{e}}}^{2{F}_{0}(x=0)}\frac{A}{B}{x}^{\frac{B}{A}}{| }_{0}^{x}\\ \quad =H(x=0){F}_{0}^{-1}(x=0){{\rm{e}}}^{2{F}_{0}(x=0)}\frac{1}{B}{x}^{\frac{B}{A}}.\,\,\,\,\end{array}\end{eqnarray}$
With the result (A.35), (A.34) gives
$\begin{eqnarray}\frac{{\rm{d}}F(x)}{{\rm{d}}x}{| }_{x=0}={f}_{0}\frac{{\rm{d}}{F}_{0}(x)}{{\rm{d}}x}{| }_{x=0}+\frac{1}{B}H(x=0).\end{eqnarray}$
From (A.14) we have
$\begin{eqnarray}\frac{{\rm{d}}F}{{\rm{d}}x}{| }_{x=0}=\frac{1}{B}H(x=0)-\frac{C}{B}F(x=0),\end{eqnarray}$
thus we see that
$\begin{eqnarray}{f}_{0}=-\frac{C}{B}F(x=0){\left(\frac{{\rm{d}}{F}_{0}(x)}{{\rm{d}}x}{| }_{x=0}\right)}^{-1}.\end{eqnarray}$
For general $\frac{{{\rm{d}}}^{n}F(x)}{{\rm{d}}{x}^{n}}$ we do the same thing to get the series expansion in x. At each step, we use (A.35), and no integration needs to be done at x = 0.

Appendix B. The explicit solutions of ${c}_{n,m}$ for bubble reduction

In this part, we will show how to get explicit solutions for the recursion relations (3.30), (3.31), (3.32) and (3.33).

B.1. The generation function ${c}_{2\to 2}$

For this case, we have $h_T=0$, i.e. all $h_{n,m}=0$ in (3.30), (3.31), (3.32) and (3.33). Using (3.30), it is easy to find
$\begin{eqnarray}\begin{array}{rcl}{c}_{N,0} & = & \prod _{n=1}^{N}\displaystyle \frac{(-{f}^{2}+4{K}^{2}{M}_{0}^{2}){t}^{2}}{2{K}^{2}n(D+2n-3)}\\ & = & \prod _{n=0}^{N-1}\displaystyle \frac{{K}^{2}(\beta -{\alpha }^{2})}{2(n+1)(D+2n-1)},\,\,\,\end{array}\end{eqnarray}$
where we have used the initial condition $c_{0,0}=1$ and defined
$\begin{eqnarray}\alpha =\displaystyle \frac{{tf}}{{K}^{2}},\,\,\,\,\beta =\displaystyle \frac{4{t}^{2}{M}_{0}^{2}}{{K}^{2}}.\,\,\,\,\end{eqnarray}$
Using (3.32) to compute the first few $c_{n,m}$, one can see the pattern
$\begin{eqnarray}{c}_{N,m}=\frac{1}{m!}{{\rm{d}}}_{N,m}{c}_{N,0},\,\,\,\end{eqnarray}$
where $d_{N,m}$ depends on both N and m, and the first few $d_{N,m}$ are
$\begin{eqnarray}\begin{array}{l}{{\rm{d}}}_{N,0}=1,\,{{\rm{d}}}_{N,1}=\alpha ,\\ \quad {{\rm{d}}}_{N,2}=\frac{(D+2N){\alpha }^{2}-\beta }{(D+2N-1)}.\,\,\,\end{array}\end{eqnarray}$
Putting the form (B.3) into (3.32), we get the recursion relation
$\begin{eqnarray}\begin{array}{l}{{\rm{d}}}_{N,m+1}=\frac{\beta }{\alpha }{{\rm{d}}}_{N,m}\\ \quad -\frac{(\beta -{\alpha }^{2})(D+2N+m-1)}{\alpha (D+2N-1)}{{\rm{d}}}_{N+1,m}.\end{array}\end{eqnarray}$
Using the formally defined operator $\widehat{P}$ such that
$\begin{eqnarray}\widehat{P}f(N,m)=f(N+1,m),\end{eqnarray}$
the solution of (B.5) can be formally given by
$\begin{eqnarray}\begin{array}{l}{{\rm{d}}}_{N,M}=\displaystyle \prod _{m=1}^{M}\left(\frac{\beta }{\alpha }-\frac{(\beta -{\alpha }^{2})(D+2N+m-2)}{\alpha (D+2N-1)}\hat{P}\right),\\ \quad M\geqslant 1,\\ \quad := \left(\frac{\beta }{\alpha }-\frac{(\beta -{\alpha }^{2})(D+2N+M-2)}{\alpha (D+2N-1)}\hat{P}\right)\\ \quad \times \left(\frac{\beta }{\alpha }-\frac{(\beta -{\alpha }^{2})(D+2N+(M-1)-2)}{\alpha (D+2N-1)}\hat{P}\right)\\ \quad \times \cdots \times \left(\frac{\beta }{\alpha }-\frac{(\beta -{\alpha }^{2})(D+2N+2-2)}{\alpha (D+2N-1)}\hat{P}\right)\\ \quad \times \left(\frac{\beta }{\alpha }-\frac{(\beta -{\alpha }^{2})(D+2N+1-2)}{\alpha (D+2N-1)}\hat{P}\right){{\rm{d}}}_{N,0},\,\,\,\end{array}\end{eqnarray}$
where, because of the appearance of the operator $\widehat{P}$, the ordering of the factors has been written out explicitly. With a little computation, one can see that
$\begin{eqnarray}{{\rm{d}}}_{N,M}=\displaystyle \sum _{i=0}^{M}\frac{{\left(-\right)}^{i}{\beta }^{M-i}{\left(\beta -{\alpha }^{2}\right)}^{i}}{{\alpha }^{M}}{{\rm{d}}}_{N,M;i},\,\,\,\end{eqnarray}$
where
$\begin{eqnarray}\begin{array}{l}{{\rm{d}}}_{N,M;i}=\displaystyle \sum _{1;[{t}_{1},\ldots ,{t}_{i}]}^{M}\,\displaystyle \prod _{s=1}^{i}\\ \quad \times \frac{D+2(N+s-1)+(M+1-{t}_{s})-2}{D+2(N+s-1)-1},\\ \quad i\geqslant 1;\,\,{{\rm{d}}}_{N,M;0}=1\end{array}\end{eqnarray}$
and the summation sign is defined as
$\begin{eqnarray}\sum _{1\leqslant {t}_{1}\lt {t}_{2}...\lt {t}_{i}\leqslant M}:= \sum _{1;[{t}_{1},\ldots ,{t}_{i}]}^{M}.\,\,\,\end{eqnarray}$
Combining (B.3) with (B.1) and (B.8), we have the explicit solution for the generation function ${c}_{2\to 2}(t,r,p,{K}^{2};{M}_{0},{M}_{1})$.
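As a cross-check (not part of the paper's derivation), the closed form (B.8)-(B.10) can be compared with the recursion (B.5) for small M; a short sympy sketch doing this with D, N, α, β kept symbolic:
```python
# Sketch: check the closed form (B.8)-(B.10) against the recursion (B.5)
# for small M, keeping D, N, alpha, beta symbolic.
from functools import reduce
from itertools import combinations
import operator
import sympy as sp

D, N, alpha, beta = sp.symbols('D N alpha beta')

def d_rec(Nv, M):
    # recursion (B.5) with the boundary value d_{N,0} = 1
    if M == 0:
        return sp.Integer(1)
    m = M - 1
    return (beta / alpha * d_rec(Nv, m)
            - (beta - alpha**2) * (D + 2 * Nv + m - 1)
              / (alpha * (D + 2 * Nv - 1)) * d_rec(Nv + 1, m))

def d_closed(Nv, M):
    # closed form (B.8) with the coefficients d_{N,M;i} of (B.9)
    total = sp.Integer(0)
    for i in range(M + 1):
        if i == 0:
            dNMi = sp.Integer(1)
        else:
            dNMi = sum(
                reduce(operator.mul,
                       [(D + 2 * (Nv + s - 1) + (M + 1 - ts[s - 1]) - 2)
                        / (D + 2 * (Nv + s - 1) - 1) for s in range(1, i + 1)],
                       sp.Integer(1))
                for ts in combinations(range(1, M + 1), i))
        total += (-1)**i * beta**(M - i) * (beta - alpha**2)**i / alpha**M * dNMi
    return total

for M in range(4):
    print(M, sp.simplify(d_rec(N, M) - d_closed(N, M)))   # all differences are 0
```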

B.1.1. The generation function ${c}_{2 \rightarrow 1;\hat{1}}$

For this case, we have ${h}_{T}={c}_{1\to 1}(t,r,{M}_{0})$. Using (2.5) in (3.30), one can write down
$\begin{eqnarray}{c}_{n+\mathrm{1,0}}=\gamma (n){c}_{n,0}+\rho (n)\,\,\,\end{eqnarray}$
with
$\begin{eqnarray}\begin{array}{rcl}\gamma (n) & = & \displaystyle \frac{{K}^{2}(\beta -{\alpha }^{2})}{2(n+1)(D+2n-1)},\\ \rho (n) & = & \displaystyle \frac{{t}^{2n+1}{\left({M}_{0}^{2}\right)}^{n}\alpha }{2(D+2n-1)(n+1)!{\left(\tfrac{D}{2}\right)}_{n}},\,\,\end{array}\end{eqnarray}$
thus we can solve
$\begin{eqnarray}{c}_{n+\mathrm{1,0}}={c}_{\mathrm{0,0}}\prod _{i=0}^{n}\gamma (i)+\sum _{i=0}^{n}\rho (i)\prod _{j=i+1}^{n}\gamma (j),\,\,\end{eqnarray}$
where for the current case the initial condition is $c_{0,0}=0$. To find $c_{n,m}$ we write
$\begin{eqnarray}{c}_{N,m}=\displaystyle \frac{1}{m!}{d}_{N,m}{c}_{N,0}-\displaystyle \frac{1}{m!}\displaystyle \frac{t{\beta }^{N}{\left({K}^{2}\right)}^{N-1}}{N!{4}^{N}{\left(\tfrac{D}{2}\right)}_{N}}{b}_{N,m}\,\,\,\end{eqnarray}$
with the first few $d_{N,m}$ and $b_{N,m}$ being
$\begin{eqnarray}\begin{array}{rcl}{{\rm{d}}}_{N,0} & = & 1,\,\,\,\,\,\,\,{b}_{N,0}=0,\\ {{\rm{d}}}_{N,1} & = & \alpha ,\,\,\,\,\,\,\,{b}_{N,1}=1,\\ {{\rm{d}}}_{N,2} & = & \frac{-\beta +{\alpha }^{2}(D+2N)}{(D+2N-1)},\\ {b}_{N,2} & = & \frac{\alpha (D+2N)}{(D+2N-1)}.\,\,\end{array}\end{eqnarray}$
Using the form (B.14) in (3.32), we get the recursion relations
$\begin{eqnarray}\begin{array}{l}{{\rm{d}}}_{N,m+1}=\frac{\beta }{\alpha }{{\rm{d}}}_{N,m}\\ \quad -\frac{(D+2N+m-1)(\beta -{\alpha }^{2})}{\alpha (D+2N-1)}{{\rm{d}}}_{N+1,m},\,\,\end{array}\end{eqnarray}$
$\begin{eqnarray}\begin{array}{l}{b}_{N,m+1}=\frac{\beta }{\alpha }{b}_{N,m}-\frac{\beta (D+2N+m-1)}{\alpha (D+2N)}{b}_{N+1,m}\\ \quad +\frac{(D+2N+m-1)}{(D+2N-1)}{{\rm{d}}}_{N+1,m}.\,\,\end{array}\end{eqnarray}$
The recursion for $d_{N,m}$ can be solved similarly using the operator language of the previous subsection, and we find
$\begin{eqnarray}{{\rm{d}}}_{N,M}=\displaystyle \sum _{i=0}^{M}\frac{{\left(-\right)}^{i}{\beta }^{M-i}{\left(\beta -{\alpha }^{2}\right)}^{i}}{{\alpha }^{M}}{{\rm{d}}}_{N,M;i}\,\,\,\end{eqnarray}$
with
$\begin{eqnarray}\begin{array}{l}{{\rm{d}}}_{N,M;i}=\displaystyle \sum _{1;[{t}_{1},\ldots ,{t}_{i}]}^{M}\,\displaystyle \prod _{s=1}^{i}\\ \quad \times \frac{D+2(N+s-1)+(M+1-{t}_{s})-2}{D+2(N+s-1)-1},\\ \quad i\geqslant 1;\,\,{{\rm{d}}}_{N,M;0}=1.\,\,\end{array}\end{eqnarray}$
The solution for $b_{N,m}$ is a little more complicated because of the third term on the right-hand side of (B.17). To solve it, we write
$\begin{eqnarray}{b}_{N,m+1}={\widetilde{\gamma }}_{N}(m){b}_{N,m}+{\widetilde{\rho }}_{N}(m),\,\,\end{eqnarray}$
where
$\begin{eqnarray}\begin{array}{rcl}{\tilde{\gamma }}_{N}(m) & = & \left(\frac{\beta }{\alpha }-\frac{\beta (D+2N+m-1)}{\alpha (D+2N)}\hat{P}\right),\\ {\tilde{\rho }}_{N}(m) & = & \frac{(D+2N+m-1)}{(D+2N-1)}{{\rm{d}}}_{N+1,m}.\,\,\end{array}\end{eqnarray}$
Iterating (B.20) with proper ordering we have
$\begin{eqnarray}\begin{array}{rcl}{b}_{N,m+1} & = & {\widetilde{\gamma }}_{N}(m){\widetilde{\gamma }}_{N}(m-1)...{\widetilde{\gamma }}_{N}(0){b}_{N,0}\\ & & +{\widetilde{\gamma }}_{N}(m){\widetilde{\gamma }}_{N}(m-1)...{\widetilde{\gamma }}_{N}(1){\widetilde{\rho }}_{N}(0)\\ & & +{\widetilde{\gamma }}_{N}(m){\widetilde{\gamma }}_{N}(m-1)...{\widetilde{\gamma }}_{N}(2){\widetilde{\rho }}_{N}(1)+...\\ & & +{\widetilde{\gamma }}_{N}(m){\widetilde{\rho }}_{N}(m-1)+{\widetilde{\rho }}_{N}(m).\,\,\end{array}\end{eqnarray}$
Using
$\begin{eqnarray}{\left(\hat{P}\right)}^{a}{\tilde{\rho }}_{N}(m)=\frac{(D+2(N+a)+m-1)}{(D+2(N+a)-1)}{{\rm{d}}}_{N+1+a,m}\,\,\,\end{eqnarray}$
and
$\begin{eqnarray}\begin{array}{l}{\tilde{\gamma }}_{N}(m){\tilde{\gamma }}_{N}(m-1)...{\tilde{\gamma }}_{N}(m-k){\tilde{\rho }}_{N}(m-k-1)\\ \quad =\displaystyle \sum _{i=0}^{k+1}{\left(-\right)}^{i}{\left(\frac{\beta }{\alpha }\right)}^{k+1}\\ \quad \times \left(\displaystyle \sum _{1;[{t}_{1},\ldots ,{t}_{i}]}^{k+1}\displaystyle \prod _{s=1}^{i}\right.\\ \quad \left.\times \frac{D+2(N+s-1)+(m+1-{t}_{s})-1}{D+2(N+s-1)}\right)\\ \quad \times \frac{(D+2(N+i)+(m-k-1)-1)}{(D+2(N+i)-1)}\\ \quad {{\rm{d}}}_{N+1+i,m-k-1},\,\,\end{array}\end{eqnarray}$
we finally reach
$\begin{eqnarray}\begin{array}{l}{b}_{N,m+1}=\frac{(D+2N+m-1)}{(D+2N-1)}{{\rm{d}}}_{N+1,m}\\ \quad +\displaystyle \sum _{k=0}^{m-1}{\left(\frac{\beta }{\alpha }\right)}^{k+1}\displaystyle \sum _{i=0}^{k+1}{\left(-\right)}^{i}\\ \quad \times \frac{(D+2(N+i)+(m-k-1)-1)}{(D+2(N+i)-1)}{{\rm{d}}}_{N+1+i,m-k-1}\\ \quad \times \left(\displaystyle \sum _{1;[{t}_{1},\ldots ,{t}_{i}]}^{k+1}\displaystyle \prod _{s=1}^{i}\frac{D+2(N+s-1)+(m+1-{t}_{s})-1}{D+2(N+s-1)}\right).\,\,\,\,\,\,\,\,\end{array}\end{eqnarray}$
where the condition $b_{N,0}=0$ has been used. The formula (B.14), together with (B.13), (B.18) and (B.25), gives the explicit solution for the generation function ${c}_{2\to 1;\widehat{1}}$.
Knowing ${c}_{2\to 1;\widehat{1}}$, we can use (3.5) to get the generation function ${c}_{2\to 1;\widehat{0}}$ or directly compute it using (3.30), (3.31), (3.32) and (3.33).

B.1.2. The proof of one useful relation

When using the improved PV-reduction method with the auxiliary vector R to discuss the reduction of the sunset topology, an important reduction relation between different tensor ranks was observed in [40]. Later, this relation was studied in [41, 42, 45]. For the bubble, it is given explicitly by
$\begin{eqnarray}\begin{array}{l}{I}_{2}^{(r)}=\displaystyle \frac{(D+2r-4){fp}}{(D+r-3){K}^{2}}{I}_{2}^{(r-1)}\\ \quad -\displaystyle \frac{(r-1)(4{M}_{0}^{2}{p}^{2}+({f}^{2}-4{M}_{0}^{2}{K}^{2})r)}{(D+r-3){K}^{2}}{I}_{2}^{(r-2)}\\ \quad +\displaystyle \frac{p}{{K}^{2}}{I}_{2;\widehat{0}}^{(r-1)}+\displaystyle \frac{(r-1)(({K}^{2}-{M}_{0}^{2}+{M}_{1}^{2})r-2{p}^{2})}{(D+r-3){K}^{2}}{I}_{2;\widehat{0}}^{(r-2)}\\ \quad +\displaystyle \frac{-p}{{K}^{2}}{I}_{2;\widehat{1}}^{(r-1)}+\displaystyle \frac{(r-1){fr}}{(D+r-3){K}^{2}}{I}_{2;\widehat{1}}^{(r-2)},\,\,\,\end{array}\end{eqnarray}$
where
$\begin{eqnarray}\begin{array}{rcl}{I}_{2}^{(r)} & = & \int {\rm{d}}\ell \frac{{\left(2\ell \cdot R\right)}^{r}}{({\ell }^{2}-{M}_{0}^{2})({\left(\ell -K\right)}^{2}-{M}_{1}^{2})},\\ {I}_{2;\hat{1}}^{(r)} & = & \int {\rm{d}}\ell \frac{{\left(2\ell \cdot R\right)}^{r}}{({\ell }^{2}-{M}_{0}^{2})},\\ {I}_{2;\hat{0}}^{(r)} & = & \int {\rm{d}}\ell \frac{{\left(2\ell \cdot R\right)}^{r}}{{\left(\ell -K\right)}^{2}-{M}_{1}^{2}}.\,\,\,\end{array}\end{eqnarray}$
We can use the series form to check the relation (B.26). Let us define
$\begin{eqnarray}\begin{array}{l}F[N+2]={c}^{(N+2)}\\ \quad -\left(\displaystyle \frac{(D+2N){{fpc}}^{(N+1)}-(N+1)(4{M}_{0}^{2}{p}^{2}+({f}^{2}-4{M}_{0}^{2}{K}^{2})r){c}^{(N)}}{(D+N-1){K}^{2}}\right),\,\,\end{array}\end{eqnarray}$
where
$\begin{eqnarray}{c}^{(N)}=\displaystyle \frac{N!}{{t}^{N}}\sum _{n=0}^{[N/2]}{c}_{n,N-2n}{r}^{n}{p}^{N-2n}\,\,\,\end{eqnarray}$
and c can be ${c}_{2\to 2}$ or ${c}_{2\to 1}$. Depending on whether N is even or odd, the computational details differ somewhat; thus we consider only even N, the odd case being similar. Expanding (B.28), we have
$\begin{eqnarray}\begin{array}{l}F[2N+2]=\displaystyle \frac{(2N+2)!}{{t}^{2N+2}}{r}^{0}{p}^{2N+2}\\ \quad \times \left({c}_{\mathrm{0,2}N+2}-\left(\displaystyle \frac{(D+4N){{tfc}}_{\mathrm{0,2}N+1}}{(2N+2)(D+2N-1){K}^{2}}\right.\right.\\ \quad \left.\left.-\displaystyle \frac{4{M}_{0}^{2}{t}^{2}{c}_{\mathrm{0,2}N}}{2(N+1)(D+2N-1){K}^{2}}\right)\right)\\ \quad +\displaystyle \frac{(2N+2)!}{{t}^{2N+2}}{r}^{N+1}{p}^{0}\left({c}_{N+\mathrm{1,0}}\right.\\ \quad \left.-\left(-\displaystyle \frac{{t}^{2}({f}^{2}-4{M}_{0}^{2}{K}^{2})}{(2N+2)(D+2N-1){K}^{2}}{c}_{N,0}\right)\right)\\ \quad +\displaystyle \frac{(2N+2)!}{{t}^{2N+2}}\sum _{n=1}^{N}{r}^{n}{p}^{2N+2-2n}\left({c}_{n,2N+2-2n}\right.\\ \quad -\left(\displaystyle \frac{(D+4N){tf}}{(2N+2)(D+2N-1){K}^{2}}{c}_{n,2N+1-2n}\right.\\ \quad -\displaystyle \frac{4{M}_{0}^{2}{t}^{2}}{(2N+2)(D+2N-1){K}^{2}}{c}_{n,2N-2n}\\ \quad \left.\left.-\displaystyle \frac{{t}^{2}({f}^{2}-4{M}_{0}^{2}{K}^{2})}{(2N+2)(D+2N-1){K}^{2}}{c}_{n-\mathrm{1,2}N+2-2n}\right)\right).\end{array}\end{eqnarray}$
For the term with ${r}^{0}{p}^{2N+2}$, using the relation (3.29) with n = 0, m = 2N, we find that the part inside the bracket simplifies to
$\begin{eqnarray}\displaystyle \frac{-4{\xi }_{R}{t}^{2}{h}_{\mathrm{0,2}N}+{\xi }_{K}t(D+4N){h}_{\mathrm{0,2}N+1}}{2(N+1)(D+2N-1){K}^{2}}.\,\,\end{eqnarray}$
For the term with ${r}^{N+1}{p}^{0}$, using the relation (3.30) with n = N, we find that the part inside the bracket simplifies to
$\begin{eqnarray}-\displaystyle \frac{{t}^{2}({\xi }_{K}f-{\xi }_{R}4{K}^{2}){h}_{N,0}+{\xi }_{K}{{tK}}^{2}{h}_{N,1}}{2(N+1)(D+2N-1){K}^{2}}.\,\,\end{eqnarray}$
For the term with ${r}^{n}{p}^{2N+2-2n}$, the computation is a little more involved. First, we use (3.28) with $n\to n-1$ and $m\to 2N-2n+2,\,2N-2n+1,\,2N-2n$ to rewrite all ${c}_{i,j}$ with i = n − 1. Then we use (3.29) with $n\to n-1$ and $m\to 2N-2n+1,\,2N-2n$. After these two steps and some algebraic simplification, we get
$\begin{eqnarray}\begin{array}{l}\displaystyle \frac{{t}^{3}({\xi }_{K}{M}_{0}^{2}-{\xi }_{R}f)}{(N+1)(D+2N-1){{nK}}^{2}}{h}_{n-\mathrm{1,2}N-2n+1}\\ \quad +\displaystyle \frac{{t}^{2}(-{\xi }_{K}{nf}+{\xi }_{R}4{K}^{2}(N+1))}{2(N+1)(D+2N-1){{nK}}^{2}}{h}_{n-\mathrm{1,2}N-2n+2}\\ \quad +\displaystyle \frac{-{\xi }_{K}(2N-2n+3)t}{2n(D+2N-1)}{h}_{n-\mathrm{1,2}N-2n+3}.\,\,\end{array}\end{eqnarray}$
For different c we use the corresponding known ${h}_{T}$ as given in (3.17), (3.18) and (3.19); thus one can see that the relation (B.26) is satisfied.
References

[1] Bern Z (NLO Multileg Working Group) The NLO multileg working group: summary report arXiv:0803.0494 [hep-ph]
[2] Henn J M and Plefka J C Scattering Amplitudes in Gauge Theories (Lecture Notes in Physics vol 883) (Berlin: Springer)
[3] Elvang H and Huang Y-T Scattering Amplitudes in Gauge Theory and Gravity (Cambridge: Cambridge University Press)
[4] Bern Z, Dixon L J, Dunbar D C and Kosower D A 1994 One loop n point gauge theory amplitudes, unitarity and collinear limits Nucl. Phys. B 425 217-260
[5] Bern Z, Dixon L J, Dunbar D C and Kosower D A 1995 Fusing gauge theory tree amplitudes into loop amplitudes Nucl. Phys. B 435 59-101
[6] Cachazo F, Svrcek P and Witten E 2004 MHV vertices and tree amplitudes in gauge theory J. High Energy Phys. JHEP09(2004)006
[7] Britto R, Cachazo F and Feng B 2005 Generalized unitarity and one-loop amplitudes in N = 4 super-Yang-Mills Nucl. Phys. B 725 275-305
[8] Britto R, Cachazo F and Feng B 2005 New recursion relations for tree amplitudes of gluons Nucl. Phys. B 715 499-522
[9] Britto R, Cachazo F, Feng B and Witten E 2005 Direct proof of tree-level recursion relation in Yang-Mills theory Phys. Rev. Lett. 94 181602
[10] Ellis R K, Kunszt Z, Melnikov K and Zanderighi G 2012 One-loop calculations in quantum field theory: from Feynman diagrams to unitarity cuts Phys. Rep. 518 141-250
[11] Weinzierl S Feynman Integrals arXiv:2201.03593 [hep-th]
[12] Ossola G, Papadopoulos C G and Pittau R 2007 Reducing full one-loop amplitudes to scalar integrals at the integrand level Nucl. Phys. B 763 147-169
[13] Mastrolia P and Ossola G 2011 On the integrand-reduction method for two-loop scattering amplitudes J. High Energy Phys. JHEP11(2011)014
[14] Badger S, Frellesvig H and Zhang Y 2012 Hepta-cuts of two-loop scattering amplitudes J. High Energy Phys. JHEP04(2012)055
[15] Zhang Y 2012 Integrand-level reduction of loop amplitudes by computational algebraic geometry methods J. High Energy Phys. JHEP09(2012)042
[16] Passarino G and Veltman M J G 1979 One loop corrections for e+e- annihilation into mu+mu- in the Weinberg model Nucl. Phys. B 160 151-207
[17] Chetyrkin K G and Tkachov F V 1981 Integration by parts: the algorithm to calculate beta functions in 4 loops Nucl. Phys. B 192 159-204
[18] Tkachov F V 1981 Phys. Lett. B 100 65-68
[19] Laporta S 2000 High precision calculation of multiloop Feynman integrals by difference equations Int. J. Mod. Phys. A 15 5087-5159
[20] von Manteuffel A and Studerus C Reduze 2 - distributed Feynman integral reduction arXiv:1201.4330 [hep-ph]
[21] von Manteuffel A and Schabinger R M 2015 A novel approach to integration by parts reduction Phys. Lett. B 744 101-104
[22] Maierhöfer P, Usovitsch J and Uwer P 2018 Kira - a Feynman integral reduction program Comput. Phys. Commun. 230 99-112
[23] Smirnov A V and Chuharev F S 2020 FIRE6: Feynman integral reduction with modular arithmetic Comput. Phys. Commun. 247 106877
[24] Britto R, Buchbinder E, Cachazo F and Feng B 2005 One-loop amplitudes of gluons in SQCD Phys. Rev. D 72 065012
[25] Britto R, Feng B and Mastrolia P 2006 The cut-constructible part of QCD amplitudes Phys. Rev. D 73 105004
[26] Anastasiou C, Britto R, Feng B, Kunszt Z and Mastrolia P 2007 D-dimensional unitarity cut method Phys. Lett. B 645 213-216
[27] Anastasiou C, Britto R, Feng B, Kunszt Z and Mastrolia P 2007 Unitarity cuts and reduction to master integrals in d dimensions for one-loop amplitudes J. High Energy Phys. JHEP03(2007)111
[28] Britto R and Feng B 2007 Unitarity cuts with massive propagators and algebraic expressions for coefficients Phys. Rev. D 75 105006
[29] Britto R and Feng B 2008 Integral coefficients for one-loop amplitudes J. High Energy Phys. JHEP02(2008)095
[30] Britto R and Mirabella E 2011 Single cut integration J. High Energy Phys. JHEP01(2011)135
[31] Mastrolia P and Mizera S 2019 Feynman integrals and intersection theory J. High Energy Phys. JHEP02(2019)139
[32] Frellesvig H, Gasparotto F, Mandal M K, Mastrolia P, Mattiazzi L and Mizera S 2019 Vector space of Feynman integrals and multivariate intersection numbers Phys. Rev. Lett. 123 201602
[33] Mizera S and Pokraka A 2020 From infinity to four dimensions: higher residue pairings and Feynman integrals J. High Energy Phys. JHEP02(2020)159
[34] Frellesvig H, Gasparotto F, Laporta S, Mandal M K, Mastrolia P, Mattiazzi L and Mizera S 2021 Decomposition of Feynman integrals by multivariate intersection numbers J. High Energy Phys. JHEP03(2021)027
[35] Caron-Huot S and Pokraka A 2021 J. High Energy Phys. JHEP12(2021)045
[36] Caron-Huot S and Pokraka A 2022 J. High Energy Phys. JHEP04(2022)078
[37] Feng B, Li T and Li X 2021 Analytic tadpole coefficients of one-loop integrals J. High Energy Phys. JHEP09(2021)081
[38] Hu C, Li T and Li X 2021 One-loop Feynman integral reduction by differential operators Phys. Rev. D 104 116014
[39] Feng B, Li T, Wang H and Zhang Y 2022 Reduction of general one-loop integrals using auxiliary vector J. High Energy Phys. JHEP05(2022)065
[40] Feng B and Li T 2022 PV-reduction of sunset topology with auxiliary vector Commun. Theor. Phys. 74 095201
[41] Feng B, Hu C, Li T and Song Y 2022 Reduction with degenerate Gram matrix for one-loop integrals J. High Energy Phys. JHEP08(2022)110
[42] Chen J and Feng B Module intersection and uniform formula for iterative reduction of one-loop integrals arXiv:2207.03767 [hep-ph]
[43] Chen J Iteratively reduce auxiliary scalar product in multi-loop integrals arXiv:2208.14693 [hep-ph]
[44] Baikov P A 1997 Explicit solutions of the multiloop integral recurrence relations and its application Nucl. Instrum. Meth. A 389 347-349
[45] Li T Nontrivial one-loop recursive reduction relation arXiv:2209.11428 [hep-ph]
