For loop integrals, reduction is the standard method, and an efficient way to find the reduction coefficients is an important topic in the study of scattering amplitudes. In this paper, we present the generation functions of reduction coefficients for general one-loop integrals with an arbitrary tensor rank in the numerator.
Bo Feng. Generation function for one-loop tensor reduction[J]. Communications in Theoretical Physics, 2023, 75(2): 025203. DOI: 10.1088/1572-9494/aca253
1. Motivation
As the bridge connecting the theoretical framework and the experimental data, the scattering amplitude has always been one of the central concepts in quantum field theory. Its efficient computation (including higher-order contributions in perturbation theory) has become necessary, especially with the advent of the LHC experiment [1]. The on-shell program1(1 There are many works on this topic. For an introduction, please see the two books [2, 3]. Some early works with the on-shell concept are [4–9].) in scattering amplitudes, as the outcome of this challenge, has made the calculation of one-loop amplitudes straightforward [10].
In general, a loop computation can be divided into two parts: the construction of the integrand and then the integration itself. Although the construction of integrands using Feynman diagrams is well established, it is sometimes not the most economical way to proceed. Looking for better ways to construct integrands is one of the current research directions2(2For example, the unitarity cut method proposed in [4, 5, 7] uses on-shell tree-level amplitudes as input.). For the second part, i.e. doing the integration, a very useful method is reduction. It has been shown that any integral can be written as a linear combination of some basis integrals (called the master integrals) with coefficients that are rational functions of the external momenta, polarization vectors, masses and spacetime dimension. Using the idea of reduction, loop integration can be separated into two parallel tasks: the computation of the master integrals and an algorithm to efficiently extract the reduction coefficients. Progress in either task enables us to perform more and more complicated integrations (for a nice introduction to recent developments, see [11]).
Reduction can be classified into two categories: reduction at the integrand level and reduction at the integral level. Reduction at the integrand level can be systematically solved using computational algebraic geometry [12–15]. For reduction at the integral level, the first proposal was the famous Passarino–Veltman reduction (PV-reduction) method [16]. There are other proposals, such as the integration-by-parts (IBP) method [17–23], the unitarity cut method [4, 5, 7, 24–30], and the intersection-number method [31–36]. Although there have been many developments in reduction at the integral level, the complexity of current computations makes further improvements desirable.
In recent papers [37–41] we introduced the auxiliary vector R to improve the traditional PV-reduction method. Using R we can construct differential operators and then establish algebraic recurrence relations that determine the reduction coefficients analytically. This method has also been generalized to the two-loop sunset diagram (see [40]), where the original PV-reduction method is hard to apply. When the auxiliary vector R is used in the IBP method, the efficiency of reduction is also improved, as shown in [42, 43].
Although the advantage of using the auxiliary vector R has been demonstrated from various angles, the algebraic recursive structure still makes it hard to gain a general understanding of the reduction coefficients for higher and higher tensor ranks in the numerators of integrands. Could we learn more about the analytical structure of the reduction coefficients with this method? As we will show in this paper, we can indeed, if we probe the reduction problem from a new angle. The key idea is the concept of the generation function. In fact, generation functions are well known in physics and mathematics. Sometimes the coefficients of a series are hard to guess, but the series itself is easy to write down. For example, the Hermite polynomials Hn(x) can be read off from the generation function
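The Hermite example can be made concrete: the classic generating relation $e^{2xt-t^2}=\sum_{n=0}^{\infty}H_n(x)\,t^n/n!$ can be verified term by term. A minimal sympy check (the symbol names are ours):

```python
import sympy as sp

x, t = sp.symbols('x t')

# Generating function of the (physicists') Hermite polynomials:
#   exp(2*x*t - t**2) = sum_n H_n(x) t**n / n!
gen = sp.series(sp.exp(2*x*t - t**2), t, 0, 6).removeO()

for n in range(6):
    # The n!-rescaled Taylor coefficient reproduces H_n(x)
    coeff = sp.expand(gen.coeff(t, n) * sp.factorial(n))
    assert coeff == sp.expand(sp.hermite(n, x))
```

Each coefficient is easy to extract from the exponential, even though the explicit polynomials $H_n(x)$ grow complicated with $n$ — exactly the situation for the reduction coefficients below.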
Thus we can ask: if we sum the reduction coefficients of different tensor ranks together, could we get a simpler answer? For the reduction problem, the numerator of tensor rank k is given by ${\left(2{\ell }\cdot R\right)}^{k}$ in our method, and we need to see how to combine these. There are many ways to do this. Two typical ones are
In this paper, we will focus on the generation function of the type ψ2(t) because it is invariant under the differential action, i.e. $\frac{{\rm{d}}{{\rm{e}}}^{x}}{{\rm{d}}x}={{\rm{e}}}^{x}$. We will see that the generation functions satisfy simple differential equations, which can be solved analytically.
The plan of the paper is as follows. In section two, we present the generation function of the reduction coefficients of the tadpole integral. The tadpole example is a little trivial, so in section three we discuss carefully how to find the generation functions for the bubble integral, which is the simplest nontrivial example. With the experience obtained for the bubble, we present the solution for general one-loop integrals in section four. To demonstrate the framework established in section four, we briefly discuss the triangle example in section five. Finally, a brief summary and discussion are given in section six. Some technical details are collected in the appendices: in appendix A, the solution of two typical differential equations is presented, while the solution of the recursion relations for the bubble is explained in appendix B.
2. Tadpole
With the above brief discussion, we start with the simplest case, i.e. the tadpole topology, to discuss the generation function. Summing all tensor ranks properly we have3(3The mass dimension of the parameter t is −2.)
where c1→1(t, R, M) is the generation function of the reduction coefficients and, for simplicity, we have defined $\int {\rm{d}}{\ell }_{i}(\bullet )\equiv \int \frac{{{\rm{d}}}^{D}{\ell }_{i}}{{\rm{i}}{\pi }^{D/2}}(\bullet )$. To find a closed analytic expression for c1→1(t, R, M) we establish the corresponding differential equation. Acting with ∂R · ∂R we have
on one side, and $\left(\displaystyle \frac{\partial }{\partial R}\cdot \displaystyle \frac{\partial }{\partial R}{c}_{1\to 1}(t,R,M)\right)\int {\rm{d}}{\ell }\displaystyle \frac{1}{{{\ell }}^{2}-{M}^{2}}$ on the other side, thus we get
By Lorentz invariance, c1→1(t, R, M) is a function of r = R · R only, i.e. c1→1(t, R, M) = f(r). It is easy to see that the differential equation (2.3) becomes
which is of the form (A.14) studied in appendix A. This second-order differential equation has two singular points, r = 0 and r = ∞, where the singular point r = 0 is canonical. The solution has been given in (A.33). Putting $A=4$, $B=2D$, $C=-4{t}^{2}{M}^{2}$ in (A.29) and using the boundary condition c0 = 1, we get immediately
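As a cross-check of this solution, one can compare the series coefficients of $f(r)$ with a direct PV-style reduction of the tadpole tensor integrals. The sketch below assumes, based on the constants quoted above, that the equation for $f(r)$ reads $4r f'' + 2D f' - 4t^2M^2 f = 0$; the standard angular average $\ell^{\mu_1}\cdots\ell^{\mu_{2n}} \to (2n-1)!!\,(\ell^2)^n/\big(D(D+2)\cdots(D+2n-2)\big)$ times symmetrized metrics, together with $(\ell^2)^n \to M^{2n}$ on the tadpole, reproduces the same coefficients:

```python
import sympy as sp

t, M, D = sp.symbols('t M D', positive=True)

# Series solution f = sum_n f_n r^n of the assumed ODE
#   4 r f'' + 2 D f' - 4 t^2 M^2 f = 0,  f_0 = 1,
# whose r^{n-1} coefficient gives f_n = 2 t^2 M^2 f_{n-1} / (n (D + 2n - 2)).
f = [sp.Integer(1)]
for n in range(1, 6):
    f.append(sp.simplify(2*t**2*M**2*f[-1] / (n*(D + 2*n - 2))))

# Direct reduction of the rank-2n numerator (2 l.R)^{2n} on the tadpole:
# the t^{2n}/(2n)! term of the exponential times the angular average.
for n in range(6):
    direct = (t**(2*n) / sp.factorial(2*n)) * 4**n * M**(2*n) \
             * sp.factorial2(2*n - 1) / sp.Mul(*[D + 2*j for j in range(n)])
    assert sp.simplify(f[n] - direct) == 0
```

Odd tensor ranks integrate to zero, so only even powers of $t$ appear and the coefficient of $r^n$ carries the factor $t^{2n}/(2n)!$ from the exponential.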
Before ending this section, let us mention that when we perform the reduction for other topologies, we will meet the reduction of $\int {\rm{d}}\ell \frac{{{\rm{e}}}^{t(2\ell \cdot R)}}{{\left(\ell -K\right)}^{2}-{M}^{2}}$. Using a momentum shift, it is easy to see that
This result shows the advantage of using a generation function of exponential form.
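The shift argument is a one-liner: under $\ell \to \ell + K$ the exponential numerator factorizes, $\mathrm{e}^{t\,2(\ell+K)\cdot R}=\mathrm{e}^{2tK\cdot R}\,\mathrm{e}^{t(2\ell\cdot R)}$, so

```latex
\int \mathrm{d}\ell\,
  \frac{\mathrm{e}^{t(2\ell\cdot R)}}{(\ell-K)^{2}-M^{2}}
\;\xrightarrow{\;\ell\to\ell+K\;}\;
\mathrm{e}^{t(2K\cdot R)}
\int \mathrm{d}\ell\,
  \frac{\mathrm{e}^{t(2\ell\cdot R)}}{\ell^{2}-M^{2}},
```

i.e. the shifted tadpole generation function is the unshifted one times the simple prefactor $\mathrm{e}^{2tK\cdot R}$. A polynomial numerator $(2\ell\cdot R)^{k}$ would instead produce a binomial sum over lower ranks, which is exactly the bookkeeping the exponential avoids.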
3. Bubble
Having found the generation function of tadpole reduction, we move to the first nontrivial example, i.e. the generation function of bubble reduction, which is defined through
For simplicity, we have not written the variables of the reduction coefficients explicitly. Written explicitly, they would be c(t, R, K; M0, M1), or $c(t,{R}^{2},K\cdot R,{K}^{2};{M}_{0},{M}_{1})$ when using the Lorentz contraction form.
The generation form (3.1) easily produces some nontrivial relations among these generation functions of reduction coefficients. Noticing that4(4Compared to the shifting symmetry discussed in (3.2), one can also consider the symmetry ℓ → −ℓ. For this one, we have R → −R and K → −K. When using the variables $R^2$, K · R, it is invariant. In other words, the symmetry ℓ → −ℓ is trivial.)
by comparing (3.2) with (3.1). The first relation (3.3) provides a consistency check for c2→2, while the second relation (3.4) and the third relation (3.5) tell us that we need to compute only one of the ${c}_{2\to 1;\widehat{i}}$ functions. Another useful check is the mass dimension. Since the mass dimension of t is (−2), we have
Equations (3.22) and (3.23) are the differential equations we need to solve. We will present two ways to solve them. One is by series expansion in the naive variables r, p. This is the method used in [37, 38]. However, as we will show, with the idea of the generation function the powers of r, p are independent of each other, so the recursion relations become simpler and can be solved explicitly. The other method is to solve the differential equations directly and obtain a more compact analytical expression. An important lesson from the second method is that the right variables for the series expansion are not r, p but a proper combination of them.
3.2. The series expansion
In this subsection, we present the solution in the form of a series expansion in r, p. Writing5(5Since we consider the generation functions, the indices n, m are free, while in previous works [37, 38] with fixed tensor rank K, the n, m are constrained by 2n + m = K. One can see that many manipulations are simplified by the idea of generation functions.)
Using equation (3.30) we can recursively solve for all cn,0, starting from the boundary condition c0,0 = 1 for c2→2 or c0,0 = 0 for c2→1. Knowing all cn,0, we can use (3.31) to get all cn,1. To solve for all cn,m, we use (3.25) and (3.26) again, but now solve
Both equations can be used recursively to solve for cn,m. After using one of them to get all cn,m, the other becomes a nontrivial consistency check. Of the two, (3.32) is better, since it determines level (m + 1) from level m.
Using the above algorithm, we present the first few terms of the generation functions for comparison. For c2→2 we have
where we have defined $\widetilde{f}={K}^{2}-{M}_{0}^{2}+{M}_{1}^{2}=-f+2{K}^{2}$.
Here we have presented the general recursive algorithm. In appendix B, we show that these recursion relations can be solved explicitly, i.e. we find explicit expressions for all coefficients cn,m.
3.3. The analytic solution
In the previous subsection, we presented the solution as a series expansion. In this subsection, we solve the two differential equations (3.22) and (3.23) directly.
Let us start with (3.23). To solve it, we define the following new variables
Equation (3.43) is of the form (A.14), which has been discussed in the appendix. One interesting point is that since the left-hand side is independent of y, the right-hand side should vanish under the action of ∂y. One can check that this is indeed the case.
Having laid out the framework, we can use it to solve the various generation functions.
3.3.1. The generation function c2→2
For this case, we have hT = 0, thus using the result (A.29) we can immediately write down
One can check it against the series expansion (B.3) given in appendix B. Compared to it, the result (3.44) is very simple and compact. This shows the power of the generation function. Moreover, the differential equations (3.22) and (3.23) tell us that the right variables for the series expansion are x, y instead of the naive variables r, p.
3.3.2. The generation function ${c}_{2 \rightarrow 1;\hat{1}}$
For this case, we have ξR = 0, ξK = −1 and
The first important check is that the right-hand side of (3.46) is y-independent. Acting with $\displaystyle \frac{\partial }{\partial y}$ on the right-hand side, we get
where we have used ${\partial }_{y}{h}_{T}(x,y=0)=\displaystyle \frac{2y}{{K}^{2}}{\partial }_{r}{h}_{T}=0$. The differential equation (3.49) is of the form (A.14) and we get the solution
Although we have a very compact expression (3.52) for the generation function, in practice it is more desirable to have the series expansion form. In appendix A, we have introduced three ways to obtain it. Here we work out the expansion by direct integration. Using (3.45) we have
This result can be checked against the one given in (B.14). One can see that the formula (3.59) is much more compact and makes the various analytic structures manifest.
4. The general framework
Having done the detailed computations for the bubble, in this section we set up the general framework for finding the generation functions of general one-loop integrals with (n + 1) propagators. The system has n external momenta Ki, i = 1,…,n and (n + 1) masses ${M}_{j}^{2},j=0,1,\ldots ,n$. Using the auxiliary vector R we have (n + 1) auxiliary scalar products (ASPs): r = R · R and pi = Ki · R, i = 1,…,n. From the experience with the bubble, we know that these ASPs are not good variables for solving the differential equations produced by ∂R · ∂R and Ki · ∂R, i = 1,…,n. Thus we discuss how to find good variables in the first subsection. Then we discuss the differential equations in these new variables in the second subsection, and finally their solutions in the third subsection.
4.1. Finding good variables
We will denote the good variables by x and yi, i = 1,…,n. To simplify the differential equations, we need to impose the following conditions
where ${P}_{R\to {K}_{i}}$ means replacing the vector R by the vector Ki. Collecting all i together, the right-hand side of (4.6) is just (2∣G∣I + 2GA)P, thus we have the solution
Having found the good variables, we express differential operators ∂R · ∂R and Ki · ∂R, i = 1,…,n using them. The first step is to use (4.8) to write
where we have defined ${{ \mathcal K }}^{T}=({K}_{1},\ldots ,{K}_{n})$ and ${\partial }_{Y}^{T}\,=({\partial }_{{y}_{1}},\ldots ,{\partial }_{{y}_{n}})$. Thus we find
where αR, αi are constants (independent of T) and HT;R, HT;i are known functions coming from lower topologies. Using the results (4.12) and (4.13), we find
$\begin{eqnarray}\begin{array}{l}| G{| }^{2}{\partial }_{Y}^{T}{G}^{-1}{\partial }_{Y}{c}_{T}={\alpha }_{{ \mathcal K }}^{T}{G}^{-1}{\alpha }_{{ \mathcal K }}{c}_{T}\\ \quad +{H}_{T;{ \mathcal K }}^{T}{G}^{-1}{\alpha }_{{ \mathcal K }}+| G| {\partial }_{Y}^{T}{G}^{-1}{H}_{T;{ \mathcal K }},\,\,\end{array}\end{eqnarray}$
where ${\alpha }_{{ \mathcal K }}^{T}=({\alpha }_{1},\ldots ,{\alpha }_{n})$ and ${H}_{T;{ \mathcal K }}^{T}=({H}_{T;1},\ldots ,{H}_{T;n})$. Thus the differential equations (4.13) can be written as
$\begin{eqnarray}\begin{array}{rcl}{\widetilde{\alpha }}_{R} & = & {\alpha }_{{ \mathcal K }}^{T}{G}^{-1}{\alpha }_{{ \mathcal K }}-{\alpha }_{R},\\ {{ \mathcal H }}_{T;R} & = & -{H}_{T;{ \mathcal K }}^{T}{G}^{-1}{\alpha }_{{ \mathcal K }}-| G| {\partial }_{Y}^{T}{G}^{-1}{H}_{T;{ \mathcal K }}+{H}_{T;R}.\,\,\end{array}\end{eqnarray}$
Having given the differential equations (4.16) and (4.17), there is an important point to mention. For (4.16) and (4.17) to have a solution, the functions H are not arbitrary but must satisfy the integrability conditions, which are
Differential equations (4.16) and (4.17) are of the types (A.1) and (A.14), respectively, in appendix A, for which the solutions have been presented. In the next subsection, we solve them analytically.
4.3. Analytic solution
In this part, we present the necessary steps for solving the above differential equations (4.16) and (4.17), one by one. For the differential equation (4.16) with i = 1, using the result (A.5) in appendix A, we have
where FT(x, y2,…,yn) does not depend on the variable y1. Now we act with $\left({\partial }_{{y}_{2}}-\displaystyle \frac{{\alpha }_{2}}{| G| }\right)$ on both sides of (4.21) to get the differential equation
Repeating the above procedure with the action $\left({\partial }_{{y}_{3}}-\displaystyle \frac{{\alpha }_{3}}{| G| }\right)$, and so on, we can solve for FT(x, y3,…,yn) and then find
where for simplicity we have defined ${\widetilde{\sum \ }}_{i=a}^{b}$ to mean the sum over (a, a − 1, a − 2,…,b) with a ≥ b.
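For orientation, the single-variable step being iterated here is the usual integrating-factor formula. Assuming an equation of the schematic form $\big(\partial_{y_1}-\tfrac{\alpha_1}{|G|}\big)c_T=\tfrac{1}{|G|}H_{T;1}$ (the sign conventions for the $\alpha_i$ here are illustrative), its general solution is

```latex
c_T = \mathrm{e}^{\frac{\alpha_1 y_1}{|G|}}
  \left( F_T(x, y_2, \ldots, y_n)
  + \frac{1}{|G|} \int_0^{y_1} \mathrm{d}w\,
    \mathrm{e}^{-\frac{\alpha_1 w}{|G|}}\,
    H_{T;1}(x, w, y_2, \ldots, y_n) \right),
```

with the $y_1$-independent "constant" $F_T$ fixed at the next stage; peeling off one variable per step in this way produces the nested form (4.26).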
Before solving for the only unknown function F(x), let us check that the form (4.26) does satisfy the differential equations (4.16). When acting with $\left({\partial }_{{y}_{k}}-\displaystyle \frac{{\alpha }_{k}}{| G| }\right)$ on both sides, it is easy to see that the first term on the right-hand side of (4.26) and the terms in ${{ \mathcal H }}_{T;K}$ with i < k give zero contribution, since their only yk-dependence is through the factor ${e}^{\displaystyle \frac{-{\alpha }_{k}{y}_{k}}{| G| }}$. For the term i = k in ${{ \mathcal H }}_{T;K}$, the action gives
The first term in (4.30) cancels the term in (4.28) and we are left with the second term in (4.30). Now the pattern is clear. The i = k + 2 term in ${{ \mathcal H }}_{T;K}$ produces two terms after using the integrability condition and integration by parts; the first cancels the second term in (4.30), while the second has the form
Continuing up to the term i = n in ${{ \mathcal H }}_{T;K}$ we are left with $\displaystyle \frac{1}{| G| }{H}_{T;k}(x,{y}_{1},\ldots ,{y}_{n})$, thus we have proved that (4.26) does satisfy the differential equations (4.16).
Now we consider the differential equation (4.17). Using the form (4.26), we derive
One important point about (4.32) is that the right-hand side must be yi-independent. To check this, we act with ${\partial }_{{y}_{k}}$ on the right-hand side to get
Having checked the y-independence, we can take the yi to be any values on the right-hand side of (4.32). From the expression (4.27) one can see that if we take y1 = y2 = ... = yn = 0, we have ${{ \mathcal H }}_{T;K}=0$, thus (4.32) simplifies to
When we consider the generation functions of reduction coefficients of one-loop integrals with (n + 1) propagators, there is a special case where all HT;i, HT;R are zero. For this case, we can immediately write down the generation function
5. Triangle

In this part we present another example, the triangle, to demonstrate the general framework laid down in the previous section. The seven generation functions are defined by
Using the permutation symmetry and the shifting of loop momentum we can find nontrivial relations among these seven generation functions. The first group of relations is
Using these three groups of relations, we only need to compute three generation functions, for example c3→3, ${c}_{3\to 2;\widehat{1}}$ and c3→1;0. Their mass dimensions are
Since we have given enough details in the bubble section, here we will be brief. Using the operators ∂R · ∂R, K1 · ∂R and K2 · ∂R, we find
where T is the index for different generation functions and ${f}_{1}={K}_{1}^{2}-{M}_{1}^{2}+{M}_{0}^{2}$, ${f}_{2}={K}_{2}^{2}-{M}_{2}^{2}+{M}_{0}^{2}$. The various constants $\xi ,\widetilde{\xi },\widehat{\xi }$ are given in the table
The generation function c3→2: there are three generation functions ${c}_{3\to 2;\widehat{i}}$. We want to choose the one with the simplest H1, H2, HR. Checking table (5.9), we see that if we consider ${c}_{3\to 2;\widehat{1}}$, we will have HR = 0, H2 = 0 and
Putting them back into (5.20) we can find the analytic expression. Using it we can get the explicit series expansion as discussed in appendix A.
For the generation functions c3→1;i, we can do similar calculations. The key is to find HT;i and HT;R, which are the generation functions of the one-order-lower topologies. Thus we see the recursive structure of generation functions, from lower topologies to higher topologies. The logic is clear, although working out the details takes some effort.
6. Conclusion
In this paper, we have introduced the concept of the generation function for reduction coefficients of loop integrals. For one-loop integrals, using the recent proposal of the auxiliary vector R, we can construct two types of differential operators, $\tfrac{\partial }{\partial R}\cdot \tfrac{\partial }{\partial R}$ and ${K}_{i}\cdot \tfrac{\partial }{\partial R}$. Using these operators, we can establish the corresponding differential equations for the generation functions. By a proper change of variables, these differential equations can be written in a decoupled form, so one can solve them one by one analytically. Obviously, one could try to apply the same idea to the reduction problem for two- and higher-loop integrals. But with the appearance of irreducible scalar products, the problem becomes harder. One can try to use the IBP relations in the Baikov representation [44], as shown in [42, 43].
Acknowledgments
We would like to thank Jiaqi Chen, Tingfei Li and Rui Yu for their useful discussions and early participation. This work is supported by Chinese NSF funding under Grant Nos. 11935013, 11947301, 12047502 (Peng Huanwu Center).
Appendix A. Solving differential equations
As shown in the paper, differential equations for generation functions can be reduced to two typical types. In this appendix, we present details of solving these typical differential equations.
The first-order differential equation: the first typical differential equation is the following first-order differential equation
where we have used the expansion $H(x)={\sum }_{n=0}^{\infty }{h}_{n}{x}^{n}$. The second way is to insert $F(x)={\sum }_{n=0}^{\infty }{f}_{n}{x}^{n}$ directly into (A.1) to arrive at the recursion relation
The third way is to use analytic expression (A.5). We need to compute $\frac{{{\rm{d}}}^{n}F(x)}{{\rm{d}}{x}^{n}}$ and then set x = 0. One can see that, for example,
The important point is that when setting x = 0 at the end of the differentiation, the integration ${\int }_{0}^{x}{\rm{d}}w...$ vanishes, thus we have gotten rid of the integration and all we need to do is differentiate ${{\rm{e}}}^{\frac{B}{A}x}$ and H(x).
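As a concrete illustration of the integrating-factor solution: the precise form of (A.1) is not displayed above, so we take the representative form $A\,F'(x) - B\,F(x) = H(x)$, consistent with the homogeneous solution $\mathrm{e}^{\frac{B}{A}x}$ appearing in the text; the sample $H$ is our choice. A sympy sketch:

```python
import sympy as sp

x, w, A, B, f0 = sp.symbols('x w A B f0', positive=True)

# Representative first-order equation:  A F'(x) - B F(x) = H(x),
# whose homogeneous solution is e^{(B/A)x}, as in the text.
H = sp.Integer(1)  # sample inhomogeneity, chosen for illustration

# Integrating-factor solution with boundary value F(0) = f0:
#   F(x) = e^{(B/A)x} ( f0 + (1/A) Int_0^x e^{-(B/A)w} H(w) dw )
F = sp.exp(B/A*x) * (f0 + sp.integrate(sp.exp(-B/A*w)*H, (w, 0, x)) / A)

# Verify the equation and the boundary condition
assert sp.simplify(A*sp.diff(F, x) - B*F - H) == 0
assert sp.simplify(F.subs(x, 0) - f0) == 0
```

For a polynomial or exponential $H(x)$ the integral is elementary, and the series expansion around $x=0$ then follows by repeated differentiation exactly as described above.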
The second-order differential equation: the second typical differential equation met in this paper is the following second-order differential equation
Using it, we can solve6(6An important point is that although (A.14) is a second-order differential equation, because of the factor x in front of $\displaystyle \frac{{d}^{2}}{{{dx}}^{2}}$, around x = 0 it is essentially a first-order differential equation. This explains why, using only f0 and the known H(x), we can determine F(x) using (A.18).)
The solution in (A.23) is given in the series expansion. We can also write it in the analytic expression. Writing F(x) = F0(x)F1(x) we can find the differential equation of F1(x) as
which is a first-order differential equation for $U(x)=\frac{{\rm{d}}{F}_{1}(x)}{{\rm{d}}x}$. Using a method similar to that for the differential equation (A.1), we can solve
where α1, α2 can be determined using the initial conditions F(x = 0) and $\frac{{\rm{d}}F}{{\rm{d}}x}(x=0)$. Using the expansion of F(x), we see that α2 = f0 and α1 = 0, thus we have
The analytic form (A.33) is very compact, but it is hard to carry out the integration in general. However, we can use it to obtain the series expansion just as before. One can see that
For general $\frac{{{\rm{d}}}^{n}F(x)}{{\rm{d}}{x}^{n}}$ we do the same to get the series expansion in x. At each step, we use (A.35) and there is no integration to be done at x = 0.
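The recursive determination from $f_0$ alone (footnote 6) can be made concrete. Assuming the representative form $A\,x\,F''(x) + B\,F'(x) + C\,F(x) = H(x)$ for (A.14) (consistent with the constants $A=4$, $B=2D$, $C=-4t^{2}M^{2}$ quoted for the tadpole), matching the coefficient of $x^{n-1}$ gives $n\,(A(n-1)+B)\,f_n + C f_{n-1} = h_{n-1}$, which the sketch below iterates and verifies:

```python
import sympy as sp

x, A, B, C, f0 = sp.symbols('x A B C f0', positive=True)
N = 8

# Sample inhomogeneity H(x) = sum_n h_n x^n (our illustrative choice)
h = [sp.Integer(1), sp.Integer(1)] + [sp.Integer(0)]*(N - 2)

# Representative second-order equation:  A x F'' + B F' + C F = H.
# The x^{n-1} coefficient gives  n (A(n-1) + B) f_n + C f_{n-1} = h_{n-1},
# so the whole series follows from f_0 alone (effectively first order).
f = [f0]
for n in range(1, N):
    f.append(sp.cancel((h[n-1] - C*f[-1]) / (n*(A*(n-1) + B))))

F = sum(fn*x**n for n, fn in enumerate(f))
Hx = sum(hn*x**n for n, hn in enumerate(h))
residual = sp.expand(A*x*sp.diff(F, x, 2) + B*sp.diff(F, x) + C*F - Hx)

# The equation holds order by order up to the truncation x^{N-1}
for n in range(N - 1):
    assert sp.simplify(residual.coeff(x, n)) == 0
```

Note that the $x$ in front of $F''$ shifts that term to $x^{n-1}$ as well, which is why no second initial condition is needed, in line with footnote 6.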
Appendix B. The explicit solutions of cn,m for bubble reduction
In this part, we will show how to get explicit solutions for the recursion relations (3.30), (3.31), (3.32) and (3.33).
B.1. The generation function c2→2
For this case, we have hT = 0, i.e. all hn,m = 0 in (3.30), (3.31), (3.32) and (3.33). Using (3.30) it is easy to find
where, because of the appearance of the operator $\widehat{P}$, the ordering of the multiple factors is given explicitly. With a little computation, one can see that
where the condition bN,0 = 0 has been used. The formula (B.14), together with (B.13), (B.18) and (B.25), gives the explicit solution for the generation function ${c}_{2\to 1;\widehat{1}}$.
Knowing ${c}_{2\to 1;\widehat{1}}$, we can use (3.5) to get the generation function ${c}_{2\to 1;\widehat{0}}$ or directly compute it using (3.30), (3.31), (3.32) and (3.33).
B.1.2. The proof of one useful relation
When we used the improved PV-reduction method with the auxiliary vector R to discuss the reduction of the sunset topology, an important reduction relation between different tensor ranks was observed in [40]. This relation was later studied in [41, 42, 45]. For the bubble, it is given explicitly by
and c can be c2→2 or c2→1. Depending on whether N is even or odd, the computational details differ slightly, so we consider only even N; the odd case is similar. Expanding (B.28) we have
For the term with ${r}^{n}{p}^{2N+2-2n}$, the computation is a little more complicated. First, we use (3.28) with n → n − 1 and m → 2N − 2n + 2, 2N − 2n + 1, 2N − 2n to write all ci,j with i = n − 1. Then we use (3.29) with n → n − 1 and m → 2N − 2n + 1, 2N − 2n. After these two steps and some algebraic simplification, we get
Anastasiou C, Britto R, Feng B, Kunszt Z and Mastrolia P 2007 Unitarity cuts and reduction to master integrals in d dimensions for one-loop amplitudes J. High Energy Phys. JHEP03(2007)111
Frellesvig H, Gasparotto F, Mandal M K, Mastrolia P, Mattiazzi L and Mizera S 2019 Vector space of Feynman integrals and multivariate intersection numbers Phys. Rev. Lett. 123 201602
Frellesvig H, Gasparotto F, Laporta S, Mandal M K, Mastrolia P, Mattiazzi L and Mizera S 2021 Decomposition of Feynman integrals by multivariate intersection numbers J. High Energy Phys. JHEP03(2021)027