Statistical properties of eigenvectors and eigenvalues of structured random matrices

We study the eigenvalues and the eigenvectors of $N\times N$ structured random matrices of the form $H = W\tilde{H}W+D$ with diagonal matrices $D$ and $W$ and $\tilde{H}$ from the Gaussian Unitary Ensemble. Using the supersymmetry technique we derive general asymptotic expressions for the density of states and the moments of the eigenvectors. We find that the eigenvectors remain ergodic under very general assumptions, but the degree of their ergodicity depends strongly on the particular choice of $W$ and $D$. For the special case of $D=0$ and random $W$, we show that the eigenvectors can become critical and are characterized by non-trivial fractal dimensions.


Introduction
Statistical properties of eigenvalues and eigenvectors of random matrices are the central topic of Random Matrix Theory (RMT) [1]. The key idea of RMT is that many features of complex systems are universal and can therefore be modelled by ensembles of random matrices which share the same global symmetries but do not contain any system-specific information. A prominent example of such a classical ensemble is the Gaussian Unitary Ensemble (GUE), in which the only constraint is the Hermiticity of a matrix.
Despite the great success of classical RMT during the last fifty years, there is a growing interest in new ensembles of random matrices, in which some structural information about the original system is partly present. In this paper, we study one such random matrix model, defined as
$$ H = W\tilde{H}W + D, \qquad (1) $$
where $\tilde{H}$ is an $N\times N$ matrix from the GUE and $W$, $D$ are diagonal matrices with elements $w_i$ and $d_i$, $i = 1, \dots, N$, respectively; the matrices $W$ and $D$ can be either deterministic or random. Random matrices of this form appear naturally in various applications including signal processing [2], vibration analysis [3], wireless communication [4] and neural networks [5]. The spectral properties of such random matrices have been studied recently and a number of very general results have been derived (see [5,6] and references therein); much less, however, is known about their eigenvectors [7]. In this work, we generalize our recent results, which were obtained for two particular cases: i) $D = 0$ and deterministic $W$ [8], and ii) $W = I$ and $D$ either deterministic or random [9].
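As a point of reference for what follows, the ensemble of Eq.(1) is straightforward to sample numerically. The sketch below is our own minimal illustration (not code from the paper): it draws $\tilde H$ from the GUE with off-diagonal variance $\langle|\tilde H_{ij}|^2\rangle = 1/N$ and forms $H = W\tilde H W + D$ for given diagonal entries.

```python
import numpy as np

def sample_structured(w, d, rng):
    """Draw H = W @ Htilde @ W + D, with Htilde from the GUE
    (Hermitian, off-diagonal variance <|Htilde_ij|^2> = 1/N)."""
    N = len(w)
    A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    Htilde = (A + A.conj().T) / (2.0 * np.sqrt(N))   # Hermitian by construction
    W = np.diag(np.asarray(w, dtype=float))
    return W @ Htilde @ W + np.diag(np.asarray(d, dtype=float))

rng = np.random.default_rng(0)
N = 200
w = np.ones(N)            # w_i = 1, d_i = 0 recovers the plain GUE
d = np.zeros(N)
H = sample_structured(w, d, rng)
evals = np.linalg.eigvalsh(H)   # for the GUE the spectrum approaches the semicircle on [-2, 2]
```

With $w_i = 1$ and $d_i = 0$ this reduces to the plain GUE; non-trivial diagonal entries reweight and shift the spectrum as described below.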
One of the main results of this paper is a general non-perturbative, asymptotic expression for the moments of the eigenvectors of $H$, which allows us to calculate the moments for any given values of $w_i$ and $d_i$. From this expression it follows, in particular, that the eigenvectors of $H$ remain qualitatively the same as the eigenvectors of $\tilde H$ for a very generic choice of the parameters $w_i$ and $d_i$. This means that the extended nature of the GUE eigenvectors is very robust under the wide class of deformations described by Eq.(1). At the same time, it also shows that on a quantitative level the eigenvectors of $H$ can be very different from their GUE counterparts: they can occupy an arbitrarily small fraction of the available space.
Another important conclusion following from the general result for the moments is that the extended nature of the eigenvectors can be altered, provided that $d_i$ and $w_i$ become $N$-dependent. One of the special cases we study in the present paper is the model with $D = 0$ and uncorrelated Gaussian distributed $w_i$ with an $N$-dependent variance. Such a model can be considered as a multiplicative counterpart of the Rosenzweig-Porter model [10], whose eigenvector statistics were calculated in [9]. We find that the eigenvectors of this model can be fractal and compute their fractal dimensions.
The paper is organized as follows. In Section 2 we derive our general results for the moments of the eigenvectors and the density of states. In Section 3 we investigate a special case of the model with D = 0 and random W . Finally, some conclusions and open problems are discussed briefly in Section 4.

Moments of the eigenvectors and the density of states
In this section we derive expressions for the moments of the eigenvectors of $H$ and the density of states. Generally, the local moments at energy $E$ are given by the definition
$$ I_q(n) = \frac{1}{N\rho(E)}\left\langle \sum_{\alpha} |\psi_\alpha(n)|^{2q}\,\delta(E - E_\alpha) \right\rangle, $$
where $\psi_\alpha$ is a normalized eigenvector corresponding to the eigenvalue $E_\alpha$ and $\rho(E)$ is the density of states
$$ \rho(E) = \frac{1}{N}\left\langle \sum_{\alpha} \delta(E - E_\alpha) \right\rangle. $$
The integer moments can be related to the diagonal matrix elements of the Green's functions, where $G^R$ denotes the retarded Green's function and $G^A$ the advanced Green's function, defined by
$$ G^{R,A}(E) = \left(E \pm i\epsilon - H\right)^{-1}, $$
where $\epsilon > 0$ provides an infinitesimal imaginary shift of $E$ into the complex plane and $\langle \dots \rangle$ denotes an average over the random matrix ensemble. For the matrix elements of the Green's functions such an average can be computed by employing the supersymmetry technique. In this approach the averaged Green's functions are represented as superintegrals over a supermatrix $Q$, which in our case is just a $4\times 4$ matrix. The first steps of the method are very generic and do not depend significantly on the structure of the matrix $H$, so we do not present them here; further details of the derivation can be found in [8]. In the superintegral representing the product of the Green's functions from Eq.(4), the explicit expression for $g_{BB}$ is given in Appendix A. We note that the standard action of the superintegral appearing in the GUE case is altered by the parameters $d_i$ and $w_i$, as expected.
In the limit $N \to \infty$, the integral is dominated by the saddle points satisfying the saddle-point equation, whose solutions can be parametrized as [11] $Q_{s.p.} = t + is\,T^{-1}\Lambda T$, where $s \geq 0$ and $t$ are two real parameters satisfying the simultaneous equations
$$ t = \frac{1}{N}\sum_{n=1}^{N} \frac{w_n^2\,(E - d_n - w_n^2 t)}{(E - d_n - w_n^2 t)^2 + w_n^4 s^2}, \qquad s = \frac{1}{N}\sum_{n=1}^{N} \frac{w_n^4\, s}{(E - d_n - w_n^2 t)^2 + w_n^4 s^2}. $$
In this way any physical quantity that can be expressed through the Green's functions can be calculated in terms of $s$ and $t$ by computing the corresponding superintegral over $Q$. For any given set of parameters $\{d_i\}$ and $\{w_i\}$, the above system of equations can be solved numerically, yielding an explicit result for any quantity of interest. In particular, the density of states takes the form
$$ \rho(E) = \frac{1}{\pi N}\sum_{n=1}^{N} \frac{s\,w_n^2}{(E - d_n - w_n^2 t)^2 + w_n^4 s^2}, $$
and in a similar way we find the expression for the local moments
$$ I_q(n) = \frac{\Gamma(q+1)}{N^{q}}\left[\frac{1}{\pi\rho(E)}\,\frac{s\,w_n^2}{(E - d_n - w_n^2 t)^2 + w_n^4 s^2}\right]^{q-1}, $$
where $\Gamma(z)$ is the gamma function and $q$ is a positive integer. These two general results allow us to calculate the density of states and the statistics of the eigenvectors for any particular choice of the matrices $W$ and $D$ in Eq.(1). Setting $d_i = 0$ and $w_i = 1$, it is a simple exercise to verify that we recover the well-known GUE results
$$ \rho(E) = \frac{1}{2\pi}\sqrt{4 - E^2}, \qquad I_q(n) = \frac{\Gamma(q+1)}{N^{q}}. $$
Setting only $w_i = 1$, we reproduce our previous result derived in [9]. It follows from the expression for the local moments that the scaling of $I_q(n)$ with $N$ remains the same as in the GUE case, provided that $w_i$, $d_i$, $s$ and $t$ are $N$-independent. This implies that the eigenvectors of all such models are extended. Nevertheless, their quantitative characteristics, which depend strongly on the ratio $\frac{s w_n^2}{(E - d_n - w_n^2 t)^2 + w_n^4 s^2}$, can change significantly compared to the GUE case. In particular, such eigenvectors can be concentrated on an arbitrarily small fraction of the available space, being less ergodic than their GUE counterparts.
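In practice, the two saddle-point conditions can be solved by iterating a single complex-valued self-consistency relation for $g = t - is$, from which the Lorentzian form of the density of states follows. The sketch below is our own solver under this assumption (the damping, tolerance and small regularizer eta are ad hoc choices), checked against the GUE limit.

```python
import numpy as np

def solve_saddle(E, w, d, eta=1e-9, damping=0.5, tol=1e-12, max_iter=200000):
    """Damped fixed-point iteration for g = (1/N) sum_n w_n^2 / (E + i*eta - d_n - w_n^2 g).
    Returns (t, s, rho) with g = t - i*s and rho(E) the density of states."""
    w2 = np.asarray(w, dtype=float) ** 2
    d = np.asarray(d, dtype=float)
    g = -1j
    for _ in range(max_iter):
        g_new = np.mean(w2 / (E + 1j * eta - d - w2 * g))
        g_next = (1 - damping) * g + damping * g_new
        if abs(g_next - g) < tol:
            g = g_next
            break
        g = g_next
    t, s = g.real, -g.imag
    rho = np.mean(w2 * s / ((E - d - w2 * t) ** 2 + w2 ** 2 * s ** 2)) / np.pi
    return t, s, rho

# GUE check (w_i = 1, d_i = 0): the semicircle law gives rho(E) = sqrt(4 - E^2)/(2*pi)
t, s, rho = solve_saddle(0.0, np.ones(500), np.zeros(500))
```

For $w_i = 1$, $d_i = 0$ at $E = 0$ the solver returns $t \approx 0$, $s \approx 1$ and $\rho(0) \approx 1/\pi$, in agreement with the semicircle law.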
The fact that the local moments $I_q(n)$ depend explicitly only on the corresponding matrix elements $d_n$ and $w_n$, and not on $d_k$ and $w_k$ with $k \neq n$, might be useful for applications in which one can control the matrices $D$ and $W$. Indeed, by changing the values of $d_n$ and $w_n$ relative to the other matrix elements, one can enhance or suppress the corresponding component of the eigenvector in a desirable fashion.
We test our general result by numerical simulations, considering a specific model in which $w_i = d_i = N/i$. Numerical results for the density of states and the moments of the eigenvectors were produced by direct matrix diagonalization, and they match our analytical expressions with high accuracy. Fig. 1 shows the results of numerical simulations for $I_2 = \sum_n I_2(n)$ with $N$ ranging from 500 to 3000, averaged over 1000 realizations.
The eigenvectors that were used in the calculation correspond to the eigenvalues in the vicinity of E = 0.
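The numerical procedure behind such a test can be sketched as follows: diagonalize many realizations, keep the eigenvectors whose eigenvalues fall into a small window around $E$, and average $|\psi_\alpha(n)|^{2q}$. The snippet below is our own illustration (the window, sizes and realization count are arbitrary choices); it runs the estimator on the plain GUE, where the answer $\sum_n I_2(n) \approx 2/N$ is known.

```python
import numpy as np

def local_moments(H_samples, E, q, window=0.2):
    """Estimate I_q(n) = <|psi_alpha(n)|^{2q}> over eigenstates with |E_alpha - E| < window."""
    acc, count = None, 0
    for H in H_samples:
        evals, evecs = np.linalg.eigh(H)
        sel = np.abs(evals - E) < window
        if not sel.any():
            continue
        m = (np.abs(evecs[:, sel]) ** (2 * q)).sum(axis=1)
        acc = m if acc is None else acc + m
        count += int(sel.sum())
    return acc / count   # array of local moments I_q(n), one entry per site n

# GUE sanity check: I_2(n) ~ Gamma(3)/N^2 = 2/N^2, hence sum_n I_2(n) ~ 2/N
rng = np.random.default_rng(1)
N = 300
samples = []
for _ in range(10):
    A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    samples.append((A + A.conj().T) / (2 * np.sqrt(N)))
I2 = local_moments(samples, E=0.0, q=2)
```

The same estimator, applied to samples of $H = W\tilde H W + D$, produces the data compared against the analytical curves in Fig. 1.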
Model with random W and D = 0
A particular case of the general model, in which $D = 0$ and $W$ is a deterministic matrix, was investigated in Ref. [8]. In this section we study how the results of that work can be generalized to the case of random $W$. Specifically, we focus on the model in which the $w_i$ are independent Gaussian distributed variables with $\langle w_i \rangle = 0$ and $\langle w_i^2 \rangle = \sigma^2$. The system of the simultaneous equations for $s$ and $t$ at $d_i = 0$ is valid for any particular realization of the random variables $w_i$. Therefore $s$ and $t$ also become random variables, whose distribution functions can be found by solving the equations for each realization of $\{w_i\}$. As $s$ and $t$ are determined by a large number of independent random variables, they must satisfy some generalization of the law of large numbers; indeed, by numerical simulations we infer that the deviations of $s$ and $t$ from their mean values become smaller and smaller as $N \to \infty$. This means that $s$ and $t$ are self-averaging quantities, implying that they can be replaced by their mean values $\langle s \rangle$ and $\langle t \rangle$. Taking this fact into account and averaging the equations over the $w_i$, we note that, as the $w_i$ are identically distributed, we can simply replace $w_i$ with a single Gaussian variable $x$ with $\langle x \rangle = 0$ and $\langle x^2 \rangle = \sigma^2$, which simplifies the system to
$$ t = \left\langle \frac{x^2\,(E - x^2 t)}{(E - x^2 t)^2 + x^4 s^2} \right\rangle_x, \qquad 1 = \left\langle \frac{x^4}{(E - x^2 t)^2 + x^4 s^2} \right\rangle_x, $$
where $\langle \dots \rangle_x$ denotes the average over $x$ and, for brevity, $s$ and $t$ stand for $\langle s \rangle$ and $\langle t \rangle$. To compute the average in the second equation, we first rearrange its right-hand side and then evaluate the average over $x$ using the Fourier transform of the Gaussian distribution. Once the integration is completed (see Appendix B for details), we obtain the averaged equation in terms of functions involving $\mathrm{erfi}(z)$, the imaginary error function. A similar approach is used to average the first of the simultaneous equations. By solving the resulting system of averaged equations (20) and (22) numerically, we can find $\langle s \rangle$ and $\langle t \rangle$ and hence the density of states $\bar\rho(E)$. In Fig. 2 we present the results of numerical simulations testing the validity of this expression. One can show that $\langle t \rangle \propto \sqrt{E}$ and $\langle s \rangle = O(1)$ as $E \to 0$. Therefore the density of states, $\bar\rho(E) \propto 1/\sqrt{E}$, is singular at $E = 0$. The origin of this singularity can be understood from the general expression for the density of states, which is given by a sum of Lorentzians. At $d_i = 0$ and $E = 0$ the contribution of each Lorentzian to $\rho(0)$ has a maximum value proportional to $w_i^{-2}$. Since negative moments of the Gaussian distribution diverge, the density of states tends to infinity when the $w_i$ are Gaussian random variables.
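The self-averaging of $s$ and $t$ can be probed directly: solve the saddle-point condition for many independent draws of the Gaussian $w_i$ and observe the sample-to-sample spread shrink as $N$ grows. The sketch below is our own device (a single complex fixed-point iteration for $g = t - is$; sizes, counts and tolerances are arbitrary choices).

```python
import numpy as np

def solve_s(E, w, eta=1e-6, damping=0.5, tol=1e-10, max_iter=50000):
    """Damped fixed-point iteration for g = (1/N) sum_n w_n^2 / (E + i*eta - w_n^2 g)
    with D = 0; returns s = -Im g for one realization of the w_i."""
    w2 = np.asarray(w, dtype=float) ** 2
    g = -1j
    for _ in range(max_iter):
        g_new = (1 - damping) * g + damping * np.mean(w2 / (E + 1j * eta - w2 * g))
        if abs(g_new - g) < tol:
            return -g_new.imag
        g = g_new
    return -g.imag

rng = np.random.default_rng(2)
spread = {}
for N in (100, 1600):
    s_vals = [solve_s(1.0, rng.standard_normal(N)) for _ in range(40)]
    spread[N] = float(np.std(s_vals))
# the realization-to-realization fluctuations of s shrink with N, roughly like 1/sqrt(N)
```

The shrinking spread is what justifies replacing $s$ and $t$ by their mean values in the averaged equations.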
Employing the same method, one can average the expression for the moments of the eigenvectors. The average over the $w_i$ can be simplified by noticing that the moments depend on $w_n$ only through the same Lorentzian combination that enters the density of states; therefore the averaged moments of the eigenvectors, $\hat I_q$, reduce to an average over the single Gaussian variable $x$, which can be evaluated in exactly the same way as the averages above. Once the averaging is completed, we arrive at the final result for the moments $\hat I_q$. The derivatives appearing in this result can be calculated explicitly for any integer $q$; since the final expressions for $\hat I_q$ become quite lengthy for higher values of $q$, we present only the explicit formula for $q = 2$. In order to corroborate the validity of this expression, we ran numerical simulations for $\sigma = 10$. The numerical results, presented in Fig. 3 along with the analytical solution, fully confirm its validity. The moment with $q = 2$ was calculated for the eigenvectors corresponding to the eigenvalues in the vicinity of $E = 1$. The resulting scaling $\hat I_q \propto N^{1-q}$ is exactly the same as in the GUE, indicating that the eigenvectors of this model are qualitatively similar to the GUE eigenvectors. However, if $\sigma$ acquires an $N$-dependence, this conclusion can no longer be drawn. To explore such a possibility, we study the model with $\sigma = N^\gamma$, $\gamma > 0$.
Since $\sigma \to \infty$ as $N \to \infty$, we can analyse the asymptotic behaviour of the simultaneous equations in the limit $\sigma \to \infty$, assuming that $E = O(1)$; we set $E = 1$ for simplicity. One can show that in this limit $\langle s \rangle \gg \langle t \rangle$, so we can expand all the expressions in $\langle t \rangle / \langle s \rangle$ and keep only the leading-order terms, which yields the asymptotic solution of the simultaneous equations. Substituting this result into the formula for $\hat I_q$, we find an asymptotic expression for the averaged moments, which holds for any $\sigma \gg 1$. In particular, for $\sigma = N^\gamma$ we have $\hat I_q \propto N^{(\gamma-1)(q-1)}$. The scaling of the moments with a non-trivial power of $N$ implies that the eigenvectors become fractal in this case, with the fractal dimension $D_q = 1 - \gamma$. There is a clear similarity between this finding and recent results [12,9] for non-ergodic states in the Rosenzweig-Porter model [10]. Thus the model discussed here can be considered as a multiplicative analogue of the Rosenzweig-Porter model. As the exponent $(\gamma - 1)(q - 1)$ of the scaling law must be negative, we conclude that our result breaks down for $\gamma > 1$. We computed $\hat I_2$ numerically for $\sigma = N^{1/2}$ for the eigenvectors whose eigenvalues are sufficiently close to $E = 1$, and found that the numerical results are in agreement with our prediction. The corresponding results are given in Fig. 4.
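The predicted loss of ergodicity is visible already at modest matrix sizes: for $\gamma = 1/2$ the averaged inverse participation ratio $\hat I_2$ of states near $E = 1$ should decay like $N^{-1/2}$, much more slowly than the ergodic GUE value $2/N$. The sketch below is our own illustration (sizes, realization counts and the choice of the 16 states closest to $E = 1$ are arbitrary); it compares the two ensembles at a single $N$.

```python
import numpy as np

def mean_ipr_near(H, E, k=16):
    """Average IPR sum_n |psi(n)|^4 over the k eigenstates closest in energy to E."""
    evals, evecs = np.linalg.eigh(H)
    idx = np.argsort(np.abs(evals - E))[:k]
    return float(np.mean((np.abs(evecs[:, idx]) ** 4).sum(axis=0)))

def gue(N, rng):
    A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    return (A + A.conj().T) / (2 * np.sqrt(N))

rng = np.random.default_rng(3)
N, gamma = 400, 0.5
ipr_model, ipr_gue = [], []
for _ in range(5):
    w = N ** gamma * rng.standard_normal(N)   # w_i Gaussian with sigma = N^gamma
    W = np.diag(w)
    ipr_model.append(mean_ipr_near(W @ gue(N, rng) @ W, E=1.0))
    ipr_gue.append(mean_ipr_near(gue(N, rng), E=1.0))
# the structured ensemble should give a markedly larger IPR (less ergodic states) than the GUE
```

Repeating this over several values of $N$ and fitting the slope of $\log \hat I_2$ versus $\log N$ recovers the exponent $(\gamma - 1)(q - 1)$ tested in Fig. 4.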

Conclusions
We studied a general class of structured random matrices given by Eq.(1). Our main focus was on the statistical properties of the eigenvectors of such random matrices. Using the supersymmetry technique we derived a very general expression for the local moments of the eigenvectors. This result allowed us not only to make predictions about the qualitative nature of the eigenvectors, such as the degree of their ergodicity, but also to understand how particular components of the eigenvectors are affected by the corresponding matrix elements of W and D.