Reply to Comments Made by R. Grave De Peralta Menendez and
S.L. Gonzalez Andino
Appendix II
Introduction
This Appendix extends the results presented in [1]. Here, regularized instantaneous, 3D,
discrete, linear solutions for the EEG inverse problem are considered.
Here I make two propositions. First, ordinary cross-validation is the method of choice
for selecting the regularization parameter. Second, in a comparative study of inverse solutions,
the method of choice is the inverse solution with minimum cross-validation error. All this work
is based on the contributions of Stone [12].
There are two methodological principles involved in judging the performance of an inverse solution:
1. An inverse solution is "not good" if it has high cross-validation error.
2. The converse is not true.
Very informally, the first principle states that the selected model must produce the best prediction
(in the true sense of objective prediction, as in, e.g., the leave-one-out procedure). It is
important to emphasize that the "prediction error" defined in the work of Stone [12]
is a concept very different from the classical one used in "goodness of fit"
and in "least squares".
1. Methods
The reader should refer to [1] for background and notation.
The regularized version of the generalized minimum norm inverse problem considered is:

min_{J} { (Φ − K J)^{T} (Φ − K J) + α J^{T} W J }     (20)

for any given positive definite matrix W of dimension (3M) × (3M),
and for any given α > 0.
This problem and its solution can be found
in [6], equations (9), (9'), (10), and (10') therein.
Note that K and Φ lie in the range of H_{N}, where H_{N}
is the N × N average reference operator (or centering matrix) defined as:

H_{N} = I_{N} − (1/N) 1_{N} 1_{N}^{T}     (21)

where I_{N} is the N × N identity matrix and 1_{N} is an N × 1
vector composed of ones. In this case, rank(K) = (N − 1),
K ≡ H_{N} K,
and Φ ≡ H_{N} Φ.
The solution to the problem in equation (20) is:

Ĵ = W^{−1} K^{T} (K W^{−1} K^{T} + α H_{N})^{+} Φ     (22)

where (·)^{+} denotes the Moore-Penrose pseudoinverse.
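As a numerical illustration, the regularized minimum norm solution can be sketched with NumPy, assuming the standard Tikhonov/pseudoinverse form Ĵ = W^{−1} K^{T} (K W^{−1} K^{T} + α H)^{+} Φ for equation (22); the dimensions, random test data, and variable names below are hypothetical, not taken from the original text.

```python
import numpy as np

# Hypothetical sizes: N electrodes, M source points (3M unknown current components).
rng = np.random.default_rng(0)
N, M = 10, 4
H = np.eye(N) - np.ones((N, N)) / N        # average reference operator, eq. (21)
K = H @ rng.standard_normal((N, 3 * M))    # re-referenced lead field, K = H K
Phi = H @ rng.standard_normal(N)           # average-referenced measurements

W_inv = np.eye(3 * M)                      # inverse of a positive definite weight matrix W
alpha = 0.1                                # regularization parameter, alpha > 0

# J_hat = W^{-1} K^T (K W^{-1} K^T + alpha H)^+ Phi;
# the Moore-Penrose pseudoinverse handles the rank deficiency (rank N - 1).
J_hat = W_inv @ K.T @ np.linalg.pinv(K @ W_inv @ K.T + alpha * H) @ Phi
```

The pseudoinverse (rather than an ordinary inverse) is needed because average-referenced data confine everything to an (N − 1)-dimensional subspace.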
The cross-validation error is defined as:

E_{CV} = (1/N) Σ_{i=1}^{N} ( [Φ]_{i} − [K Z_{i}]_{i} )²     (23)
where [Φ]_{i} denotes the
i^{th} element of Φ,
[KZ_{i}]_{i}
denotes the i^{th}
element of the vector KZ_{i},
and:

Z_{i} = W^{−1} K_{(i)}^{T} ( K_{(i)} W^{−1} K_{(i)}^{T} + α H_{(N−1)} )^{+} Φ_{(i)}     (24)

where K_{(i)} is obtained from K by
deleting its i^{th} row;
K_{(i)} lies in the range of H_{(N−1)};
K_{(i)} ≡ H_{(N−1)} K_{(i)};
Φ_{(i)} is obtained from Φ
by deleting its i^{th}
element; and Φ_{(i)} ≡ H_{(N−1)} Φ_{(i)}.
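The leave-one-out scheme of equations (23) and (24) can be computed by brute force: delete each electrode in turn, re-center, solve, and predict the held-out measurement. The sketch below assumes the pseudoinverse solution form discussed above; all sizes and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 8, 3
W_inv = np.eye(3 * M)                       # inverse of the weight matrix W
alpha = 0.05

def center(n):
    """Average reference operator H_n of size n x n, eq. (21)."""
    return np.eye(n) - np.ones((n, n)) / n

K = center(N) @ rng.standard_normal((N, 3 * M))
Phi = center(N) @ rng.standard_normal(N)

cv = 0.0
for i in range(N):
    K_i = center(N - 1) @ np.delete(K, i, axis=0)   # delete i-th row, re-center
    Phi_i = center(N - 1) @ np.delete(Phi, i)       # delete i-th element, re-center
    # Z_i: inverse solution computed without electrode i, eq. (24)
    G = np.linalg.pinv(K_i @ W_inv @ K_i.T + alpha * center(N - 1))
    Z_i = W_inv @ K_i.T @ G @ Phi_i
    cv += (Phi[i] - K[i] @ Z_i) ** 2                # predict the held-out measurement
cv /= N                                             # E_CV, eq. (23)
```

Note that the reduced data are re-centered with H_{(N−1)} before solving, so each fold is a genuine (N − 1)-electrode average-reference problem.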
An equivalent equation for the cross-validation error is:


(25) 
where K W^{−1} K^{T} = Γ Λ Γ^{T} denotes the eigendecomposition;
Γ is an N × (N−1)
matrix whose columns are eigenvectors;
and Λ is an (N−1) × (N−1)
diagonal matrix with non-null eigenvalues.
Three important comments:
1. These equations are valid if the matrix W does not depend on the measurement space,
i.e., it must not depend on the position or number of electrodes.
2. Equation (25) is valid for any α ≥ 0.
3. I have emphasized here the use of simple ordinary cross-validation as the
"common yardstick" for comparing inverse solutions. Ordinary cross-validation
corresponds exactly to the concept of "prediction error" based on "leave-one-out".
Furthermore, it can be calculated exactly for any inverse solution (linear,
nonlinear, Bayesian, etc.). Generalized cross-validation does not have these properties.
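The first proposition, selecting the regularization parameter by minimum ordinary cross-validation error, can then be sketched as a simple grid search. The grid, the helper name `loo_cv`, and the simulated test problem are illustrative assumptions, not part of the original text.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 8, 3
W_inv = np.eye(3 * M)

def center(n):
    # Average reference operator, eq. (21)
    return np.eye(n) - np.ones((n, n)) / n

K = center(N) @ rng.standard_normal((N, 3 * M))
# Simulated data: lead field applied to a random source, plus centered noise
Phi = K @ rng.standard_normal(3 * M) + 0.1 * center(N) @ rng.standard_normal(N)

def loo_cv(alpha):
    """Ordinary (leave-one-out) cross-validation error, eqs. (23)-(24)."""
    err = 0.0
    for i in range(N):
        K_i = center(N - 1) @ np.delete(K, i, axis=0)
        Phi_i = center(N - 1) @ np.delete(Phi, i)
        G = np.linalg.pinv(K_i @ W_inv @ K_i.T + alpha * center(N - 1))
        Z_i = W_inv @ K_i.T @ G @ Phi_i
        err += (Phi[i] - K[i] @ Z_i) ** 2
    return err / N

# Grid search: pick the alpha with minimum cross-validation error
alphas = np.logspace(-4, 1, 20)
errors = np.array([loo_cv(a) for a in alphas])
alpha_best = alphas[np.argmin(errors)]
```

The same machinery supports the second proposition: compute each candidate inverse solution at its own best α and prefer the one with the smallest cross-validation error.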