Package com.polytechnik.kgo
Class LagrangeMultipliersPartialSubspace
java.lang.Object
com.polytechnik.kgo.LagrangeMultipliersPartialSubspace
A class to handle Lagrange multipliers in a partial subspace.
-
Nested Class Summary
Nested Classes
Modifier and Type: private static class
Description: Store linear system coefficients.
-
Constructor Summary
Constructors
LagrangeMultipliersPartialSubspace()
-
Method Summary
Modifier and Type / Method / Description

static double[]
calculateRegularLambda(int nC, int nX, double[] SK, double[] u)
Calculate regular lambda as \( \lambda_{ij}=\mathrm{Herm} \sum\limits_{k=k^{\prime}=0}^{nX-1}\sum\limits_{j^{\prime}=0}^{nC-1} u_{ik}S_{jk;j^{\prime}k^{\prime}} u_{j^{\prime}k^{\prime}} \).

private static double[]
getLambdaAsMatrix(int nC, double[] vecLambda)
Unvectorize lambda.

(package private) static double[]
getLambdaForSubspace(int nC, int nX, double[] SK, double[] u, double[][] vBasis)
Similar to calculateRegularLambda(int, int, double[], double[]) but calculates Lagrange multipliers using only the weights for projections onto a given subspace.

private static double[]
getLambdaForSubspaceAverageSeveralU(int nC, int nX, double[] SK, double[][] uToAverage, double[][] vBasis)
An attempt to build Lagrange multipliers not for a specific u, but averaged over several states from uToAverage, obtaining Lagrange multipliers averaged over several solution candidates.

private static double[]
getLambdaSolvingLinearSystem(int nC, double[] matrLinSystem, double[] rPart)
Solve the linear system and convert the obtained lambda vector to a Hermitian matrix.

private static double[]
getVectorizedCoefsByLambda(int nC, double[] UV)
Given a UV matrix of nC*nC, obtain the corresponding coefficients (as a vector) by Hermitian lambda.
-
Constructor Details
-
LagrangeMultipliersPartialSubspace
LagrangeMultipliersPartialSubspace()
-
-
Method Details
-
calculateRegularLambda
public static double[] calculateRegularLambda(int nC, int nX, double[] SK, double[] u)
Calculate regular lambda as \( \lambda_{ij}=\mathrm{Herm} \sum\limits_{k=k^{\prime}=0}^{nX-1}\sum\limits_{j^{\prime}=0}^{nC-1} u_{ik}S_{jk;j^{\prime}k^{\prime}} u_{j^{\prime}k^{\prime}} \). The \( u_{jk} \) is assumed to be orthogonal: \( \delta_{ij}=\sum\limits_{k=0}^{nX-1} u_{ik}u_{jk} \).
Parameters:
nC - First dimension of \( u_{jk} \).
nX - Second dimension of \( u_{jk} \).
SK - Data tensor.
u - The state in which to calculate Lagrange multipliers. The state must be orthogonal: \( \delta_{ij}=\sum\limits_{k=0}^{nX-1} u_{ik}u_{jk} \).
Returns:
Lagrange multipliers.
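For reference, a minimal sketch of this computation (an illustration, not the library code), assuming row-major layouts u[i*nX + k] and SK[(j*nX + k)*nC*nX + (jp*nX + kp)]; the actual storage convention may differ:

// Sketch of calculateRegularLambda under the layout assumptions above:
// lambda_{ij} = Herm sum_{k,k'} sum_{j'} u_{ik} S_{jk;j'k'} u_{j'k'}.
static double[] regularLambdaSketch(int nC, int nX, double[] SK, double[] u) {
    double[] lambda = new double[nC * nC];
    for (int i = 0; i < nC; i++)
        for (int j = 0; j < nC; j++) {
            double s = 0;
            for (int k = 0; k < nX; k++)
                for (int jp = 0; jp < nC; jp++)
                    for (int kp = 0; kp < nX; kp++)
                        s += u[i * nX + k] * SK[(j * nX + k) * nC * nX + (jp * nX + kp)] * u[jp * nX + kp];
            lambda[i * nC + j] = s;
        }
    // Herm part (real arithmetic: symmetrization): lambda <- (lambda + lambda^T)/2.
    for (int i = 0; i < nC; i++)
        for (int j = 0; j < i; j++) {
            double h = 0.5 * (lambda[i * nC + j] + lambda[j * nC + i]);
            lambda[i * nC + j] = h;
            lambda[j * nC + i] = h;
        }
    return lambda;
}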
-
getVectorizedCoefsByLambda
private static double[] getVectorizedCoefsByLambda(int nC, double[] UV)
Given a UV matrix of nC*nC, obtain the corresponding coefficients (as a vector) by Hermitian lambda.
Returns:
Coefficients by lambda, as a vector of length nC*(nC+1)/2.
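As an illustration of this packing (a sketch; the i <= j ordering and the doubling of off-diagonal terms are assumptions, not the verified library convention):

// Collapse a general nC x nC matrix UV (row-major) into the nC*(nC+1)/2
// coefficients multiplying the independent entries of a symmetric lambda
// in sum_{ij} lambda_{ij} UV_{ij}: diagonal entries appear once,
// off-diagonal entries twice since lambda_{ij} = lambda_{ji}.
static double[] vectorizedCoefsSketch(int nC, double[] UV) {
    double[] c = new double[nC * (nC + 1) / 2];
    int p = 0;
    for (int i = 0; i < nC; i++)
        for (int j = i; j < nC; j++)
            c[p++] = (i == j) ? UV[i * nC + i] : UV[i * nC + j] + UV[j * nC + i];
    return c;
}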
-
getLambdaAsMatrix
private static double[] getLambdaAsMatrix(int nC, double[] vecLambda)
Unvectorize lambda.
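A sketch of the expansion (assuming the same i <= j ordering of independent entries as above):

// Expand a packed lambda vector of nC*(nC+1)/2 entries into the full
// symmetric nC x nC matrix, row-major.
static double[] lambdaAsMatrixSketch(int nC, double[] vecLambda) {
    double[] lambda = new double[nC * nC];
    int p = 0;
    for (int i = 0; i < nC; i++)
        for (int j = i; j < nC; j++, p++) {
            lambda[i * nC + j] = vecLambda[p];
            lambda[j * nC + i] = vecLambda[p];
        }
    return lambda;
}
-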
getLambdaForSubspace
static double[] getLambdaForSubspace(int nC, int nX, double[] SK, double[] u, double[][] vBasis)
Similar to calculateRegularLambda(int, int, double[], double[]) but calculates Lagrange multipliers using only the weights for projections onto a given subspace. Originally, the \( \lambda_{ij} \) is obtained from the minimization problem (method calculateRegularLambda(int, int, double[], double[])):
$$ \sum\limits_{i=0}^{nC-1}\sum\limits_{q=0}^{nX-1} \left| \sum\limits_{j^{\prime}=0}^{nC-1}\sum\limits_{k^{\prime}=0}^{nX-1} S_{iq;j^{\prime}k^{\prime}} u_{j^{\prime}k^{\prime}} - \sum\limits_{j=0}^{nC-1} \frac{\lambda_{ij}+\lambda_{ji}}{2} u_{jq} \right|^2 \xrightarrow[\lambda_{ij}]{\quad }\min $$
If we have some chosen basis \( v^{[p]}_{jk} \), then we may consider instead
$$ \sum\limits_{p=0}^{nP-1} \left| \sum\limits_{i=0}^{nC-1}\sum\limits_{q=0}^{nX-1} v^{[p]}_{iq}\left[ \sum\limits_{j^{\prime}=0}^{nC-1}\sum\limits_{k^{\prime}=0}^{nX-1} S_{iq;j^{\prime}k^{\prime}} u_{j^{\prime}k^{\prime}} - \sum\limits_{j=0}^{nC-1} \frac{\lambda_{ij}+\lambda_{ji}}{2} u_{jq} \right] \right|^2 \xrightarrow[\lambda_{ij}]{\quad }\min $$
If \( v^{[p]}_{jk} \) is a full basis, \( p=0\dots nC*nX-1 \), then the two methods of \( \lambda_{ij} \) calculation are identical. The reason a subspace may be beneficial to use: \( \lambda_{ij} \) is a Hermitian matrix with \( nC(nC+1)/2 \) independent parameters, whereas the Lagrangian variation has \( nC*nX \) equations. One may therefore choose a subspace on which the variation projections are to be set to zero in the first place. This technique may be used to improve algorithm convergence. Work in progress: the linear system may be degenerate, and the concept itself may not be a good one. A better approach seems to be adding additional linear constraints on \( u_{jk} \); see KGOIterationalSubspaceLinearConstraints and LinearConstraints.getOrthogonalOffdiag0DiagEq(int, int, double[], double[]).
Parameters:
nC - First dimension of \( u_{jk} \).
nX - Second dimension of \( u_{jk} \).
SK - Data tensor.
u - The state in which to calculate Lagrange multipliers.
vBasis - A subspace with which to calculate Lagrange multipliers; keep vBasis.length >= nC*(nC+1)/2 to avoid linear system degeneracy.
Returns:
Lagrange multipliers as an \( nC \times nC \) matrix; if vBasis is a full basis the result is identical to calculateRegularLambda(int, int, double[], double[]).
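A minimal sketch of how such a projected least-squares system could be assembled (layout assumptions as in the earlier sketches; the method name and the use of normal equations are illustrative, not the library's actual internals):

// Assemble the projected least-squares system for the nV = nC*(nC+1)/2
// independent entries of a symmetric lambda. Assumed layouts (not verified
// against the library): u[i*nX + k], SK[(i*nX + q)*nC*nX + (jp*nX + kp)],
// each vBasis[p] laid out like u.
static double[][] subspaceNormalEquationsSketch(int nC, int nX, double[] SK, double[] u, double[][] vBasis) {
    final int nN = nC * nX, nV = nC * (nC + 1) / 2, nP = vBasis.length;
    double[] su = new double[nN];                    // (S u)_{iq}
    for (int iq = 0; iq < nN; iq++)
        for (int jk = 0; jk < nN; jk++)
            su[iq] += SK[iq * nN + jk] * u[jk];
    double[][] M = new double[nP][nV];               // M[p][ab]: coefficient of lambda_{ab}
    double[] r = new double[nP];                     // r[p]: projection of S u onto v^{[p]}
    for (int p = 0; p < nP; p++) {
        double[] v = vBasis[p];
        for (int iq = 0; iq < nN; iq++) r[p] += v[iq] * su[iq];
        int ab = 0;
        for (int a = 0; a < nC; a++)
            for (int b = a; b < nC; b++, ab++)
                for (int q = 0; q < nX; q++)
                    M[p][ab] += v[a * nX + q] * u[b * nX + q]
                              + (a != b ? v[b * nX + q] * u[a * nX + q] : 0.0);
    }
    // Normal equations (M^T M) vecLambda = M^T r; the system needs
    // vBasis.length >= nV, matching the degeneracy warning above.
    double[] A = new double[nV * nV];
    double[] rhs = new double[nV];
    for (int p = 0; p < nP; p++)
        for (int i = 0; i < nV; i++) {
            rhs[i] += M[p][i] * r[p];
            for (int j = 0; j < nV; j++) A[i * nV + j] += M[p][i] * M[p][j];
        }
    return new double[][] { A, rhs };                // solve, then unvectorize to nC x nC
}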
-
getLambdaForSubspaceAverageSeveralU
private static double[] getLambdaForSubspaceAverageSeveralU(int nC, int nX, double[] SK, double[][] uToAverage, double[][] vBasis)
An attempt to build Lagrange multipliers not for a specific u, but averaged over several states from uToAverage, i.e. to obtain Lagrange multipliers averaged over several solution candidates.
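One plausible reading, as a sketch building on subspaceNormalEquationsSketch above (whether the averaging is over the assembled systems or over the resulting matrices is an assumption here):

// Sketch (assumption): accumulate the projected normal equations over
// several candidate states, then solve the summed system once, yielding
// a lambda averaged over the candidates.
static double[][] averagedSystemSketch(int nC, int nX, double[] SK, double[][] uToAverage, double[][] vBasis) {
    final int nV = nC * (nC + 1) / 2;
    double[] A = new double[nV * nV], rhs = new double[nV];
    for (double[] u : uToAverage) {
        double[][] sys = subspaceNormalEquationsSketch(nC, nX, SK, u, vBasis);
        for (int i = 0; i < nV * nV; i++) A[i] += sys[0][i];
        for (int i = 0; i < nV; i++) rhs[i] += sys[1][i];
    }
    return new double[][] { A, rhs };  // solve and unvectorize as before
}
-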
getLambdaSolvingLinearSystem
private static double[] getLambdaSolvingLinearSystem(int nC, double[] matrLinSystem, double[] rPart)
Solve the linear system and convert the obtained lambda vector to a Hermitian matrix.
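A self-contained sketch of that step, using plain Gaussian elimination with partial pivoting (the library's actual solver is not specified here; a pivot threshold guards against the degeneracy mentioned above):

// Sketch (assumption, not the library solver): solve the nV x nV system
// matrLinSystem * vecLambda = rPart by Gaussian elimination with partial
// pivoting, then expand the packed result into a symmetric nC x nC matrix.
static double[] solveAndUnvectorizeSketch(int nC, double[] matrLinSystem, double[] rPart) {
    final int nV = nC * (nC + 1) / 2;
    double[] A = matrLinSystem.clone(), b = rPart.clone();
    for (int col = 0; col < nV; col++) {
        int piv = col;                       // partial pivoting
        for (int row = col + 1; row < nV; row++)
            if (Math.abs(A[row * nV + col]) > Math.abs(A[piv * nV + col])) piv = row;
        if (Math.abs(A[piv * nV + col]) < 1e-12)
            throw new ArithmeticException("degenerate linear system");
        for (int j = 0; j < nV; j++) {       // swap rows piv and col
            double t = A[col * nV + j]; A[col * nV + j] = A[piv * nV + j]; A[piv * nV + j] = t;
        }
        double t = b[col]; b[col] = b[piv]; b[piv] = t;
        for (int row = col + 1; row < nV; row++) {
            double f = A[row * nV + col] / A[col * nV + col];
            for (int j = col; j < nV; j++) A[row * nV + j] -= f * A[col * nV + j];
            b[row] -= f * b[col];
        }
    }
    for (int i = nV - 1; i >= 0; i--) {      // back substitution
        for (int j = i + 1; j < nV; j++) b[i] -= A[i * nV + j] * b[j];
        b[i] /= A[i * nV + i];
    }
    double[] lambda = new double[nC * nC];   // unvectorize (i <= j ordering as above)
    int p = 0;
    for (int i = 0; i < nC; i++)
        for (int j = i; j < nC; j++, p++) { lambda[i * nC + j] = b[p]; lambda[j * nC + i] = b[p]; }
    return lambda;
}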
-