#include <stdio.h>
#include "spConfig.h"
#include "spMatrix.h"
#include "spDefs.h"
Defines

    #define spINSIDE_SPARSE
    #define NORM(a)  (nr = ABS((a).Real), ni = ABS((a).Imag), MAX (nr,ni))
    #define SLACK    1e4

Functions

    void    spMNA_Preorder (spMatrix eMatrix)
    void    spScale (spMatrix eMatrix, spREAL RHS_ScaleFactors[], spREAL SolutionScaleFactors[])
    void    spMultiply (spMatrix eMatrix, spREAL RHS[], spREAL Solution[])
    void    spMultTransposed (spMatrix eMatrix, spREAL RHS[], spREAL Solution[])
    void    spDeterminant (spMatrix eMatrix, int *pExponent, spREAL *pDeterminant, spREAL *piDeterminant)
    void    spStripFills (spMatrix eMatrix)
    void    spDeleteRowAndCol (spMatrix eMatrix, int Row, int Col)
    spREAL  spPseudoCondition (spMatrix eMatrix)
    spREAL  spCondition (spMatrix eMatrix, spREAL NormOfMatrix, int *pError)
    spREAL  spNorm (spMatrix eMatrix)
    spREAL  spLargestElement (spMatrix eMatrix)
    spREAL  spRoundoff (spMatrix eMatrix, spREAL Rho)
    void    spErrorMessage (spMatrix eMatrix, FILE *Stream, char *Originator)
Objects that begin with the spc prefix are considered private and should not be used.

spCondition()
Computes an estimate of the condition number using a variation on the LINPACK condition number estimation algorithm. This quantity is an indicator of ill-conditioning in the matrix. To avoid problems with overflow, the reciprocal of the condition number is returned. If this number is small, and if the matrix is scaled such that uncertainties in the RHS and the matrix entries are equilibrated, then the matrix is ill-conditioned. If this number is near one, the matrix is well conditioned. This routine must only be used after a matrix has been factored by spOrderAndFactor() or spFactor() and before it is cleared by spClear() or spInitialize(). Unlike the LINPACK condition number estimator, this routine returns the L-infinity condition number. This is an artifact of Sparse placing ones on the diagonal of the upper triangular matrix rather than the lower. This difference should be of no importance.

References:
A.K. Cline, C.B. Moler, G.W. Stewart, J.H. Wilkinson. An estimate for the condition number of a matrix. SIAM Journal on Numerical Analysis. Vol. 16, No. 2, April 1979, pp. 368-375.
J.J. Dongarra, C.B. Moler, J.R. Bunch, G.W. Stewart. LINPACK User's Guide. SIAM, 1979.
Roger G. Grimes, John G. Lewis. Condition number estimation for sparse matrices. SIAM Journal on Scientific and Statistical Computing. Vol. 2, No. 4, December 1981, pp. 384-388.
Dianne Prost O'Leary. Estimating matrix condition numbers. SIAM Journal on Scientific and Statistical Computing. Vol. 1, No. 2, June 1980, pp. 205-209.

spDeleteRowAndCol()
Deletes a row and a column from a matrix. Sparse will abort if an attempt is made to delete a row or column that doesn't exist.

spDeterminant()
This routine is capable of calculating the determinant of the matrix once the LU factorization has been performed. Hence, only use this routine after spFactor() and before spClear(). The determinant equals the product of all the diagonal elements of the lower triangular matrix L, except that this product may need negating. Whether the product or the negative product equals the determinant is determined by the number of row and column interchanges performed. Note that the determinants of matrices can be very large or very small. On large matrices, the determinant can be far larger or smaller than can be represented by a floating point number. For this reason the determinant is scaled to a reasonable value and the logarithm of the scale factor is returned.

spErrorMessage()
This routine prints a short message describing the error state of Sparse. No message is produced if there is no error. The error state is cleared.

spLargestElement()
This routine, along with spRoundoff(), is used to gauge the stability of a factorization. If the factorization is determined to be too unstable, then the matrix should be reordered. The routines compute quantities that are needed in the computation of a bound on the error attributed to any one element in the matrix during the factorization. In other words, there is a matrix E = [e_ij] of error terms such that A + E = LU. This routine finds a bound on |e_ij|. Erisman & Reid [1] showed that |e_ij| <= 1.01 u rho m_ij, where u is the machine rounding unit, rho = max |a_ij^(k)| with the max taken over every row i, column j, and step k of the factorization, and m_ij is the number of multiplications required in the computation of l_ij if i >= j, or of u_ij otherwise. Barlow [2] showed that rho <= (max_i ||l_i||_p)(max_j ||u_j||_q), where l_i is the i-th row of L, u_j is the j-th column of U, and 1/p + 1/q = 1.

spLargestElement() finds the magnitude of the largest element in the matrix. If the matrix has not yet been factored, the largest element is found by direct search. If the matrix is factored, a bound on the largest element in any of the reduced submatrices is computed using Barlow's bound with p = infinity and q = 1. The ratio of these two numbers is the growth, which can be used to determine if the pivoting order is adequate. A large growth implies that considerable error has been made in the factorization and that it is probably a good idea to reorder the matrix. If a large growth is encountered after using spFactor(), reconstruct the matrix and refactor using spOrderAndFactor(). If a large growth is encountered after using spOrderAndFactor(), refactor using spOrderAndFactor() with the pivot threshold increased, say to 0.1. Using only the size of the matrix as an upper bound on m_ij, together with Barlow's bound on rho, the user can estimate the size of the matrix error terms using the bound of Erisman and Reid.

spRoundoff() computes a tighter bound (with more work) based on work by Gear [3]; this bound involves the pivot threshold t and the maximum number of off-diagonal elements in any row of U. The expensive part of computing this bound is determining the maximum number of off-diagonals in U, which changes only when the order of the matrix changes. This number is computed and saved, and only recomputed if the matrix is reordered.

[1] A. M. Erisman, J. K. Reid. Monitoring the stability of the triangular factorization of a sparse matrix. Numerische Mathematik. Vol. 22, No. 3, 1974, pp. 183-186.
[2] J. L. Barlow. A note on monitoring the stability of triangular decomposition of sparse matrices. SIAM Journal on Scientific and Statistical Computing. Vol. 7, No. 1, January 1986, pp. 166-168.
[3] I. S. Duff, A. M. Erisman, J. K. Reid. Direct Methods for Sparse Matrices. Oxford, 1986, p. 99.

spMNA_Preorder()
This routine massages modified node admittance matrices to remove zeros from the diagonal. It takes advantage of the fact that the row and column associated with a zero diagonal usually have structural ones placed symmetrically. This routine should be used only on modified node admittance matrices and should be executed after the matrix has been built but before the factorization begins. It should be executed for the initial factorization only and should be executed before the rows have been linked. Thus it should be run before using spScale(), spMultiply(), spDeleteRowAndCol(), or spNorm().

This routine exploits the fact that the structural ones are placed in the matrix in symmetric twins. For example, the stamps for a grounded and a floating voltage source are

    grounded:        floating:
    [ x  x  1 ]      [ x  x  1 ]
    [ x  x    ]      [ x  x -1 ]
    [ 1       ]      [ 1 -1    ]

Swapping the rows associated with a set of twins moves a structural one onto the diagonal:

    grounded:        floating 1:      floating 2:
    [ 1       ]      [ 1 -1    ]      [ x  x  1 ]
    [ x  x    ]      [ x  x -1 ]      [ 1 -1    ]
    [ x  x  1 ]      [ x  x  1 ]      [ x  x -1 ]

It is important to deal with any zero diagonals that only have one set of twins before dealing with those that have more than one, because swapping rows destroys the symmetry of any twins in the rows being swapped, which may limit future moves. Consider

    [ x  x  1    ]
    [ x  x -1  1 ]
    [ 1 -1       ]
    [     1      ]

The fourth diagonal has a lone set of twins; the third has two sets. Swapping rows 2 and 4 takes care of the lone twins first:

    [ x  x  1    ]
    [     1      ]
    [ 1 -1       ]
    [ x  x -1  1 ]

Swapping rows 1 and 3 then removes the remaining zero diagonal:

    [ 1 -1       ]
    [     1      ]
    [ x  x  1    ]
    [ x  x -1  1 ]

Had the third diagonal been handled first by swapping rows 2 and 3,

    [ x  x  1    ]
    [ 1 -1       ]
    [ x  x -1  1 ]
    [     1      ]

the symmetry of the twins for the fourth diagonal would have been destroyed, leaving no symmetric twins with which to remove the remaining zero. So we always take care of lone twins first. When none remain, we choose arbitrarily a set of twins for a diagonal with more than one set and swap the rows corresponding to that twin. We then deal with any lone twins that were created and repeat the procedure until no zero diagonals with symmetric twins remain. In this particular implementation, columns are swapped rather than rows. The algorithm used in this function was developed by Ken Kundert and Tom Quarles.

spMultiply()
Multiplies matrix by solution vector to find source vector. Assumes matrix has not been factored. This routine can be used as a test to see if solutions are correct. It should not be used before spMNA_Preorder().

spMultTransposed()
Multiplies transposed matrix by solution vector to find source vector. Assumes matrix has not been factored. This routine can be used as a test to see if solutions are correct. It should not be used before spMNA_Preorder().

spNorm()
Computes the L-infinity norm of an unfactored matrix. It is a fatal error to pass this routine a factored matrix.

spPseudoCondition()
Computes the magnitude of the ratio of the largest to the smallest pivots. This quantity is an indicator of ill-conditioning in the matrix. If this ratio is large, and if the matrix is scaled such that uncertainties in the RHS and the matrix entries are equilibrated, then the matrix is ill-conditioned. However, a small ratio does not necessarily imply that the matrix is well-conditioned. This routine must only be used after a matrix has been factored by spOrderAndFactor() or spFactor() and before it is cleared by spClear() or spInitialize(). The pseudocondition is faster to compute than the condition number calculated by spCondition(), but is not as informative.

spRoundoff()
This routine, along with spLargestElement(), is used to gauge the stability of a factorization. See the description of spLargestElement() for more information.

spScale()
This function scales the matrix to enhance the possibility of finding a good pivoting order. Note that scaling enhances accuracy of the solution only if it affects the pivoting order, so it makes no sense to scale the matrix before spFactor(). If scaling is desired it should be done before spOrderAndFactor().

There are several things to take into account when choosing the scale factors. First, the scale factors are directly multiplied against the elements in the matrix. To prevent roundoff, each scale factor should be equal to an integer power of the number base of the machine. Since most machines operate in base two, scale factors should be a power of two.

Second, the matrix should be scaled such that the matrix of element uncertainties is equilibrated.

Third, this function multiplies the scale factors by the elements, so if one row tends to have uncertainties 1000 times smaller than the other rows, then its scale factor should be 1024, not 1/1024.

Fourth, to save time, this function does not scale rows or columns if their scale factors are equal to one. Thus, the scale factors should be normalized to the most common scale factor. Rows and columns should be normalized separately. For example, if the size of the matrix is 100 and 10 rows tend to have uncertainties near 1e-6 and the remaining 90 have uncertainties near 1e-12, then the scale factor for the 10 should be 1/1,048,576 and the scale factors for the remaining 90 should be 1.

Fifth, since this routine directly operates on the matrix, it is necessary to apply the scale factors to the RHS and Solution vectors. It may be easier to simply use spOrderAndFactor() on a scaled matrix to choose the pivoting order, and then throw away the matrix. Subsequent factorizations, performed with spFactor(), will not need to have the RHS and Solution vectors descaled.

Lastly, this function should not be executed before the function spMNA_Preorder().

spStripFills()
Strips the matrix of all fill-ins.