Cylindrical surfaces.

A surface formed by the motion of a straight line L that moves in space maintaining a constant direction and each time intersecting a given curve K is called a cylindrical surface, or cylinder; the curve K is the directrix of the cylinder, and L is its generatrix.

Elliptic cylinder

Equation of the elliptic cylinder: x²/a² + y²/b² = 1.

A special case of the elliptic cylinder is the circular cylinder, whose equation is x² + y² = R². The equation x² = 2pz defines a parabolic cylinder in space.

The equation x²/a² − y²/b² = 1 defines a hyperbolic cylinder in space.

All these surfaces are called second-order cylinders, since their equations are second-degree equations in the current coordinates x, y, z.

62. Ellipsoids.

We examine the surface given by the equation x²/a² + y²/b² + z²/c² = 1.

Consider sections of the surface by planes parallel to the xOy plane. Such planes have equations z = h, where h is any number. The line obtained in the section is determined by the two equations x²/a² + y²/b² = 1 − h²/c², z = h.

Examining the surface:

a) if |h| > c, then 1 − h²/c² < 0 and the line of intersection of the surface with the plane z = h does not exist;

b) if |h| = c, the line of intersection degenerates into the two points (0, 0, c) and (0, 0, −c); the planes z = c and z = −c touch the given surface;

c) if |h| < c, the equations can be rewritten as x²/a1² + y²/b1² = 1, z = h; as can be seen, the line of intersection is an ellipse with semi-axes a1 = a·sqrt(1 − h²/c²), b1 = b·sqrt(1 − h²/c²). The smaller |h| is, the larger the semi-axes; at h = 0 they reach their maximum values a1 = a, b1 = b, and the equations take the form x²/a² + y²/b² = 1, z = 0.

The considered sections allow us to depict the surface as a closed oval surface, called an ellipsoid. If two of the semi-axes are equal, the triaxial ellipsoid turns into an ellipsoid of revolution, and if a = b = c, into a sphere.
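The dependence of the section semi-axes on h can be checked numerically; a minimal sketch, with illustrative values of a, b, c that are not taken from the text:

```python
import math

# Ellipsoid x^2/a^2 + y^2/b^2 + z^2/c^2 = 1.
# a, b, c below are illustrative values, not from the text.
a, b, c = 3.0, 2.0, 1.0

def section_semi_axes(h):
    """Semi-axes a1, b1 of the ellipse cut out by the plane z = h (|h| < c)."""
    factor = math.sqrt(1 - h ** 2 / c ** 2)
    return a * factor, b * factor

print(section_semi_axes(0.0))  # largest section: (a, b) = (3.0, 2.0)
print(section_semi_axes(0.8))  # a smaller ellipse closer to the pole
```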

Hyperboloids.

1. Examine the surface x²/a² + y²/b² − z²/c² = 1. Cutting the surface with the plane z = h, we obtain a line of intersection whose equations are

x²/a² + y²/b² = 1 + h²/c², z = h,

i.e. an ellipse with semi-axes a1 = a·sqrt(1 + h²/c²), b1 = b·sqrt(1 + h²/c²).

The semi-axes reach their smallest values at h = 0: a1 = a, b1 = b. As |h| increases, the semi-axes of the ellipse increase. The sections by the planes x = 0 and y = 0 are hyperbolas.

An analysis of these sections shows that the surface defined by the equation has the shape of an infinitely expanding tube. The surface is called a one-sheeted hyperboloid.
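The widening of the tube with |h| can be illustrated the same way; a small sketch with arbitrary illustrative values of a, b, c:

```python
import math

# One-sheeted hyperboloid x^2/a^2 + y^2/b^2 - z^2/c^2 = 1.
# a, b, c are illustrative values, not from the text.
a, b, c = 2.0, 1.0, 1.0

def section_semi_axes(h):
    """The plane z = h cuts an ellipse with semi-axes a*sqrt(1 + h^2/c^2), b*sqrt(1 + h^2/c^2)."""
    factor = math.sqrt(1 + h ** 2 / c ** 2)
    return a * factor, b * factor

print(section_semi_axes(0.0))  # smallest (throat) section: (2.0, 1.0)
print(section_semi_axes(2.0))  # the tube widens as |h| grows
```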

2. The equation x²/a² + y²/b² − z²/c² = −1 also defines a surface.

It is a surface consisting of two cavities shaped like convex unbounded bowls. The surface is called a two-sheeted hyperboloid.

64. Paraboloids.

1. The elliptic paraboloid.

Canonical equation: x²/p + y²/q = 2z (p > 0, q > 0).

For p = q we get a paraboloid of revolution around the Oz axis.

Sections of an elliptic paraboloid by planes are either an ellipse, or a parabola, or a point.

2. The hyperbolic paraboloid.

Canonical equation: x²/p − y²/q = 2z (p > 0, q > 0).

Sections of a hyperbolic paraboloid by planes are either a hyperbola, or a parabola, or a pair of straight lines (rectilinear generators).

65. Conical surfaces.

Canonical equation of a second-order cone: x²/a² + y²/b² − z²/c² = 0.

For a = b we get a cone of revolution (a right circular cone).
Sections of the cone by planes: a plane intersecting all rectilinear generators gives an ellipse; a plane parallel to one rectilinear generatrix gives a parabola; a plane parallel to two rectilinear generators gives a hyperbola; a plane passing through the vertex of the cone gives a pair of intersecting lines or a point (the vertex).

66. Function. Basic concepts. Ways to set it.

A function is a law by which each number x from a given set X is associated with exactly one number y; one writes y = f(x). Here x is called the argument of the function, and y is called the value of the function.

1. Analytical method.

2. Graphical method.

3. Verbal method.

4. Tabular method.

Comparison theorem.

In the theory of differential equations, a comparison theorem is a theorem asserting that solutions of a differential equation (or of a system of differential equations) have a certain property under the assumption that an auxiliary equation or inequality (a system of differential equations or inequalities) has some property.

1) Sturm's theorem: any non-trivial solution of an equation vanishes on a segment no more than m times if the comparison equation has this property.

2) Differential inequality: the solution of a problem is componentwise non-negative if the solution of the comparison problem has this property and the corresponding inequalities are satisfied.

The first remarkable limit.

When calculating limits of expressions containing trigonometric functions, one often uses the limit lim(x→0) sin x / x = 1, called the first remarkable limit.

It reads: the limit of the ratio of the sine to its argument equals one when the argument tends to zero.

Proof:

Take a circle of radius 1 and denote the radian measure of the angle MOB by x. Let 0 < x < π/2. The arc MB is numerically equal to the central angle x. Comparing the corresponding areas (the triangle MOB, the circular sector MOB and the right triangle with leg tan x), we obviously obtain sin x < x < tan x. Dividing the inequality by sin x > 0, we get 1 < x/sin x < 1/cos x, i.e. cos x < sin x / x < 1.

Since lim(x→0) cos x = 1, by the theorem on the limit of an intermediate (squeezed) function, lim(x→0) sin x / x = 1.

And if x < 0, put x = −t, where t > 0; then sin x / x = sin t / t, so the limit is the same.
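The squeeze cos x < sin x / x < 1 and the limit itself can be checked numerically; an illustrative sketch with arbitrary sample points:

```python
import math

# Check cos x < sin(x)/x < 1 for several 0 < x < pi/2, and watch the ratio tend to 1.
for x in [0.5, 0.1, 0.01, 0.001]:
    ratio = math.sin(x) / x
    assert math.cos(x) < ratio < 1.0   # the squeeze inequality
    print(x, ratio)                    # ratio approaches 1
```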

83. The second remarkable limit.

As is known, the number sequence x_n = (1 + 1/n)^n has a limit equal to e. Let us show that the function (1 + 1/x)^x tends to e as x → ∞.

1. Let x → +∞. Each value of x is enclosed between two positive integers: n ≤ x < n + 1, where n = [x] is the integer part of x. Hence 1/(n + 1) < 1/x ≤ 1/n, therefore

(1 + 1/(n + 1))^n < (1 + 1/x)^x < (1 + 1/n)^(n + 1).

If x → +∞, then n → ∞, and both bounds tend to e. By the theorem on the existence of the limit of an intermediate function, lim(x→+∞) (1 + 1/x)^x = e.

2. Let x → −∞. Make the substitution −x = t; then (1 + 1/x)^x = (1 − 1/t)^(−t) = (1 + 1/(t − 1))^(t − 1) · (1 + 1/(t − 1)) → e as t → +∞.

The limits lim(x→∞) (1 + 1/x)^x = e and lim(α→0) (1 + α)^(1/α) = e are called the second remarkable limit. They are widely used in calculating limits. In applications of analysis, an important role is played by the exponential function with base e. The function y = e^x is called exponential; the notation exp(x) is also used.
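Both branches of the limit can be illustrated numerically; a small sketch with arbitrary sample values:

```python
import math

# (1 + 1/x)^x approaches e both as x -> +infinity and as x -> -infinity.
for x in [1e2, 1e4, 1e6]:
    print(x, (1 + 1 / x) ** x)       # tends to e = 2.718281828...
    print(-x, (1 - 1 / x) ** (-x))   # the x -> -infinity branch via t = -x
```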

Proof.

(taking into account that if Δx → 0, then Δu → 0, since u = g(x) is a continuous function)

Then . The theorem has been proven.

Cauchy's theorem

Cauchy's theorem: if the functions f(x) and φ(x) are continuous on the segment [a, b], differentiable on the interval (a, b), and φ′(x) ≠ 0 for x ∈ (a, b), then there is at least one point c ∈ (a, b) such that the equality
(f(b) − f(a)) / (φ(b) − φ(a)) = f′(c) / φ′(c) holds.

Matrices. Basic concepts. Linear operations on matrices and their properties.

An m by n matrix is a collection of m·n real (complex) numbers, or elements of another structure (polynomials, functions, etc.), written in the form of a rectangular table of m rows and n columns and enclosed in round, square, or double straight brackets. The numbers themselves are called the elements of the matrix, and each element is assigned two indices: the row number and the column number.

A matrix all of whose elements are equal to zero is called a zero matrix.

An n by n matrix, i.e. one whose number of rows equals its number of columns, is called a square matrix of order n.

A square matrix is said to be diagonal if all of its off-diagonal entries are equal to zero.

A diagonal matrix with all diagonal entries equal to 1 is called the identity matrix.
Matrix addition.

Addition properties:

· A + B = B + A;

· (A + B) + C = A + (B + C);

· if O is a zero matrix, then A + O = O + A = A.

Remark 1. The validity of these properties follows from the definition of the operation of matrix addition.

Remark 2. Note again that only matrices of the same dimension can be added.
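The addition properties above can be spot-checked on arbitrary illustrative matrices of the same dimension:

```python
import numpy as np

# Commutativity, associativity and the zero matrix, checked on
# arbitrary illustrative matrices (any equal-size matrices work).
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
C = np.array([[0, 1], [1, 0]])
O = np.zeros((2, 2), dtype=int)

assert np.array_equal(A + B, B + A)              # A + B = B + A
assert np.array_equal((A + B) + C, A + (B + C))  # (A + B) + C = A + (B + C)
assert np.array_equal(A + O, A)                  # A + O = O + A = A
```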

Multiplying a matrix by a number.

Properties of multiplying a matrix by a number

· k(A + B) = kA + kB;

· (k + m)A = kA + mA.

Remark 1. The validity of the properties follows from Definitions 3.4 and 3.5.

Remark 2. The difference of matrices A and B is the matrix C for which C + B = A, i.e. C = A + (−1)B.
Matrix multiplication.

Multiplication of a matrix by a matrix also requires that the dimensions of the factors satisfy a certain condition, namely: the number of columns of the first factor must equal the number of rows of the second.

For square matrices of the same order, the products AB and BA exist and have the same dimension, but their corresponding elements are generally not equal.

However, in some cases the products AB and BA coincide.
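Non-commutativity is easy to see on an arbitrary pair of square matrices; a small illustrative sketch:

```python
import numpy as np

# AB and BA both exist for square matrices of the same order, but differ in general.
A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])

AB, BA = A @ B, B @ A
print(AB)  # rows [2 1] and [4 3]
print(BA)  # rows [3 4] and [1 2]
assert not np.array_equal(AB, BA)

# The identity matrix commutes with everything: AE = EA = A.
E = np.eye(2, dtype=int)
assert np.array_equal(A @ E, E @ A)
```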

Inverse matrix.

A square matrix A is called degenerate (singular) if ΔA = 0, and non-degenerate (nonsingular) if ΔA ≠ 0.

A square matrix B is called the inverse of a square matrix A of the same order if AB = BA = E. In this case B is denoted A⁻¹.

For the inverse matrix to exist, it is necessary and sufficient that the original matrix be nonsingular.


2. Matrix determinant. Properties of determinants.

The determinant is one of the basic concepts of linear algebra. The determinant of a square matrix (one having the same number of rows and columns) is a polynomial in its elements. In general, a matrix can be defined over any commutative ring, in which case the determinant is an element of the same ring. The determinant of A is denoted ΔA (also det A or |A|).

Properties of the determinant

· The determinant is a skew-symmetric multilinear function of the rows (columns) of a matrix. Multilinearity means that the determinant is linear in each row (column) when all the other rows (columns) are fixed.

· When adding a linear combination of other rows (columns) to any row (column), the determinant will not change.

· If two rows (columns) of a matrix are the same, then its determinant is equal to zero.

· If two (or several) rows (columns) of a matrix are linearly dependent, then its determinant is equal to zero.

· If you rearrange two rows (columns) of a matrix, then its determinant is multiplied by (-1).

· A common factor of the elements of any row (column) of the determinant can be taken out of the determinant sign.

· If at least one row (column) of the matrix is ​​zero, then the determinant is equal to zero.

· The sum of the products of all elements of any row (column) and their algebraic complements is equal to the determinant.

· The sum of the products of all elements of any row (column) and the algebraic complements of the corresponding elements of a parallel row (column) is equal to zero.

· The determinant of the product of square matrices of the same order is equal to the product of their determinants (see also the Binet-Cauchy formula).

Using index notation, the determinant of a 3×3 matrix can be defined via the Levi-Civita symbol by the relationship det A = Σ(i,j,k) ε_ijk · a_1i · a_2j · a_3k.
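The Levi-Civita relationship and some of the listed properties can be checked on arbitrary example matrices; a sketch (the `eps` closed form is a standard identity, not from the text):

```python
import numpy as np

def eps(i, j, k):
    """Levi-Civita symbol eps_{ijk} on indices 0..2:
    +1 for even permutations, -1 for odd, 0 if any index repeats."""
    return (j - i) * (k - i) * (k - j) // 2

def det3(A):
    """det A = sum_{i,j,k} eps_{ijk} * a_{1i} * a_{2j} * a_{3k} (0-based rows)."""
    return sum(eps(i, j, k) * A[0][i] * A[1][j] * A[2][k]
               for i in range(3) for j in range(3) for k in range(3))

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
B = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 1.0]])

assert np.isclose(det3(A), np.linalg.det(A))
# det(AB) = det(A) * det(B):
assert np.isclose(np.linalg.det(A @ B), det3(A) * det3(B))
# swapping two rows multiplies the determinant by -1:
assert np.isclose(np.linalg.det(A[[1, 0, 2]]), -det3(A))
```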

3. Minors and algebraic complements.

The minor M_ij of an element a_ij of an n-th order matrix is the determinant of the matrix of order (n − 1) obtained from the matrix A by deleting the i-th row and the j-th column.

When writing out the determinant of order (n − 1), the crossed-out elements of the original determinant are not taken into account.
The algebraic complement A_ij of the element a_ij of an n-th order matrix is its minor taken with a sign depending on the row and column numbers: A_ij = (−1)^(i+j) · M_ij. That is, the algebraic complement coincides with the minor when the sum of the row and column numbers is even, and differs from the minor in sign when that sum is odd.

4. Substitution theorem.

The sum of the products of arbitrary numbers b1, b2, ..., bn and the algebraic complements of the elements of any column (row) of a matrix of order n equals the determinant of the matrix obtained from the given one by replacing the elements of that column (row) with the numbers b1, b2, ..., bn.

5. Cancellation theorem.

The sum of the products of the elements of one of the columns (rows) of the matrix and the corresponding algebraic complements of the elements of another column (row) is equal to zero.
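Both theorems can be verified numerically on an arbitrary 3×3 example; a sketch:

```python
import numpy as np

def cofactor(A, i, j):
    """Algebraic complement A_ij = (-1)^(i+j) times the minor M_ij."""
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

A = np.array([[2.0, 3.0, 4.0],
              [1.0, 1.0, 5.0],
              [3.0, -2.0, 1.0]])

# Substitution with b equal to row 0 itself reduces to row expansion: it gives det A.
expand = sum(A[0, j] * cofactor(A, 0, j) for j in range(3))
assert np.isclose(expand, np.linalg.det(A))

# Cancellation: elements of row 1 times the cofactors of row 0 sum to zero.
cancel = sum(A[1, j] * cofactor(A, 0, j) for j in range(3))
assert np.isclose(cancel, 0.0)
```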

6. Some methods for calculating determinants.

Theorem (Laplace). The determinant of a matrix of order n equals the sum of the products of all k-th order minors that can be composed from arbitrarily chosen k parallel rows (columns) and the algebraic complements of these minors.

Theorem (on the expansion of the determinant along a row or column). The determinant of a square matrix equals the sum of the products of the elements of any row (column) and the algebraic complements of these elements.
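The row-expansion theorem gives a recursive way to compute a determinant; a sketch expanding along the first row (illustrative, not efficient for large matrices):

```python
import numpy as np

def det_by_expansion(A):
    """Determinant via cofactor expansion along the first row."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(A[1:], j, axis=1)   # delete row 0 and column j
        total += (-1) ** j * A[0, j] * det_by_expansion(minor)
    return total

A = [[2, 3, 4], [1, 1, 5], [3, -2, 1]]
print(det_by_expansion(A))                    # 44.0
assert np.isclose(det_by_expansion(A), np.linalg.det(A))
```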

7. Matrix multiplication. multiplication properties.

The operation of multiplying two matrices is introduced only for the case when the number of columns of the first matrix equals the number of rows of the second.

The product of the matrix A(m×n) = (a_ij) and the matrix B(n×p) = (b_jk) is the matrix C(m×p) = (c_ik) such that c_ik = a_i1·b_1k + a_i2·b_2k + ... + a_in·b_nk,

where i = 1, ..., m and k = 1, ..., p; i.e. the element in the i-th row and k-th column of the product matrix C equals the sum of the products of the elements of the i-th row of matrix A and the corresponding elements of the k-th column of matrix B.

Matrices A of size n×m and B of size m×n are called conformable. (If A is conformable with B, this does not mean that B is conformable with A.)

The meaning of conformability is that the number of columns of the first matrix matches the number of rows of the second. The operation of multiplication is defined for conformable matrices.

If matrices A and B are square and of the same size, then A·B and B·A always exist. Transposition is the replacement of the rows of a matrix by the corresponding columns. If Aᵀ = A, the matrix A is called symmetric (it is necessarily square).
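The element-wise definition of the product can be written out directly; a pure-Python sketch on a toy example:

```python
# c_ik = sum over j of a_ij * b_jk; any conformable sizes work.
def matmul(A, B):
    m, n, p = len(A), len(B), len(B[0])
    assert len(A[0]) == n, "matrices are not conformable"
    return [[sum(A[i][j] * B[j][k] for j in range(n)) for k in range(p)]
            for i in range(m)]

A = [[1, 2, 3],
     [4, 5, 6]]        # 2 x 3
B = [[1, 0],
     [0, 1],
     [1, 1]]           # 3 x 2
print(matmul(A, B))    # [[4, 5], [10, 11]]
```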

Matrix transposition.

The transposed matrix is the matrix obtained from the original matrix by replacing rows with columns.
Formally, the transpose of an m×n matrix A is the n×m matrix Aᵀ defined by Aᵀ[i, j] = A[j, i].

Inverse matrix. A necessary and sufficient condition for the existence of an inverse matrix. Finding the inverse matrix.

Let the matrix A be non-degenerate. Then there exists an inverse matrix A⁻¹ such that A⁻¹·A = A·A⁻¹ = E, where E is the identity matrix. A⁻¹ has the same dimensions as A.

Algorithm for finding the inverse matrix:

1. In place of each element a_ij of the matrix, write down its algebraic complement. The resulting matrix A* is called the union (adjoint) matrix.

2. Transpose the resulting union matrix: A*ᵀ.

3. Divide each element of the transposed union matrix by the determinant of matrix A:

A⁻¹ = (1/ΔA) · A*ᵀ.
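The three steps can be sketched directly; an illustrative implementation (in practice `np.linalg.inv` does this job):

```python
import numpy as np

def inverse_via_adjugate(A):
    """Cofactors -> transpose -> divide by det A, per the algorithm above."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    det = np.linalg.det(A)
    if abs(det) < 1e-12:
        raise ValueError("matrix is degenerate, no inverse exists")
    cof = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return cof.T / det              # transposed cofactor matrix over det A

A = np.array([[2.0, 1.0], [1.0, 1.0]])
Ainv = inverse_via_adjugate(A)
print(Ainv)                          # rows [1 -1] and [-1 2]
assert np.allclose(Ainv @ A, np.eye(2))
```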

Theorem (on the annihilation of the determinant):
the sum of the products of the elements of some row (column) of the determinant and the algebraic complements of the elements of another parallel row (column) is always equal to zero.

10. Matrix notation of a system of linear equations and its solution.

Matrices make it possible to briefly write down a system of linear equations. Let a system of 3 equations with three unknowns be given:

Consider the matrix of the system and matrix columns of unknown and free members

Let's find the product

i.e. as a result of the product we obtain the left-hand sides of the equations of the system. Then, using the definition of matrix equality, this system can be written as

or, more briefly, AX = B.

Here the matrices A and B are known, and the matrix X is unknown; it has to be found, since its elements are the solution of the system. This equation is called a matrix equation.

Let the determinant of the matrix be different from zero, |A| ≠ 0. Then the matrix equation is solved as follows. Multiply both sides of the equation on the left by the matrix A⁻¹, the inverse of A: A⁻¹AX = A⁻¹B. Since A⁻¹A = E and EX = X, we obtain the solution of the matrix equation in the form X = A⁻¹B.

Note that since the inverse matrix can be found only for square matrices, the matrix method can solve only those systems in which the number of equations equals the number of unknowns. However, the matrix notation of the system is also possible when the number of equations is not equal to the number of unknowns; then the matrix A is not square, and a solution of the system in the form X = A⁻¹B cannot be found.
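A numerical sketch of X = A⁻¹B, using the 3×3 system that is solved by Cramer's method later in these notes:

```python
import numpy as np

# AX = B solved as X = A^{-1} B; expected solution (0, 1, 3).
A = np.array([[2.0, 3.0, 4.0],
              [1.0, 1.0, 5.0],
              [3.0, -2.0, 1.0]])
B = np.array([[15.0], [16.0], [1.0]])

X = np.linalg.inv(A) @ B
print(X.ravel())   # close to [0, 1, 3]
```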

11. Solution of non-degenerate linear systems, Cramer's formulas.

SLAE is usually written in matrix form, when the unknowns themselves are not indicated, but only the matrix of the system A and the column of free terms B are indicated.

Solution of a non-degenerate SLAE by Cramer's method:

A⁻¹ = (1/Δ) · A*ᵀ, so X = A⁻¹B gives

x1 = (1/Δ)(A11·b1 + A21·b2 + ... + An1·bn), and similarly for the other unknowns.

Theorem (Cramer):
the solution of a non-degenerate system AX = B can be written as

x_k = Δ_k/Δ, where Δ_k is obtained from Δ by replacing the k-th column with the column of free terms B.
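Cramer's formulas translate directly into code; a sketch (practical for small systems only):

```python
import numpy as np

def cramer_solve(A, b):
    """x_k = det(A_k) / det(A), where A_k is A with column k replaced by b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("the system is degenerate, Cramer's rule does not apply")
    x = np.empty(len(b))
    for k in range(len(b)):
        Ak = A.copy()
        Ak[:, k] = b                 # replace the k-th column by b
        x[k] = np.linalg.det(Ak) / d
    return x

print(cramer_solve([[2, 3, 4], [1, 1, 5], [3, -2, 1]], [15, 16, 1]))
```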

12. Matrix rank. Matrix rank properties. Calculating the rank of a matrix using elementary transformations.

The maximum number of linearly independent rows of a matrix A is called the rank of the matrix and is denoted r(A). Equivalently, the largest of the orders of the non-zero minors of a given matrix is called the rank of the matrix.

Properties:

1) the rank does not change under transposition;

2) if a zero row is crossed out, the rank does not change;

3) the rank does not change under elementary transformations;

4) to calculate the rank using elementary transformations, the matrix A is transformed into a matrix B whose rank is easy to find;

5) the rank of a triangular matrix equals the number of non-zero elements on the main diagonal.

Methods for finding the rank of a matrix:

1) the method of bordering minors

2) method of elementary transformations

Bordering minors method:

the method of bordering minors makes it possible to algorithmize the process of finding the rank of a matrix and to minimize the number of minors that have to be computed.

1) if the matrix has all zero elements, then rank = 0

2) if there is at least one non-zero element a_ij, then r(A) > 0;

we then border the first-order minor M1 = a_ij, i.e. construct all possible minors of the 2nd order that contain the i-th row and the j-th column, until a non-zero minor of the 2nd order is found, and so on.

The process continues until one of two events occurs:
1. the order of the minor reaches the number min(m, n);

2. at some stage, all bordering minors turn out to be 0.

In both cases, the rank of the matrix equals the order of the largest non-zero minor.

Method of elementary transformations:
As is known, the concept of a triangular matrix is defined only for square matrices. For rectangular matrices, the analogue is the concept of a trapezoidal matrix.

For example:
rank = 2.

The matrix A⁻¹ is called the inverse of the square matrix A if multiplying this matrix by A, both on the right and on the left, gives the identity matrix: A⁻¹·A = A·A⁻¹ = E.

It follows from the definition that the inverse matrix is ​​a square matrix of the same order as matrix A.

It can be noted that the concept of an inverse matrix is similar to the concept of a reciprocal number (a number that, when multiplied by the given number, gives one: a·a⁻¹ = a·(1/a) = 1).

All numbers except zero have reciprocals.

To decide whether a square matrix has an inverse, it is necessary to find its determinant. If the determinant of a matrix is ​​zero, then such a matrix is ​​called degenerate, or special.

Necessary and sufficient condition for the existence of an inverse matrix: the inverse matrix exists and is unique if and only if the original matrix is ​​nonsingular.

Let us prove the necessity. Let matrix A have an inverse A⁻¹, i.e. A⁻¹·A = E. Then |A⁻¹·A| = |A⁻¹|·|A| = |E| = 1. Therefore |A| ≠ 0.

Let us prove sufficiency. To prove it, we just need to describe a way to calculate the inverse matrix, which we can always apply to a non-singular matrix.

So let |A| 0. We transpose the matrix A. For each element A T we find an algebraic complement and form a matrix from them, which is called attached(mutual, allied):
.

Find the product of the attached matrix and the original one. The resulting matrix B is diagonal: on its main diagonal stand the determinants of the original matrix, and all other elements are zeros.

Similarly, one can show that
.

If we divide all the elements of the matrix by |A|, then the identity matrix E will be obtained.

Thus A · ((1/|A|)·Ã) = E, i.e. A⁻¹ = (1/|A|)·Ã, where Ã is the attached matrix.

Let us prove the uniqueness of the inverse matrix. Assume that there is another inverse matrix for A besides A⁻¹; denote it by X. Then A·X = E. Multiply both sides of the equality on the left by A⁻¹:

A⁻¹·A·X = A⁻¹·E,

whence E·X = A⁻¹, i.e. X = A⁻¹.

The uniqueness is proved.

So, the algorithm for calculating the inverse matrix consists of the following steps:

1. Find the determinant of the matrix |A|. If |A| = 0, then the matrix A is degenerate and the inverse matrix cannot be found. If |A| ≠ 0, go to the next step.

2. Construct the transposed matrix A T.

3. Find the algebraic complements of the elements of the transposed matrix and construct the associated matrix .

4. Calculate the inverse matrix by dividing the associated matrix by |A|.

5. You can check the correctness of the calculation of the inverse matrix in accordance with the definition: A -1 * A = A * A -1 = E.


The following matrix inversion properties can be proved:

1) |A⁻¹| = 1/|A|

2) (A⁻¹)⁻¹ = A

3) (Aᵐ)⁻¹ = (A⁻¹)ᵐ

4) (AB)⁻¹ = B⁻¹·A⁻¹

5) (A⁻¹)ᵀ = (Aᵀ)⁻¹
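These properties can be spot-checked on arbitrary nonsingular example matrices; a sketch:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])
B = np.array([[1.0, 2.0], [0.0, 1.0]])
Ainv = np.linalg.inv(A)

assert np.isclose(np.linalg.det(Ainv), 1 / np.linalg.det(A))      # |A^-1| = 1/|A|
assert np.allclose(np.linalg.inv(Ainv), A)                        # (A^-1)^-1 = A
m = 3
assert np.allclose(np.linalg.inv(np.linalg.matrix_power(A, m)),
                   np.linalg.matrix_power(Ainv, m))               # (A^m)^-1 = (A^-1)^m
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ Ainv) # (AB)^-1 = B^-1 A^-1
assert np.allclose(np.linalg.inv(A.T), Ainv.T)                    # (A^T)^-1 = (A^-1)^T
```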

Matrix rank

A minor of the k-th order of a matrix A of size m×n is the determinant of a square matrix of order k obtained from the matrix A by deleting any (m − k) rows and (n − k) columns.

It follows from the definition that the order of a minor does not exceed the smaller of the matrix dimensions, i.e. k ≤ min(m; n). For example, from a 5×3 matrix A one can obtain square submatrices of the first, second and third orders (and, accordingly, compute minors of those orders).

The rank of a matrix is the highest order of the non-zero minors of this matrix (denoted rang A or r(A)).

It follows from the definition that

1) the rank of the matrix does not exceed the smaller of its dimensions, i.e. r(A) ≤ min(m; n);

2) r(A) = 0 if and only if the matrix is zero (all elements of the matrix are equal to zero), i.e. r(A) = 0 ⇔ A = O;

3) for a square matrix of the n-th order, r(A) = n if and only if the matrix A is non-degenerate, i.e. r(A) = n ⇔ |A| ≠ 0.

In fact, for this it is enough to compute only one such minor: the one obtained by deleting the third column (the remaining third-order minors contain a zero third column and are therefore equal to zero).

By the triangle rule it equals 1·2·(−3) + 3·1·2 + 3·(−1)·4 − 4·2·2 − 1·(−1)·1 − 3·3·(−3) = −6 + 6 − 12 − 16 + 1 + 27 = 0.

Since all third-order minors are zero, r(A) ≤ 2. Since there is a non-zero second-order minor, r(A) = 2.
Obviously, the methods we used (consideration of all possible minors) are not suitable for determining the rank in more complex cases due to the high complexity. Usually, to find the rank of a matrix, some transformations are used, which are called elementary:

1). Dropping zero rows (columns).

2). Multiplying all elements of a row or column of a matrix by a number other than zero.

3). Changing the order of rows (columns) of a matrix.

4). Adding to each element of one row (column) the corresponding elements of another row (column), multiplied by any number.

5). Transposition.

If matrix A is obtained from matrix B by elementary transformations, then these matrices are called equivalent, which is denoted A ~ B.

Theorem. Elementary transformations of a matrix do not change its rank.

The proof of the theorem follows from the properties of the matrix determinant. Indeed, under these transformations, the determinants of square matrices are either preserved or multiplied by a non-zero number. As a result, the highest order of non-zero minors of the original matrix remains the same, i.e. her rank does not change.

With the help of elementary transformations, the matrix is ​​brought to the so-called step form (transformed into step matrix), i.e. they achieve that in the equivalent matrix under the main diagonal there are only zero elements, and on the main diagonal - non-zero:

The rank of the step matrix is ​​r, since by deleting columns from it, starting from (r + 1) and further, you can get a triangular matrix of the rth order, the determinant of which will be different from zero, since it will be a product of nonzero elements (hence , there is an rth order minor that is not equal to zero):

Example. Find the rank of a matrix

1). If a11 = 0 (as in our case), then by rearranging rows or columns we achieve a11 ≠ 0. Here we swap the 1st and 2nd rows of the matrix:

2). Now a11 ≠ 0. By elementary transformations we make all the other elements of the first column equal to zero. In the second row a21 = 0. In the third row a31 = −4. To replace (−4) with 0, add to the third row the first row multiplied by 2 (i.e. by (−a31/a11) = −(−4)/2 = 2). Similarly, add the first row to the fourth row (multiplied by one, i.e. by (−a41/a11) = −(−2)/2 = 1).

3). In the resulting matrix a22 ≠ 0 (if a22 = 0, we could rearrange the rows again). We make the entries below the diagonal in the second column zero as well. To do this, add to the 3rd and 4th rows the second row multiplied by −3 ((−a32/a22) = (−a42/a22) = −(−3)/(−1) = −3):

4). In the resulting matrix, the last two rows are zero, and they can be discarded:

A step matrix consisting of two rows is obtained. Therefore, r(A) = 2.
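The rank found by reduction to step form can be cross-checked numerically; a sketch on an illustrative matrix (not the one from the example above):

```python
import numpy as np

# The step form of this matrix has two nonzero rows, so its rank is 2.
# numpy computes the rank via SVD, which agrees on exact examples like this one.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],    # = 2 x row 1, linearly dependent
              [0.0, 1.0, 1.0]])
print(np.linalg.matrix_rank(A))   # 2
```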

For every number a ≠ 0 there is an inverse a⁻¹ such that the product a·a⁻¹ = 1. A similar notion is introduced for square matrices.

Definition. If square matrices X and A of the same order satisfy the condition

A·X = X·A = E,

where E is the identity matrix of the same order as the matrix A, then the matrix X is called the inverse of the matrix A and is denoted A⁻¹.

It follows from the definition that only a square matrix has an inverse; in this case, the inverse matrix is ​​also square of the same order.

However, not every square matrix has an inverse. If the condition a ≠ 0 is necessary and sufficient for the existence of the number a⁻¹, then for the existence of the matrix A⁻¹ such a condition is the requirement ΔA ≠ 0.

Definition. A square matrix of the n-th order is called non-degenerate (non-singular) if its determinant ΔA ≠ 0.

If ΔA = 0, then the matrix A is called degenerate (singular).

Theorem (a necessary and sufficient condition for the existence of an inverse matrix). If a square matrix is non-singular (that is, its determinant is not equal to zero), then there exists a unique inverse matrix for it.

Proof.

I. Necessity. Let the matrix A have an inverse A⁻¹, i.e. A·A⁻¹ = A⁻¹·A = E. By property 3 of determinants (§11) we have Δ(A·A⁻¹) = Δ(A⁻¹)·Δ(A) = Δ(E) = 1, i.e. ΔA ≠ 0 and ΔA⁻¹ ≠ 0.

II. Sufficiency. Let the square matrix A be non-singular, i.e. ΔA ≠ 0. Write out the transposed matrix Aᵀ:

In this matrix, replace each element by its algebraic complement; we obtain the matrix A*, which is called the attached (adjoint) matrix of the matrix A.

Find the product A·A* (and A*·A):

the diagonal elements of A·A* are equal to ΔA (formula 11.1, §11),

and all the off-diagonal elements of A·A* are equal to zero by property 10, §11.

Hence A·A* = ΔA·E.

Similarly, it is proved that A*·A = ΔA·E.

Dividing both obtained equalities by ΔA, we get A·(A*/ΔA) = (A*/ΔA)·A = E. Hence, by the definition of an inverse matrix, it follows that the inverse matrix exists: A⁻¹ = (1/ΔA)·A*,

because A·A⁻¹ = A⁻¹·A = E.

The existence of the inverse matrix is proved. Let us prove uniqueness. Suppose that for the matrix A there is another inverse matrix F; then A·F = E and F·A = E. Multiplying both parts of the first equality by A⁻¹ on the left, and of the second by A⁻¹ on the right, we get A⁻¹·A·F = A⁻¹·E and F·A·A⁻¹ = E·A⁻¹, whence E·F = A⁻¹·E and F·E = E·A⁻¹. Therefore F = A⁻¹. The uniqueness is proved.

Example. Given a matrix A = , find A -1 .

Algorithm for calculating the inverse matrix:

Properties of inverse matrices.

1) (A⁻¹)⁻¹ = A;

2) (AB)⁻¹ = B⁻¹·A⁻¹;

3) (Aᵀ)⁻¹ = (A⁻¹)ᵀ.


Consider the matrices

Moreover, the elements of the matrices A and B are given, and X 1, X 2, X 3 are unknown.

Then the equation A × X = B is called the simplest matrix equation.

To solve it, i.e. find the elements of the matrix of unknowns X, proceed as follows:

1. Multiply both sides of the equation by matrix A -1, inverse for matrix A , left:

A⁻¹(A × X) = A⁻¹ × B

2. Using the property of matrix multiplication, we write

(A -1 × A) X = A -1 × B

3. From the definition of the inverse matrix

(A -1 × A = E) we have E × X = A -1 × B.

4. Using the property of the identity matrix (E × X = X), we finally get X = A -1 × B

Comment. If the matrix equation has the form X × C = D, then to find the unknown matrix X the equation must be multiplied by C⁻¹ on the right.

Example. Solve matrix equation

Solution. Let us introduce the notation

From the definition of matrix multiplication, taking into account the dimensions of A and B, the matrix of unknowns X will have the form

Taking into account the introduced notation, we have

A × X = B whence X = A -1 × B

Let's find A -1 by the algorithm for constructing the inverse matrix

Compute the product

Then for X we get

X = , whence x1 = 3, x2 = 2.

Matrix rank

Consider a matrix A of size (m x n)

The k-th order minor of a matrix A is the determinant of order k whose elements are the elements of the matrix A standing at the intersection of any k rows and any k columns. Obviously, k ≤ min(m, n).

Definition. The rank r(A) of a matrix A is the largest order of the non-zero minor of this matrix.

Definition. Any non-zero minor of a matrix whose order is equal to its rank is called basic minor.

Definition. Matrices having the same rank are called equivalent.

Calculating the rank of a matrix

Definition. A matrix is called a step matrix if under the first non-zero element of each of its rows there are zeros in the rows below.

Theorem. The rank of a step matrix is ​​equal to the number of its nonzero rows.

Thus, by transforming the matrix to a stepped form, it is easy to determine its rank. This operation is carried out using elementary matrix transformations, which do not change its rank:

— multiplication of all elements of a matrix row by a number λ ≠ 0;

- replacing rows with columns and vice versa;

- permutation of parallel rows;

- deletion of the zero row;

— addition to the elements of a certain row (column) of the corresponding elements of a parallel row (column), multiplied by any real number.

Example.


Calculate the rank of a matrix

A =

Solution. Let us transform the matrix to a stepped form. To do this, add the second line multiplied by (-3) to the third line.

A ~

Let's add the third line to the fourth line.

The number of non-zero rows in the resulting equivalent matrix is ​​three, hence r(A) = 3.

Systems of n linear equations with n unknowns.

Methods for their solution

Consider a system of n linear equations with n unknowns.

a11·x1 + a12·x2 + ... + a1n·xn = b1

a21·x1 + a22·x2 + ... + a2n·xn = b2 (1)

………………………………

an1·x1 + an2·x2 + ... + ann·xn = bn

Definition: A solution of system (1) is a set of numbers (x1, x2, ..., xn) that turns each equation of the system into a true equality.

The matrix A, composed of the coefficients of the unknowns, is called the main matrix of the system (1).

A=

The matrix B, consisting of the elements of the matrix A and the column of free terms of system (1), is called the extended (augmented) matrix.

B =

Matrix method

Consider the matrices

X = - matrix of unknowns;

C = is the matrix of free terms of system (1).

Then, according to the rule of matrix multiplication, system (1) can be represented as a matrix equation

A × X = C (2)

The solution of equation (2) is stated above, i.e. X = A -1 × C, where A -1 is the inverse matrix for the main matrix of system (1).

Cramer method

A system of n linear equations with n unknowns whose main determinant is different from zero always has a solution, and moreover a unique one, found by the formulas xi = Δxi/Δ,

where Δ = det A is the determinant of the main matrix A of system (1), called the main determinant, and Δxi is obtained from the determinant Δ by replacing the i-th column with the column of free terms, i.e.

Δx1 = ;

Δx2 = ; … ;

Example.

Solve the system of equations by Cramer's method

2x 1 + 3x 2 + 4x 3 = 15

x 1 + x 2 + 5x 3 = 16

3x 1 - 2x 2 + x 3 = 1

Solution.

Let us calculate the determinant of the main matrix of the system

Δ = det A = = 44 ≠ 0

Calculate auxiliary determinants

Δx 3 = = 132.

Using Cramer's formulas, we find the unknowns

; ; .

Thus, x 1 = 0; x 2 = 1; x 3 = 3.
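Cramer's rule from this example can be checked in code; the sketch below recomputes Δ and the auxiliary determinants for the same system:

```python
from fractions import Fraction

def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def cramer3(A, b):
    """x_i = det(A with i-th column replaced by b) / det(A)."""
    D = det3(A)
    xs = []
    for j in range(3):
        Aj = [row[:] for row in A]
        for i in range(3):
            Aj[i][j] = b[i]          # replace the j-th column with free terms
        xs.append(Fraction(det3(Aj), D))
    return xs

A = [[2, 3, 4], [1, 1, 5], [3, -2, 1]]
b = [15, 16, 1]
print(det3(A))        # → 44
print(cramer3(A, b))  # → [Fraction(0, 1), Fraction(1, 1), Fraction(3, 1)]
```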

Gauss method

The essence of the Gauss method is the successive elimination of unknowns from the equations of the system, i.e. bringing the main matrix of the system to a triangular form, with zeros below its main diagonal. This is achieved by elementary row transformations of the matrix. Such transformations do not violate the equivalence of the system, and it likewise acquires a triangular form: the last equation contains one unknown, the penultimate one two, and so on. Expressing the n-th unknown from the last equation and performing the reverse move, a series of successive substitutions yields the values of all the unknowns.

Example. Solve a system of equations using the Gauss method

3x 1 + 2x 2 + x 3 = 17

2x 1 - x 2 + 2x 3 = 8

x 1 + 4x 2 - 3x 3 = 9

Solution. Let us write out the extended matrix of the system and reduce the matrix A contained in it to a triangular form.

Let's swap the first and third rows of the matrix, which is equivalent to permuting the first and third equations of the system. This will allow us to avoid the appearance of fractional expressions in subsequent calculations.

B ~

We multiply the first row of the resulting matrix sequentially by (-2) and (-3) and add it to the second and third rows, respectively, while B will look like:

After multiplying the second row by and adding it to the third row, matrix A will take on a triangular form. However, to simplify the calculations, you can do the following: multiply the third row by (-1) and add it to the second. Then we get:

B ~

B ~

Restore from the resulting matrix B a system of equations equivalent to the given

x 1 + 4x 2 - 3x 3 = 9

x 2 - 2x 3 = 0

- 10x 3 = -10

From the last equation we find x 3 = 1. We substitute the found value x 3 = 1 into the second equation of the system, from which x 2 = 2x 3 = 2 × 1 = 2.

After substituting x 3 = 1 and x 2 = 2 into the first equation, for x 1 we get x 1 = 9 - 4x 2 + 3x 3 = 9 - 4 × 2 + 3 × 1 = 4.

So, x 1 = 4, x 2 = 2, x 3 = 1.
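The forward elimination and reverse move can be sketched as follows; the snippet solves the same system and should reproduce x 1 = 4, x 2 = 2, x 3 = 1:

```python
from fractions import Fraction

def gauss_solve(A, b):
    """Forward elimination to triangular form, then back substitution (reverse move)."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(b[i])] for i, row in enumerate(A)]
    for k in range(n):
        # pivot: swap in a row with a nonzero leading entry
        piv = next(r for r in range(k, n) if M[r][k] != 0)
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            M[r] = [a - f * c for a, c in zip(M[r], M[k])]
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):   # reverse move, bottom to top
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

A = [[3, 2, 1], [2, -1, 2], [1, 4, -3]]
b = [17, 8, 9]
print(gauss_solve(A, b))  # → [Fraction(4, 1), Fraction(2, 1), Fraction(1, 1)]
```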

Comment. To check the correctness of the solution of a system of equations, it is necessary to substitute the found values ​​of the unknowns into each of the equations of this system. Moreover, if all equations turn into identities, then the system is solved correctly.

Examination:

3 × 4 + 2 × 2 + 1 = 17 true

2 × 4 - 2 + 2 × 1 = 8 true

4 + 4 × 2 - 3 × 1 = 9 true

So the system is solved correctly.


The simplest matrix equations

where the matrices are of such sizes that all the operations used are possible, and the left- and right-hand sides of these matrix equations are matrices of the same size.

Equations (1)-(3) can be solved with the help of inverse matrices provided the matrices that multiply X are non-singular. In the general case, the matrix X is written out element by element and the operations indicated in the equation are performed on the matrices. The result is a system of linear equations; having solved the system, one finds the elements of the matrix X.

Inverse matrix method

This method solves a system of linear equations in the case of a square non-singular system matrix A. The solution is found from the matrix equation AX = B.

A -1 (AX) = A -1 B, (A -1 A)X = A -1 B, EX = A -1 B, X = A -1 B.

Cramer's formulas

Theorem. Let Δ be the determinant of the system matrix A, and Δ j the determinant of the matrix obtained from A by replacing the j-th column with the column of free terms. Then, if Δ ≠ 0, the system has a unique solution, determined by the formulas:

are Cramer's formulas.

DZ 1. 2.23, 2.27, 2.51, 2.55, 2.62; DZ 2.2.19, 2.26, 2.40,2.65

Topic 4. Complex numbers and polynomials

Complex numbers and operations on them

Definitions.

1. A symbol of the form a + bi , where a and b are arbitrary real numbers, we will agree to call a complex number.

2. We will agree to consider the complex numbers a + bi and a 1 + b 1 i equal if a = a 1 and b = b 1.

3. We will agree to consider a complex number of the form a + 0i equal to a real number a.

4. The sum of two complex numbers a + bi and a 1 + b 1 i is the complex number (a + a 1) + (b + b 1)i.

5. The product of two complex numbers a + bi and a 1 + b 1 i is the complex number aa 1 - bb 1 + (ab 1 + a 1 b)i.

A complex number of the form 0 + bi is called purely imaginary and is usually written bi; the number 0 + 1i = i is called the imaginary unit.

By Definition 3, every real number a corresponds to an "equal" complex number a + 0i, and conversely, every complex number a + 0i corresponds to an "equal" real number a; that is, there is a one-to-one correspondence between these numbers. Computing the sum and product of the complex numbers a 1 + 0i and a 2 + 0i according to rules 4 and 5, we get:

(a 1 + 0i) + (a 2 + 0i) = (a 1 + a 2) + 0i,

(a 1 + 0i)(a 2 + 0i) = (a 1 a 2 - 0) + (a 1 ·0 + a 2 ·0)i = a 1 a 2 + 0i.

We see that the sum (or product) of these complex numbers corresponds to the real number "equal" to the sum (or product) of the corresponding real numbers. So the correspondence between complex numbers of the form a + 0i and real numbers a is such that performing arithmetic operations on corresponding components yields corresponding results. A one-to-one correspondence that is preserved under operations is called an isomorphism. This allows us to identify the number a + 0i with the real number a and to regard any real number as a special case of a complex number.

Consequence. The square of the number i equals -1.

i 2 = i·i = (0 + 1i)(0 + 1i) = (0 - 1) + (0·1 + 1·0)i = -1.
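In Python, the built-in complex type plays the role described here (the literal `j` is the imaginary unit), so the consequence i² = -1 and the sum and product rules can be checked directly:

```python
# Python's built-in complex type; j plays the role of the imaginary unit i
i = complex(0, 1)
print(i * i)                # → (-1+0j)

# sum rule: (a + a1) + (b + b1)i
print((2 + 3j) + (1 - 1j))  # → (3+2j)

# product rule: (aa1 - bb1) + (ab1 + a1b)i
print((2 + 3j) * (1 - 1j))  # → (5+1j)
```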

Theorem. For addition and multiplication of complex numbers, the basic laws of operations remain valid.

Definitions:

1. The real number a is called the real part of the complex number z = a + bi: Re z = a.

2. The number bi is called the imaginary part of the complex number z, and b the coefficient of the imaginary part: Im z = b.

3. The numbers a + bi and a - bi are called conjugate.

The conjugate of the number z = a + bi is denoted by the symbol

= a - bi.

Example. z = 3 + i, = 3 - i.

Theorem.The sum and product of two conjugate complex numbers are real.

Proof. We have

In the set of complex numbers, the operations inverse to addition and multiplication are feasible.

Subtraction. Let z 1 = a 1 + b 1 i and z 2 = a 2 + b 2 i be complex numbers. The difference z 1 - z 2 is the number z = x + yi satisfying the condition z 1 = z 2 + z, or

a 1 + b 1 i = (a 2 + x) + (b 2 + y)i.

To determine x and y we get the system of equations a 2 + x = a 1 and b 2 + y = b 1, which has a unique solution:

x = a 1 - a 2, y = b 1 - b 2,

z = (a 1 + b 1 i) - (a 2 + b 2 i) = a 1 - a 2 + (b 1 - b 2)i.

Subtraction can be replaced by addition with the number opposite to the subtrahend:

z = (a 1 + b 1 i) - (a 2 + b 2 i) = (a 1 + b 1 i) + (-a 2 - b 2 i).

Division.

The quotient of the numbers z 1 and z 2 ≠ 0 is the number z = x + yi satisfying the condition z 1 = z 2 z, or

a 1 + b 1 i = (a 2 + b 2 i)(x + yi),

hence,

a 1 + b 1 i = a 2 x - b 2 y + (b 2 x + a 2 y)i,

whence we obtain the system of equations:

a 2 x - b 2 y = a 1,

b 2 x + a 2 y = b 1.

The solution of this system is

hence,

In practice, to find the quotient, multiply the dividend and the divisor by the conjugate of the divisor:

For example,

In particular, the reciprocal of a given number z can be represented as
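The practical rule of multiplying dividend and divisor by the conjugate of the divisor can be sketched in code; the helper below is illustrative and is checked against Python's built-in complex division:

```python
def cdiv(z1, z2):
    """Divide by multiplying numerator and denominator by the conjugate of z2."""
    num = z1 * z2.conjugate()
    den = (z2 * z2.conjugate()).real   # a2^2 + b2^2, a real number
    return complex(num.real / den, num.imag / den)

z1, z2 = complex(3, 2), complex(1, -1)
print(cdiv(z1, z2))          # → (0.5+2.5j)
print(cdiv(z1, z2) == z1 / z2)  # built-in division agrees → True
```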

Note. In the set of complex numbers the following theorem remains valid: if a product equals zero, then at least one of the factors equals zero.

Indeed, if z 1 z 2 =0 and if z 1 ≠ 0, then multiplying by , we get

Q.E.D.

When performing arithmetic operations on complex numbers, one should be guided by the following general rule: the operations are performed according to the usual rules for algebraic expressions, followed by replacing i 2 with -1.

Theorem. If each operand is replaced by its conjugate, the result of the operation is likewise replaced by its conjugate.

The proof consists in direct verification. For example, if each of the terms z 1 = a 1 + b 1 i and z 2 = a 2 + b 2 i is replaced by its conjugate, we get the number conjugate to the sum z 1 + z 2.

therefore,

Similarly, for the product we have:
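This theorem on conjugates can be spot-checked numerically; the loop below verifies that conjugation commutes with addition and multiplication on random integer examples:

```python
from random import randint

# conjugate(z1 + z2) == conjugate(z1) + conjugate(z2), and the same for the product
for _ in range(100):
    z1 = complex(randint(-9, 9), randint(-9, 9))
    z2 = complex(randint(-9, 9), randint(-9, 9))
    assert (z1 + z2).conjugate() == z1.conjugate() + z2.conjugate()
    assert (z1 * z2).conjugate() == z1.conjugate() * z2.conjugate()
print("conjugation commutes with + and *")
```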


Matrix equations

Catalin David

AX = B, where matrix A is invertible

Since matrix multiplication is not always commutative, we multiply both sides of the equation on the left by $A^{-1}$.

$A^{-1}\cdot|\quad A\cdot X = B$

$A^{-1}\cdot A\cdot X = A^{-1}\cdot B$

$I_{n}\cdot X = A^{-1}\cdot B$

$\color{red}{X = A^{-1}\cdot B}$

Example 50
Solve the equation
$\begin{pmatrix} 1 & 3\\ 2 & 5 \end{pmatrix}\cdot X = \begin{pmatrix} 3 & 5\\ 2 & 1 \end{pmatrix}$



Multiply on the left by its inverse matrix.
$\begin{pmatrix} 1 & 3\\ 2 & 5 \end{pmatrix}^{-1}\cdot \begin{pmatrix} 1 & 3\\ 2 & 5 \end{pmatrix}\cdot X= \begin{pmatrix} 1 & 3\\ 2 & 5 \end{pmatrix}^{-1}\cdot \begin{pmatrix} 3 & 5\\ 2 & 1 \end{pmatrix}$

$I_{2}\cdot X = \begin{pmatrix} 1 & 3\\ 2 & 5 \end{pmatrix}^{-1}\cdot \begin{pmatrix} 3 & 5\\ 2 & 1 \end{pmatrix}$

$X=\begin{pmatrix} 1 & 3\\ 2 & 5 \end{pmatrix}^{-1}\cdot \begin{pmatrix} 3 & 5\\ 2 & 1 \end{pmatrix}$

$\begin{pmatrix} 1 & 3\\ 2 & 5 \end{pmatrix}^{-1}= \begin{pmatrix} -5 & 3\\ 2 & -1 \end{pmatrix}\rightarrow X= \begin{pmatrix} -5 & 3\\ 2 & -1 \end{pmatrix}\cdot \begin{pmatrix} 3 & 5\\ 2 & 1 \end{pmatrix}= \begin{pmatrix} -9 & -22\\ 4 & 9 \end{pmatrix}$
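The result of Example 50 can be verified with a small 2×2 multiplication helper:

```python
def matmul2(A, B):
    """Multiply two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A_inv = [[-5, 3], [2, -1]]
B = [[3, 5], [2, 1]]
X = matmul2(A_inv, B)
print(X)  # → [[-9, -22], [4, 9]]

# check: multiplying A by X should give back B
A = [[1, 3], [2, 5]]
print(matmul2(A, X))  # → [[3, 5], [2, 1]]
```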

XA = B, where matrix A is invertible

Since matrix multiplication is not always commutative, we multiply both sides of the equation on the right by $A^{-1}$.

$X\cdot A = B \;|\cdot A^{-1}$

$X\cdot A\cdot A^{-1} = B\cdot A^{-1}$

$X \cdot I_{n} = B\cdot A^{-1}$

The solution of the equation has the general form
$\color{red}{X = B\cdot A^{-1}}$

Example 51
Solve the equation
$X \begin{pmatrix} 1 & 3\\ 2 & 5 \end{pmatrix}= \begin{pmatrix} 3 & 5\\ 2 & 1 \end{pmatrix}$

Let's make sure that the first matrix is invertible.
$\left|A\right|=5-6=-1\neq 0$, hence the matrix is invertible.

Multiply on the right by its inverse matrix.
$X \begin{pmatrix} 1 & 3\\ 2 & 5 \end{pmatrix}\cdot \begin{pmatrix} 1 & 3\\ 2 & 5 \end{pmatrix}^{-1}= \begin{pmatrix} 3 & 5\\ 2 & 1 \end{pmatrix}\cdot \begin{pmatrix} 1 & 3\\ 2 & 5 \end{pmatrix}^{-1}$

$X\cdot I_{2}= \begin{pmatrix} 3 & 5\\ 2 & 1 \end{pmatrix}\cdot \begin{pmatrix} 1 & 3\\ 2 & 5 \end{pmatrix}^{-1}$

$X=\begin{pmatrix} 3 & 5\\ 2 & 1 \end{pmatrix}\cdot \begin{pmatrix} 1 & 3\\ 2 & 5 \end{pmatrix}^{-1}$

$\begin{pmatrix} 1 & 3\\ 2 & 5 \end{pmatrix}^{-1}= \begin{pmatrix} -5 & 3\\ 2 & -1 \end{pmatrix}\rightarrow X= \begin{pmatrix} 3 & 5\\ 2 & 1 \end{pmatrix} \cdot \begin{pmatrix} -5 & 3\\ 2 & -1 \end{pmatrix}= \begin{pmatrix} -5 & 4\\ -8 & 5 \end{pmatrix}$
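The result of Example 51 can be verified the same way; note that here the inverse multiplies on the right, and the order matters:

```python
def matmul2(A, B):
    """Multiply two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B = [[3, 5], [2, 1]]
A_inv = [[-5, 3], [2, -1]]
X = matmul2(B, A_inv)      # right-multiply: X = B * A^{-1}
print(X)                   # → [[-5, 4], [-8, 5]]
print(matmul2(A_inv, B))   # left-multiply gives a different matrix: [[-9, -22], [4, 9]]
```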


Inverse matrix

Lemma: For any matrix A, its product with the identity matrix of the appropriate size equals A: AE = EA = A.

A matrix B is called the inverse of the matrix A if AB = BA = E. The inverse of the matrix A is denoted A -1.

The inverse matrix only exists for a square matrix.

Theorem: a square matrix A has an inverse if and only if its determinant is nonzero (|A| ≠ 0).

Algorithm for finding the inverse matrix A -1:

(for matrices of the second and third orders)


“If you want to learn how to swim, then boldly enter the water; and if you want to learn to solve problems, then solve them.”
G. Pólya (1887-1985)

(A mathematician who made a great contribution to the popularization of mathematics and wrote several books on how to solve problems and how to teach problem solving.)

Let there be a square matrix of the nth order

The matrix A -1 is called the inverse of the matrix A if A · A -1 = E, where E is the identity matrix of the n-th order.

The identity matrix is a square matrix in which all elements on the main diagonal (from the upper left corner to the lower right) are ones and the rest are zeros, for example:

An inverse matrix can exist only for square matrices, i.e. for matrices with the same number of rows and columns.

Inverse Matrix Existence Condition Theorem

For a matrix to have an inverse matrix, it is necessary and sufficient that it be nondegenerate.

The matrix A = (A 1, A 2, ..., A n) is called non-degenerate if its column vectors are linearly independent. The number of linearly independent column vectors of a matrix is called the rank of the matrix. Therefore, we can say that for an inverse matrix to exist it is necessary and sufficient that the rank of the matrix equal its dimension, i.e. r = n.

Algorithm for finding the inverse matrix

  1. Write the matrix A in the table for solving systems of equations by the Gauss method and on the right (in place of the right parts of the equations) assign matrix E to it.
  2. Using Jordan transformations, bring matrix A to a matrix consisting of single columns; in this case, it is necessary to simultaneously transform the matrix E.
  3. If necessary, rearrange the rows (equations) of the last table so that the identity matrix E is obtained under the matrix A of the original table.
  4. Write the inverse matrix A -1, which is in the last table under the matrix E of the original table.
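Steps 1-4 can be sketched as a Gauss-Jordan routine over the augmented block [A | E]; since the matrix of Example 1 is not reproduced here, the check below uses the 2×2 matrix from the matrix-equation examples above:

```python
from fractions import Fraction

def inverse(A):
    """Gauss-Jordan: reduce [A | E] until the left half is E; the right half is A^-1."""
    n = len(A)
    M = [[Fraction(x) for x in row] +
         [Fraction(int(i == j)) for j in range(n)] for i, row in enumerate(A)]
    for k in range(n):
        piv = next(r for r in range(k, n) if M[r][k] != 0)  # nonzero pivot
        M[k], M[piv] = M[piv], M[k]
        M[k] = [x / M[k][k] for x in M[k]]                  # scale pivot row to 1
        for r in range(n):
            if r != k and M[r][k] != 0:                     # clear the column
                M[r] = [a - M[r][k] * b for a, b in zip(M[r], M[k])]
    return [row[n:] for row in M]

A = [[1, 3], [2, 5]]
print(inverse(A))  # → [[Fraction(-5, 1), Fraction(3, 1)], [Fraction(2, 1), Fraction(-1, 1)]]
```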
Example 1

For matrix A, find the inverse matrix A -1

Solution: We write down the matrix A and on the right we assign the identity matrix E. Using the Jordan transformations, we reduce the matrix A to the identity matrix E. The calculations are shown in Table 31.1.

Let's check the correctness of the calculations by multiplying the original matrix A and the inverse matrix A -1.

As a result of matrix multiplication, the identity matrix is ​​obtained. Therefore, the calculations are correct.

Answer:

Solution of matrix equations

Matrix equations can look like:

AX = B, XA = B, AXB = C,

where A, B, C are given matrices, X is the desired matrix.

Matrix equations are solved by multiplying the equation by inverse matrices.

For example, to find the matrix from an equation, you need to multiply this equation by on the left.

Therefore, to find a solution to the equation, you need to find the inverse matrix and multiply it by the matrix on the right side of the equation.

Other equations are solved similarly.

Example 2

Solve the equation AX = B if

Solution: Since the inverse of the matrix equals (see example 1)

Matrix method in economic analysis

Along with other methods, matrix methods also find application in economic analysis. These methods are based on linear and vector-matrix algebra and are used to analyze complex and multidimensional economic phenomena. Most often they are used when it is necessary to compare the performance of organizations and their structural divisions.

In the process of applying matrix methods of analysis, several stages can be distinguished.

At the first stage, a system of economic indicators is formed and a matrix of initial data is compiled on its basis: a table whose rows contain the numbers of the systems under comparison (i = 1, 2, ..., n) and whose columns contain the numbers of the indicators (j = 1, 2, ..., m).

At the second stage, for each column the largest of the available indicator values is identified and taken as a unit.

After that, all values in the column are divided by this largest value, forming a matrix of standardized coefficients.

At the third stage, all components of the matrix are squared. If the indicators differ in significance, each indicator of the matrix is assigned a weight coefficient k, whose value is determined by an expert.

At the fourth and last stage, the found rating values R j are sorted in increasing or decreasing order.

The matrix methods described above should be used, for example, in the comparative analysis of various investment projects, as well as in evaluating other economic performance indicators of organizations.
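The four stages can be sketched with made-up data; the data values and the equal weights below are assumptions for illustration (all indicators are treated as "bigger is better"):

```python
# Hypothetical data: rows = organizations, columns = indicators
data = [[2.0, 30.0], [4.0, 20.0], [3.0, 40.0]]
weights = [1.0, 1.0]                      # expert weights k_j, assumed equal here

# stage 2: standardize each column by its maximum value
col_max = [max(row[j] for row in data) for j in range(len(data[0]))]
std = [[row[j] / col_max[j] for j in range(len(row))] for row in data]

# stages 3-4: square, weight, sum into a rating R for each row, then sort descending
ratings = [sum(weights[j] * std[i][j] ** 2 for j in range(len(weights)))
           for i in range(len(data))]
ranking = sorted(range(len(data)), key=lambda i: -ratings[i])
print(ratings)   # → [0.8125, 1.25, 1.5625]
print(ranking)   # → [2, 1, 0]
```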

Inverse matrix

· The matrix B is called inverse to the matrix A if the equality AB = BA = E is true. Designation: A -1.

− Only a square matrix may have an inverse matrix.

− Not every square matrix has an inverse matrix.

Properties: 1. ; 2. ; 3. , where the matrices are square, of the same dimension.

Generally speaking, if for non-square matrices a product is possible that is a square matrix, then an inverse matrix may also exist, although property 3 is violated in this case.

To find the inverse matrix, you can use the method of elementary row transformations:

1. Compose an extended matrix by assigning the identity matrix of the corresponding dimension to the right of the original matrix: .

2. Elementary row transformations of the matrix G lead it to the form: . − the required matrix.

Matrix rank

· A minor of the k-th order of a matrix is a determinant composed of the elements of the original matrix standing at the intersection of any k rows and k columns ( ).

Comment. Each element of a matrix is its 1st-order minor.

Theorem. If all minors of order k of a matrix are equal to zero, then all minors of higher order are also equal to zero.

We expand a minor (determinant) of (k+1)-th order through the elements of the 1st row: . The algebraic complements are essentially minors of the k-th order, which, by the assumption of the theorem, are equal to zero. Hence, .

· A minor of a matrix is called basic if it is not equal to zero while all minors of higher order are equal to zero or do not exist at all, i.e. its order matches the smaller of the numbers of rows or columns. The columns and rows of the matrix that make up the basic minor are also called basic. A matrix can contain several different basis minors of the same order.

· The order of the basis minor of a matrix is called the rank of the matrix and is denoted: , . It is obvious that .

For example. 1. , . 2. . The matrix B contains only one nonzero element, which is a 1st-order minor.
All determinants of higher order contain a zero row and are therefore equal to 0. Hence, .

4. Systems of linear equations. Basic concepts.

A system of linear algebraic equations (a linear system; the abbreviations SLAE and SLE are also used) is a system of equations each of which is linear, i.e. an algebraic equation of the first degree.

General form of a system of linear algebraic equations:

Here is the number of equations and is the number of variables; are the unknowns to be determined, while the coefficients and free terms are assumed to be known.

The system is called homogeneous if all its free terms are equal to zero ( ), otherwise it is called inhomogeneous.

A solution of a system of linear algebraic equations is a set of numbers such that the corresponding substitution for in the system turns all its equations into identities.

A system is called consistent if it has at least one solution, and inconsistent if it has no solutions. Solutions are considered different if at least one of the values of the variables does not match. A consistent system with a single solution is called definite; if there is more than one solution, it is called indefinite (underdetermined).

Matrix form

A system of linear algebraic equations can be represented in matrix form as: or: . Here is the matrix of the system, is the column of unknowns, and is the column of free terms. If the column of free terms is appended to the matrix on the right, the resulting matrix is called extended.

Kronecker-Capelli theorem

The Kronecker-Capelli theorem establishes a necessary and sufficient condition for the consistency of a system of linear algebraic equations through properties of its matrix representations: the system is consistent if and only if the rank of its matrix coincides with the rank of the extended matrix.

Methods for solving systems of linear equations.
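The Kronecker-Capelli criterion can be sketched by comparing the ranks of the system matrix and the extended matrix; the rank helper below uses row reduction with exact arithmetic:

```python
from fractions import Fraction

def rank(rows):
    """Rank via reduction to row-echelon form."""
    m = [[Fraction(x) for x in row] for row in rows]
    rk, col = 0, 0
    while rk < len(m) and col < len(m[0]):
        piv = next((r for r in range(rk, len(m)) if m[r][col] != 0), None)
        if piv is None:
            col += 1
            continue
        m[rk], m[piv] = m[piv], m[rk]
        for r in range(rk + 1, len(m)):
            f = m[r][col] / m[rk][col]
            m[r] = [a - f * b for a, b in zip(m[r], m[rk])]
        rk += 1
        col += 1
    return rk

def consistent(A, b):
    """Kronecker-Capelli: solvable iff rank(A) == rank([A | b])."""
    aug = [row + [bi] for row, bi in zip(A, b)]
    return rank(A) == rank(aug)

print(consistent([[1, 1], [2, 2]], [3, 6]))  # → True  (second equation is a multiple)
print(consistent([[1, 1], [2, 2]], [3, 7]))  # → False (inconsistent)
```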
Matrix method

Let a system of linear equations with unknowns be given (over an arbitrary field): . Let us rewrite it in matrix form: . We find the solution of the system by the formula . We find the inverse matrix by the formula: , where is the transposed matrix of algebraic complements of the corresponding elements of the matrix . If , then the inverse matrix does not exist and the system cannot be solved by the matrix method; in that case the system is solved by the Gauss method.

Cramer's method

Cramer's method (Cramer's rule) is a method for solving an SLAE in which the number of equations equals the number of unknowns and the main determinant of the matrix is nonzero. For a system of linear equations with unknowns , replace the i-th column of the matrix with the column of free terms b.

Example: a system of linear equations with real coefficients: Determinants: In each determinant, the column of coefficients of the corresponding unknown is replaced by the column of free terms of the system. Solution:

5. Gauss method

Solution algorithm:

1. Write down the augmented matrix.

2. Bring it to a stepped form by elementary transformations.

3. Reverse move, during which the basic variables are expressed in terms of the free ones.

An augmented matrix is obtained by appending the column of free terms to the matrix. The elementary transformations are the following:

1. Matrix rows can be rearranged.

2. If there are (or appear) proportional (in particular, identical) rows in the matrix, then all of them except one should be deleted.

3. If a zero row appears in the matrix during the transformations, it should also be deleted.

4. A row of the matrix can be multiplied (divided) by any nonzero number.

5. To a row of the matrix one can add another row multiplied by a nonzero number.
Elementary transformations do not change the solution of the system of equations.

Reverse move: usually the basic variables are taken to be those located in the first places of the nonzero rows of the transformed system matrix, i.e. on the steps. The basic variables are then expressed in terms of the free ones. We go "from bottom to top", expressing the basic variables in turn and substituting the results into the equations above.

Example: The basic variables always "sit" strictly on the steps of the matrix. In this example the basic variables are . The free variables are all the remaining variables that did not get a step; in our case there are two of them: — free variables. Now all the basic variables must be expressed only through the free variables. The reverse move of the Gaussian algorithm traditionally works from the bottom up