Solving systems of linear equations by the inverse matrix method

A system of m linear equations in n unknowns is a system of the form

a_11 x_1 + a_12 x_2 + … + a_1n x_n = b_1,
…
a_m1 x_1 + a_m2 x_2 + … + a_mn x_n = b_m,

where a_ij and b_i (i = 1, …, m; j = 1, …, n) are known numbers and x_1, …, x_n are the unknowns. In the notation of the coefficients a_ij, the first index i denotes the number of the equation and the second index j the number of the unknown that the coefficient multiplies.

The coefficients of the unknowns will be written as a matrix, which we will call the matrix of the system.

The numbers on the right-hand sides of the equations, b_1, …, b_m, are called the free terms.

A collection of n numbers c_1, …, c_n is called a solution of the given system if every equation of the system turns into a true equality after the numbers c_1, …, c_n are substituted for the corresponding unknowns x_1, …, x_n.

Our task is to find the solutions of the system. Three situations may arise: the system has exactly one solution, infinitely many solutions, or no solutions at all.

A system of linear equations that has at least one solution is called consistent. Otherwise, i.e. if the system has no solutions, it is called inconsistent.

Consider ways to find solutions to the system.


MATRIX METHOD FOR SOLVING SYSTEMS OF LINEAR EQUATIONS

Matrices make it possible to write a system of linear equations concisely. Let a system of 3 equations in three unknowns be given:

Consider the matrix of the system and the column matrices of the unknowns and of the free terms:

Find the product

i.e. as a result of the product we obtain the left-hand sides of the equations of the system. Then, using the definition of matrix equality, the system can be written in the form

or, more briefly, AX = B.

Here the matrices A and B are known, while the matrix X is unknown. It is the matrix we need to find, since its elements are the solution of the system. This equation is called a matrix equation.

Let the determinant of the matrix be nonzero, |A| ≠ 0. Then the matrix equation is solved as follows. We multiply both sides of the equation on the left by the matrix A^-1, the inverse of the matrix A. Since A^-1 A = E and EX = X, we obtain the solution of the matrix equation in the form X = A^-1 B.

Note that since an inverse matrix exists only for square matrices, the matrix method can be used only for those systems in which the number of equations equals the number of unknowns. The matrix form of the system is possible even when the number of equations differs from the number of unknowns, but then the matrix A is not square, and therefore the solution cannot be found in the form X = A^-1 B.
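As a minimal sketch, the matrix method can be reproduced with NumPy; the system below is an illustrative example, not one taken from this text:

```python
import numpy as np

# Illustrative system (not from the text):
#   2*x1 + 1*x2 = 5
#   1*x1 + 3*x2 = 10
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([5.0, 10.0])

# The method applies only when A is square and det(A) != 0
assert A.shape[0] == A.shape[1] and abs(np.linalg.det(A)) > 1e-12

X = np.linalg.inv(A) @ B   # X = A^{-1} B
print(X)                   # approximately [1. 3.]
```

In practice `np.linalg.solve(A, B)` is preferred over forming the inverse explicitly, but the line above mirrors the formula X = A^-1 B from the text.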

Examples. Solve the systems of equations.

CRAMER'S RULE

Consider a system of 3 linear equations with three unknowns:

The third-order determinant corresponding to the matrix of the system, i.e. composed of the coefficients of the unknowns,

is called the determinant of the system.

Let us compose three more determinants as follows: in the determinant Δ, replace the 1st, 2nd and 3rd columns in turn with the column of free terms:

Then the following result can be proved.

Theorem (Cramer's rule). If the determinant of the system Δ ≠ 0, then the system under consideration has one and only one solution, and x_1 = Δ_1/Δ, x_2 = Δ_2/Δ, x_3 = Δ_3/Δ.

Proof. So, consider a system of 3 equations in three unknowns. Multiply the 1st equation of the system by the algebraic complement (cofactor) A_11 of the element a_11, the 2nd equation by A_21, and the 3rd by A_31:

We add these equations:

Let us examine each of the parentheses and the right-hand side of this equation. By the theorem on the expansion of a determinant along the elements of the 1st column,

Similarly, one can show that the coefficients of x_2 and x_3 (the other two parentheses) are equal to zero.

Finally, it is easy to see that the right-hand side equals Δ_1.

Thus, we obtain the equality Δ · x_1 = Δ_1.

Hence, x_1 = Δ_1/Δ.

The equalities for x_2 and x_3 are derived similarly, from which the statement of the theorem follows.

Thus, if the determinant of the system Δ ≠ 0, the system has a unique solution, and conversely. If the determinant of the system is equal to zero, the system either has an infinite set of solutions or has no solutions, i.e. it is inconsistent.
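Cramer's rule translates directly into code: replace each column by the free terms and take the ratio of determinants. A sketch (the example system is illustrative, not from the text):

```python
import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's rule (square A, nonzero determinant)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)               # the determinant of the system, Δ
    if abs(d) < 1e-12:
        raise ValueError("Δ = 0: no unique solution")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                   # replace the i-th column by the free terms
        x[i] = np.linalg.det(Ai) / d   # x_i = Δ_i / Δ
    return x

# Illustrative system: 2x1 + x2 = 5, x1 + 3x2 = 10
print(cramer([[2, 1], [1, 3]], [5, 10]))   # approximately [1. 3.]
```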

Examples. Solve the system of equations.


GAUSS METHOD

The methods considered above can be used only for systems in which the number of equations equals the number of unknowns and the determinant of the system is nonzero. The Gauss method is more versatile and is suitable for systems with any number of equations. It consists in the successive elimination of the unknowns from the equations of the system.

Consider again a system of three equations with three unknowns:


We leave the first equation unchanged, and from the second and third we eliminate the terms containing x_1. To do this, divide the second equation by a_21, multiply it by −a_11, and then add it to the 1st equation. Similarly, divide the third equation by a_31, multiply it by −a_11, and then add it to the first. As a result, the original system takes the form:

Now we eliminate from the last equation the term containing x_2. To do this, divide the third equation by the appropriate coefficient, multiply it so that the terms in x_2 cancel, and add it to the second. We then obtain the system of equations:

From the last equation it is then easy to find x_3, then from the 2nd equation x_2 and, finally, from the 1st, x_1.

When using the Gaussian method, equations can be swapped as needed.

Often, instead of writing out a new system of equations each time, one works with the augmented matrix of the system:

and then reduces it to triangular or diagonal form by elementary transformations.

The elementary transformations of a matrix include the following:

  1. permutation of rows or columns;
  2. multiplication of a row by a nonzero number;
  3. adding one row to another (possibly multiplied by a number).
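The forward elimination on the augmented matrix, followed by back substitution, can be sketched as follows (a simplified version with row swaps for a nonzero pivot; the example system is illustrative):

```python
import numpy as np

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with back substitution."""
    M = np.hstack([np.asarray(A, float), np.asarray(b, float).reshape(-1, 1)])
    n = len(M)
    for k in range(n):
        # swap rows so the pivot is the largest (and nonzero) entry in the column
        p = k + np.argmax(np.abs(M[k:, k]))
        M[[k, p]] = M[[p, k]]
        for i in range(k + 1, n):
            M[i] -= M[k] * (M[i, k] / M[k, k])   # eliminate x_k from row i
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):               # back substitution
        x[i] = (M[i, -1] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
    return x

# Illustrative system: x+y+z=6, 2y+5z=-4, 2x+5y-z=27
print(gauss_solve([[1, 1, 1], [0, 2, 5], [2, 5, -1]], [6, -4, 27]))
```

The printed vector is the solution (approximately [5, 3, −2] for this example).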

Examples: Solve systems of equations by the Gauss method.


Thus, the system has an infinite number of solutions.

Solving matrices is a concept that generalizes all the possible operations performed with matrices. A matrix is a rectangular table of elements. Of a table with m rows and n columns one says that the matrix has dimension m by n.

General view of the matrix:

To solve matrices, you need to understand what a matrix is and know its main parameters. The main elements of a matrix:

  • The main diagonal, made up of the elements a_11, a_22, …, a_nn.
  • The secondary diagonal, made up of the elements a_1n, a_2,n−1, …, a_n1.

The main types of matrices:

  • Square: a matrix in which the number of rows equals the number of columns (m = n).
  • Zero: a matrix in which all elements are 0.
  • Transposed matrix: the matrix A^T obtained from the original matrix A by interchanging rows and columns.
  • Identity: a matrix in which all elements of the main diagonal are 1 and all others are 0.
  • Inverse matrix: a matrix that, when multiplied by the original matrix, gives the identity matrix.

A matrix can be symmetric about the main or the secondary diagonal. That is, if a_12 = a_21, a_13 = a_31, …, a_23 = a_32, …, then the matrix is symmetric about the main diagonal. Only square matrices can be symmetric.

Methods for solving matrices.

Almost all methods of solving matrices come down to finding an nth-order determinant, and most of them are rather cumbersome. For determinants of the 2nd and 3rd order there are other, more practical methods.

Finding determinants of the second order.

To calculate the determinant of a 2nd-order matrix A, subtract the product of the elements of the secondary diagonal from the product of the elements of the main diagonal: det A = a_11 · a_22 − a_12 · a_21.
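In code the 2nd-order rule is a one-liner; a sketch:

```python
def det2(m):
    """Determinant of a 2x2 matrix: main diagonal minus secondary diagonal."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

print(det2([[3, 1], [4, 2]]))   # 3*2 - 1*4 = 2
```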

Methods for finding determinants of the third order.

Below are the rules for finding the 3rd order determinant.

The simplified triangle rule, as one of the methods of solving matrices, can be depicted as follows:

In other words, the products of the elements in the first scheme that are connected by straight lines are taken with a "+" sign, and in the second scheme the corresponding products are taken with a "−" sign, that is, according to the following scheme:

When solving matrices by Sarrus's rule, the first 2 columns are written again to the right of the determinant; the products of the corresponding elements on the main diagonal and on the diagonals parallel to it are taken with a "+" sign, and the products of the corresponding elements on the secondary diagonal and the diagonals parallel to it with a "−" sign:
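Sarrus's rule for a 3×3 determinant, written out explicitly as a sketch: three "+" products along the main diagonal and its parallels, three "−" products along the secondary diagonal and its parallels.

```python
def det3_sarrus(m):
    """3x3 determinant by Sarrus's rule."""
    plus = (m[0][0] * m[1][1] * m[2][2]      # main diagonal
            + m[0][1] * m[1][2] * m[2][0]    # and its two parallels
            + m[0][2] * m[1][0] * m[2][1])
    minus = (m[0][2] * m[1][1] * m[2][0]     # secondary diagonal
             + m[0][0] * m[1][2] * m[2][1]   # and its two parallels
             + m[0][1] * m[1][0] * m[2][2])
    return plus - minus

print(det3_sarrus([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))   # -3
```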

Determinant expansion by row or column when solving matrices.

The determinant equals the sum of the products of the elements of a row of the determinant by their algebraic complements (cofactors). One usually selects the row or column that contains zeros. The row or column along which the expansion is carried out is indicated by an arrow.

Reducing the determinant to triangular form when solving matrices.

When solving matrices by reducing the determinant to triangular form, one works as follows: using the simplest transformations on rows or columns, the determinant is made triangular; then, by the properties of the determinant, its value equals the product of the elements on the main diagonal.
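A sketch of this reduction: add multiples of rows (which does not change the determinant), swap rows when needed (which flips its sign), then multiply the diagonal.

```python
import numpy as np

def det_triangular(a):
    """Determinant via reduction to upper-triangular form by row operations."""
    a = np.asarray(a, dtype=float).copy()
    n = len(a)
    sign = 1.0
    for k in range(n):
        p = k + np.argmax(np.abs(a[k:, k]))
        if a[p, k] == 0:
            return 0.0                          # whole column is zero => det = 0
        if p != k:
            a[[k, p]] = a[[p, k]]
            sign = -sign                        # a row swap flips the sign
        for i in range(k + 1, n):
            a[i] -= a[k] * (a[i, k] / a[k, k])  # adding a multiple of a row keeps det
    return sign * np.prod(np.diag(a))           # product of the diagonal elements

print(det_triangular([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))   # -3 (up to rounding)
```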

Laplace's theorem for solving matrices.

When solving matrices by Laplace's theorem, one needs to know the theorem itself. Laplace's theorem: let Δ be a determinant of nth order. Choose any k rows (or columns), with k ≤ n − 1. Then the sum of the products of all kth-order minors contained in the chosen k rows (columns) by their algebraic complements equals the determinant.

Inverse matrix solution.

Sequence of actions for inverse matrix solutions:

  1. Determine whether the given matrix is square. If not, there can be no inverse matrix for it.
  2. Calculate the algebraic complements (cofactors).
  3. Compose the adjoint (adjugate) matrix C.
  4. Compose the inverse matrix from the algebraic complements: divide all elements of the adjoint matrix C by the determinant of the initial matrix. The resulting matrix is the desired inverse of the given one.
  5. Check the work: multiply the initial matrix by the resulting one; the product should be the identity matrix.
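The cofactor procedure above, sketched for small matrices (the matrix A used at the end is an illustrative example):

```python
import numpy as np

def inverse_adjugate(a):
    """Inverse via cofactors: A^{-1} = adj(A) / det(A)."""
    a = np.asarray(a, dtype=float)
    n = a.shape[0]
    assert a.shape == (n, n), "only square matrices can have an inverse"  # step 1
    d = np.linalg.det(a)
    if abs(d) < 1e-12:
        raise ValueError("degenerate matrix: no inverse")
    C = np.empty((n, n))
    for i in range(n):                           # step 2: all cofactors
        for j in range(n):
            minor = np.delete(np.delete(a, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T / d      # steps 3-4: adjugate (transposed cofactors) over det

A = np.array([[2.0, 1.0], [1.0, 3.0]])
Ainv = inverse_adjugate(A)
print(np.allclose(A @ Ainv, np.eye(2)))   # step 5, the check: True
```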

Solution of matrix systems.

For solving systems of matrices, the Gauss method is most commonly used.

The Gauss method is the standard way of solving systems of linear algebraic equations (SLAEs). Variables are successively eliminated: by elementary transformations the system of equations is reduced to an equivalent system of triangular form, from which, starting with the last equation, each unknown is found in turn.

The Gauss method is the most versatile tool for finding solutions of matrices: if the system has an infinite set of solutions or is inconsistent, it cannot be solved by Cramer's rule or by the matrix method, whereas the Gauss method still applies.

The Gauss method includes a forward move (reducing the augmented matrix to echelon form, i.e. obtaining zeros below the main diagonal) and a backward move (obtaining zeros above the main diagonal of the augmented matrix). The forward move alone is the Gauss method; with the backward move it becomes the Gauss-Jordan method. The Gauss-Jordan method differs from the Gauss method only in the order in which the variables are eliminated.

Let there be a square matrix of the nth order

The matrix A^-1 is called the inverse matrix with respect to the matrix A if A · A^-1 = E, where E is the identity matrix of order n.

An identity matrix is a square matrix in which all the elements on the main diagonal, running from the upper left corner to the lower right corner, are ones and the rest are zeros, for example:

An inverse matrix can exist only for square matrices, i.e. for matrices with the same number of rows and columns.

A theorem on the existence condition for an inverse matrix

For a matrix to have an inverse matrix, it is necessary and sufficient that it be non-degenerate.

The matrix A = (A_1, A_2, …, A_n) is called non-degenerate if its column vectors are linearly independent. The number of linearly independent column vectors of a matrix is called the rank of the matrix. Therefore, for an inverse matrix to exist, it is necessary and sufficient that the rank of the matrix equal its dimension, i.e. r = n.

Algorithm for finding the inverse matrix

  1. Write the matrix A into the table used for solving systems of equations by the Gauss method, and on the right (in place of the right-hand sides of the equations) append the matrix E.
  2. Using Jordan transformations, reduce the matrix A to a matrix consisting of unit columns; the matrix E must be transformed simultaneously.
  3. If necessary, rearrange the rows (equations) of the last table so that the identity matrix E appears under the matrix A of the original table.
  4. Write down the inverse matrix A^-1, which stands in the last table under the matrix E of the original table.
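The algorithm above, append E and reduce A to the identity, can be sketched as follows (the matrix A at the end is an illustrative example):

```python
import numpy as np

def inverse_gauss_jordan(a):
    """Find A^{-1} by reducing the augmented block [A | E] to [E | A^{-1}]."""
    a = np.asarray(a, dtype=float)
    n = a.shape[0]
    M = np.hstack([a, np.eye(n)])             # the table [A | E]
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))   # pick a nonzero pivot (row swap)
        M[[k, p]] = M[[p, k]]
        M[k] /= M[k, k]                       # make the pivot equal to 1
        for i in range(n):
            if i != k:
                M[i] -= M[k] * M[i, k]        # zero out the rest of the column
    return M[:, n:]                           # the right block is now A^{-1}

A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(inverse_gauss_jordan(A))
```

Multiplying the result by A should give the identity matrix, which is the standard check.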
Example 1

For the matrix A, find the inverse matrix A^-1.

Solution: We write down the matrix A and append the identity matrix E on the right. Using Jordan transformations, we reduce the matrix A to the identity matrix E. The calculations are shown in Table 31.1.

Let us check the correctness of the calculations by multiplying the original matrix A by the inverse matrix A^-1.

As a result of the matrix multiplication, the identity matrix is obtained. Therefore, the calculations are correct.

Answer:

Solving matrix equations

Matrix equations can be as follows:

AX = B, XA = B, AXB = C,

where A, B, C are the specified matrices, X is the required matrix.

Matrix equations are solved by multiplying the equation by inverse matrices.

For example, to find the matrix X from the equation AX = B, multiply the equation by A^-1 on the left.

Therefore, to find the solution of the equation, find the inverse matrix and multiply it by the matrix on the right-hand side of the equation: X = A^-1 B.

Other equations are solved similarly.

Example 2

Solve the equation AX = B if

Solution: Since the inverse of the matrix is (see Example 1)

Matrix method in economic analysis

Matrix methods find application in economic analysis along with other methods. They are based on linear and vector-matrix algebra and are used to analyze complex, multidimensional economic phenomena. Most often these methods are used when it is necessary to make a comparative assessment of the functioning of organizations and their structural units.

In the process of applying matrix methods of analysis, several stages can be distinguished.

At the first stage, a system of economic indicators is formed, and on its basis a matrix of initial data is compiled: a table whose rows correspond to the numbers of the systems being compared (i = 1, 2, …, n) and whose columns correspond to the numbers of the indicators (j = 1, 2, …, m).

At the second stage, for each column, the largest of the available indicator values is identified and taken as one.

All elements of that column are then divided by this largest value, forming a matrix of standardized coefficients.

At the third stage, all elements of the matrix are squared. If the indicators differ in significance, each indicator of the matrix is assigned a weighting factor k, whose value is determined by expert judgment.

At the last, fourth stage, the resulting rating values R_j are ranked in increasing or decreasing order.

The outlined matrix methods can be used, for example, in the comparative analysis of various investment projects, as well as in assessing other economic indicators of organizations.

The inverse matrix method (sometimes also called simply the matrix method) requires prior familiarity with the matrix form of writing an SLAE. The inverse matrix method is intended for solving those systems of linear algebraic equations in which the determinant of the matrix of the system is nonzero. Naturally, this implies that the matrix of the system is square (the concept of a determinant exists only for square matrices). The essence of the inverse matrix method can be expressed in three points:

  1. Write down three matrices: the matrix of the system $A$, the matrix of unknowns $X$, and the matrix of free terms $B$.
  2. Find the inverse matrix $A^{-1}$.
  3. Using the equality $X=A^{-1}\cdot B$, obtain the solution of the given SLAE.

Any SLAE can be written in matrix form as $A\cdot X=B$, where $A$ is the matrix of the system, $B$ is the matrix of free terms, and $X$ is the matrix of unknowns. Let the matrix $A^{-1}$ exist. Multiply both sides of the equality $A\cdot X=B$ by the matrix $A^{-1}$ on the left:

$$A^{-1}\cdot A\cdot X=A^{-1}\cdot B.$$

Since $A^{-1}\cdot A=E$ ($E$ is the identity matrix), the above equality becomes:

$$E\cdot X=A^{-1}\cdot B.$$

Since $E\cdot X=X$, then:

$$X=A^{-1}\cdot B.$$

Example No. 1

Solve the SLAE $\left\{\begin{aligned} & -5x_1+7x_2=29;\\ & 9x_1+8x_2=-11. \end{aligned}\right.$ using the inverse matrix.

$$A=\left(\begin{array}{cc} -5 & 7\\ 9 & 8 \end{array}\right);\; B=\left(\begin{array}{c} 29\\ -11 \end{array}\right);\; X=\left(\begin{array}{c} x_1\\ x_2 \end{array}\right).$$

Let us find the inverse matrix to the matrix of the system, i.e. calculate $A^{-1}$:

$$A^{-1}=-\frac{1}{103}\cdot\left(\begin{array}{cc} 8 & -7\\ -9 & -5 \end{array}\right).$$

Now we substitute all three matrices ($X$, $A^{-1}$, $B$) into the equality $X=A^{-1}\cdot B$. Then we perform the matrix multiplication:

$$\left(\begin{array}{c} x_1\\ x_2 \end{array}\right)=-\frac{1}{103}\cdot\left(\begin{array}{cc} 8 & -7\\ -9 & -5 \end{array}\right)\cdot\left(\begin{array}{c} 29\\ -11 \end{array}\right)=-\frac{1}{103}\cdot\left(\begin{array}{c} 8\cdot 29+(-7)\cdot(-11)\\ -9\cdot 29+(-5)\cdot(-11) \end{array}\right)=-\frac{1}{103}\cdot\left(\begin{array}{c} 309\\ -206 \end{array}\right)=\left(\begin{array}{c} -3\\ 2 \end{array}\right).$$

So, we got the equality $\left(\begin{array}{c} x_1\\ x_2 \end{array}\right)=\left(\begin{array}{c} -3\\ 2 \end{array}\right)$. From this equality we have: $x_1=-3$, $x_2=2$.

Answer: $x_1=-3$, $x_2=2$.
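The answer of Example No. 1 can be checked numerically; a sketch using NumPy:

```python
import numpy as np

# The system of Example No. 1: -5x1 + 7x2 = 29, 9x1 + 8x2 = -11
A = np.array([[-5.0, 7.0],
              [9.0, 8.0]])
B = np.array([29.0, -11.0])

X = np.linalg.inv(A) @ B   # X = A^{-1} B
print(X)                   # approximately [-3.  2.]
```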

Example No. 2

Solve the SLAE $\left\{\begin{aligned} & x_1+7x_2+3x_3=-1;\\ & -4x_1+9x_2+4x_3=0;\\ & 3x_2+2x_3=6. \end{aligned}\right.$ by the inverse matrix method.

Let us write down the matrix of the system $A$, the matrix of free terms $B$, and the matrix of unknowns $X$.

$$A=\left(\begin{array}{ccc} 1 & 7 & 3\\ -4 & 9 & 4\\ 0 & 3 & 2 \end{array}\right);\; B=\left(\begin{array}{c} -1\\ 0\\ 6 \end{array}\right);\; X=\left(\begin{array}{c} x_1\\ x_2\\ x_3 \end{array}\right).$$

Now it is time to find the inverse matrix to the matrix of the system, i.e. to find $A^{-1}$. In Example No. 3 on the page on finding inverse matrices, the inverse has already been found. Let us use that ready result and write down $A^{-1}$:

$$A^{-1}=\frac{1}{26}\cdot\left(\begin{array}{ccc} 6 & -5 & 1\\ 8 & 2 & -16\\ -12 & -3 & 37 \end{array}\right).$$

Now we substitute all three matrices ($X$, $A^{-1}$, $B$) into the equality $X=A^{-1}\cdot B$, after which we perform the matrix multiplication on the right-hand side of this equality.

$$\left(\begin{array}{c} x_1\\ x_2\\ x_3 \end{array}\right)=\frac{1}{26}\cdot\left(\begin{array}{ccc} 6 & -5 & 1\\ 8 & 2 & -16\\ -12 & -3 & 37 \end{array}\right)\cdot\left(\begin{array}{c} -1\\ 0\\ 6 \end{array}\right)=\frac{1}{26}\cdot\left(\begin{array}{c} 6\cdot(-1)+(-5)\cdot 0+1\cdot 6\\ 8\cdot(-1)+2\cdot 0+(-16)\cdot 6\\ -12\cdot(-1)+(-3)\cdot 0+37\cdot 6 \end{array}\right)=\frac{1}{26}\cdot\left(\begin{array}{c} 0\\ -104\\ 234 \end{array}\right)=\left(\begin{array}{c} 0\\ -4\\ 9 \end{array}\right).$$

So, we got the equality $\left(\begin{array}{c} x_1\\ x_2\\ x_3 \end{array}\right)=\left(\begin{array}{c} 0\\ -4\\ 9 \end{array}\right)$. From this equality we have: $x_1=0$, $x_2=-4$, $x_3=9$.

Answer: $x_1=0$, $x_2=-4$, $x_3=9$.