Matrix algebra: the inverse matrix (2x2 and 3x3 examples)

We continue talking about operations with matrices. Namely, in the course of this lesson you will learn how to find the inverse matrix. You will learn it even if math is a struggle for you.

What is an inverse matrix? Here you can draw an analogy with reciprocal numbers: take, for example, the number 5 and its reciprocal $\frac{1}{5}$. The product of these numbers equals one: $5 \cdot \frac{1}{5} = 1$. With matrices everything is similar! The product of a matrix and its inverse is $E$, the identity matrix, which is the matrix analogue of the number one. However, first things first: let's settle an important practical question, namely, let's learn how to find this inverse matrix.
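If you like to see this in action, here is a minimal NumPy sketch of the analogy. The matrix used is made up purely for illustration; the lesson's own examples come later.

```python
import numpy as np

# Reciprocal numbers: 5 * (1/5) = 1.
print(5 * (1 / 5))                           # 1.0

# The matrix analogue: A times its inverse is the identity matrix E.
# (A is a hypothetical invertible matrix chosen just for illustration.)
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
E = np.eye(2)                                # the 2x2 identity matrix
print(np.allclose(A @ np.linalg.inv(A), E))  # True
```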

What do you need to know and be able to do to find the inverse matrix? You must be able to evaluate determinants, and you must understand what a matrix is and be able to perform some operations with matrices.

There are two main methods for finding the inverse of a matrix:
by using algebraic complements and using elementary transformations.

Today we will explore the first, easier way.

Let's start with the scariest and most incomprehensible part. Consider a square matrix $A$. The inverse matrix can be found by the following formula:

\[ A^{-1} = \frac{1}{\det A} \cdot (A^{*})^{T} \]

where $\det A$ is the determinant of the matrix $A$, and $(A^{*})^{T}$ is the transposed matrix of algebraic complements of the corresponding elements of $A$.

The concept of an inverse matrix exists only for square matrices: "two by two", "three by three", and so on.

Notation: as you have probably already noticed, the inverse matrix is denoted by the superscript $-1$: the inverse of the matrix $A$ is written $A^{-1}$.

Let's start with the simplest case, the two-by-two matrix. Most often, of course, a "three by three" is required, but I nevertheless strongly recommend studying the simpler task first in order to grasp the general principle of the solution.

Example:

Find the inverse of the matrix

Let's solve it. The sequence of actions is conveniently broken down into steps.

1) First, find the determinant of the matrix.

If your grasp of this operation is shaky, read the material How to calculate the determinant?

Important! If the determinant of the matrix is ZERO, the inverse matrix DOES NOT EXIST.

In the example under consideration, the determinant turned out to be nonzero, which means that everything is in order.
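In code, this pre-check is a one-liner. A small sketch with a deliberately degenerate matrix (my own example, not the one from the lesson):

```python
import numpy as np

# A deliberately degenerate matrix: the second row is half the first,
# so the rows are proportional and the determinant is zero.
A = np.array([[2.0, 4.0],
              [1.0, 2.0]])
print(np.linalg.det(A))          # 0.0 -- hence no inverse exists

try:
    np.linalg.inv(A)
except np.linalg.LinAlgError as err:
    print("No inverse:", err)    # LinAlgError: Singular matrix
```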

2) Find the matrix of minors.

To solve our problem, it is not necessary to know what a minor is, however, it is advisable to read the article How to calculate the determinant.

The matrix of minors has the same dimensions as the matrix $A$, that is, in this case $2 \times 2$.
All that remains is to find four numbers and put them in place of the asterisks.

Back to our matrix
Let's first consider the top left element:

How do we find its minor?
It is done like this: MENTALLY cross out the row and column in which this element sits:

The remaining number is minor of this element, which we write into our matrix of minors:

Consider the following matrix element:

We mentally cross out the row and column in which this element is located:

What is left is the minor of this element, which we write into our matrix:

Similarly, we consider the elements of the second row and find their minors:


Ready.

3) Find the matrix of algebraic complements.

It's simple: in the matrix of minors you need to CHANGE THE SIGNS of two numbers:

These are the two off-diagonal numbers; the signs alternate in a checkerboard pattern.

The result is the matrix of algebraic complements of the corresponding elements of the matrix.

And that's all there is to it...

4) Find the transposed matrix of algebraic complements.

The result is the transposed matrix of algebraic complements of the corresponding elements of the matrix.

5) Answer.

Remembering our formula $A^{-1} = \frac{1}{\det A} \cdot (A^{*})^{T}$, we see that everything has been found!

So the inverse of the matrix is:

The answer is best left in this form. It is NOT necessary to divide each element of the matrix by 2, since that would produce fractional numbers. This nuance is discussed in more detail in the article Matrix operations.

How can I check the solution?

It is necessary to perform the matrix multiplication $A \cdot A^{-1}$ or $A^{-1} \cdot A$.

Check:

The already mentioned identity matrix is a matrix with ones on the main diagonal and zeros elsewhere.

Thus, the inverse is correct.

If you carry out the multiplication in the other order, the result will also be the identity matrix. This is one of the few cases where matrix multiplication commutes; more details can be found in the article Properties of operations on matrices. Matrix expressions. Also note that during the check, the constant (the fraction) is brought out front and applied at the very end, after the matrix multiplication. This is a standard technique.
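Here is a small NumPy sketch of this check. The matrix is made up (with determinant 2, to echo the advice about keeping the fraction out front), since the lesson's own matrix was given as a picture:

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [1.0, 5.0]])
M = np.array([[ 5.0, -3.0],   # transposed matrix of algebraic complements
              [-1.0,  1.0]])
d = 1 * 5 - 3 * 1             # determinant = 2

# The standard trick: multiply the matrices first, apply 1/d at the very end.
print(np.allclose((A @ M) / d, np.eye(2)))   # True: A * A^{-1} = E
print(np.allclose((M @ A) / d, np.eye(2)))   # True: A^{-1} * A = E as well
```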

Let's move on to a more common case in practice - the "three by three" matrix:

Example:

Find the inverse of the matrix

The algorithm is exactly the same as for the two-by-two case.

We find the inverse matrix by the same formula $A^{-1} = \frac{1}{\det A} \cdot (A^{*})^{T}$, where $(A^{*})^{T}$ is the transposed matrix of algebraic complements of the corresponding elements of the matrix $A$.

1) Find the determinant of the matrix.


Here the determinant is expanded along the first row.

Also, do not forget to check that the determinant is nonzero. It is, which means that everything is fine: the inverse matrix exists.

2) Find the matrix of minors.

The matrix of minors has dimension "three by three", and we need to find nine numbers.

I'll work through a couple of the minors in detail:

Consider the following matrix element:

MENTALLY cross out the row and column in which this element is located:

The remaining four numbers form a "two by two" determinant.

This "two by two" determinant is the minor of the element. It needs to be calculated:


That's it, the minor is found, we write it into our matrix of minors:

As you may have guessed, there are nine "two by two" determinants to compute. The process is, of course, tedious, but this case is not the most difficult; it can get worse.

Well, to consolidate, here is another minor found in pictures:

Try to calculate the rest of the minors yourself.

Final result:
the matrix of minors of the corresponding elements of the matrix.

The fact that all the minors turned out to be negative is pure coincidence.

3) Find the matrix of algebraic complements.

In the matrix of minors you need to CHANGE THE SIGNS strictly at the elements whose row and column numbers have an odd sum: for a "three by three" matrix these are positions (1,2), (2,1), (2,3) and (3,2).

In this case:

Finding the inverse of a "four by four" matrix is not considered here, since such a task can only be assigned by a sadistic teacher (making the student compute one "four by four" determinant and 16 "three by three" determinants). In my practice I have met only one such case, and the customer of that assignment paid quite dearly for my torment =).

In a number of textbooks and manuals you can find a slightly different approach to finding the inverse matrix, but I recommend using the solution algorithm above. Why? Because the likelihood of getting confused in the calculations and signs is much lower.
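For the curious, the whole algorithm fits in a dozen lines of Python. This is only a sketch of the algebraic-complement method, not a library routine; as a test it uses the "three by three" matrix from an example that appears later on this page:

```python
import numpy as np

def inverse_via_complements(A):
    """A^{-1} = (1/det A) * C^T, where C[i][j] is the algebraic complement."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("determinant is zero: the inverse does not exist")
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # The minor: determinant of A with row i and column j crossed out.
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T / d            # transpose, then divide by the determinant

A = [[1, -1, 2],
     [0, 2, -1],
     [1, 0, 1]]
print(inverse_via_complements(A))   # approximately [[-2, -1, 3], [1, 1, -1], [2, 1, -2]]
print(np.allclose(inverse_via_complements(A) @ np.asarray(A, float), np.eye(3)))  # True
```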

Methods for finding the inverse matrix. Consider a square matrix $A$.

We set $\Delta = \det A$.

The square matrix $A$ is called nondegenerate, or nonsingular, if its determinant is nonzero, and degenerate, or singular, if $\Delta = 0$.

A square matrix $B$ is called the inverse of a square matrix $A$ of the same order if their product satisfies $AB = BA = E$, where $E$ is the identity matrix of the same order as $A$ and $B$.

Theorem. For the matrix $A$ to have an inverse, it is necessary and sufficient that its determinant be nonzero.

The inverse of the matrix $A$ is denoted $A^{-1}$, so that $B = A^{-1}$, and it is calculated by the formula

\[ A^{-1} = \frac{1}{\Delta} \begin{pmatrix} A_{11} & A_{21} & \cdots & A_{n1} \\ A_{12} & A_{22} & \cdots & A_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ A_{1n} & A_{2n} & \cdots & A_{nn} \end{pmatrix}, \quad (1) \]

where $A_{ij}$ are the algebraic complements of the elements $a_{ij}$ of the matrix $A$.

Computing $A^{-1}$ by formula (1) for matrices of high order is very laborious, so in practice it is convenient to find $A^{-1}$ by the method of elementary transformations (ET). Any nonsingular matrix $A$ can be reduced to the identity matrix $E$ by ETs of columns only (or rows only). It is convenient to perform the ETs on the matrices $A$ and $E$ simultaneously, writing both matrices side by side, separated by a vertical bar. Note again that when finding the canonical form of a matrix you may use transformations of both rows and columns, but if you need the inverse of a matrix, only rows or only columns may be used in the transformation process.
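A row-based sketch of this ET method in Python (the partial pivoting is my addition for numerical stability, not part of the textbook's description, and the 2x2 test matrix is made up):

```python
import numpy as np

def inverse_via_row_ops(A):
    """Reduce [A | E] to [E | A^{-1}] using elementary row transformations only."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])            # the block matrix [A | E]
    for col in range(n):
        # Choose the largest pivot in the column (partial pivoting).
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("matrix is singular: no inverse")
        M[[col, pivot]] = M[[pivot, col]]    # ET 3: swap two rows
        M[col] /= M[col, col]                # ET 1: scale the pivot row to 1
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col]   # ET 2: subtract a multiple of a row
    return M[:, n:]                          # the right half is A^{-1}

print(inverse_via_row_ops([[2.0, 1.0],
                           [7.0, 4.0]]))     # [[ 4. -1.] [-7.  2.]]
```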

Example 2.10. For the given matrix, find $A^{-1}$.

Solution. We first find the determinant of the matrix $A$: it is nonzero, hence the inverse matrix exists, and we can find it by the formula $A^{-1} = \frac{1}{\Delta} (A_{ij})^{T}$, where $A_{ij}$ ($i, j = 1, 2, 3$) are the algebraic complements of the elements $a_{ij}$ of the original matrix. Computing the nine complements and substituting them into the formula yields $A^{-1}$.

Example 2.11. Using the method of elementary transformations, find $A^{-1}$ for the given matrix $A$.

Solution. We append to the original matrix, on the right, the identity matrix of the same order: $[A \mid E]$. With the help of elementary column transformations we bring the left "half" to the identity matrix, simultaneously performing exactly the same transformations on the right matrix: first we swap the first and second columns; then we add the first column to the third, and the first column multiplied by $-2$ to the second; then we subtract the doubled second column from the first, and the second column multiplied by 6 from the third; then we add the third column to the first and to the second; finally, we multiply the last column by $-1$. The square matrix obtained to the right of the vertical bar is the inverse of the matrix $A$.

Consider the problem of defining the operation inverse to matrix multiplication.

Let $A$ be a square matrix of order $n$. A matrix $A^{-1}$ that, together with the given matrix $A$, satisfies the equalities

\[ A^{-1} \cdot A = A \cdot A^{-1} = E \]

is called the inverse. The matrix $A$ is called invertible if an inverse exists for it, and non-invertible otherwise.

It follows from the definition that if the inverse matrix $A^{-1}$ exists, then it is square, of the same order as $A$. However, not every square matrix has an inverse. If the determinant of the matrix $A$ is zero ($\det A = 0$), then no inverse exists for it. Indeed, applying the theorem on the determinant of a product of matrices to the identity matrix $E = A^{-1} A$, we would obtain a contradiction:

\[ \det E = \det\left( A^{-1} \cdot A \right) = \det\left( A^{-1} \right) \cdot \det A = \det\left( A^{-1} \right) \cdot 0 = 0, \]

since the determinant of the identity matrix equals 1. It turns out that a nonzero determinant is the only condition a square matrix must satisfy for the inverse to exist. Recall that a square matrix whose determinant equals zero is called degenerate (singular), and nondegenerate (nonsingular) otherwise.

Theorem 4.1 on the existence and uniqueness of the inverse matrix. A square matrix

\[ A = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix} \]

whose determinant is nonzero has an inverse matrix, and moreover only one:

\[ A^{-1} = \frac{1}{\det A} \cdot \begin{pmatrix} A_{11} & A_{21} & \cdots & A_{n1} \\ A_{12} & A_{22} & \cdots & A_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ A_{1n} & A_{2n} & \cdots & A_{nn} \end{pmatrix} = \frac{1}{\det A} \cdot A^{+}, \]

where $A^{+}$ is the transpose of the matrix composed of the algebraic complements of the elements of the matrix $A$.

The matrix $A^{+}$ is called the adjugate matrix of the matrix $A$.

Indeed, the matrix $\frac{1}{\det A}\,A^{+}$ exists under the condition $\det A \ne 0$. We must show that it is the inverse of $A$, i.e. that it satisfies the two conditions:

\[ 1)~ A \cdot \left( \frac{1}{\det A} \cdot A^{+} \right) = E; \qquad 2)~ \left( \frac{1}{\det A} \cdot A^{+} \right) \cdot A = E. \]

Let us prove the first equality. By item 4 of Remarks 2.3, it follows from the properties of the determinant that $A A^{+} = \det A \cdot E$. Therefore

\[ A \cdot \left( \frac{1}{\det A} \cdot A^{+} \right) = \frac{1}{\det A} \cdot A A^{+} = \frac{1}{\det A} \cdot \det A \cdot E = E, \]

which is what had to be shown. The second equality is proved similarly. Thus, under the condition $\det A \ne 0$ the matrix $A$ has the inverse

\[ A^{-1} = \frac{1}{\det A} \cdot A^{+}. \]

Let us prove the uniqueness of the inverse matrix by contradiction. Suppose that besides the matrix $A^{-1}$ there exists another inverse matrix $B$ ($B \ne A^{-1}$) such that $AB = E$. Multiplying both sides of this equality on the left by the matrix $A^{-1}$, we obtain $\underbrace{A^{-1}A}_{E} B = A^{-1} E$. Hence $B = A^{-1}$, which contradicts the assumption $B \ne A^{-1}$. Therefore, the inverse matrix is unique.

Notes 4.1

1. It follows from the definition that the matrices $A$ and $A^{-1}$ commute.

2. The inverse of a nondegenerate diagonal matrix is also diagonal (see the sketch after this list):

\[ \Bigl[ \operatorname{diag}(a_{11}, a_{22}, \ldots, a_{nn}) \Bigr]^{-1} = \operatorname{diag}\!\left( \frac{1}{a_{11}},\, \frac{1}{a_{22}},\, \ldots,\, \frac{1}{a_{nn}} \right). \]

3. The inverse of a nondegenerate lower (upper) triangular matrix is lower (upper) triangular.

4. Elementary matrices have inverses, which are also elementary (see item 1 of Remarks 1.11).
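Remark 2 is easy to see numerically; a quick sketch with made-up numbers:

```python
import numpy as np

# The inverse of a nondegenerate diagonal matrix is the diagonal matrix
# of the reciprocals of its entries.
D = np.diag([2.0, 4.0, 5.0])
print(np.allclose(np.linalg.inv(D), np.diag([1/2, 1/4, 1/5])))  # True
```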

Inverse Matrix Properties

The matrix inversion operation has the following properties:

\[ \begin{aligned} 1.&~~ \left( A^{-1} \right)^{-1} = A; \\ 2.&~~ \left( AB \right)^{-1} = B^{-1} A^{-1}; \\ 3.&~~ \left( A^{T} \right)^{-1} = \left( A^{-1} \right)^{T}; \\ 4.&~~ \det\left( A^{-1} \right) = \frac{1}{\det A}; \\ 5.&~~ E^{-1} = E, \end{aligned} \]

provided the operations indicated in equalities 1-4 make sense.

Let us prove property 2: if $A$ and $B$ are nondegenerate square matrices of the same order, then their product has an inverse and $\left( AB \right)^{-1} = B^{-1} A^{-1}$.

Indeed, the determinant of the product $AB$ is not equal to zero, since

\[ \det\left( A \cdot B \right) = \det A \cdot \det B, \quad \text{where } \det A \ne 0,~ \det B \ne 0. \]

Therefore the inverse matrix $\left( AB \right)^{-1}$ exists and is unique. Let us show by definition that the matrix $B^{-1} A^{-1}$ is the inverse of the matrix $AB$. Indeed:

\[ \left( AB \right) \cdot \left( B^{-1} A^{-1} \right) = A \left( B B^{-1} \right) A^{-1} = A E A^{-1} = A A^{-1} = E; \]
\[ \left( B^{-1} A^{-1} \right) \cdot \left( AB \right) = B^{-1} \left( A^{-1} A \right) B = B^{-1} E B = B^{-1} B = E. \]
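These properties are also easy to verify numerically. A sketch using random matrices, which are almost surely nonsingular:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
inv, det = np.linalg.inv, np.linalg.det

print(np.allclose(inv(inv(A)), A))               # property 1
print(np.allclose(inv(A @ B), inv(B) @ inv(A)))  # property 2: note the order!
print(np.allclose(inv(A.T), inv(A).T))           # property 3
print(np.isclose(det(inv(A)), 1 / det(A)))       # property 4
```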

The inverse of a given matrix is a matrix such that multiplying the original matrix by it yields the identity matrix. A necessary and sufficient condition for an inverse matrix to exist is that the determinant of the original matrix be nonzero (which in turn implies that the matrix must be square). If the determinant of a matrix equals zero, the matrix is called degenerate and has no inverse. In higher mathematics inverse matrices are important and are used to solve a number of problems; for example, the matrix method for solving systems of equations is built on finding the inverse matrix. The inverse matrix can be computed in two ways: by the Gauss-Jordan method or via the matrix of algebraic complements. The first involves a large number of elementary transformations inside the matrix, the second the calculation of the determinant and of the algebraic complements of all elements.


This topic is one of the most hated among students. Only determinants are probably worse.

The trick is that the very concept of an inverse element (and I don't just mean matrices now) refers us to the operation of multiplication. Even in the school curriculum multiplication is considered a tricky operation, and matrix multiplication is a separate topic altogether, to which I have devoted a whole section and a video lesson.

We won't go into the details of matrix calculations today. We'll just recall how matrices are denoted, how they are multiplied, and what follows from this.

Repetition: matrix multiplication

First of all, let's agree on notation. A matrix $A$ of size $\left[ m \times n \right]$ is simply a table of numbers with exactly $m$ rows and $n$ columns:

\[ A = \underbrace{\left[ \begin{matrix} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \ldots & \ldots & \ldots & \ldots \\ a_{m1} & a_{m2} & \ldots & a_{mn} \end{matrix} \right]}_{n} \]

In order not to accidentally mix up rows and columns (believe me, on an exam you can confuse a one with a two, let alone a whole row), just take a look at the picture:

Determination of indices for matrix cells

What's happening here? If you place the standard coordinate system $OXY$ in the top left corner and direct the axes so that they cover the whole matrix, then each cell of the matrix can be uniquely associated with coordinates $\left( x; y \right)$: the row number and the column number.

Why is the coordinate system located in the upper left corner? Because it is from there that we begin to read any texts. It's very easy to remember.

Why is the $x$ axis directed downwards and not to the right? Again, everything is simple: take the standard coordinate system (the $x$ axis goes right, the $y$ axis goes up) and rotate it so that it encloses the matrix. This is a 90-degree clockwise rotation, and its result is shown in the picture.

In general, we figured out how to determine the indices of the matrix elements. Now let's deal with multiplication.

Definition. Matrices $A = \left[ m \times n \right]$ and $B = \left[ n \times k \right]$, where the number of columns of the first equals the number of rows of the second, are called consistent.

In exactly that order. One can get confused here: the matrices $A$ and $B$ form an ordered pair $\left( A; B \right)$, and if they are consistent in this order, it is not at all necessary that $B$ and $A$, i.e. the pair $\left( B; A \right)$, are also consistent.

Only consistent matrices can be multiplied.

Definition. The product of consistent matrices $A = \left[ m \times n \right]$ and $B = \left[ n \times k \right]$ is the new matrix $C = \left[ m \times k \right]$ whose elements $c_{ij}$ are calculated by the formula:

\[ c_{ij} = \sum\limits_{k=1}^{n} a_{ik} \cdot b_{kj} \]

In other words: to get the element $c_{ij}$ of the matrix $C = A \cdot B$, take the $i$-th row of the first matrix and the $j$-th column of the second matrix, multiply their elements pairwise, and add up the results.
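This definition translates directly into a triple loop. A sketch (not optimized, just a literal reading of the formula):

```python
import numpy as np

def matmul(A, B):
    """c_ij = sum over t of a_it * b_tj (t is the dummy summation index)."""
    m, n = A.shape
    n2, k = B.shape
    assert n == n2, "matrices are not consistent"
    C = np.zeros((m, k))
    for i in range(m):          # row of the first matrix
        for j in range(k):      # column of the second matrix
            for t in range(n):  # pairwise products, then the sum
                C[i, j] += A[i, t] * B[t, j]
    return C

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # size [3 x 2]
B = np.array([[7.0, 8.0, 9.0], [0.0, 1.0, 2.0]])    # size [2 x 3]
print(np.allclose(matmul(A, B), A @ B))             # True; the result is [3 x 3]
```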

Yes, that's such a harsh definition. Several facts immediately follow from it:

  1. Matrix multiplication, generally speaking, is non-commutative: $A \cdot B \ne B \cdot A$;
  2. However, multiplication is associative: $\left( A \cdot B \right) \cdot C = A \cdot \left( B \cdot C \right)$;
  3. And even distributive: $\left( A + B \right) \cdot C = A \cdot C + B \cdot C$;
  4. And distributive again: $A \cdot \left( B + C \right) = A \cdot B + A \cdot C$.

The distributivity of multiplication had to be stated separately for the left and right sum factors precisely because of the non-commutativity of the multiplication operation.

If it nevertheless turns out that $A \cdot B = B \cdot A$, such matrices are called commuting (permutable).

Among all matrices there are special ones: those that, when multiplied by any matrix $A$, again give $A$:

Definition. A matrix $E$ is called the identity matrix if $A \cdot E = A$ and $E \cdot A = A$. In the case of a square matrix $A$ we can write:

\[ A \cdot E = E \cdot A = A \]

The identity matrix is a frequent guest when solving matrix equations, and in general a frequent guest in the world of matrices. :)

And it is because of this $E$ that someone invented all the stuff that will be described next.

What is an inverse matrix

Since matrix multiplication is a very laborious operation (you have to multiply a bunch of rows and columns), the concept of an inverse matrix is also not the most trivial one, and it requires some explanation.

Key definition

Well, it's time to learn the truth.

Definition. A matrix $B$ is called the inverse of a matrix $A$ if

\[ A \cdot B = B \cdot A = E \]

The inverse matrix is denoted $A^{-1}$ (not to be confused with a power!), so the definition can be rewritten as follows:

\[ A \cdot A^{-1} = A^{-1} \cdot A = E \]

It would seem that everything is extremely simple and clear. But when analyzing such a definition, several questions immediately arise:

  1. Does the inverse matrix always exist? And if not always, then how do we determine when it exists and when it does not?
  2. Who says there is exactly one such matrix? What if for some initial matrix $A$ there is a whole crowd of inverses?
  3. What do all these inverses look like? And how, in fact, do we compute them?

As for the calculation algorithms, we will talk about them a little later. But we will answer the remaining questions right now, formulating them as separate statements (lemmas).

Basic properties

Let's start with what the matrix $A$ must look like in order to have an inverse $A^{-1}$. We will now make sure that both of these matrices must be square and of the same size: $\left[ n \times n \right]$.

Lemma 1. Given a matrix $A$ and its inverse $A^{-1}$, both of these matrices are square and of the same order $n$.

Proof. It's simple. Let the matrix $A = \left[ m \times n \right]$ and $A^{-1} = \left[ a \times b \right]$. Since the product $A \cdot A^{-1} = E$ exists by definition, the matrices $A$ and $A^{-1}$ are consistent in that order:

\[ \begin{aligned} & \left[ m \times n \right] \cdot \left[ a \times b \right] = \left[ m \times b \right] \\ & n = a \end{aligned} \]

This is a direct consequence of the matrix multiplication algorithm: the coefficients $n$ and $a$ are "transitional" and must be equal.

At the same time, the reverse product is also defined: $A^{-1} \cdot A = E$, therefore the matrices $A^{-1}$ and $A$ are also consistent in that order:

\[ \begin{aligned} & \left[ a \times b \right] \cdot \left[ m \times n \right] = \left[ a \times n \right] \\ & b = m \end{aligned} \]

Thus, without loss of generality we may assume that $A = \left[ m \times n \right]$ and $A^{-1} = \left[ n \times m \right]$. However, by definition $A \cdot A^{-1} = A^{-1} \cdot A$, so the sizes of the matrices coincide exactly:

\[ \begin{aligned} & \left[ m \times n \right] = \left[ n \times m \right] \\ & m = n \end{aligned} \]

So it turns out that all three matrices $A$, $A^{-1}$ and $E$ are square of size $\left[ n \times n \right]$. The lemma is proved.

Well, that's already not bad. We see that only square matrices can be invertible. Now let's make sure that the inverse is always unique.

Lemma 2. Given a matrix $A$ and its inverse $A^{-1}$, this inverse is unique.

Proof. By contradiction: let the matrix $A$ have at least two inverses, $B$ and $C$. Then, according to the definition, the following equalities hold:

\[ \begin{aligned} & A \cdot B = B \cdot A = E; \\ & A \cdot C = C \cdot A = E. \end{aligned} \]

From Lemma 1 we conclude that all four matrices $A$, $B$, $C$ and $E$ are square of the same order $\left[ n \times n \right]$. Therefore the product $B \cdot A \cdot C$ is defined:

Since matrix multiplication is associative (but not commutative!), we can write:

\[ \begin{aligned} & B \cdot A \cdot C = \left( B \cdot A \right) \cdot C = E \cdot C = C; \\ & B \cdot A \cdot C = B \cdot \left( A \cdot C \right) = B \cdot E = B; \\ & B \cdot A \cdot C = C = B \Rightarrow B = C. \end{aligned} \]

Only one possibility remains: the two copies of the inverse matrix are equal. The lemma is proved.

The above reasoning repeats almost word for word the proof of the uniqueness of the reciprocal for all real numbers $b \ne 0$. The only essential addition is taking into account the dimensions of the matrices.

However, we still know nothing about whether every square matrix is invertible. Here the determinant comes to our aid: it is a key characteristic of all square matrices.

Lemma 3. Given a matrix $A$: if its inverse $A^{-1}$ exists, then the determinant of the original matrix is nonzero:

\[ \left| A \right| \ne 0 \]

Proof. We already know that $A$ and $A^{-1}$ are square matrices of size $\left[ n \times n \right]$. Therefore for each of them we can compute the determinant: $\left| A \right|$ and $\left| A^{-1} \right|$. And the determinant of a product equals the product of the determinants:

\[ \left| A \cdot B \right| = \left| A \right| \cdot \left| B \right| \Rightarrow \left| A \cdot A^{-1} \right| = \left| A \right| \cdot \left| A^{-1} \right| \]

But by definition $A \cdot A^{-1} = E$, and the determinant of $E$ is always equal to 1, therefore

\[ \begin{aligned} & A \cdot A^{-1} = E; \\ & \left| A \cdot A^{-1} \right| = \left| E \right|; \\ & \left| A \right| \cdot \left| A^{-1} \right| = 1. \end{aligned} \]

The product of two numbers equals one only if each of these numbers is nonzero:

\[ \left| A \right| \ne 0; \quad \left| A^{-1} \right| \ne 0. \]

So it turns out that $\left| A \right| \ne 0$. The lemma is proved.

In fact, this requirement is quite logical. We will now analyze the algorithm for finding the inverse matrix, and it will become completely clear why, with a zero determinant, no inverse matrix can exist in principle.

But first, let's formulate an "auxiliary" definition:

Definition. A degenerate matrix is a square matrix of size $\left[ n \times n \right]$ whose determinant is zero.

Thus, we can assert that every invertible matrix is ​​non-degenerate.

How to find the inverse of a matrix

We will now consider a universal algorithm for finding inverse matrices. In general, there are two generally accepted algorithms; today we will also get to the second one.

The one discussed now is very efficient for matrices of size $\left[ 2 \times 2 \right]$ and, partially, of size $\left[ 3 \times 3 \right]$. But starting from size $\left[ 4 \times 4 \right]$ it is better not to use it. Why: you will soon see for yourself.

Algebraic complements

Get ready. There will be pain now. No, do not worry: a beautiful nurse in a skirt and lace stockings is not coming to give you an injection. Everything is much more prosaic: algebraic complements and Her Majesty the Adjugate Matrix are coming to you.

Let's start with the main thing. Let there be a square matrix $A = \left[ n \times n \right]$ whose elements are named $a_{ij}$. Then for each such element an algebraic complement can be defined:

Definition. The algebraic complement $A_{ij}$ to the element $a_{ij}$ located in the $i$-th row and $j$-th column of the matrix $A = \left[ n \times n \right]$ is a construction of the form

\[ A_{ij} = \left( -1 \right)^{i+j} \cdot M_{ij}^{*} \]

where $M_{ij}^{*}$ is the determinant of the matrix obtained from the original $A$ by deleting that same $i$-th row and $j$-th column.

Once again. The algebraic complement to the matrix element with coordinates $\left( i; j \right)$ is denoted $A_{ij}$ and is calculated according to the scheme:

  1. First, delete the $i$-th row and the $j$-th column from the original matrix. We get a new square matrix, and we denote its determinant by $M_{ij}^{*}$.
  2. Then multiply this determinant by $\left( -1 \right)^{i+j}$. At first this expression may seem mind-boggling, but in fact we are simply working out the sign in front of $M_{ij}^{*}$.
  3. Count, and we get a specific number. That is, the algebraic complement is precisely a number, not some new matrix or anything of the sort.

The matrix $M_{ij}^{*}$ itself is called the complementary minor to the element $a_{ij}$. In this sense, the above definition of the algebraic complement is a special case of the more complex definition that we considered in the lesson about the determinant.
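In Python the whole scheme is a few lines. A sketch (note: the code uses 0-based indices, whose sum $i + j$ has the same parity as in the 1-based formula); the test matrix is the "two by two" from the first worked example below:

```python
import numpy as np

def algebraic_complement(A, i, j):
    """A_ij = (-1)^(i+j) * M*_ij, with M*_ij the complementary minor."""
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)  # cross out row i, column j
    return (-1) ** (i + j) * np.linalg.det(minor)

A = np.array([[3.0, 1.0],
              [5.0, 2.0]])
print(algebraic_complement(A, 0, 0))   #  2.0  (this is A_11)
print(algebraic_complement(A, 0, 1))   # -5.0  (this is A_12)
```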

Important note. Actually, in "adult" mathematics, algebraic complements are defined as follows:

  1. Take $k$ rows and $k$ columns in a square matrix. At their intersection we get a matrix of size $\left[ k \times k \right]$; its determinant is called a minor of order $k$ and is denoted $M_{k}$.
  2. Then delete these "chosen" $k$ rows and $k$ columns. Once again we get a square matrix; its determinant is called the complementary minor and is denoted $M_{k}^{*}$.
  3. Multiply $M_{k}^{*}$ by $\left( -1 \right)^{t}$, where $t$ is (attention now!) the sum of the numbers of all the selected rows and columns. This is the algebraic complement.

Take a look at the third step: there is actually a sum of $2k$ terms! Another thing is that for $k = 1$ we get only 2 terms; these are the familiar $i + j$, the "coordinates" of the element $a_{ij}$ for which we are looking for the algebraic complement.

Thus, today we use a slightly simplified definition. But as we will see later, it will be more than enough. The following is much more important:

Definition. The adjugate matrix $S$ of a square matrix $A = \left[ n \times n \right]$ is the new matrix of size $\left[ n \times n \right]$ obtained from $A$ by replacing the elements $a_{ij}$ with the algebraic complements $A_{ij}$:

\[ A = \left[ \begin{matrix} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \ldots & \ldots & \ldots & \ldots \\ a_{n1} & a_{n2} & \ldots & a_{nn} \end{matrix} \right] \Rightarrow S = \left[ \begin{matrix} A_{11} & A_{12} & \ldots & A_{1n} \\ A_{21} & A_{22} & \ldots & A_{2n} \\ \ldots & \ldots & \ldots & \ldots \\ A_{n1} & A_{n2} & \ldots & A_{nn} \end{matrix} \right] \]

The first thought that arises at the moment of grasping this definition is "how much we will have to count!" Relax: you will have to count, but not that much. :)

Well, this is all very nice, but why is it necessary? Here's why.

The main theorem

Let's go back a little. Remember, Lemma 3 stated that an invertible matrix $A$ is always non-degenerate (that is, its determinant is nonzero: $\left| A \right| \ne 0$).

So, the converse is also true: if the matrix $A$ is not degenerate, then it is always invertible. And there is even a scheme for finding $A^{-1}$. Check it out:

The inverse matrix theorem. Let a square matrix $A = \left[ n \times n \right]$ be given, with nonzero determinant: $\left| A \right| \ne 0$. Then the inverse matrix $A^{-1}$ exists and is calculated by the formula:

\[ A^{-1} = \frac{1}{\left| A \right|} \cdot S^{T} \]

And now the same thing, but in legible handwriting. To find the inverse matrix, you need to:

  1. Calculate the determinant $\left| A \right|$ and make sure it is nonzero.
  2. Construct the adjugate matrix $S$, i.e. count all $n^2$ algebraic complements $A_{ij}$ and put them in place of the elements $a_{ij}$.
  3. Transpose this matrix $S$, and then multiply it by the number $q = \frac{1}{\left| A \right|}$.

And that's it! The inverse matrix $A^{-1}$ has been found. Let's look at examples.

Task. Find the inverse of the matrix:

\[ \left[ \begin{matrix} 3 & 1 \\ 5 & 2 \end{matrix} \right] \]

Solution. Let's check invertibility by calculating the determinant:

\[ \left| A \right| = \left| \begin{matrix} 3 & 1 \\ 5 & 2 \end{matrix} \right| = 3 \cdot 2 - 1 \cdot 5 = 6 - 5 = 1 \]

The determinant is nonzero, hence the matrix is invertible. Let's compose the adjugate matrix by counting the algebraic complements:

\[ \begin{aligned} & A_{11} = \left( -1 \right)^{1+1} \cdot \left| 2 \right| = 2; \\ & A_{12} = \left( -1 \right)^{1+2} \cdot \left| 5 \right| = -5; \\ & A_{21} = \left( -1 \right)^{2+1} \cdot \left| 1 \right| = -1; \\ & A_{22} = \left( -1 \right)^{2+2} \cdot \left| 3 \right| = 3. \end{aligned} \]

Please note: the determinants $\left| 2 \right|$, $\left| 5 \right|$, $\left| 1 \right|$ and $\left| 3 \right|$ are determinants of matrices of size $\left[ 1 \times 1 \right]$, not absolute values. That is, if those determinants had contained negative numbers, the "minus" would not be removed.

In total, the adjugate matrix is $S = \left[ \begin{array}{rr} 2 & -5 \\ -1 & 3 \end{array} \right]$, and therefore

\[ A^{-1} = \frac{1}{\left| A \right|} \cdot S^{T} = \frac{1}{1} \cdot \left[ \begin{array}{rr} 2 & -5 \\ -1 & 3 \end{array} \right]^{T} = \left[ \begin{array}{rr} 2 & -1 \\ -5 & 3 \end{array} \right] \]

So that is all. The problem has been solved.

Answer. $\left[ \begin{array}{rr} 2 & -1 \\ -5 & 3 \end{array} \right]$
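A quick NumPy check of this example, with the numbers taken from the solution above:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [5.0, 2.0]])
A_inv = np.array([[ 2.0, -1.0],
                  [-5.0,  3.0]])

print(A @ A_inv)                             # the identity matrix
print(np.allclose(np.linalg.inv(A), A_inv))  # True
```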

Task. Find the inverse of the matrix:

\[ \left[ \begin{array}{rrr} 1 & -1 & 2 \\ 0 & 2 & -1 \\ 1 & 0 & 1 \end{array} \right] \]

Solution. Again we compute the determinant:

\[ \left| \begin{array}{rrr} 1 & -1 & 2 \\ 0 & 2 & -1 \\ 1 & 0 & 1 \end{array} \right| = \left( 1 \cdot 2 \cdot 1 + \left( -1 \right) \cdot \left( -1 \right) \cdot 1 + 2 \cdot 0 \cdot 0 \right) - \left( 2 \cdot 2 \cdot 1 + \left( -1 \right) \cdot 0 \cdot 1 + 1 \cdot \left( -1 \right) \cdot 0 \right) = \left( 2 + 1 + 0 \right) - \left( 4 + 0 + 0 \right) = -1 \ne 0. \]

The determinant is nonzero, so the matrix is invertible. But now comes the toughest part: we have to count as many as 9 (nine, damn them!) algebraic complements, and each of them contains a $\left[ 2 \times 2 \right]$ determinant. Off we go:

\[ \begin{matrix} A_{11} = \left( -1 \right)^{1+1} \cdot \left| \begin{matrix} 2 & -1 \\ 0 & 1 \end{matrix} \right| = 2; \\ A_{12} = \left( -1 \right)^{1+2} \cdot \left| \begin{matrix} 0 & -1 \\ 1 & 1 \end{matrix} \right| = -1; \\ A_{13} = \left( -1 \right)^{1+3} \cdot \left| \begin{matrix} 0 & 2 \\ 1 & 0 \end{matrix} \right| = -2; \\ \ldots \\ A_{33} = \left( -1 \right)^{3+3} \cdot \left| \begin{matrix} 1 & -1 \\ 0 & 2 \end{matrix} \right| = 2. \end{matrix} \]

In short, the adjugate matrix looks like this:

\[ S = \left[ \begin{array}{rrr} 2 & -1 & -2 \\ 1 & -1 & -1 \\ -3 & 1 & 2 \end{array} \right] \]

Therefore, the inverse matrix is:

\[ A^{-1} = \frac{1}{\left| A \right|} \cdot S^{T} = \frac{1}{-1} \cdot \left[ \begin{array}{rrr} 2 & 1 & -3 \\ -1 & -1 & 1 \\ -2 & -1 & 2 \end{array} \right] = \left[ \begin{array}{rrr} -2 & -1 & 3 \\ 1 & 1 & -1 \\ 2 & 1 & -2 \end{array} \right] \]

Well, that's all. Here is the answer.

Answer. $\left[ \begin{array}{rrr} -2 & -1 & 3 \\ 1 & 1 & -1 \\ 2 & 1 & -2 \end{array} \right]$

As you can see, at the end of each example, we ran a check. In this regard, an important note:

Don't be lazy to check. Multiply the original matrix by the found inverse: you should get $E$.

This check is much easier and faster than looking for an error in further calculations, when, for example, you are solving a matrix equation.

Alternative way

As I said, the inverse matrix theorem works great for sizes $\left[ 2 \times 2 \right]$ and $\left[ 3 \times 3 \right]$ (in the latter case it is already not so "great"), but for large matrices real sadness begins.

But do not worry: there is an alternative algorithm with which the inverse can be found calmly even for a $\left[ 10 \times 10 \right]$ matrix. But, as is often the case, we need a little theoretical background to consider this algorithm.

Elementary transformations

Among the various transformations of a matrix there are several special ones, called elementary. There are exactly three such transformations:

  1. Multiplication. You can take the $i$-th row (column) and multiply it by any number $k \ne 0$;
  2. Addition. You can add to the $i$-th row (column) any other $j$-th row (column) multiplied by any number $k \ne 0$ (of course, $k = 0$ is also allowed, but what's the point? Nothing would change);
  3. Rearrangement. Take the $i$-th and $j$-th rows (columns) and swap them.

Why these transformations are called elementary (for large matrices they do not look so elementary) and why there are only three of them are questions beyond the scope of today's lesson. Therefore, we will not go into details.

Another thing is important: we will have to perform all these manipulations on the augmented matrix. Yes, yes, you heard right. Now there will be one more definition, the last one in today's lesson.

Augmented matrix

Surely at school you solved systems of equations by the addition method: subtract one row from another, multiply some row by a number, and so on.

So: now everything will be the same, but already "in an adult way." Ready?

Definition. Let a matrix $A = \left[ n \times n \right]$ and an identity matrix $E$ of the same size $n$ be given. Then the augmented matrix $\left[ A \mid E \right]$ is the new $\left[ n \times 2n \right]$ matrix that looks like this:

\[ \left[ A \mid E \right] = \left[ \begin{array}{rrrr|rrrr} a_{11} & a_{12} & \ldots & a_{1n} & 1 & 0 & \ldots & 0 \\ a_{21} & a_{22} & \ldots & a_{2n} & 0 & 1 & \ldots & 0 \\ \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\ a_{n1} & a_{n2} & \ldots & a_{nn} & 0 & 0 & \ldots & 1 \end{array} \right] \]

In short, we take the matrix $A$, append the identity matrix $E$ of the right size to it on the right, and separate them with a vertical bar for beauty. That's the augmented matrix. :)

What's the catch? Here's what:

Theorem. Let the matrix $A$ be invertible. Consider the augmented matrix $\left[ A \mid E \right]$. If, using elementary row transformations, we bring it to the form $\left[ E \mid B \right]$, i.e. by multiplying, subtracting and rearranging rows we obtain from $A$ the matrix $E$ on the left, then the matrix $B$ that appears on the right is the inverse of $A$:

\[ \left[ A \mid E \right] \to \left[ E \mid B \right] \Rightarrow B = A^{-1} \]

It's that simple! In short, the algorithm for finding the inverse matrix looks like this (a sketch of it in code follows right after the list):

  1. Write down the augmented matrix $\left[ A \mid E \right]$;
  2. Perform elementary row transformations until $E$ appears in place of $A$;
  3. Of course, something will also appear on the right: some matrix $B$. This will be the inverse;
  4. PROFIT! :)
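If you prefer exact arithmetic to floating point, SymPy can run exactly this algorithm. A sketch using the matrix from the first task below:

```python
from sympy import Matrix, eye

A = Matrix([[1, 5, 1],
            [3, 2, 1],
            [6, -2, 1]])
augmented = A.row_join(eye(3))  # the augmented matrix [A | E]
reduced, _ = augmented.rref()   # row-reduce until the left half becomes E
print(reduced[:, 3:])           # Matrix([[4, -7, 3], [3, -5, 2], [-18, 32, -13]])
```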

Of course, this is much easier said than done. So let's look at a couple of examples, of sizes $\left[ 3 \times 3 \right]$ and $\left[ 4 \times 4 \right]$.

Task. Find the inverse of the matrix:

\[ \left[ \begin{array}{rrr} 1 & 5 & 1 \\ 3 & 2 & 1 \\ 6 & -2 & 1 \end{array} \right] \]

Solution. We compose the augmented matrix:

\[ \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 3 & 2 & 1 & 0 & 1 & 0 \\ 6 & -2 & 1 & 0 & 0 & 1 \end{array} \right] \]

Since the last column of the original matrix is filled with ones, let's subtract the first row from the rest:

\[ \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 3 & 2 & 1 & 0 & 1 & 0 \\ 6 & -2 & 1 & 0 & 0 & 1 \end{array} \right] \begin{matrix} \downarrow \\ -1 \\ -1 \end{matrix} \to \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 5 & -7 & 0 & -1 & 0 & 1 \end{array} \right] \]

There are no more ones, except in the first row. But we do not touch it, otherwise the freshly removed ones would begin to "multiply" in the third column.

But we can subtract the second row twice from the last one; this produces a one in the lower left corner:

\[ \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 5 & -7 & 0 & -1 & 0 & 1 \end{array} \right] \begin{matrix} \ \\ \downarrow \\ -2 \end{matrix} \to \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 1 & -1 & 0 & 1 & -2 & 1 \end{array} \right] \]

Now we can subtract the last row from the first, and twice from the second; this "zeroes out" the first column:

\[ \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 1 & -1 & 0 & 1 & -2 & 1 \end{array} \right] \begin{matrix} -1 \\ -2 \\ \uparrow \end{matrix} \to \left[ \begin{array}{rrr|rrr} 0 & 6 & 1 & 0 & 2 & -1 \\ 0 & -1 & 0 & -3 & 5 & -2 \\ 1 & -1 & 0 & 1 & -2 & 1 \end{array} \right] \]

Multiply the second row by $-1$, and then subtract it 6 times from the first and add it once to the last:

\[ \left[ \begin{array}{rrr|rrr} 0 & 6 & 1 & 0 & 2 & -1 \\ 0 & -1 & 0 & -3 & 5 & -2 \\ 1 & -1 & 0 & 1 & -2 & 1 \end{array} \right] \begin{matrix} \ \\ \left| \cdot \left( -1 \right) \right. \\ \ \end{matrix} \to \left[ \begin{array}{rrr|rrr} 0 & 6 & 1 & 0 & 2 & -1 \\ 0 & 1 & 0 & 3 & -5 & 2 \\ 1 & -1 & 0 & 1 & -2 & 1 \end{array} \right] \begin{matrix} -6 \\ \updownarrow \\ +1 \end{matrix} \to \left[ \begin{array}{rrr|rrr} 0 & 0 & 1 & -18 & 32 & -13 \\ 0 & 1 & 0 & 3 & -5 & 2 \\ 1 & 0 & 0 & 4 & -7 & 3 \end{array} \right] \]

All that remains is to swap rows 1 and 3:

\[ \left[ \begin{array}{rrr|rrr} 1 & 0 & 0 & 4 & -7 & 3 \\ 0 & 1 & 0 & 3 & -5 & 2 \\ 0 & 0 & 1 & -18 & 32 & -13 \end{array} \right] \]

Ready! On the right is the desired inverse matrix.

Answer. $\left[ \begin{array}{rrr} 4 & -7 & 3 \\ 3 & -5 & 2 \\ -18 & 32 & -13 \end{array} \right]$

Task. Find the inverse of the matrix:

\[ \left[ \begin{array}{rrrr} 1 & 4 & 2 & 3 \\ 1 & -2 & 1 & -2 \\ 1 & -1 & 1 & 1 \\ 0 & -10 & -2 & -5 \end{array} \right] \]

Solution. Again we compose the augmented matrix:

\[ \left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 1 & -2 & 1 & -2 & 0 & 1 & 0 & 0 \\ 1 & -1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \end{array} \right] \]

Let's be a little sad for a moment about how much we'll have to count now... and start counting. First, we "zero out" the first column by subtracting row 1 from rows 2 and 3:

\[ \left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 1 & -2 & 1 & -2 & 0 & 1 & 0 & 0 \\ 1 & -1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \end{array} \right] \begin{matrix} \downarrow \\ -1 \\ -1 \\ \ \end{matrix} \to \left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 0 & -6 & -1 & -5 & -1 & 1 & 0 & 0 \\ 0 & -5 & -1 & -2 & -1 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \end{array} \right] \]

We see too many "minuses" in rows 2-4. Multiply all three rows by $-1$, and then "burn out" the third column by subtracting row 3 from the rest:

\[ \left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 0 & -6 & -1 & -5 & -1 & 1 & 0 & 0 \\ 0 & -5 & -1 & -2 & -1 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \end{array} \right] \begin{matrix} \ \\ \left| \cdot \left( -1 \right) \right. \\ \left| \cdot \left( -1 \right) \right. \\ \left| \cdot \left( -1 \right) \right. \end{matrix} \to \left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 0 & 6 & 1 & 5 & 1 & -1 & 0 & 0 \\ 0 & 5 & 1 & 2 & 1 & 0 & -1 & 0 \\ 0 & 10 & 2 & 5 & 0 & 0 & 0 & -1 \end{array} \right] \begin{matrix} -2 \\ -1 \\ \updownarrow \\ -2 \end{matrix} \to \left[ \begin{array}{rrrr|rrrr} 1 & -6 & 0 & -1 & -1 & 0 & 2 & 0 \\ 0 & 1 & 0 & 3 & 0 & -1 & 1 & 0 \\ 0 & 5 & 1 & 2 & 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \end{array} \right] \]

Now it is time to "burn out" the last column of the original matrix: subtract row 4 from the rest:

\[ \left[ \begin{array}{rrrr|rrrr} 1 & -6 & 0 & -1 & -1 & 0 & 2 & 0 \\ 0 & 1 & 0 & 3 & 0 & -1 & 1 & 0 \\ 0 & 5 & 1 & 2 & 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \end{array} \right] \begin{matrix} +1 \\ -3 \\ -2 \\ \uparrow \end{matrix} \to \left[ \begin{array}{rrrr|rrrr} 1 & -6 & 0 & 0 & -3 & 0 & 4 & -1 \\ 0 & 1 & 0 & 0 & 6 & -1 & -5 & 3 \\ 0 & 5 & 1 & 0 & 5 & 0 & -5 & 2 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \end{array} \right] \]

The final push: "burn out" the second column by adding row 2 six times to the first and subtracting it five times from the third:

\[ \left[ \begin{array}{rrrr|rrrr} 1 & -6 & 0 & 0 & -3 & 0 & 4 & -1 \\ 0 & 1 & 0 & 0 & 6 & -1 & -5 & 3 \\ 0 & 5 & 1 & 0 & 5 & 0 & -5 & 2 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \end{array} \right] \begin{matrix} +6 \\ \updownarrow \\ -5 \\ \ \end{matrix} \to \left[ \begin{array}{rrrr|rrrr} 1 & 0 & 0 & 0 & 33 & -6 & -26 & 17 \\ 0 & 1 & 0 & 0 & 6 & -1 & -5 & 3 \\ 0 & 0 & 1 & 0 & -25 & 5 & 20 & -13 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \end{array} \right] \]

And again the identity matrix is on the left, which means the inverse is on the right. :)

Answer. $\left[ \begin{array}{rrrr} 33 & -6 & -26 & 17 \\ 6 & -1 & -5 & 3 \\ -25 & 5 & 20 & -13 \\ -2 & 0 & 2 & -1 \end{array} \right]$