Can the rank of a matrix be equal to zero? Matrix rank: definition, methods of finding, examples, solutions


Let A be a matrix of size m\times n, and let k be a natural number not exceeding m and n: k\leqslant\min(m,n). A minor of order k of the matrix A is the determinant of a k-th order matrix formed by the elements at the intersection of arbitrarily chosen k rows and k columns of the matrix A. When denoting minors, the numbers of the selected rows are indicated as superscripts and those of the selected columns as subscripts, arranged in ascending order.


Example 3.4. Write down minors of different orders of the matrix


A=\begin{pmatrix} 1&2&1&0 \\ 0&2&2&3 \\ 1&4&3&3 \end{pmatrix}.


Solution. The matrix A has size 3\times4. It has: 12 minors of the 1st order, for example the minor M^{3}_{2}=\det(a_{32})=4; 18 minors of the 2nd order, for example M^{12}_{23}=\begin{vmatrix} 2&1 \\ 2&2 \end{vmatrix}=2; 4 minors of the 3rd order, for example


M^{123}_{134}=\begin{vmatrix} 1&1&0 \\ 0&2&3 \\ 1&3&3 \end{vmatrix}=0.
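
For readers who want to check these counts mechanically, here is a minimal sketch (our own illustration, not part of the original text, assuming NumPy is available); the helper name minors is ours.

```python
# Enumerate all k-th order minors of A and verify the counts 12, 18, 4
# from Example 3.4.
from itertools import combinations
import numpy as np

A = np.array([[1, 2, 1, 0],
              [0, 2, 2, 3],
              [1, 4, 3, 3]], dtype=float)

def minors(A, k):
    """All k-th order minors as (row indices, column indices, determinant)."""
    m, n = A.shape
    return [(r, c, np.linalg.det(A[np.ix_(r, c)]))
            for r in combinations(range(m), k)
            for c in combinations(range(n), k)]

for k in (1, 2, 3):
    print(k, len(minors(A, k)))   # prints: 1 12, then 2 18, then 3 4
```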

In a matrix A of size m\times n, a minor of order r is called basic if it is nonzero, while all minors of order (r+1) are zero or do not exist at all.


The rank of a matrix is the order of its basic minor. The zero matrix has no basic minor; therefore, the rank of the zero matrix is, by definition, taken to be zero. The rank of a matrix A is denoted \operatorname{rg}A.
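
A quick numerical check of this convention (a sketch assuming NumPy; matrix_rank computes the rank via the SVD):

```python
import numpy as np

# The zero matrix has no nonzero minor, so its rank is taken to be 0.
print(np.linalg.matrix_rank(np.zeros((3, 4))))   # 0
# Any matrix with at least one nonzero element has rank >= 1.
print(np.linalg.matrix_rank(np.eye(3)))          # 3
```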


Example 3.5. Find all basic minors and the rank of the matrix


A=\begin{pmatrix} 1&2&2&0 \\ 0&2&2&3 \\ 0&0&0&0 \end{pmatrix}.


Solution. All third-order minors of this matrix are equal to zero, since these determinants have a zero third row. Therefore, only a second-order minor located in the first two rows of the matrix can be basic. Going through the 6 possible minors, we select the nonzero ones:


M^{12}_{12}=M^{12}_{13}=\begin{vmatrix} 1&2 \\ 0&2 \end{vmatrix}=2, \quad M^{12}_{24}=M^{12}_{34}=\begin{vmatrix} 2&0 \\ 2&3 \end{vmatrix}=6, \quad M^{12}_{14}=\begin{vmatrix} 1&0 \\ 0&3 \end{vmatrix}=3.


Each of these five minors is basic. Therefore, the rank of the matrix is 2.

Remarks 3.2


1. If all minors of order k of a matrix are equal to zero, then all minors of higher order are also equal to zero. Indeed, expanding a minor of order (k+1) along any row, we obtain the sum of the products of the elements of this row with minors of order k, and these are all equal to zero.


2. The rank of a matrix is equal to the highest order of a nonzero minor of this matrix.


3. If a square matrix is nondegenerate, then its rank is equal to its order. If a square matrix is degenerate, then its rank is less than its order.


4. The notations \operatorname{Rg}A,~\operatorname{rang}A,~\operatorname{rank}A are also used for the rank.


5. The rank of a block matrix is defined as the rank of an ordinary (numeric) matrix, i.e. without regard to its block structure. Moreover, the rank of a block matrix is not less than the ranks of its blocks: \operatorname{rg}(A\mid B)\geqslant\operatorname{rg}A and \operatorname{rg}(A\mid B)\geqslant\operatorname{rg}B, since all minors of the matrix A (or B) are also minors of the block matrix (A\mid B).
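
A small numerical illustration of item 5 (a sketch assuming NumPy):

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 4.]])            # rank 1: the rows are proportional
B = np.array([[0., 1.],
              [1., 0.]])            # rank 2
AB = np.hstack([A, B])              # the block matrix (A | B)

# rg(A|B) is at least the rank of each block.
print(np.linalg.matrix_rank(A),     # 1
      np.linalg.matrix_rank(B),     # 2
      np.linalg.matrix_rank(AB))    # 2
```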

Basic Minor and Matrix Rank Theorems

Consider the main theorems expressing the properties of linear dependence and linear independence of columns (rows) of a matrix.


Theorem 3.1 (on the basic minor). In an arbitrary matrix A, each column (row) is a linear combination of the columns (rows) in which the basic minor is located.


Indeed, without loss of generality we assume that in a matrix A of size m\times n the basic minor is located in the first r rows and the first r columns. Consider the determinant


D=\begin{vmatrix} a_{11}&\cdots&a_{1r}&a_{1k} \\ \vdots&\ddots&\vdots&\vdots \\ a_{r1}&\cdots&a_{rr}&a_{rk} \\ a_{s1}&\cdots&a_{sr}&a_{sk} \end{vmatrix},


which is obtained by appending to the basic minor of the matrix A the corresponding elements of the s-th row and the k-th column. Note that for any 1\leqslant s\leqslant m and 1\leqslant k\leqslant n this determinant is zero. If s\leqslant r or k\leqslant r, then the determinant D contains two identical rows or two identical columns. If s>r and k>r, then the determinant D is a minor of order (r+1) and is therefore equal to zero. Expanding the determinant along the last row, we get


a_{s1}\cdot D_{r+1\,1}+\ldots+a_{sr}\cdot D_{r+1\,r}+a_{sk}\cdot D_{r+1\,r+1}=0,


where D_{r+1\,j} are the algebraic complements of the elements of the last row. Note that D_{r+1\,r+1}\ne0, since this is the basic minor. Therefore


a_{sk}=\lambda_1\cdot a_{s1}+\ldots+\lambda_r\cdot a_{sr}, \quad where \quad \lambda_j=-\frac{D_{r+1\,j}}{D_{r+1\,r+1}}, \quad j=1,2,\ldots,r.


Writing down the last equality for s=1,2,\ldots,m, we get

\begin{pmatrix} a_{1k} \\ \vdots \\ a_{mk} \end{pmatrix}=\lambda_1\cdot\begin{pmatrix} a_{11} \\ \vdots \\ a_{m1} \end{pmatrix}+\ldots+\lambda_r\cdot\begin{pmatrix} a_{1r} \\ \vdots \\ a_{mr} \end{pmatrix},


i.e. the k-th column (for any 1\leqslant k\leqslant n) is a linear combination of the columns of the basic minor, as required.


The basic minor theorem serves to prove the following important theorems.

The condition for the determinant to be equal to zero

Theorem 3.2 (a necessary and sufficient condition for the vanishing of the determinant). For the determinant to be equal to zero, it is necessary and sufficient that one of its columns (one of its rows) be a linear combination of the remaining columns (rows).


Indeed, necessity follows from the basic minor theorem. If the determinant of a square matrix of order n is equal to zero, then its rank is less than n, i.e. at least one column is not included in the basic minor. Then this selected column, by Theorem 3.1, is a linear combination of the columns in which the basic minor is located. Adding, if necessary, other columns with zero coefficients to this combination, we obtain that the selected column is a linear combination of the remaining columns of the matrix. Sufficiency follows from the properties of the determinant. If, for example, the last column A_n of the determinant \det(A_1~A_2~\cdots~A_n) is expressed linearly in terms of the rest,


A_n=\lambda_1\cdot A_1+\lambda_2\cdot A_2+\ldots+\lambda_{n-1}\cdot A_{n-1},


then adding to A_n the column A_1 multiplied by (-\lambda_1), then the column A_2 multiplied by (-\lambda_2), and so on up to the column A_{n-1} multiplied by (-\lambda_{n-1}), we obtain the determinant \det(A_1~\cdots~A_{n-1}~o) with a zero column, which is equal to zero (property 2 of the determinant).
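
Theorem 3.2 is easy to test numerically (a sketch assuming NumPy): making one column a linear combination of the others forces the determinant to vanish.

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [0., 1., 3.],
              [2., 0., 1.]])
print(np.linalg.det(A))          # about 13: the columns are independent

A[:, 2] = 2 * A[:, 0] - A[:, 1]  # make A_3 = 2*A_1 - A_2
print(np.linalg.det(A))          # 0 (up to rounding): columns are dependent
```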

Matrix rank invariance under elementary transformations

Theorem 3.3 (on rank invariance under elementary transformations). Elementary transformations of the columns (rows) of the matrix do not change its rank.


Indeed, let \operatorname{rg}A=r. Suppose that as a result of one elementary transformation of the columns of the matrix A we obtained the matrix A'. If a transformation of type I was performed (permutation of two columns), then any minor of order (r+1) of the matrix A' is either equal to the corresponding minor of order (r+1) of the matrix A, or differs from it in sign (property 3 of the determinant). If a transformation of type II was performed (multiplication of a column by a number \lambda\ne0), then any minor of order (r+1) of the matrix A' is either equal to the corresponding minor of order (r+1) of the matrix A, or differs from it by the factor \lambda\ne0 (property 6 of the determinant). If a transformation of type III was performed (addition to one column of another column multiplied by the number \lambda), then any minor of order (r+1) of the matrix A' is either equal to the corresponding minor of order (r+1) of the matrix A (property 9 of the determinant), or is equal to the sum of two minors of order (r+1) of the matrix A (property 8 of the determinant). Therefore, under an elementary transformation of any type, all minors of order (r+1) of the matrix A' are equal to zero, since all minors of order (r+1) of the matrix A are equal to zero; that is, the rank cannot increase. Since transformations inverse to elementary ones are also elementary, the rank of a matrix cannot decrease under elementary transformations of columns either, i.e. it does not change. Similarly it is proved that the rank of a matrix does not change under elementary transformations of rows.


Corollary 1. If one row (column) of a matrix is a linear combination of its other rows (columns), then this row (column) can be deleted from the matrix without changing its rank.


Indeed, such a row can be made zero using elementary transformations, and a zero row cannot be included in the basic minor.


Corollary 2. If a matrix is reduced to the simplest form (1.7), then


\operatorname{rg}A=\operatorname{rg}\Lambda=r.


Indeed, the matrix of the simplest form (1.7) has a basic minor of order r.


Corollary 3. Any nondegenerate square matrix is elementary; in other words, any nondegenerate square matrix is equivalent to the identity matrix of the same order.


Indeed, if A is a nondegenerate square matrix of order n, then \operatorname{rg}A=n (see item 3 of Remarks 3.2). Therefore, reducing the matrix A by elementary transformations to the simplest form (1.7), we obtain the identity matrix \Lambda=E_n, since \operatorname{rg}A=\operatorname{rg}\Lambda=n (see Corollary 2). Consequently, the matrix A is equivalent to the identity matrix E_n and can be obtained from it as a result of a finite number of elementary transformations. This means that the matrix A is elementary.

Theorem 3.4 (on the rank of a matrix). The rank of a matrix is equal to the maximum number of linearly independent rows of this matrix.


Indeed, let \operatorname{rg}A=r. Then the matrix A has r linearly independent rows: the rows in which the basic minor is located. If they were linearly dependent, then this minor would be equal to zero by Theorem 3.2, and the rank of the matrix A would not be equal to r. Let us show that r is the maximum number of linearly independent rows, that is, any p rows are linearly dependent for p>r. Indeed, form a matrix B from these p rows. Since the matrix B is a part of the matrix A, we have \operatorname{rg}B\leqslant\operatorname{rg}A=r<p.

Hence, at least one row of the matrix B is not included in the basic minor of this matrix. Then, by the basic minor theorem, it is equal to a linear combination of the rows in which the basic minor is located. Therefore, the rows of the matrix B are linearly dependent. Thus, the matrix A contains at most r linearly independent rows.


Corollary 1. The maximum number of linearly independent rows in a matrix is equal to the maximum number of linearly independent columns:


\operatorname{rg}A=\operatorname{rg}A^T.


This statement follows from Theorem 3.4 if we apply it to the rows of the transposed matrix and take into account that minors do not change under transposition (property 1 of the determinant).


Corollary 2. Under elementary transformations of the rows of a matrix, the linear dependence (or linear independence) of any system of columns of this matrix is preserved.


Indeed, choose any k columns of the given matrix A and compose the matrix B from them. Suppose that as a result of elementary transformations of the rows of the matrix A the matrix A' was obtained, and as a result of the same transformations of the rows of the matrix B the matrix B' was obtained. By Theorem 3.3, \operatorname{rg}B'=\operatorname{rg}B. Therefore, if the columns of the matrix B were linearly independent, i.e. k=\operatorname{rg}B (see Corollary 1), then the columns of the matrix B' are also linearly independent, since k=\operatorname{rg}B'. If the columns of the matrix B were linearly dependent (k>\operatorname{rg}B), then the columns of the matrix B' are also linearly dependent (k>\operatorname{rg}B'). Consequently, for any columns of the matrix A, linear dependence or linear independence is preserved under elementary transformations of rows.


Remarks 3.3


1. By virtue of Corollary 1 of Theorem 3.4, the property of columns indicated in Corollary 2 is also valid for any system of rows of a matrix if elementary transformations are performed only on its columns.


2. Corollary 3 of Theorem 3.3 can be refined as follows: any non-degenerate square matrix, using elementary transformations of only its rows (or only its columns), can be reduced to the identity matrix of the same order.


Indeed, using only elementary row transformations, any matrix A can be reduced to the simplified form \Lambda (Fig. 1.5) (see Theorem 1.1). Since the matrix A is nondegenerate (\det A\ne0), its columns are linearly independent. Hence, the columns of the matrix \Lambda are also linearly independent (Corollary 2 of Theorem 3.4). Therefore, the simplified form \Lambda of the nondegenerate matrix A coincides with its simplest form (Fig. 1.6) and is the identity matrix \Lambda=E (see Corollary 3 of Theorem 3.3). Thus, by transforming only the rows of a nondegenerate matrix, it can be reduced to the identity matrix. Similar reasoning is valid for elementary transformations of the columns of a nondegenerate matrix.

The rank of the product and the sum of matrices

Theorem 3.5 (on the rank of a product of matrices). The rank of a product of matrices does not exceed the ranks of the factors:


\operatorname{rg}(A\cdot B)\leqslant\min\{\operatorname{rg}A,\,\operatorname{rg}B\}.


Indeed, let the matrices A and B have sizes m\times p and p\times n. Append to the matrix A the matrix C=AB, forming the block matrix (A\mid C). Of course, \operatorname{rg}C\leqslant\operatorname{rg}(A\mid C), since C is a part of the matrix (A\mid C) (see item 5 of Remarks 3.2). Note that each column C_j, by the definition of matrix multiplication, is a linear combination of the columns A_1,A_2,\ldots,A_p of the matrix A=(A_1~\cdots~A_p):


C_j=A_1\cdot b_{1j}+A_2\cdot b_{2j}+\ldots+A_p\cdot b_{pj}, \quad j=1,2,\ldots,n.


Such a column can be deleted from the matrix (A\mid C) without changing its rank (Corollary 1 of Theorem 3.3). Crossing out all the columns of the matrix C, we get \operatorname{rg}(A\mid C)=\operatorname{rg}A. Hence, \operatorname{rg}C\leqslant\operatorname{rg}(A\mid C)=\operatorname{rg}A. Similarly, one can prove the inequality \operatorname{rg}C\leqslant\operatorname{rg}B and conclude that the theorem is valid.


Corollary. If A is a nondegenerate square matrix, then \operatorname{rg}(AB)=\operatorname{rg}B and \operatorname{rg}(CA)=\operatorname{rg}C, i.e. the rank of a matrix does not change if it is multiplied on the left or on the right by a nondegenerate square matrix.


Theorem 3.6 (on the rank of a sum of matrices). The rank of a sum of matrices does not exceed the sum of the ranks of the summands:


\operatorname{rg}(A+B)\leqslant\operatorname{rg}A+\operatorname{rg}B.


Indeed, compose the matrix (A+B\mid A\mid B). Note that each column of the matrix A+B is a linear combination of the columns of the matrices A and B. Therefore \operatorname{rg}(A+B\mid A\mid B)=\operatorname{rg}(A\mid B). Considering that the number of linearly independent columns in the matrix (A\mid B) does not exceed \operatorname{rg}A+\operatorname{rg}B, and that \operatorname{rg}(A+B)\leqslant\operatorname{rg}(A+B\mid A\mid B) (see item 5 of Remarks 3.2), we obtain the required inequality.
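
Both inequalities (Theorems 3.5 and 3.6) can be spot-checked on random matrices (a sketch assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(3, 4)).astype(float)
B = rng.integers(-3, 4, size=(4, 5)).astype(float)
C = rng.integers(-3, 4, size=(3, 4)).astype(float)

rg = np.linalg.matrix_rank
assert rg(A @ B) <= min(rg(A), rg(B))   # rank of a product, Theorem 3.5
assert rg(A + C) <= rg(A) + rg(C)       # rank of a sum, Theorem 3.6
print(rg(A), rg(B), rg(A @ B), rg(A + C))
```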

The following matrix transformations are called elementary:

1) permutation of any two rows (or columns),

2) multiplying a row (or column) by a nonzero number,

3) adding to one row (or column) another row (or column) multiplied by some number.

Two matrices are called equivalent if one of them is obtained from the other by a finite set of elementary transformations.

Equivalent matrices are, generally speaking, not equal, but their ranks are equal. If the matrices A and B are equivalent, this is written as A ~ B.

A canonical matrix is a matrix in which, at the beginning of the main diagonal, there are several ones in a row (their number may be zero), and all other elements are equal to zero; for example,

\begin{pmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&0&0 \end{pmatrix}.

By means of elementary transformations of rows and columns, any matrix can be reduced to a canonical one. The rank of a canonical matrix is equal to the number of ones on its main diagonal.
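
The reduction can be sketched in code as follows (our own helper canonical_form, assuming NumPy; the pivots found by row elimination are moved to the leading diagonal block, which is what the column transformations would accomplish):

```python
import numpy as np

def canonical_form(A, tol=1e-12):
    """Return (canonical matrix, rank): r ones on the diagonal, zeros elsewhere."""
    M = np.array(A, dtype=float)
    m, n = M.shape
    r = 0
    for j in range(n):                       # row elimination with pivoting
        piv = r + np.argmax(np.abs(M[r:, j]))
        if abs(M[piv, j]) < tol:
            continue                         # no pivot in this column
        M[[r, piv]] = M[[piv, r]]            # transformation 1: swap rows
        M[r] /= M[r, j]                      # transformation 2: scale the row
        for i in range(m):                   # transformation 3: clear the column
            if i != r:
                M[i] -= M[i, j] * M[r]
        r += 1
        if r == m:
            break
    C = np.zeros((m, n))
    C[:r, :r] = np.eye(r)                    # the canonical matrix of rank r
    return C, r

C, r = canonical_form([[1, 2, 1, 0],
                       [0, 2, 2, 3],
                       [1, 4, 3, 3]])
print(r)    # 2: the number of ones on the main diagonal
print(C)
```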

Example 2. Find the rank of the matrix

A =

and bring it to the canonical form.

Solution. Subtract the first row from the second and interchange these rows:


Now subtract the first row, multiplied by 2 and 5 respectively, from the second and third rows:


then subtract the first row from the third; we get the matrix

B =

which is equivalent to the matrix A, since it is obtained from A by a finite set of elementary transformations. Obviously, the rank of the matrix B is 2, and hence r(A) = 2. The matrix B can easily be reduced to the canonical one. Subtracting the first column, multiplied by suitable numbers, from all subsequent ones, we turn to zero all the elements of the first row except the first, while the elements of the remaining rows do not change. Then, subtracting the second column, multiplied by suitable numbers, from all subsequent ones, we turn to zero all the elements of the second row except the second, and obtain the canonical matrix:


The Kronecker-Capelli theorem is the consistency criterion for a system of linear algebraic equations:

For a linear system to be consistent, it is necessary and sufficient that the rank of the augmented matrix of this system be equal to the rank of its main matrix.

Proof (consistency condition for the system)

Necessity

Let the system be consistent. Then there exist numbers x_1,x_2,\ldots,x_n such that b=x_1A_1+x_2A_2+\ldots+x_nA_n, i.e. the column b of free terms is a linear combination of the columns A_1,A_2,\ldots,A_n of the matrix A. Since the rank of a matrix does not change if a row (column) that is a linear combination of its other rows (columns) is deleted from or appended to it, it follows that \operatorname{rang}A=\operatorname{rang}(A\mid b).

Sufficiency

Let \operatorname{rang}A=\operatorname{rang}(A\mid b). Take some basic minor in the matrix A. Since \operatorname{rang}(A\mid b)=\operatorname{rang}A, it will also be the basic minor of the matrix (A\mid b). Then, by the basic minor theorem, the last column of the matrix (A\mid b) will be a linear combination of the basic columns, that is, of the columns of the matrix A. Therefore, the column of free terms of the system is a linear combination of the columns of the matrix A, and the coefficients of this combination give a solution of the system.

Corollaries

    The number of main variables of the system is equal to the rank of the system.

    A consistent system will be determined (i.e. have a unique solution) if the rank of the system is equal to the number of all its variables.
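
The criterion and its corollaries translate directly into a consistency check (a sketch assuming NumPy; the helper classify is our own):

```python
import numpy as np

def classify(A, b):
    """Classify the system A x = b by the Kronecker-Capelli criterion."""
    rgA = np.linalg.matrix_rank(A)
    rgAb = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rgA != rgAb:
        return "inconsistent"                 # rang A < rang (A | b)
    if rgA == A.shape[1]:
        return "unique solution"              # rank equals number of unknowns
    return "infinitely many solutions"

A = np.array([[1., 1.],
              [2., 2.]])
print(classify(A, np.array([1., 2.])))   # infinitely many solutions
print(classify(A, np.array([1., 3.])))   # inconsistent
```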

Homogeneous system of equations

Proposition 15.2. A homogeneous system of equations is always consistent.

Proof. For this system, the set of numbers x_1=0, x_2=0, \ldots, x_n=0 is a solution.

In this section, we will use the matrix notation for the system: Ax=0.

Proposition 15.3. The sum of solutions of a homogeneous system of linear equations is a solution of this system. A solution multiplied by a number is also a solution.

Proof. Let y and z be solutions of the system, so that Ay=0 and Az=0. Let x=y+z. Then

Ax=A(y+z)=Ay+Az=0+0=0.

Since Ax=0, the column x=y+z is a solution.

Let \lambda be an arbitrary number and x=\lambda y. Then

Ax=A(\lambda y)=\lambda Ay=\lambda\cdot0=0.

Since Ax=0, the column x=\lambda y is a solution.

Corollary 15.1. If a homogeneous system of linear equations has a nonzero solution, then it has infinitely many different solutions.

Indeed, multiplying a nonzero solution by different numbers, we will get different solutions.

Definition 15.5. We say that solutions of the system form a fundamental system of solutions if these columns form a linearly independent system and any solution of the system is a linear combination of them.
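
One standard way to produce such a fundamental system numerically is through the singular value decomposition (a sketch assuming NumPy; the helper fundamental_system is our own): the right singular vectors corresponding to zero singular values span the solution set of Ax = 0.

```python
import numpy as np

def fundamental_system(A, tol=1e-12):
    """Columns form a basis of the solutions of A x = 0 (n - r of them)."""
    A = np.atleast_2d(np.asarray(A, dtype=float))
    _, s, Vt = np.linalg.svd(A)
    r = int(np.sum(s > tol))          # r = rank of A
    return Vt[r:].T                   # remaining right singular vectors

A = np.array([[1., 2., 1., 0.],
              [0., 2., 2., 3.],
              [1., 4., 3., 3.]])      # rank 2, so there are 4 - 2 = 2 solutions
F = fundamental_system(A)
print(F.shape[1])                     # 2 linearly independent solutions
print(np.allclose(A @ F, 0))          # True: each column solves A x = 0
```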

Definition. The rank of a matrix is the maximum number of its linearly independent rows, considered as vectors.

Theorem 1 on the rank of a matrix. The rank of a matrix is the maximum order of a nonzero minor of the matrix.

We have already analyzed the concept of a minor in the lesson on determinants, and now we will generalize it. Take some k rows and some k columns of the matrix, where k is less than the number of rows and columns of the matrix. At the intersection of the chosen rows and columns there stands a matrix of lower order than the original one. The determinant of this matrix is a minor of order k.

Definition. A minor of order (r+1) within which a chosen minor of order r lies is called a bordering minor for the given minor.

The two most commonly used methods of finding the rank of a matrix are the method of bordering minors and the method of elementary transformations (the Gauss method).

The following theorem is used in the bordering minors method.

Theorem 2 on the rank of a matrix. If from the elements of a matrix it is possible to compose a minor of order r not equal to zero, while all the minors of order (r+1) bordering it are equal to zero or do not exist, then the rank of the matrix is r.

In the method of elementary transformations, the following property is used:

If, by elementary transformations, a trapezoidal matrix equivalent to the original one is obtained, then the rank of this matrix is the number of its rows, excluding the rows consisting entirely of zeros.

Finding the rank of a matrix by the bordering minors method

A bordering minor is a minor of a higher order relative to the given one, provided that this higher-order minor contains the given minor.

For example, given the matrix

Let's take a minor

the bordering minors will be the following:

The algorithm for finding the rank of a matrix is as follows (a code sketch follows the list).

1. Find nonzero minors of the second order. If all the second-order minors are equal to zero, then the rank of the matrix is equal to one (r = 1).

2. If there is at least one second-order minor that is not equal to zero, then compose the bordering third-order minors. If all the bordering minors of the third order are equal to zero, then the rank of the matrix is equal to two (r = 2).

3. If at least one of the bordering minors of the third order is not equal to zero, then compose the bordering fourth-order minors. If all the bordering minors of the fourth order are equal to zero, then the rank of the matrix is three (r = 3).

4. Continue as long as the size of the matrix allows.
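
As promised, here is a brute-force code sketch (our own helper rank_by_minors, assuming NumPy). For simplicity it checks all minors, from the largest order down, rather than only the bordering ones; this matches Theorem 1 and is practical only for small matrices.

```python
from itertools import combinations
import numpy as np

def rank_by_minors(A, tol=1e-9):
    """Rank = the highest order of a nonzero minor (Theorem 1)."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    for k in range(min(m, n), 0, -1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                if abs(np.linalg.det(A[np.ix_(rows, cols)])) > tol:
                    return k          # found a nonzero k-th order minor
    return 0                          # only the zero matrix gets here

print(rank_by_minors([[1, 2, 3],
                      [2, 4, 6],
                      [0, 1, 1]]))   # 2: the 3rd-order determinant is zero
```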

Example 1. Find the rank of a matrix

.

Solution. We take a second-order minor that is not equal to zero.

We border it. There will be four bordering minors:


Thus, all the bordering minors of the third order are equal to zero; therefore, the rank of this matrix is equal to two (r = 2).

Example 2. Find the rank of a matrix

Solution. The rank of this matrix is 1, since all the second-order minors of this matrix are equal to zero (here, as with the bordering minors in the next two examples, readers are invited to verify this for themselves, possibly using the rules for calculating determinants), while among the first-order minors, that is, among the elements of the matrix, there are nonzero ones.

Example 3. Find the rank of a matrix

Solution. A second-order minor of this matrix is nonzero, while all the third-order minors of this matrix are equal to zero. Therefore, the rank of this matrix is two.

Example 4. Find the rank of a matrix

Solution. The rank of this matrix is 3, since the only third-order minor of this matrix is equal to 3, i.e. nonzero.

Finding the rank of a matrix by the method of elementary transformations (Gauss's method)

Already in Example 1 it can be seen that determining the rank of a matrix by the method of bordering minors requires calculating a large number of determinants. There is, however, a way to keep the amount of computation to a minimum. This method is based on the use of elementary matrix transformations and is also called the Gauss method.

Elementary matrix transformations are understood as the following operations:

1) multiplying any row or any column of the matrix by a number other than zero;

2) adding to the elements of any row or any column of the matrix the corresponding elements of another row or column, multiplied by the same number;

3) interchanging two rows or columns of the matrix;

4) removal of "zero" rows, that is, those whose elements are all equal to zero;

5) deletion of all proportional rows except one.

Theorem. An elementary transformation does not change the rank of a matrix. In other words, if we pass from the matrix A to the matrix B using elementary transformations, then r(A) = r(B).
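
A code sketch of the Gauss method (assuming NumPy; the helper rank_gauss is our own): reduce the matrix to a trapezoidal (row echelon) form by elementary row transformations and count the rows that are not entirely zero.

```python
import numpy as np

def rank_gauss(A, tol=1e-12):
    """Rank via reduction to a trapezoidal (row echelon) form."""
    M = np.array(A, dtype=float)
    m, n = M.shape
    r = 0                                    # next pivot row
    for j in range(n):
        piv = r + np.argmax(np.abs(M[r:, j]))
        if abs(M[piv, j]) < tol:
            continue                         # no pivot in this column
        M[[r, piv]] = M[[piv, r]]            # interchange rows (operation 3)
        M[r + 1:] -= np.outer(M[r + 1:, j] / M[r, j], M[r])  # operation 2
        r += 1
        if r == m:
            break
    return r                                 # nonzero rows = rank

print(rank_gauss([[1, 2, 1, 0],
                  [0, 2, 2, 3],
                  [1, 4, 3, 3]]))            # 2
```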

The number r is called the rank of the matrix A if:
1) the matrix A contains a minor of order r different from zero;
2) all minors of order (r + 1) and higher, if they exist, are equal to zero.
In other words, the rank of a matrix is the highest order of a nonzero minor.
Designations: rangA, r A, or r.
It follows from the definition that r is a positive integer. For the null matrix, the rank is considered to be zero.


Definition. Let a matrix of rank r be given. Any minor of the matrix that is different from zero and has order r is called basic, and the rows and columns at whose intersection it stands are called basic rows and columns.
According to this definition, a matrix A can have several basic minors.

The rank of the identity matrix E is n (the number of rows).

Example 1. Two matrices are given, together with their minors M 1 and M 2. Which of the minors can be taken as basic?
Solution. The minor M 1 = 0, so it cannot be basic for either of the matrices. The minor M 2 = -9 ≠ 0 and has order 2, so it can be taken as a basic minor of the matrix A and/or B, provided that they have ranks equal to 2. Since detB = 0 (as a determinant with two proportional columns), rangB = 2, and M 2 can be taken as the basic minor of the matrix B. The rank of the matrix A is 3, since detA = -27 ≠ 0 and, therefore, the order of the basic minor of this matrix must be equal to 3, that is, M 2 is not basic for the matrix A. Note that the matrix A has a single basic minor, which is equal to the determinant of the matrix A.

Theorem (on the basic minor). Any row (column) of a matrix is a linear combination of its basic rows (columns).
Corollaries from the theorem.

  1. Any (r + 1) columns (rows) of a matrix of rank r are linearly dependent.
  2. If the rank of a matrix is ​​less than the number of its rows (columns), then its rows (columns) are linearly dependent. If rangA is equal to the number of its rows (columns), then the rows (columns) are linearly independent.
  3. The determinant of the matrix A is equal to zero if and only if its rows (columns) are linearly dependent.
  4. If to a row (column) of the matrix we add another row (column) multiplied by any number other than zero, then the rank of the matrix will not change.
  5. If a row (column) in the matrix is ​​crossed out, which is a linear combination of other rows (columns), then the rank of the matrix will not change.
  6. The rank of a matrix is ​​equal to the maximum number of its linearly independent rows (columns).
  7. The maximum number of linearly independent rows is the same as the maximum number of linearly independent columns.

Example 2. Find the rank of a matrix .
Solution. Based on the definition of the rank of a matrix, we look for a minor of the highest order that is different from zero. First, we transform the matrix to a simpler form. To do this, multiply the first row of the matrix by (-2) and add it to the second, then multiply it by (-1) and add it to the third.