Matrix rank: definition, methods of finding, examples, solutions


Let $A$ be an $m\times n$ matrix, and let $k$ be a natural number not exceeding $m$ and $n$: $k\leqslant\min(m,n)$. A minor of order $k$ of the matrix $A$ is the determinant of a $k$-th order matrix formed by the elements at the intersection of $k$ arbitrarily chosen rows and $k$ arbitrarily chosen columns of $A$. When denoting minors, the indices of the selected rows are written as superscripts and those of the selected columns as subscripts, both in ascending order.


Example 3.4. Write out minors of different orders of the matrix


A=\begin{pmatrix}1&2&1&0\\0&2&2&3\\1&4&3&3\end{pmatrix}\!.


Solution. The matrix $A$ has size $3\times4$. It has: 12 minors of the 1st order, for example $M_{2}^{3}=\det(a_{32})=4$; 18 minors of the 2nd order, for example $M_{23}^{12}=\begin{vmatrix}2&1\\2&2\end{vmatrix}=2$; and 4 minors of the 3rd order, for example


M_{134}^{123}=\begin{vmatrix}1&1&0\\0&2&3\\1&3&3\end{vmatrix}=0.

In a matrix $A$ of size $m\times n$, a minor of order $r$ is called basic if it is nonzero and all minors of order $r+1$ are equal to zero or do not exist at all.


The rank of a matrix is defined as the order of its basic minor. The zero matrix has no basic minor, so the rank of the zero matrix is, by definition, taken to be zero; this is the only matrix whose rank is zero. The rank of a matrix $A$ is denoted $\operatorname{rg}A$.


Example 3.5. Find all basic minors and the rank of the matrix


A=\begin{pmatrix}1&2&2&0\\0&2&2&3\\0&0&0&0\end{pmatrix}\!.


Solution. All third-order minors of this matrix are equal to zero, since these determinants contain the zero third row. Therefore, only a second-order minor located in the first two rows of the matrix can be basic. Going through the 6 possible minors, we select the nonzero ones:


M_{12}^{12}=M_{13}^{12}=\begin{vmatrix}1&2\\0&2\end{vmatrix}=2,\quad M_{24}^{12}=M_{34}^{12}=\begin{vmatrix}2&0\\2&3\end{vmatrix}=6,\quad M_{14}^{12}=\begin{vmatrix}1&0\\0&3\end{vmatrix}=3.


Each of these five minors is basic. Therefore, the rank of the matrix is 2.
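Such a check is easy to automate. Below is a minimal NumPy sketch (not part of the original text; `np.linalg.matrix_rank` and brute-force enumeration of minors are used only for verification):

```python
# Numerical check of Example 3.5: enumerate all 2nd-order minors of A
# and compare with NumPy's built-in rank computation.
import itertools
import numpy as np

A = np.array([[1, 2, 2, 0],
              [0, 2, 2, 3],
              [0, 0, 0, 0]], dtype=float)

nonzero = [
    (rows, cols)
    for rows in itertools.combinations(range(3), 2)
    for cols in itertools.combinations(range(4), 2)
    if abs(np.linalg.det(A[np.ix_(rows, cols)])) > 1e-9
]
print(len(nonzero))              # 5 nonzero minors of the 2nd order
print(np.linalg.matrix_rank(A))  # 2
```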

Remarks 3.2


1. If all minors of order $k$ of a matrix are equal to zero, then all minors of higher order are also equal to zero. Indeed, expanding a minor of order $k+1$ along any row, we obtain a sum of products of the elements of this row with minors of order $k$, which are all zero.


2. The rank of a matrix is equal to the highest order of a nonzero minor of this matrix.


3. If a square matrix is nondegenerate, then its rank is equal to its order. If a square matrix is degenerate, then its rank is less than its order.


4. The notations $\operatorname{Rg}A$, $\operatorname{rang}A$, $\operatorname{rank}A$ are also used for the rank.


5. The rank of a block matrix is defined as the rank of an ordinary (numerical) matrix, i.e. regardless of its block structure. Moreover, the rank of a block matrix is not less than the ranks of its blocks: $\operatorname{rg}(A\mid B)\geqslant\operatorname{rg}A$ and $\operatorname{rg}(A\mid B)\geqslant\operatorname{rg}B$, since all minors of the matrix $A$ (or $B$) are also minors of the block matrix $(A\mid B)$.
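Item 5 is easy to observe numerically; here is a small sketch with illustrative matrices (any choice works):

```python
# The rank of the block matrix (A | B) is at least the rank of each block.
import numpy as np

A = np.array([[1., 2.],
              [2., 4.]])   # rank 1: the rows are proportional
B = np.array([[0., 1.],
              [1., 0.]])   # rank 2
AB = np.hstack([A, B])     # the block matrix (A | B)

r = np.linalg.matrix_rank
print(r(AB) >= r(A), r(AB) >= r(B))  # True True
```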

Basic minor and matrix rank theorems

Consider the main theorems expressing the properties of linear dependence and linear independence of columns (rows) of a matrix.


Theorem 3.1 (on the basic minor). In an arbitrary matrix $A$, each column (row) is a linear combination of the columns (rows) in which the basic minor is located.


Indeed, without loss of generality, assume that in a matrix $A$ of size $m\times n$ the basic minor is located in the first $r$ rows and the first $r$ columns. Consider the determinant


D=\begin{vmatrix}a_{11}&\cdots&a_{1r}&a_{1k}\\ \vdots&\ddots&\vdots&\vdots\\ a_{r1}&\cdots&a_{rr}&a_{rk}\\ a_{s1}&\cdots&a_{sr}&a_{sk}\end{vmatrix},


which is obtained by bordering the basic minor of the matrix $A$ with the corresponding elements of the $s$-th row and the $k$-th column. Note that for any $1\leqslant s\leqslant m$ and $1\leqslant k\leqslant n$ this determinant is zero. If $s\leqslant r$ or $k\leqslant r$, then the determinant $D$ contains two identical rows or two identical columns. If $s>r$ and $k>r$, then $D$ is equal to zero, since it is a minor of order $r+1$. Expanding the determinant along the last row, we get


a_{s1}\cdot D_{r+1\,1}+\ldots+a_{sr}\cdot D_{r+1\,r}+a_{sk}\cdot D_{r+1\,r+1}=0,


where $D_{r+1\,j}$ are the algebraic complements of the elements of the last row. Note that $D_{r+1\,r+1}\ne0$, since it is the basic minor. Therefore


a_{sk}=\lambda_1\cdot a_{s1}+\ldots+\lambda_r\cdot a_{sr},\quad\text{where}\quad \lambda_j=-\frac{D_{r+1\,j}}{D_{r+1\,r+1}},\quad j=1,2,\ldots,r.


Writing the last equality for $s=1,2,\ldots,m$, we get

\begin{pmatrix}a_{1k}\\\vdots\\a_{mk}\end{pmatrix}=\lambda_1\cdot\begin{pmatrix}a_{11}\\\vdots\\a_{m1}\end{pmatrix}+\ldots+\lambda_r\cdot\begin{pmatrix}a_{1r}\\\vdots\\a_{mr}\end{pmatrix}\!,


i.e. the $k$-th column (for any $1\leqslant k\leqslant n$) is a linear combination of the columns of the basic minor, as required.


The basic minor theorem serves to prove the following important theorems.

Condition for a determinant to equal zero

Theorem 3.2 (necessary and sufficient condition for the vanishing of the determinant). A determinant is equal to zero if and only if one of its columns (one of its rows) is a linear combination of the remaining columns (rows).


Indeed, necessity follows from the basic minor theorem. If the determinant of a square matrix of order $n$ is equal to zero, then its rank is less than $n$, i.e. at least one column is not included in the basic minor. Then this column, by Theorem 3.1, is a linear combination of the columns in which the basic minor is located. Adding to this combination, if necessary, the other columns with zero coefficients, we obtain that the selected column is a linear combination of the remaining columns of the matrix. Sufficiency follows from the properties of the determinant. If, for example, the last column $A_n$ of the determinant $\det(A_1~A_2~\cdots~A_n)$ is expressed linearly through the rest,


A_n=\lambda_1\cdot A_1+\lambda_2\cdot A_2+\ldots+\lambda_{n-1}\cdot A_{n-1},


then adding to $A_n$ the column $A_1$ multiplied by $(-\lambda_1)$, then the column $A_2$ multiplied by $(-\lambda_2)$, and so on up to the column $A_{n-1}$ multiplied by $(-\lambda_{n-1})$, we obtain the determinant $\det(A_1~\cdots~A_{n-1}~o)$ with a zero column, which is equal to zero (property 2 of the determinant).
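The theorem is easy to see in action; in the sketch below (illustrative columns, not from the original) the third column is deliberately made a linear combination of the first two:

```python
# A determinant with a linearly dependent column vanishes (Theorem 3.2).
import numpy as np

A1 = np.array([1., 0., 2.])
A2 = np.array([0., 1., 3.])
A3 = 2.0 * A1 - 1.0 * A2                  # A3 = 2*A1 - A2
M = np.column_stack([A1, A2, A3])

print(np.isclose(np.linalg.det(M), 0.0))  # True
print(np.linalg.matrix_rank(M))           # 2, less than the order 3
```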

Invariance of the rank of a matrix under elementary transformations

Theorem 3.3 (on rank invariance under elementary transformations). Elementary transformations of the columns (rows) of the matrix do not change its rank.


Indeed, let $\operatorname{rg}A=r$. Suppose that as a result of one elementary transformation of the columns of the matrix $A$ we obtained the matrix $A'$. If a transformation of type I was performed (permutation of two columns), then any minor of order $r+1$ of the matrix $A'$ is either equal to the corresponding minor of order $r+1$ of the matrix $A$, or differs from it in sign (property 3 of the determinant). If a transformation of type II was performed (multiplication of a column by a number $\lambda\ne0$), then any minor of order $r+1$ of the matrix $A'$ is either equal to the corresponding minor of order $r+1$ of the matrix $A$, or differs from it by the nonzero factor $\lambda$ (property 6 of the determinant). If a transformation of type III was performed (addition to one column of another column multiplied by a number $\lambda$), then any minor of order $r+1$ of the matrix $A'$ is either equal to the corresponding minor of order $r+1$ of the matrix $A$ (property 9 of the determinant), or is equal to the sum of two minors of order $r+1$ of the matrix $A$ (property 8 of the determinant). Therefore, under an elementary transformation of any type, all minors of order $r+1$ of the matrix $A'$ are equal to zero, since all minors of order $r+1$ of the matrix $A$ are equal to zero. Thus it is proved that the rank of a matrix cannot increase under elementary transformations of the columns. Since transformations inverse to elementary ones are themselves elementary, the rank cannot decrease either; that is, it does not change. Similarly, one proves that the rank of a matrix does not change under elementary transformations of the rows.


Corollary 1. If one row (column) of a matrix is a linear combination of its other rows (columns), then this row (column) can be deleted from the matrix without changing its rank.


Indeed, such a row can be made zero using elementary transformations, and a zero row cannot enter the basic minor.


Corollary 2. If a matrix is reduced to the simplest form (1.7), then


\operatorname{rg}A=\operatorname{rg}\Lambda=r.


Indeed, the matrix of the simplest form (1.7) has a basic minor of order $r$.


Corollary 3. Any nondegenerate square matrix is elementary; in other words, any nondegenerate square matrix is equivalent to the identity matrix of the same order.


Indeed, if $A$ is a nondegenerate square matrix of order $n$, then $\operatorname{rg}A=n$ (see item 3 of Remarks 3.2). Therefore, reducing the matrix $A$ to the simplest form (1.7) by elementary transformations, we obtain the identity matrix $\Lambda=E_n$, since $\operatorname{rg}A=\operatorname{rg}\Lambda=n$ (see Corollary 2). Consequently, the matrix $A$ is equivalent to the identity matrix $E_n$ and can be obtained from it as a result of a finite number of elementary transformations. This means that the matrix $A$ is elementary.

Theorem 3.4 (on the rank of a matrix). The rank of a matrix is equal to the maximum number of linearly independent rows of this matrix.


Indeed, let $\operatorname{rg}A=r$. Then the matrix $A$ contains $r$ linearly independent rows: the rows in which the basic minor is located. If they were linearly dependent, then this minor would be equal to zero by Theorem 3.2, and the rank of the matrix $A$ would not equal $r$. Let us show that $r$ is the maximum number of linearly independent rows, i.e. any $p$ rows are linearly dependent for $p>r$. Indeed, form the matrix $B$ from these $p$ rows. Since the matrix $B$ is a part of the matrix $A$, $\operatorname{rg}B\leqslant\operatorname{rg}A=r<p$.

This means that at least one row of the matrix $B$ is not included in the basic minor of this matrix. Then, by the basic minor theorem, it is equal to a linear combination of the rows in which the basic minor is located. Therefore, the rows of the matrix $B$ are linearly dependent. Thus, the matrix $A$ contains at most $r$ linearly independent rows.


Corollary 1. The maximum number of linearly independent rows in a matrix is equal to the maximum number of linearly independent columns:


\operatorname{rg}A=\operatorname{rg}A^T.


This statement follows from Theorem 3.4 if we apply it to the rows of the transposed matrix and take into account that minors do not change under transposition (property 1 of the determinant).
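A one-line numerical check of Corollary 1 (a sketch reusing the matrix of Example 3.4):

```python
# The rank is invariant under transposition: rg A = rg A^T.
import numpy as np

A = np.array([[1., 2., 1., 0.],
              [0., 2., 2., 3.],
              [1., 4., 3., 3.]])
print(np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T))  # True
```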


Corollary 2. Under elementary transformations of the rows of a matrix, the linear dependence (or linear independence) of any system of columns of this matrix is preserved.


Indeed, choose any $k$ columns of a given matrix $A$ and compose the matrix $B$ from them. Suppose that as a result of elementary transformations of the rows of the matrix $A$ the matrix $A'$ was obtained, and as a result of the same transformations of the rows of the matrix $B$ the matrix $B'$ was obtained. By Theorem 3.3, $\operatorname{rg}B'=\operatorname{rg}B$. Therefore, if the columns of the matrix $B$ were linearly independent, i.e. $k=\operatorname{rg}B$ (see Corollary 1), then the columns of the matrix $B'$ are also linearly independent, since $k=\operatorname{rg}B'$. If the columns of the matrix $B$ were linearly dependent ($k>\operatorname{rg}B$), then the columns of the matrix $B'$ are also linearly dependent ($k>\operatorname{rg}B'$). Consequently, for any columns of the matrix $A$, linear dependence or linear independence is preserved under elementary transformations of the rows.


Remarks 3.3


1. By virtue of Corollary 1 of Theorem 3.4, the property of columns indicated in Corollary 2 is also valid for any system of rows of a matrix if elementary transformations are performed only on its columns.


2. Corollary 3 of Theorem 3.3 can be refined as follows: any nondegenerate square matrix can be reduced to the identity matrix of the same order using elementary transformations of only its rows (or only its columns).


Indeed, using only elementary row transformations, any matrix $A$ can be reduced to the simplified form $\Lambda$ (Fig. 1.5) (see Theorem 1.1). Since the matrix $A$ is nondegenerate ($\det A\ne0$), its columns are linearly independent. Hence, the columns of the matrix $\Lambda$ are also linearly independent (Corollary 2 of Theorem 3.4). Therefore, the simplified form $\Lambda$ of the nondegenerate matrix $A$ coincides with its simplest form (Fig. 1.6) and is the identity matrix $\Lambda=E$ (see Corollary 3 of Theorem 3.3). Thus, by transforming only the rows of a nondegenerate matrix, it can be reduced to the identity matrix. Similar reasoning is valid for elementary transformations of the columns of a nondegenerate matrix.

Rank of a product and of a sum of matrices

Theorem 3.5 (on the rank of a product of matrices). The rank of a product of matrices does not exceed the ranks of the factors:


\operatorname{rg}(A\cdot B)\leqslant\min\{\operatorname{rg}A,\operatorname{rg}B\}.


Indeed, let the matrices $A$ and $B$ have sizes $m\times p$ and $p\times n$. Append to the matrix $A$ the matrix $C=AB$, forming the block matrix $(A\mid C)$. Of course, $\operatorname{rg}C\leqslant\operatorname{rg}(A\mid C)$, since $C$ is a part of the matrix $(A\mid C)$ (see item 5 of Remarks 3.2). Note that, by the definition of matrix multiplication, each column $C_j$ is a linear combination of the columns $A_1,A_2,\ldots,A_p$ of the matrix $A=(A_1~\cdots~A_p)$:


C_j=A_1\cdot b_{1j}+A_2\cdot b_{2j}+\ldots+A_p\cdot b_{pj},\quad j=1,2,\ldots,n.


Such a column can be deleted from the matrix $(A\mid C)$ without changing its rank (Corollary 1 of Theorem 3.3). Crossing out all the columns of the matrix $C$, we get $\operatorname{rg}(A\mid C)=\operatorname{rg}A$. Hence, $\operatorname{rg}C\leqslant\operatorname{rg}(A\mid C)=\operatorname{rg}A$. Similarly, one can prove that $\operatorname{rg}C\leqslant\operatorname{rg}B$, which completes the proof of the theorem.


Corollary. If $A$ is a nondegenerate square matrix, then $\operatorname{rg}(AB)=\operatorname{rg}B$ and $\operatorname{rg}(CA)=\operatorname{rg}C$, i.e. the rank of a matrix does not change if it is multiplied on the left or on the right by a nondegenerate square matrix.
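Both the theorem and the corollary can be checked numerically; the matrices below are illustrative:

```python
# rg(AB) <= min(rg A, rg B); multiplication by a nonsingular matrix
# preserves the rank (corollary).
import numpy as np

r = np.linalg.matrix_rank
A = np.array([[1., 2., 3.],
              [2., 4., 6.]])      # rank 1
B = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])          # rank 2

print(r(A @ B), min(r(A), r(B)))  # 1 1: the inequality holds

S = np.array([[2., 1.],
              [1., 1.]])          # nonsingular: det S = 1
print(r(S @ A) == r(A))           # True: the rank is unchanged
```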


Theorem 3.6 (on the rank of a sum of matrices). The rank of a sum of matrices does not exceed the sum of the ranks of the terms:


\operatorname{rg}(A+B)\leqslant\operatorname{rg}A+\operatorname{rg}B.


Indeed, compose the matrix $(A+B\mid A\mid B)$. Note that each column of the matrix $A+B$ is a linear combination of the columns of the matrices $A$ and $B$; therefore $\operatorname{rg}(A+B\mid A\mid B)=\operatorname{rg}(A\mid B)$. Taking into account that the number of linearly independent columns in the matrix $(A\mid B)$ does not exceed $\operatorname{rg}A+\operatorname{rg}B$, and that $\operatorname{rg}(A+B)\leqslant\operatorname{rg}(A+B\mid A\mid B)$ (see item 5 of Remarks 3.2), we obtain the required inequality.
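A matching sketch for Theorem 3.6 (illustrative matrices; here the inequality happens to be attained with equality):

```python
# rg(A + B) <= rg A + rg B.
import numpy as np

r = np.linalg.matrix_rank
A = np.array([[1., 0.],
              [0., 0.]])      # rank 1
B = np.array([[0., 0.],
              [0., 1.]])      # rank 1
print(r(A + B), r(A) + r(B))  # 2 2
```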

The following matrix transformations are called elementary:

1) permutation of any two rows (or columns),

2) multiplying a row (or column) by a nonzero number,

3) adding to one row (or column) another row (or column) multiplied by some number.

Two matrices are called equivalent if one of them can be obtained from the other by a finite set of elementary transformations.

Generally speaking, equivalent matrices are not equal, but their ranks are equal. If the matrices $A$ and $B$ are equivalent, this is written as $A\sim B$.

A canonical matrix is a matrix that has several ones in a row at the beginning of its main diagonal (their number may be zero), with all other elements equal to zero, for example,

\begin{pmatrix}1&0&0\\0&1&0\\0&0&0\end{pmatrix}\!.

With the help of elementary transformations of rows and columns, any matrix can be reduced to canonical form. The rank of a canonical matrix is equal to the number of ones on its main diagonal.
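This statement is, in effect, an algorithm. A minimal sketch of rank computation by row reduction (assuming floating-point arithmetic, so a small tolerance replaces exact zero tests):

```python
# Count the pivots that survive Gauss-Jordan elimination; their number
# equals the number of ones on the diagonal of the canonical form.
import numpy as np

def rank_by_elimination(M, tol=1e-9):
    M = np.array(M, dtype=float)
    rank = 0
    for col in range(M.shape[1]):
        # look for a pivot at or below the current working row
        pivot = next((i for i in range(rank, M.shape[0])
                      if abs(M[i, col]) > tol), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]   # transformation 1: swap rows
        M[rank] /= M[rank, col]               # transformation 2: scale a row
        for i in range(M.shape[0]):           # transformation 3: add a multiple
            if i != rank:
                M[i] -= M[i, col] * M[rank]
        rank += 1
    return rank

print(rank_by_elimination([[1, 2, 2, 0],
                           [0, 2, 2, 3],
                           [0, 0, 0, 0]]))   # 2
```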

Example 2. Find the rank of the matrix $A$ and bring it to canonical form. [The matrices displayed in this example were images in the source and are not reproduced.]

Solution. Subtract the first row from the second and swap these two rows.

Now subtract the first row, multiplied by 2 and by 5 respectively, from the second and third rows;

then subtract the first row from the third row. We obtain a matrix $B$,

which is equivalent to the matrix $A$, since it is obtained from $A$ by a finite set of elementary transformations. Obviously, the rank of the matrix $B$ is 2, and hence $r(A)=2$. The matrix $B$ can easily be reduced to canonical form: subtracting the first column, multiplied by suitable numbers, from all subsequent columns, we turn to zero all elements of the first row except the first, leaving the elements of the remaining rows unchanged; then, subtracting the second column, multiplied by suitable numbers, from all subsequent columns, we turn to zero all elements of the second row except the second and obtain the canonical matrix.

The Kronecker–Capelli theorem is a consistency criterion for a system of linear algebraic equations:

For a linear system to be consistent, it is necessary and sufficient that the rank of the augmented matrix of this system be equal to the rank of its coefficient matrix.

Proof (consistency criterion)

Necessity

Let the system be consistent. Then there exist numbers $x_1,x_2,\ldots,x_n$ such that $b=x_1a_1+x_2a_2+\ldots+x_na_n$, i.e. the column $b$ of free terms is a linear combination of the columns $a_1,a_2,\ldots,a_n$ of the matrix $A$. Since the rank of a matrix does not change if a row (column) that is a linear combination of the other rows (columns) is deleted from or appended to it, it follows that $\operatorname{rang}A=\operatorname{rang}(A\mid b)$.

Sufficiency

Let $\operatorname{rang}A=\operatorname{rang}(A\mid b)$. Take some basic minor in the matrix $A$. Since $\operatorname{rang}A=\operatorname{rang}(A\mid b)$, it is also a basic minor of the matrix $(A\mid b)$. Then, according to the basic minor theorem, the last column of the matrix $(A\mid b)$ is a linear combination of the basic columns, that is, of columns of the matrix $A$. Therefore, the column of free terms of the system is a linear combination of the columns of the matrix $A$, and the coefficients of that combination form a solution, i.e. the system is consistent.

Corollaries

    The number of principal variables of the system is equal to the rank of the system.

    A consistent system is determined (its solution is unique) if the rank of the system is equal to the number of all its variables.
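In code the criterion is a direct comparison of two ranks. A sketch with an illustrative system (the second right-hand side deliberately breaks consistency):

```python
# Kronecker-Capelli: Ax = b is consistent iff rank(A) == rank(A | b).
import numpy as np

def is_consistent(A, b):
    augmented = np.hstack([A, b.reshape(-1, 1)])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(augmented)

A = np.array([[1., 1.],
              [2., 2.]])
print(is_consistent(A, np.array([1., 2.])))  # True:  b lies in the column space
print(is_consistent(A, np.array([1., 3.])))  # False: the ranks differ
```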

Homogeneous system of equations

Proposition 15.2. A homogeneous system of linear equations is always consistent.

Proof. For this system, the set of numbers $x_1=0,\,x_2=0,\,\ldots,\,x_n=0$ is a solution.

In this section we will use the matrix notation for the system: $Ax=0$.

Proposition 15.3. The sum of two solutions of a homogeneous system of linear equations is a solution of this system. A solution multiplied by a number is also a solution.

Proof. Let $x^{(1)}$ and $x^{(2)}$ be solutions of the system, that is, $Ax^{(1)}=0$ and $Ax^{(2)}=0$. Let $y=x^{(1)}+x^{(2)}$. Then

Ay=A\left(x^{(1)}+x^{(2)}\right)=Ax^{(1)}+Ax^{(2)}=0+0=0.

Since $Ay=0$, the column $y$ is a solution.

Let $\lambda$ be an arbitrary number and $y=\lambda x^{(1)}$. Then

Ay=A\left(\lambda x^{(1)}\right)=\lambda Ax^{(1)}=\lambda\cdot0=0.

Since $Ay=0$, the column $y$ is a solution.

Corollary 15.1. If a homogeneous system of linear equations has a nonzero solution, then it has infinitely many different solutions.

Indeed, multiplying a nonzero solution by different numbers, we obtain different solutions.
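A short sketch making this concrete (the homogeneous system below is illustrative):

```python
# Every scalar multiple of a nonzero solution of Ax = 0 is again a solution.
import numpy as np

A = np.array([[1., -1., 0.],
              [0., 1., -1.]])
x = np.array([1., 1., 1.])          # a nonzero solution: A @ x == 0

for c in (2.0, -3.5, 100.0):
    assert np.allclose(A @ (c * x), 0.0)
print("every multiple c*x is a solution")
```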

Definition 15.5. We say that solutions of a system form a fundamental system of solutions if the corresponding columns form a linearly independent system and any solution of the system is a linear combination of these columns.

Definition. The rank of a matrix is the maximum number of its linearly independent rows, considered as vectors.

Theorem 1 (on the rank of a matrix). The rank of a matrix is the maximum order of a nonzero minor of the matrix.

We have already discussed the concept of a minor in the lesson on determinants; now we generalize it. Take some rows and some columns of the matrix, where this "some" is less than the number of rows and columns of the matrix and is the same number for rows and columns. Then at the intersection of the chosen rows and columns there stands a matrix of lower order than the original matrix. The determinant of this matrix is a minor of order $k$, if the mentioned "some" (the number of chosen rows and columns) is denoted by $k$.

Definition. A minor of order $r+1$ that contains a chosen minor of order $r$ is called a bordering minor for the given minor.

The two most commonly used methods of finding the rank of a matrix are the method of bordering minors and the method of elementary transformations (the Gauss method).

The following theorem is used for the bordering minors method.

Theorem 2 (on the rank of a matrix). If a nonzero minor of order $r$ can be formed from the elements of the matrix, while all the minors bordering it are equal to zero, then the rank of the matrix equals $r$.

In the method of elementary transformations, the following property is used:

If elementary transformations yield a trapezoidal matrix equivalent to the original one, then the rank of this matrix is the number of its rows other than the rows consisting entirely of zeros.

Finding the rank of a matrix by the bordering minors method

A bordering minor of a given minor is a minor of higher order that contains the given minor.


The algorithm for finding the rank of a matrix is as follows (a code sketch is given after the list).

1. Look for nonzero minors of the second order. If all second-order minors are equal to zero, then the rank of the (nonzero) matrix equals one ($r=1$).

2. If there is at least one second-order minor not equal to zero, then compose the bordering minors of the third order. If all bordering minors of the third order are equal to zero, then the rank of the matrix equals two ($r=2$).

3. If at least one of the bordering minors of the third order is not equal to zero, then compose its bordering minors of the fourth order. If all bordering minors of the fourth order are equal to zero, then the rank of the matrix equals three ($r=3$).

4. Continue in this way for as long as the size of the matrix allows.
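Here is a code sketch of this method. One simplification: for brevity it scans all minors of each order rather than only the minors bordering a fixed nonzero one; by the definition of rank (the highest order of a nonzero minor) this does not change the answer:

```python
# Rank by exhausting minors: the rank is the largest order k for which
# a nonzero k-th order minor exists.
import itertools
import numpy as np

def rank_by_minors(A, tol=1e-9):
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    rank = 0
    for k in range(1, min(m, n) + 1):
        found = any(
            abs(np.linalg.det(A[np.ix_(rows, cols)])) > tol
            for rows in itertools.combinations(range(m), k)
            for cols in itertools.combinations(range(n), k)
        )
        if not found:
            break            # all k-th order minors vanish
        rank = k
    return rank

print(rank_by_minors([[1, 2, 1, 0],
                      [0, 2, 2, 3],
                      [1, 4, 3, 3]]))   # 2 (the matrix of Example 3.4)
```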

Example 1. Find the rank of a matrix. [The matrix of this example and the displayed minors were images in the source and are not reproduced.]

Solution. Take a nonzero minor of the second order and border it. There are four bordering minors of the third order:

Thus, all bordering minors of the third order are equal to zero; therefore, the rank of this matrix is two ($r=2$).

Example 2. Find the rank of a matrix. [In this and the next two examples the matrices were likewise given as images in the source.]

Solution. The rank of this matrix is 1, since all the second-order minors of this matrix are equal to zero (here, as with the bordering minors in the next two examples, readers are invited to verify this themselves, for instance using the rules for calculating determinants), while among the first-order minors, that is, among the elements of the matrix, there are some not equal to zero.

Example 3. Find the rank of a matrix

Solution. A second-order minor of this matrix is nonzero, while all third-order minors of this matrix are equal to zero. Therefore, the rank of this matrix is two.

Example 4. Find the rank of a matrix

Solution. The rank of this matrix is 3, since its only third-order minor is nonzero (it equals 3).

Finding the rank of a matrix by the method of elementary transformations (Gauss method)

Example 1 already shows that determining the rank of a matrix by the method of bordering minors requires calculating a large number of determinants. There is, however, a way to reduce the amount of computation to a minimum. This method is based on elementary matrix transformations and is also called the Gauss method.

Elementary matrix transformations mean the following operations:

1) multiplying any row or any column of the matrix by a number other than zero;

2) adding to the elements of any row or any column of the matrix the corresponding elements of another row or column, multiplied by the same number;

3) swapping two rows or columns of the matrix;

4) deletion of "zero" rows, that is, rows all of whose elements are equal to zero;

5) deletion of all proportional rows except one.

Theorem. Elementary transformations do not change the rank of a matrix. In other words, if a matrix $A$ is carried by elementary transformations into a matrix $B$, then $\operatorname{rang}A=\operatorname{rang}B$.
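A numerical illustration of the theorem (a sketch; the matrix is illustrative, with rank 2):

```python
# Each elementary row operation leaves the rank unchanged.
import numpy as np

r = np.linalg.matrix_rank
A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])          # rank 2

B = A[[1, 0, 2]]                      # 1) swap two rows
C = A.copy(); C[0] *= 5.0             # 2) multiply a row by a nonzero number
D = A.copy(); D[2] += 2.0 * D[0]      # 3) add a multiple of one row to another

print(r(A), r(B), r(C), r(D))         # 2 2 2 2
```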

The number r is called the rank of the matrix A if:
1) the matrix A contains a minor of order r, different from zero;
2) all minors of order (r + 1) and higher, if they exist, are equal to zero.
In other words, the rank of a matrix is the highest order of a nonzero minor of this matrix.
Notations: $\operatorname{rang}A$, $r(A)$, or $r$.
It follows from the definition that $r$ is a positive integer. For the zero matrix, the rank is taken to be zero.


Definition. Let a matrix of rank $r$ be given. Any nonzero minor of this matrix of order $r$ is called basic, and the rows and columns in which it is located are called basic rows and columns.
By this definition, a matrix $A$ may have several basic minors.

The rank of the identity matrix E is n (the number of rows).

Example 1. Two matrices are given together with their minors $M_1$ and $M_2$. [The matrices and minors were images in the source.] Which minor can be taken as basic?
Solution. Since $M_1=0$, it cannot be basic for either matrix. The minor $M_2=-9\ne0$ has order 2, so it can be taken as a basic minor of the matrix $A$ and/or of the matrix $B$, provided their ranks equal 2. Since $\det B=0$ (as a determinant with two proportional columns), $\operatorname{rang}B=2$, and $M_2$ can be taken as the basic minor of the matrix $B$. The rank of the matrix $A$ is 3, since $\det A=-27\ne0$, and therefore the order of a basic minor of this matrix must equal 3; that is, $M_2$ is not basic for the matrix $A$. Note that the matrix $A$ has a single basic minor, equal to the determinant of the matrix $A$.

Theorem (on the basic minor). Any row (column) of a matrix is a linear combination of its basic rows (columns).
Corollaries of the theorem:

  1. Any $r+1$ columns (rows) of a matrix of rank $r$ are linearly dependent.
  2. If the rank of a matrix is less than the number of its rows (columns), then its rows (columns) are linearly dependent. If $\operatorname{rang}A$ is equal to the number of its rows (columns), then the rows (columns) are linearly independent.
  3. The determinant of the matrix $A$ is equal to zero if and only if its rows (columns) are linearly dependent.
  4. If to a row (column) of a matrix we add another row (column) multiplied by any number other than zero, then the rank of the matrix will not change.
  5. If a row (column) that is a linear combination of other rows (columns) is crossed out of the matrix, then the rank of the matrix will not change.
  6. The rank of a matrix is equal to the maximum number of its linearly independent rows (columns).
  7. The maximum number of linearly independent rows is the same as the maximum number of linearly independent columns.

Example 2. Find the rank of the matrix. [The matrix was given as an image in the source.]
Solution. Based on the definition of the rank of a matrix, we look for a nonzero minor of the highest order. First we transform the matrix to a simpler form: multiply the first row of the matrix by $(-2)$ and add it to the second row, then multiply the first row by $(-1)$ and add it to the third.