Solution of systems of differential equations. How to solve a system of differential equations using the operational method

It is a sultry time outside, poplar fluff is flying, and such weather is conducive to rest. Over the school year everyone has accumulated fatigue, but the anticipation of the summer holidays should inspire you to pass your exams and tests successfully. By the way, teachers also grow dull with the season, so soon I too will take a timeout to unload my brain. And now there is coffee, the measured hum of the computer, a few dead mosquitoes on the windowsill and a perfectly working mood... oh, damn it... what a poet I am.

Down to business. However that may be, for me today is June 1, and we will consider another typical problem of complex analysis: finding a particular solution to a system of differential equations by the method of operational calculus. What do you need to know and be able to do in order to learn how to solve it? First of all, I highly recommend referring to the lesson How to solve a DE by the operational method. Read the introductory part, understand the general setting of the topic, the terminology and the notation, and work through at least two or three examples. The fact is that with systems of DEs everything will be almost the same, and even simpler!

Naturally, you must understand what a system of differential equations is, and what it means to find the general solution of a system and a particular solution of a system.

I remind you that a system of differential equations can also be solved in the "traditional" way: by the elimination method or with the help of the characteristic equation. The method of operational calculus, which will be discussed here, is applicable to a system of DEs when the task is formulated as follows:

Find a particular solution of a homogeneous system of differential equations corresponding to the initial conditions .

Alternatively, the system can be inhomogeneous, with "makeweights" in the form of functions on the right-hand sides:

But in both cases you need to pay attention to two fundamental points of the condition:

1) The problem asks only for a particular solution.
2) In the parentheses of the initial conditions there are strictly zeros, and nothing else.

The general course of the solution and the algorithm will be very similar to solving a single differential equation by the operational method. From the reference materials you will need the same table of originals and images.

Example 1


, ,

Solution: The start is trivial: using the Laplace transform tables, let us pass from the originals to the corresponding images. In problems with systems of DEs this transition is usually simple:

Using tabular formulas Nos. 1 and 2 and taking the initial condition into account, we obtain:

What should be done with the "y's"? Mentally change "x" to "y" in the table. Using the same transforms Nos. 1 and 2 and taking the initial condition into account, we find:

We substitute the found images into the original system:

Now, on the left-hand side of each equation, we collect all the terms that contain the unknown images; the remaining terms should be moved to the right-hand side:

Next, on the left-hand side of each equation, we factor the images out of the brackets:

Here, one of the images should occupy the first position and the other the second:

The resulting system of two equations in two unknowns is usually solved using Cramer's formulas. Let us calculate the main determinant of the system:

The calculation of the determinant yields a polynomial.

Important technical tip! It is better to try to factor this polynomial right away. To this end, one could try to solve the quadratic equation, but many readers with an eye trained by the second year will spot the factorization at once.

Thus, our main determinant of the system is:

The rest of the work with the system, thanks to Cramer, is standard:

As a result, we obtain the operator solution of the system:

A pleasant feature of this kind of problem is that the fractions usually turn out to be simple, and dealing with them is much easier than with the fractions in problems of finding a particular solution of a single DE by the operational method. Your premonition has not deceived you: the good old method of undetermined coefficients comes into play, and we decompose each fraction into elementary fractions:
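The decomposition into elementary fractions can also be checked mechanically. A minimal sketch with SymPy, using a made-up fraction of the kind that arises here (the actual fractions of the example are not reproduced in the text):

```python
from sympy import symbols, apart, simplify

p = symbols('p')
# Hypothetical operator fraction; the real ones from the example are analogous
F = (2*p + 3) / (p**2 - 1)
decomposed = apart(F, p)   # the method of undetermined coefficients, automated
print(decomposed)          # a sum of elementary fractions in 1/(p - 1) and 1/(p + 1)
```

Comparing `apart`'s output with a hand decomposition is a quick way to catch arithmetic slips in the undetermined coefficients.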

1) We deal with the first fraction:

Thus:

2) We decompose the second fraction in a similar way, but it is more correct to use other constants (undetermined coefficients):

Thus:


I advise beginners to write the decomposed operator solution in the following form:
- this will make the final stage, the inverse Laplace transform, clearer.

Using the right column of the table, let's move from the images to the corresponding originals:


Following the rules of good mathematical tone, let us tidy up the result a little:

Answer:

The answer is checked according to the standard scheme, discussed in detail in the lesson How to solve a system of differential equations? Always try to carry out the check: it earns a big plus for the task.
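A computer algebra system can serve as an independent check. A sketch with SymPy on a made-up system (not the one from Example 1), with the initial conditions given at zero as the problem type requires:

```python
from sympy import symbols, Function, Eq, dsolve, cos, sin, simplify

t = symbols('t')
x, y = Function('x'), Function('y')
# Hypothetical system x' = -y, y' = x with conditions x(0) = 1, y(0) = 0
system = [Eq(x(t).diff(t), -y(t)), Eq(y(t).diff(t), x(t))]
sol = dsolve(system, [x(t), y(t)], ics={x(0): 1, y(0): 0})
print(sol)   # the particular solution: x = cos t, y = sin t
```

If the hand-derived particular solution matches what `dsolve` returns, the operational-calculus work was correct.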

Example 2

Using the operational calculus, find a particular solution to the system of differential equations corresponding to the given initial conditions.
, ,

This is an example for you to solve on your own. An approximate sample of the final write-up of the problem, and the answer, are given at the end of the lesson.

Solving an inhomogeneous system of differential equations is algorithmically no different, except that it is technically a little more complicated:

Example 3

Using the operational calculus, find a particular solution to the system of differential equations corresponding to the given initial conditions.
, ,

Solution: Using the Laplace transform table, and taking the initial conditions into account, let us pass from the originals to the corresponding images:

But that is not all: there are lone constants on the right-hand sides of the equations. What should be done when a constant stands entirely on its own? This was already discussed in the lesson How to solve a DE by the operational method. Let us repeat: lone constants should be mentally multiplied by one, and the following Laplace transform applied to the ones:
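The transform of a lone constant is easy to verify with SymPy (the symbols here are generic, not tied to this example):

```python
from sympy import symbols, laplace_transform

t, p = symbols('t p', positive=True)
# L{1} = 1/p: a lone constant c therefore transforms into c/p
image = laplace_transform(1, t, p, noconds=True)
print(image)   # 1/p
```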

Substitute the found images in the original system:

We move the terms containing the images to the left-hand sides and place the remaining terms on the right-hand sides:

On the left-hand sides we factor the images out of the brackets; in addition, we reduce the right-hand side of the second equation to a common denominator:

We calculate the main determinant of the system, not forgetting that it is advisable to immediately try to factorize the result:
, so the system has a unique solution.

We go further:



Thus, the operator solution of the system:

Sometimes one or even both fractions can be reduced, and occasionally so conveniently that there is practically nothing left to decompose! And in some cases a freebie turns up straight away; incidentally, the next example in the lesson will be a case in point.

Using the method of undetermined coefficients, we obtain sums of elementary fractions.

Breaking down the first fraction:

And we get the second one:

As a result, the operator decision takes the form we need:

Using the right-hand column of the table of originals and images, we perform the inverse Laplace transform:

Substituting the obtained originals, we write down the solution of the system:

Answer: the particular solution:

As you can see, an inhomogeneous system requires more laborious calculations than a homogeneous one. Let us analyze a couple more examples with sines and cosines, and that will be enough, since almost all types of the problem and most of the nuances of the solution will have been covered.

Example 4

Using the method of operational calculus, find a particular solution to the system of differential equations with given initial conditions ,

Solution: I will work through this example myself as well, but the comments will concern only the special moments. I assume you are already well versed in the solution algorithm.

Let's move from the originals to the corresponding images:

Let us substitute the found images into the original system of DEs:

We solve the system using Cramer's formulas:
, so the system has a unique solution.

The resulting polynomial cannot be factored. What should one do in such cases? Absolutely nothing; this form will do as well.

As a result, the operator solution of the system:

And here is a lucky ticket! The method of undetermined coefficients is not needed at all! The only thing is that, in order to apply the table transforms, we rewrite the solution in the following form:

Let's move from the images to the corresponding originals:

Substituting the obtained originals, we write down the solution of the system:

Matrix notation for a system of ordinary differential equations (SODE) with constant coefficients

Linear homogeneous SODE with constant coefficients $\left\{\begin{array}{c} \frac{dy_{1}}{dx} =a_{11} \cdot y_{1} +a_{12} \cdot y_{2} +\ldots +a_{1n} \cdot y_{n} \\ \frac{dy_{2}}{dx} =a_{21} \cdot y_{1} +a_{22} \cdot y_{2} +\ldots +a_{2n} \cdot y_{n} \\ \ldots \\ \frac{dy_{n}}{dx} =a_{n1} \cdot y_{1} +a_{n2} \cdot y_{2} +\ldots +a_{nn} \cdot y_{n} \end{array}\right.$,

where $y_{1}\left(x\right),\; y_{2}\left(x\right),\; \ldots,\; y_{n}\left(x\right)$ are the desired functions of the independent variable $x$ and the coefficients $a_{jk},\; 1\le j,k\le n$ are given real numbers, can be written in matrix notation. We introduce:

  1. the matrix of desired functions $Y=\left(\begin{array}{c} y_{1}\left(x\right) \\ y_{2}\left(x\right) \\ \ldots \\ y_{n}\left(x\right) \end{array}\right)$;
  2. the matrix of their derivatives $\frac{dY}{dx} =\left(\begin{array}{c} \frac{dy_{1}}{dx} \\ \frac{dy_{2}}{dx} \\ \ldots \\ \frac{dy_{n}}{dx} \end{array}\right)$;
  3. the SODE coefficient matrix $A=\left(\begin{array}{cccc} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \ldots & \ldots & \ldots & \ldots \\ a_{n1} & a_{n2} & \ldots & a_{nn} \end{array}\right)$.

Now, by the rule of matrix multiplication, this SODE can be written as the matrix equation $\frac{dY}{dx} =A\cdot Y$.

General Method for Solving SODEs with Constant Coefficients

Let there be a column of numbers $\alpha =\left(\begin{array}{c} \alpha_{1} \\ \alpha_{2} \\ \ldots \\ \alpha_{n} \end{array}\right)$.

A solution of the SODE is sought in the form $y_{1} =\alpha_{1} \cdot e^{k\cdot x}$, $y_{2} =\alpha_{2} \cdot e^{k\cdot x}$, \ldots, $y_{n} =\alpha_{n} \cdot e^{k\cdot x}$. In matrix form: $Y=\left(\begin{array}{c} y_{1} \\ y_{2} \\ \ldots \\ y_{n} \end{array}\right)=e^{k\cdot x} \cdot \left(\begin{array}{c} \alpha_{1} \\ \alpha_{2} \\ \ldots \\ \alpha_{n} \end{array}\right)$.

Differentiating, we get $\frac{dY}{dx} =k\cdot e^{k\cdot x} \cdot \alpha$.

The matrix equation of this SODE then takes the form $k\cdot e^{k\cdot x} \cdot \alpha =A\cdot e^{k\cdot x} \cdot \alpha$.

After cancelling the non-zero factor $e^{k\cdot x}$, the resulting equation can be written as $A\cdot \alpha =k\cdot \alpha$.

The last equality shows that the matrix $A$ transforms the vector $\alpha$ into the parallel vector $k\cdot \alpha$. This means that $\alpha$ is an eigenvector of the matrix $A$ corresponding to the eigenvalue $k$.

The number $k$ can be determined from the equation $\left|\begin{array}{cccc} a_{11} -k & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} -k & \ldots & a_{2n} \\ \ldots & \ldots & \ldots & \ldots \\ a_{n1} & a_{n2} & \ldots & a_{nn} -k \end{array}\right|=0$.

This equation is called characteristic.

Let all roots $k_{1} ,k_{2} ,\ldots ,k_{n}$ of the characteristic equation be distinct. For each value $k_{i}$, a column of values $\left(\begin{array}{c} \alpha_{1}^{\left(i\right)} \\ \alpha_{2}^{\left(i\right)} \\ \ldots \\ \alpha_{n}^{\left(i\right)} \end{array}\right)$ can be determined from the system $\left(\begin{array}{cccc} a_{11} -k & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} -k & \ldots & a_{2n} \\ \ldots & \ldots & \ldots & \ldots \\ a_{n1} & a_{n2} & \ldots & a_{nn} -k \end{array}\right)\cdot \left(\begin{array}{c} \alpha_{1} \\ \alpha_{2} \\ \ldots \\ \alpha_{n} \end{array}\right)=0$ with $k=k_{i}$.

One of the values in this column is chosen arbitrarily.

Finally, the solution of the system in matrix form is written as follows:

$\left(\begin{array}{c} y_{1} \\ y_{2} \\ \ldots \\ y_{n} \end{array}\right)=\left(\begin{array}{cccc} \alpha_{1}^{\left(1\right)} & \alpha_{1}^{\left(2\right)} & \ldots & \alpha_{1}^{\left(n\right)} \\ \alpha_{2}^{\left(1\right)} & \alpha_{2}^{\left(2\right)} & \ldots & \alpha_{2}^{\left(n\right)} \\ \ldots & \ldots & \ldots & \ldots \\ \alpha_{n}^{\left(1\right)} & \alpha_{n}^{\left(2\right)} & \ldots & \alpha_{n}^{\left(n\right)} \end{array}\right)\cdot \left(\begin{array}{c} C_{1} \cdot e^{k_{1} \cdot x} \\ C_{2} \cdot e^{k_{2} \cdot x} \\ \ldots \\ C_{n} \cdot e^{k_{n} \cdot x} \end{array}\right)$,

where $C_{i}$ are arbitrary constants.
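This recipe translates directly into a few lines of linear-algebra code. A sketch with NumPy, for an assumed coefficient matrix $A$ with distinct real eigenvalues (any such matrix works):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 0.0]])            # assumed matrix; its eigenvalues are 3 and -2
k, alpha = np.linalg.eig(A)           # eigenvalues k_i, eigenvectors as columns

def Y(x, C):
    """General solution Y(x) = sum_i C_i * alpha^(i) * exp(k_i * x)."""
    return alpha @ (C * np.exp(k * x))

# Check dY/dx = A.Y numerically for arbitrary constants C
C = np.array([1.0, -2.0])
x0, h = 0.3, 1e-6
dY = (Y(x0 + h, C) - Y(x0 - h, C)) / (2 * h)
print(np.allclose(dY, A @ Y(x0, C), rtol=1e-5))   # True
```

The central difference stands in for the exact derivative; since each column of `alpha` is an eigenvector, the identity holds for every choice of constants.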

Task

Solve the system $\left\{\begin{array}{c} \frac{dy_{1}}{dx} =5\cdot y_{1} +4\cdot y_{2} \\ \frac{dy_{2}}{dx} =4\cdot y_{1} +5\cdot y_{2} \end{array}\right.$.

Write the system matrix: $A=\left(\begin{array}{cc} 5 & 4 \\ 4 & 5 \end{array}\right)$.

In matrix form, this SODE is written as follows: $\left(\begin{array}{c} \frac{dy_{1}}{dx} \\ \frac{dy_{2}}{dx} \end{array}\right)=\left(\begin{array}{cc} 5 & 4 \\ 4 & 5 \end{array}\right)\cdot \left(\begin{array}{c} y_{1} \\ y_{2} \end{array}\right)$.

We get the characteristic equation:

$\left|\begin{array}{cc} 5-k & 4 \\ 4 & 5-k \end{array}\right|=0$, i.e. $k^{2} -10\cdot k+9=0$.

The roots of the characteristic equation: $k_{1} =1$, $k_{2} =9$.

We compose a system for calculating $\left(\begin{array}{c} \alpha_{1}^{\left(1\right)} \\ \alpha_{2}^{\left(1\right)} \end{array}\right)$ for $k_{1} =1$:

\[\left(\begin{array}{cc} 5-k_{1} & 4 \\ 4 & 5-k_{1} \end{array}\right)\cdot \left(\begin{array}{c} \alpha_{1}^{\left(1\right)} \\ \alpha_{2}^{\left(1\right)} \end{array}\right)=0,\]

i.e. $\left(5-1\right)\cdot \alpha_{1}^{\left(1\right)} +4\cdot \alpha_{2}^{\left(1\right)} =0$, $4\cdot \alpha_{1}^{\left(1\right)} +\left(5-1\right)\cdot \alpha_{2}^{\left(1\right)} =0$.

Putting $\alpha_{1}^{\left(1\right)} =1$, we get $\alpha_{2}^{\left(1\right)} =-1$.

We compose a system for calculating $\left(\begin{array}{c} \alpha_{1}^{\left(2\right)} \\ \alpha_{2}^{\left(2\right)} \end{array}\right)$ for $k_{2} =9$:

\[\left(\begin{array}{cc} 5-k_{2} & 4 \\ 4 & 5-k_{2} \end{array}\right)\cdot \left(\begin{array}{c} \alpha_{1}^{\left(2\right)} \\ \alpha_{2}^{\left(2\right)} \end{array}\right)=0,\]

i.e. $\left(5-9\right)\cdot \alpha_{1}^{\left(2\right)} +4\cdot \alpha_{2}^{\left(2\right)} =0$, $4\cdot \alpha_{1}^{\left(2\right)} +\left(5-9\right)\cdot \alpha_{2}^{\left(2\right)} =0$.

Putting $\alpha_{1}^{\left(2\right)} =1$, we get $\alpha_{2}^{\left(2\right)} =1$.

We obtain the SODE solution in matrix form:

\[\left(\begin{array}{c} y_{1} \\ y_{2} \end{array}\right)=\left(\begin{array}{cc} 1 & 1 \\ -1 & 1 \end{array}\right)\cdot \left(\begin{array}{c} C_{1} \cdot e^{1\cdot x} \\ C_{2} \cdot e^{9\cdot x} \end{array}\right).\]

In the usual form, the SODE solution is: $\left\{\begin{array}{c} y_{1} =C_{1} \cdot e^{1\cdot x} +C_{2} \cdot e^{9\cdot x} \\ y_{2} =-C_{1} \cdot e^{1\cdot x} +C_{2} \cdot e^{9\cdot x} \end{array}\right.$.
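The eigenvalues and the general solution just obtained are easy to double-check numerically (a sketch for self-checking, not part of the original solution):

```python
import numpy as np

A = np.array([[5.0, 4.0],
              [4.0, 5.0]])
k, V = np.linalg.eig(A)
print(np.sort(k))                      # [1. 9.], as found above

# Verify that y1 = C1*e^x + C2*e^(9x), y2 = -C1*e^x + C2*e^(9x) solves dY/dx = A.Y
C1, C2, x = 2.0, -1.0, 0.5
y  = np.array([ C1*np.exp(x) + C2*np.exp(9*x),
               -C1*np.exp(x) + C2*np.exp(9*x)])
dy = np.array([ C1*np.exp(x) + 9*C2*np.exp(9*x),
               -C1*np.exp(x) + 9*C2*np.exp(9*x)])
print(np.allclose(dy, A @ y))          # True
```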

The practical value of differential equations is due to the fact that, using them, you can establish a connection between the basic physical or chemical law and often a whole group of variables that are of great importance in the study of technical issues.

The application of even the simplest physical law to a process occurring under variable conditions can lead to a very complex relationship between the variables.

When solving physical and chemical problems that lead to differential equations, it is important to find the general integral of the equation, and also to determine the values of the constants included in this integral, so that the solution corresponds to the given problem.

The study of processes in which all the required quantities are functions of only one independent variable leads to ordinary differential equations.

Steady-state processes can lead to partial differential equations.

In most cases, the solution of differential equations cannot be reduced to finding integrals; to solve such equations, one has to use approximate methods.

Systems of differential equations are used in solving the problem of kinetics.

The most common and universal numerical method for solving ordinary differential equations is the method of finite differences.

Ordinary differential equations give rise to problems in which it is required to find the relation between the dependent and independent variables under conditions when the latter change continuously. Solving such a problem numerically leads to the so-called finite-difference equations.



The region of continuous variation of the argument x is replaced by a set of points called nodes; these nodes make up the difference grid. The desired function of the continuous argument is approximately replaced by a function of a discrete argument on the given grid, called the grid function. The replacement of a differential equation by a difference equation is called its grid approximation. The set of difference equations approximating the original differential equation, together with the additional initial conditions, is called the difference scheme. A difference scheme is said to be stable if a small change in the input data corresponds to a small change in the solution. A difference scheme is called correct if its solution exists and is unique for any input data, and if the scheme is stable.

When solving the Cauchy problem, it is required to find a function y = y(x) that satisfies the equation y′ = f(x, y)

and the initial condition y = y_0 at x = x_0.

Let us introduce a sequence of points x_0, x_1, …, x_n and steps h_i = x_{i+1} − x_i (i = 0, 1, …). At each point x_i a number y_i is introduced which approximates the exact solution y. After the derivative in the original equation is replaced by a ratio of finite differences, the transition from the differential problem to the difference problem is carried out:

y_{i+1} = F(x_i, h_i, y_{i+1}, y_i, …, y_{i−k+1}),

where i = 0, 1, 2, …

Here k is the number of steps of the finite-difference method. In one-step methods, only the single value y_i found at the previous step is used to calculate y_{i+1}; multi-step methods use several previous values.

The simplest one-step numerical method for solving the Cauchy problem is the Euler method.

y_{i+1} = y_i + h·f(x_i, y_i).

This scheme is a difference scheme of the first order of accuracy.
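A minimal implementation of this first-order scheme (the test problem y′ = y is my own illustration):

```python
import math

def euler(f, x0, y0, h, n):
    """Explicit Euler scheme: y_{i+1} = y_i + h*f(x_i, y_i), n steps."""
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

# y' = y, y(0) = 1; the exact value at x = 1 is e
approx = euler(lambda x, y: y, 0.0, 1.0, 0.001, 1000)
print(abs(approx - math.e))   # error ~1e-3: first order in h
```

Halving h roughly halves the error, which is exactly what "first order of accuracy" means.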

If in the equation y′ = f(x, y) the right-hand side is replaced by the arithmetic mean of f(x_i, y_i) and f(x_{i+1}, y_{i+1}), then we obtain the implicit difference scheme of the Euler method:

y_{i+1} = y_i + (h/2)·[f(x_i, y_i) + f(x_{i+1}, y_{i+1})],

which has the second order of accuracy.

Replacing y_{i+1} on the right-hand side of this formula by y_i + h·f(x_i, y_i) turns the scheme into the Euler method with recalculation, which is also of second order:

y_{i+1} = y_i + (h/2)·[f(x_i, y_i) + f(x_i + h, y_i + h·f(x_i, y_i))].

Among the difference schemes of a higher order of accuracy, the fourth-order Runge-Kutta scheme is widespread:

y_{i+1} = y_i + (h/6)·(k_1 + 2k_2 + 2k_3 + k_4), i = 0, 1, …

k_1 = f(x_i, y_i)

k_2 = f(x_i + h/2, y_i + h·k_1/2)

k_3 = f(x_i + h/2, y_i + h·k_2/2)

k_4 = f(x_i + h, y_i + h·k_3).
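The four stages above, in code, on the same illustrative problem y′ = y:

```python
import math

def rk4_step(f, x, y, h):
    """One step of the classical fourth-order Runge-Kutta scheme."""
    k1 = f(x, y)
    k2 = f(x + h/2, y + h*k1/2)
    k3 = f(x + h/2, y + h*k2/2)
    k4 = f(x + h, y + h*k3)
    return y + h*(k1 + 2*k2 + 2*k3 + k4)/6

# y' = y, y(0) = 1 integrated to x = 1 with h = 0.1
x, y = 0.0, 1.0
for _ in range(10):
    y = rk4_step(lambda x, y: y, x, y, 0.1)
    x += 0.1
print(abs(y - math.e))   # error ~1e-6 even at this coarse step
```

Compare: Euler needed h = 0.001 for an error of about 1e-3, while RK4 gets roughly a thousand times better accuracy with a hundred times fewer steps.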

To improve the accuracy of the numerical solution without a significant increase in computer time, the Runge method is used. Its essence is to carry out repeated calculations according to one difference scheme with different steps.

The refined solution is constructed from a series of calculations. If two runs are carried out with a scheme of order k, with steps h and h/2, and the grid functions y_h and y_{h/2} are obtained, then the refined value of the grid function at the nodes of the grid with step h is calculated by the formula

y ≈ y_{h/2} + (y_{h/2} − y_h)/(2^k − 1).
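A sketch of this refinement for the Euler scheme (order k = 1), again on the illustrative problem y′ = y:

```python
import math

def euler_to(f, x0, y0, h, n):
    """n explicit-Euler steps of size h starting from (x0, y0)."""
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

f = lambda x, y: y                       # exact value at x = 1 is e
y_h  = euler_to(f, 0.0, 1.0, 0.1, 10)    # step h
y_h2 = euler_to(f, 0.0, 1.0, 0.05, 20)   # step h/2
refined = y_h2 + (y_h2 - y_h) / (2**1 - 1)   # k = 1 for the Euler scheme
print(abs(y_h2 - math.e), abs(refined - math.e))   # the refinement shrinks the error
```

Two cheap runs plus one subtraction give an answer roughly an order of magnitude more accurate than either run alone.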


Approximate calculations

In physical and chemical calculations one can rarely use methods and formulas that give exact solutions. In most cases, methods of solving equations that lead to exact results are either very complex or do not exist. Usually, methods of approximate solution are used.

When solving physical and chemical problems connected with chemical kinetics or with the processing of experimental data, it often becomes necessary to solve various equations. In a number of cases the exact solution presents great difficulties. One can then use methods of approximate solution, obtaining results with an accuracy that satisfies the task. There are several such methods: the tangent method (Newton's method), the linear interpolation method, the repetition (iteration) method, etc.

Let there be an equation f(x) = 0, where f(x) is a continuous function. Suppose that values a and b can be chosen such that f(a) and f(b) have different signs, for example f(a) > 0, f(b) < 0. In that case there exists at least one root of the equation f(x) = 0 between a and b. By narrowing the interval of values a and b, the root can be found with the required accuracy.

Graphical finding of the roots of an equation. To solve equations of higher degree it is convenient to use the graphical method. Let the equation be given:

x^n + a·x^(n−1) + b·x^(n−2) + … + p·x + q = 0,

where a, b, …, p, q are given numbers.

Geometrically, the equation

y = x^n + a·x^(n−1) + b·x^(n−2) + … + p·x + q

represents a certain curve. One can find any number of its points by computing the values of y corresponding to arbitrary values of x. Each point of intersection of the curve with the OX axis gives the value of one of the roots of the equation. Therefore, finding the roots of the equation reduces to determining the points of intersection of the corresponding curve with the OX axis.

Iteration method. This method consists in converting the equation to be solved, f(x) = 0, into a new equation x = φ(x) and, starting from a first approximation x_1, successively finding more accurate approximations x_2 = φ(x_1), x_3 = φ(x_2), etc. The solution can be obtained with any degree of accuracy, provided that |φ′(x)| < 1 in the interval between the first approximation and the root of the equation.
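A sketch of the iteration method on the classic illustrative equation x = cos x, where |φ′(x)| = |sin x| < 1 near the root:

```python
import math

def iterate(phi, x1, tol=1e-10, max_iter=1000):
    """Repetition method: x2 = phi(x1), x3 = phi(x2), ... until it settles."""
    x = x1
    for _ in range(max_iter):
        x_next = phi(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

root = iterate(math.cos, 1.0)
print(round(root, 6))   # 0.739085, the root of x = cos x
```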

To solve one nonlinear equation, the following methods are used:

a) the half-division (bisection) method:

The isolation interval of a real root can always be reduced by dividing it, for example, in half, by determining at the boundaries of which part of the original interval the function f(x) changes sign. Then the resulting interval is again divided into two parts, and so on. This process is continued until the decimal places kept in the answer stop changing.

We choose an interval [a, b] containing the solution and calculate f(a) and f(b); suppose, say, f(a) > 0 and f(b) < 0. We find the midpoint c = (a + b)/2 and calculate f(c). If f(a) and f(c) have different signs, the root lies in [a, c], and we set b = c; otherwise the root lies in [c, b], and we set a = c.
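The half-division procedure in code, on an assumed example f(x) = x² − 2 with the bracket [1, 2]:

```python
def bisect(f, a, b, tol=1e-10):
    """Halve the bracketing interval [a, b] until it is shorter than tol."""
    fa = f(a)
    while b - a > tol:
        c = (a + b) / 2
        if fa * f(c) <= 0:   # sign change in [a, c]: the root is there
            b = c
        else:                # otherwise the root is in [c, b]
            a, fa = c, f(c)
    return (a + b) / 2

root = bisect(lambda x: x*x - 2, 1.0, 2.0)
print(root)   # converges to sqrt(2) ≈ 1.41421356
```

Each pass halves the interval, so roughly 3.3 iterations buy one more correct decimal digit.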

b) the tangent method (Newton's method):

Let a real root of the equation f(x) = 0 be isolated on a segment [a, b]. Take on this segment a number x_0 for which f(x_0) has the same sign as f″(x_0). Draw the tangent to the curve y = f(x) at the point M_0. For the approximate value of the root we take the abscissa of the point of intersection of this tangent with the Ox axis; this approximate value can be found by the formula

x_1 = x_0 − f(x_0)/f′(x_0).

Applying this technique a second time, at the point M_1, we get

x_2 = x_1 − f(x_1)/f′(x_1),

etc. The sequence x_0, x_1, x_2, … obtained in this way has the desired root as its limit. In general form it can be written as

x_{n+1} = x_n − f(x_n)/f′(x_n).
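The tangent method in code; the function and starting point are my own illustration:

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        x_next = x - f(x) / df(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Root of f(x) = x^2 - 2 starting from x0 = 2, where f and f'' have the same sign
root = newton(lambda x: x*x - 2, lambda x: 2*x, 2.0)
print(root)   # sqrt(2) to near machine precision
```

Note how few iterations are needed compared with bisection: near the root the number of correct digits roughly doubles at each step.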

To solve linear systems of algebraic equations, the Gauss-Seidel iterative method is used. Such problems of chemical technology as the calculation of material and heat balances are reduced to solving systems of linear equations.

The essence of the method is that, by simple transformations, the unknowns x_1, x_2, …, x_n are expressed from equations 1, 2, …, n respectively. Initial approximations x_1 = x_1^(0), x_2 = x_2^(0), …, x_n = x_n^(0) are set; these values are substituted into the right-hand side of the expression for x_1, and x_1^(1) is calculated. Then x_1^(1), x_3^(0), …, x_n^(0) are substituted into the right-hand side of the expression for x_2, and x_2^(1) is found, and so on. After x_1^(1), x_2^(1), …, x_n^(1) have been calculated, the second iteration is carried out. The iterative process continues until the values x_1^(k), x_2^(k), … are close, within a given error, to the values x_1^(k−1), x_2^(k−1), ….
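A sketch of these sweeps for a small diagonally dominant system (the matrix and right-hand side are invented for illustration):

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=500):
    """Each sweep expresses x_j from equation j, always using the newest values."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_prev = x.copy()
        for j in range(n):
            s = A[j] @ x - A[j, j] * x[j]     # sum over i != j with current values
            x[j] = (b[j] - s) / A[j, j]
        if np.max(np.abs(x - x_prev)) < tol:
            return x
    raise RuntimeError("iterations did not converge")

A = np.array([[4.0, 1.0], [2.0, 5.0]])        # diagonally dominant
b = np.array([6.0, 12.0])                     # exact solution is x = (1, 2)
print(gauss_seidel(A, b))
```

Diagonal dominance of the matrix guarantees convergence of the sweeps; for a general matrix the method may diverge.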

Such problems of chemical technology as the calculation of chemical equilibrium, etc., are reduced to solving systems of nonlinear equations. Iterative methods are also used to solve systems of nonlinear equations. The calculation of complex equilibrium is reduced to solving systems of nonlinear algebraic equations.

The algorithm for solving a system by simple iteration resembles the Gauss-Seidel method used to solve linear systems.

Newton's method has a faster convergence than the simple iteration method. It is based on the use of the expansion of the functions F 1 (x 1 , x 2 , ... x n) in a Taylor series. In this case, terms containing second derivatives are discarded.

Let the approximate values of the unknowns of the system obtained at the previous iteration be a_1, a_2, …, a_n. The task is to find the increments Δx_1, Δx_2, …, Δx_n to these values, which give the new values of the unknowns:

x_1 = a_1 + Δx_1

x_2 = a_2 + Δx_2

…

x_n = a_n + Δx_n.

Let us expand the left-hand sides of the equations in a Taylor series, keeping only the linear terms:

Since the left-hand sides of the equations must equal zero, we equate the right-hand sides to zero as well. We obtain a system of linear algebraic equations in the increments Δx.

The values F_1, F_2, …, F_n and their partial derivatives are calculated at x_1 = a_1, x_2 = a_2, …, x_n = a_n.

We write this system in matrix form:

The determinant of the matrix G of this form is called the Jacobian. For a unique solution of the system to exist, the Jacobian must be non-zero at every iteration.

Thus, solving a system of equations by Newton's method consists in determining, at each iteration, the Jacobi matrix (of partial derivatives) and the increments Δx_1, Δx_2, …, Δx_n to the values of the unknowns, the latter found by solving a system of linear algebraic equations.

To eliminate the need to compute the Jacobi matrix at every iteration, an improved Newton's method has been proposed. It allows the Jacobi matrix to be corrected using the values F_1, F_2, …, F_n obtained at previous iterations.
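A sketch of the basic method for a made-up system of two nonlinear equations; the Jacobi matrix is supplied analytically and a linear system is solved at each iteration:

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, max_iter=50):
    """Solve F(x) = 0: at each iteration solve J(a)·dx = -F(a), then set x = a + dx."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))   # linear system for the increments
        x += dx
        if np.max(np.abs(dx)) < tol:
            return x
    return x

# Hypothetical system: x^2 + y^2 = 5, x*y = 2 (one of its roots is x = 2, y = 1)
F = lambda v: np.array([v[0]**2 + v[1]**2 - 5, v[0]*v[1] - 2])
J = lambda v: np.array([[2*v[0], 2*v[1]], [v[1], v[0]]])
print(newton_system(F, J, [2.5, 0.5]))   # converges to [2. 1.]
```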

In many problems of mathematics, physics, and engineering it is required to determine several functions at once, interconnected by several differential equations. A set of such equations is called a system of differential equations. In particular, problems in which the motion of bodies in space under the action of given forces is studied lead to such systems.

Let, for example, a material point of mass m move along a certain curve (L) in space under the action of a force F. It is required to determine the law of motion of the point, i.e., the dependence of the coordinates of the point on time.

Let us assume that

r = r(t) is the radius vector of the moving point. If the variable coordinates of the point are denoted by x, y, z, then

the velocity and acceleration of the moving point are calculated by the formulas

(see Ch. VI, § 5, n. 4).

The force F, under the action of which the point moves, is, generally speaking, a function of time, of the coordinates of the point, and of the projections of the velocity onto the coordinate axes:

On the basis of Newton's second law, the equation of motion of the point is written as follows:

Projecting the vectors on the left- and right-hand sides of this equality onto the coordinate axes, we obtain three differential equations of motion:

These differential equations form a system of three second-order differential equations in the three desired functions:

In what follows we shall restrict ourselves to the study of a system of first-order equations of a special form with respect to the desired functions. This system has the form

The system of equations (95) is called a system in normal form, or a normal system.

In a normal system, the right-hand sides of the equations do not contain derivatives of the desired functions.

A solution of system (95) is a set of functions satisfying each of the equations of this system.

Systems of equations of the second, third, and higher orders can be reduced to a normal system by introducing new sought functions. For example, system (94) can be transformed to normal form as follows. We introduce new functions by setting . Then system (94) can be written as follows:

System (96) is normal.

Consider, for example, a normal system of three equations with three unknown functions:

For a normal system of differential equations, the Cauchy theorem on the existence and uniqueness of a solution is formulated as follows.

Theorem. Let the right-hand sides of the equations of system (97), i.e., the functions , be continuous in all variables in some domain G and have continuous partial derivatives in it. Then, whatever the values belonging to the domain G, there exists a unique solution of the system satisfying the initial conditions:

To integrate system (97) one can apply the method by which the given system, containing three equations in three desired functions, is reduced to a single third-order equation in one unknown function. Let us show an example of the application of this method.

For simplicity, we restrict ourselves to a system of two equations. Let the system of equations

To find a solution of the system we proceed as follows. Differentiating the first of the equations of the system, we find

Substituting into this equality the expression from the second equation of the system, we obtain

Finally, replacing the function y by its expression from the first equation of the system,

we obtain a linear homogeneous equation of the second order in one unknown function:

Integrating this equation, we find its general solution

Differentiating this equality, we find

Substituting the expressions for x and its derivative into the equality and collecting like terms, we obtain the functions that

constitute the solution of this system.

So, by integrating a normal system of two differential equations, we have obtained its solution depending on two arbitrary constants. It can be shown that, in the general case, the general solution of a normal system of n equations depends on n arbitrary constants.
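The reduction of a system to one higher-order equation can be rehearsed with SymPy. For the made-up system x′ = y, y′ = −x, eliminating y (differentiate the first equation and substitute the second) gives x″ + x = 0:

```python
from sympy import symbols, Function, Eq, dsolve, checkodesol

t = symbols('t')
x = Function('x')
# From x' = y and y' = -x: differentiating the first equation and substituting
# the second yields one second-order equation for x alone
ode = Eq(x(t).diff(t, 2) + x(t), 0)
sol = dsolve(ode, x(t))
print(sol)   # x(t) = C1*sin(t) + C2*cos(t); then y = x' recovers the second function
```

The general solution indeed carries two arbitrary constants, matching the statement above about a normal system of two equations.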

Many systems of differential equations, both homogeneous and inhomogeneous, can be reduced to a single equation in one unknown function. Let us demonstrate the method with examples.

Example 3.1. Solve the system

Solution. 1) Differentiating the first equation with respect to t and using the second and third equations to replace the corresponding functions, we find

We differentiate the resulting equation with respect to t once more:

2) We assemble the system

From the first two equations of the system we express two of the variables in terms of the third:

We substitute the found expressions into the third equation of the system:

Thus, to find the function we have obtained a third-order differential equation with constant coefficients.

3) We integrate the last equation by the standard method: we compose the characteristic equation, find its roots, and build the general solution as a linear combination of exponentials, taking the multiplicity of one of the roots into account:

4) Next, to find the two remaining functions, we differentiate the obtained function twice.

Using the relations (3.1) between the functions of the system, we recover the remaining unknowns.

Answer. ,
,.

It may turn out that all the unknown functions except one are eliminated from a third-order system even after a single differentiation. In this case, the order of the differential equation for finding that function will be less than the number of unknown functions in the original system.

Example 3.2. Integrate the system

(3.2)

Solution. 1) Differentiating the first equation, we find

Eliminating the remaining variables from these equations, we obtain a second-order equation for one of the functions:

(3.3)

2) From the first equation of system (3.2) we have

(3.4)

Substituting the found expressions (3.3) and (3.4) into the third equation of system (3.2), we obtain a first-order differential equation for determining the remaining function

Integrating this inhomogeneous first-order equation with constant coefficients, we find
Using (3.4), we find the second function

Answer.
,,
.

Task 3.1. Solve homogeneous systems by reducing to one differential equation.

3.1.1. 3.1.2.

3.1.3. 3.1.4.

3.1.5. 3.1.6.

3.1.7. 3.1.8.

3.1.9. 3.1.10.

3.1.11. 3.1.12.

3.1.13. 3.1.14.

3.1.15. 3.1.16.

3.1.17. 3.1.18.

3.1.19. 3.1.20.

3.1.21. 3.1.22.

3.1.23. 3.1.24.

3.1.25. 3.1.26.

3.1.27. 3.1.28.

3.1.29.
3.1.30.

3.2. Solving systems of linear homogeneous differential equations with constant coefficients by finding a fundamental system of solutions

The general solution of a system of linear homogeneous differential equations can be found as a linear combination of the fundamental solutions of the system. In the case of systems with constant coefficients, linear algebra methods can be used to find fundamental solutions.

Example 3.3. Solve the system

(3.5)

Solution. 1) Rewrite the system in matrix form

. (3.6)

2) We look for a fundamental solution of the system in the form of a vector
. Substituting these functions
into (3.6) and cancelling the exponential, we get

, (3.7)

that is, the number must be an eigenvalue of the matrix
, and the vector a corresponding eigenvector.

3) From the course of linear algebra, it is known that the system (3.7) has a non-trivial solution if its determinant is equal to zero

,

that is . From here we find the eigenvalues
.

4) We find the corresponding eigenvectors. Substituting the first value into (3.7), we obtain a system for finding the first eigenvector.

From it we get a relation between the unknowns. It is enough to choose one non-trivial solution: fixing one component, we obtain the other, so the resulting vector is an eigenvector for the first eigenvalue, and the corresponding vector function is a fundamental solution of the given system of differential equations (3.5).

Similarly, substituting the second root into (3.7), we obtain the matrix equation for the second eigenvector and, from it, the relation between its components. Thus we have the second fundamental solution

.

5) The general solution of system (3.5) is constructed as a linear combination of the two fundamental solutions obtained

or, in coordinate form

.

Answer.

.

Task 3.2. Solve systems by finding the fundamental system of solutions.