Process control subsystem. Process concept

DEVELOPMENT OF AN EDUCATIONAL OPERATING SYSTEM MODULE

Guidelines

for course design in the discipline

"OS"

for full-time students

directions

INTRODUCTION

1. Theoretical section

1.1. Process control subsystem

1.1.1. Context and process descriptor

1.1.2. Process scheduling algorithms

1.1.3. Preemptive and non-preemptive scheduling algorithms

1.1.4. Process model and functions of the process management subsystem of the educational operating system

1.2. Memory management subsystem

1.2.1. Page allocation

1.2.2. Segment allocation

1.2.3. Page-segment allocation

1.2.4. Page replacement algorithms

1.3. File management

1.3.1. File names

1.3.2. File types

1.3.3. Physical organization and file address

2. Procedure for completing the course project

3. Task options

References

APPENDIX A

INTRODUCTION

The purpose of the course project is to study the theoretical foundations of building operating system modules and to gain practical skills in developing a program that forms part of an operating system.

Theoretical section

The functions of a standalone computer's operating system are usually grouped either according to the types of local resources that the OS manages, or according to specific tasks that apply to all resources. Sometimes such groups of functions are called subsystems. The most important subsystems are the subsystems for managing processes, memory, files, and external devices, and the subsystems common to all resources are the user interface, data protection, and administration subsystems.

Process control subsystem

The most important part of the operating system, which directly affects the functioning of the computer, is the process control subsystem. For each newly created process, the OS generates system information structures that contain data on the process's needs for computing system resources, as well as on the resources actually allocated to it. Thus, a process can also be defined as a request to consume system resources.

In order for a process to run, the operating system must assign it an area of RAM that will hold the process's code and data, and provide it with the required amount of processor time. In addition, the process may need access to resources such as files and I/O devices.

In a multitasking system, a process can be in one of three basic states:

RUNNING - the active state of the process, in which the process has all the resources it needs and is being executed by the processor;

WAITING - a passive state of the process: the process is blocked and cannot run for its own internal reasons, because it is waiting for some event to occur, for example, the completion of an I/O operation, the receipt of a message from another process, or the release of some resource it needs;

READY - also a passive state of the process, but here the process is blocked by circumstances external to it: the process has all the resources it requires and is ready to run, but the processor is busy executing another process.

During the life cycle, each process moves from one state to another in accordance with the process scheduling algorithm implemented in this operating system. A typical process state graph is shown in Figure 1.1.

Figure 1.1 - Graph of process states in a multitasking environment

In a single-processor system, only one process can be in the RUNNING state, while several processes can be in each of the WAITING and READY states; these processes form the queues of waiting and ready processes, respectively.

The life cycle of a process begins in the READY state, when the process is ready to run and is waiting for its turn. When activated, the process goes into the RUNNING state and stays there until either it releases the processor itself, by going into the WAITING state for some event, or it is forced off the processor, for example because the processor time quantum allotted to it has been exhausted. In the latter case, the process returns to the READY state. The process enters the same state from the WAITING state once the awaited event occurs.
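The state graph described above can be sketched as a small state machine (an illustrative Python sketch; the event names "dispatch", "preempt", "wait", and "event" are assumptions made for this example, not part of any real OS):

```python
# Illustrative sketch of the process state graph from Figure 1.1.
# States and transitions follow the text; this is not real OS code.

VALID_TRANSITIONS = {
    ("READY", "dispatch"): "RUNNING",      # scheduler activates the process
    ("RUNNING", "preempt"): "READY",       # time quantum exhausted
    ("RUNNING", "wait"): "WAITING",        # process blocks on an event (e.g. I/O)
    ("WAITING", "event"): "READY",         # awaited event has occurred
}

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.state = "READY"               # life cycle begins in READY

    def fire(self, event):
        key = (self.state, event)
        if key not in VALID_TRANSITIONS:
            raise ValueError(f"illegal transition {key}")
        self.state = VALID_TRANSITIONS[key]
        return self.state

p = Process(1)
assert p.fire("dispatch") == "RUNNING"
assert p.fire("wait") == "WAITING"
assert p.fire("event") == "READY"
```

Note that there is no direct WAITING-to-RUNNING edge: an unblocked process always passes through READY first, exactly as in the graph.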


The task management system ensures the passage of tasks through the computer. Depending on the state of a process, one resource or another must be allocated to it. For example, a new process must be placed in memory by allocating an address space to it, and included in the list of tasks competing for CPU time.

One of the main subsystems of a multiprogramming OS that directly affects the operation of the computer is the process and thread management subsystem. It handles their creation and destruction, and also distributes processor time between the processes and threads that simultaneously exist in the system.

When multiple tasks are running in the system at the same time, the threads, although created and run asynchronously, may need to interact, for example when exchanging data. Thread synchronization is therefore one of the important functions of the process and thread management subsystem.

Interaction between processes is carried out using shared variables and special basic operations called primitives.

The process and thread management subsystem has the ability to perform the following operations on processes:

– creation (spawning)/destruction of a process;

– suspension / resumption of the process;

– blocking/waking up the process;

– process start;

– change of process priority.

The process and thread management subsystem is responsible for providing processes with the necessary resources. The OS maintains special information structures in memory, in which it records what resources are allocated to each process. A resource can be assigned to a process for sole use or shared with other processes. Some resources are allocated to the process when it is created, and some are allocated dynamically, on request, at run time. Resources can be assigned to a process for its entire lifetime or only for a specific period. When performing these functions, the process management subsystem interacts with other OS subsystems responsible for resource management, such as the memory management subsystem, the I/O subsystem, and the file system.

1. Creating and Deleting Processes and Threads

To create a process means, first of all, to create a process descriptor: one or more information structures that contain all the information about the process that the operating system needs in order to manage it. This issue was considered in detail earlier; here we only recall that such information may include, for example, the process identifier, data on the location of the executable module in memory, and the degree of privilege of the process (priority and access rights).

Creating a process involves loading the code and data of the process's executable program from disk into RAM. In doing so, the process management subsystem interacts with the memory management subsystem and the file system. In a multithreaded system, when a process is created, the OS creates at least one thread of execution for it. When creating a thread, just as when creating a process, the OS generates a special information structure - a thread descriptor - which contains the thread identifier, data on access rights and priority, the thread state, and so on. Once created, a thread (or process) is in the ready-to-run state (or in an idle state in the case of a special-purpose operating system).
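The descriptors just described can be pictured roughly as follows (a hedged Python sketch; field names such as image_base are invented for illustration and do not correspond to any real OS layout):

```python
from dataclasses import dataclass, field

# Sketch of the information structures the text calls process and
# thread descriptors. Field names are illustrative, not a real OS layout.

@dataclass
class ThreadDescriptor:
    tid: int
    priority: int
    state: str = "READY"          # ready-to-run immediately after creation

@dataclass
class ProcessDescriptor:
    pid: int
    image_base: int               # location of the executable module in memory
    priority: int
    threads: list = field(default_factory=list)

def create_process(pid, image_base, priority):
    proc = ProcessDescriptor(pid, image_base, priority)
    # In a multithreaded system at least one thread is created per process.
    proc.threads.append(ThreadDescriptor(tid=0, priority=priority))
    return proc

p = create_process(1, 0x400000, 8)
assert len(p.threads) == 1 and p.threads[0].state == "READY"
```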

Tasks are created and deleted in response to the corresponding requests from users or from other tasks. A task can spawn a new task: in many systems a thread can ask the OS to create so-called child threads. The spawning task is called the "ancestor" or "parent", and the spawned task the "child". An "ancestor" can suspend or delete its child task, while a "child" cannot control its "ancestor".

In different operating systems, relationships between child threads and their parents are built differently. In some operating systems, their execution is synchronized (after the completion of the parent thread, all its descendants are removed from execution), in others, the descendants are executed asynchronously with respect to the parent thread.

After a process completes, the OS "cleans up the traces" of its stay in the system: it closes all files the process worked with and frees the areas of RAM allocated for the process's code, data, and system information structures. OS queues and resource lists that contained references to the terminated process are corrected.

2. Scheduling and dispatching processes and threads

The scheduling strategy determines which processes are selected for execution in order to achieve the desired goal. Strategies can differ, for example:

– if possible, finish the calculations in the same order in which they were started;

– give preference to shorter processes;

– provide all users (user tasks) with the same services, including the same waiting time.

During the life of a process, the execution of its threads can be repeatedly interrupted and continued.

The transition from executing one thread to another is carried out as a result of scheduling and dispatching.

Thread scheduling is performed on the basis of information stored in process and thread descriptors. Scheduling may take into account thread priority, waiting time in the queue, accumulated execution time, the intensity of I/O access, and other factors. The OS schedules threads for execution regardless of whether they belong to the same process or to different processes. Scheduling is understood as the task of selecting a set of processes such that they conflict as little as possible during execution and use the computing system as efficiently as possible.

Different sources interpret the concepts of "scheduling" and "dispatching" differently. Some authors divide scheduling into long-term (global) and short-term (dynamic, i.e. the current most efficient distribution), and call the latter dispatching. According to other sources, dispatching is understood as implementing the decision made at the scheduling stage. We will stick to this latter interpretation.

Scheduling includes two tasks:

– determining the point in time at which to change the active thread;

– selecting the thread to execute from the queue of ready threads.

There are many scheduling algorithms that solve these problems in different ways. It is the scheduling features that determine the specifics of an operating system. We will look at them a little later.

In most operating systems, scheduling is done dynamically, i.e. decisions are made during operation based on an analysis of the current situation. Threads and processes appear at random times and terminate unpredictably.

Static scheduling can be used in specialized systems in which the entire set of simultaneously executed tasks is known in advance (real-time systems). The scheduler builds a schedule based on knowledge of the characteristics of the task set. This schedule is then used by the operating system for dispatching.

Dispatching consists in implementing the decision found as a result of scheduling, i.e. in switching from one process to another. Dispatching proceeds as follows:

– saving the context of the current thread that is to be replaced;

– loading the context of the new thread;

– launching the new thread for execution.

The thread context reflects, firstly, the state of the computer hardware at the time of the interrupt (the value of the program counter, the contents of general-purpose registers, the processor's operating mode, flags, interrupt masks, and other parameters), and secondly, the parameters of the operating environment (links to open files, data about pending I/O operations, error codes of system calls executed by this thread, etc.).

In the context of a thread, one can single out a part that is common to all threads of a given process (links to open files), and a part that applies only to a given thread (register contents, program counter, processor mode). For example, in the NetWare environment, there are three types of contexts - the global context (process context), thread group context, and individual thread context. The relationship between the data of these contexts is similar to the relationship between global and local variables in a program. The hierarchical organization of contexts speeds up thread switching: when switching from a thread of one group to a thread of another group within the same process, the global context does not change, but only the context of the group changes. Global context switching occurs only when you switch from a thread in one process to a thread in another process.
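The hierarchical organization of contexts can be illustrated with a minimal sketch (Python used purely for illustration; the Thread class and switch function are invented for this example): switching between threads of one process touches only per-thread state, while the global (process) context changes only across process boundaries.

```python
# Sketch of hierarchical contexts (cf. the NetWare example): switching
# between threads of one process leaves the global (process) context alone.

class Thread:
    def __init__(self, tid, process):
        self.tid = tid
        self.process = process
        self.registers = {"pc": 0}        # individual thread context

def switch(current, new, stats):
    # Per-thread context (registers, program counter) is always saved/restored...
    stats["thread_switches"] += 1
    # ...but the global context changes only across process boundaries.
    if current.process is not new.process:
        stats["global_switches"] += 1
    return new

stats = {"thread_switches": 0, "global_switches": 0}
procA, procB = object(), object()
t1, t2, t3 = Thread(1, procA), Thread(2, procA), Thread(3, procB)

cur = switch(t1, t2, stats)   # same process: no global context switch
cur = switch(cur, t3, stats)  # different process: global switch too
assert stats == {"thread_switches": 2, "global_switches": 1}
```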

3. Scheduling algorithms

From the point of view of the first scheduling problem (choosing the moment to change the active thread), scheduling algorithms are divided into two large classes, non-preemptive and preemptive:

non-preemptive - the active thread can execute until it itself transfers control to the system so that the system can select another ready thread from the queue;

preemptive - the operating system decides to change the running task and switches the processor to another thread.

The main difference between these classes of scheduling algorithms is the degree of centralization of the thread scheduling mechanism. Let us consider the main characteristics, advantages, and disadvantages of each class.

Non-preemptive algorithms. An application program, having received control from the OS, itself determines when the current cycle of its execution ends and only then transfers control to the OS using some system call. Consequently, user control of the application is lost for an arbitrary period of time. Developers need to take this into account and design applications so that they work, as it were, in "parts", periodically interrupting themselves and transferring control to the system; in effect, the developer takes on part of the scheduler's functions.

Advantages of this approach:

– a thread is never interrupted at a moment inconvenient for it;

– the problem of concurrent data use is solved, because during each run cycle a task uses the data exclusively and can be sure that no one else will change them;

– switching from thread to thread is faster.

Disadvantages are the greater difficulty of program development, increased demands on the programmer's qualification, and the possibility of one thread seizing the processor through accidental or deliberate looping.

Preemptive algorithms - a cyclic, or round-robin, type of scheduling in which the operating system itself decides whether to interrupt the active application and switches the processor from one task to another according to one criterion or another. In a system with such algorithms, the programmer does not have to worry about the fact that his application will be executed simultaneously with other tasks. Examples include the UNIX, Windows NT/2000, and OS/2 operating systems. Algorithms of this class are oriented toward high-performance application execution.

Preemptive algorithms can be based on the concept of quantization or on a priority mechanism.

Algorithms based on quantization. Each thread is given a limited, continuous quantum of processor time (its value should not be less than 1 ms; as a rule, it is several tens of milliseconds). A thread is moved from the running state to the ready state when its quantum is exhausted. The quanta may be the same for all threads or different.

Different principles can be used when allocating quanta to a thread: the quanta can be of fixed size or can change during different periods of the thread's life. For example, for some particular thread the first quantum may be quite large, while each subsequent quantum allocated to it has a shorter duration (down to specified limits). This creates an advantage for shorter threads, while long-running tasks move into the background. Another principle is based on the fact that processes that frequently perform I/O operations do not use their time slices in full. To compensate for this injustice, a separate queue can be formed from such processes, which has privileges relative to other threads. When choosing the next thread for execution, this queue is examined first, and only if it is empty is a thread selected from the general ready queue.

These algorithms do not use any prior information about the tasks. Service differentiation in this case is based on the thread's "history of existence" in the system.

From the point of view of the second scheduling problem (the principle of choosing the next thread to execute), algorithms can also be conditionally divided into classes: non-priority and priority algorithms. In non-priority service, the next task is selected in some predetermined order without regard to their relative importance and service time. When implementing priority disciplines, some tasks are given the priority right to get into the state of execution.

Now let's look at some of the most common scheduling disciplines.


First come, first served service. The processor is allocated according to the FIFO (First In, First Out) principle, i.e. in the order in which requests for service arrive. This approach implements the strategy "if possible, finish the calculations in the order they appeared". Tasks that were blocked during execution are, after returning to the ready state, queued ahead of those tasks that have not yet run. Thus, two queues are created: one of tasks that have not yet run, and another of tasks that have returned from the waiting state.

This discipline is implemented as non-preemptive: tasks release the processor voluntarily.

The advantage of this algorithm is its ease of implementation. The disadvantage is that under heavy load short jobs are forced to wait in the system for a long time. The following approach eliminates this shortcoming.
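A minimal sketch of FCFS behavior, including the drawback just mentioned (illustrative Python; the job names and burst times are invented for the example):

```python
from collections import deque

# Minimal FCFS (first come, first served) sketch: the processor is given
# to jobs strictly in arrival order; burst times are illustrative.

def fcfs(jobs):
    """jobs: list of (name, burst); returns completion order and wait times."""
    queue = deque(jobs)
    clock, order, waits = 0, [], {}
    while queue:
        name, burst = queue.popleft()
        waits[name] = clock          # time spent waiting before service
        clock += burst
        order.append(name)
    return order, waits

order, waits = fcfs([("A", 10), ("B", 1), ("C", 2)])
assert order == ["A", "B", "C"]
# The short job B had to wait for the long job A: the drawback noted above.
assert waits["B"] == 10
```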

The shortest process is served first. Under this algorithm, the thread with the minimum estimated time required to complete its work is scheduled next. Threads that have little time left before completion are preferred, which reduces the number of pending tasks in the system. The disadvantage is the need to know the estimated times in advance, which is not always possible. As a rough approximation, in some cases the time the thread used when it last received control can be taken.

The algorithm belongs to the category of non-preemptive, non-priority algorithms.
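The "shortest process first" selection rule can be sketched in a few lines (illustrative Python; the estimates are assumed to be known in advance, which, as noted, is the algorithm's weak point):

```python
# "Shortest process first" sketch: from the ready set, always pick the
# job with the smallest estimated service time. Non-preemptive.

def shortest_first(jobs):
    """jobs: dict name -> estimated burst; returns service order."""
    remaining = dict(jobs)
    order = []
    while remaining:
        name = min(remaining, key=remaining.get)   # smallest estimate wins
        order.append(name)
        del remaining[name]
    return order

assert shortest_first({"A": 10, "B": 1, "C": 2}) == ["B", "C", "A"]
```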

The named algorithms can be used in batch operation modes, when the user does not expect the system to respond. For interactive computing, it is necessary, first of all, to provide an acceptable response time and, in multiterminal systems, equality of service. For single-user systems, it is desirable that the programs the user is directly working with have a better response time than background jobs. In addition, some applications, while executing without the direct participation of the user, must nevertheless be guaranteed their share of processor time (for example, an e-mail retrieval program). Priority service methods and the concept of quantization are used to solve such problems.


The carousel, or round-robin (RR), discipline. This discipline belongs to the preemptive algorithms and is based on quantization. Each task receives processor time in portions, or quanta. After its time quantum ends, the task is removed from the processor and placed at the end of the queue of processes ready for execution, and the next task is taken up by the processor. For optimal system operation, the law by which time slices are allocated to tasks must be chosen correctly.

The quantum value is chosen as a compromise between an acceptable response time of the system to user requests (so that their simplest requests do not cause long waits) and the overhead of frequent task switching. During interrupts, the OS must save a fairly large amount of information about the current process, place the descriptor of the suspended task in a queue, and load the context of the new task. With a small time quantum and frequent switching, the relative share of such overhead becomes large, degrading the performance of the system as a whole. With a large time quantum and a growing queue of ready tasks, the responsiveness of the system deteriorates.

In some operating systems, it is possible to explicitly specify the value of the time quantum or the allowable range of its values. For example, on OS/2 the CONFIG.SYS file specifies the minimum and maximum values for a time slice using the TIMESLICE statement: TIMESLICE=32,256 indicates that the time slice can vary between 32 and 256 milliseconds.

This service discipline is one of the most common. In some cases, when the OS does not explicitly support a round-robin scheduling discipline, such service can be organized artificially. For example, some RTOSs use scheduling with absolute priorities, and when priorities are equal the order of arrival applies; that is, only a task with a higher priority can remove a task from execution. If it is necessary to organize service evenly and equitably, i.e. so that all jobs receive equal time slices, the system operator can implement such service himself. To do this, it is enough to assign the same priority to all user tasks and create one high-priority task that does nothing except get scheduled for execution by a timer at specified time intervals. This task merely removes the current application from execution; the application moves to the end of the queue, while the high-priority task itself immediately leaves the processor and gives it to the next process in the queue.

In its simplest implementation, the carousel service discipline assumes that all jobs have the same priority. If a priority service mechanism must be introduced, several queues are usually organized according to priority, and a lower-priority queue is serviced only when the higher-priority queue is empty. This algorithm is used for scheduling in OS/2 and Windows NT systems.
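The round-robin mechanics described above can be simulated in a few lines (an illustrative Python sketch; the quantum and burst values are invented for the example):

```python
from collections import deque

# Round-robin sketch: each task runs for at most one quantum, then goes
# to the tail of the ready queue. Quantum and bursts are illustrative.

def round_robin(jobs, quantum):
    """jobs: list of (name, burst); returns the execution trace."""
    queue = deque(jobs)
    trace = []
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)
        trace.append((name, run))
        left -= run
        if left > 0:                 # quantum exhausted: back of the queue
            queue.append((name, left))
    return trace

trace = round_robin([("A", 5), ("B", 3)], quantum=2)
assert trace == [("A", 2), ("B", 2), ("A", 2), ("B", 1), ("A", 1)]
```

The trace shows the compromise discussed above: a smaller quantum gives more turns (better responsiveness) at the cost of more switches.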

Priority scheduling.

An important concept underlying many preemptive algorithms is priority service. Such algorithms use information found in the thread descriptor: its priority. Different systems define priority differently: in some, the numerically largest value is considered the highest priority; in others, on the contrary, zero is the highest priority.

As a rule, the priority of a thread is directly related to the priority of the process in which it runs. Process priority is assigned by the operating system at creation time, taking into account whether the process is a system or application process, the status of the user who launched it, and whether the user explicitly requested a certain priority for it. The priority value is included in the process descriptor and is used when assigning priorities to its threads. If a thread was initiated not by a user command but as a result of a system call executed by another thread, then the OS must take the parameters of that system call into account when assigning the thread's priority.

When programs are serviced according to the algorithms described above, a situation may arise in which some control or management tasks cannot run for a long period because of increased system load (especially in an RTOS). The consequences of the untimely completion of such tasks can be more serious than those of failing to run some programs with a higher priority. In this case, it is advisable to temporarily raise the priority of "emergency" tasks (those whose processing deadline is approaching) and restore the previous value after execution. Introducing mechanisms for dynamically changing priorities makes it possible to give the system a faster response to short user requests (important for interactive work) while still guaranteeing the execution of all requests.

Thus, a priority can be static (fixed) or dynamic (changing depending on the situation in the system). The so-called base priority of a thread depends directly on the base priority of the process that spawned it. In some cases the system can raise the priority of a thread (and to varying degrees), for example if the thread did not fully use the processor time quantum allotted to it, or lower it otherwise. For example, the OS raises priority more for threads waiting for keyboard input and less for threads performing disk operations. In some systems that use dynamic priorities, fairly complex formulas are used to change the priority, involving the base priority values, the load on the computing system, the initial priority value specified by the user, and so on.

There are two kinds of priority scheduling: service with relative priorities and service with absolute priorities. In both cases the thread to execute is chosen in the same way - the thread with the highest priority is selected - but the moment of changing the active thread is determined differently. In a system with relative priorities, the active thread runs until it leaves the processor itself (goes into a waiting state, an error occurs, or the thread terminates). In a system with absolute priorities, an active thread is interrupted, in addition to the reasons above, whenever a thread with a higher priority than the active one appears in the queue of ready threads; the running thread is then interrupted and moved to the ready state.

In a system scheduled on relative priorities, switching costs are minimized, but one task can occupy the processor for a long time. This service mode is unsuitable for time-sharing and real-time systems, but it is widely used in batch processing systems (for example, OS/360). Absolute-priority scheduling suits facility management systems, where a fast response to events is important.
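Absolute-priority preemption can be sketched as follows (illustrative Python; smaller numbers mean higher priority here, though, as noted above, conventions differ between systems, and the task names are invented for the example):

```python
import heapq

# Sketch of absolute-priority scheduling: a newly arrived thread with a
# higher priority immediately preempts the running one. Smaller number =
# higher priority in this example.

class Scheduler:
    def __init__(self):
        self.ready = []                        # priority heap of ready threads
        self.running = None

    def make_ready(self, prio, name):
        if self.running and prio < self.running[0]:
            heapq.heappush(self.ready, self.running)   # preempt: back to ready
            self.running = (prio, name)
        else:
            heapq.heappush(self.ready, (prio, name))
            if self.running is None:
                self.running = heapq.heappop(self.ready)

s = Scheduler()
s.make_ready(5, "editor")
assert s.running == (5, "editor")
s.make_ready(1, "interrupt_handler")           # higher priority arrives
assert s.running == (1, "interrupt_handler")   # editor was preempted
```

Under relative priorities the second branch would be the only one: the arrival of a higher-priority thread would not, by itself, displace the running thread.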

A mixed type of scheduling is used in many operating systems: priority-based scheduling algorithms are combined with the concept of quantization.

One of the main subsystems of any modern multiprogramming OS that directly affects the functioning of the computer is the process and thread management subsystem. Its main functions are:

– creating processes and threads;

– providing processes and threads with the necessary resources;

– process isolation;

– scheduling the execution of processes and threads (in general, one should also speak of task scheduling);

– thread dispatching;

– organization of interprocess communication;

– synchronization of processes and threads;

– termination and destruction of processes and threads.

1. Five main events lead to the creation of a process:

– system initialization (OS boot);

– execution of a process-creation request by a running process;

– a user request to create a process, for example when logging in interactively;

– initiation of a batch job;

– creation by the operating system of a process needed for the operation of some service.

Usually several processes are created when the OS boots. Some of them are high-priority processes that interact with users and perform assigned work. The rest are background processes not associated with specific users; they perform special functions related, for example, to e-mail, Web pages, printing, file transfer over the network, periodic launching of programs (such as disk defragmentation), and so on. Background processes are called daemons.

A new process can be created at the request of the current process. Creating new processes is useful when the task to be performed can most easily be formed as a set of related, yet independent, cooperating processes. In interactive systems, the user can launch a program by typing a command on the keyboard or by double-clicking the program's icon; in both cases a new process is created and the program is launched in it. In batch processing systems on mainframes, users submit a job (possibly using remote access), and the OS creates a new process and starts the next job from the queue when the required resources become free.

2. From a technical point of view, in all these cases a new process is formed in the same way: the current process executes a system request to create a new process. The process and thread management subsystem is responsible for providing processes with the necessary resources. The OS maintains special information structures in memory in which it records what resources are allocated to each process. It can assign resources to a process for sole use or for sharing with other processes. Some resources are allocated to a process when it is created, and some dynamically, on request, at run time. Resources can be allocated to a process for its entire lifetime or only for a specific period. When performing these functions, the process management subsystem interacts with other OS subsystems responsible for resource management, such as the memory management subsystem, the I/O subsystem, and the file system.

3. To prevent processes from interfering with resource allocation, and from damaging each other's code and data, the most important task of the OS is to isolate one process from another. To do this, the operating system provides each process with a separate virtual address space, so that no process can directly access the commands and data of another process.

4. In an OS where both processes and threads exist, a process is considered as an application for the consumption of all types of resources except one: processor time. This most important resource is distributed by the operating system among other units of work - threads, which get their name from the fact that they are sequences (threads of execution) of commands. The transition from executing one thread to another is carried out as a result of scheduling and dispatching. The work of determining when to interrupt the execution of the current thread, and which thread should be allowed to run, is called scheduling. Thread scheduling is based on information stored in process and thread descriptors. It takes into account thread priority, waiting time in the queue, accumulated execution time, the intensity of I/O access, and other factors.

5. Dispatching consists in implementing the decision found as a result of scheduling, i.e. in switching the processor from one thread to another. Dispatching takes place in three stages:

– saving the context of the current thread;

– loading the context of the new thread;

– launching the new thread for execution.

6. When multiple independent tasks run in the system at the same time, additional problems arise. Although threads are created and executed asynchronously, they may need to interact, for example when exchanging data. Processes and threads can use a wide range of facilities to communicate with each other: pipes (in UNIX), mailboxes (Windows), remote procedure calls, and sockets (which in Windows connect processes on different machines). Matching the speeds of threads is also very important to prevent race conditions (when several threads try to change the same file), deadlocks, and other collisions that arise when sharing resources.

7. Thread synchronization is one of the most important functions of the process and thread management subsystem. Modern operating systems provide many synchronization mechanisms, including semaphores, mutexes, critical sections, and events. All these mechanisms work with threads, not processes, which is why when a thread blocks on a semaphore, the other threads of its process may continue to run.
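A minimal sketch of semaphore-based synchronization, using Python's threading module for illustration (the producer/consumer roles and names are invented for the example): the consumer thread blocks on the semaphore while the producer thread keeps running, exactly the per-thread blocking described above.

```python
import threading

# Sketch of thread synchronization with a semaphore: while one thread of
# a process is blocked on the semaphore, its other threads keep running.

sem = threading.Semaphore(0)
log = []

def producer():
    log.append("produced")
    sem.release()                  # wake up the waiting consumer

def consumer():
    sem.acquire()                  # blocks until the producer releases
    log.append("consumed")

c = threading.Thread(target=consumer)
p = threading.Thread(target=producer)
c.start()
p.start()
c.join(); p.join()
assert log == ["produced", "consumed"]
```

Because the consumer cannot pass acquire() before the producer's release(), the order of the log entries is deterministic even though the threads run concurrently.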

8. Whenever a process terminates, which happens due to one of the following events: normal exit, exit on error, exit on fatal error, or destruction by another process, the OS takes steps to "clean up the traces" of its stay in the system. The process control subsystem closes all files the process worked with and frees the areas of RAM allocated for its code, data, and system information structures. All OS queues and resource lists that contained references to the terminated process are corrected.

Ministry of Transport of the Russian Federation

Federal Agency for Railway Transport

GOU VPO "DVGUPS"

Department: "Information technologies and systems"

on the topic: "Process management subsystem"

Completed by: Sholkov I.D.

group 230

Checked by: Reshetnikova O.V.

Khabarovsk 2010

Introduction

1. Description of the program

1.1 Functionality

1.2 Technical means used to create the program

1.3 Multithreading and multiprocessing

1.4 Thread and process priorities

1.5 Thread Synchronization Methods

1.6 Logical structure of the program

2. User's guide for working with the program

2.1 General information and purpose of the program

2.2 GUI

2.3 Working with the program

2.4 Key Features of ProcessManager

Conclusion

Bibliography

C# supports parallel code execution through multithreading. A thread is an independent execution path capable of executing concurrently with other threads.

A C# program starts as a single thread automatically created by the CLR and the operating system (the "main" thread) and becomes multithreaded by creating additional threads.

Multithreading is managed by the thread scheduler, a function the CLR usually delegates to the operating system. The thread scheduler ensures that active threads are given appropriate execution time, and threads that are waiting or blocked, such as waiting for an exclusive lock or user input, do not consume CPU time.

On uniprocessor computers, the thread scheduler uses time slicing—quick switching between the execution of each of the active threads. This leads to unpredictable behavior, as in the very first example, where each sequence of characters 'X' and 'Y' corresponds to a time slice allocated to the thread. In Windows XP, the typical time slice value - tens of milliseconds - is chosen to be much larger than the CPU cost of a context switch between threads (several microseconds).
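The 'X'/'Y' example referred to above is not reproduced in this excerpt; a minimal sketch of what such a demonstration might look like (class name and loop counts are illustrative):

```csharp
using System;
using System.Threading;

class Interleave
{
    static void Main()
    {
        // A second thread prints 'Y' while the main thread prints 'X'.
        // Time slicing interleaves the two sequences in an unpredictable pattern.
        Thread t = new Thread(() =>
        {
            for (int i = 0; i < 1000; i++) Console.Write('Y');
        });
        t.Start();

        for (int i = 0; i < 1000; i++) Console.Write('X');
        t.Join();   // wait for the second thread before exiting
    }
}
```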

On multiprocessor computers, multithreading is implemented as a mixture of time slicing and true parallelism, with different threads executing code on different CPUs. The need for time slicing still remains, as the operating system must service both its own threads and those of other applications.

A thread is said to be preempted when its execution is suspended due to external factors such as time slicing. In most cases, a thread has no control over when and where it will be preempted.

All threads of an application are logically contained within a process, the operating system module in which the application is running.

In some respects, threads and processes are similar—for example, time is shared between processes running on the same computer, just as it is shared between threads in the same C# application. The key difference is that the processes are completely isolated from each other. Threads share memory (the heap) with other threads in the same application. This allows one thread to supply data in the background and another to show that data as it arrives.
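A small sketch of this shared-memory property: both threads below increment the same static field, something two isolated processes could not do directly (the lock prevents lost updates):

```csharp
using System;
using System.Threading;

class SharedHeap
{
    static int counter = 0;                       // shared state, visible to every thread
    static readonly object gate = new object();   // lock object guarding the counter

    static void Main()
    {
        Thread t = new Thread(() =>
        {
            for (int i = 0; i < 100000; i++)
                lock (gate) counter++;            // the lock prevents lost updates
        });
        t.Start();

        for (int i = 0; i < 100000; i++)
            lock (gate) counter++;

        t.Join();
        Console.WriteLine(counter);               // deterministically 200000
    }
}
```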

The Priority property determines how much execution time a thread is allowed relative to other threads in the same process. There are five levels of thread priority: enum ThreadPriority { Lowest, BelowNormal, Normal, AboveNormal, Highest }

The priority value becomes significant when multiple threads are executing at the same time.

Setting the thread priority to the maximum does not mean real-time operation, as there is still an application process priority. To work in real time, you need to use the Process class from the System.Diagnostics namespace to raise the priority of the process.

From ProcessPriorityClass.High one step to the highest process priority - Realtime. By setting the process priority to Realtime, you are telling the operating system that you want your process to never be preempted. If your program accidentally gets into an infinite loop, the operating system can be completely blocked. In this case, only the power button can save you. For this reason, ProcessPriorityClass.High is considered the highest usable process priority.
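A hedged sketch of raising the current process's priority with the Process class, as described above:

```csharp
using System;
using System.Diagnostics;

class PrioritySketch
{
    static void Main()
    {
        // Raise the priority class of the current process.
        // High is the highest level that is safe to use in practice;
        // Realtime risks locking up the machine if the program loops.
        using (Process p = Process.GetCurrentProcess())
        {
            p.PriorityClass = ProcessPriorityClass.High;
            Console.WriteLine(p.PriorityClass);
        }
    }
}
```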

If a real-time application has a user interface, it may not be desirable to raise the priority of its process, since updating the screen will eat up too much CPU time - slowing down the entire computer, especially if the UI is quite complex.

The lock statement (aka Monitor.Enter/Monitor.Exit) is one example of thread synchronization constructs. Lock is the most suitable means for organizing exclusive access to a resource or section of code, but there are synchronization tasks (such as signaling the start of work to a waiting thread) for which lock will not be the most adequate and convenient means.

The Win32 API has a rich set of synchronization constructs, and they are available in the .NET Framework as the EventWaitHandle, Mutex, and Semaphore classes. Some are more practical than others: Mutex, for example, largely duplicates the capabilities of lock, while EventWaitHandle provides unique signaling capabilities.

All three classes are based on the WaitHandle abstract class, but their behavior is quite different. One of the common features is the naming ability, which makes it possible to work with threads not only of one, but also of different processes.

EventWaitHandle has two derived classes, AutoResetEvent and ManualResetEvent (which have nothing to do with C# events and delegates). Both classes have access to all the functionality of the base class, the only difference is that the base class constructor is called with different parameters.

In terms of performance, all WaitHandles typically run in the region of a few microseconds. This rarely matters given the context in which they are applied.

AutoResetEvent is the most commonly used WaitHandle class and the main synchronization construct, along with lock.

AutoResetEvent is very much like a turnstile: one ticket lets one person through. The "auto" prefix in the name refers to the fact that an open turnstile automatically closes, or "resets", after letting someone pass. A thread is blocked at the turnstile by calling WaitOne (wait at this one turnstile until it opens), and a ticket is inserted by calling the Set method. If multiple threads call WaitOne, a queue forms behind the turnstile. A ticket can be inserted by any thread; in other words, any (non-blocked) thread with access to the AutoResetEvent object can call Set to let one blocked thread through.

If Set is called when there are no waiting threads, the handle will remain open until a thread calls WaitOne. This feature helps to avoid a race between the thread approaching the turnstile and the thread inserting the ticket ("oops, the ticket was inserted a microsecond too early, sorry, but you'll have to wait some more!"). However, multiple calls to Set for a free turnstile do not allow a whole crowd to pass at a time - only one person will be able to pass, all other tickets will be wasted.
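The turnstile behavior above can be sketched as follows (the thread body and the Sleep interval are illustrative):

```csharp
using System;
using System.Threading;

class TurnstileDemo
{
    // A closed turnstile: the first WaitOne() will block.
    static readonly EventWaitHandle wh = new AutoResetEvent(false);

    static void Main()
    {
        Thread worker = new Thread(() =>
        {
            Console.WriteLine("Waiting at the turnstile...");
            wh.WaitOne();                 // block until a ticket is inserted
            Console.WriteLine("Passed through!");
        });
        worker.Start();

        Thread.Sleep(500);                // give the worker time to reach WaitOne
        wh.Set();                         // insert one ticket: exactly one thread passes
        worker.Join();
        wh.Close();                       // release the operating system handle
    }
}
```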

WaitOne takes an optional timeout parameter; the method returns false if the wait ends by timeout rather than by a signal. WaitOne can also be instructed to exit the current synchronization context (if automatic locking is used) while it waits, to avoid excessive blocking.

The Reset method ensures that an open turnstile is closed without any waiting or blocking.

An AutoResetEvent can be created in two ways. First, with its constructor: EventWaitHandle wh = new AutoResetEvent(false);

If the constructor argument is true, the event is created in the signaled (open) state, as if Set had already been called once.

The other way is to create an object of the base class, EventWaitHandle: EventWaitHandle wh = new EventWaitHandle(false, EventResetMode.AutoReset);

The EventWaitHandle constructor can also be used to create a ManualResetEvent-style object (by passing EventResetMode.ManualReset).

The Close method must be called as soon as the WaitHandle is no longer needed - to free the operating system resources. However, if the WaitHandle is used throughout the lifetime of the application (as in most of the examples in this section), this step can be omitted, as it will be performed automatically when the application domain is destroyed.

ManualResetEvent is a variation of AutoResetEvent. The difference is that it doesn't reset automatically after a thread passes through WaitOne, and acts like a barrier - Set opens it, allowing any number of threads that called WaitOne to pass through. Reset closes the barrier, potentially building up a queue waiting for the next opening.

This functionality can be emulated using the boolean variable "gateOpen" (declared as volatile) in combination with "spin-sleeping" - repeating flag checks and waiting for a short period of time.

ManualResetEvent can be used to signal completion of some operation or thread initialization and readiness to perform work.
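A sketch of this use, assuming three hypothetical worker threads that wait for an initialization step to finish:

```csharp
using System;
using System.Threading;

class InitSignal
{
    // The barrier starts closed; Set() will open it for everyone at once.
    static readonly ManualResetEvent ready = new ManualResetEvent(false);

    static void Main()
    {
        for (int i = 1; i <= 3; i++)
        {
            int id = i;                   // capture the loop value for the closure
            new Thread(() =>
            {
                ready.WaitOne();          // all workers queue at the barrier
                Console.WriteLine("Worker " + id + " started");
            }).Start();
        }

        Thread.Sleep(500);                // simulate an initialization step
        ready.Set();                      // open the barrier: all three workers proceed
    }
}
```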

1.6 Logical structure of the program

The basis of the program is the abstract class BetaProc. It implements an abstract process model, without specifying the actions to be performed, with a set of variables and methods common to all processes. Three process classes, SinProc, FibbonProc, and ProcRandom, are derived from this class; each of them implements only the methods that return the process type and the method executed by the process itself. Each Base method, which holds the executable code, contains a wait handle shared by all processes, which allows only one process to execute its code while the others queue up and receive execution time in proportion to their priority. At the start of work, a timer is started that gives each process the same time slice of 3 seconds.

Processes, however, are not simply left floating in memory. The ProcManager class was created specifically for process management. When a process is created, it and all information about it are entered into an array, and according to the number of the cell in which the process is stored, it is given an identifier by which it can be accessed during operation. The ProcManager class also provides a graphical view of what is in memory. Each important attribute of a process is displayed in a special table on the form, and when one of them changes, an event fires that updates the table entry in real time, so we can watch the "Working" caption run from one process to another.

2. User's guide for working with the program

2.1 General information and purpose of the program

The program was written in the Visual Studio 2008 environment in C# and is a process control manager built on modern control tools with an intuitive graphical interface. The program is completely standalone and does not require installation of other software. All reference information is stored in this Manual and in the Technical Project. In case of failures in the program, contact the Author to eliminate them. The main part of the manual describes the program's features and main characteristics.

2.2 GUI

After opening the program, the user is presented with a graphical interface.

Fig. 1: The main window of the program after launch.


The "Process" area allows you to select the process to run. It has three options: Fibonacci Numbers, Random Numbers, and Recursive Sine.

The "Priority" area allows you to set the priority of the running process. It has five options: Low, Below average, Average, Above average, High.

The start button is used to start the process with the selected parameters.

The table in the central part of the window displays the status of each running process. As each process is added, one line is automatically appended to the table. The table has five fields:

1) Process number - shows the sequence number of the process

2) Process type - shows the action that this process performs

3) Process status - shows whether the process is currently running or not. Also shows if the process is stopped, terminated, or just resumed.

4) Process priority - shows the priority of the process that was assigned to it when it was created.

5) Percentage of CPU time - displays the percentage of CPU time used.

In the right part of the program window there is a field in which the result of the running process is recorded in real time.

At the bottom of the program window there are controls for working with already running processes.

"Stop" button - stops the selected process with the possibility of its subsequent restart.

"Resume" button - restarts the stopped process.

"End" button - stops the selected process without the possibility of resuming it.

The program also has a standard title bar, with which you can minimize, maximize, or close the working window of the program.

2.3 Working with the program

Processes are launched using the "Run" button, but before that, you must set the process parameters.

Random number - the process generates a random number in the range from one to one hundred and displays it in the output field.

Fibonacci numbers - generates the Fibonacci sequence starting from the first term and displays it in the output field. As soon as the sequence values exceed a thousand, the sequence is reset to its first members.

Recursive sine - generates the value of the sine of X. Initially X is 1; afterwards it is assigned the computed values of sin(x). Values rounded to the third decimal place are displayed in the output field.

The process priority indicates how likely a process is to be the next to run when the previous process has finished. For example, if you run three processes with the same priority, they will run approximately the same number of times, whereas if you run two high-priority processes and one low-priority process, the low-priority process will run roughly once in sixteen runs. However, the program's architecture ensures that if more than one process is running, the same process cannot be executed twice in a row.

After starting several processes, the program window will look like this:


Fig. 2: The program at work.

A process in the working state is marked with the caption "Working", and it is the results of its execution that are currently displayed in the output field.

If we want to stop process number 2, we need to select the second line in the process table and click the "Stop" button. After executing the command, the window will look like this:

Fig. 3: The process with Process ID = 2 is stopped.

A stopped process is marked "Stopped" in the third column. Later, if we want to resume it, we must again select its number in the "PID" column and click the "Resume" button. If you click the "Resume" button on a process that is not marked "Stopped", nothing will happen.

A process that has started working again, but has never been executed yet, is marked with the caption "Resumed" in the third column, as in the picture:

Fig. 4: The process with Process ID = 2 has been resumed.

If we want to end process number 3, we must select the third line in the process table and click the "End" button. A process that has been terminated cannot be started again; instead, create a new process with the same parameters.


It is also possible to sort the running processes. By default they are sorted by ID, which reflects the order in which they were created. Clicking the 'Process Type' column heading sorts the processes by type, 'Process Status' by status, and 'Priority' groups them by priority. Clicking again sorts in reverse order.

To exit the program correctly, simply click on the cross in the title bar.

2.4 Main characteristics of the program process manager

ProcessManager is a program designed to manage processes by the user of a personal computer. It runs under MS Windows 2000/XP/Vista/7 operating systems.

ProcessManager allows you to create lists of processes and display the results of their activity on the screen, with subsequent saving. Standard tools cannot give exact figures; they only show the approximate amount of CPU and memory resources a process is using over a given period of time.

The program includes a set of basic elements, each of which performs a specific task of the project. After the project is loaded and the processes are started, the values and parameters of the processes are immediately shown on the display. A detailed description of working with the program is presented in the first section of this document. A detailed description of the program's logical structure and code can be found in the document "Technical design of an automated subsystem for visualizing work with ProcessManager processes". The program works normally and, if necessary, can run around the clock and continuously. The program has been extensively tested and is protected from failures and exceptions.

Conclusion

The process control subsystem is one of the most important parts of the operating system. This term paper presented one possible implementation of it, albeit one not applicable in real life. It should be noted that the project is greatly simplified: a process control subsystem that could really become part of a multitasking operating system requires far greater sophistication, the ability to work with interrupts and heterogeneous processes, and some protection from external influences, because intentional or unintentional termination of critical processes can bring down the whole system. Still, the work presents a rather elegant implementation. Its main limitation is that not all possible components are implemented, and some aspects are handled by the built-in facilities of the host operating system, in this case Windows. When actually programming an operating system, the process control subsystem must be built from scratch, with definitions and descriptions of many elements that in this project work by default.

Bibliography

1) Bezbogov, A.A. Security of operating systems: textbook / A.A. Bezbogov, A.V. Yakovlev, Yu.F. Martemyanov. - M.: Mashinostroenie-1, 2007. - 220 p.

2) Operating systems, lectures on operating systems [Electronic resource] / www.osi-ru.ru. - Contents: Process management; Memory management; Data management; Device management.

3) Troelsen, E. C# and the .NET Platform: Tutorial / E. Troelsen. - St. Petersburg: Piter Press, 2007. - 796 p.

Appendix

ProcessManager source code

using System.Diagnostics;

using System.Linq;

using System.Text;

using System.Threading;

using System.Windows.Forms;

using Timer=System.Threading.Timer;

namespace ProcManager

abstract class BetaProc

protected Thread a;

bool isWorking = false;

public event EventHandler WorkingStateChanged = delegate { };

public bool IsWorking

get { return isWorking; }

isWorking = value;

WorkingStateChanged(this, EventArgs.Empty);

public void Delete()

if (IsWorking == true)

if (WaitToStart.Set() == false)

WaitToStart.Set();

public ThreadPriority Prior

get { return a.Priority; }

set { a.Priority = value; }

public void Stop()

if(isWorking == true)

WaitToStart.Set();

private DataGridView data;

public delegate void ChangeStateEventHandler(string msg);

public static event ChangeStateEventHandler change;

public abstract string GetType();

public string GetState()

return IsWorking ? "Working" : "Not working";

public string GetPriority()

return a.Priority.ToString();

public void ChangeState()

if(IsWorking==false)

IsWorking = true;

IsWorking = false;

public abstract void Base();

private Control pReporter;

public DataGridView reporterD

public Control reporter

return pReporter;

pReporter = value;

public EventWaitHandle SwaitTostart

WaitToStart = value;

protected Stopwatch timer = new Stopwatch();

public void Start()

a = new Thread(Base);

delegate void SetTextDelegate2(string Text);

public static EventWaitHandle WaitToStart;

public void SetText2(string Text)

if (reporter.InvokeRequired)

SetTextDelegate2 a = new SetTextDelegate2(SetText2);

reporter.Invoke(a, new object[] { Text });

else reporter.Text += Text;

public void Restart()

if(isWorking == true)

timer = new Stopwatch();

a=new Thread(Base);

using System.Collections.Generic;

using System.Diagnostics;

using System.Linq;

using System.Text;

using System.Threading;

namespace ProcManager

class FibbonProc:BetaProc

public readonly string Type = "Fibbonacci numbers";

private int FSum = 1;

private int FSum2 = 1;

private int temp = 0;

public override void Base()

WaitToStart.WaitOne();

if(IsWorking==false)

if (FSum >= 1000)

FSum = FSum + FSum2;

SetText2(FSum.ToString() + "\r\n");

Thread.Sleep(1110);

while (timer.ElapsedMilliseconds <= 3000);

WaitToStart.Set();

using System.Collections.Generic;

using System.Diagnostics;

using System.Linq;

using System.Text;

using System.Threading;

namespace ProcManager

class ProcRandom:BetaProc

Random a = new Random();

private int res;

public readonly string Type = "Random number";

public override string GetType()

public override void Base()

WaitToStart.WaitOne();

if(IsWorking==false)

res = a.Next(100);

SetText2(res.ToString()+"\r\n");

Thread.Sleep(1110);

while (timer.ElapsedMilliseconds<= 3000);

WaitToStart.Set();

using System.Collections.Generic;

using System.Diagnostics;

using System.Linq;

using System.Text;

using System.Threading;

namespace ProcManager

class SinProc:BetaProc

private double x = 1;

public readonly string Type = "Sine X";

public override string GetType()

public override void Base()

WaitToStart.WaitOne();

if(IsWorking == false)

x = Math.Sin(x);

SetText2(Math.Round(x, 3).ToString()+"\r\n");

Thread.Sleep(1110);

while (timer.ElapsedMilliseconds<= 3000);

WaitToStart.Set();

using System.Collections;

using System.Threading;

using System.Windows.Forms;

namespace ProcManager

class ClassProcManager

private BetaProc mas = new BetaProc;

private DataGridView a;

private int index = 0;

public BetaProc ReturnMas()

public int Index()

public DataGridView reporterD

public void AddThread(BetaProc a)

if (index< mas.Length)

MessageBox.Show("Too many processes");

public void ShowInDataView(BetaProc b)

a.Rows.Add(index + 1, b.GetType(), b.GetState(), b.GetPriority());

public void SetWaitProperty(BetaProc b)

int i = Array.IndexOf(mas, b);

if((i<0) || (i>a.Rows.Count - 1))

for (int s = 0; s< index; s++)

if ((int)a.Rows[s].Cells.Value == i+1)

DataGridViewRow row = a.Rows[s];

row.Cells.Value = b.GetState();

using System.Collections.Generic;

using System.ComponentModel;

using System.Data;

using System.Diagnostics;

using System.Drawing;

using System.Linq;

using System.Text;

using System.Threading;

using System.Windows.Forms;

namespace ProcManager

public partial class Form1: Form

InitializeComponent();

public int index = 0;

private ClassProcManager manager = new ClassProcManager();

private EventWaitHandle wh1 = new AutoResetEvent(true);

private RadioGroup processType;

private RadioGroup processPriority;

private ThreadPriority ProcessPriorities = new ThreadPriority;

ThreadPriority HighestPriority = ThreadPriority.Lowest;

///

/// Returns the process priority

///

/// Tag

/// ThreadPriority enum object

private ThreadPriority IndexToPriority(int priority)

switch (priority)

case 0: return ThreadPriority.Lowest;

case 1: return ThreadPriority.BelowNormal;

case 2: return ThreadPriority.Normal;

case 3: return ThreadPriority.AboveNormal;

case 4: return ThreadPriority.Highest;

default: return ThreadPriority.Normal;

private void button1_Click(object sender, EventArgs e)

BetaProc process;

switch(processType.SelectedButton)

case 0: process = new FibbonProc();

case 1: process = new ProcRandom();

case 2: process = new SinProc();

default: process = new ProcRandom();

process.SwaitTostart = wh1;

process.reporter = richTextBox1;

process.reporterD = dataGridView1;

process.Start();

process.Prior = IndexToPriority(processPriority.SelectedButton);

manager.AddThread(process);

manager.ShowInDataView(process);

process.WorkingStateChanged += new EventHandler(a_WorkingStateChanged);

// calculation of processor time

if (process.Prior > HighestPriority) HighestPriority = process.Prior;

ProcessPriorities = process.prior;

if (index >= 1)

double FreeProcessorTime = 100;

double TimePerProcess = 100 / (index + 1);

double PriorityWeight = 0;

int HighPriorityProcessCount = 0;

// calculation for processes with a priority volume lower than the highest

for (int i = 0; i< index + 1; i++)

if (ProcessPriorities[i] != HighestPriority)

switch (ProcessPriorities[i])

case ThreadPriority.Lowest: PriorityWeight = 0.2;

case ThreadPriority.BelowNormal: PriorityWeight = 0.4;

case ThreadPriority.Normal: PriorityWeight = 0.6;

case ThreadPriority.AboveNormal: PriorityWeight = 0.8;

FreeProcessorTime -= TimePerProcess * PriorityWeight;

dataGridView1.Rows[i].Cells.Value = Math.Round(TimePerProcess * PriorityWeight);

else HighPriorityProcessCount++;

// calculation for processes with the highest priority

for (int i = 0; i< index + 1; i++)

if (ProcessPriorities[i] == HighestPriority)

dataGridView1.Rows[i].Cells.Value = Math.Round(FreeProcessorTime / HighPriorityProcessCount);

else dataGridView1.Rows.Cells.Value = "100";

void a_WorkingStateChanged(object sender, EventArgs e)

BetaProc b = sender as BetaProc;

manager.SetWaitProperty(b);

private void Form1_Load(object sender, EventArgs e)

manager.reporterD = dataGridView1;

// Populate the RadioGroup processType and processPriority with RadioButton objects

RadioButton[] processTypeRadioButtons = new RadioButton[groupBox1.Controls.Count];

for (int i = 0; i< groupBox1.Controls.Count; i++) processTypeRadioButtons[i] = (RadioButton)groupBox1.Controls[i];

RadioButton[] processPriorityRadioButtons = new RadioButton[groupBox2.Controls.Count];

for (int i = 0; i< groupBox2.Controls.Count; i++) processPriorityRadioButtons[i] = (RadioButton)groupBox2.Controls[i];

processType = new RadioGroup(processTypeRadioButtons);

processPriority = new RadioGroup(processPriorityRadioButtons);

private void button2_Click(object sender, EventArgs e)

if (processID != -1 && (string)manager.reporterD.Rows.Cells.Value != "Completed")

manager.ReturnMas().Stop();

manager.reporterD.Rows.Cells.Value = "Suspended";

private void button3_Click(object sender, EventArgs e)

int processID = (int)dataGridView1.SelectedRows.Cells.Value - 1;

if ((string)manager.reporterD.Rows.Cells.Value == "Suspended")

manager.ReturnMas().Restart();

manager.reporterD.Rows.Cells.Value = "Resumed";

private void button4_Click(object sender, EventArgs e)

int processID = (int)dataGridView1.SelectedRows.Cells.Value - 1;

if (processID != -1)

manager.ReturnMas().Delete();

manager.reporterD.Rows.Cells.Value = "Completed";

One of the main subsystems of any modern multiprogram OS that directly affects the functioning of a computer is the process and thread control subsystem. The main functions of this subsystem:

    creating processes and threads;

    providing processes and threads with the necessary resources;

    process isolation;

    scheduling and dispatching the execution of processes and threads (in general, task scheduling);

    organization of interprocess communication;

    synchronization of processes and threads;

    termination and destruction of processes and threads.

1. Five main events lead to the creation of a process:

    initialization (booting) of the operating system;

    a running process issuing a request to create a process;

    a user request to create a process, such as when logging in interactively;

    the initiation of a batch job;

    the creation by the operating system of a process needed for the operation of any of its services.

Usually, several processes are created when the OS boots. Some of them are high-priority processes that interact with users and carry out their work. The rest are background processes not associated with specific users; they perform special functions related, for example, to e-mail, Web pages, printing, file transfer over the network, periodic launching of programs (such as disk defragmentation), and so on. Background processes are called daemons.

A new process can be created at the request of the current process. Creating new processes is useful when the task to be performed can most easily be formed as a set of related but nevertheless independent cooperating processes. In interactive systems the user can launch a program by typing a command at the keyboard or by double-clicking the program icon; in both cases a new process is created and the program is launched in it. In batch processing systems on mainframes, users submit a job (possibly via remote access), and the OS creates a new process and starts the next job from the queue when the required resources become free.

2. From a technical point of view, in all these cases a new process is formed in the same way: the current process executes a system request to create a new process. The process and thread management subsystem is responsible for providing processes with the necessary resources. The OS maintains special information structures in memory in which it records which resources are allocated to each process. It can assign resources to a process for sole use or for sharing with other processes. Some resources are allocated to a process when it is created, and some dynamically, on request, at run time. Resources can be allocated to a process for its entire lifetime or only for a specific period. In performing these functions, the process control subsystem interacts with other OS subsystems responsible for resource management, such as the memory management subsystem, the I/O subsystem, and the file system.

3. So that processes cannot interfere with each other's resource allocation or damage each other's code and data, the most important task of the OS is to isolate one process from another. To do this, the operating system provides each process with a separate virtual address space, so that no process can directly access the commands and data of another process.
