Computer simulation modeling. Statistical Simulation

A model of an object is any other object whose individual properties completely or partially coincide with the properties of the original.

It should be clearly understood that an exhaustively complete model cannot exist. A model is always limited and should correspond only to the goals of modeling, reflecting exactly as many properties of the original object, and in exactly such completeness, as a particular study requires.

The source object can be either real or imaginary. Imaginary objects arise in engineering practice at the early stages of designing technical systems. Models of objects not yet embodied in real developments are called anticipatory models.

Modeling Goals

A model is created for the sake of research that is either impossible, expensive, or simply inconvenient to carry out on a real object. There are several goals for which models are created, and a number of corresponding main types of studies:

  1. Model as a means of understanding helps to identify:
  • interdependencies of variables;
  • the nature of their change over time;
  • existing patterns.

When a model is compiled, the structure of the object under study becomes more understandable, and important cause-and-effect relationships are revealed. In the process of modeling, the properties of the original object are gradually divided into essential and secondary ones from the point of view of the formulated requirements for the model. We try to find in the original object only those features that are directly related to the aspect of its functioning that interests us. In a certain sense, all scientific activity reduces to the construction and study of models of natural phenomena.

  2. Model as a means of forecasting allows one to learn to predict the behavior of an object and to control it by testing various control options on the model. Experimenting with a real object is often inconvenient at best, and sometimes dangerous or even impossible: the experiment may take too long, it may risk damaging or destroying the object, or the object may not exist at all because it is still being designed.
  3. The models built can be used to find optimal parameter values and to study special (critical) modes of operation.
  4. The model may in some cases also replace the original object during training: for example, it can serve as a simulator when preparing personnel for subsequent work in a real environment, or act as an object of study in a virtual laboratory. Models implemented as executable modules are also used as simulators of control objects in bench tests of control systems and, at the early stages of design, stand in for the future hardware-implemented control systems themselves.

Simulation

In Russian, the adjective "imitation" is often used as a synonym for the adjectives "similar" and "analogous". Among the phrases "mathematical model", "analog model", and "statistical model", the phrase "simulation model", which appeared in Russian probably as a result of an inexact translation, gradually acquired a new meaning different from its original one.

By calling a model a simulation model, we usually emphasize that, unlike other types of abstract models, it retains, in an easily recognizable form, such features of the modeled object as its structure, the connections between components, and the way information is transmitted. Simulation models are also usually expected to illustrate their behavior using the graphic images accepted in the given application area. It is no accident that typical simulation models are models of enterprises, environmental models, and social models.

Simulation and computer simulation are synonyms. At present, the synonym "computer modeling" is used for this type of modeling, thereby emphasizing that the tasks being solved cannot be handled with standard computational tools (a calculator, tables, or computer programs that replace them).

A simulation model is a special software package that makes it possible to simulate the activity of a complex object, in which:

  • the structure of the object and its links are reflected (and presented graphically);
  • parallel processes are running.

To describe the behavior, both global laws and local laws obtained on the basis of field experiments can be used.

Thus, simulation modeling involves the use of computer technology to simulate various processes or operations (i.e., their imitation) performed by real devices. The device or process being modeled is commonly referred to as a system. To study a system scientifically, we make certain assumptions about how it works. These assumptions, usually in the form of mathematical or logical relationships, constitute a model from which one can get an idea of the behavior of the corresponding system.

If the relationships that form the model are simple enough to yield exact answers to the questions of interest, mathematical methods can be used; this kind of solution is called analytical. However, most existing systems are very complex, and it is impossible to construct models for them that can be solved analytically. Such models must be studied by simulation: a computer is used to evaluate the model numerically, and the system's real characteristics are estimated from the data obtained.
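The gap between the two approaches can be seen in a tiny sketch (our illustration, not an example from the text): the mean of an exponential distribution with rate lam has the exact analytical answer 1/lam, while simulation can only estimate it by averaging many random draws.

```python
import random

def analytic_mean(lam):
    # Exact analytical solution: the mean of Exp(lam) is 1/lam.
    return 1.0 / lam

def simulated_mean(lam, n=100_000, seed=1):
    # Simulation: evaluate the model numerically by averaging random draws.
    rng = random.Random(seed)
    return sum(rng.expovariate(lam) for _ in range(n)) / n

if __name__ == "__main__":
    print(analytic_mean(2.0))   # exactly 0.5
    print(simulated_mean(2.0))  # close to 0.5, but only an estimate
```

For a model this simple the analytical route is clearly preferable; simulation earns its keep only when no such closed form is available.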

From the point of view of a specialist (an informatics economist, a mathematician-programmer, or an economist-mathematician), simulation modeling of a controlled process or controlled object is a high-level information technology that provides for two kinds of computer-aided work:

  • work on the creation or modification of a simulation model;
  • operation of the simulation model and interpretation of the results.

Simulation (computer) modeling of economic processes is usually used in two cases:

  • to manage a complex business process, when the simulation model of a managed economic object is used as a tool in the contour of an adaptive control system created on the basis of information (computer) technologies;
  • when conducting experiments with discrete-continuous models of complex economic objects to obtain and track their dynamics in emergency situations associated with risks, the full-scale modeling of which is undesirable or impossible.

Typical simulation tasks

Simulation modeling can be applied in various fields of activity. Below is a list of tasks for which modeling is especially effective:

  • design and analysis of production systems;
  • determination of requirements for equipment and protocols of communication networks;
  • determination of hardware and software requirements for various computer systems;
  • design and analysis of the operation of transport systems such as airports, highways, ports, and subways;
  • evaluation of projects for various queuing organizations, for example, order processing centers, fast-food establishments, hospitals, and post offices;
  • modernization of various business processes;
  • definition of policies in inventory management systems;
  • analysis of financial and economic systems;
  • assessment of various weapons systems and of the requirements for their logistics.

Model classification

The following were chosen as the basis for classification:

  • a functional feature characterizing the purpose for which the model is built;
  • the way the model is represented;
  • a time factor reflecting the dynamics of the model.

Function | Model class | Example
Description, explanation | Demonstration models | Educational posters
Prediction | Scientific-technical, economic | Mathematical models of processes; models of technical devices under development
Measurement | Processing of empirical data | Model ship in a test pool; aircraft model in a wind tunnel
Interpretation | | Military, economic, sports, business games
Criterial | Exemplary (reference) models | Shoe model; clothing model

According to the way of representation, models are divided into two large groups: material and abstract (intangible). Both material and abstract models contain information about the original object; in a material model this information has a material embodiment, while in an abstract model the same information is presented in abstract form (a thought, a formula, a drawing, a diagram).

Material and abstract models can reflect the same prototype and complement each other.

Models can thus be roughly divided into two groups, material and ideal, and, accordingly, one can distinguish between subject modeling and abstract modeling. The main varieties of subject modeling are physical and analog modeling.

Physical modeling (prototyping) is the kind of modeling in which a real object is associated with its enlarged or reduced copy. The copy is created on the basis of similarity theory, which makes it possible to assert that the required properties are preserved in the model.

In physical models, in addition to geometric proportions, the material or the color scheme of the original object, as well as other properties needed for a particular study, can also be preserved.

Analog modeling is based on replacing the original object with an object of a different physical nature that exhibits similar behavior.

Both physical and analog modeling rely, as their main research method, on a full-scale experiment with the model, but this experiment turns out to be in some sense more attractive than an experiment with the original object.

Ideal models are abstract images of real or imaginary objects. There are two types of ideal modeling: intuitive and sign-based.

Intuitive modeling is spoken of when the model used cannot even be described, although it exists and is relied upon to predict or explain the surrounding world. We know that living beings can explain and predict phenomena without the visible presence of a physical or abstract model. In this sense, for example, the life experience of each person can be considered his intuitive model of the surrounding world. When you are about to cross a street, you look to the right and to the left and intuitively decide (usually correctly) whether you can go. How the brain copes with this task, we simply do not yet know.

Sign-based modeling uses signs or symbols as models: diagrams, graphs, drawings, texts in various languages (including formal ones), mathematical formulas and theories. An obligatory participant in sign-based modeling is an interpreter of the sign model, most often a person, although a computer can also cope with the interpretation. Drawings, texts, and formulas have no meaning in themselves without someone who understands them and uses them in daily activities.

The most important type of sign modeling is mathematical modeling. Abstracting from the physical (economic) nature of objects, mathematics studies ideal objects. For example, using the theory of differential equations, one can study the already mentioned electrical and mechanical vibrations in the most general form, and then apply the acquired knowledge to study objects of a specific physical nature.

Types of mathematical models:

A computer model is a software implementation of a mathematical model, supplemented by various utility programs (for example, programs that draw graphic images and change them over time). A computer model has two components: software and hardware. The software component is itself an abstract sign model, just another form of abstract model, which, however, can be interpreted not only by mathematicians and programmers but also by a technical device, the computer processor.

A computer model exhibits the properties of a physical model when its abstract components, the programs, are interpreted by a physical device, the computer. The combination of a computer and a simulation program is called an "electronic equivalent of the object under study". As a physical device, a computer model can be part of test benches, simulators, and virtual laboratories.

A static model describes the unchanging parameters of an object or a one-time snapshot of information about the object. A dynamic model describes and investigates parameters that vary in time.

The simplest dynamic model can be described as a system of linear differential equations, dx/dt = Ax, in which all modeled parameters x(t) are functions of time.
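As a sketch of such a dynamic model (our illustration; the matrix and step size are arbitrary), the system dx/dt = Ax can be integrated numerically with the explicit Euler method, making every modeled parameter a function of time:

```python
def euler_integrate(A, x0, dt, steps):
    """Integrate dx/dt = A x with the explicit Euler method.

    Returns the trajectory [x(0), x(dt), ..., x(steps * dt)].
    """
    n = len(x0)
    x = list(x0)
    trajectory = [list(x)]
    for _ in range(steps):
        # Euler step: x <- x + dt * (A x), written without external libraries.
        x = [x[i] + dt * sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        trajectory.append(list(x))
    return trajectory

if __name__ == "__main__":
    # One-dimensional decay dx/dt = -x starting from x(0) = 1:
    traj = euler_integrate([[-1.0]], [1.0], dt=0.01, steps=100)
    print(traj[-1][0])  # close to exp(-1)
```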

Deterministic Models

There is no place for chance.

All events in the system occur in a strict sequence, exactly in accordance with the mathematical formulas that describe its laws of behavior. The result is therefore precisely determined, and the same result will be obtained no matter how many experiments we conduct.

Probabilistic models

Events in the system do not occur in an exact sequence but at random; however, the probability of each event is known. The result is not known in advance, and different runs of the experiment can give different results. Such models accumulate statistics over many experiments, and conclusions about the functioning of the system are drawn from these statistics.

Stochastic Models

When solving many problems of financial analysis, models are used that contain random variables whose behavior cannot be controlled by decision makers. Such models are called stochastic. Simulation makes it possible to draw conclusions about possible outcomes based on the probability distributions of the random factors (values). Stochastic simulation is often called the Monte Carlo method.
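A classic Monte Carlo sketch (our illustration, not from the text): estimating pi from the fraction of random points that land inside a quarter circle. Each run yields a slightly different answer, which is exactly the "estimate, not exact value" property of stochastic models noted above.

```python
import random

def estimate_pi(n_points, seed=42):
    # Draw points uniformly in the unit square and count the share that
    # falls inside the quarter circle x^2 + y^2 <= 1; that share tends
    # to pi/4 as n_points grows.
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_points):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_points

if __name__ == "__main__":
    print(estimate_pi(100_000))  # approximately 3.14
```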

Stages of computer simulation
(computational experiment)

It can be represented as a sequence of the following basic steps:

1. STATEMENT OF THE PROBLEM.

  • Description of the task.
  • The purpose of the simulation.
  • Formalization of the task:
    • structural analysis of the system and processes occurring in the system;
    • building a structural and functional model of the system (graphic);
    • highlighting the properties of the original object that are essential for this study.

2. DEVELOPMENT OF THE MODEL.

  • Construction of a mathematical model.
  • Choice of modeling software.
  • Design and debugging of a computer model (technological implementation of the model in the environment)

3. COMPUTER EXPERIMENT.

  • Assessment of the adequacy of the constructed computer model (satisfaction of the model with the goals of modeling).
  • Drawing up a plan of experiments.
  • Conducting experiments (studying the model).
  • Analysis of the results of the experiment.

4. ANALYSIS OF SIMULATION RESULTS.

  • Generalization of the results of experiments and conclusion about the further use of the model.

According to the nature of the formulation, all tasks can be divided into two main groups.

The first group includes tasks that require exploring how the characteristics of an object will change under some impact on it. This kind of problem statement is called "what happens if…?" For example, what happens if utility bills are doubled?

Some tasks are formulated somewhat more broadly: what happens if the characteristics of the object are changed over a given range with a certain step? Such a study helps to trace the dependence of the object's parameters on the initial data. Very often it is also required to trace the development of the process in time. This extended problem statement is called sensitivity analysis.
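The "change a parameter over a range with a step" idea can be sketched as follows (the revenue model and its coefficients are entirely hypothetical):

```python
def model(price, base_demand=1000.0, sensitivity=8.0):
    # Hypothetical toy model: demand falls linearly as price rises,
    # and revenue is price times demand.
    demand = max(0.0, base_demand - sensitivity * price)
    return price * demand

def sweep(lo, hi, step):
    # Basic sensitivity analysis: evaluate the model on a grid of inputs
    # and record the (input, output) pairs.
    results = []
    p = lo
    while p <= hi + 1e-9:
        results.append((p, model(p)))
        p += step
    return results

if __name__ == "__main__":
    for price, revenue in sweep(0.0, 125.0, 25.0):
        print(price, revenue)
```

Tabulating or plotting such a sweep shows how the output depends on the input, which is the essence of the extended "what happens if…?" study.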

The second group of tasks has the following generalized formulation: what impact should be exerted on the object so that its parameters satisfy some given condition? This problem statement is often referred to as "how do you make…?"

How, for instance, can one ensure that "both the wolves are fed and the sheep are safe"?

Most modeling tasks are, as a rule, complex. In such tasks, a model is first built for one set of initial data; in other words, the "what happens if…?" problem is solved first. The object is then studied while its parameters are varied over a certain range. Finally, based on the results of the study, the parameters are selected so that the model satisfies the properties being designed.

It follows from the above description that modeling is a cyclic process in which the same operations are repeated many times.

This cyclic character is due to two circumstances: a technological one, associated with "unfortunate" mistakes made at each of the stages of modeling considered above, and an "ideological" one, associated with refining the model or even rejecting it and moving to another model. An additional "outer" loop can appear if we want to expand the scope of the model and change the inputs it must correctly account for, or the assumptions under which it must be valid.

Summing up the results of the simulation may lead to the conclusion that the planned experiments are not enough to complete the work, and possibly to the need to refine the mathematical model again.

Planning a computer experiment

In experiment design terminology, the input variables and structural assumptions that make up the model are called factors, and the output performance measures are called responses. The decision about which parameters and structural assumptions to consider as fixed indicators, and which as experimental factors, depends on the purpose of the study, rather than on the internal form of the model.

Read more about planning a computer experiment on your own (pp. 707–724; pp. 240–246).

Practical methods of planning and conducting a computer experiment are considered in practical classes.

Limits of possibilities of classical mathematical methods in economics

Ways to study the system

Experiment with a real system or with a model of the system? If it is possible to physically change the system (and it is cost-effective to do so) and put it into operation under the new conditions, it is best to do exactly that, since in this case the question of the adequacy of the result disappears by itself. However, such an approach is often infeasible, either because it is too costly or because it would have a devastating effect on the system itself. For example, a bank looking for ways to reduce costs might propose reducing the number of tellers. If the new system with fewer tellers were simply tried in practice, it could lead to long delays in serving customers and to their abandoning the bank. Moreover, the system may not actually exist yet, while we want to explore its various configurations in order to choose the most efficient one; communication networks and strategic nuclear weapons systems are examples of such systems. It is therefore necessary to create a model representing the system and to examine it as a substitute for the real system. When using a model, the question always arises whether it really reflects the system accurately enough for decisions to be based on the results of the study.

Physical model or mathematical model? When we hear the word "model", most of us think of cockpit mock-ups set up on training grounds and used for pilot training, or of miniature supertankers moving around in a pool. These are all examples of physical models (also called iconic or figurative). They are rarely used in operations research or systems analysis, although in some cases physical models can be very effective in the study of technical systems or control systems; examples include scale tabletop models of loading and unloading systems and at least one full-scale physical model of a fast-food restaurant in a large store that involved real customers. However, the vast majority of models created are mathematical. They represent the system through logical and quantitative relationships, which are then processed and modified to determine how the system responds to change, or rather, how it would respond if it actually existed. Probably the simplest example of a mathematical model is the well-known relation S = vt, where S is the distance, v is the speed, and t is the travel time. Sometimes such a model is adequate (for example, for a space probe headed to another planet after it has reached cruising speed), but in other situations it may not correspond to reality (for example, rush-hour traffic on a congested urban freeway).

Analytical solution or simulation? To answer questions about the system that a mathematical model represents, it is necessary to establish how the model can be studied. When the model is simple enough, its relations and parameters can be worked through to obtain an exact analytical solution. Some analytical solutions, however, can be extremely complex and require huge computing resources; inverting a large non-sparse matrix is a familiar example of a situation where an analytical formula is known in principle, but obtaining a numerical result is not so easy. If an analytical solution to a mathematical model is available and its computation is efficient, it is better to study the model in this way, without resorting to simulation. However, many systems are extremely complex and all but rule out an analytical solution. In this case, the model must be studied by simulation, i.e., by repeatedly exercising the model with the desired input data to determine their impact on the output criteria used to evaluate the system's performance.

Simulation is perceived as a "method of last resort", and there is a grain of truth in this. However, in most situations, we quickly realize the need to resort to this particular tool, since the systems and models under study are quite complex and need to be represented in an accessible way.

Suppose we have a mathematical model that needs to be investigated using simulation (hereinafter referred to as the simulation model). First of all, we need to come to a conclusion about the means of its study. In this regard, simulation models should be classified according to three aspects.

Static or dynamic? A static simulation model represents a system at a certain point in time, or a system in which time simply plays no role; Monte Carlo models are examples of static simulation models. A dynamic simulation model represents a system that changes over time, such as a conveyor system in a factory. Having built a mathematical model, one must decide how it can be used to obtain data about the system it represents.

Deterministic or stochastic? If a simulation model contains no probabilistic (random) components, it is called deterministic. In a deterministic model the result is fully determined once all input quantities and dependencies are given, even if a large amount of computer time is needed to obtain it. Many systems, however, must be modeled with random input components, which yields a stochastic simulation model; most queuing and inventory management systems are modeled this way. Stochastic simulation models produce results that are themselves random and can therefore only be treated as estimates of the true characteristics of the model. This is one of the main drawbacks of simulation.

Continuous or discrete? Generally speaking, we define discrete and continuous models in a similar way to the previously described discrete and continuous systems. It should be noted that a discrete model is not always used to model a discrete system, and vice versa. Whether it is necessary to use a discrete or continuous model for a particular system depends on the objectives of the study. Thus, a traffic flow model on a highway will be discrete if you need to take into account the characteristics and movement of individual cars. However, if the vehicles can be considered collectively, the traffic flow can be described using differential equations in a continuous model.

The simulation models we consider next are discrete, dynamic, and stochastic; in what follows we will refer to them as discrete-event simulation models. Since deterministic models are a special case of stochastic models, restricting ourselves to such models entails no loss of generality.
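A minimal discrete-event, dynamic, stochastic model can be sketched as a single-server queue (our illustration; the rates and counts are arbitrary). Because every run is random, the mean waiting time is averaged over several independent replications, each of which gives only an estimate of the true value:

```python
import random

def mean_wait(n_customers, arrival_rate, service_rate, seed):
    # Single-server queue with exponential interarrival and service times.
    # Events (arrivals and service completions) advance simulated time.
    rng = random.Random(seed)
    t_arrival = 0.0       # time of the current customer's arrival
    server_free_at = 0.0  # time at which the server becomes idle
    total_wait = 0.0
    for _ in range(n_customers):
        t_arrival += rng.expovariate(arrival_rate)
        start = max(t_arrival, server_free_at)  # wait if the server is busy
        total_wait += start - t_arrival
        server_free_at = start + rng.expovariate(service_rate)
    return total_wait / n_customers

if __name__ == "__main__":
    # Five independent replications, then the average of their estimates.
    waits = [mean_wait(10_000, 0.5, 1.0, seed) for seed in range(5)]
    print(sum(waits) / len(waits))  # queueing theory predicts about 1.0 here
```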

Existing approaches to visual modeling of complex dynamic systems.
Typical simulation systems

Simulation modeling on digital computers is one of the most powerful means of investigating complex dynamic systems in particular. Like any computer simulation, it makes it possible to carry out computational experiments with systems that are still being designed and to study systems with which full-scale experiments are inappropriate for reasons of safety or cost. At the same time, owing to its closeness in form to physical modeling, this research method is accessible to a wide range of users.

At present, when the computer industry offers a variety of modeling tools, any qualified engineer, technologist, or manager should be able not only to model complex objects but to model them using modern technologies implemented in the form of graphical environments or visual modeling packages.

“The complexity of the systems being studied and designed leads to the need to create a special, qualitatively new research technique that uses the apparatus of imitation: reproduction on a computer, by specially organized systems of mathematical models, of the functioning of the complex being designed or studied” (N. N. Moiseev, Mathematical Problems of System Analysis. Moscow: Nauka, 1981, p. 182).

At present there is a great variety of visual modeling tools. We agree not to consider here packages oriented toward narrow application areas (electronics, electromechanics, etc.), since, as noted above, the elements of complex systems, as a rule, belong to different application areas. Among the remaining universal packages (those oriented toward a certain mathematical model), we will not dwell on packages aimed at mathematical models other than simple dynamical systems (partial differential equations, statistical models), nor on purely discrete or purely continuous ones. Thus, the subject of consideration will be universal packages that allow modeling of structurally complex hybrid systems.

They can be roughly divided into three groups:

  • "block modeling" packages;
  • "physical modeling" packages;
  • packages focused on the scheme of a hybrid machine.

This division is conditional, primarily because all these packages have much in common: they allow you to build multi-level hierarchical functional diagrams, support object-oriented modeling (OOM) technology to some extent, and provide similar visualization and animation capabilities. The differences lie in which aspect of a complex dynamical system is considered the most important.

"Block modeling" packages are oriented toward a graphical language of hierarchical block diagrams. Elementary blocks are either predefined or can be constructed using a special lower-level auxiliary language. A new block can be assembled from existing blocks using directed links and parametric tuning. The predefined elementary blocks include purely continuous, purely discrete, and hybrid blocks.

The advantages of this approach include, first of all, the extreme simplicity of creating not very complex models, even by a not very trained user. Another advantage is the efficiency of the implementation of elementary blocks and the simplicity of constructing an equivalent system. At the same time, when creating complex models, one has to build rather cumbersome multilevel block diagrams that do not reflect the natural structure of the system being modeled. In other words, this approach works well when there are suitable building blocks.

The most famous representatives of the "block modeling" packages are:

  • the SIMULINK subsystem of the MATLAB package (MathWorks, Inc.; http://www.mathworks.com);
  • EASY5 (Boeing);
  • the SystemBuild subsystem of the MATRIXX package (Integrated Systems, Inc.);
  • VisSim (Visual Solutions; http://www.vissim.com).

"Physical modeling" packages allow the use of undirected and streaming relationships. The user can define new block classes. The continuous component of an elementary block's behavior is given by a system of algebraic-differential equations and formulas. The discrete component is specified by describing discrete events (triggered by a logical condition or occurring periodically), upon whose occurrence new values can be instantaneously assigned to variables. Discrete events can propagate through special links. Changing the structure of the equations is possible only indirectly, through the coefficients on the right-hand sides (this is due to the need for symbolic transformations when passing to an equivalent system).

The approach is very convenient and natural for describing typical blocks of physical systems. The disadvantages are the need for symbolic transformations, which sharply narrows the possibilities of describing hybrid behavior, as well as the need to numerically solve a large number of algebraic equations, which greatly complicates the task of automatically obtaining a reliable solution.

Physical modeling packages include:

  • 20-sim (Controllab Products B.V.; http://www.rt.el.utwente.nl/20sim/);
  • Dymola (Dynasim; http://www.dynasim.se);
  • Omola and OmSim (Lund University; http://www.control.lth.se/~case/omsim.html).

As a generalization of the experience of developing systems in this direction, an international group of scientists developed the Modelica language (The Modelica Design Group; http://www.dynasim.se/modelica), offered as a standard for exchanging model descriptions between different packages.

Packages based on the scheme of a hybrid automaton (hybrid machine) make it possible to describe hybrid systems with complex switching logic very clearly and naturally. The need to determine an equivalent system at each switching forces the use of directed connections only. The user can define new block classes, and the continuous component of an elementary block's behavior is given by a system of algebraic-differential equations and formulas. A further disadvantage is the redundancy of the description when modeling purely continuous systems.

This group includes Shift (California PATH; http://www.path.berkeley.edu/shift) as well as the domestic package Model Vision Studio. The Shift package is more oriented toward describing complex dynamic structures, while MVS is more oriented toward describing complex behaviors.

Note that there is no insurmountable gap between the second and third directions; in the end, the impossibility of combining them is due only to today's computing capabilities. At the same time, the general ideology of model building is almost the same. In principle, a combined approach is possible: blocks whose elements exhibit purely continuous behavior are singled out within the model structure and transformed once into an equivalent elementary block, and the cumulative behavior of this equivalent block is then used in the analysis of the hybrid system.

Contents

Introduction
1 Simulation
2 Guidelines for the implementation of practical work
3 Tasks for practical work
List of used literature
Appendix A


Introduction

Simulation modeling is one of the most effective methods of analysis for the research and development of complex processes and systems. It allows the user to experiment with systems in cases where doing so on a real object is impossible or impractical. Simulation modeling is based on mathematics, probability theory, and statistics. At the same time, simulation and experimentation remain, in many cases, intuitive processes. This is because such steps as selecting the essential factors for building a model, introducing simplifying assumptions, and making correct decisions based on models of limited accuracy rely heavily on the intuition of the researcher and the practical experience of the manager concerned.

This manual contains information about modern approaches to evaluating the effectiveness of a technological or other process. It considers some methods of documenting and identifying information at the stage of searching for and establishing facts, so as to ensure its most effective use. For this purpose, a group of methods that can be called schematic models is applied; this name refers to methods of analysis that include a graphical representation of the system. They are intended to help the manager (engineer) better understand and document the process or system under study. Although there are at present many methods for the schematic representation of technological processes, we confine ourselves to flow charts (technological maps), process diagrams, and multifunctional operation diagrams.

Simulation

Management in the modern world becomes an increasingly hard task as the organizational structure of our society grows more complex. This complexity stems from the nature of the relationships between the various elements of our organizations and the physical systems with which they interact. Although this complexity has existed for a long time, we are only now beginning to understand its significance. We now recognize that a change in one of the characteristics of a system can easily lead to changes, or create a need for changes, in other parts of the system; the methodology of systems analysis was developed precisely to help managers and engineers study and comprehend the consequences of such changes. In particular, with the advent of electronic computers, simulation modeling has become one of the most important and useful tools for analyzing the structure of complex processes and systems. To imitate means "to imagine, to grasp the essence of a phenomenon without resorting to experiments on a real object."

Simulation is the process of constructing a model of a real system and conducting experiments on this model in order either to understand the behavior of the system or to evaluate (within the limits imposed by some criterion or set of criteria) various strategies that ensure the functioning of this system. Thus, the process of simulation modeling includes both the construction of the model and the analytical application of the model to the study of a certain problem. By a model of a real system we mean the representation of a group of objects or ideas in some form different from their real embodiment; here the term "real" is used in the sense of "existing or capable of assuming one of the forms of existence". Therefore, systems that are still on paper or in the planning stage can be modeled in the same way as existing systems.

By definition, the term "simulation" can also cover stochastic models and Monte Carlo experiments. In other words, model inputs and (or) functional relationships between its various components may or may not contain an element of chance, subject to probabilistic laws. Simulation modeling is therefore an experimental and applied methodology aimed at:

− describe the behavior of systems;

− build theories and hypotheses that can explain the observed behavior;

− use these theories to predict the future behavior of the system, i.e. those impacts that may be caused by changes in the system or changes in the way it functions.

Unlike most technical methods, which can be classified according to the scientific disciplines in which they are rooted (for example, physics or chemistry), simulation modeling is applicable in any branch of science. It is applied in commercial activity, economics, marketing, education, politics, social science, behavioral science, international relations, transportation, personnel policy, law enforcement, urban and global systems research, and many other areas.

Consider a simple example that conveys the essence of the idea of simulation: a line of customers at the counter of a small store (a so-called single-line queuing system). Assume that the time intervals between successive arrivals of customers are distributed uniformly in the range from 1 to 10 minutes (for simplicity, we round time to the nearest whole minute). Suppose further that the time required to serve each customer is distributed uniformly over the interval from 1 to 6 minutes. We are interested in the average time a customer spends in the system (including both waiting and service) and in the percentage of time the seller is idle waiting for customers.

To model this system, we must set up an artificial experiment that reproduces the basic conditions of the situation. To do this, we need a way to generate an artificial sequence of customer arrivals and the service times required for each of them. One way would be to borrow ten chips and one die from a poker-playing friend. We could number the chips 1 through 10, put them in a hat and shake it to mix them. Drawing a chip from the hat and reading its number would then represent the time interval between the arrival of the previous and the next customer. Throwing the die and reading the number of points on its upper face would represent the service time of each customer. Repeating these operations in this order (returning each chip and shaking the hat before each draw), we would obtain a time series of intervals between successive customer arrivals and the corresponding service times. Our task is then reduced to simple registration of the results of the experiment. Table 1.1 shows the kind of results that might be obtained in an analysis of the arrival of 20 customers.
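The hat-and-die experiment described above is easy to automate. The sketch below (Python; the sample size, seed and bookkeeping are our own choices, not taken from the text) draws interarrival times uniformly from 1 to 10 minutes and service times uniformly from 1 to 6 minutes, then accumulates the two statistics the example asks about:

```python
import random

def simulate_queue(n_customers, seed=42):
    """Single-server queue: interarrival ~ U{1..10} min, service ~ U{1..6} min."""
    rng = random.Random(seed)
    clock = 0.0           # model time at which the current customer arrives
    server_free_at = 0.0  # moment the seller finishes the previous customer
    total_time_in_system = 0.0
    total_idle = 0.0
    for _ in range(n_customers):
        clock += rng.randint(1, 10)          # "drawing a chip from the hat"
        service = rng.randint(1, 6)          # "throwing the die"
        start = max(clock, server_free_at)   # wait if the seller is still busy
        total_idle += max(0.0, clock - server_free_at)
        server_free_at = start + service
        total_time_in_system += server_free_at - clock
    return (total_time_in_system / n_customers,   # average time at the counter
            total_idle / server_free_at)          # fraction of time seller idle

avg_time, idle_frac = simulate_queue(20)
print(f"average time in system: {avg_time:.2f} min, seller idle: {idle_frac:.0%}")
```

Because the generator is seeded, each run is reproducible; changing the seed corresponds to shaking the hat differently, and increasing `n_customers` addresses the sample-size concern raised below.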

Table 1.1 - Results of the experiment when analyzing the arrival of 20 buyers (clock columns give model time in hours:minutes; the minute columns follow from them by subtraction)

Buyer  Time after previous  Service    Arrival  Service  End of   Time at       Seller
       arrival, min         time, min  time     start    service  counter, min  downtime, min
  1            -                1       0:00     0:00     0:01         1             0
  2            3                4       0:03     0:03     0:07         4             2
  3            7                4       0:10     0:10     0:14         4             3
  4            3                2       0:13     0:14     0:16         3             0
  5            9                1       0:22     0:22     0:23         1             6
  6           10                5       0:32     0:32     0:37         5             9
  7            6                4       0:38     0:38     0:42         4             1
  8            8                6       0:46     0:46     0:52         6             4
  9            8                1       0:54     0:54     0:55         1             2
 10            8                3       1:02     1:02     1:05         3             7
 11            7                5       1:09     1:09     1:14         5             4
 12            3                5       1:12     1:14     1:19         7             0
 13            8                3       1:20     1:20     1:23         3             1
 14            4                6       1:24     1:24     1:30         6             1
 15            4                1       1:28     1:30     1:31         3             0
 16            7                1       1:35     1:35     1:36         1             4
 17            1                6       1:36     1:36     1:42         6             0
 18            6                1       1:42     1:42     1:43         1             0
 19            7                2       1:49     1:49     1:51         2             6
 20            6                2       1:55     1:55     1:57         2             4
Total:                                                                68            54

Obviously, in order to obtain statistically significant results we would have to take a much larger sample; in addition, we have not taken into account some important circumstances, such as the initial conditions. An important point is that we used two devices for generating random numbers (the numbered poker chips and the die); this was done in order to carry out an artificial (simulation) experiment with the system that reveals certain features of its behavior. Now let us move on to the next concept, the model. A model is a representation of an object, system or concept (idea) in some form different from the form of its real existence. A model usually serves as a tool that helps us explain, understand or improve a system. A model of an object can either be an exact copy of that object (albeit made of a different material and on a different scale) or display some of the object's characteristic properties in abstract form. Since simulation is only one kind of modeling, let us first consider modeling in its general form.

A model is usually regarded as a tool for prediction and comparison that allows one to predict logically the consequences of alternative actions and to indicate with reasonable confidence which of them is preferable. Modeling covers a wide range of acts of human communication in evolutionary terms, from rock art and the construction of idols to the compilation of systems of complex mathematical equations describing the flight of a rocket in outer space. In essence, the progress and history of science and technology have found their most accurate expression in the development of man's ability to create models of natural phenomena, concepts and objects.

Almost all researchers argue that one of the main elements necessary for the effective solution of complex problems is the construction and appropriate use of the model. Such a model can take a variety of forms, but one of the most useful and certainly the most widely used forms is the mathematical one, which expresses, through a system of equations, the essential features of the real systems or phenomena being studied. Unfortunately, it is not always possible to create a mathematical model in the narrow sense of the word. When studying most industrial systems, we can define goals, specify limitations, and ensure that our design obeys technical and/or economic laws. At the same time, significant connections in the system can be revealed and presented in one or another mathematical form. In contrast, addressing air pollution protection, crime prevention, public health, and urban growth is associated with unclear and conflicting goals, as well as the choice of alternatives dictated by political and social factors. Therefore, the definition of a model should include both quantitative and qualitative characteristics of the model.

There are five most common functions of applying models, such as:

- means of understanding reality,

− means of communication,

− means of education and training,

− forecasting tool,

− means of setting up experiments.

The usefulness of the model as a means of understanding real relationships and patterns is obvious. Models can help us organize our fuzzy or contradictory notions. For example, representing the work of designing complex systems as a network model encourages us to think about what steps must be taken and in what sequence. Such a model helps us identify interdependencies, necessary activities, time relationships, required resources, etc. The very attempt to present our verbal formulations and thoughts in some other form often reveals contradictions and ambiguities. A well-built model forces us to organize our ideas and to evaluate and test their validity.

As a means of communication, a well-designed model is second to none. This function of models is perfectly captured by the proverb: "Better to see once than to hear a hundred times." All word-based languages are to some extent inaccurate when it comes to complex concepts and descriptions. Well-built models can help us eliminate these inaccuracies by providing more efficient, more successful ways of communicating. The advantage of the model over a verbal description lies in the conciseness and accuracy with which it represents a given situation. The model makes the general structure of the object under study more understandable and reveals important cause-and-effect relationships.

Models have been and continue to be widely used as a means of vocational training and learning. Psychologists have long recognized the importance of teaching professional skills in conditions free of strong pressure: a person practicing something should not be under stress, and a real critical situation is the wrong time and place to teach a person new professional techniques. Models are therefore often used as an excellent means of teaching people who must be able to cope with all sorts of contingencies before a real critical situation arises. Most readers are already familiar with such uses of models as full-scale mock-ups of spaceships used for training astronauts, simulators for training car drivers, and business games for training the administrative personnel of companies.

One of the most important applications of models, in both the practical and the historical sense, is the prediction of the behavior of the objects being modeled. It is not economically feasible to build a supersonic jet aircraft merely to determine its flight characteristics, but those characteristics can be predicted by simulation.

Finally, the use of models also makes it possible to conduct controlled experiments in situations where experimenting on real objects would be practically impossible or economically unjustified. Direct experimentation with a system usually consists in varying some of its parameters while keeping all the others unchanged, and observing the results. For most systems the researcher has to deal with, this is either practically unattainable, or too expensive, or both. When it is too expensive and/or impossible to experiment on a real system, a model can often be built on which the necessary experiments can be carried out with relative ease and at low cost. When experimenting with a model of a complex system, we can often learn more about its internal interacting factors than we could by manipulating the real system; this is possible because the structural elements of the model are measurable and because we can control its behavior, easily change its parameters, and so on.

Thus, a model can serve one of two main purposes: a descriptive purpose, when the model serves to explain and/or better understand an object, or a prescriptive purpose, when the model allows one to predict and/or reproduce the characteristics of an object that determine its behavior. A model of the prescriptive type is usually also descriptive, but not vice versa: a prescriptive model is almost always descriptive of the object being modeled, but a descriptive model is not always useful for planning and design purposes. This is probably one of the reasons why economic models (which tend to be descriptive) have had little impact on the management of economic systems and have seen little use as an auxiliary tool of top-level management, while operations-research models have had a significant impact in these areas.

In engineering, models serve as aids in the development of new or improved systems, while in the social sciences, models explain existing systems. A model suitable for the purposes of developing a system must also explain it, but it is obvious that models created solely for explanation often do not even correspond to their intended purpose.

Models in general, and simulation models in particular, can be classified in various ways. Let us indicate some typical groups of models that can form the basis of a classification system:

− static (for example, a cross section of an object) and dynamic (time series);

− deterministic and stochastic;

− discrete and continuous;

− natural, analog, symbolic.

Simulation models can conveniently be viewed as a continuum ranging from exact models or mock-ups of real objects to completely abstract mathematical models (Figure 1.1). Models at the beginning of this spectrum are often called physical or full-scale models because they superficially resemble the system under study. Static physical models, such as models of architectural objects or layouts of factory buildings, help us visualize spatial relationships. An example of a dynamic physical model is a pilot-plant model (built on a reduced scale) designed to study a new chemical process before moving to full-capacity production, or a scaled-down aircraft model tested in a wind tunnel to evaluate dynamic stability. The distinctive feature of a physical model is that in some sense it "looks like" the object being modeled. Physical models can take the form of full-scale mock-ups (for example, simulators), be made on a reduced scale (for example, a model of the solar system) or on an enlarged scale (for example, a model of an atom). They can also be two-dimensional or three-dimensional, and they can be used for demonstration purposes (like a globe) or for performing indirect experiments. The graded templates used in studying plant layouts are an example of a scaled-down two-dimensional physical model used for experimentation.

Figure 1.1 - The spectrum of models, ranging from accuracy (exact physical models) to abstraction (mathematical models)

Analog models are models in which a property of a real object is represented by some other property of an object similar in behavior. The problem is sometimes solved by replacing one property with another, after which the results obtained must be interpreted in relation to the original properties of the object. For example, a voltage change in a network of a certain configuration can represent the flow of goods in a system and is an excellent example of an analog simulation model. Another example is a slide rule, in which the quantitative characteristics of some object are represented by scale segments on a logarithmic scale.

Figure 1.2 - Curve of production costs (costs as a function of the volume of production)

The graph is another type of analog model: here distance represents characteristics of the object such as time, service life, number of units, etc. A graph can also show the relationship between different quantities and can predict how some quantities will change when others change. For example, the graph in Figure 1.2 shows how the cost of manufacturing a particular product can depend on the volume of production. The graph shows exactly how costs are related to output, so we can predict what will happen to costs if we increase or decrease output. For some relatively simple cases, the graph can itself serve as a means of solving the problem: from the graph of Figure 1.2 one can obtain the curve of the marginal cost of the product.

If the task is to determine the optimal volume of production at a given price (i.e., the volume of production that provides the maximum net profit), we solve it by plotting the price curve for one product on the same graph; the optimal volume lies at the point where the price curve and the marginal cost curve intersect. Graphical solutions are also possible for certain linear programming problems, as well as for game-theoretic problems. Sometimes graphs are used in conjunction with mathematical models, one of these models providing input data for the other.
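The graphical argument above can also be carried out numerically. Assuming, purely for illustration, a quadratic cost curve C(q) = 100 + 2q + 0.05q² (all figures invented, not from the text), marginal cost is MC(q) = 2 + 0.1q, and the optimal output lies where price equals marginal cost:

```python
# Hypothetical cost curve C(q) = 100 + 2q + 0.05q^2; its marginal cost
# is the derivative MC(q) = 2 + 0.1q. Coefficients are illustrative only.
def marginal_cost(q, a=2.0, b=0.05):
    return a + 2 * b * q

def optimal_output(price, a=2.0, b=0.05):
    """Output at which price equals marginal cost, i.e. net profit is maximal."""
    return (price - a) / (2 * b)

q_star = optimal_output(price=8.0)
print(q_star)  # 60.0: at q = 60 the marginal cost 2 + 0.1*60 equals the price 8
```

The same intersection that the graph shows visually is here found in closed form; for a cost curve with no convenient formula, one would instead scan the tabulated marginal-cost values for the crossing point.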

Models other than graphs that are schemes of various sorts are also useful analog models; a common example is the organization chart. The "boxes" connected by lines in such a chart reflect the subordination between the members of the organization at the time the chart was drawn up, as well as the channels of information exchange between them. Systems studies also make extensive use of process flow diagrams, in which various events such as operations, delays, inspections, stocks, etc., are represented by lines and symbols denoting movement.

As we move along the spectrum of models, we reach those in which people and machine components interact. Such modeling is often called gaming (management games, planning games). Because managerial decision-making processes are difficult to model, it is often considered expedient not to attempt it: in so-called management (business) games, a person interacts with information coming from the output of a computer (which models all the other properties of the system) and makes decisions based on the information received. The person's decisions are then fed back into the machine as input used by the system. Continuing further along the spectrum, we arrive at fully machine modeling, which is what the term "simulation" usually denotes. The computer can be a component of all simulation models in this part of the spectrum, although this is not necessary.

Symbolic or mathematical models are those that use symbols rather than physical devices to represent a process or system. In this case, systems of differential equations can be considered as a common example of the representation of systems. Since the latter are the most abstract and, therefore, the most general models, mathematical models are widely used in systems research. The symbolic model is always an abstract idealization of the problem, and if one wants this model to solve the problem, some simplifying assumptions are needed. Therefore, special care must be taken to ensure that the model serves as a valid representation of the given problem.

When modeling a complex system, the researcher is usually forced to use a combination of several models from among the varieties mentioned above. Any system or subsystem can be represented in a variety of ways, which vary greatly in complexity and detail. In most cases, systems research results in several different models of the same system. But usually, as the researcher analyzes more deeply and understands the problem better, simple models are replaced by more and more complex ones.

All simulation models are so-called black-box models: they produce the system's output signal when an input signal arrives at its interacting subsystems. Therefore, to obtain the necessary information or results, one must "run" simulation models rather than "solve" them. Simulation models are not able to form their own solution in the way analytical models do; they can only serve as a means of analyzing the behavior of the system under conditions determined by the experimenter. Thus, simulation modeling is not a theory but a methodology for solving problems. Moreover, simulation is only one of several essential problem-solving methods available to the systems analyst. Since a tool or method should be adapted to the solution of a problem, and not vice versa, a natural question arises: in what cases is simulation useful?

Based on the above, the researcher should consider the feasibility of using simulation in the presence of any of the following conditions:

1. there is no complete mathematical formulation of this problem, or analytical methods for solving the formulated mathematical model have not yet been developed. Many queuing models fall into this category;

2. analytical methods are available, but mathematical procedures are so complex and time-consuming that simulation modeling provides an easier way to solve the problem;

3. analytical solutions exist, but their implementation is impossible due to insufficient mathematical training of the existing staff. In this case, the costs of designing, testing and working on a simulation model should be compared with the costs associated with inviting specialists from outside;

4. in addition to assessing certain parameters, it is desirable to monitor the progress of the process on a simulation model for a certain period;

5. simulation modeling may be the only possibility due to the difficulties of setting up experiments and observing phenomena in real conditions;

6. for the long-term operation of systems or processes, a compression of the timeline may be necessary. Simulation modeling makes it possible to fully control the time of the process being studied, since the phenomenon can be slowed down or accelerated at will.

An added advantage of simulation modeling is the breadth of its possible application in education and training. The development and use of a simulation model allows the experimenter to see and "play out" real processes and situations on the model. This, in turn, should greatly help him understand and get a feel for the problem, which stimulates the search for innovations.

The use of simulation is attractive to both managers and systems researchers due to its simplicity. However, developing a good simulation model is often expensive and time consuming. For example, it may take 3 to 11 years to develop a good internal planning model. In addition, simulation models are not accurate and it is almost impossible to measure the degree of this inaccuracy. Nevertheless, the advantages of simulation modeling have been indicated above.

Before starting the development of a model, it is necessary to understand what the structural elements are from which it is built. Although the mathematical or physical structure of the model can be very complex, the basics of its construction are quite simple. In the most general form, the structure of the model can be represented mathematically in the form (1.1):

E = F(x_i, y_i),    (1.1)

where E is the result of the system's operation;

x_i are the variables and parameters that we can control;

y_i are the variables and parameters that we cannot control;

F is the functional relationship between x_i and y_i that determines the value of E.
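As a minimal illustration of expression (1.1), the sketch below (in Python, with an invented revenue model; all names and numbers are ours, not the text's) treats the price we set as a controllable variable x, a randomly drawn demand level as an uncontrollable variable y, and revenue as the result E:

```python
import random

def result(x_price, y_demand):
    """E = F(x, y): x is controllable (our price), y is not (market demand)."""
    units_sold = max(0.0, y_demand - 10.0 * x_price)   # invented demand law
    return units_sold * x_price                        # revenue as the result E

rng = random.Random(0)
y = rng.uniform(80, 120)        # the uncontrollable variable arrives from outside
print(result(x_price=4.0, y_demand=y))
```

Varying `x_price` while sampling `y_demand` is exactly the kind of controlled experiment on the model that the surrounding text describes: we manipulate what we can control and observe how E responds under what we cannot.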

This simplification is useful in that it shows the dependence of the functioning of the system on both controlled by us and uncontrolled variables. Almost every model is some combination of such components as:

− components,

− variables,

− parameters,

− functional dependencies,

− restrictions,

− objective functions.

Components are understood as the constituent parts which, when properly combined, form the system. Sometimes the elements of a system or its subsystems are also considered components.

The model of a city may consist of such components as an education system, a healthcare system, a transportation system, and so on. In an economic model, individual firms, individual consumers, and so on can be components. A system is defined as a group or set of objects that are brought together by some form of regular interaction or interdependence to perform a given function. Components are the objects that form the system under study.

Parameters are quantities that the operator working with the model can choose arbitrarily, in contrast to variables, which can take only the values determined by the form of the given function. Looking at it from another angle, we can say that parameters, once set, are constants that cannot be changed. For example, in an equation such as y = 3x, the number 3 is a parameter, and x and y are variables. With equal success one could set y = 16x or y = 30x. Statistical analysis often seeks to determine such unknown but fixed parameters for a whole group of data. If we consider a certain group of data, or a statistical population, then the quantities that determine the central tendency of that population, such as the mean, median or mode, are parameters of the population, just as the measures of variability, such as the range, variance and standard deviation, are. Thus, for the Poisson distribution, where the probability of x is given by the function P(x) = λ^x e^(−λ) / x!, λ is a distribution parameter, x is a variable, and e is a constant.
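The Poisson probabilities can be tabulated directly from this definition; the short sketch below fixes the parameter at λ = 3 (an arbitrary choice of ours) and lets the variable x run:

```python
import math

def poisson_pmf(x, lam):
    """P(X = x) = lam**x * exp(-lam) / x!  (lam is the parameter, x the variable)."""
    return lam ** x * math.exp(-lam) / math.factorial(x)

# probabilities for the first few values of x at the fixed parameter lam = 3
for x in range(5):
    print(x, round(poisson_pmf(x, 3.0), 4))
```

Changing `lam` changes the whole distribution at once, while `x` merely selects a point within it, which is exactly the parameter/variable distinction drawn above.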

The system model distinguishes between two types of variables: exogenous and endogenous. Exogenous variables are also called input variables; this means that they are generated outside the system or result from external causes. Endogenous variables are variables that arise within the system or result from internal causes. We also call endogenous variables state variables (when they characterize the state or conditions taking place in the system) or output variables (when they refer to the outputs of the system). Statisticians sometimes call exogenous variables independent and endogenous variables dependent.

Functional dependencies describe the behavior of variables and parameters within a component or express relationships between components of the system. These relationships, or operational characteristics, are either deterministic or stochastic in nature. Deterministic relationships are identities or definitions that establish a relationship between certain variables or parameters in cases where the output of the process is uniquely determined by the information given at the input. Stochastic relationships, in contrast, are dependencies that, for given input information, yield an uncertain result at the output. Both types of relationships are usually expressed as mathematical equations linking the endogenous (state) variables with the exogenous variables. Typically, such relationships can only be built on the basis of hypotheses or derived by statistical or mathematical analysis.
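The distinction can be made concrete with a toy pair of relationships (the noise model is our invention, used only for illustration): the deterministic y = 3x from the earlier discussion of parameters always returns the same output, while a stochastic variant adds a random disturbance, so the same input yields an uncertain result:

```python
import random

def deterministic(x):
    return 3 * x                       # same input -> always the same output

def stochastic(x, rng):
    return 3 * x + rng.gauss(0, 1)     # same input -> an uncertain output

rng = random.Random(1)
print(deterministic(5))                # always 15
print(stochastic(5, rng))              # close to 15, but varies draw to draw
```

A simulation model built from stochastic relationships must therefore be run many times and its outputs treated statistically, which is why the text insists on "running" rather than "solving" such models.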

Constraints are set limits on the values of variables or limiting conditions on the distribution and expenditure of certain resources (energy, time reserves, etc.). They can be introduced either by the developer (artificial constraints) or by the system itself owing to its inherent properties (natural constraints). Examples of artificial constraints are given maximum and minimum employment levels for workers, or a set maximum amount of money allocated for capital investment. Most system specifications are sets of artificial constraints. Natural constraints are due to the very nature of the system: one cannot sell more products than the system can produce, and one cannot design a system that violates the laws of nature. Thus, constraints of one type are imposed by the immutable laws of nature, while constraints of the other type, being the work of human hands, can be changed. It is very important for the researcher to keep this in mind, because in the course of his research he must constantly evaluate the constraints introduced by man, in order to weaken or strengthen them as necessary.

The objective function, or criterion function, is an accurate statement of the goals or objectives of the system and the rules needed to evaluate their attainment. Two types of goals are usually distinguished: preservation and acquisition. Preservation goals are related to conserving or maintaining resources (time, energy, creative capacity, etc.) or conditions (comfort, safety, employment level, etc.). Acquisition goals are associated with acquiring new resources (profit, personnel, customers, etc.) or achieving certain states to which the organization or leader aspires (capturing a share of the market, achieving a state of deterrence, etc.). The expression for the objective function must be an unambiguous definition of the goals and objectives against which decisions are to be measured. Webster's dictionary defines a criterion as "a standard of judgment, a rule, or a kind of test by which a correct judgment is made about something." A clear and unambiguous definition of the criterion is very important for two reasons. First, it has a huge influence on the process of creating and manipulating the model. Second, a wrong definition of the criterion usually leads to wrong conclusions. The criterion (objective) function is usually an organic, integral part of the model, and the whole process of manipulating the model is aimed at optimizing or satisfying the given criterion.

Even small areas of the real world are too complex for a person to understand and describe completely. Almost all problem situations are extremely complex and include an almost infinite number of elements, variables, parameters, relationships, constraints, etc. In trying to build a model, one could include an infinite number of facts and spend a great deal of time collecting the smallest details about any situation and establishing links between them. Consider, for example, the simple act of taking a sheet of paper and writing a letter on it. One could determine the exact chemical composition of the paper, the pencil lead and the eraser; the influence of atmospheric conditions on the moisture of the paper, and the influence of that moisture on the friction force acting on the tip of the pencil moving over the paper; one could investigate the statistical distribution of letters within the phrases of the text, and so on. But if the only aspect of the situation that interests us is the fact that the letter was sent, none of these details is relevant. Therefore, we must discard most of the real characteristics of the event under study and abstract from the real situation only those features that recreate an idealized version of the real event. All models are simplified representations of the real world, or abstractions; if done correctly, these idealizations give us a useful approximation of the real situation, or at least of certain features of it.

The similarity of a model to the object it represents is called the degree of isomorphism. In order to be isomorphic (i.e., identical or similar in shape), a model must satisfy two conditions.

First, there must be a one-to-one correspondence between the elements of the model and the elements of the represented object. Second, the exact relationships or interactions between the elements must be preserved. The degree of isomorphism of a model is relative, and most models are homomorphic rather than isomorphic. Homomorphism means similarity in form combined with a difference in underlying structure: there is only a superficial correspondence between groups of elements of the model and of the object. Homomorphic models are the result of processes of simplification and abstraction.

To develop an idealized homomorphic model, we usually break the system into a number of smaller parts. This is done in order to interpret them properly, i.e., to carry out the required analysis of the problem. This mode of operation depends on the presence of parts or elements that, to a first approximation, are independent of one another or interact with one another in a relatively simple way. Thus, we can begin to analyze the operation of a car by checking in turn the engine, gearbox, drive, suspension system, and so on, even though these components are not completely independent.

Closely related to this kind of model-building analysis is the process of simplifying the real system. Simplification is a notion readily grasped by most people: it means neglecting irrelevant details or assuming simpler relationships. For example, we often assume a linear relationship between two variables, although we may suspect or even know for certain that the true relationship is nonlinear. We assume that, at least within a limited range of the variables, such an approximation will be satisfactory. An electrical engineer works with circuit models on the assumption that resistors, capacitors, and so on do not change their parameters; this is a simplification, because we know that the electrical characteristics of these components change with temperature, humidity, age, etc. A mechanical engineer works with models in which gases are considered ideal, processes adiabatic, and conductivity uniform. In most practical cases such approximations or simplifications are good enough to give useful results.

A scientist studying problems of management in order to build useful models also resorts to simplification. He assumes that his variables are either deterministic (an extremely simplified interpretation of reality) or obey the laws of random events described by known probability distributions, such as the normal, Poisson, or exponential. He also often assumes that the relationships between variables are linear, knowing that such an assumption is not entirely valid. This is often necessary and justified when models that can be described mathematically are required.
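As a minimal sketch of this practice, the Python fragment below (all parameter values are invented for illustration) draws samples from the three distributions named above and checks that the sample means approach the theoretical ones. The standard library has no Poisson sampler, so one is built with Knuth's classical algorithm:

```python
import math
import random

random.seed(42)  # fixed seed so the run is reproducible

def poisson(lam):
    """Poisson variate via Knuth's algorithm (the stdlib lacks one)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# Draw 10 000 variates from each of the three distributions named in the text.
n = 10_000
normal_draws = [random.gauss(5.0, 2.0) for _ in range(n)]
expo_draws = [random.expovariate(0.5) for _ in range(n)]   # mean 1/0.5 = 2
poisson_draws = [poisson(3.0) for _ in range(n)]           # mean 3

# The sample means should be close to the theoretical means 5, 2 and 3.
print(sum(normal_draws) / n, sum(expo_draws) / n, sum(poisson_draws) / n)
```

Replacing raw historical data with such theoretical distributions is exactly the simplification the text describes: the analyst assumes a distributional form and then works with cheaply generated variates.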

Another aspect of analysis is abstraction, a concept that, unlike simplification, is not so easy to explain and grasp. An abstraction contains or concentrates the essential qualities or features of the behavior of an object, but not necessarily in the same form and in the same detail as in the original. Most models are abstractions in the sense that they seek to represent the qualities and behavior of the modeled object in a form or manner different from their actual realization. Thus, in an organization chart we try to reflect in abstract form the working relationships among various groups of workers or individual members of such groups. The fact that such a chart depicts real relationships only superficially does not detract from its usefulness for certain purposes.

After we have analyzed and modeled the parts or elements of the system, we proceed to combine them into a single whole. In other words, by synthesizing relatively simple parts we can construct an approximation to a complex real situation. Two points are important here. First, the parts used for synthesis must be chosen correctly; second, their interaction must be predicted correctly. If all this is done properly, the processes of analysis, abstraction, simplification, and synthesis will eventually lead to a model that approximates the behavior of the real system under study. It must be remembered, however, that the model is only an approximation and therefore will not behave exactly like the real object. We optimize the model, not the real system. Whether there really is a correspondence between the characteristics of our model and reality depends on how correctly and intelligently we have carried out our analysis, abstraction, simplification, and synthesis. We rarely encounter a model that fully satisfies a given managerial situation.

Apparently, the basis of a successful modeling technique is careful testing of models. One usually starts with a very simple model and gradually moves toward a more refined form that reflects the complex situation more accurately. Analogies and associations with well-understood structures seem to play an important role in choosing the starting point for this process of improvement and refinement. The process rests on constant interaction and feedback between the real situation and the model: there is a continuous interplay between modifying the model and processing the data generated by the real object. As each version of the model is tested and evaluated, a new version is created, leading to repeated tests and re-evaluations.

As long as the model remains amenable to mathematical description, the analyst can keep improving it or complicating its initial assumptions. When the model becomes intractable, i.e., unsolvable, the developer resorts to simplification and deeper abstraction.

Thus, the art of modeling consists in the ability to analyze a problem, extract its essential features by abstraction, select and modify as appropriate the basic assumptions that characterize the system, and then refine and improve the model until it gives results useful in practice. This is usually formulated as seven instructions, according to which one should:

− decompose the general task of studying the system into a number of simpler tasks;

− clearly formulate the goals;

− find analogies;

− consider a specific numerical example corresponding to the given problem;

− choose certain notations;

− write down the obvious relationships;

− if the resulting model lends itself to mathematical description, expand it; otherwise, simplify it.

In general, a model can be simplified by one of the following operations (extending a model requires the opposite):

− turn variables into constants;

− exclude some variables or combine them;

− assume a linear relationship between the quantities under study;

− introduce more stringent assumptions and restrictions;

− impose stricter boundary conditions on the system.

The evolutionary character of model construction is inevitable and desirable, so this process should not be thought of as the building of a single, base-case model. As goals are achieved and the set tasks are solved, new tasks are posed, or a closer correspondence between model and real object is required, leading to revision of the model and ever better implementations of it. This process, in which one starts by building a simple model and then complicates and elaborates it, has a number of advantages for completing the development successfully. The pace and direction of the model's evolutionary change depend on two main factors. The first is obviously the inherent flexibility of the model; the second is the relationship between the creator of the model and its user. Through close cooperation throughout the evolution of the model, its developer and user can create an atmosphere of mutual trust that will contribute to final results meeting the stated goals, objectives, and criteria.

The art of modeling can be mastered by those who have original thinking, ingenuity and resourcefulness, as well as a deep knowledge of the systems and physical phenomena that need to be modeled.

There are no hard-and-fast rules about how to formulate a problem at the very beginning of the modeling process, i.e., immediately upon first encountering it. Nor are there magic formulas for settling, when building a model, such questions as the choice of variables and parameters, of the relationships describing the behavior of the system, of the constraints, or of the criteria for evaluating the model's effectiveness. It must be remembered that no one solves a problem in its pure form; everyone operates on a model that he has built on the basis of the task at hand.

Simulation is closely related to the functioning of systems. A system is a group or collection of entities united by some form of regular interaction or interdependence in order to perform a specified function.

Examples of systems can be: an industrial plant, an organization, a transport network, a hospital, a city development project, a person and a machine that he controls. The functioning of the system is a set of coordinated actions necessary to perform a specific task. From this point of view, the systems we are interested in are purposeful. This circumstance requires us, when modeling a system, to pay close attention to the goals or tasks that this system must solve. We must constantly keep the objectives of the system and the model in mind in order to achieve the necessary correspondence between them.

Since simulation is used to solve real problems, we must be sure that its end results accurately reflect the true state of affairs. A model that can produce absurd results must therefore immediately be regarded with suspicion. Any model should be evaluated over the maximum range of variation of its parameters and variables. If the model gives ridiculous answers to the questions posed, we will have to return to the drawing board. The model should also be able to answer "what if..." questions, since these are the most useful questions for us: they contribute to a deeper understanding of the problem and to the search for better ways of evaluating our possible courses of action.

Finally, we should always keep in mind the consumer of the information that our model allows us to obtain. One cannot justify developing a simulation model if it is ultimately unusable or if it does not benefit the decision maker.

The consumer of the results may be the person responsible for creating the system or for its entire operation; in other words, there must always be a user of the model. Otherwise we will waste time and effort, since managers will not long support operations research, control theory, or systems analysis teams whose results cannot be applied in practice.

Taking all this into account, we can formulate specific criteria that a good model should satisfy. Such a model should be:

− simple and understandable to the user;

− purposeful;

− reliable, in the sense of being guaranteed against absurd answers;

− easy to control and handle, i.e., communicating with it should be easy;

− complete with respect to the main tasks to be solved;

− adaptive, allowing easy transition to other modifications or updating of data;

− capable of gradual change, in the sense that, starting out simple, it can grow more complex in interaction with the user.

Given that simulation is to be used to study real systems, the following stages of the process can be distinguished:

− system definition − establishing the boundaries, constraints, and measures of effectiveness of the system to be studied;

− model formulation − the transition from the real system to some logical scheme (abstraction);

− data preparation − selecting the data needed to build the model and presenting them in an appropriate form;

− model translation − describing the model in a language acceptable to the computer to be used;

− assessment of adequacy − raising to an acceptable level the degree of confidence with which one can judge the correctness of conclusions about the real system drawn from runs of the model;

− strategic planning − designing an experiment that will provide the necessary information;

− tactical planning − determining how each series of tests provided for by the experimental plan is to be conducted;

− experimentation − performing the simulation in order to obtain the desired data and carry out sensitivity analysis;

− interpretation − drawing conclusions from the data obtained by simulation;

− implementation − practical use of the model and/or the simulation results;

− documentation − recording the progress of the project and its results, and documenting the process of creating and using the model.

These stages of creating and using a model are defined on the assumption that the problem can best be solved by simulation. As we have already noted, however, this may not be the most efficient approach. It has been pointed out repeatedly that simulation is a last resort, a brute-force technique for attacking a problem. When the problem can be reduced to a simple model and solved analytically, there is certainly no need for simulation. All means suitable for solving the particular problem should be considered, with the aim of the best combination of cost and desired results. Before evaluating the possibilities of simulation, one should make sure that a simple analytical model will not do.

The stages, or elements, of the simulation process in their interrelation are shown in the flowchart of Figure 1.3. The design of a model usually begins with the fact that someone in the organization comes to the conclusion that there is a problem that needs to be studied.

An appropriate worker (usually from the group associated with the problem) is assigned to carry out preliminary research. At some point, it is recognized that quantitative methods of research can be useful in studying the problem, and then the mathematician enters the scene. Thus begins the stage of defining the problem statement.

Einstein once said that the correct formulation of the problem is even more important than its solution. In order to find an acceptable or optimal solution to a problem, one must first know what it consists of.

Most practical problems are reported to the heads of scientific research units in an insufficiently clear, imprecise form. In many cases management is unable or unwilling to express the essence of its problems correctly: it knows that a problem exists but cannot articulate exactly what it is. Therefore the analysis of a system usually begins with an exploratory study of the system under the guidance of the responsible decision-maker. The research team must understand and articulate a set of relevant objectives and goals. Experience shows that formulating the problem is a continuous process that permeates the entire course of the research, which constantly generates new information about constraints, challenges, and possible alternatives. Such information should be used periodically to update the formulation and statement of the problem.

An important part of the problem statement is the determination of the characteristics of the system to be studied. All systems are subsystems of other larger systems. Therefore, we must determine the goals and constraints that we must take into account in the process of abstracting or building a formal model. It is said that a problem can be defined as a state of unmet need. The situation becomes problematic when the action of any system does not give the desired results.

If the desired results are not achieved, there is a need to modify the system or the environment in which it operates. Mathematically, the problem can be defined as follows:

P_t = D_t − A_t, (1.2)

where P_t is the state of the problem at time t;

D_t is the desired state at time t;

A_t is the actual state at time t.
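As a toy illustration of formula (1.2), with entirely hypothetical desired and actual values, the problem state at each time step is simply the gap between them; a positive value signals an unmet need:

```python
# Hypothetical throughput targets (desired states) vs. measured values (actual states).
desired = [100, 100, 110, 110]   # D_t
actual  = [ 96, 101, 104, 112]   # A_t

# Problem state P_t = D_t - A_t for each time step t.
problem = [d - a for d, a in zip(desired, actual)]
print(problem)  # [4, -1, 6, -2]
```

Here steps 0 and 2 show an unmet need (P_t > 0), while steps 1 and 3 exceed the target.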

Figure 1.3 - Stages of the simulation process

Therefore, the first step in characterizing the system to be studied is to analyze the needs of the environment for which the system is intended. This analysis begins with defining the goals and boundary conditions (i.e., what is and what is not part of the system to be studied). We are interested here in two functional boundaries, or interfaces: the boundary separating our problem from the rest of the world, and the boundary between the system and its environment (i.e., what we consider an integral part of the system and what constitutes the environment in which it operates). What happens within the system itself can be described in many ways. If we did not settle on some particular set of elements and relationships to be studied with a well-defined goal in mind, we would face an infinite number of connections and combinations.

Having outlined the goals and objectives of the study and determined the boundaries of the system, we then reduce the real system to a logical flowchart, or static model. We want to build a model of the real system that, on the one hand, is not so simplified as to be trivial and, on the other, not so detailed as to be cumbersome to use and prohibitively expensive. The danger lying in wait when constructing a logical flowchart of a really operating system is that the model tends to accumulate details and elements that contribute nothing to the understanding of the given task.

There is therefore almost always a tendency to imitate an excessive number of details. To avoid it, one should build the model around the questions to be answered rather than imitate the real system in all its details. Pareto's law states that in every group or population there is a vital minority and a trivial majority: nothing really important happens until the vital minority is affected. Too often systems analysts have sought to transfer all the detail-laden complexity of real situations into the model, hoping that the computer will solve their problems. This approach is unsatisfactory not only because the difficulty of programming the model and the cost of lengthened experimental runs increase, but also because the important aspects and relationships may be drowned in a mass of trivial details. That is why the model should reflect only those aspects of the system that correspond to the objectives of the study.

In many studies, the simulation may end there. In a surprisingly large number of cases, as a result of an accurate and consistent description of situations, defects and “bottlenecks” of the system become obvious, so that there is no need to continue research using simulation methods.

Every study also involves data collection, which is usually understood to mean obtaining numerical characteristics. But that is only one side of data collection. The systems analyst should be interested in the input and output data of the system under study, as well as in information about its various components and the interdependencies and relationships among them. He is therefore interested in collecting both quantitative and qualitative data, and he must decide which of them are needed, how suitable they are for the task at hand, and how to gather all this information.

When creating a stochastic simulation model, one must always decide whether the model should use the available empirical data directly or whether it is preferable to use theoretical probability or frequency distributions. The choice is of fundamental importance for three reasons. First, using raw empirical data means that, however hard we try, we can only imitate the past: data from one year reflect the performance of the system for that year and do not necessarily tell us anything about its expected behavior in the future, since only events that have already occurred will be treated as possible. It is one thing to assume that a given distribution keeps its basic form over time, and quite another to assume that the characteristics of a given year will always repeat themselves. Second, in view of the demands on computer time and memory, using theoretical frequency or probability distributions is generally more efficient than using tabulated data to obtain the random variates needed to drive the model. Third, it is highly desirable, perhaps even mandatory, that the analyst-developer determine the model's sensitivity to changes in the form of the probability distributions used and in the parameter values; in other words, it is extremely important to test the sensitivity of the final results to changes in the input data. Thus decisions about the suitability of the data, their reliability, their form of presentation, their degree of agreement with theoretical distributions, and the past performance of the system all greatly affect the success of a simulation experiment, and they are not the product of purely theoretical reasoning.
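A minimal sketch of such a sensitivity check, with made-up numbers for a hypothetical store: two service-time distributions with the same mean but different shapes are compared on the probability that serving ten customers overruns a time budget. The gap between the two estimates is exactly the kind of sensitivity to distributional form the text warns about:

```python
import random

random.seed(1)

def total_time(service_sampler, n_customers=10):
    """Total time to serve a fixed batch of customers."""
    return sum(service_sampler() for _ in range(n_customers))

def estimate_overrun(service_sampler, limit=25.0, runs=20_000):
    """Fraction of simulated batches whose total service time exceeds `limit`."""
    hits = sum(total_time(service_sampler) > limit for _ in range(runs))
    return hits / runs

mean_service = 2.0  # hypothetical mean service time, same for both models
p_expo = estimate_overrun(lambda: random.expovariate(1 / mean_service))
p_unif = estimate_overrun(lambda: random.uniform(0.0, 2 * mean_service))

# Same mean, different spread: the exponential model has a much heavier tail,
# so its overrun probability is noticeably larger than the uniform model's.
print(p_expo, p_unif)
```

If the two answers differed only slightly, the analyst could be relaxed about the distributional assumption; a large gap, as here, means the choice of distribution materially changes the conclusions.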

Model validation is the process by which an acceptable level of user confidence is reached that any conclusion drawn from the simulation about the behavior of the system will be correct. It is impossible to prove that a particular simulation is a correct or "true" representation of a real system. Fortunately, we are rarely concerned with the problem of proving the "veracity" of the model. Instead, we are mainly interested in the validity of those deeper inferences that we have come to or will come to on the basis of simulation. Thus, we are usually concerned not with the fairness of the structure of the model itself, but with its functional usefulness.

Model validation is an extremely important step, because simulation models give the impression of reality, and both modelers and their users easily gain confidence in them. Unfortunately, for a casual observer, and sometimes for a specialist experienced in modeling issues, the initial assumptions on the basis of which this model was built are hidden. Therefore, a check performed without due diligence can lead to disastrous consequences.

In this article we will talk about simulation models. It is a rather complex topic that deserves separate consideration, so we will try to explain it in accessible language.

Simulation models

What is this about? Let's start with the fact that simulation models are needed to reproduce the characteristics of a complex system in which elements interact. Such modeling has a number of features.

First, the object of modeling is most often a complex composite system. Second, random factors are always present and exert a certain influence on the system. Third, there is the need to describe the complex and lengthy process observed as a result of the modeling. Fourth, without the use of computer technology it is impossible to obtain the desired results.

Development of a simulation model

Development rests on the fact that each object has a certain set of characteristics, all of which are stored in the computer in special tables. The interaction of the values and indicators is always described by an algorithm.

The peculiarity and beauty of modeling is that each stage is gradual and smooth, which makes it possible to change characteristics and parameters step by step and obtain different results. A program built on a simulation model displays information about the results obtained in response to particular changes. Graphical or animated representations are often used, greatly simplifying the perception and understanding of many complex processes that are hard to grasp in algorithmic form.

Determinism

Simulation mathematical models are built to copy the qualities and characteristics of real systems. Consider an example in which the size and dynamics of a population of organisms must be studied. With the help of modeling, each organism can be considered separately in order to analyze its particular indicators. The conditions are most often stated verbally: for example, after a certain period of time one can specify the reproduction of an organism, and after a longer period its death. The fulfillment of all these conditions is possible in the simulation model.

Modeling the motion of gas molecules is another frequently cited example, since they are known to move randomly. One can model the interaction of molecules with the vessel walls or with one another and describe the results in the form of an algorithm. This makes it possible to obtain average characteristics of the entire system and to analyze them. At the same time, one must understand that such a computer experiment can, in effect, be called real, since all the characteristics are modeled very accurately. But what is the purpose of this process?
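A toy version of this experiment can be sketched as follows (the vessel size, step length, and particle count are invented for illustration): each "molecule" wanders randomly inside a square vessel, reflecting off the walls, and we then compute an average characteristic of the whole system, namely the mean displacement from the starting point:

```python
import random

random.seed(7)

# Hypothetical parameters: a square vessel [0, L] x [0, L], small random steps.
L, step, steps, n_molecules = 10.0, 0.1, 1000, 500

def final_displacement():
    """Random walk of one molecule from the centre, reflecting off the walls."""
    x = y = L / 2
    for _ in range(steps):
        dx, dy = random.choice([(step, 0), (-step, 0), (0, step), (0, -step)])
        x, y = x + dx, y + dy
        # reflect off the vessel walls
        if x < 0: x = -x
        if x > L: x = 2 * L - x
        if y < 0: y = -y
        if y > L: y = 2 * L - y
    return ((x - L / 2) ** 2 + (y - L / 2) ** 2) ** 0.5

# Averaging over many molecules gives an "average characteristic of the
# entire system" of the kind the text mentions.
mean_disp = sum(final_displacement() for _ in range(n_molecules)) / n_molecules
print(mean_disp)
```

For a free random walk the mean displacement grows roughly as the step length times the square root of the number of steps; the walls reduce it slightly, which the simulation captures automatically.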

The point is that a simulation model allows specific, pure characteristics and indicators to be singled out. It gets rid, as it were, of random, extraneous, and other factors that researchers may not even be aware of. Note that deterministic simulation and mathematical modeling are often similar, unless an autonomous strategy of action is to be created as a result. The examples considered above concern deterministic systems; they are distinguished by having no elements of probability.

Random processes

The name is easy to understand if you draw a parallel with ordinary life. For example, you are standing in line at a store that closes in 5 minutes, wondering whether you will have time to make your purchase; or you call someone and count the rings, estimating how likely you are to get through. It may seem surprising, but it was from such simple examples that a new branch of mathematics, queuing theory, was born at the beginning of the last century. It uses statistics and probability theory to draw conclusions. Researchers later showed that the theory is closely connected with military affairs, economics, production, ecology, biology, and so on.

Monte Carlo method

An important method for solving queuing problems is the method of statistical trials, or the Monte Carlo method. The possibilities for studying random processes analytically are quite limited, whereas the Monte Carlo method is simple and universal, which is its main advantage. Consider the example of a store that customers enter singly or in groups, or of patients arriving at an emergency room one by one or in a crowd. These are all random processes, and the time intervals between actions are independent events distributed according to laws that can be deduced only from a huge number of observations. Sometimes this is not possible, so an average variant is taken. But what is the purpose of modeling random processes?
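The spirit of the method can be shown with the standard textbook illustration (not an example from this text): estimating π by statistical trials. Random points are scattered over the unit square, and the fraction falling inside the quarter-circle estimates its area, π/4:

```python
import random

random.seed(0)

def estimate_pi(n=200_000):
    """Monte Carlo estimate of pi: the fraction of uniformly random points
    in the unit square that land inside the quarter-circle of radius 1."""
    inside = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
                 for _ in range(n))
    return 4 * inside / n

pi_hat = estimate_pi()
print(pi_hat)  # close to 3.14159 for large n
```

No equations are solved; the answer emerges purely from counting the outcomes of random trials, which is exactly what makes the method so universal for queuing and other random processes.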

The point is that it lets us answer many questions. The simplest is to calculate how long a person will have to stand in line, given all the circumstances. This seems a fairly basic example, but it is only the first level, and there can be many similar situations in which timing matters a great deal.

One can also ask how the time spent waiting for service might be used. A still harder question is how the parameters should be related so that a queue never forms in front of a newly arriving customer. It seems an easy question, but if you think it over and begin to complicate it even a little, it becomes clear that the answer is not so simple.

Process

How does the modeling of random processes work? Mathematical formulas are used, namely the distribution laws of random variables, along with numerical constants. Note that in this case there is no need to resort to the equations used in analytical methods; the same queue discussed above is simply imitated. First, programs are used that generate random numbers and map them onto a given distribution law. Then extensive statistical processing of the obtained values is carried out to check whether they meet the original purpose of the modeling. Continuing the example, one can find the optimal number of staff for the store so that a queue never arises. The mathematical apparatus used here consists of the methods of mathematical statistics.
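A minimal sketch of such an imitation of a queue, for a single-server system with exponential arrival gaps and service times (the rates are invented for illustration). Instead of queuing-theory equations, the waiting time of each successive customer is generated directly with Lindley's recursion:

```python
import random

random.seed(123)

def mean_wait(arrival_rate, service_rate, n_customers=50_000):
    """Average waiting time in a single-server queue, simulated with
    Lindley's recursion: W[n+1] = max(0, W[n] + S[n] - A[n+1])."""
    w = total = 0.0
    for _ in range(n_customers):
        s = random.expovariate(service_rate)   # this customer's service time
        a = random.expovariate(arrival_rate)   # gap until the next arrival
        w = max(0.0, w + s - a)
        total += w
    return total / n_customers

# Hypothetical store: shoppers arrive about 0.8 per minute, one clerk serves
# about 1.0 per minute. Queuing theory predicts a mean wait of
# lambda / (mu * (mu - lambda)) = 0.8 / 0.2 = 4 minutes; the simulation agrees.
print(mean_wait(0.8, 1.0))
```

Repeating the run with two or three simulated clerks (lower effective load) is precisely how one searches for the staffing level at which the queue effectively disappears.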

Education

Little attention is paid to the analysis of simulation models in schools, and unfortunately this can affect the future quite seriously. Children should learn some basic modeling principles at school, since the development of the modern world is impossible without this process. In a basic computer science course, children can easily work with the "Life" simulation model.

A more thorough study can be offered in high school or in specialized schools. First of all, the simulation of random processes should be studied. Such concepts and methods are only beginning to be introduced in Russian schools, so it is very important to maintain the training of teachers, who will certainly face numerous questions from children. At the same time, we will not overcomplicate the task: we are talking about an elementary introduction to the topic, which can be covered in detail in two hours.

After the children have mastered the theoretical basis, it is worth turning to the technical questions of generating a sequence of random numbers on a computer. There is no need to burden children with information about how a computer works or on what principles the analysis is built. As practical skills, they should be taught to create generators of random numbers uniform on a segment, or of random numbers distributed according to a given law.
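Both of those practical skills can be sketched in a few lines of Python (the segment bounds and the rate parameter are arbitrary examples): a uniform generator on a segment [a, b] built from the standard U(0,1) source, and an exponential generator built by the inverse-transform method:

```python
import math
import random

random.seed(2024)

def uniform_on(a, b):
    """Uniform random number on the segment [a, b], built from standard U(0,1)."""
    return a + (b - a) * random.random()

def exponential(lam):
    """Inverse-transform method: if U ~ U(0,1), then -ln(1-U)/lam ~ Exp(lam).
    (1 - U is used so the argument of the logarithm is never zero.)"""
    return -math.log(1.0 - random.random()) / lam

n = 100_000
u_mean = sum(uniform_on(3, 7) for _ in range(n)) / n   # theory: (3 + 7) / 2 = 5
e_mean = sum(exponential(2.0) for _ in range(n)) / n   # theory: 1 / 2 = 0.5
print(u_mean, e_mean)
```

Checking the sample means against the theoretical ones, as done here, is itself a useful classroom exercise in validating a generator.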

Relevance

Let us say a little about why management simulation models are needed. In the modern world it is almost impossible to do without modeling in any field. Why is it so popular and in demand? Simulation can replace real experiments needed to obtain concrete results when setting them up and analyzing them would be too expensive, or when real experiments are simply forbidden. It is also used when an analytical model cannot be built because of numerous random factors, consequences, and causal relationships, and when the behavior of a system over a given period of time must be reproduced. For all of this, simulators are created that reproduce the qualities of the original system as closely as possible.

Kinds

Simulation models can be of several types; let us consider the main approaches to simulation modeling. The first is system dynamics, which deals with interconnected variables, accumulators of various kinds, and feedback loops; most often two systems with some common characteristics and points of intersection are considered. The next type is discrete-event simulation, which concerns cases where there are processes and resources as well as a sequence of actions; most often the possibility of an event is studied through the prism of a number of possible or random factors. The third type is agent-based modeling, in which the individual properties of the elements of a system are studied; it requires indirect or direct interaction between the observed object and the others.

Discrete-event modeling abstracts from the continuity of events and considers only the key moments; random and irrelevant factors are thereby excluded. This method is the most developed and is used in many areas, from logistics to production systems; it is best suited for modeling production processes. It was created in the 1960s by Geoffrey Gordon. System dynamics is a modeling paradigm in which research requires a graphical representation of the relationships and mutual influences of some parameters on others, with the time factor taken into account; only on the basis of all these data is a global model created on the computer. It is this type that allows one to understand deeply the essence of the phenomenon under study and to identify causes and connections. Such simulations underlie business strategies, production models, models of the development of diseases, city planning, and so on. This method was invented in the 1950s by Jay Forrester.

Agent-based modeling appeared in the 1990s and is relatively new. It is used to analyze decentralized systems whose dynamics are determined not by generally accepted laws and rules but by the individual activity of particular elements. The aim of such a simulation is to derive the system-wide rules, to characterize the system as a whole, and to find the relationships between its individual components. The element under study is active and autonomous: it can make decisions on its own, interact with its environment, and change independently, which is very important.

Stages

Let us now consider the main stages of developing a simulation model: formulating the problem at the very beginning of the process, building a conceptual model, choosing a modeling method, choosing modeling tools, planning, and carrying out the task. At the last stage, all the obtained data are analyzed and processed. Building a simulation model is a complex and lengthy process that requires attention and an understanding of the subject. Note that these stages take most of the time, while the simulation run itself takes no more than a few minutes on a computer. It is very important to use the right simulation models: otherwise the results obtained will be neither realistic nor productive.

Summing up the article, this is a very important and modern field. We have looked at examples of simulation models in order to understand their importance. In the modern world, modeling plays a huge role, since the economy, urban planning, production, and so on develop on its basis. Simulation models are in great demand because they are economical and convenient. Even when real conditions can be created, it is not always possible to obtain reliable results, since there are always many stochastic factors that are simply impossible to take into account.

Simulation Models

A simulation model reproduces the behavior of a complex system of interacting elements. Simulation modeling is characterized by the presence of the following circumstances (all of them simultaneously, or some):

  • the object of modeling is a complex inhomogeneous system;
  • in the simulated system there are factors of random behavior;
  • it is required to obtain a description of the process developing in time;
  • it is fundamentally impossible to obtain simulation results without using a computer.

The state of each element of the simulated system is described by a set of parameters that are stored in the computer memory in the form of tables. The interactions of the elements of the system are described algorithmically. Modeling is carried out in a step-by-step mode. At each simulation step, the values of the system parameters change. The program that implements the simulation model reflects the change in the state of the system, giving the values of its desired parameters as tables over time steps, or in the sequence of events occurring in the system. To visualize the simulation results, a graphical representation is often used, including animation.

Deterministic Simulation

The simulation model is based on the imitation of a real process (simulation). For example, when simulating the change (dynamics) in the number of microorganisms in a colony, one can consider many individual objects and follow the fate of each of them, setting certain conditions for its survival, reproduction, and so on. These conditions are usually specified verbally; for example: after a certain period of time the microorganism divides in two, and after another (longer) period it dies. The fulfillment of the described conditions is implemented algorithmically in the model.
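Verbal conditions of this kind translate directly into code. Below is a minimal sketch in Python; the function name and the division and death ages are illustrative assumptions, not taken from the text. Each organism is tracked individually by its age: at the division age it splits into two newborns, and at the death age it disappears.

```python
def simulate_colony(steps, divide_age=3, death_age=7):
    """Track each microorganism individually (ages in time steps).
    At divide_age an organism splits into two newborns; at death_age
    it dies.  Returns the population size after each step."""
    ages = [0]                        # one newborn organism at the start
    history = []
    for _ in range(steps):
        next_gen = []
        for age in ages:
            age += 1
            if age == divide_age:
                next_gen.extend([0, 0])   # division: two newborns
            elif age < death_age:
                next_gen.append(age)      # survives unchanged
            # age >= death_age: the organism dies, nothing is appended
        ages = next_gen
        history.append(len(ages))
    return history
```

With these parameters every organism divides before it can die of old age, so the population doubles every three steps, exactly the kind of regularity such a deterministic imitation makes visible.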

Another example: modeling the movement of molecules in a gas, when each molecule is represented as a ball with a certain direction and speed of movement. The interaction of two molecules or a molecule with the vessel wall occurs according to the laws of absolutely elastic collision and is easily described algorithmically. Obtaining the integral (general, averaged) characteristics of the system is carried out at the level of statistical processing of the simulation results.

Such a computer experiment, in effect, claims to reproduce a full-scale experiment. To the question "Why do this?" one can answer: simulation modeling allows us to single out "in pure form" the consequences of the hypotheses embedded in our picture of micro-events (i.e., events at the level of system elements), freeing them from the influence of factors that are inevitable in a full-scale experiment and that we may not even suspect. If such modeling also includes elements of a mathematical description of processes at the microlevel, and if the researcher does not set the task of finding a strategy for controlling the results (for example, managing the size of microorganism colonies), then the difference between a simulation model and a mathematical (descriptive) one turns out to be rather conventional.

The examples of simulation models given above (the evolution of a colony of microorganisms, the movement of molecules in a gas) lead to a deterministic description of systems. They lack elements of probability, of randomness of events in the simulated systems. Let us now consider an example of modeling a system that has these qualities.

Models of random processes

Who hasn't stood in line, impatiently wondering whether he will manage to make a purchase (or pay the rent, ride the carousel, etc.) in the time at his disposal? Or, trying to call the help desk and running into busy signals several times, grown nervous estimating whether he will get through or not? From such "simple" problems, at the beginning of the 20th century, a new branch of mathematics was born: queueing theory, which uses the apparatus of probability theory and mathematical statistics, differential equations, and numerical methods. It subsequently turned out that this theory has numerous applications in economics, military affairs, organization of production, biology and ecology, and so on.

Computer simulation, implemented in the form of the statistical test method (the Monte Carlo method), plays an important role in solving queueing problems. The possibilities of analytical methods for solving real-life queueing problems are very limited, while the method of statistical testing is universal and relatively simple.

Consider the simplest problem of this class. There is a shop with one seller, which customers enter at random moments. If the seller is free, he begins to serve a customer immediately; if several customers enter at once, a queue forms. There are many other similar situations:

  • a repair zone in a vehicle depot and the cars and buses that have left the line because of a breakdown;
  • an emergency room and the patients who have come in because of an injury (i.e., without an appointment system);
  • a telephone exchange with one input (or one telephone operator) and the subscribers who are queued when the input is busy (such a system is sometimes practiced);
  • a local network server and the personal machines at workplaces, which send messages to a server capable of accepting and processing no more than one message at a time.

The process of customers arriving at the store is a random process. The time intervals between the arrivals of any consecutive pair of customers are independent random variables distributed according to some law, which can only be established from numerous observations (or some plausible version of it is adopted for modeling). The second random process in this problem, independent of the first, is the duration of service of each customer.

The purpose of modeling systems of this kind is to answer a number of questions. A relatively simple one: what is the average time spent waiting in the queue, for given distribution laws of the above random variables? A more difficult question: what is the distribution of waiting times for service in the queue? An equally difficult one: at what ratios of the parameters of the input distributions does a crisis occur, in which the queue grows without bound and a newly arrived customer's turn never comes? If you think about this relatively simple problem, the possible questions multiply.

In general terms, the modeling approach looks like this. The mathematical formulas used are the distribution laws of the initial random variables; the numerical constants used are the empirical parameters that enter these formulas. No equations of the kind that would appear in an analytical study of this problem are solved. Instead, the queue is imitated, "played out" with the help of computer programs that generate random numbers with the given distribution laws. Then statistical processing is performed on the set of obtained values of the quantities determined by the stated modeling goals. For example, the optimal number of sellers for different periods of store operation is found that ensures the absence of queues. The mathematical apparatus used here belongs to the methods of mathematical statistics.
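The scheme just described can be sketched in a few lines of Python. This is a minimal version of the one-seller queue: interarrival and service times are drawn from uniform distributions (the simplest assumption, suggested later in the text), and the function name and parameter values are illustrative, not taken from the source.

```python
import random

def simulate_queue(n_customers=10000, max_gap=10.0, max_service=8.0, seed=42):
    """Single-server queue played out by the Monte Carlo method.
    Interarrival and service times are uniform on [0, max_gap] and
    [0, max_service].  Returns the average waiting time per customer."""
    random.seed(seed)
    arrival = 0.0            # arrival moment of the current customer
    server_free = 0.0        # moment the seller becomes free
    total_wait = 0.0
    for _ in range(n_customers):
        arrival += random.uniform(0.0, max_gap)     # next customer arrives
        start = max(arrival, server_free)           # waits if seller is busy
        total_wait += start - arrival
        server_free = start + random.uniform(0.0, max_service)
    return total_wait / n_customers

print(simulate_queue())
```

Statistical processing here is reduced to a single sample mean; in a fuller study one would also collect the distribution of waiting times and vary the parameters to look for the critical regime mentioned above.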

The article "Modeling Ecological Systems and Processes" describes another example of simulation modeling: one of the many models of the "predator-prey" system. Individuals of the species that are in these relationships move according to certain rules containing elements of chance; predators eat prey, both reproduce, and so on. Such a model contains no mathematical formulas, but it does require statistical processing of the results.

An example of a deterministic simulation model

Consider a simulation model of the evolution of a population of living organisms, known as "Life", which is easy to implement in any programming language.

To construct the game algorithm, consider a square field of n + 1 columns and rows, with the usual numbering from 0 to n. For convenience, we declare the outermost boundary columns and rows a "dead zone"; they play only an auxiliary role.

For any internal cell of the field with coordinates (i, j), 8 neighbors can be determined. If a cell is "live", we paint it over; if a cell is "dead", it is left empty.

Let's set the rules of the game. If a cell (i, j) is "alive" and it is surrounded by more than three "live" cells, it dies (from overpopulation). A "live" cell also dies if there are fewer than two "live" cells in its environment (from loneliness). A "dead" cell comes to life if exactly three "live" cells appear around it.

For convenience, we introduce a two-dimensional array A whose elements take the value 0 if the corresponding cell is empty, and 1 if the cell is "live". Then the algorithm for determining the state of the cell with coordinates (i, j) can be written as follows:

S := A[i-1, j-1] + A[i-1, j] + A[i-1, j+1] + A[i, j-1] +
     A[i, j+1] + A[i+1, j-1] + A[i+1, j] + A[i+1, j+1];
If (A[i, j] = 1) And ((S > 3) Or (S < 2)) Then B[i, j] := 0;
If (A[i, j] = 0) And (S = 3) Then B[i, j] := 1;

Here, the array B defines the state of the field at the next step; the above holds for all internal cells, from i = 1 to n - 1 and j = 1 to n - 1. Subsequent generations are determined similarly; it is only necessary to carry out the reassignment procedure:

For I := 1 To N - 1 Do
  For J := 1 To N - 1 Do
    A[I, J] := B[I, J];

On the display screen, it is more convenient to display the state of the field not in a matrix, but in a graphical form.
It remains only to define a procedure for setting the initial configuration of the playing field. For a random initial state of the cells, the following algorithm is suitable, which makes K randomly chosen cells "live":

For I := 1 To K Do
Begin
  K1 := Random(N - 1) + 1;
  K2 := Random(N - 1) + 1;
  A[K1, K2] := 1
End;

It is more interesting for the user to set the initial configuration himself, which is easy to implement. Experimenting with this model, one can find, for example, stable populations of living organisms that never die, remaining unchanged or changing their configuration with a certain period. A "cross" settlement, by contrast, is absolutely unstable, perishing in the second generation.
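The Pascal fragments above can be assembled into a complete program in any language. As an illustration, here is a sketch of one generation step in Python under the same conventions (border row/column 0 and n stay permanently "dead"); the function name is our own.

```python
def life_step(a):
    """One generation of "Life" on an (n+1) x (n+1) field whose
    boundary row/column 0 and n form the permanently dead zone."""
    n = len(a) - 1
    b = [row[:] for row in a]               # next generation starts as a copy
    for i in range(1, n):
        for j in range(1, n):
            s = sum(a[i + di][j + dj]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0))  # sum over the 8 neighbours
            if a[i][j] == 1 and (s > 3 or s < 2):
                b[i][j] = 0                 # death: overpopulation or loneliness
            if a[i][j] == 0 and s == 3:
                b[i][j] = 1                 # birth
    return b
```

A quick experiment: a vertical bar of three "live" cells (a "blinker") turns into a horizontal one after a step, and back again after the next, one of the periodic settlements mentioned above.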

In the basic computer science course, students can implement the Life simulation model as part of the Introduction to Programming section. A more thorough mastering of simulation modeling can take place in high school in a profile or elective course in computer science. This option will be discussed next.

The study begins with a lecture on the simulation modeling of random processes. In the Russian school, the concepts of probability theory and mathematical statistics are only beginning to enter the mathematics course, and the teacher should be prepared to give an introduction to this material, which is so important for forming a worldview and a mathematical culture. We emphasize that what is meant is an elementary introduction to the range of concepts under discussion; this can be done in 1-2 hours.

Then we discuss the technical issues of generating, on a computer, sequences of random numbers with a given distribution law. Here one can rely on the fact that every universal programming language provides a generator of random numbers uniformly distributed on the segment from 0 to 1. At this stage it is inappropriate to go into the difficult question of the principles of its implementation. Based on the available random number generators, we show how to construct

a) a generator of uniformly distributed random numbers on any segment [a, b];

b) a random number generator for almost any distribution law (for example, using an intuitively clear "selection-rejection" method).
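Both constructions can be sketched on top of the standard uniform generator. In the Python sketch below, uniform_ab implements point (a) and rejection point (b); the function names and the triangular density used in the example are purely illustrative.

```python
import random

def uniform_ab(a, b):
    """(a) A uniform random number on [a, b], built from the
    standard generator uniform on [0, 1]."""
    return a + (b - a) * random.random()

def rejection(density, a, b, density_max):
    """(b) The "selection-rejection" method: draw a point uniformly
    in the rectangle [a, b] x [0, density_max]; accept x if the
    point falls under the density curve, otherwise try again."""
    while True:
        x = uniform_ab(a, b)
        y = uniform_ab(0.0, density_max)
        if y <= density(x):
            return x

# Example: a triangular density p(x) = 2x on [0, 1] (maximum 2 at x = 1).
random.seed(1)                       # fixed seed for reproducibility
sample = [rejection(lambda x: 2.0 * x, 0.0, 1.0, 2.0) for _ in range(1000)]
```

The sample mean should settle near 2/3, the expectation of this triangular law, which gives students an immediate empirical check of the generator.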

It is advisable to begin the consideration of the queueing problem described above with a discussion of the history of queueing problems (Erlang's problem of servicing calls at a telephone exchange). Then comes the simplest problem, which can be formulated using the example of the formation and examination of a queue in a store with one seller. Note that at the first stage of modeling, the input distributions of the random variables can be assumed uniform, which, although unrealistic, removes a number of difficulties (to generate the random numbers, one can simply use the generator built into the programming language).

We draw the students' attention to the questions posed first when modeling systems of this type. First, this is the calculation of the mean values (mathematical expectations) of certain random variables. For example, what is the average time spent queuing at the counter? Or: find the average time the seller spends waiting for a customer.

The teacher's task, in particular, is to explain that the sample means are themselves random variables: in another sample of the same size they will take different values (though for large sample sizes they will not differ too much from each other). Further options are possible. For a more prepared audience, one can show a method for estimating the confidence intervals within which the mathematical expectations of the corresponding random variables lie, for given confidence probabilities (by methods known from mathematical statistics, without attempting justification). For a less prepared audience, one can confine oneself to a purely empirical statement: if in several samples of equal size the mean values coincide in some decimal place, then that digit is most likely correct. If the simulation fails to achieve the desired accuracy, the sample size should be increased.

In an even more mathematically prepared audience, one can pose the question: what is the distribution of random variables that are the results of statistical modeling, given the distributions of random variables that are its input parameters? Since the presentation of the corresponding mathematical theory in this case is impossible, one should limit oneself to empirical methods: constructing histograms of the final distributions and comparing them with several typical distribution functions.

After working out the primary skills of this modeling, we move on to a more realistic model in which the input streams of random events are distributed, for example, according to Poisson. This will require students to additionally master the method of generating sequences of random numbers with the specified distribution law.
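For a Poisson input stream, the intervals between successive events are exponentially distributed, and the inverse transform method produces them from the standard uniform generator. A minimal Python sketch (the intensity value is an illustrative assumption):

```python
import math
import random

def exponential_gap(rate):
    """Interarrival time of a Poisson stream with the given intensity
    (events per unit time), by the inverse transform method:
    if U is uniform on (0, 1], then -ln(U) / rate is exponentially
    distributed with mean 1 / rate."""
    u = random.random()
    while u == 0.0:              # guard against log(0)
        u = random.random()
    return -math.log(u) / rate

random.seed(3)
gaps = [exponential_gap(2.0) for _ in range(10000)]
print(sum(gaps) / len(gaps))     # should be close to 1/rate = 0.5
```

Feeding such gaps into the queue model in place of uniform ones is exactly the "more realistic" step the text describes.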

In the considered problem, as in any more complex problem about queues, a critical situation may arise when the queue grows indefinitely with time. Modeling the approach to a critical situation as one of the parameters increases is an interesting research task for the most prepared students.

On the example of the task about the queue, several new concepts and skills are worked out at once:

  • concepts of random processes;
  • concepts and basic simulation skills;
  • construction of optimization simulation models;
  • building multicriteria models (by solving problems of the most rational customer service combined with the interests of the store owner).

Exercise:

    1. Make a diagram of the key concepts;
    2. Select practical tasks with solutions for the basic and specialized computer science courses.

Simulation Modeling.

The concept of a simulation model.

Approaches to the construction of simulation models.

According to the definition of Academician V. Maslov: “simulation modeling consists primarily in the construction of a mental model (simulator) that imitates objects and processes (for example, machines and their work) according to the necessary (but incomplete) indicators: for example, by working time, intensity, economic costs, location in the shop, etc. It is the incompleteness of the description of the object that makes the simulation model fundamentally different from the mathematical one in the traditional sense of the word. Then comes a search, in dialogue with a computer, through a huge number of possible options, and the choice within a specific timeframe of the most acceptable solutions from the point of view of the engineer. In this process, use is made of the intuition and experience of the engineer making the decision, who understands the entire complex situation in production.”

In the study of such complex objects, the optimal solution in the strictly mathematical sense may not be found at all. But you can get an acceptable solution in a relatively short time. The simulation model includes heuristic elements, sometimes uses inaccurate and contradictory information. This makes simulation closer to real life and more accessible to users - engineers in industry. In dialogue with the computer, specialists expand their experience, develop intuition, in turn, transfer them to the simulation model.

So far we have talked a lot about continuous objects, but it is not uncommon to deal with objects that have discrete input and output variables. As an example of analyzing the behavior of such an object with a simulation model, let us consider the now-classical "drunken passer-by" problem, or the random walk problem.

Suppose that a passer-by, standing at a street corner, decides to take a walk to sober up. Let the probabilities that, on reaching the next intersection, he will go north, south, east or west be equal. What is the probability that, after walking 10 blocks, the passer-by will be no more than two blocks from the place where he started the walk?

Let us denote his location at each intersection by a two-dimensional vector (X1, X2), where X1 counts the blocks travelled to the east (positive) or to the west (negative), and X2 the blocks travelled to the north (positive) or to the south (negative).

Each move of one block to the east corresponds to an increment of X1 by 1, and each move of one block to the west to a decrease of X1 by 1 (X1 and X2 are discrete variables). Similarly, when the passer-by moves one block north, X2 increases by 1, and one block south, X2 decreases by 1.

Now, if we designate the initial position as (0,0), then we will know exactly where the passerby will be relative to this initial position.

If at the end of the 10-block walk the sum of the absolute values of X1 and X2 is greater than 2, we shall consider that he has ended up more than two blocks from his starting point.

Since the probability of our passer-by moving in any of the four possible directions is the same and equals 0.25 (1/4 = 0.25), we can simulate his movement using a table of two-digit random numbers. Let's agree that if the random number (RN) lies between 0 and 24, the passer-by goes east and we increase X1 by 1; if between 25 and 49, he goes west and we decrease X1 by 1; if between 50 and 74, he goes north and we increase X2 by 1; and if between 75 and 99, he goes south and we decrease X2 by 1.

Scheme (a) and algorithm (b) of the movement of a "drunk passerby".


It is necessary to carry out a sufficiently large number of "machine experiments" in order to obtain a reliable result. But it is practically impossible to solve such a problem by other methods.
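In Python, such a machine experiment takes only a few lines. The sketch below follows the random-number-table convention introduced above; the function name and the number of trials are illustrative choices.

```python
import random

def walk_within_two_blocks(trials=100000, steps=10, seed=7):
    """Estimate the probability that after `steps` blocks the passer-by
    ends up no more than two blocks (in the |X1| + |X2| sense used
    above) from the starting corner."""
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        x1 = x2 = 0
        for _ in range(steps):
            rn = random.randrange(100)   # the "table of random numbers"
            if rn < 25:                  # 0..24: east
                x1 += 1
            elif rn < 50:                # 25..49: west
                x1 -= 1
            elif rn < 75:                # 50..74: north
                x2 += 1
            else:                        # 75..99: south
                x2 -= 1
        if abs(x1) + abs(x2) <= 2:
            hits += 1
    return hits / trials

print(walk_within_two_blocks())
```

With a large number of trials the estimate stabilizes at roughly 0.43; the scatter between runs of different seeds illustrates exactly why many "machine experiments" are needed for a reliable result.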

In the literature, the simulation method is also found under the names of the digital, machine, statistical, probabilistic, dynamic modeling or machine simulation method.

The simulation method can be considered as a kind of experimental method. The difference from a conventional experiment is that the object of experimentation is a simulation model implemented as a computer program.

A simulation model cannot yield analytical relationships between quantities; however, the experimental data it produces can be processed in a certain way, and suitable mathematical expressions can be fitted to them.

At present, two approaches are used in creating simulation models: discrete and continuous.

The choice of approach is largely determined by the properties of the object - the original and the nature of the influence of the external environment on it.

However, according to the Kotelnikov theorem, a continuous process of changing the states of an object can be considered as a sequence of discrete states and vice versa.

When using a discrete approach to creating simulation models, abstract systems are usually used.

The continuous approach to building simulation models has been widely developed by the American scientist J. Forrester. The modeled object, regardless of its nature, is formalized as a continuous abstract system, between the elements of which continuous "streams" of one nature or another circulate.

Thus, by a simulation model of the original object we can, in the general case, understand a certain system consisting of separate subsystems (elements, components) and the connections between them (i.e., possessing a structure), in which the functioning (change of state) and the internal changes of all elements of the model under the influence of these connections can be algorithmized in one way or another, as can the interaction of the system with the external environment.

Thanks not only to mathematical techniques but also to the well-known capabilities of the computer itself, simulation modeling can algorithmize and reproduce the processes of functioning and interaction of the various elements of abstract systems: discrete and continuous, probabilistic and deterministic, performing functions of service, delay, and so on.

A computer program (together with service programs) written in a universal high-level language acts as a simulation model of an object in this setting.

Academician N.N. Moiseev formulated the concept of simulation modeling in the following way: “A simulation system is a set of models that simulate the course of the process under study, combined with a special system of auxiliary programs and an information base that allows you to quite simply and quickly implement variant calculations.”
