Acceptance test definition. Development of documentation, production and testing of prototypes. Set of standards for automated systems



Acceptance tests are carried out in accordance with the specified program and methodology upon presentation of the terms of reference (ToR) for the creation of the AS, the work logs, and the acts of acceptance into and completion of trial operation. During these tests, the functioning of the AS is checked under the conditions specified in the ToR, both autonomously and as part of a complex, along with the means of restoring the operability of the AS after failures and the possibility of actually carrying out all the recommended procedures. The test protocols produced under the program are summarized in a single protocol, on the basis of which a conclusion is drawn about the system's compliance with the ToR requirements and the possibility of issuing an act of acceptance of the AS into permanent operation.

Usually this testing is done by another team that repeats, manually or with another tool, much of what is already covered by the automated acceptance tests. This may happen because the acceptance tests cannot easily be run across different environments, or because the development team cannot easily understand what the tests are doing and the data they use is hidden in the code, making it difficult to find or change. Hiding the intent of a test in code makes it hard to understand what the test is trying to do.

It also means that non-technical team members cannot easily contribute to testing, and technical people spend more time on automated test code. Refactoring of tests is inevitable; if their intent is separated from the means of implementation, you can focus on refactoring the implementation of a test rather than what the test is supposed to do.

Acceptance tests should be carried out twice, with the primary run carried out within 3 months.

Acceptance tests are carried out by testing organizations and departments that belong to the system of state testing bodies, or by other organizations and enterprises engaged by the parent organization to conduct acceptance tests in the prescribed manner, with the participation of the manufacturer and the developer.

Separation of intent, implementation and data

Below is a diagram illustrating the idea of separating intent, implementation and data. The environment is also included, as it is important to understand which environment the tests will be run in and, if necessary, to change the data and rerun the test. The intent can be written by non-technical team members in a language that is independent of the implementation. This matters because it means the implementation can be refactored without changing the intent of the test. The reverse is also true: the intent of a test can be changed without affecting the implementation, provided an implementation exists that satisfies all the wording of the intent.
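A minimal sketch of this separation, using hypothetical step names rather than any particular tool: the intent is plain wording a non-technical team member could write, and each phrase is mapped to an implementation function that can be refactored or replaced without touching the intent.

# Intent: readable by non-technical team members, independent of the implementation.
INTENT = [
    "log in as a gold client",
    "place a standard order",
    "check that a confirmation is shown",
]

# Implementation: each phrase is bound to code that can be rewritten freely.
def log_in_as_gold_client(ctx):
    ctx["user"] = "gold-client"          # a real implementation would drive the UI or an API

def place_a_standard_order(ctx):
    ctx["order"] = {"status": "placed"}

def check_that_a_confirmation_is_shown(ctx):
    assert ctx["order"]["status"] == "placed"

STEPS = {
    "log in as a gold client": log_in_as_gold_client,
    "place a standard order": place_a_standard_order,
    "check that a confirmation is shown": check_that_a_confirmation_is_shown,
}

def run(intent):
    ctx = {}
    for phrase in intent:
        STEPS[phrase](ctx)               # the implementation is looked up, never inlined in the intent

if __name__ == "__main__":
    run(INTENT)
    print("intent satisfied")

If the step functions and the STEPS mapping are thrown away, the INTENT list survives unchanged, which is the point made below.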

Acceptance tests are carried out to determine the feasibility and expediency of putting products into production. Experimental or prototype samples (batches) of products certified by the commission are subjected to the tests, using certified test equipment.

Acceptance tests are carried out to determine the feasibility and expediency of putting products into production. Acceptance tests of single-piece production products are carried out to decide whether it is expedient to transfer these products into operation. Experimental or prototype samples (batches) of products are subjected to the tests. When a family, range or size range of products is put into production, a typical representative is selected on the condition that the results of its tests can be extended to the entire set of products. Acceptance tests are carried out by certified test departments using certified test equipment. Products assigned to the parent testing organizations are checked by those organizations.

If, in the worst case, you decide to throw out the entire implementation and start over, you can still keep the test intent and therefore not lose the most important part of the automated tests: what the tests try to do. The data must also be separated into intent and implementation; by doing this, the data can be specified in the test as an abstraction while its implementation stays out of the test. For example, a test might refer to a "Gold Client"; how such a client is identified is not part of the test but part of the data layer.

Data-layer creation and data abstraction are not new techniques; they are typically used for application code and should be applied to automated tests too, as the benefits still apply. In the test case, the data layer is called to retrieve data from the data source for the appropriate client type.
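A minimal sketch of this data-layer idea, with hypothetical names: the test refers only to the abstract client type, and how "Gold Client" records are found is confined to the data layer, so the data source can change without the test changing.

GOLD = "gold"

def get_clients(client_type):
    """Data layer: resolve an abstract client type into concrete records.
    A real implementation would query a database or a test-data service."""
    sample_source = {
        GOLD: [{"id": 101, "discount": 0.15}, {"id": 102, "discount": 0.10}],
    }
    return sample_source[client_type]

def test_gold_clients_get_a_discount():
    # The test states its intent and iterates over every entry the data
    # layer returns for the requested client type.
    for client in get_clients(GOLD):
        assert client["discount"] > 0, f"client {client['id']} has no discount"

if __name__ == "__main__":
    test_gold_clients_get_a_discount()
    print("ok")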

Acceptance tests must be carried out on prototypes of electric machines, so the scope of these tests is quite large. For direct-current machines, for example, the acceptance test program contains 17 items, for synchronous machines 22 items, and for asynchronous motors 16 items.

Acceptance tests cannot detect the described software corruption, since it does not affect the design in any way, and the simulator program can detect it only in rare cases, since the range of malicious actions is too large for it to be advisable to implement all possible dangerous situations in the simulator.

The test case then iterates over all the entries in the list and performs a test on each customer entry.

System tests correspond to one of the levels of software testing and bring together various kinds of tests, such as recovery, security, resilience, performance, connectivity, volume, stress, data availability, ease of use, operation, environment, storage, configuration, installation and documentation testing. Each has a different purpose, but they share a common aim: to exercise the systemic vision of the project.

Acceptance testing is another level that complements the software test levels. These tests are fundamental, since they are the ones that allow us to obtain a product that meets the required standards and at the same time satisfies the users' requirements as they stated them from the very beginning. This makes them a vital part of the testing process, since, in terms of the product, the number of bugs and the severity of those bugs, it is the step in development that is usually most prone to defects.

Figure 1: Verification of iterative systems.

System tests are not a process for verifying the functions of the complete system or program, as that would be redundant with the functional testing process. System tests have a specific purpose: to compare a system or program with its original objectives. This purpose has two implications. First, system testing is not limited to systems: if the product is a program, system testing is an attempt to demonstrate how the program fails to meet its objectives or requirements. Second, system tests are, by definition, impossible unless there are written, measurable objectives for the product.

System testing is the phase in which it is ensured that each component or module interacts with the other components or modules as intended. System tests aim to exercise the system in depth, checking the integration of the information system as a whole and verifying the correct functioning of the interfaces between the various subsystems that make it up and with the other information systems with which it communicates.

The classic system test problem is finger-pointing. This happens when a bug is discovered and the developer of each system element blames the others. Instead of falling into this absurdity, the software engineer should anticipate possible interface problems and:

- design error-handling paths that check all information coming from other elements of the system;
- apply a series of tests that simulate bad data or other possible mistakes at the software interface;
- record the test results as evidence in case of finger-pointing;
- participate in the planning and development of system testing to ensure that the software is properly tested.

In fact, system testing covers a series of different tests, the main purpose of which is to fully exercise the computer-based system. Although each test has a different purpose, they all work to make sure that all elements of the system have been properly integrated and perform the appropriate functions.

2 System test review. When testing is to be carried out, it is necessary to maintain a systemic, that is, an integral approach to software development. Applying these concepts to software testing yields a set of principles that will serve as the basis for the tests: you must make sure you know exactly the objectives of the software being tested, as well as its success criteria.
These elements are found in the documents obtained during the requirements gathering phase, as well as in the software specifications. This information will be needed to prepare the test plan and will form the basis for starting test case development. The inputs and outputs of the approved system must be defined; this is necessary for preparing test cases as well as for creating test procedures, especially those oriented to test cases that demonstrate fulfilment of the objectives. The main system on which the software under test runs must also be considered. It is usually an organizational environment consisting of hardware, software and people. All of these elements have a great influence on the system, and they especially help in preparing test cases for undesirable situations associated with inadequate data, the absence of necessary elements and the occurrence of exceptions.

3 System testing overview. The system testing process consists of two phases that can be quite separate in time: test preparation and test application. The former is closely related to the requirements, so it happens early in the project; the latter requires the complete system, or at least an integrated partial product to which the tests can be applied, so it happens in the advanced stages of the project. The exact placement of these phases depends on the chosen life-cycle model. To perform the second and third activities, a requirements document is required. The first phase of testing provides feedback to requirements analysis, identifying gaps, ambiguities and other issues; it also provides valuable input to the design and implementation of the system if one is being developed. The test application phase requires a test plan and an executable version of the system. In it, the prepared test cases are applied, the results are analysed and any defects are identified. This second phase provides feedback to the implementation and the design, showing possible defects that need to be fixed; it also provides information that will be useful for releasing the system, adopting it, evaluating its reliability and maintaining it. Figure 1 shows the system testing process and its relationship with other processes.

The second point is important because the system test is sometimes confused with the interface test. The first checks the interaction of all the parts, while the second analyses the interface elements and, possibly, the handling of the related events. However, tools that help test the interface can be used to run system tests. Several questions arise: how many test cases will be enough? How can the fewest possible be generated? What values are appropriate?

1 Test plan. A test plan is a very important document during software testing. It explains the objectives and approach to testing, the work plan, the operational procedures, the necessary tools and the responsibilities.

The requirements document should contain a list of the functions to be performed by the software, describing and prioritizing them; it should also include non-functional requirements, which may cover organizational, operational and other aspects. A well-prepared requirements document should provide, for each requirement, a way to verify that it is met. For functions this would be a description, and for non-functional requirements it could be a very precise specification such as a response time. For now, we will focus on the functional requirements, leaving the rest for a later section.
System tests are important for the following reasons:

- it is the system as a whole that is tested within the system development life cycle;
- the system is checked for compliance with its functional and technical requirements;
- the system is tested in an environment that is as close as possible to the production environment;
- system tests make it possible to test, verify and validate both the business requirements and the application architecture.

6 Types of system tests. Functional tests:

- Integration testing, in which the test team has access to the source code of the system. When a problem is discovered, the integration team tries to find its source and determine which components need to be debugged. Integration testing is mainly concerned with finding defects in the system.
- Release (delivery) testing, in which a version of the system that could be delivered to the user is tested. Here the test team is concerned with verifying that the system meets its requirements and is sufficiently reliable. Release tests are usually black-box tests, in which the test team is simply concerned with whether the system functions correctly or not.

The components to be integrated may be commercial off-the-shelf components, reusable components adapted to a particular system, or newly developed components; many large systems are likely to use all three types. The integration test verifies that these components actually work together, are called correctly, and pass the correct data at the right time across their interfaces.

System integration involves identifying groups of components that provide some system functionality and integrating them by adding code that makes them work together. Sometimes the skeleton of the whole system is developed first and components are added to it (top-down integration); alternatively, infrastructure components are integrated first and functional components added afterwards (bottom-up integration). In practice, for many systems the integration strategy is a combination of both, adding infrastructure components and functional components in turn. Both integration approaches require additional code to simulate the missing components and allow the system to run.

The main difficulty encountered during integration testing is locating the bugs. There are complex interactions between the components of a system, and when an anomalous output is detected it can be difficult to determine where the error occurred. To make it easier to isolate bugs, an incremental approach to system integration and testing should always be used.

The main purpose of release testing is to increase the supplier's confidence that the system meets its requirements. If it does, it can be delivered as a product or handed over to the customer. To demonstrate that the system meets its requirements, it must be shown that it delivers the specified functionality, performance and reliability and that it does not fail under normal use. Release tests are usually a black-box testing process in which the tests are derived from the system specification. The system is viewed as a black box whose behaviour can only be determined by examining its inputs and the corresponding outputs. Another name for this is functional testing, because the tester is interested only in the functionality and not in the implementation of the software. The following figure illustrates the model of the system assumed in black-box testing: the tester presents inputs to the component or system and examines the corresponding outputs.
In some cases the system must be fault-tolerant; that is, processing failures must not lead to failure of the system as a whole. In other cases a system failure must be corrected within a certain period of time or serious economic damage will occur. The recovery test is a system test that forces the software to fail in a number of ways and checks that recovery is performed correctly. If recovery is automatic, the reinitialization, the backup mechanisms, data recovery and restart must each be checked for correctness. If recovery requires human intervention, the mean time to repair must be evaluated to determine whether it is within acceptable limits.

The goal of stress testing is to find the point at which the system starts to operate below the specified requirements. It should not be confused with volume testing: stress is the maximum amount of data or activity in a short time. An analogy would be evaluating a typist: a volume test determines whether the typist can cope with the draft of a large report, while a stress test determines whether the typist can type at a rate of 50 words per minute.

Intrusion covers a wide range of activities, for example a hacker who tries to break in just for fun. The security test verifies that the protection mechanisms built into the system actually protect it from improper intrusion. "The system's security must be tested so that it is immune not only to frontal attack, but also to attacks from the flanks or the rear." During the security test, the person applying it plays the role of the one who wants to break in, and anything goes: they may try to obtain passwords by external means, attack the system with special software designed to break down any defences, saturate the system and thereby deny service to others, deliberately cause errors in the system in order to gain access during recovery, or browse unprotected data in the hope of finding a key to enter the system. Given enough time and resources, a good security test will eventually penetrate the system. The role of the system designer is to make the cost of penetration greater than the value of the information that would be obtained. The analysis of human factors, however, remains a highly subjective matter.

In storage testing, test cases are designed to try to show that the storage objectives are not met. Often the number of possible configurations is too large to test each one, but where possible the program should be tested with each type of hardware device and with the minimum and the maximum configuration. If the program itself can be configured to omit components, or can run on several computers, each configuration must be tested.

Testing the installation procedures is an important part of the system testing process, especially for the automated installation included in the software package. A faulty installer can prevent the user from ever having a successful experience with the system, and the user's first experience is precisely when he or she installs the application.

One way to test the documentation is to use it to devise the system test cases described earlier. That is, once a stress case has been devised, the documentation is used as the guide for writing the actual test case. In addition, the user documentation should be reviewed for accuracy and clarity, and any examples shown in the documentation should be turned into test cases and fed to the program.
... from an operational point of view, after adoption of the system in the real environment and on the basis of compliance with the specified non-functional requirements.

Stress (resistance) testing confronts programs with abnormal situations. In essence, the person carrying out the stress test asks: how much can the system take before it fails? A stress test runs the system in a way that demands resources in abnormal quantity, frequency or volume. For example:

- special tests are designed that generate ten interrupts per second, when the average rate is one or two;
- the input data rate is increased by an order of magnitude to see how the input functions respond;
- test cases are run that require the maximum of memory or other resources;
- test cases are designed to provoke memory-management problems;
- test cases are created that cause excessive disk look-ups.

The figure shows an example of a stress test.

Performance testing is applied at all stages of the testing process, even at the unit level: the performance of an individual module can be evaluated as it is tested. However, only when all the elements of the system are fully integrated can the true performance of the system be measured. Performance tests are often combined with stress tests and usually require software and hardware instrumentation; that is, resource usage often has to be measured accurately. With external instrumentation it is possible to monitor execution intervals, log events and sample the machine state at regular intervals.

These tests are performed so that the client can confirm that the system is valid for them. Detailed planning of these tests should be carried out early in development, with the aim of using their results as an indication of validity: if the documented tests are executed to the customer's satisfaction, the product is considered correct and therefore fit to be put into production.

6 Acceptance tests. Acceptance tests are basically functional tests of the complete system, and they aim to check whether the established requirements are met. Their execution is optional for the client, and if they are not explicitly specified they are included in the system tests. Acceptance tests are usually the responsibility of the user or customer, although anyone involved in the business can run them. Acceptance testing requires a test environment that is representative of the production environment. This phase or level takes as its starting point the product acceptance baseline already established in the certification environment.

Figure: Acceptance control.

1 The current situation in acceptance testing. Among the tests that software must undergo, one of the most important is acceptance testing. These are tests developed by the development team itself, based on the functional requirements specified in the analysis phase so as to cover the entire spectrum, and they are executed by end users: not all of them, but a few whose results are significant enough to give validity and conformity to the product delivered to them on the basis of what was originally agreed. Depending on the complexity of the system under test, and on whether or not it is divided into modules, these tests are executed differently. If the application is divided into modules, and these are regarded as subsystems complex enough to be handled separately, separate acceptance test sessions have to be conducted.
2 Purpose of acceptance testing. Acceptance testing aims to obtain the final customer's acceptance before the product is delivered and goes into production. When an organization has run its system tests and fixed most of the defects, the system is delivered to the user or customer for approval. The purpose of acceptance testing is to verify that the system achieves the expected performance and to allow the user of the system to determine its acceptance in terms of functionality and performance. Acceptance tests are defined by the user of the system and prepared by the development team, although the final execution and approval are up to the user. Validation of the system is achieved by running black-box tests that demonstrate compliance and are included in a test plan that defines the validations to be performed and their associated test cases. This plan is designed to ensure that all the functional requirements specified by the user are met, as well as the non-functional requirements related to performance, security of access to the system, data and processes, and the various system resources.

3 Generation of acceptance tests. The system must be accepted by the user. For this reason, based on the structured specifications of the system, the analyst creates a set of test cases that must be passed satisfactorily. Since acceptance tests can be developed in parallel with the design and implementation activities, it is normal for the analyst to start them as soon as the structured analysis activity is completed.

4 Acceptance testing strategies. If the system is designed for the mass market, it is not practical to test it with individual users or customers, and in some cases it is not even possible. In these cases, feedback is needed before the product is put on sale. Such systems often go through two stages of acceptance testing: alpha and beta testing.

Alpha and beta testing. When custom software is built for a client, a series of acceptance tests is carried out so that the client can validate all the requirements. Alpha tests are performed by the client at the development site. The software is used naturally, with the developer looking over the user's shoulder and recording errors and usage problems. Alpha tests are conducted in a controlled environment, and the client always has an expert on hand to help use the system; the developer keeps track of the bugs and usage problems found. Beta tests are conducted after the alpha tests and take place in the client's environment. In this case the client is left alone with the product and tries to find bugs, which are reported to the developer. They are performed by the end users of the software at the customer's workstations. Unlike alpha testing, the developer is usually not present, so a beta test is a live application of the software in an environment that the developer cannot control. The client logs all the problems that occur during beta testing and reports them to the developers at regular intervals.

5 Inputs, outputs, tasks and roles of acceptance testing. Input: the requirements specification. Tasks:

- Prepare the test environment; a dedicated test environment is recommended for this type of testing.
- Install the system in the test environment.
- Identify the tests to be performed; establish any dependencies between tests and, based on them, the order or sequence in which the tests will be executed.
- Obtain and record the results.
- Fix the defects and errors found.
- Repeat the cycle until all the tests pass.
- Prepare the acceptance test report, summarizing the correct execution and the results of all the tests submitted.
- Create the production baseline.
- Formally close the activity.

Outputs: the test results and the acceptance report for the accepted product. Role: the project manager.

The focus on acceptance testing reflects the view that involving the user at this stage, and early in test planning and design, improves the process and brings improvements in many quantitative and desirable aspects, such as: better quality of the delivered software; cost minimization; increased confidence in the project results; greater customer satisfaction, since software with fewer errors is used; and a more efficient development process.

7 Criteria for acceptance testing. Software acceptance is achieved through a series of tests that demonstrate compliance with the requirements. The test plan describes the kinds of tests to be applied and the test procedure defines the specific test cases; both are designed to ensure that all functional requirements are satisfied, all behavioural characteristics are achieved, all performance requirements are met, the documentation is correct, and the ease-of-use and other specified requirements are fulfilled.

8 Tools for acceptance tests. Such tools let customers, testers and programmers state what the software is supposed to do and automatically compare that with what it actually does, and they make it possible to write tests that are easy to read and easy to maintain.

This activity is known as the final or acceptance test. It requires as inputs the acceptance test data and the integrated system produced in the previous activity. The test is conducted by some member or department of the user organization, or even by an independent quality-control department. It is important to carry out quality-control activities in each of the preceding analysis, design and implementation activities to ensure that they have been performed to the appropriate level of quality; this ensures that the analyst produces quality specifications, the designer produces quality designs, and the programmer produces quality code. In computer science, an implementation is a realization of a technical specification or algorithm as a program, software component or other computer system; many implementations are produced according to a specification or standard.

We summarize the authors' earlier work on obtaining test objectives, which are the starting point for developing automated tests. In the context of system testing from use cases, a test objective can be expressed as a use-case scenario. This scenario consists of a sequence of steps with no alternatives, a set of test values, and the preconditions and postconditions associated with the scenario. To generate the test scenarios, an activity diagram is first built from the main sequence and the error and alternative sequences of the use case. In the activity diagram, the actions performed by the system and the actions performed by the actors are stereotyped. Path analysis is then performed, and each path through the activity diagram becomes a use-case scenario and therefore a potential test objective.

2 Implementation of system tests. System testing architecture. The architecture for executing and automatically validating system tests is shown in Figure 7.
This architecture is similar to the one needed to automate other types of tests, such as unit tests. The main difference is that in a unit test the test itself calls the code under test, whereas a system test requires an intermediary that knows how to manipulate the system's external interface.

Implementation of test cases. A test case is the realization of a test objective. The general behaviour of a test case is listed in the General Test Case Behavior table. Each use case is associated with a test suite, and this suite contains the test cases for all the scenarios of that use case. As can be seen from the test objectives, for each step it must be specified whether it is performed by an actor or by the system under test. This information is very relevant when coding the test-suite methods: every action performed by the actor is translated in the test code into an interaction between the test case and the system.

The operational variable and category-partition methodology is applied to determine the required test values. Three different types of operational variables have been identified, and each type is implemented differently in the test cases. The first type consists of operational variables that represent information passed to the system by an external actor; for each variable of this type a new class is defined whose objects hold the different test values for that variable. An example of this type of operational variable is shown in the case study. The second type consists of operational variables that represent a choice among several options available to the external actor; such a choice is implemented directly as part of the code that implements the interaction between the actor and the system. The third type consists of operational variables that describe the state of the system. To implement the test case's setup method, the necessary code is written to set the operational variables that describe the system state to the correct values, or to verify that the values match. Similarly, the teardown method must restore these values to their original state and, if necessary, remove the information that the test case inserted into the system during its execution. Several examples of operational variables of this type are shown in the case study.

For the case study, the first step is to apply the above to obtain the set of test objectives from the use case. The characteristics of the test harness to be used are then determined. Finally, what was described in the previous sections is applied to implement the test case from the test objective. The artifacts of the system under test were written in English, since Spanish is not supported by the tools used. The use case in Table 2 describes the insertion of a new link in the system; in addition, a storage requirement describing the information held for each link is also shown. From the use case, a set of scenarios was automatically generated; these scenarios are the test objectives for that use case. Since the use case contains unbounded loops with a potentially infinite number of repetitions, the coverage criterion chosen for obtaining the paths is the 0-1 criterion, which consists of obtaining all the possible paths that repeat each loop either zero times or once.
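As an illustration of the third type of operational variable, the sketch below (a hypothetical system and names, not the case-study tooling) shows a setup method that forces the system state a scenario needs and a teardown method that restores the original state and removes whatever the test inserted.

import unittest

class FakeLinkSystem:
    """Stand-in for the system under test: it stores links per category."""
    def __init__(self):
        self.links = {"tools": ["existing-link"]}

    def add_link(self, category, url):
        self.links.setdefault(category, []).append(url)

SYSTEM = FakeLinkSystem()

class AddLinkScenario(unittest.TestCase):
    def setUp(self):
        # Operational variable describing system state: the target category
        # must exist.  Remember the original state so it can be restored.
        self.original_links = {k: list(v) for k, v in SYSTEM.links.items()}
        SYSTEM.links.setdefault("tools", [])

    def tearDown(self):
        # Restore the original set of links and drop anything the test inserted.
        SYSTEM.links = self.original_links

    def test_insert_new_link(self):
        SYSTEM.add_link("tools", "https://example.org")
        self.assertIn("https://example.org", SYSTEM.links["tools"])

if __name__ == "__main__":
    unittest.main()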
All the scenarios obtained with this criterion, translated into Spanish, are listed in the table. For this case study, Scenario 09, detailed in Table 5, was chosen for implementation. Table: storage requirement for links. The category-partition method can also be applied; the partitions for each of these variables are listed in the table. Table: operational variables defined for the use case. Table: categories for the identified variables. As described in Figure 7, the test harness is designed to simulate user behaviour and to offer a set of assertions for evaluating the result. This translation is currently done manually and is shown in the table. Figure: implementation of the test case, i.e. translation into executable code of the steps performed by the user in the main scenario. The setup method makes sure that the required categories exist and that nothing would cause an error when restoring the categories or inserting a new link; the teardown method restores the original set of links stored in the system.

2 Implementation of acceptance tests. Acceptance tests only work with the customer's involvement, or at least with a proxy for the customer who can define the criteria. Without the customer driving the acceptance criteria, it becomes difficult to check whether you are building the right software. The client, together with all the members of the development team, must come together to define the system as a series of scenarios that describe what the system should do and how it should do it. By creating tests with clear requirements and acceptance criteria, the software is more likely to meet customer expectations. On its own, however, this means that someone has to check manually that the requirements are met and that the application works as expected. This is where automated acceptance tests come in: instead of the requirements living in a static document, they are defined as examples and scenarios, kept in source control together with the deployment artifacts, and can be run at any time to check whether the requirements are met and the system works correctly. You can use the same approach to writing the tests, but instead of typing them into test-case-management software or a spreadsheet, you write them directly in code.

1 Automated acceptance testing. The first step in implementing any new functionality is therefore to describe your expectations with a test. Teams that do not work this way find themselves struggling to keep the process under control over time, especially as the test suite grows and its flexibility starts to deteriorate. The test-driven approach is based on the idea that tests should guide the development of a software product. In industrial software products, when requirements engineering methods are used, they are mostly supported by natural language, which brings the well-known inconvenience of ambiguity. However, the need for validation can outweigh the benefits that a more formal and rigorous requirements specification can offer: the client must be able to read and understand the requirements in order to agree to them. The most popular mechanisms for defining requirements are use cases and user stories. Defining the requirements is the key to involving the client. Requirements are the goals to be achieved, that is, what the client expects from the software product.

... roles related to requirements specification, validation and acceptance testing. The requirement becomes a container of acceptance tests, and it is these tests that take centre stage as the specification of each requirement. Consider the requirement "Withdraw money" in the context of an ATM. A typical descriptive specification might be: the customer must be able to withdraw cash from the ATM in selected amounts; a receipt is always produced, unless the ATM has run out of paper; a Preferred Client may withdraw more money than is in the account, but must be warned that a percentage will be charged; the client must be able to cancel at any time before confirming the withdrawal; the amounts must be ones the ATM can dispense with the notes it holds at the time, and other amounts must not be accepted.

Figure: specification alternatives. Figure 10 shows some specification alternatives for this requirement; the icons reflect the suitability of each. It may be interesting to develop a sequence diagram to define each scenario in which a requirement is fulfilled; however, in general this is not suitable because of the large number of diagrams generated. It is more useful to identify the scenarios than to illustrate each one in a diagram. A narrative description is not ruled out, at least for requirements with a short definition that focuses on defining the concepts involved. However, the use-case model is not suitable for holding the detailed requirements structure of a software product in a long-term maintenance setting, because an average software product can have thousands of requirements; more appropriate mechanisms are needed to visualize and manage that many requirements. Templates are one of the most commonly used alternatives for specifying use cases. Templates are elegant and give the specification a sense of order, but they are generally counter-productive because they tend to impose a single level of detail on every requirement: in very simple cases they include things that are obvious or irrelevant just to fill in all the sections of the template, and when a requirement includes several scenarios, trying to squeeze all the scenarios into one template usually produces confusing specifications. In this way, the requirements act as containers for the acceptance tests. Depending on the requirement, other additional forms of specification may be useful: for example, an activity diagram if the behaviour associated with a requirement has an algorithmic character, or a state diagram if the behaviour involves enabling or disabling actions according to the state of the system. An essential premise is pragmatism with respect to the specification; this does not rule out combining alternative forms of specification, but the primary criterion should be that each one is cost-effective and contributes to the maintenance of the specification. On the other hand, in terms of maintenance effort, and especially of consistency, it is important not to duplicate specifications across different means of representation. A directed graph is an adequate representation for refining requirements by levels. Such a graph makes it possible to visualize the decomposition relationships and dependencies between requirements: each node is a functional or non-functional requirement, and the arcs between nodes establish parent-child relationships or "node affects node" dependency relationships.
Thus, in the above example, the requirement "Withdraw money" can be a node of the requirements structure, with scenarios such as: withdrawal of the amount entered by the customer; no banknotes available; no paper for the receipt; the communication time with the central system is exceeded; notification of the ATM's internal operations; timeout before the operation is started.

Each acceptance test has several parts. It may be related to a functional or a non-functional requirement. The condition is optional and is used to set preconditions before the test steps are applied. The steps are the actor's interactions with the system; when there are several actions they can be placed in a numbered list. The expected result is the effect of the actor's interactions, and each action can produce one or more results. It is important that, where messages to the user are involved, their text is included as part of the expected output, so that the programmer already has this wording agreed with the client. This, as we point out below, also establishes the dependencies between requirements. For example:

Condition: the user must be a normal client.
Steps: attempt a withdrawal as a normal customer and request an amount in excess of the balance.
Expected result: the message "The requested quantity exceeds your current balance, re-enter the quantity" is displayed and the system returns to the amount entry window.

Acceptance testing helps confirm that you are building the application the client wants, while automating these scenarios lets you test the application continually throughout development and use them as part of your regression test suite to ensure that future changes do not break current requirements. However, having the client involved in writing the tests, especially automated tests, raises a number of potential problems. Clients are generally non-technical and tend to keep their distance from software development itself. The client can provide data and examples, while testers or developers quickly turn these into scripts and executable specifications.

User interface acceptance tests. In the examples, the acceptance tests focused on the business logic and domain objects to see whether the logic worked correctly. But what about how the user interacts with the application? Acceptance tests should also check the correctness of the logic from the user's point of view, and the user's point of view is the user interface. If the application is well decoupled, with a clean separation of the logic from the UI code, the tests are easier to implement, and tests written at that level do not have to change when the user interface does. Although testing should focus mainly on the logic, that does not mean you should not run acceptance tests against the full user interface. I like to have a set of smoke tests aimed at a basic user-interface "happy path". They focus on the parts of the application that users are most likely to use, so as to get the most coverage from the least amount of testing. If you try to cover every possible path and use of the UI, then whenever the UI changes you will have to change all the tests. For example, if you test the user interface of an e-commerce site, the happy path would be to select an item, add it to the cart, check out and see a purchase confirmation. If this scenario fails, you really want to find out as soon as possible. For certain applications, depending on their complexity and lifespan, you may want more acceptance tests at the UI level to gain more confidence in that layer.
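As a minimal sketch of automating the scenario above, the following test drives a stand-in for the ATM (the FakeATM class and its methods are assumptions made for illustration, not part of any real system): a normal client requests more than the balance, and the test asserts the agreed message and the return to the amount entry window.

class FakeATM:
    """Hypothetical stand-in for the system under test."""
    def __init__(self, balance, client_type="normal"):
        self.balance = balance
        self.client_type = client_type
        self.window = "enter_amount"
        self.message = ""

    def request_withdrawal(self, amount):
        if self.client_type == "normal" and amount > self.balance:
            # Behaviour agreed with the client: show the message and go back
            # to the amount entry window.
            self.message = ("The requested quantity exceeds your current "
                            "balance, re-enter the quantity")
            self.window = "enter_amount"
        else:
            self.balance -= amount
            self.window = "confirm"

def test_normal_client_cannot_exceed_balance():
    atm = FakeATM(balance=100, client_type="normal")  # condition: normal client
    atm.request_withdrawal(250)                       # step: request amount over balance
    assert "exceeds your current balance" in atm.message
    assert atm.window == "enter_amount"               # expected result

if __name__ == "__main__":
    test_normal_client_cannot_exceed_balance()
    print("acceptance scenario passed")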
However, successful UI testing is a difficult subject in its own right and there is no room to cover it here.

3 Intelligent tests. Once the story and its scenarios have been expressed in a clear and understandable format, the next step is to automate them. This allows them to be run during development to track progress and to catch regression bugs.

Conclusions and recommendations. On the other hand, we have system tests, which are responsible for evaluating the operation of the whole process in order to detect errors that may occur; for this, a testing strategy must be developed and code development should be kept separate from interface development, which makes the system tests easier to implement. Both kinds of software testing must be carried out rigorously, against defined standards, and in coordination with the stakeholders involved in the development of the system.

Acceptance tests are carried out by departmental, interdepartmental or state commissions after the successful completion of the preliminary tests. In addition to the scope of the preliminary tests, during acceptance tests the oil consumption or the lubrication of the cylinders, seals, bearings and crank mechanism is determined.

Acceptance tests are carried out according to the most detailed programs established by standards or specifications for the given type of machine. Their purpose is to check that the manufactured machines comply with all technical requirements. Prototypes, the first industrial samples of machines of the given type produced by the enterprise, are subjected to acceptance tests. The number of samples to be taken for acceptance testing is established in the standards or specifications for the given type of machine. All subsequent machines must be produced by the enterprise without changes to the design, technology or materials used for manufacture.

Acceptance tests are carried out in order to determine the actual performance of the machine and to verify the correct operation of its components (gears, bearings, brakes, etc.). Acceptance tests are carried out on a test site under conditions close to operational ones. The test results are recorded in the machine passport. If defects are found during the test, they are recorded in the defect report and then eliminated.

Acceptance tests are carried out to verify the performance guaranteed by the equipment supplier. The program of these tests usually provides for a series of balance experiments of increased accuracy under conditions that are subject to verification in accordance with the supplier's warranty data.

Acceptance tests are official tests conducted in the presence of a commission, based on whose results a conclusion is drawn about the advisability of starting mass production or, for pumps of individual production, of commissioning. At the same time, the parametric indicators and characteristics of the pump obtained during testing are determined and included in the documentation. Subsequently, quality control of serial pumps is carried out against these indicators and characteristics, taking tolerances into account.

Acceptance tests establish the compliance of the actual operational characteristics of the machine with the specifications and are carried out on special stands in conditions that are as close as possible to operational ones.

Acceptance testing of machine tools, in accordance with the general specifications for their manufacture and acceptance, is carried out at idle to check the operation of the mechanisms and under load to determine productivity, accuracy and cleanliness of processing. During the test, all switching, engagement and control transmissions are checked to verify the correctness of their action, interlocking, reliability of fixation, and the absence of spontaneous displacements, jamming, cranking, etc.

Acceptance testing is one of the milestones in building a new car. Its purpose is: a comprehensive check of the operational properties of prototypes in a variety of road and climatic conditions in accordance with the terms of reference for the development (including hot and cold climatic regions); determination of the actual values of all the most important parameters; assessment of the reliability of the car as a whole, as well as of its main components, assemblies and systems; and establishing the degree to which the created car meets its intended purpose and determining whether it is expedient to put the new model into production. On average, two to four samples are submitted for acceptance tests. The tests include a significant amount of laboratory and laboratory-road work to determine the technical and operational indicators, and mileage runs of the vehicles in all typical conditions of their intended operation.

STATE STANDARD OF THE USSR

Set of standards for automated systems

This standard applies to automated systems (AS) used in various types of activities (research, design, management, etc.), including combinations of them, created in organizations, associations and enterprises (hereinafter, organizations).

The standard establishes the types of AS tests and the general requirements for carrying them out.

The terms used in this standard and their definitions are in accordance with GOST 34.003.

The requirements of this standard, except for clauses 2.2.4, 4.4 and 4.5, are mandatory; the requirements of clauses 2.2.4, 4.4 and 4.5 are recommended.

1. General Provisions.

1.1. AS tests are carried out at the "Commissioning" stage in accordance with GOST 34.601 in order to verify that the created AS complies with the requirements of the terms of reference (ToR).

1.2. AS testing is the process of checking the performance of the specified functions of the system, determining and verifying that the quantitative and (or) qualitative characteristics of the system comply with the ToR requirements, and identifying and eliminating shortcomings in the system's operation and in the developed documentation.

1.3. The following main types of tests are established for an AS: 1) preliminary; 2) trial operation; 3) acceptance.

Notes:

1. It is allowed to additionally conduct other types of tests of the AS and its parts.

2. It is allowed to classify acceptance tests depending on the status of the acceptance committee (the composition of the members of the committee and the level of its approval).

3. The types of tests and the status of the acceptance committee are established in the contract and (or) TOR.

1.4. Depending on the interconnections of the objects tested in the AS, tests can be autonomous or complex.

Autonomous tests cover individual parts of the AS. They are carried out as the parts of the AS become ready for submission to trial operation.

Complex tests are carried out for groups of interconnected parts of the AS or for the AS as a whole.

1.5. To plan all types of tests, a "Program and test methods" document is developed. The developer of the document is specified in the contract or the ToR.

1.6. The test program and methodology should establish the necessary and sufficient scope of tests to ensure the specified reliability of the results obtained.

1.7. The test program and methodology can be developed for the AS as a whole or for a part of the AS. Tests (test cases) may be included as an appendix.

1.8. Preliminary tests of the AS are carried out to determine its operability and to decide whether the AS can be accepted for trial operation.

1.9. Preliminary testing should be performed after the developer has debugged and tested the delivered software and hardware of the system and has submitted the relevant documents on their readiness for testing, and also after the AS personnel have familiarized themselves with the operational documentation.

1.10. Trial operation of the AS is carried out in order to determine the actual values of the quantitative and qualitative characteristics of the AS and the readiness of personnel to work under the conditions of AS operation, to determine the actual efficiency of the AS, and to correct the documentation (if necessary).

1.11. Acceptance tests of the AS are carried out to determine the compliance of the AS with the terms of reference, to assess the quality of trial operation, and to decide whether the AS can be accepted for permanent operation.

1.12. Acceptance tests of the AS should be preceded by its trial operation at the facility.

1.13. Depending on the kind of requirements imposed on the AS, the following are subjected to testing, verification or certification: 1) the set of software and hardware; 2) the personnel; 3) the operational documentation regulating the activities of personnel during operation of the AS; 4) the AS as a whole.

1.14. When testing the AS, the following are checked: 1) the quality of performance of the automatic functions by the set of software and hardware in all modes of AS operation, in accordance with the ToR for the creation of the AS; 2) the personnel's knowledge of the operational documentation and possession of the skills needed to perform the established functions in all modes of AS operation, in accordance with the ToR for the creation of the AS; 3) the completeness of the instructions contained in the operational documentation for the personnel to perform their functions in all modes of AS operation, in accordance with the ToR for the creation of the AS; 4) the quantitative and (or) qualitative characteristics of the performance of the automatic and automated functions of the AS in accordance with the ToR; 5) other properties of the AS that it must possess according to the ToR.

1.15. AS tests should be carried out at the customer's site. By agreement between the customer and the developer, preliminary testing and acceptance of the AS software may be carried out on the developer's hardware, provided conditions are created for obtaining reliable test results.

1.16. Sequential testing and commissioning of parts of the AS for trial and then permanent operation is allowed, provided the order of putting the AS into operation established in the ToR is observed.

2. Preliminary tests.

2.1. Preliminary tests of the AS can be: 1) autonomous; 2) complex.

2.2. Autonomous tests

2.2.1. Autonomous tests of the AS should be carried out in accordance with the program and methodology of autonomous tests developed for each part of the AS.

2.2.2. The program of autonomous tests specifies: 1) a list of functions to be tested; 2) a description of the relationships of the test object with other parts of the AS; 3) the conditions, procedure and methods for conducting the tests and processing the results; 4) the acceptance criteria for the parts based on the test results.

A schedule of autonomous tests should be attached to the program of autonomous tests.

2.2.3. The tests (test cases) prepared and agreed at the autonomous testing stage should provide: 1) full verification of the functions and procedures according to the list agreed with the customer; 2) the required calculation accuracy established in the ToR; 3) verification of the main timing characteristics of the software operation (in cases where this is significant); 4) verification of the reliability and stability of operation of the software and hardware.

2.2.4. As initial information for the test, it is recommended to use a fragment of real information of the customer organization in an amount sufficient to ensure the necessary reliability of the tests.

2.2.5. The results of autonomous testing of parts of the AS should be recorded in test protocols. The protocol must contain a conclusion on the possibility (impossibility) of admitting the part of the AS to complex tests.

2.2.6. If the autonomous tests performed are found to be insufficient, or a violation of the requirements of the regulatory documents on the composition or content of the documentation is revealed, the given part of the AS may be returned for revision and a new test period is set.

2.3. Complex tests

2.3.1. Comprehensive testing of the AU is carried out by performing complex tests. The test results are reflected in the protocol. The work is completed with the execution of the acceptance certificate for trial operation.

2.3.2. The program of integrated testing of the NPP or parts of the NPP indicates: 1) a list of test objects; 2) the composition of the submitted documentation; 3) a description of the relationships being tested between the test items; 4) the sequence of tests of NPP parts; 5) the procedure and methods of testing, including the composition of software and equipment necessary for testing, including special stands and test sites.

2.3.3. To conduct complex tests, the following must be submitted: 1) the program of complex tests; 2) a conclusion on the autonomous testing of the relevant parts of the AS and on the elimination of the errors and remarks identified during autonomous testing; 3) the complex tests (test cases); 4) the software and hardware and the related operational documentation.

2.3.4. In complex tests, it is permissible to use, as initial data, information obtained from the autonomous tests of parts of the AS.

2.3.5. A complex test should: 1) be logically linked; 2) ensure verification of the performance of the functions of the AS parts in all modes of operation established in the ToR for the AS, including all connections between them; 3) provide a check of the system's reaction to incorrect information and to emergency situations.
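Item 3) of clause 2.3.5 can be read as requiring negative tests: the system must react in a controlled way to malformed or implausible input. Below is a hedged sketch of such a check; the `process_reading` handler, the accepted value range and the choice of a raised error as the "correct reaction" are assumptions for illustration only.

```python
def process_reading(raw: str) -> float:
    """Hypothetical AS input handler: it must reject malformed data
    instead of crashing or silently accepting it."""
    value = float(raw)  # raises ValueError for malformed input
    if not (-50.0 <= value <= 150.0):
        raise ValueError(f"reading {value} outside the plausible range")
    return value

def test_reaction_to_incorrect_information():
    # valid input is accepted
    assert process_reading("21.5") == 21.5
    # malformed and out-of-range input must raise a controlled error,
    # which this complex test treats as the correct system reaction
    for bad in ("not-a-number", "9999"):
        try:
            process_reading(bad)
        except ValueError:
            pass
        else:
            raise AssertionError(f"incorrect input {bad!r} was not rejected")

if __name__ == "__main__":
    test_reaction_to_incorrect_information()
    print("reaction to incorrect information verified")
```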

2.3.6. The protocol of complex tests should contain a conclusion on the possibility (or impossibility) of accepting the AS into trial operation, as well as a list of the necessary improvements and the recommended deadlines for their completion.

After the shortcomings have been eliminated, repeated complex tests are carried out to the required extent.

3. Trial operation.

3.1. Trial operation is carried out in accordance with the program, which indicates: 1) the conditions and procedure for the functioning of the parts of the AS and of the AS as a whole; 2) the duration of trial operation, sufficient to verify the correct functioning of the AS when performing each function of the system and the readiness of personnel to work under the operating conditions of the AS; 3) the procedure for eliminating deficiencies identified during trial operation.

3.2. During the trial operation of the AS, a working log is kept, in which information is entered on the duration of AS operation, failures, malfunctions, emergency situations, changes in the parameters of the automation object, ongoing corrections to the documentation and software, and adjustment of the hardware. Entries are recorded in the log with the date and the responsible person. The log may also include comments from personnel on the convenience of operating the AS.
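A minimal sketch of how one working-log entry from clause 3.2 might be recorded in structured form so that operating time and failures can be totalled at the end of trial operation; the class, field names and example values are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class WorkingLogEntry:
    """One record of the trial-operation working log (clause 3.2)."""
    entry_date: date
    responsible_person: str
    operating_hours: float
    failures: List[str] = field(default_factory=list)      # failures and malfunctions
    emergencies: List[str] = field(default_factory=list)   # emergency situations
    object_parameter_changes: List[str] = field(default_factory=list)
    documentation_and_software_corrections: List[str] = field(default_factory=list)
    hardware_adjustments: List[str] = field(default_factory=list)
    personnel_comments: List[str] = field(default_factory=list)

log = [
    WorkingLogEntry(
        entry_date=date(2024, 3, 1),
        responsible_person="shift engineer",
        operating_hours=24.0,
        failures=["archive service restarted once"],
        personnel_comments=["report screen is hard to read"],
    )
]
print(sum(entry.operating_hours for entry in log), "hours of trial operation logged")
```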

3.3. Based on the results of trial operation, a decision is made on the possibility (or impossibility) of presenting the parts of the AS and the system as a whole for acceptance tests.

The work ends with the execution of an act on the completion of trial operation and the admission of the system to acceptance tests.

4. Acceptance tests.

4.1. Acceptance tests are carried out in accordance with the program, which indicates: 1) the list of objects in the system selected for testing and the list of requirements that the objects must meet (with references to the clauses of the ToR); 2) the acceptance criteria for the system and its parts; 3) the conditions and timeframe of the tests; 4) the means for conducting the tests; 5) the names of the persons responsible for conducting the tests; 6) the test methodology and the processing of its results; 7) the list of documentation to be drawn up.

4.2. For acceptance testing, the following documentation must be presented: 1) the ToR for the creation of the AS; 2) the act of acceptance into trial operation; 3) the working logs of trial operation; 4) the act of completion of trial operation and admission of the AS to acceptance tests; 5) the test program and methodology.

Acceptance testing should be carried out at a functioning facility.

4.3. Acceptance tests should, first of all, include verification of: 1) the completeness and quality of the performance of functions at standard, limiting and critical values of the parameters of the automation object and in the other operating conditions of the AS specified in the ToR; 2) the fulfillment of each requirement relating to the system interface; 3) the work of personnel in interactive mode; 4) the means and methods for restoring the operability of the AS after failures; 5) the completeness and quality of the operational documentation.
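Item 1) of clause 4.3 amounts to exercising the same function at standard, limiting and critical parameter values. The following parameterized check is a sketch under assumed values; the `regulate_temperature` function, the thresholds and the expected behaviours are illustrative, not taken from any ToR.

```python
def regulate_temperature(setpoint: float) -> str:
    """Hypothetical AS function; returns the control action it would issue."""
    if setpoint > 120.0:
        return "alarm"      # critical value: the system must raise an alarm
    if setpoint > 100.0:
        return "limit"      # limiting value: the system must clamp the output
    return "regulate"       # standard operating range

# (parameter value of the automation object, expected behaviour); values are illustrative
CASES = [
    (80.0, "regulate"),   # standard value
    (110.0, "limit"),     # limiting value
    (130.0, "alarm"),     # critical value
]

def test_function_over_parameter_range():
    for setpoint, expected in CASES:
        assert regulate_temperature(setpoint) == expected, (setpoint, expected)

if __name__ == "__main__":
    test_function_over_parameter_range()
    print("function verified at standard, limiting and critical values")
```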

4.4. Verification of the completeness and quality of the performance of the functions of the AS is recommended to be carried out in two stages. At the first stage, individual functions (tasks, sets of tasks) are tested, and their fulfillment of the ToR requirements for those functions (tasks, sets of tasks) is checked. At the second stage, the interaction of the tasks within the system and the fulfillment of the ToR requirements for the system as a whole are checked.

4.5. By agreement with the customer, the verification of tasks, depending on their specifics, may be carried out autonomously or as part of a complex. When checking tasks in complexes, it is advisable to group them taking into account the commonality of the information used and their internal connections.

4.6. Checking the work of personnel in an interactive mode is carried out taking into account the completeness and quality of the performance of the functions of the system as a whole.

The following are subject to verification: 1) the completeness of the messages, directives and requests available to the operator and their sufficiency for operating the system; 2) the complexity of the dialogue procedures and the ability of personnel to work without special training; 3) the reaction of the system and its parts to operator errors, and the service facilities.

4.7. Checking the means of restoring the operability of the AS after computer failures should include: 1) checking that the operational documentation contains recommendations for restoring operability and that their description is complete; 2) checking the practicability of the recommended procedures; 3) checking the operability of the automatic recovery tools and functions (where provided).
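A sketch of item 2) of clause 4.7: executing a recovery procedure of the kind the operational documentation might recommend and confirming that it actually restores the working state. The checkpoint file, the state contents and both helper functions are assumptions introduced only for this example.

```python
import json
import os
import tempfile

def save_checkpoint(path: str, state: dict) -> None:
    """Periodic checkpoint, as the operational documentation might recommend."""
    with open(path, "w") as f:
        json.dump(state, f)

def restore_after_failure(path: str) -> dict:
    """Documented recovery procedure: reload the last checkpoint."""
    with open(path) as f:
        return json.load(f)

def test_recovery_procedure_is_feasible():
    path = os.path.join(tempfile.mkdtemp(), "checkpoint.json")
    state = {"last_processed_reading": 42}
    save_checkpoint(path, state)
    # simulate a computer failure by discarding the in-memory state,
    # then execute the recommended recovery procedure
    recovered = restore_after_failure(path)
    assert recovered == state, "recovery procedure did not restore the working state"

if __name__ == "__main__":
    test_recovery_procedure_is_feasible()
    print("recommended recovery procedure verified")
```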

4.8. Verification of the completeness and quality of the operational documentation should be carried out by analyzing the documentation for compliance with the requirements of regulatory and technical documents and of the ToR.

4.9. The results of the tests of the objects provided for by the program are recorded in protocols containing the following sections: 1) the purpose of the test and the number of the section of the ToR requirements for the AS against which the test is carried out; 2) the composition of the hardware and software used in the tests; 3) an indication of the methods according to which the tests were carried out and the results processed and evaluated; 4) the test conditions and the characteristics of the initial data; 5) the storage media and the access conditions for the final testing program; 6) the generalized test results; 7) conclusions on the test results and on the compliance of the created system or its parts with the given section of the ToR requirements for the AS.

4.10. The test protocols of the objects across the whole program are summarized in a single protocol, on the basis of which a conclusion is drawn on the compliance of the system with the requirements of the ToR for the AS and on the possibility of issuing an act of acceptance of the AS into permanent operation.
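For illustration, the protocol sections of clause 4.9 and the roll-up of clause 4.10 could be handled as a simple record plus a summarizing function. The sketch below assumes that representation; the class, field names, example values and the wording of the conclusions are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TestProtocol:
    """Protocol of one test object, with the sections listed in clause 4.9."""
    purpose_and_tor_section: str          # 1) purpose of the test and ToR section number
    hardware_and_software: List[str]      # 2) hardware and software used in the tests
    methods: str                          # 3) test, processing and evaluation methods
    conditions_and_input_data: str        # 4) test conditions and initial-data characteristics
    program_storage_and_access: str       # 5) storage media and access to the testing program
    generalized_results: str              # 6) generalized test results
    conclusion: str                       # 7) conclusion on compliance with the ToR section
    passed: bool

def summarize(protocols: List[TestProtocol]) -> str:
    """Clause 4.10: roll the individual protocols up into a single conclusion."""
    failed = [p.purpose_and_tor_section for p in protocols if not p.passed]
    if not failed:
        return "The system complies with the ToR; an acceptance act may be issued."
    return "Non-compliance with ToR sections: " + ", ".join(failed)

protocol = TestProtocol(
    purpose_and_tor_section="function verification, ToR section 4.2",
    hardware_and_software=["operator workstation", "archive server"],
    methods="per the agreed test methodology",
    conditions_and_input_data="nominal load, anonymized customer data fragment",
    program_storage_and_access="test suite in the project repository, read access for the commission",
    generalized_results="all checks passed",
    conclusion="complies with ToR section 4.2",
    passed=True,
)
print(summarize([protocol]))
```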

The work is completed with the execution of the act of acceptance of the AS into permanent operation.
