KOROLEV Vladimir Sergeevich
Some aspects of building integrated security systems. Architecture of automated systems
Source: «Special Equipment» magazine, No. 4, 2007
Sooner or later, officials responsible for the security of an entrusted facility face the problem of choosing an automated security system. And it does not matter at all whether this happens at the design stage or during operation. It is good if there are departmental recommendations that at least somehow limit and narrow the search space. But what if there are none? Amid such an abundance of advertising offers, beautiful interfaces and promises of mountains of gold, what criteria should be followed? And if you consider that the budget is always limited, how do you get maximum quality at minimum cost?
Comparing the various automated security systems for facilities, the vast majority of which are proudly called integrated security systems (ISS), is a truly difficult task. And here is why. No one would even think of comparing a nice garden house with a gray five-story building or a modern high-rise, much less ask which is better. Yet in the field of automated security systems such a situation is quite common, because behind the beautiful shell and the set of «science-intensive» phrases it is sometimes very difficult to discern what the «Y» system actually is and what its preferred area of application is.
The process of developing and building an automated system is very similar to the construction of a building — the construction of both is a complex, labor-intensive process, often with an unpredictable result (budget, deadlines, deviations from the project, etc.). And if we consider an automated system as a unity of regularly located and interconnected parts that solve a common problem, then the answer to the question of what it will be like upon completion of development, and what consumer properties it will have, will primarily be determined by its architecture. Because it is the architecture that describes the components of the system and their interrelations, determines the principles of development, improvement and support, and determines the class of objects on which its application will be most effective. Continuing the analogy with construction, we can say that the architecture of a residential building designed for Egypt or Saudi Arabia is unlikely to be suitable for Siberia.
Unfortunately, such a relevant topic as the construction of integrated security systems for facilities is currently still poorly covered. And in general, there are many «blank» spots in this area: issues of standardization, terminology, classification of technical means, system architecture — all this requires the closest attention and careful analysis. The purpose of this article is to consider some issues related to the architecture of automated security systems for facilities.
The search results for the phrase «integrated security system» were not at all surprising: one search engine immediately returned about a thousand links, and another even more. And it is clear why: today, anything that amounts to a computer with some kind of alarm device and a TV camera connected to it is called an ISS.
Given the complexity of the subject of discussion, a more rigorous definition is still necessary. An integrated security system is a set of organizational and technical measures that, on the basis of universal software and hardware, make it possible to automatically receive information on the detection of potentially dangerous situations at controlled points, guarantee its delivery and display it, in order to take timely measures to prevent and suppress such situations [1]. The basis of an ISS is an automated system (AS) that combines perimeter security, access control, television surveillance, etc., i.e. everything that is necessary to ensure the safe operation of a facility. Therefore, when discussing architectural issues here, we will talk exclusively about automated systems and the features of their construction.
The next step is to select the type of objects where the AS demonstrates its full potential. It makes sense to consider automated systems positioned as intended for large, important and especially important objects. Because it is at such objects that the declared functionality must be fully implemented and the capabilities of the systems, their strengths and weaknesses, are maximally demonstrated.
The main task solved when guarding a facility is to deter and prevent intruders from penetrating it, as well as to prevent unauthorized actions by both external and internal intruders against the infrastructure, property and personnel of the facility. Thus, an automated security system must be considered as a specialized tool, a toolkit that helps security services, military and paramilitary units solve the problems involved in guarding facilities.
This approach allows us to formulate criteria for assessing the potential capabilities of various architectural solutions, provided that the systems under consideration have approximately the same functionality:
- reliability, as the ability of the system to maintain its operability when its components fail and to ensure guaranteed delivery of messages in order to eliminate the loss of information about significant events;
- scalability, as an increase in the functional capabilities of the system during its operation without stopping its work.
Based on open published data, it is impossible to conduct a full analysis and compare various solutions by all parameters, for example, to analyze the process of interaction of components and evaluate data flows. Therefore, in this case, a methodology based exclusively on qualitative analysis was used. Further in the text, references to systems of specific manufacturers are not given, because the goal is not to compare specific systems and say which is better, which is worse, no. The goal is to analyze the potential capabilities of the solutions used and try to evaluate general trends in the development of the architecture of modern AS.
Manufacturers of domestic automated systems [2-9], selected for the reasons given above, are remarkably unanimous in at least one respect. All offer networked distributed systems, which are essentially automated process control systems. All the options under consideration, with minor differences, are built according to the same scheme: the upper level handles decision-making and information display, the middle level handles protocol and interface conversion, and the lower level consists of peripheral controllers that control the technical equipment (Fig. 1).
The upper and lower levels of the systems under consideration are structurally and functionally almost identical. The upper level consists of various types of automated workstations (AWP), database servers, etc., united into a local network based on the TCP/IP protocol family. The lower level contains peripheral controllers that directly interact with the technical equipment. Interaction with the middle level is usually carried out via RS-485, although RS-232 is sometimes used.
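To make this scheme more tangible, the three-level structure can be sketched, for example, in Python; the class names and fields below are purely illustrative assumptions and are not taken from any of the systems under consideration.

```python
# A minimal sketch of the three-level scheme in Fig. 1 (names are illustrative
# assumptions, not taken from any particular system).
from dataclasses import dataclass, field
from typing import List


@dataclass
class PeripheralController:          # lower level: drives sensors, readers, relays
    address: int                     # address on the RS-485 bus
    devices: List[str] = field(default_factory=list)


@dataclass
class MidLevelNode:                  # middle level: protocol and interface conversion
    uplink: str                      # e.g. "TCP/IP" towards the upper level
    downlink: str                    # e.g. "RS-485" towards the controllers
    controllers: List[PeripheralController] = field(default_factory=list)


@dataclass
class UpperLevel:                    # upper level: AWPs and database servers on a LAN
    workstations: List[str]
    servers: List[str]
    nodes: List[MidLevelNode] = field(default_factory=list)
```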
Of greatest interest is the architectural solution of the middle level, which is the key one, determining the main and most important features of the system's functioning. Here there is no unanimity and the presented options can be reduced to the following three (Fig. 2):
- centralized control based on a multifunctional controller or server (option a);
- decentralized control based on a specialized automated workstation or server (option b);
- peer-to-peer systems without centralized control (option c)*.
*) Despite the fact that each manufacturer has its own terminology, this does not change the essence of the matter.
The system design option shown in Fig. 2a is the most common. Most likely, it turned out historically that the system «grew» out of the controller. Apparently, a certain multifunctional device was developed or purchased first, and subsequently, as the customer's needs and the complexity of the tasks grew, the necessary equipment and software were added to it. As a result, the developer found himself hostage to the problem of ensuring backward compatibility, when with the release of each subsequent version making fundamental changes became more and more expensive and problematic. This hypothesis is supported by the fact that, in most cases, RS-232 is used as the communication interface with the server, an interface quite suitable for the initial configuration of the controller and the periodic transmission of small data blocks, but nothing more.
Anything that can break will break someday. Despite all its advantages, this architectural solution is not the most effective in terms of reliability. In this scheme, the «bottleneck» is the controller, which contains all the business logic of the protected facility; if it fails, the consequences can be very serious (and if the business logic is distributed between the server and the controller, the situation can be even worse). This implies increased requirements for the software of the controller itself, increased requirements for the peripheral controllers and the need to use complex redundancy schemes.
The next most important bottleneck in this scheme is the server, which most likely stores a backup copy of the database (object model) and provides an interface for the upper level. The server must periodically poll the controller to obtain information about the current state of the controlled points and transmit it to the «interested» nodes. Unfortunately, in this case, it is almost impossible to ensure operation within the event model and in real time due to the limitations inherent in RS-232.
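The polling cycle described above can be illustrated by the following sketch; the request format, the one-second period and the `read_controller_state` function are assumptions introduced solely for illustration.

```python
import time

POLL_PERIOD_S = 1.0   # assumed period; in practice limited by RS-232 throughput


def read_controller_state(link) -> dict:
    """Request the current state of all controlled points over the serial link.

    A placeholder for a vendor-specific request/response exchange.
    """
    link.write(b"GET STATE\r\n")          # hypothetical request frame
    raw = link.readline()                 # blocking read of the reply
    return {"raw": raw}                   # parsing omitted


def poll_loop(link, notify):
    """Classic polling: the server asks, the controller answers.

    Events that occur between two polls become known only at the next poll,
    which is why a true event-driven, real-time mode is unattainable here.
    """
    previous = None
    while True:
        state = read_controller_state(link)
        if state != previous:             # a change is detected only at poll time
            notify(state)                 # push to the "interested" upper-level nodes
            previous = state
        time.sleep(POLL_PERIOD_S)
```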
It should be noted that a system built in this way is completely insensitive to failures of the local network and other upper-level equipment. Even if we assume that the local network is disrupted, the controller will continue to function.
There are modifications of this scheme, for example, when one server controls several controllers. This solution is more effective than the one discussed above, but nevertheless, here, as in any other centralized system, the probability of a crash of the entire AS as a whole is quite high.
As for the scalability of systems built on the basis of this architectural solution, the following observations hold. First, when adding new lower-level equipment, changes must be made to the controller database. In most cases, this entails a subsequent restart of the controller, which is essentially equivalent to restarting the entire system, and at a large operating facility such an event is associated with serious organizational difficulties. Second, the restart can be avoided by implementing some version of «plug and play» technology, but this will inevitably lead to more complex software and, accordingly, to a greater number of errors and a higher likelihood of unstable operation of the controller. Third, the situation is significantly simpler in the case of centralized server management of several controllers, since then only the modified part of the system needs to be restarted.
The architectural solution implemented according to the scheme shown in Fig. 2b implies the presence of several equal nodes, each of which controls its own set of lower-level equipment (peripheral controllers). In this case, the control element can be either an automated workstation or a dedicated server.
It is worth noting that there is a scheme when the automated workstation interacts with a multifunctional controller, which, in turn, controls the peripheral controllers. This scheme is a variation of the solution in Fig. 2a and is discussed in detail above.
This solution could be classified as a peer-to-peer system (Fig. 2c). However, the scheme with a control workstation or server has its own fundamental features and is therefore considered as an independent option for constructing an AS, one that is fairly common in practice.
The scheme with a server as the control element offers no particular advantages. Its disadvantages are, firstly, the increased cost compared to the other options and, secondly, the need to provide redundancy for the control element. Moreover, the main contribution to the cost is made not so much by the equipment and application software as by the cost of server operating systems and DBMS plus the cost of client licenses. If we imagine that there are several such control nodes in the system, and each of them serves a certain number of clients, then even an elementary calculation gives a very substantial amount. The need for redundancy is determined by the customer's requirements and the way the lower level is constructed (the degree of «intelligence» of the peripheral controllers). In the simplest case (such a solution is used in one of the systems), redundancy is really necessary, and the developer offers it as an option.
The worst solution is when an automated workstation is used as the control element. If this element fails, the protected facility may find itself in a very difficult situation: for example, part of the perimeter will be left unguarded or a set of access points will stop functioning. The worst thing is that the personnel will also be left without «eyes» at the moment of the failure, since both equipment control and the display of information to the operator are tied to a single element. The only way out in this situation is «hot» standby, although even this step does not completely solve all the potential problems inherent in this scheme.
The scalability of the system with the architecture shown in Fig. 2b is significantly higher than that of the previous scheme with centralized control based on the controller. Ideally, when adding new equipment, it is sufficient to simply export the modified database to the corresponding automated workstation or server. This scheme allows for a fairly simple software restart, without stopping the operation and restarting the control element.
Compared to the others, peer-to-peer systems (Fig. 2c) have more serious prospects in terms of their capabilities. In such schemes, the controller acts as an equal network node, managing its own small part of the protected facility. This approach allows the entire AS to be built quite flexibly, distributing the controllers so as to minimize the consequences of the failure of each individual branch. However, like the others, this solution has a typical drawback: the lack of a real-time mode, due to the need for «polling» when organizing interaction with the peripheral controllers. Unfortunately, such an implementation does not allow the potential capabilities of peer-to-peer systems to be used in full.
In terms of scalability, this scheme is no different from the scheme in Fig. 2b, and all the above comments are valid here.
Considering various aspects of modern «controller engineering», it can be clearly stated that two stable trends have been observed over the last decade. Firstly, owing to the rapid development of microelectronics and microprocessor technology, as well as the falling cost of electronic components, the level of «intelligence» of controllers is increasing. Secondly, the accompanying development of information technology is «spurring» the integration of controllers into a wide variety of devices and appliances, giving them new, previously inaccessible qualities and functionality and concentrating «intelligence» at the point of its use, i.e. exactly where it is really needed. The most striking example of this is the new generation of household appliances. A similar picture is observed in the security systems market: access control systems, technical security equipment, etc. Devices have appeared whose use can not only significantly improve the tactical and technical characteristics of systems, but also reduce their cost, for example by minimizing the cost of cabling. Moreover, modern controllers are able to work in a local network, acting as equal independent nodes. This gives reason to believe that solutions fully realizing the potential of peer-to-peer systems will soon appear in the field of automated security systems. As an illustration, one can cite the intensive development of the market for life support and building automation systems that began with the use of fieldbus technologies. Such well-known and widely used standards in this field as BACNet and LonWorks are based on the fieldbus concepts that came out of the «depths» of industry.
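As a purely illustrative sketch of this trend, a controller acting as an equal network node might push state-change messages to its peers roughly as follows; the message format, port number and function names are assumptions and do not describe any specific product.

```python
# A controller as an equal network node that pushes state-change events
# instead of waiting to be polled (all names and formats are assumptions).
import json
import socket
import time

EVENT_PORT = 4059                      # assumed UDP port for event messages


def read_local_sensors() -> str:
    """Placeholder for reading the controller's own sensor inputs."""
    return "normal"


def publish_event(sock, peers, obj_id, new_state):
    """Send a state-change notification to every interested peer."""
    message = json.dumps({
        "object": obj_id,
        "state": new_state,
        "timestamp": time.time(),
    }).encode("utf-8")
    for host in peers:
        sock.sendto(message, (host, EVENT_PORT))


def controller_main(peers):
    """Main loop of a controller node: detect a change locally, publish it."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    last_state = "normal"
    while True:
        current = read_local_sensors()
        if current != last_state:
            publish_event(sock, peers, "perimeter-zone-3", current)
            last_state = current
        time.sleep(0.1)
```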
Relying only on published data, it is impossible to form a well-founded opinion about the features of the software implementation of the systems under consideration. Therefore, based on the above-mentioned methodology and analysis of modern trends in the field of information technology, we will try to formulate requirements for the software architecture taking into account the features of this subject area. As additional criteria, in addition to reliability and scalability, it is necessary to consider security and interoperability as the ability to integrate independently developed software modules or subsystems into the system [17].
Today, there is a large number of technologies for building distributed automated systems with both procedural and object paradigms for the interaction of their constituent elements: RPC (remote procedure call), Java RMI (remote method invocation), CORBA (common object request broker architecture), DCOM (distributed component object model), etc. [10, 11]. The most promising architectural solution today is considered to be SOA (service oriented architecture), especially in combination with ESB (enterprise service bus) and EDA (event driven architecture), developed by a group of corporations led by IBM and Microsoft [12-15].
Each of the listed technologies has its pros and cons, «focus» on a specific area of application. However, despite the differences in implementation, the listed technologies have much in common, which allows us to identify common trends and select proven solutions. The most important features, implemented to one degree or another in all technologies, are:
- an object (service) paradigm of data access, allowing one to abstract from the physical nature of the object;
- a unified, abstract way of describing objects;
- a «storage» of objects, as a way of obtaining a reference to an object;
- interaction through message exchange;
- transport independence, with a synchronous or asynchronous method of interaction;
- a service bus: a software layer (middleware) that provides unified access to data.
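A minimal in-process sketch of some of these features, an object «storage» handing out references and a service bus providing unified access through message exchange, might look as follows; all names here are illustrative assumptions.

```python
# A toy "service bus": a registry that hands out references to abstract
# objects and a unified publish/subscribe message path (names are assumptions).
from collections import defaultdict
from typing import Callable, Dict, List


class ServiceBus:
    def __init__(self):
        self._objects: Dict[str, object] = {}                 # object "storage"
        self._subscribers: Dict[str, List[Callable]] = defaultdict(list)

    def register(self, object_id: str, obj: object) -> None:
        """Place an object into the storage so clients can obtain a reference."""
        self._objects[object_id] = obj

    def resolve(self, object_id: str) -> object:
        """Obtain a reference to an object by its identifier."""
        return self._objects[object_id]

    def subscribe(self, topic: str, handler: Callable) -> None:
        """Ask to be notified about messages on a given topic."""
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        """Deliver a message to every subscriber; transport details stay hidden."""
        for handler in self._subscribers[topic]:
            handler(message)


# Usage: an AWP subscribes to alarm messages without knowing who produces them.
bus = ServiceBus()
bus.subscribe("alarm", lambda msg: print("alarm at", msg["zone"]))
bus.publish("alarm", {"zone": "perimeter-3", "state": "breach"})
```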
Based on the trends in the development of information technology and taking into account the specific requirements for automated security systems, we can formulate the basic requirements for software architecture:
- the system must be built as a heterogeneous distributed system with defined, fixed rules for accessing data;
- the system must be built on a hierarchical principle with delineation of the functionality of the component levels;
- the control level must be completely abstracted from the physical nature of the devices and equipment used, and operate only with their logical equivalents;
- interaction of the control level with the technical means must be carried out through a software «layer» providing an interface to specific physical devices;
- the elements of the system must operate with unified entities and function within the framework of a single model of the protected object;
- the elements of the system must support an «event» model of interaction, informing about changes in their state;
- the system must ensure a specified response time to an event that has occurred;
- information exchange must be carried out by messages via a secure protocol with mandatory subscriber identification, guaranteed message delivery and automatic restoration of communication.
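The last requirement, message exchange with subscriber identification and guaranteed delivery, can be sketched as follows; the envelope fields, timeout and retry count are assumptions chosen for illustration.

```python
# Message exchange with sender identification and guaranteed delivery through
# acknowledgements and retries (fields and parameters are assumptions).
import json
import uuid


def make_envelope(sender_id: str, payload: dict) -> bytes:
    """Wrap a payload with a unique message id and the sender's identity."""
    return json.dumps({
        "message_id": str(uuid.uuid4()),
        "sender": sender_id,
        "payload": payload,
    }).encode("utf-8")


def send_with_ack(transport, envelope: bytes, retries: int = 3) -> bool:
    """Resend until the receiver acknowledges or the retry budget runs out.

    `transport` is assumed to expose send()/wait_ack(); on repeated failure
    the caller is expected to re-establish the connection and try again.
    """
    for _ in range(retries):
        transport.send(envelope)
        if transport.wait_ack(timeout=2.0):
            return True
    return False
```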
An analysis of the various methods of constructing automated systems leads to the conclusion that the optimal option, and the one that most fully satisfies the stated requirements, is a system implemented as a peer-to-peer, loosely coupled distributed system according to the classification in [10].
In this case, «loose coupling» means that each element (module) of the system must be independent and self-sufficient to a certain extent, which will ensure the system's operability in the event of failure of any element (with a corresponding decrease in the functionality of the entire system as a whole). In other words, the system must not have such an element (group of elements) whose failure would lead to blocking the operation of the entire system as a whole.
The requirement to abstract the control level from the physical nature of the technical means used, as well as to operate in message-exchange mode, is most adequately implemented in a three-tier client-server architecture, in which all requests are made through the application server. The current level of information technology allows this server to be distributed and the corresponding software to be placed on the computers or controllers that provide the interface to the technical means. In this case, the requirements for the automated workstation (AWP) software can be significantly reduced, implementing the workstations as a kind of «thin» client. Such a solution makes it possible not only to minimize the number of computers, but also to increase reliability by avoiding the grouping of elements capable of blocking the operation of various parts of the system.
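A minimal sketch of this three-tier idea, with a «thin» AWP client that performs every request through an application-server layer, is given below; the class names and the set of logical objects are illustrative assumptions.

```python
# The AWP ("thin" client) never talks to hardware directly: every request goes
# through an application-server layer that may itself be distributed across
# controllers. Names and objects here are assumptions for illustration.
class ApplicationServer:
    """Mediates access to logical control objects; hides physical devices."""

    def __init__(self):
        self._states = {"gate-1": "closed", "zone-7": "armed"}

    def get_state(self, object_id: str) -> str:
        return self._states[object_id]

    def command(self, object_id: str, action: str) -> None:
        # Translation of the logical command into a device-level operation
        # would happen here, behind the interface.
        self._states[object_id] = action


class ThinClient:
    """AWP software: display and operator commands only, no business logic."""

    def __init__(self, server: ApplicationServer):
        self._server = server

    def show(self, object_id: str) -> None:
        print(object_id, "->", self._server.get_state(object_id))


awp = ThinClient(ApplicationServer())
awp.show("zone-7")
```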
The operational characteristics of the system depend on how effectively the issues of scalability and interoperability are resolved, because the greatest number of problems begin to appear when adding new, «unknown» equipment to the system or trying to integrate a subsystem from another manufacturer.
Modern methods for solving these problems involve the use of various «late binding» technologies (method invocation) or dynamically obtaining a reference to a software object whose methods and properties are not known at the design stage. However, all these technologies, precisely because of their universality, are very non-trivial and highly redundant. As a result, new, additional problems arise whose solution requires the involvement of highly qualified programmers and greatly complicates the system in development, configuration and maintenance.
One of the features of the subject area under consideration is that it is limited to a very specific range of tasks, and «comprehensive» universality is not required here. This creates serious prerequisites for architecture optimization. For automated systems of this type, there is a simpler and more effective way to meet the requirements of scalability and interoperability — this is to ensure the functioning of the system within a single model of the protected object.
Describing the equipment included in the system (technical security equipment, access control equipment, television equipment, etc.) as a set of interconnected abstract control objects (controlled points) makes it possible to standardize, to «legitimize», the data access mechanism. This approach allows the internal data structures and operating algorithms to be changed freely, if necessary, without affecting the correct operation of the entire system as a whole.
To summarize the above, we can formulate the requirements for the model of the protected object and its components:
- the model must be built from the minimum required set of components: control objects;
- each control object must have a unique identifier;
- each control object must be represented as a finite state machine and have a fixed set of states;
- each control object must generate a system message when its state changes;
- the object model must be scalable, i.e. allow adding and removing control objects during operation.
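These requirements can be summarized in a short sketch: a control object as a finite state machine with a unique identifier that emits a message on every state change, and a model that can be extended at run time. The state names and the notification callback are assumptions made for illustration.

```python
# A control object as a finite state machine with a unique identifier, a fixed
# set of states and a message on every state change (names are assumptions).
from typing import Callable, Dict, Set


class ControlObject:
    """A controlled point modelled as a finite state machine."""

    def __init__(self, object_id: str, states: Set[str], initial: str,
                 notify: Callable[[str, str, str], None]):
        assert initial in states
        self.object_id = object_id           # unique identifier
        self._states = frozenset(states)     # fixed set of states
        self._state = initial
        self._notify = notify                # system message channel

    @property
    def state(self) -> str:
        return self._state

    def set_state(self, new_state: str) -> None:
        if new_state not in self._states:
            raise ValueError(f"unknown state: {new_state}")
        if new_state != self._state:
            old, self._state = self._state, new_state
            self._notify(self.object_id, old, new_state)   # message on change


class ObjectModel:
    """A scalable model: control objects can be added or removed at run time."""

    def __init__(self):
        self._objects: Dict[str, ControlObject] = {}

    def add(self, obj: ControlObject) -> None:
        self._objects[obj.object_id] = obj

    def remove(self, object_id: str) -> None:
        self._objects.pop(object_id, None)


# Usage: a perimeter sensor reports an alarm.
model = ObjectModel()
sensor = ControlObject("perimeter-sensor-12", {"normal", "alarm", "fault"},
                       "normal", lambda oid, old, new: print(oid, old, "->", new))
model.add(sensor)
sensor.set_state("alarm")
```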
Integrated security systems, as already noted, are automated process control systems by their purpose. Moreover, in the control loop of this process, the main, determining role is played by a person — the operator making the decision. The system only provides information on the current state of the object and, when predetermined situations arise, controls the auxiliary equipment: turns on recording from television cameras, additional lighting, etc. Therefore, the only thing important for the system is whether or not a particular sensor has been triggered, whether a particular device is functioning properly, in order to inform the operator in time about the occurrence of a certain event and take the necessary actions. From the point of view of the system, the process is not only discrete, but is also characterized by the fact that each sensor or device (control object) has its own specified, fixed set of states.
It is advisable to group the technical means used to protect objects into two large classes, which can be called control points and access points. The proposed division adequately reflects the essence of the processes taking place. If we do not take into account minor details, then in reality two main information flows circulate in the system: the flow of access control data and the flow of control and security information. Thus, we can talk about access points and control points as basic components of a mathematical model. However, to build a mathematical model of an object or process, it is also necessary to specify the relationships between the constituent elements. Access and control points are only sources of signals informing about the occurrence of some significant event. Therefore, it is logical to use the «zone» object as a connecting element. The zone, which is a container-type control object, allows not only to group points, but also to uniquely identify the location of the event.
An additional advantage of this approach to building a model is that technical means grouped in this way can be very easily classified [16], corresponding to the paradigm of object-oriented programming.
Considering that all the listed control objects function as finite state machines, it can be argued that these elements are sufficient for constructing a mathematical model of the protected object, since, based on this data alone, it is possible to unambiguously calculate the state of the object at any moment in time.
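A possible sketch of such a model fragment, two classes of points grouped by a container-type «zone» object, is shown below; the aggregation rule (a zone is in alarm if any of its points is) is an assumption chosen only for illustration.

```python
# Access and control points grouped by a container-type "zone" object, which
# localizes the place of an event (class names and rules are assumptions).
from typing import List


class Point:
    """Common base: a signal source with a fixed set of states."""

    def __init__(self, point_id: str):
        self.point_id = point_id
        self.state = "normal"


class ControlPoint(Point):            # control and security information flow
    pass


class AccessPoint(Point):             # access-control information flow
    pass


class Zone:
    """Container object: groups points and localizes the place of an event."""

    def __init__(self, zone_id: str, points: List[Point]):
        self.zone_id = zone_id
        self.points = points

    def state(self) -> str:
        return "alarm" if any(p.state == "alarm" for p in self.points) else "normal"


# Usage: locating an event by zone.
zone = Zone("warehouse-2", [ControlPoint("ir-sensor-5"), AccessPoint("door-14")])
zone.points[0].state = "alarm"
print(zone.zone_id, zone.state())     # -> warehouse-2 alarm
```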
The situation is such that installers and end customers no longer want to become «hostages» of the manufacturer of the «X» or «Y» system. The implementation of the requirements of openness and interoperability will sooner or later lead to the fact that automated security systems will be a kind of SCADA, integrating equipment and subsystems from different manufacturers. Moreover, in some cases, it will be necessary to integrate with life support and building automation systems. Whether this is good or bad is a completely different question. The current implementations of the lower level using «field bus» technologies give serious grounds to believe that this is exactly what will happen.
The scale and complexity of systems will only increase. It is possible to build an open, reliable and secure system only as a result of its careful design, and therefore the issues of the architecture of automated security systems are becoming increasingly important and relevant.
Literature
1. Googe I.G. et al. Integrated Security Systems. World of Security, 2001, No. 12.
2. bolid.ru
3. sigma-is.ru
4. algont.ru
5. bezopasnost.ru
6. eleron.ru
7. nikiret.ru
8. dedal.ru
9. itrium.ru
10. Tanenbaum A., van Steen M. Distributed Systems: Principles and Paradigms. SPb.: Piter, 2003.
11. Bacon J., Harris T. Operating Systems: Parallel and Distributed Systems. SPb.: Piter; Kyiv: BHV Publishing Group, 2004.
12. Newcomer E. Web Services: XML, WSDL, SOAP, and UDDI. For Professionals. SPb.: Piter, 2006.
13. Chernyak L. The Quest for the Holy Grail of Information Technology. Open Systems, 2006, No. 1 (117).
14. Dobrovolsky A. Application Integration: Methods of Interaction, Topology, Tools. Open Systems, 2006, No. 9 (125).
15. Ladyzhensky G. Application Integration as It Is. Open Systems, 2006, No. 9 (125).
16. Korolev V.S. Classification of Components of Integrated Security Systems. Special Equipment, 2007, No. 1.
17. Kuznetsov S. Portability and Interoperability of Information Systems, and International Standards. citforum.ru