ABOUT COVERT, SUBLIMINAL, AND SIDE CHANNELS. AND NOT ONLY…


V.A. Galatenko (Doctor of Physical and Mathematical Sciences)

About covert channels

This is not the first time Jet Info has addressed the topic of covert channels: in 2002 a separate issue was devoted to it (see [1], [2]). This paper therefore assumes that the reader is familiar with the basics of the area; otherwise it is recommended to reread the article [2]. The author would, however, like to note from the outset that the topic of covert channels in its traditional interpretation seems to him somewhat contrived and formal. The peak of research on covert channels came in the mid-1980s, after the publication of the «Orange Book» of the US Department of Defense, which introduced the requirement of covert channel analysis starting with security class B2. As a result, the fight against covert channels was waged mainly not for the sake of real security but for the sake of successful certification. Moreover, because of their essentially accidental association with classes B2 and above, covert channels have been studied almost exclusively in the context of multilevel security policies, with the obligatory mention of HIGH and LOW subjects, non-interference models and other subtleties. All this is infinitely far from the real problems of typical modern information systems, and the published results are mostly obvious and of neither theoretical nor, especially, practical interest. The article [2] explains the conceptual reasons for this state of affairs.

In particular, it is hardly reasonable to ask whether covert channels can be organized to control a hostile multi-agent system (HMAS). If an HMAS was built by hacking many remote systems and planting malicious software (malware) in them, then communication resources of suitable stealth were obviously found for that purpose, and they are more than sufficient for subsequent control.

In the mid-1980s a systematic methodology for detecting covert storage channels was proposed (see [3]), the key element of which is the shared resource matrix. In a networked environment such as the Internet there are any number of legal shared resources, for example the space allocated to users on publicly accessible websites. IP packet header fields can also be used (the checksum is an excellent candidate for this role), as can the initial sequence numbers generated when a TCP connection is established (see [4]). Practical covert timing channels are also possible, for example encoding a one bit by sending a packet within a certain time slot of a few milliseconds (see [5]).
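
The slot-based timing scheme can be sketched as follows. This is a minimal, self-contained illustration: the 50 ms slot length is a made-up parameter, timestamps are plain integers, and no real network I/O is performed.

```python
# Toy covert timing channel: the position of a packet inside its
# time slot carries one bit. Timestamps are integer milliseconds.
SLOT_MS = 50  # hypothetical slot length

def encode(bits):
    """A packet sent early in its slot encodes 1, late in the slot encodes 0."""
    return [i * SLOT_MS + (0 if b else SLOT_MS // 2)
            for i, b in enumerate(bits)]

def decode(send_times_ms):
    """Recover bits from observed send times by position within each slot."""
    return [1 if t % SLOT_MS < SLOT_MS // 2 else 0 for t in send_times_ms]

message = [1, 0, 1, 1, 0]
assert decode(encode(message)) == message
```

In a real network, jitter and queuing noise would corrupt individual slots, which is exactly why the bandwidth of such channels is measured statistically rather than counted bit for bit.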

With the advent of powerful multiprocessor systems with shared memory, the bandwidth of covert channels has jumped to megabits per second and continues to grow with hardware performance (see [6]). This is, of course, a serious problem, but to solve it, it is sufficient to stop sharing such systems between subjects with different access levels.

The problem of covert channels is one manifestation of the more general problem of the complexity of modern information systems. Complex systems have always contained and will always contain covert channels, so one must fight the cause, not the effect. In its most general form, the method of combating system complexity can be formulated as «implementing an object approach with physical boundaries between objects». Processors should not be shared, not only between subjects but even between control flows. The user network should be physically separated from the administrative one. Generally speaking, system components should not trust one another: the processor may distrust the memory, the network card may distrust the processor, and so on. When suspicious activity is detected, components should raise the alarm and apply other protective measures (for example, the disk controller can encrypt files, the network controller can block communications, etc.). In war as in war. Where physical boundaries cannot be organized, virtual ones should be used, formed primarily by cryptographic means. A more detailed treatment of these issues can be found in [7].

Covert channels can not only be detected, but also eliminated or jammed «without looking». As explained in [2], various types of normalizers are used for this purpose, smoothing the load on the processor, energy consumption, the time of calculating certain functions, network traffic, etc. For example, the kernel of the Asbestos operating system [8] in response to a request to create a port returns a new port with an unpredictable name, since the ability to create ports with specified names can serve as a covert channel.
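
The Asbestos countermeasure can be sketched as follows. The `PortRegistry` class and its API are invented for illustration and are not taken from Asbestos; the point is only the normalization idea: the requested name is ignored, so the name space cannot carry hidden bits.

```python
import secrets

class PortRegistry:
    """Toy normalizer in the spirit of Asbestos: the requested port name
    is discarded and an unpredictable one is returned, so a subject
    cannot signal bits to another subject through its choice of names."""
    def __init__(self):
        self.ports = {}

    def create_port(self, requested_name):
        actual_name = secrets.token_hex(8)  # unpredictable; request ignored
        self.ports[actual_name] = {"requested": requested_name}
        return actual_name

registry = PortRegistry()
name = registry.create_port("LEAK_BIT_1")
assert name != "LEAK_BIT_1" and name in registry.ports
```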

The overhead of normalization can be large and can significantly slow down the operation of legitimate subjects, so a reasonable compromise must be sought between information security and the functional usefulness of systems. From the point of view of combating complexity, covert channels have the following unpleasant property: shared resources present at any level of an information system, starting from the lowest, hardware, level, can be used at all higher levels, up to the application level, to organize an information leak. A centralized memory access arbiter in a multiprocessor system, a second-level cache shared by several processors, a memory management unit: all of these can serve as leak channels. Thus, when analyzing covert channels it is necessary to consider the system as a whole. An attempt at so-called composite certification, in which a system is assessed on the basis of previously conducted evaluations of individual modules or levels, leads to missed covert channels. The problem is aggravated by the fact that descriptions of individual modules or levels may omit necessary details as unimportant. What difference, it would seem, does it make how the queue of instructions selected by the microprocessor for execution is organized? Yet even this can matter for the secure operation of an application (see [6]). An operating system that successfully passed certification while tested on «bare» hardware contains covert channels of noticeable bandwidth when it runs under a virtual machine monitor. In general, a shared resource is the pea that a true princess will feel through any number of feather beds. This must be remembered.

The covert channel apparatus is actively used to assess the degree of imperfection of the implementation of such security services as anonymizers and their networks, as well as traffic padding. This seems natural, since anonymization and traffic padding are kinds of normalization designed to eliminate covert channels. If the normalization is imperfect, covert channels remain. How imperfect is it? Exactly as imperfect as the information leak is large. The imperfection of anonymizers can be measured as the bandwidth of covert channels leaking information about the sender and/or recipient (see [9]). For individual anonymizers an exact value can be obtained; for networks of anonymizers, an upper bound.

According to current trends, an increasing share of Internet traffic is encrypted (see [10]). Encryption protects the contents and headers of packets, and padding prevents information from being extracted from their sizes. However, cryptography by itself does not protect against analysis of packet behavior, i.e. of their distribution over time, which can compromise user privacy. Moreover, timing analysis of SSH traffic significantly simplifies unauthorized access to user passwords. Traffic padding at the link level is an effective protective measure against such analysis: the data flow in the channel acquires a predetermined character, some packets are delayed, and dummy data is sent into the channel when necessary. That is the principle. In practice, however, it is quite difficult to implement padding so that the observed traffic follows the predetermined distribution exactly, so an attacker may still be able to correlate the padded useful traffic. The imperfection of a padding implementation can be estimated as the bandwidth of a covert channel based on varying inter-packet intervals. It turns out that under ideal conditions this covert channel is practically usable. Fortunately, in a real busy network with many data streams, the high noise level in the channel hampers the attacker.
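
The link-level padding discipline described above can be sketched as follows. This is a toy model under stated assumptions: fixed 64-byte frames, one frame per tick of an external fixed schedule, and a `PaddedLink` class invented for illustration.

```python
import collections

DUMMY = b"\x00" * 64  # dummy frame, same size as a real one

class PaddedLink:
    """Toy link-level padder: exactly one fixed-size frame leaves per
    tick of a fixed schedule; a dummy frame is sent when the queue is
    empty, so observed timing no longer depends on the payload."""
    def __init__(self):
        self.queue = collections.deque()

    def send(self, payload):
        # real payloads are delayed until their tick and padded to frame size
        self.queue.append(payload.ljust(64, b"\x00"))

    def tick(self):
        # called by the link driver on the fixed schedule
        return self.queue.popleft() if self.queue else DUMMY

link = PaddedLink()
link.send(b"secret")
frames = [link.tick() for _ in range(4)]
assert frames[0].startswith(b"secret") and frames[1:] == [DUMMY] * 3
```

An observer of this link sees one identical-looking frame per tick regardless of whether anything was actually sent; the residual covert channel lies only in the imperfections of the tick schedule itself.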

Using the covert channel apparatus to assess the degree of imperfection of the architecture and/or implementation of security services seems to be a very promising direction of research.

The authors of [11] managed to find an elegant application, in wireless sensor networks, of data transmission methods typical of covert timing channels. One of the main problems of sensor networks is reducing energy consumption. If binary values are transmitted over a wireless network in the usual way, we can assume that the energy spent is proportional to the logarithm of the value, i.e. to the number of bits. However, values can also be transmitted silently: send a start bit, forcing the receiver to turn on a counter, wait for a time corresponding to the value, and send a stop bit. Energy is saved at the cost of time (proportional to the value), but the transmission can be optimized: silence multiplexes, cascades and forwards perfectly.
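
The energy/time trade-off can be made concrete with a small sketch. The event-list representation is a modeling convenience, not the protocol from [11]: only the start and stop markers cost a transmission, while the value itself is carried by counted silence.

```python
def send_silent(value):
    """Transmit a non-negative integer as timed silence: a start marker,
    `value` empty ticks, and a stop marker (only two real transmissions)."""
    return ["start"] + [None] * value + ["stop"]

def recv_silent(events):
    """The receiver counts ticks of silence between the two markers."""
    assert events[0] == "start" and events[-1] == "stop"
    return len(events) - 2

value = 1000
events = send_silent(value)
assert recv_silent(events) == value
# energy: 2 transmissions instead of ~value.bit_length() == 10 bit sends;
# time: proportional to the value itself
assert sum(e is not None for e in events) == 2
```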

Of course, the described transmission method is an amusing curiosity. In general, covert channels are at present an almost exclusively academic and certification-driven area. In this context the work [12] is interesting: it studies the problem of the completeness of covert channel analysis. The notion of a complete set of covert channels is introduced, whose elements together generate the maximum possible covert information leak (a basis in a vector space can serve as an analogy for a complete set). As covert channels are identified, their set can be checked for completeness (using the criteria formulated in [12]), yielding an estimate of the potential information leak. Another very important aspect of [12] is its description of an architectural approach to building systems that facilitates covert channel analysis. Identifying covert channels one by one in an arbitrary information system is a futile task; it is advisable to build systems in some regular way and then subject them to systematic analysis that takes their specifics into account.

In practice, neither attackers nor information security vendors pay significant attention to covert channels. The reason is that modern information systems have more than enough «crude», easily exploitable vulnerabilities, so both attackers and defenders prefer the path of least resistance, which is quite natural. The former exploit obvious «holes», the latter try to close them.

Consumers also have no time for covert channels: they would rather fight off worms and viruses hand-to-hand, and find money for last year's snow in a package labeled «intrusion prevention systems with known signatures». And patiently listen as the manufacturers of leaky software lecture them for their lack of discipline in managing the numerous corrective patches for that very software.

There are two pieces of news regarding vulnerabilities, and both are good. The first is that there are fewer problems with the security of basic software, so attackers are more actively exploiting application vulnerabilities. The second piece of news is that there are a lot of applications. And there is also phishing and other methods of moral and psychological influence… Therefore, the time of covert channels, if it comes, will not be very soon.

To appreciate what a modest place covert channels occupy among other information security problems, even if we limit ourselves to software defects, it is useful to consider the classification of such defects proposed in [13] in the context of developing static source code analysis tools for identifying errors fraught with vulnerabilities.

Defects can be introduced into software intentionally or through negligence. Intentional defects are divided into malicious and non-malicious. Malicious defects are backdoors and logic or time bombs; non-malicious ones are covert channels (storage or timing) and inconsistent access paths.

Unintentional defects are divided into:

  • data validation errors (addressing errors including buffer overflows, poor parameter value checks, misplaced checks, inadequate identification/authentication);
  • abstraction errors (object reuse, internal representation disclosure);
  • asynchronous defects (concurrency issues, including race conditions, deadlocks, check/use gaps, and multiple references to the same object);
  • inappropriate use of subcomponents (resource leaks, misunderstanding of responsibility);
  • functionality errors (exception handling defects, other security defects).
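
The check/use gap from the asynchronous-defect group deserves a small illustration. The sketch below contrasts a racy pattern with a safer one; the function names and the size limit are invented for the example.

```python
import os

# Racy pattern: the check and the use are separate system calls, so the
# file can be replaced between them (a check/use, or TOCTOU, gap).
def read_if_small_racy(path, limit=1024):
    if os.path.getsize(path) <= limit:      # check
        with open(path, "rb") as f:         # use: path may point elsewhere now
            return f.read()
    return None

# Safer pattern: open once, then check the already-open descriptor,
# which cannot be swapped out from under us.
def read_if_small(path, limit=1024):
    with open(path, "rb") as f:
        if os.fstat(f.fileno()).st_size <= limit:
            return f.read()
    return None
```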

To understand how security flaws can be introduced into software intentionally but not maliciously, consider a covert channel formed in a disk controller that optimizes request servicing with the elevator algorithm: disk requests are processed not in arrival order but as the head assembly passes over the requested blocks (see [14] for a systematic approach to identifying covert timing channels). A malicious sender of information can influence the order, and therefore the processing time, of requests by issuing its own disk requests in a particular order and thereby steering the direction of head movement. Here the role of the shared resource that admits (malicious) targeted influence is played by the common queue of requests for disk blocks, together with the current position and direction of movement of the heads. It is natural to consider this flaw intentional but non-malicious, since the covert channel arose not from an implementation error but from a design decision aimed at optimizing system performance.
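
A few lines suffice to show the channel. This sketch models SCAN ("elevator") scheduling only; real controllers are more elaborate, and the block numbers are arbitrary.

```python
def elevator_order(head, direction_up, requests):
    """SCAN ('elevator') scheduling: serve blocks in the current direction
    of head movement first, then sweep back the other way."""
    above = sorted(b for b in requests if b >= head)
    below = sorted((b for b in requests if b < head), reverse=True)
    return above + below if direction_up else below + above

# The victim is waiting for block 50. By also queueing requests for
# blocks 41 and 42, the sender pushes the victim's request later in the
# sweep: the change in the victim's service time is the observable signal.
assert elevator_order(40, True, [50, 60]) == [50, 60]
assert elevator_order(40, True, [50, 60, 41, 42]) == [41, 42, 50, 60]
```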

The largest and most important group of defects introduced through carelessness are data validation errors, or more precisely, insufficient checking of input data before use. Developing methods to prevent or detect such errors is a task of primary practical importance. Covert channels can wait…

About subliminal channels

As noted in [15], the so-called multi-aspect information security is currently being established, when attempts are made to take into account the entire spectrum of interests (sometimes conflicting with each other) of all subjects of information relations, as well as all types of configurations of information systems, including decentralized ones that do not have a single control center.

Security is subjective. The user has his own security, the content provider has his own (and here the user may be regarded as the enemy). New aspects of security are emerging, such as digital rights management. This trend is especially evident in the use of subliminal channels.

Let us recall (see [2]) that covert channels are non-standard channels of information transmission. Non-standard methods of transmitting information over legal channels (called enveloping channels in this context) are known as subliminal, or stego, channels; general information about them is given in [2]. Subliminal channels are used when a legal communication channel exists but something (for example, the security policy) prohibits transmitting certain information over it.

Note two important differences between covert and subliminal channels. First, despite the name, no one tries to hide the existence of covert channels; they simply transmit information through entities that were created for other purposes and not originally intended for communication. A subliminal channel, on the contrary, exists only as long as the adversary does not know about it. Second, the transmission time over a covert channel is considered unlimited, whereas for a subliminal channel it is determined by the characteristics of the enveloping channel. For example, if a graphic image is used for subliminal transmission, only as much can be transmitted as fits into that image without violating secrecy.

In general, subliminal channels are much more practical than covert ones, since they have a legal foundation: the enveloping channel. It is subliminal (not covert) channels that are the most suitable means of controlling a hostile multi-agent system. But they are needed not only by attackers. Subliminal channels can be used effectively by content providers who embed hidden «digital watermarks» in their content and want to control its distribution and consumers' compliance with digital rights. Another example, now classic, is the use of a subliminal channel by British Prime Minister Margaret Thatcher who, in order to find out which of her ministers was responsible for information leaks, distributed to them copies of the same document with different inter-word gaps.
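
The Thatcher trick is easy to reproduce. The sketch below encodes a per-recipient identifier in inter-word gaps (double space for 1, single space for 0); the memo text and the encoding convention are, of course, invented for the example.

```python
import re

def embed(text, bits):
    """Mark a document copy: bit 1 -> double space, bit 0 -> single space."""
    words = text.split()
    gaps = ["  " if b else " " for b in bits]
    gaps += [" "] * (len(words) - 1 - len(gaps))  # remaining gaps carry nothing
    return words[0] + "".join(g + w for g, w in zip(gaps, words[1:]))

def extract(stego_text, n_bits):
    """Read the identifier back out of a leaked copy."""
    runs = re.findall(r" +", stego_text)
    return [1 if len(r) > 1 else 0 for r in runs[:n_bits]]

memo = "the cabinet will discuss the budget on tuesday"
copy_for_minister = embed(memo, [1, 0, 1])   # minister's ID in the gaps
assert extract(copy_for_minister, 3) == [1, 0, 1]
assert copy_for_minister.split() == memo.split()  # wording is unchanged
```

The enveloping channel here is the memo itself, and the channel survives only as long as no one thinks to look at the spacing, which is exactly the fragility of subliminal channels noted above.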

Of course, under very general assumptions subliminal channels can be neither eliminated nor even detected (for example, a compressed JPEG image will always have room for hidden information). For covert and subliminal channels alike, the statement of [16] holds: «You can always send a bit.»

The meaningful question concerns the capacity and robustness of such channels, which are determined not only by the bandwidth of the enveloping channel and its noise characteristics, but also by the maximum size of the useful (hidden) payload and by the detector function that decides the admissibility of the transmitted information (see, for example, [17] and the sources cited there, among which we single out [18]).

The problems of subliminal channels have long been fruitfully studied from the standpoint of information theory, and many theoretically interesting and practically important results have been obtained. Let us note the possibility and efficiency of the joint use of covert and subliminal channels in a network environment. Thus, [19] describes the implementation of a network of anonymizers (see [15]) on top of HTTP servers and clients. Web surfing serves as the enveloping channel. HTTP servers act as the nodes of the anonymizer network, and interaction between them is carried out over covert channels in HTTP/HTML with the mediation of unsuspecting clients (primarily by means of request redirection and active content embedded, for example, in advertising banners on visited Web pages). As a result, it is possible not only to make the association between the sender and recipient of messages impossible, but to implement a stronger property, unobservability (even in the presence of a global observer). Web surfers acting as unwitting intermediaries enlarge the anonymity set that has to be analyzed, making it harder for the observer to obtain useful information.

(Of course, both attackers and security vendors are aware of the opportunities and challenges of using HTTP as an enveloping channel. For example, [20] describes a trainable Web Tap system that detects anomalies in outgoing HTTP transactions.)

Note also the obvious connection between the intelligence of planted agents (or elements of a multi-agent system) and the channel bandwidth, covert or subliminal, required to interact with them. The paper [21] gives an example of a highly intelligent Trojan horse: an expert system embedded in a trusted (multi-level security policy) strategic system for managing military supplies and troop movements, capable of determining from those supplies and movements whether offensive operations could begin next week. If such a program transmits just one bit of information (possible/impossible) per day, this is of great value for strategic planning. Meanwhile, under the formal requirements of the «Orange Book», covert channels with a bandwidth below one bit per ten seconds may be ignored entirely when auditing trusted systems. (A rare case in which the «Orange Book» makes a concession, and, as it turns out, in vain.)

The moral is that the analysis of covert and subliminal channels in general, and of their bandwidth in particular, must take into account the specifics of the information systems, the value of the information and the semantics of the interaction. Otherwise the results of the analysis risk being meaningless.

About side channels

Side channels can be considered a special case of covert channels. The role of the (involuntary) transmitter in such channels is played by regular components of an information system, and the role of the receiver by an external observer with appropriate equipment. Most often side channels exploit the timing of visible operations (timing attacks on RSA have become commonplace), their energy consumption, and/or side electromagnetic emissions and interference (SEMI), but acoustic channels can also be used for attacks, whether against a digital safe lock or a personal computer processor handling a secret key (see [22]).

Side channels are probably the most striking manifestation of the multi-aspect nature of modern information security. Those attacking information systems (information content, bank cards, the SIM cards of mobile phones, etc.) are usually their owners, who have plenty of time and the appropriate instruments. Combined with the fundamental impossibility of controlling physical access, these factors make side-channel attacks especially dangerous.

The targets of side-channel attacks are most often the cryptographic components of information systems, or more precisely their secret keys. For example, [23] describes a partitioning attack on the SIM cards of mobile phones (more precisely, on the COMP128 algorithm used for user authentication and session key generation), carried out by measuring energy consumption in order to clone the cards. The attack was honed to the point where only eight measurements with adaptively chosen input data suffice to determine the secret 128-bit key! In other words, an attacker needs to hold the SIM card for literally a minute.

The danger of differential power analysis attacks is illustrated quite vividly in [22]. In 1998 Bruce Schneier wrote that there is not enough silicon in the galaxy, nor enough time before the sun burns out, to mount a brute-force attack on the secret key (112 bits) of the 3DES algorithm. The minimum key length in the AES algorithm is 128 bits, yet a successful differential power analysis attack on an unprotected chip implementing AES can be carried out in under three minutes, from the start of the measurements to the end of the analysis.

A radical solution to the side-channel problem is possible if the following fundamental principle is observed: the data about operations that can be obtained from side channels must be statistically independent of the input data, the output data and any restricted-access information. Since the systems that most often need protection from side-channel attacks are ones with very limited resources, a correct and complete implementation of this principle is a very difficult task. Operation time is relatively easy to normalize; energy consumption is harder, but still possible (see, for example, [24]); SEMI is harder still. In practice, systems are hardened «as best one can» (which is typical of modern information security in general), and motivated attackers retain plenty of opportunities for effective attacks.
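
The «normalize the operation time» case of this principle has a standard, concrete instance: comparing secrets in constant time. The sketch below contrasts an early-exit comparison, whose running time depends on where the first mismatch occurs, with Python's standard `hmac.compare_digest`.

```python
import hmac

def naive_equal(a, b):
    """Early-exit comparison: running time depends on the position of the
    first mismatch, so timing leaks information about the secret."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a, b):
    """Examines every byte regardless of mismatches, keeping the timing
    statistically independent of the secret data."""
    return hmac.compare_digest(a, b)

assert constant_time_equal(b"secret-key", b"secret-key")
assert not constant_time_equal(b"secret-key", b"secret-keX")
```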

About agents — good and «different»

Agents and multi-agent systems (MAS) are one of the actively developing areas of modern programming technology. Agents deliver code to remote systems that expands the functionality of the latter, and multi-agent complexes allow for natural parallelization of complex task solutions. Both properties are important, for example, for efficient indexing of multimedia resources (see [25]). Multimedia files are large, and uploading them to a central server for indexing is expensive. It is easier to deliver the indexing code to the target system and then receive only the indexing results from there, which are significantly smaller in size. This approach is also good from the point of view of information security, if approached from the perspective of content providers and digital rights management, since it eliminates the downloading of paid resources and, at the same time, facilitates the dissemination of information about them.

Mobile agents rely on interfaces and capabilities present on most platforms. In addition, they are able to move from system to system along a planned route, which makes them autonomous and frees them from constant supervision, thereby saving computing and communication resources. In the context of information security, mobile agents can take on the role of knights-errant: monitoring remote systems for the timely application of corrective patches and the absence of signs of malicious activity (see [26]), detecting and repelling, together with their «brothers in arms», distributed coordinated attacks (see [27]), and introducing elements of dynamism and self-organization into the defense (see [28]).

Incrementality is an important advantage of mobile agents and multi-agent systems. Each individual agent can solve its own, private task (highlighting certain properties of multimedia content, checking the installation of certain software corrections, identifying certain types of malware, implementing certain provisions of the security policy), but their replenished set turns out to be an adequate, relevant reflection and means of implementing the current security policy, changing under the influence of changes in the environment, when new risks and threats appear.

Mobile agent and multi-agent technology is a powerful tool that should be used with caution and awareness of the associated risks. The vulnerability of MAS communications is only one, and probably not the most difficult, problem. It should be taken into account (see, for example, [29]) that:

  • agents can attack target platforms (platform security problem);
  • agents themselves can be attacked by platforms, other agents, and external entities such as viruses (the agent security problem).

Reliable protection of both platforms and agents can only be built taking into account the semantics of programs (see [2]), but some particular solutions can be obtained by formal, cryptographic methods, authenticating agents and their sources, monitoring integrity, and ensuring the confidentiality of agent code and data (see [29], [30], [31]).

The security issues of mobile agents and target platforms are particularly complicated by their mutual influence. Suppose that a mobile agent contains sensitive data, only part of which is intended for each platform it visits. These parts are encrypted with the public keys of the corresponding platforms.

In principle, decryption can be performed by both agents and platforms, but both approaches have drawbacks (see [31]). If decryption is performed by an agent, the platform must transmit its private key to it, which is (too) risky. When decryption is performed by a platform, the platform must know the structure of the agent, which contradicts mobility. In addition, attempts to decrypt data stolen from agents must be prevented. The compromise solution proposed in [31] is based on the platform providing some basic cryptographic service that can only be used by authenticated agents.

Agents have a bootstrap core of simple structure that decrypts the body on a specific platform and verifies the integrity of the result.

Coordination of MAS actions is another difficult problem. If agents implement security policy within a large corporate network, then when the provisions of this policy change, one set of agents must be recalled and another must be launched instead. In theory, everything is simple, but this can be hindered by asynchronous movements of mobile agents, temporary lack of communication with remote network segments, etc. In general, the legitimate use of MAS is not an easy matter, but it opens up a lot of opportunities for malicious agents.

As with covert and subliminal channels, malware must be combated: detected, and eliminated and/or contained. The paper [2] substantiates the advantages of an approach based on containment that takes into account the semantics of programs and protocols. As for detection, if it is generally hopeless for covert and subliminal channels, for malware it is more hopeless still. As the results of [32] show, even the best commercial antivirus tools fail in the face of simple program obfuscation methods such as code reordering, a fact that is absolutely obvious and natural from the point of view of information theory. The situation can be improved somewhat by more sophisticated matching against patterns of malicious behavior, as proposed in [33], but this «progress» can in no way be called decisive.

In such conditions the only hope lies in childlike psychological tricks, in attempts to distract hackers from the Internet by providing them with a separate corner for entertainment and demonstrations of their power (see [34]), or in reducing the likelihood of users unintentionally launching malware by extending to the system loader the principle «everything that is not permitted is prohibited» and protecting executable files with cryptographic checksums (see [35]).
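
The checksum-allowlist idea can be sketched in a few lines. This is an illustration only: the `Launcher` class is invented, and in a real loader the allowlist would be signed and kept on read-only, trusted storage.

```python
import hashlib

def sha256_of(path):
    """Hash a file in chunks, so large executables do not need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

class Launcher:
    """Everything not permitted is prohibited: a program may start only
    if its checksum appears on the trusted allowlist."""
    def __init__(self, allowed_digests):
        self.allowed = frozenset(allowed_digests)

    def check(self, path):
        if sha256_of(path) not in self.allowed:
            raise PermissionError(f"{path}: checksum not in allowlist")
        return True
```

A modified or newly dropped executable hashes to a value absent from the allowlist and is refused, regardless of whether any signature database knows it as malware.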

About backdoors and repair agents

If for some reason it is necessary to constantly monitor the state of a remote system and, when necessary, influence it, backdoors are constructed and used. Usually such backdoors are associated with malicious activity, but, as the project under way at Rutgers University shows (see http://discolab.rutgers.edu/bda/), they also have quite legitimate applications, such as remote monitoring and remote repair. In this context backdoors are more appropriately called technical interfaces. The article [36] describes a prototype implementation of technical interfaces for FreeBSD.

Since the ultimate goal is to repair a remote target system that has become inoperable, the latter should be treated as a passive object. The proposed interface to it reduces to remote memory access implemented through a programmable network card. The target system must maintain sensor and external representation areas, reading which makes it possible to detect and diagnose abnormal situations (such as lack of progress in running applications, or excessive consumption or exhaustion of resources), as well as «repair hooks»: memory areas, writes to which can correct the situation (for example, the process table or the in-memory copy of the file system superblock). In principle, given access to the target system's memory, the service it implements can be migrated to another network node (for example, within a cluster configuration) if repair in place is impossible (see [37]).

Of course, from the information security point of view, backdoors are a means with very serious side effects. If the monitoring system is compromised, an attacker can gain full control over the target system. Interference with the operation of the programmable network card is just as dangerous. As a countermeasure, such network cards can be implemented in a protected version, similar to cryptographic modules, and remote control can be exercised from several machines at once, and only when they are in full agreement.

The described approach is good, first of all, for restoring the system's functionality after unintentional or intentional attacks on availability (for example, when a fork bomb is triggered, RAM is exhausted, or the file system is damaged). If the system has been hacked by intruders and taken under control by the introduction of malware such as rootkits, common sense suggests that the only way to restore trust in it is a complete reinstallation from guaranteed safe media, followed by the application of all available corrective patches and the restoration of undamaged user data. However, in a large corporate network such an operation may require lengthy manual work by highly qualified specialists and may be economically inexpedient or practically impossible. Instead, one can try to implement the idea of automatic self-healing of systems (i.e., removing all malicious information without losing good information) by embedding repair agents in them and protecting the latter with virtual machine technology (see [38]).

Repair agents, like other information security tools, must satisfy the following design principles: 

  • simplicity;
  • isolation (the agent must be protected from unauthorized modification or bypass);
  • trust;
  • visibility (the agent must be able to see the entire system);
  • adaptability (the agent's operation and the amount of resources it consumes must depend on the state of the controlled system and not interfere with its normal functioning).

The general scheme of the repair agent is simple. It remembers a known safe state of the production system, monitors all changes made, periodically checks for signs of anomalous behavior and unauthorized changes, and returns the system to a safe state if necessary. Since the production system runs within a virtual machine, it cannot interfere with the agent, which is a trusted, immutable extension of the kernel.
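That loop — snapshot, monitor, revert — can be shown as a minimal sketch. The representation of «system state» as a plain dictionary is an assumption made for illustration; a real agent from [38] sits outside the production virtual machine and works on files and kernel structures.

```python
# Minimal sketch of the repair-agent loop: remember a known safe state,
# detect deviations, roll them back. The dict-based "state" is a toy
# stand-in for real system state.
import copy

class RepairAgent:
    def __init__(self, system_state):
        self.safe = copy.deepcopy(system_state)    # known safe snapshot

    def check_and_repair(self, system_state):
        """Return the keys that deviated, restoring the safe values."""
        anomalies = []
        for key, value in self.safe.items():
            if system_state.get(key) != value:     # unauthorized change
                anomalies.append(key)
                system_state[key] = value          # roll back
        for key in list(system_state):
            if key not in self.safe:               # object added covertly
                anomalies.append(key)
                del system_state[key]
        return anomalies
```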

Of course, in practice everything is much more complicated. First, if an attacker gains physical access to the system, he will be able to bypass the repair agent; physical threats can be countered only with hardware support. Second, suspicious activity is detected with some delay, so critical data is at risk of being compromised. Third, a «known safe» system image may be incomplete (the system administrator or a user can add or change something bypassing the agent), in which case the unauthorized changes cannot be detected and eliminated. System security cannot be higher than the level of discipline existing in the organization and recorded in its policy.

Automatic recovery after compromise should be one of the main goals in the design and implementation of advanced information systems. On the one hand, we should accept the inevitability of successful attacks or, at least, hardware failures and administrative errors. On the other hand, the cost of hardware, including data carriers, is rapidly falling, so there is a technical and economic opportunity to organize detailed logging of system operation and, in particular, to record all changes in the file system, while maintaining the ability to roll back malicious or erroneous activity.

The main problem is to roll back all unauthorized changes without affecting legitimate modifications. The article [39] describes possible approaches to solving this problem and a prototype implementation — the Taser intrusion recovery system. The idea is to associate changes with the processes that implement them and to set rules dividing the processes into «clean» and «unclean». It is claimed that the results are satisfactory in terms of the level of automation, logging overhead, and recovery time.
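The selective-rollback idea can be illustrated with a toy model. Everything here — the log format, the «dirty» process set, the dict of file contents — is invented for the example and is far simpler than Taser's actual dependency tracking in [39]: each write is logged with the process that made it, and writes by processes matched by a «dirty» rule are undone, newest first, while legitimate writes stay.

```python
# Toy selective rollback: undo only the writes made by "dirty" processes.
# log: list of (pid, path, old_value, new_value), oldest entry first.

def recover(files, log, dirty):
    """Roll back, newest first, every surviving write by a dirty process."""
    for pid, path, old, new in reversed(log):
        if pid in dirty and files.get(path) == new:
            if old is None:
                files.pop(path, None)   # the file was created by the intruder
            else:
                files[path] = old       # restore the pre-write contents
    return files
```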

About secret passages and rootkits

Rootkits are known to be used so that an attacker who has hacked a system and obtained superuser privileges can retain covert, unauthorized superuser access to it afterwards. That is, a rootkit is both a secret passage and a means of masking malicious activity.

Rootkits are a type of Trojan program and are divided into binary and kernel-level rootkits. The former replace system utilities, the latter replace kernel functions that implement system calls. The methodology for classifying rootkits and detailed information about their functioning mechanisms can be found, for example, in the article [40].

To detect binary rootkits, integrity monitoring tools for key system files (such as Tripwire) are sufficient.
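The essence of such monitoring fits in a few lines: hash the key binaries once from a trusted state, then compare. The paths below are illustrative (a real baseline is computed from trusted media and stored read-only), and this is a sketch of the general technique, not of Tripwire's actual implementation.

```python
# Baseline integrity check: record cryptographic hashes of key files,
# later report any file whose current hash differs.
import hashlib

def baseline(contents):
    """contents: {path: bytes}. Returns {path: sha256 hex digest}."""
    return {p: hashlib.sha256(data).hexdigest() for p, data in contents.items()}

def changed_files(base, contents):
    """Paths whose current hash differs from the recorded baseline."""
    return sorted(
        p for p, digest in base.items()
        if hashlib.sha256(contents.get(p, b"")).hexdigest() != digest
    )
```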

The situation with kernel-level rootkits is much more complicated. The signature approach is, of course, ineffective here as well: if, for example, the address of the system call table in the corresponding interrupt handler is changed, what signature should we look for, and where? An additional technical problem in implementing file scanning and integrity checking is that the results returned by system services cannot be trusted.

If a rootkit is implemented using the mechanism of loadable kernel modules (the most common method on Linux systems), one can try, as recommended by the authors of [41], to perform a binary static analysis of modules, with elements of symbolic execution, before loading, in order to detect signs of malicious behavior such as writes to kernel control structures. In essence, however, this is a generalized antivirus approach combining signature search and heuristics, and its limitations are known. Admittedly, rootkits need to be detected not among arbitrary programs but among modules that share a certain internal structure, typical, for example, of device drivers; still, program obfuscation methods are sufficient to hide the signs of maliciousness. More precisely, one can predict an «arms race» between the means of detecting signs of malicious behavior and the means of hiding them. According to the results published in [41], the methods proposed and implemented by its authors detected all the rootkits tested (there were eight) and did not produce a single false positive on almost five hundred legitimate modules. The analysis time, as a rule, did not exceed 10 ms, with a maximum of 420 ms (Pentium IV, 2 GHz, 1 GB RAM). So, despite the theoretical problems, whose presence and seriousness the authors are of course aware of, the first practical results are encouraging, although the technical problems of integrating with the kernel and making the controlling module loader impossible to bypass have not yet been solved.
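A heavily simplified model conveys the core of the check. The «instruction» tuples and the address range standing in for the system call table are invented for illustration; the real analysis in [41] works on x86 machine code and adds symbolic execution to resolve computed targets.

```python
# Toy static check: scan a module's "instructions" and flag direct
# writes into a forbidden address range (a stand-in for the system
# call table). Addresses are illustrative.

SYSCALL_TABLE = range(0xC0100000, 0xC0100400)   # hypothetical layout

def is_suspicious(instructions):
    """instructions: list of (opcode, target_address or None)."""
    return any(
        op == "store" and target is not None and target in SYSCALL_TABLE
        for op, target in instructions
    )
```

A write into the protected range is exactly what a hooking rootkit must perform, which is why this single heuristic already catches the classic system-call-table redirections.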

Loadable modules are a serious security threat to monolithic operating systems because they have unrestricted access to kernel code and data structures. Such modules account for up to 70% of the kernel code and 70% to 90% of bugs, with an average lifetime of about 20 months (see [42]). Even if we ignore malicious rootkits, there are still threats that exist due to vulnerabilities in hastily written device drivers. It is advisable to somehow organize access control in the kernel to prevent exploitation of vulnerabilities.

The paper [42] develops the direction outlined in [41]. It provides for specifications of acceptable and unacceptable behavior (white and black lists: addresses to which control may or may not be transferred, data to which access is allowed or prohibited, regions where machine instructions must not be executed, etc.). Compliance with the specifications is partially checked statically; the rest is controlled dynamically, by inserting verification code. It is claimed that the overhead in this case does not exceed 23%. Note, however, that the future lies not in such obviously temporary solutions but in modular operating systems, full-fledged security models for their components, and hardware support for enforcing the security policy.

Rootkits can be considered a type of stealth software, a category that also includes, for example, covert loggers of user sessions. Both the executable code itself and the resources and associated information it uses can be hidden. For example, malicious code can be placed in the flash memory of a video card and, for execution, «injected» into an existing process. Resources, such as files and processes, can be hidden from the user by intercepting system calls using rootkit technology. There are at least two ways to try to detect stealth software:

  • try to detect hiding mechanisms (the approaches discussed above belong to this category);
  • try to obtain information about the system in several ways and find differences in the results (for example, compare the output of the ls and echo * commands or information from ps and from the process table in the kernel, naturally, having first brought the results to a single format).
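The second, cross-view approach reduces to a set difference once both listings are brought to a single format. The sketch below is a minimal illustration of that comparison; the entry names are hypothetical.

```python
# Cross-view comparison in miniature: take the same inventory from two
# sources (e.g. a user-level listing and a lower-level one) and report
# entries present in one view but not the other. A non-empty difference
# is a symptom of hiding.

def cross_view_diff(high_level, low_level):
    high, low = set(high_level), set(low_level)
    return {
        "hidden": sorted(low - high),   # visible only at the low level
        "ghost": sorted(high - low),    # visible only at the high level
    }
```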

Since hidden resources are still manipulated by the system, one can hope that they remain visible in some (low-level) representation. This is the main idea of the approach proposed in [43]. It may seem wrong to look for symptoms instead of the root cause of the disease, but if symptoms are easier to identify, then why not do it? «Comparison scanners» can be regularly launched on all computers in a corporate network; scanning one gigabyte of disk space, according to the data presented in [43], takes about half a minute, so such an approach seems quite practical.

(Recall that article [2] discusses the use of the «differential» method for identifying hidden channels.)

Of course, it is desirable not only to prevent the installation of rootkits or promptly detect them, but also to self-heal compromised systems. The latter is the topic of the article [44]. The idea is to track changes in the system call table, the appearance of hidden files, processes and network interactions, and upon detection of malicious activity, to eliminate it by restoring the correct state of the system call table, deleting hidden files, terminating hidden processes, blocking hidden network connections. Ironically, the prototype of this protective tool is implemented as a loadable kernel module and, therefore, can itself become the target of rootkit attacks, not to mention the problem of complete diagnostics and treatment of systems.
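The repair step described above can be modeled on a dispatch table. The table layout and entry names are invented for illustration, and this is of course a sketch of the idea rather than the kernel-module prototype from [44]: compare the live table against a pristine copy and restore any hooked entries.

```python
# Toy version of the "restore the system call table" repair step:
# both arguments map syscall names to handler addresses.

def restore_syscall_table(live, pristine):
    """Restore hooked entries in place; return the names repaired."""
    repaired = []
    for name, addr in pristine.items():
        if live.get(name) != addr:      # entry was redirected by a rootkit
            live[name] = addr           # point it back at the real handler
            repaired.append(name)
    return repaired
```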

The question of who will guard the guards is one of the eternal and difficult ones. If the protected and protecting systems coincide, there is no guarantee that after a compromise the protective tools will work correctly and the treatment they provide will be effective and complete. One way to isolate protective tools is the above-mentioned virtual machine technology. Its application in the context of detecting suspicious activity is considered in [45]. Unfortunately, in a network environment this technology turns out to be, on the one hand, insufficient and, on the other, expensive (and the authors of [45] are, of course, aware of the seriousness of these problems). It is insufficient because, despite virtualization, the system remains a single network node, susceptible to attacks on availability and to remote exploitation of vulnerabilities in network services. It is expensive because of the need for frequent switching between the contexts of the virtual machines and the monitor controlling them.

(We note in parentheses that virtual machine technology is a wonderful means of efficiently and economically implementing baits and traps, numbering in the tens of thousands, on several hardware servers, see [46].)

(We also note in parentheses, as a curiosity, that the authors of [47] suggest learning obfuscation methods from rootkit developers; that said, the article contains an excellent overview sidebar outlining the main facts and provisions related to rootkits.)

Only hardware support promises a breakthrough in information security, the implementation of systems that are resistant to attacks, capable of quickly and automatically self-healing. Unfortunately, systems like the one described in [48] are a matter of the future, and not very near at that.

Conclusion

The article [2] quite rightly emphasizes how important it is to correctly formulate the problem and consider it not in isolation, but in a real environment. Correct formulation, associated with controlled execution (restriction) of programs taking into account their semantics, is important for all types of hidden and secret channels.

At the same time, the problem of hidden and secret channels from a practical point of view cannot be considered one of the most acute. Side channels are a much more real threat to embedded systems. The security of mobile agents is a sore point of Internet/Intranet technology. Finally, rootkits and hidden software in general are a dangerous threat that ordinary users and many system administrators are unable to resist.

Information security problems cannot be solved by software alone. In our opinion, there is currently a trend toward expanding hardware support for security tools, but giving such support real shape is not a matter of a single year. For the problem to receive a real solution, economic and legal prerequisites are needed, not just frightening statistics of malicious activity and estimates of the losses it causes.

The root cause of information security problems should be sought in the complexity of modern systems. To combat complexity means to make systems more secure. Unfortunately, the desire to get ahead of competitors and offer a system with richer functionality forces manufacturers to move in the opposite direction. At present, there are no visible reasons that could change this trend. System integrators and consumers can only rely on themselves, on their ability to choose the simplest, most thoughtful architecture and maintain production systems in a secure state with technical and organizational measures, spending energy and resources on repelling real, not imaginary threats.

Literature

[1] E.E. Timonina — Hidden Channels (review). — Jet Info, 2002, 11
[2] A. Galatenko — About hidden channels and more. — Jet Info, 2002, 11
[3] R.A. Kemmerer — A Practical Approach to Identifying Storage and Timing Channels: Twenty Years Later. — Proceedings of the 18th Annual Computer Security Applications Conference (ACSAC’02).— IEEE, 2002
[4] E. Tumoian , M. Anikeev — Network Based Detection of Passive Covert Channels in TCP/IP. — Proceedings of the IEEE Conference on Local Computer Networks 30th Anniversary (LCN’05).— IEEE, 2005
[5] S. Cabuk , C.E. Brodley, C. Shields — IP Covert Timing Channels: Design and Detection. — Proceedings of the CCS’04.— ACM, 2004
[6] P.A. Karger, H. Karth — Increased Information Flow Needs for High-Assurance Composite Evaluations. — Proceedings of the Second IEEE International Information Assurance Workshop (IWIA’04). — IEEE, 2004
[7] V.B. Betelin, S.G. Bobkov, V.A. Galatenko, A.N. Godunov, A.I. Gryuntal, A.G. Kushnirenko, P.N. Osipenko — Analysis of Hardware and Software Development Trends and Their Impact on Information Security. — Collection of articles edited by Academician of the Russian Academy of Sciences V.B. Betelin.— M.: NIISI RAS, 2004
[8] P. Efstathopoulos, M. Krohn, S. VanDeBogart, C. Frey, D. Ziegler, E. Kohler, D. Mazieres, F. Kaashoek, R. Morris — Labels and Event Processes in the Asbestos Operating System. — Proceedings of the SOSP’05.— ACM, 2005
[9] Y. Zhu, R. Bettati — Anonymity vs. Information Leakage in Anonymity Systems. — Proceedings of the 25th IEEE International Conference on Distributed Computing Systems (ICDCS’05).— IEEE, 2005
[10] B. Graham, Y. Zhu, X. Fu, R. Bettati — Using Covert Channels to Evaluate the Effectiveness of Flow Confidentiality Measures. — Proceedings of the 2005 11th International Conference on Parallel and Distributed Systems (ICPADS’05).— IEEE, 2005
[11] Y. Zhu , R. Sivakumar — Challenges: Communication through Silence in Wireless Sensor Networks. — Proceedings of the MobiCom’05.— ACM, 2005
[12] R. Browne — An Entropy Conservation Law for Testing the Completeness of Covert Channel Analysis. — Proceedings of the CCS’94.— ACM, 1994
[13] S. Weber , P.A. Karger, A. Paradkar — A Software Flaw Taxonomy: Aiming Tools At Security. — Proceedings of the Conference on Software Engineering for Secure Systems — Building Trustworthy Applications (SESS’05).— ACM, 2005
[14] J.C. Wray — An Analysis of Covert Timing Channels. — IEEE, 1991
[15] V.B. Betelin, V.A. Galatenko, M.T. Kobzar, A.A. Sidak, I.A. Trifalenkov — Overview of protection profiles built on the basis of the «General criteria». Specific requirements for security services. — «Information Technology Security», 2003, 3
[16] K. Loepere — Resolving Covert Channels using a B2 Class Secure System. — Honeywell Information Systems.
[17] J.J. Harmsen, W.A. Pearlman — Capacity of Steganographic Channels. — Proceedings of the MM-SEC’05.— ACM, 2005
[18] I.S. Moskowitz, L. Chang, R. Newman — Capacity is the Wrong Paradigm. — Proceedings of the 2002 Workshop on New Security Paradigms.— ACM, 2002
[19] M. Bauer — New Covert Channels in HTTP. Adding Unwitting Web Browsers to Anonymity Sets. — Proceedings of the WPES’03.— ACM, 2003
[20] K. Borders , A. Prakash — Web Tap: Detecting Covert Web Traffic. — Proceedings of the CCS’04.— ACM, 2004
[21] D. Slater — A note on the Relationship Between Covert Channels and Application Verification. — Computer Sciences Corporation, 2005
[22] K. Tiri , I. Verbauwhede — Simulation Models for Side-Channel Information Leaks. — Proceedings of the DAC 2005.— ACM, 2005
[23] J.R. Rao, P. Rohatgi, H. Scherzer, S. Tinguely — Partitioning Attacks: Or How to Rapidly Clone Some GSM Cards. — Proceedings of the 2002 IEEE Symposium on Security and Privacy (S&P’02).— IEEE, 2002
[24] R. Muresan, C. Gebotys — Current Flattening in Software and Hardware for Security Applications. — Proceedings of the CODES+ISSS’04.— ACM, 2004
[25] V. Roth , U. Pinsdorf , J. Peters — A Distributed Content-Based Search Engine Based on Mobile Code. — Proceedings of the 2005 ACM Symposium on Applied Computing (SAC’05).— ACM, 2005
[26] M. Carvalho, T. Cowin, N. Suri, M. Breedy, K. Ford — Using Mobile Agents as Roaming Security Guards to Test and Improve Security of Hosts and Networks. — Proceedings of the 2004 ACM Symposium on Applied Computing (SAC’04). — ACM, 2004
[27] T. Pedireddy, J.M. Vidal — A Prototype MultiAgent Network Security System. — Proceedings of the AAMAS’03.— ACM, 2003
[28] R. Menezes — Self-Organization and Computer Security. — Proceedings of the 2005 ACM Symposium on Applied Computing (SAC’05).— ACM, 2005
[29] J. Page , A. Zaslavsky , M. Indrawan — Countering Agent Security Vulnerabilities using an Extended SENSE Schema. — Proceedings of the IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT’04).— IEEE, 2004
[30] J. Page, A. Zaslavsky, M. Indrawan — Countering Security Vulnerabilities in Agent Execution using a Self Executing Security Examination. — Proceedings of the AAMAS’04.— ACM, 2004
[31] J. Ameiller , S. Robles , J.A. Ortega-Ruiz — Self-Protected Mobile Agents. — Proceedings of the AAMAS’04.— ACM, 2004
[32] M. Christodorescu , S. Jha — Testing Malware Detectors. — Proceedings of the ISSTA’04.— ACM, 2004
[33] M. Christodorescu , S. Jha , S.A. Seshia, D. Song, R.E. Bryant — Semantics-Aware Malware Detection. — Proceedings of the 2005 IEEE Symposium on Security and Privacy (S&P’05).— IEEE, 2005
[34] J.A.M. McHugh, F.P. Deek — An Incentive System for Reducing Malware Attacks. — Communications of the ACM, 2005, 6
[35] J.V. Harrison — Enhancing Network Security By Preventing User-Initiated Malware Execution. — Proceedings of the International Conference on Information Technology Coding and Computing (ITCC’05).— IEEE, 2005
[36] A. Bohra, I. Neamtiu, P. Gallard, F. Sultan, L. Iftode — Remote Repair of Operating System State Using Backdoors. — Proceedings of the International Conference on Autonomic Computing (ICAC’04).— IEEE, 2004
[37] F. Sultan , A. Bohra , S. Smaldone , Y. Pan , P. Gallard , I. Neamtiu , L. Iftode — Recovering Internet Service Sessions from Operating System Failures. — IEEE Internet Computing, 2005, March/April
[38] J.B. Grizzard, S. Krasser, H.L. Owen, G.J. Conti, E.R. Dodson — Towards an Approach for Automatically Repairing Compromised Network Systems. — Proceedings of the Third IEEE International Symposium on Network Computing and Applications (NCA’04).— IEEE, 2004
[39] A. Goel , K. Po , K. Farhadi , Z. Li , E. de Lara — The Taser Intrusion Recovery System. — Proceedings of the SOSP’05.— ACM, 2005
[40] J. Levine , J. Grizzard , H. Owen — A Methodology to Detect and Characterize Kernel Level Rootkit Exploits Involving Redirection of the System Call Table. — Proceedings of the Second IEEE International Information Assurance Workshop (IWIA’04).— IEEE, 2004
[41] C. Kruegel, W. Robertson, G. Vigna — Detecting Kernel-Level Rootkits Through Binary Analysis. — Proceedings of the 20th Annual Computer Security Applications Conference (ACSAC’04).— IEEE, 2004
[42] H. Xu , W. Du , S.J. Chapin— Detecting Exploit Code Execution in Loadable Kernel Modules. — Proceedings of the 20th Annual Computer Security Applications Conference (ACSAC’04).— IEEE, 2004
[43] Y.-M. Wang, D. Beck, B. Vo, R. Roussev, C. Verbowski — Detecting Stealth Software with Strider GhostBuster. — Proceedings of the 2005 International Conference on Dependable Systems and Networks (DSN’05).— IEEE, 2005
[44] S. Ring , D. Esler , E. Cole — Self-Healing Mechanisms for Kernel System Compromises. — Proceedings of the WOSS’04.— ACM, 2004
[45] M. Laureano , C. Maziero , E. Jamhour — Intrusion Detection in Virtual Machine Environments. — Proceedings of the 30th EUROMICRO Conference (EUROMICRO’04). — IEEE, 2004
[46] M. Vrable , J. Ma , J. Chen , D. Moore , E. Vandekieft , A.C. Snoeren, G.M. Voelker, S. Savage — Scalability, Fidelity, and Containment in the Potemkin Virtual Honeyfarm. — Proceedings of the SOSP’05.— ACM, 2005
[47] S. Ring , E. Cole — Taking a Lesson from Stealthy Rootkits. — IEEE Security & Privacy, 2004, July/August
[48] W. Shi , H.-H.S. Lee, G. Gu, L. Falk — An Intrusion-Tolerant and Self-Recoverable Network Service System Using A Security Enhanced Chip Multiprocessor. — Proceedings of the Second International Conference on Autonomic Computing (ICAC’05)— IEEE, 2005
