INSIDE – how much there is in that sound…
It is probably unnecessary to talk about the relevance of the problem again.
It is only worth defining it more specifically in order to imagine what kind of enemy we are dealing with. We will talk about theoretical methods of protecting data from users who are authorized to work with this data by virtue of their job description.
Why is this interesting?
Protection of an information system from an external intruder is fairly well formalized today: methods, standards and tools have been developed, a variety of products has reached the market, and specialists have been trained. Although this problem cannot be called completely solved, its solution now requires quantitative rather than qualitative effort.
The situation is different with so-called «insider attacks», that is, attacks from within.
At first glance it seems that here, too, vendors' catalogs are full of all sorts of «means of protection against NSD» (unauthorized access), but in practice they boil down to yet another original way of entering a password (which, of course, is also important).
Let's try to understand the issue.
Suppose we are database developers, and the database in question belongs to a credit history bureau.
Once we have protected all database access services from outside penetration and adopted the postulate that a user must pass identification, authentication and authorization before accessing data, we have solved one problem and created a new one. We have created the data entities, defined user rights for them, accounted for all the nuances, trained the administrator: the system is protected.
Now let's look at this situation from the side of the database owners. We (the owners) ordered the development of the database, demanded that all approaches to it be protected and asked to limit user rights.
At the same time, authorized users of the system (and in our example, these are employees of the head office of the credit history bureau) must work with the data.
For example, they must respond to written requests to the bureau, which means they can retrieve any credit history from the database. Or perhaps all of them at once.
Further developments.
The database grows, users come and go, and we appoint a security administrator (or even a group of such administrators).
We are faced with the problem of the superuser.
Here an important logical feature of the problem of internal attacks appears: it is fundamentally impossible to protect against an insider.
This is very similar to Russell's barber paradox, about the barber who shaves exactly those villagers who do not shave themselves; let's try to rephrase it in our terms:
The administrator-barber blocks access to the system for everyone who should not have it, i.e. for everyone who does not block his own access.
And then the standard reasoning: if the administrator does not block his own access, then he should block it.
If the administrator blocks access to himself, then he is no longer an administrator, because he can only block access to those who cannot (do not want) to block it themselves.
Then, who blocks access to the administrator?
This amusing line of reasoning leads the database owners to some sad conclusions.
However, the situation is not so hopeless.
We must not forget that this problem is more than a century old (and maybe even a millennium old).
And over the past century, totalitarian state machines have achieved truly amazing results. The key word in the solution they used is regulation. Of course, these methods are still effective.
Let's return to our example, the credit bureau. Obviously, at the creation stage, we must hire a «security specialist».
This employee need not even be an IT specialist, but rather an information security specialist in the broad sense (for example, a former employee of the Internal Affairs Directorate). His responsibilities include drawing up regulations for access to protected information.
The documents he develops contain lines such as: «…no removable media», «prohibit access outside working hours», «…password change procedure no less often than…», «limit the functionality of the workplace…», «isolate the computer network from…», «…bears personal responsibility», «…is resolved in court», etc.
The weak point of this approach is obvious – the human factor.
We will not dwell on information system security regulations now; many works are devoted to them, and we will return to them later, in a different aspect.
Nor will we dwell on psychological methods of solving this problem (motivating the employees who have access to the database, or perhaps intimidating them?), since this lies far beyond the scope of this article. Instead, we will try to develop information system design concepts that help automate protection against internal attacks.
So, if the problem has no solution, its negative impact should be reduced.
If it is fundamentally impossible to protect data from internal attacks, we can at least try to narrow the attack surface, the front along which internal intruders can strike.
The ideal option is a database that fits into a paper notebook: you can put it in your pocket and never let go of it, even while someone is reading data from it.
Then the only moment when it is vulnerable is the moment this database is taken out of the pocket (we took care of pickpockets, that is, external threats, at the very beginning).
What separates this situation from reality is not even the fact that a real database of credit histories will not fit in a notebook, but the fact that its maintenance requires special skills that the owner of the database generally does not possess.
It would be convenient to entrust the management of the database to some intellectual entity.
This entity, let's call it a «gnome», must be able to transfer records, manage the database structure and perform maintenance. At the same time, the «gnome» must be sealed off from the outside world in a «black box», which even for the appointed administrator serves as an intermediate layer of abstraction over the structure and records of the database.
Gnome in a box
This is the basic concept for creating an information system protected from internal attacks. Its essence is to concentrate all powers of access to the database within a single software module, the «gnome».
And the box for it will be a set of measures.
The «gnome» communicates with the outside world at the level of information messages (via web services, for example); this access channel must be universal and unique.
That is, only an «envelope» that conforms to the agreed format can get into the «box».
Each «envelope», i.e. each request to perform an operation, is accompanied by an electronic signature of the sender.
Thus, this software module is the only way to access the system (or access the closed data of the system).
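To make the idea of the «envelope» more concrete, here is a minimal sketch in Python of what a signed request and its verification might look like. The JSON layout, the field names and the use of Ed25519 keys from the cryptography package (standing in for users' digital signature certificates) are illustrative assumptions, not the format of the real system.

```python
import json
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

def make_envelope(private_key, operation, params):
    # The body is canonicalized JSON so that signing and verification see identical bytes.
    body = json.dumps({"operation": operation, "params": params}, sort_keys=True)
    return {"body": body, "signature": private_key.sign(body.encode()).hex()}

def open_envelope(public_key, envelope):
    # The "gnome" drops anything whose signature does not verify against a known certificate.
    try:
        public_key.verify(bytes.fromhex(envelope["signature"]), envelope["body"].encode())
    except InvalidSignature:
        return None
    return json.loads(envelope["body"])

user_key = ed25519.Ed25519PrivateKey.generate()
envelope = make_envelope(user_key, "get_credit_history", {"subject_id": 42})
print(open_envelope(user_key.public_key(), envelope))
```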
How can this concept be implemented? In general terms, it might look like this…
But first, a few words about the «public key infrastructure».
This is a set of technologies, algorithms, and standards, on which, in particular, the so-called electronic digital signature certificate is implemented.
Each certificate corresponds to a key pair for asymmetric encryption (the public key is embedded in the certificate, while the private key is stored separately in a secure container) and includes a description of the certificate, and the whole certificate carries the electronic signature of the issuing organization (the certification authority).
From the user's point of view, a digital signature certificate allows signing data blocks, verifying the signature and encrypting.
Due to the description, the certificate is also convenient to use as an identification tool.
In our example, all users of the database and, of course, the owner must be provided with reliable digital signature certificates.
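As a toy illustration of this structure (not real X.509, and not the certificates actually used in practice), a «certificate» below is simply a description plus a public key, signed by a certification authority's key; all names and fields are hypothetical.

```python
import json
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

def issue_certificate(ca_key, subject_public_key, description):
    # The certificate bundles the subject's public key and a description,
    # and carries the certification authority's signature over both.
    public_hex = subject_public_key.public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw).hex()
    payload = json.dumps({"public_key": public_hex, "subject": description}, sort_keys=True)
    return {"payload": payload, "ca_signature": ca_key.sign(payload.encode()).hex()}

ca_key = ed25519.Ed25519PrivateKey.generate()     # the certification authority
user_key = ed25519.Ed25519PrivateKey.generate()   # the private key stays in the user's secure container
certificate = issue_certificate(ca_key, user_key.public_key(),
                                {"name": "Bureau operator", "department": "head office"})
# Anyone holding the authority's public key can check that the certificate is genuine.
ca_key.public_key().verify(bytes.fromhex(certificate["ca_signature"]),
                           certificate["payload"].encode())
```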
Let's return to the implementation.
Since the «gnome» is the only entity that has access to private tables, it must be the only entity that has the superuser password. The question arises, where to store it?
Obviously, it can only be stored in encrypted form. It must be decrypted with the private key of the owner's certificate.
At the same time, keeping the owner's digital signature certificate on the server permanently is absurd. The conclusion: the private key of the owner's certificate must be cached, that is, loaded into RAM at the system boot stage. This leads us to one limitation and one problem.
First: every time the system boots, it will need to be initialized with the owner's certificate.
Second: the cached private key can be stolen from RAM.
But we will return to this problem in the following concepts.
Intermediate result: when the system boots, the «gnome in the box» software module is launched, which is initialized with the owner's certificate and becomes the only window into the database.
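A minimal sketch of this boot-time initialization, assuming an RSA key pair stands in for the owner's certificate: the superuser password exists on disk only as ciphertext and is decrypted into the gnome's process memory when the owner presents the key at start-up. The class and variable names are invented for the example.

```python
import secrets
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

owner_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Initial setup: a random superuser password is generated and only its ciphertext is kept.
superuser_password = secrets.token_urlsafe(32)
stored_ciphertext = owner_key.public_key().encrypt(superuser_password.encode(), OAEP)

class Gnome:
    def __init__(self, encrypted_password, owner_private_key):
        # The plaintext password exists only here, in the module's RAM.
        self._superuser_password = owner_private_key.decrypt(encrypted_password, OAEP).decode()

gnome = Gnome(stored_ciphertext, owner_key)   # system boot: the owner presents the key once
```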
Let's continue.
For users to work, the «gnome» must be able to identify them. To do this, the software module maintains a register of the electronic signature certificates of the subjects who are granted access. The database owner's certificate is pre-installed in this register.
Users send their messages and instructions (packed in XML, for example) to the «gnome», the «gnome» identifies the senders by their certificates in its register, checks the signature and makes a decision on executing the request.
The first instructions the «gnome» will see in its life are instructions from the database owner.
They will indicate the certificates whose owners should gain access to the database.
These control requests will be signed with the electronic signature of the database owner (whose certificate is pre-installed).
After which the «gnome» will begin to process requests from newly added users.
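A hedged sketch of how such a register and decision logic might look: the owner's key is pre-installed, an owner-signed control message adds a new user, and only registered senders with valid signatures get their requests considered. The message layout, the operation names and the use of Ed25519 keys are assumptions made for illustration.

```python
import json
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

class GnomeRegister:
    def __init__(self, owner_public_key):
        self.register = {"owner": owner_public_key}   # owner's certificate is pre-installed

    def handle(self, sender, message, signature):
        key = self.register.get(sender)
        if key is None:
            return "unknown sender"
        try:
            key.verify(signature, message.encode())   # check the electronic signature
        except InvalidSignature:
            return "bad signature"
        request = json.loads(message)
        if request["operation"] == "add_user":
            if sender != "owner":
                return "only the owner may add users"
            self.register[request["user"]] = ed25519.Ed25519PublicKey.from_public_bytes(
                bytes.fromhex(request["public_key"]))
            return "user " + request["user"] + " registered"
        return "would execute " + request["operation"] + " for " + sender

owner = ed25519.Ed25519PrivateKey.generate()
clerk = ed25519.Ed25519PrivateKey.generate()
gnome = GnomeRegister(owner.public_key())

# The owner's first instruction: register the clerk's certificate.
clerk_pub = clerk.public_key().public_bytes(
    encoding=serialization.Encoding.Raw, format=serialization.PublicFormat.Raw).hex()
msg = json.dumps({"operation": "add_user", "user": "clerk", "public_key": clerk_pub})
print(gnome.handle("owner", msg, owner.sign(msg.encode())))

# Now the newly added clerk can send signed requests of his own.
msg2 = json.dumps({"operation": "get_credit_history", "subject_id": 7})
print(gnome.handle("clerk", msg2, clerk.sign(msg2.encode())))
```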
Such a system role as Administrator undergoes changes in this concept.
As already mentioned, no user knows the system superuser password.
This password is randomly generated at the initial setup stage and is stored in a container independent of the database (a sealed envelope in a bank safe-deposit box or in the database owner's safe, for example), plus an encrypted copy remains with the «gnome».
The user who is still involved in administration, like everyone else, communicates with the database through the «gnome».
The administrator's instructions that the «gnome» must perform can be divided into two categories, let's call them: verified and free.
Verified instructions are subroutines in the database management language that have been verified by an external auditor and are considered safe for the database, for example, creating an index.
The main feature of such instructions is that, to call them, the administrator specifies the name of the instruction and passes only the call parameters, never the code itself. Of course, when auditing these routines it is necessary to make sure that authority cannot be exceeded by means of specially crafted parameters.
Free instructions are pure code in the database management language.
However, the «gnome» also passes this code through itself. What can protect against leakage in this case?
A set of measures: notification of the owner about the need to execute a free instruction (or a request for his permission, i.e. signature), an electronic signature of the administrator under each such instruction and keeping a log of executed instructions on a separate server.
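The split between the two categories might look roughly like this in code; the whitelist contents, the owner's counter-signature check and the log destination are illustrative assumptions rather than a description of the real module.

```python
import datetime

# Audited, parameterized subroutines: the administrator names them and supplies
# parameters only. Real code must also validate the parameters themselves (see
# the caveat above about exceeding authority through crafted parameters).
VERIFIED = {
    "create_index": "CREATE INDEX {index_name} ON {table} ({column})",
    "analyze_table": "ANALYZE {table}",
}

def run_verified(name, **params):
    sql = VERIFIED[name].format(**params)
    return "executing audited instruction: " + sql

def run_free(sql, admin_signature, owner_permission, audit_log):
    if not owner_permission:
        return "refused: a free instruction needs the owner's signed permission"
    # The journal is kept on a separate server; the admin's signature ties him to the code.
    audit_log.append({"when": datetime.datetime.now().isoformat(),
                      "admin_signature": admin_signature, "sql": sql})
    return "executing free instruction under audit: " + sql

log = []
print(run_verified("create_index", index_name="ix_subject", table="histories", column="subject_id"))
print(run_free("ALTER TABLE histories ADD COLUMN note TEXT", "sig-of-admin", True, log))
```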
In case of a leak, it will be possible to find the culprit.
Of course, the point is for the intruder's fear of being identified to act as a deterrent, rather than for finding the culprit to serve merely as a consolation to the owner after the information has already been disclosed.
Obviously, the security of the database in this case depends on the ratio of verified and free instructions (we will not dwell on the fact that there are always many more free instructions in the database management language).
In other words, it is necessary to increase the intelligence of the «gnome» (maybe not intelligence, but erudition), i.e. the set of functions that this service can perform at the authorized request of the administrator.
Ideally, the set of «skills» should include: managing indexes, table space, buffering; as well as working with some basic entities of the subject area: creating/deleting users, determining their powers, in our example, adding electronic signature certificates for users.
This concept is not without its drawbacks: for example, it is very difficult to apply to databases that are at the initial stage of their structure development and require constant intervention from architects.
«Gnome in a box» in its pure form assumes the use of «mature» databases.
Another problem that arises is trust in the «gnome».
It in turn splits into the problem of trusting the developer of this software module (are there any hidden backdoors or simply vulnerabilities?) and the problem of trusting the current instance of the «gnome» (has it been substituted?).
The first problem is partly solved by engaging a third-party auditor in the field of information systems, and the second by supporting the executable code of the «gnome» with an electronic signature.
Why partly — because each new «solution» raises a new question of trust (trust in the auditor or the code that will check the electronic signature of the «gnome»).
We have considered the advantages and disadvantages of this concept, and now it is worth returning to the already mentioned security regulations.
The «weak point» of this concept is the regulated procedure of granting the database administrator full access for changes to the structure (permission to execute free instructions).
That is, if the task that the administrator must perform is not included in the list of «gnome» skills (not automated), full access must be provided to the administrator. This is where the responsible role is assigned to the regulations.
Here you can come up with a set of rules that will make data disclosure as difficult as possible.
For example: grant permission to execute free instructions only after the administrator has passed a lie detector test, close access to the network when the administrator works with the database in this mode, limit traffic exchange with the database, require two witnesses, conduct a preliminary audit of the code, etc.
We must not forget that as the effort required to steal information increases, so do the efforts required to support and develop the database.
If the «gnome in the box» restricts the development of the database structure, and also limits the possibilities of working with statistics and test data, we can try to expand the boundaries, make the box not so black.
But then how do we secure the data? Can there be a database whose information is absolutely securely closed? Yes, if this information is absolutely worthless…
The Rusty Chest Concept
Its essence is that the data in the database should be of interest only at the moment it is requested, not while it is stored.
That is, everything that lies in physical storage should be useless «rusty junk» for someone who tries to read it.
Or in other words: all data should be encrypted before being written, and decrypted upon authorized reading.
Encryption on the fly — transparent encryption.
In this case, a stolen hard drive or an entire server will not be of interest.
How can this be implemented?
When the database is initialized, the private key container of the digital signature certificate is inserted into the server. The private key is cached in the memory area of the crypto-provider (by means of the OS or Crypto PRO).
The container is then removed. From this point on, if power is lost, the key can no longer be recovered from the server.
The private key from this container is used to decrypt the table of symmetric passwords, which are random sequences.
Each private table can have its own symmetric password.
Now we return to the «gnome».
The «gnome» is the only service in the OS that has access to the private key and to the password table.
On each request it reads data from storage, decrypts it and passes it to the client; when writing, it encrypts the data before putting it back into storage.
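A minimal sketch of this transparent encryption at the storage boundary, using Fernet symmetric keys from the cryptography package as a stand-in for the per-table passwords (the key table itself would, as described above, be stored encrypted and opened with the owner's certificate); the table names are invented for the example.

```python
from cryptography.fernet import Fernet

# One symmetric key per private table; in the real scheme this key table is itself
# stored encrypted and opened with the owner's certificate at initialization.
key_table = {"credit_histories": Fernet.generate_key(),
             "subjects": Fernet.generate_key()}

storage = {name: [] for name in key_table}     # what actually lies on disk: «rusty junk»

def write_record(table, record):
    storage[table].append(Fernet(key_table[table]).encrypt(record.encode()))

def read_records(table):
    f = Fernet(key_table[table])
    return [f.decrypt(token).decode() for token in storage[table]]

write_record("credit_histories", "subject 42: no overdue payments")
print(storage["credit_histories"][0][:25])     # ciphertext is all the storage layer ever sees
print(read_records("credit_histories"))
```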
The «Cutlets Separately, Flies Separately» Concept
The essence of the concept is to select some individual records, and then divide these records into title and content parts.
The title part contains a description of the object to which this record relates, and the content part contains the actual data on this object.
The separated parts are stored in different tables (and it is possible to use different databases, different information platforms).
The table in which the pointers linking the separated parts are stored is encrypted, and only the «Gnome in the Box» has access to it.
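An illustrative split of one record into a title part and a content part, with the linking pointers held only in an encrypted table that the «gnome in the box» alone can read. The table layout and the random-pointer scheme are assumptions made for the example.

```python
import secrets
from cryptography.fernet import Fernet

titles = {}            # the «title» parts: who the record is about
contents = {}          # the «content» parts, keyed by an opaque pointer
encrypted_links = {}   # title id -> encrypted pointer into the content table
link_key = Fernet.generate_key()    # held only by the «gnome in the box»

def store(title_id, description, payload):
    pointer = secrets.token_hex(16)
    titles[title_id] = description
    contents[pointer] = payload
    encrypted_links[title_id] = Fernet(link_key).encrypt(pointer.encode())

def fetch(title_id):
    pointer = Fernet(link_key).decrypt(encrypted_links[title_id]).decode()
    return contents[pointer]

store("subject-42", "Ivanov I. I., Moscow", "credit history: two loans, no defaults")
print(fetch("subject-42"))
```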
The advantages of this concept: broad possibilities for further development of the structure, and the ability to use the data openly for statistical analysis.
The disadvantage is that it can be quite difficult to draw the line along which to split the data: it happens that the title part alone already contains enough confidential data to interest an intruder.
The Narrow Passage Concept
Its essence is that the amount of data transferred to the user is limited by several criteria. That is, it is impossible to steal everything at once.
Why this concept was created. All users undergo some kind of identification, but in the case of an insider, the identification key (digital signature certificate) can be compromised (stolen).
Then it is necessary to limit everyone, including trusted users.
The limitation criteria can be different, for example:
amount of information per unit of time (no more than 100 records per day per user),
time limit (requests cannot be made after the end of the working day),
user group limit (no more than 200 records from a group of addresses, from one department, district, city).
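A hedged sketch of such quantitative limits; the concrete numbers (100 records per day, a 9-to-18 working day) are taken from the examples above and are not prescriptive.

```python
import datetime
from collections import defaultdict

DAILY_LIMIT = 100                     # records per user per day
WORKDAY = range(9, 18)                # requests allowed only during working hours
issued_today = defaultdict(int)       # user -> records already released today

def allow_request(user, requested, now):
    if now.hour not in WORKDAY:
        return False
    if issued_today[user] + requested > DAILY_LIMIT:
        return False
    issued_today[user] += requested
    return True

print(allow_request("clerk", 30, datetime.datetime(2024, 3, 1, 11, 0)))   # True
print(allow_request("clerk", 90, datetime.datetime(2024, 3, 1, 12, 0)))   # False: daily quota exceeded
```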
We also propose introducing a matrix of trust and significance coefficients.
That is, to associate each record with a significance coefficient, which may depend on how often it is accessed, on the object it describes, or even, for that matter, on the political situation.
Assign each user a trust coefficient, which will be determined by the time the client has been working with the system, the possibility of external control over the client, his reputation, and the history of relations.
Thus, if a user with a low trust coefficient begins to actively request records with a high significance coefficient, he will receive a temporary refusal.
In this case, the owner of the database will be notified that the system has restricted this client's access to certain records. The owner will make a decision and send a signed package to the system.
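One possible reading of this matrix in code: the «gnome» compares the user's trust coefficient with the significance of the requested records and withholds the data, notifying the owner, when the gap is too large. The coefficient values and the threshold are assumptions for illustration.

```python
# Illustrative coefficients; in a real system they would be maintained by the owner
# and updated from access statistics, tenure, external checks, and so on.
TRUST = {"veteran_clerk": 0.9, "new_clerk": 0.3}
SIGNIFICANCE = {"ordinary_history": 0.2, "vip_history": 0.8}

def decide(user, record_class, threshold=0.0):
    score = TRUST[user] - SIGNIFICANCE[record_class]
    if score >= threshold:
        return "release record"
    # Temporary refusal: the owner is notified and may answer with a signed permission.
    return "withheld: notify owner that access by %s to %s is restricted" % (user, record_class)

print(decide("veteran_clerk", "vip_history"))   # released
print(decide("new_clerk", "vip_history"))       # temporary refusal, owner notified
```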
Divide and conquer
The basic idea of the concept is that obtaining privileged access requires the authentication of not one but several users. That is, to execute a query against a secret table or to make changes to the database, the command package sent to the «gnome» must carry several digital signatures. Moreover, the set of authorized users is redundant: for example, there are five administrators, and the signatures of any three of them are required to execute the command package.
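A sketch of this «any three of five» rule: a privileged command is executed only if it carries valid signatures from at least three distinct registered administrators. Ed25519 keys again stand in for the administrators' certificates, and the quorum size is the example's assumption.

```python
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

QUORUM = 3
admin_keys = [ed25519.Ed25519PrivateKey.generate() for _ in range(5)]
admin_register = [key.public_key() for key in admin_keys]   # the «gnome's» register of administrators

def quorum_reached(command, signatures):
    # signatures: list of (administrator index, signature bytes); duplicates do not count twice
    valid = set()
    for index, signature in signatures:
        try:
            admin_register[index].verify(signature, command)
            valid.add(index)
        except InvalidSignature:
            pass
    return len(valid) >= QUORUM

command = b"GRANT SELECT ON credit_histories TO auditor"
signatures = [(i, admin_keys[i].sign(command)) for i in (0, 2, 4)]
print(quorum_reached(command, signatures))       # True: three distinct valid signatures
print(quorum_reached(command, signatures[:2]))   # False: only two administrators signed
```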
Advantages: there is no need to constantly monitor the authorized users, and in order to steal the database collusion is required; in our example, three people have to conspire.
Disadvantages: responsibility is diluted, and everyone can point to another authorized user.
Summing up our «research», we come to the conclusion that the problem cannot yet be closed, but protection against internal intruders can be raised to a new level, although this requires qualitative rather than quantitative effort. Perhaps, as with Russell's barber paradox, the solution lies in a different formulation of the conditions. Perhaps future developments in IT and lawmaking (and perhaps also in sociology and psychology) will allow us to look at this problem from a completely different angle.
The example with the credit history bureau under consideration is not fictitious; an information system was actually developed using the described concepts; it was called «Pleiades».
The Interregional Credit History Bureau (http://mbki.ru) operates on the basis of this system; this organization was entered into the state register of credit history bureaus under number 2 (http://fcsm.ru/catalog.asp?ob_no=24284).
The development was carried out by the company «IVC-1» (http://ivc-1.ru), which is part of the «Astra ST» group of companies.