Secure by design

Secure by design, in software engineering, means that the software has been designed from the foundation to be secure. In this approach, alternative security tactics and patterns are considered first; among them, the best are selected and enforced by the architecture design, and they are then used as guiding principles for developers.[1] Secure by design is increasingly becoming the mainstream development approach for ensuring the security and privacy of software systems. In this approach, security is built into the system from the ground up, starting with a robust architecture design. Security architectural design decisions are often based on well-known security tactics and patterns, defined as reusable techniques for achieving specific quality concerns. Security tactics and patterns provide solutions for enforcing the necessary authentication, authorization, confidentiality, data integrity, privacy, accountability, availability, safety and non-repudiation requirements, even when the system is under attack.[2] To ensure the security of a software system, it is important not only to design a robust security architecture (as intended) but also to preserve the (implemented) architecture during software evolution. Malicious practices are taken for granted, and care is taken to minimize impact when a security vulnerability is discovered or when the system receives invalid user input.[3] Closely related is the practice of using "good" software design, such as domain-driven design or cloud native architecture, as a way to increase security by reducing the risk of vulnerability-opening mistakes, even though the design principles involved were not originally conceived for security purposes.

Generally, designs that work well do not rely on being secret. Secrecy can reduce the number of attackers by demotivating a subset of the threat population; the logic is that added complexity for the attacker increases the effort required to compromise the target. While this technique reduces some inherent risks, a virtually infinite set of threat actors and techniques applied over time will cause most secrecy methods to fail. While not mandatory, proper security usually means that everyone is allowed to know and understand the design because it is secure. This has the advantage that many people are looking at the computer code, which improves the odds that any flaws will be found sooner (see Linus's law). The disadvantage is that attackers can also obtain the code, which makes it easier for them to find vulnerabilities.

Also, it is important that everything works with the fewest privileges possible (see the principle of least privilege). For example, a web server that runs as the administrative user ("root" or admin) has the privilege to remove any files and users on the system, so a flaw in such a program could put the entire system at risk. By contrast, a web server that runs inside an isolated environment, and that has only the privileges needed for the required network and filesystem functions, cannot compromise the system it runs on unless the security around it is itself also flawed.
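
As an illustration of least privilege in code, the following POSIX C sketch shows a common pattern: a process holds elevated privileges only long enough to perform the one operation that needs them, then permanently drops to an unprivileged account. The "www-data" account name is an assumption here; any dedicated unprivileged user serves the same purpose.

 #include <pwd.h>
 #include <stdio.h>
 #include <unistd.h>
 
 int main(void)
 {
     /* assumed unprivileged account; common on Debian-like systems */
     struct passwd *pw = getpwnam("www-data");
     if (pw == NULL) {
         fprintf(stderr, "no such user\n");
         return 1;
     }
 
     /* ... perform the one operation that needs root here,
        e.g. binding to port 80 ... */
 
     /* Drop the group before the user: once setuid() succeeds, the
        process no longer has permission to change its group. A full
        implementation would also clear supplementary groups. */
     if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
         perror("failed to drop privileges");
         return 1;
     }
 
     /* from here on, the process runs with least privilege */
     printf("now running as uid %d\n", (int)getuid());
     return 0;
 }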

Security by design in practice

A secure design treats many things, especially input, as untrusted. A fault-tolerant program may even distrust its own internals.

Two examples of insecure design are allowing buffer overflows and format string vulnerabilities. The following C program demonstrates these flaws:

 #include <stdio.h>
 
 int main()
 {
     char buffer[100];
 
     printf("What is your name?\n");
     gets(buffer);      /* flaw 1: gets() performs no bounds checking */
     printf("Hello, ");
     printf(buffer);    /* flaw 2: user input used as the format string */
     printf("!\n");
 
     return 0;
 }

Because the gets function in the C standard library does not stop writing bytes into the buffer until it reads a newline character or EOF, typing more than 99 characters at the prompt constitutes a buffer overflow. Allocating 100 characters for the buffer on the assumption that almost any name a user gives is no longer than 99 characters does not prevent the user from actually typing more than 99 characters. This can lead to arbitrary machine code execution. (For this reason, gets was deprecated and then removed entirely in the C11 standard.)

The second flaw is that the program tries to print its input by passing it directly to the printf function. This function prints its first argument, replacing conversion specifications (such as "%s", "%d", et cetera) sequentially with other arguments from its call stack as needed. Thus, if a malicious user entered "%d" instead of a name, the program would attempt to print out a non-existent integer value, and undefined behavior would occur. Specifiers such as "%n", which writes a value to memory, make format string vulnerabilities exploitable for more than mere crashes.
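
Both flaws can be avoided by design rather than patched after the fact. A minimal hardened sketch of the same program, using fgets to bound the read and a fixed format string, might look like this:

 #include <stdio.h>
 #include <string.h>
 
 int main(void)
 {
     char buffer[100];
 
     printf("What is your name?\n");
 
     /* fgets() writes at most sizeof buffer - 1 characters plus a
        terminating null, so overlong input is truncated, not overflowed */
     if (fgets(buffer, sizeof buffer, stdin) == NULL)
         return 1;
 
     /* strip the trailing newline that fgets() keeps, if present */
     buffer[strcspn(buffer, "\n")] = '\0';
 
     /* the input is passed as data, never as the format string itself */
     printf("Hello, %s!\n", buffer);
 
     return 0;
 }

Note that this version truncates overlong input rather than rejecting it; which behavior is appropriate is itself a design decision.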

A related mistake in Web programming is for an online script not to validate its parameters. For example, consider a script that fetches an article by taking a filename, which the script then reads and parses. Such a script might use the following hypothetical URL to retrieve an article about dog food:

http://www.example.net/cgi-bin/article.sh?name=dogfood.html

If the script has no input checking, instead trusting that the filename is always valid, a malicious user could forge a URL to retrieve configuration files from the webserver:

http://www.example.net/cgi-bin/article.sh?name=../../../../../etc/passwd

Depending on the script, this may expose the /etc/passwd file, which on Unix-like systems contains (among others) user IDs, their login names, home directory paths and shells. (See SQL injection for a similar attack.)
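
A script designed defensively would canonicalize the requested name against a fixed document root and reject anything that escapes it. The following C sketch illustrates the idea; the resolve_article helper, the /var/www/articles/ document root, and the use of the POSIX realpath function are illustrative assumptions rather than part of the original example:

 #include <limits.h>
 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
 
 /* Hypothetical helper: resolve a requested article name against a fixed
    document root, rejecting anything (such as "../" sequences) that
    escapes it. Returns 0 on success, -1 on rejection. */
 static int resolve_article(const char *name, char *out, size_t outlen)
 {
     const char *root = "/var/www/articles/";   /* assumed document root */
     char requested[PATH_MAX];
     char resolved[PATH_MAX];
 
     if (snprintf(requested, sizeof requested, "%s%s", root, name)
             >= (int)sizeof requested)
         return -1;                  /* path too long */
 
     /* realpath() collapses "." and ".." and resolves symlinks */
     if (realpath(requested, resolved) == NULL)
         return -1;                  /* nonexistent or unresolvable */
 
     if (strncmp(resolved, root, strlen(root)) != 0)
         return -1;                  /* escaped the document root */
 
     if (strlen(resolved) >= outlen)
         return -1;
     strcpy(out, resolved);
     return 0;
 }
 
 int main(void)
 {
     char path[PATH_MAX];
     /* "../../etc/passwd" is rejected; "dogfood.html" would be
        accepted if it exists under the document root */
     if (resolve_article("../../etc/passwd", path, sizeof path) != 0)
         fprintf(stderr, "request rejected\n");
     return 0;
 }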

Server/client architectures

In server/client architectures, the program on the other side of the connection may not be an authorised client, and the server the client talks to may not be an authorised server. Even when they are, a man-in-the-middle attack could compromise communications.

Often the easiest way to break the security of a client/server system is not to attack the security mechanisms head on, but to go around them. A man-in-the-middle attack is a simple example of this, because an attacker can use it to collect the details needed to impersonate a user. This is why it is important to consider encryption, hashing, and other security mechanisms in the design, to ensure that information collected by a potential attacker does not grant access.
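
The following fragment sketches what designing in such a mechanism can look like in practice: an OpenSSL client context configured to reject unverified peers and legacy protocol versions, so that a man in the middle cannot simply present an arbitrary certificate. The CA bundle path is an assumption that varies by system:

 #include <openssl/ssl.h>
 #include <stdio.h>
 
 int main(void)
 {
     SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
     if (ctx == NULL)
         return 1;
 
     /* require a valid certificate chain from the peer; the handshake
        fails otherwise, closing the window for trivial MITM attacks */
     SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);
 
     /* assumed CA bundle location (Debian-style path) */
     if (SSL_CTX_load_verify_locations(ctx,
             "/etc/ssl/certs/ca-certificates.crt", NULL) != 1)
         fprintf(stderr, "could not load CA bundle\n");
 
     /* refuse protocol versions with known weaknesses */
     SSL_CTX_set_min_proto_version(ctx, TLS1_2_VERSION);
 
     /* ... create the SSL connection with this context as usual ... */
 
     SSL_CTX_free(ctx);
     return 0;
 }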

Another key aspect of client-server security design is good coding practice. For example, following a known software design structure, such as a client-broker pattern, can help in building a well-structured design with a solid foundation. Furthermore, if the software is to be modified in the future, it is even more important that it follow a logical separation between the client and server. This is because a programmer who cannot clearly understand the dynamics of the program may end up adding or changing something that introduces a security flaw. Even with the best design this is always a possibility, but the more standardized the design, the smaller the chance of this occurring.

References

  1. "A Catalog of Security Architecture Weaknesses". 2017 IEEE International Conference on Software Architecture (ICSA): https://design.se.rit.edu/papers/cawe-paper.pdf.
  2. "Growing a pattern language (for security)" (PDF). In Proceedings of the ACM International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software.
  3. Dougherty, Chad; Sayre, Kirk; Seacord, Robert C.; Svoboda, David; Togashi, Kazuya. "Secure Design Patterns". CMU. Retrieved 14 October 2017.
