Unsafe coding construct that may lead to a bug or vulnerability. For example, indexing an array with a user-supplied and unvalidated index is a hazard.
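The indexing hazard above can be sketched in a few lines. This is an illustrative sketch (the function and variable names are invented for the example): in Python the unvalidated version silently wraps negative indices instead of failing, which is already behavior contrary to the author's intent; in C or C++ the same pattern reads out of bounds.

```python
def lookup_unvalidated(items, index):
    # Hazard: the index comes straight from the user. Python's negative
    # indexing means -1 silently returns the last element rather than
    # failing -- behavior the author likely did not intend. In C or C++,
    # the same unchecked pattern reads outside the array's bounds.
    return items[index]

def lookup_validated(items, index):
    # The hazard removed: reject any index outside the intended range
    # before using it.
    if not 0 <= index < len(items):
        raise ValueError(f"index {index} out of range")
    return items[index]

records = ["alice", "bob", "carol"]
# A buggy or malicious caller supplies -1: the hazard silently "works".
assert lookup_unvalidated(records, -1) == "carol"
```

Note that the hazard is a property of the construct itself: it is a hazard even before any caller actually passes a bad index.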
Reachable program behavior contrary to the author's intent.
Buggy behavior that is actively occurring for users of the program.
Buggy behavior that does not currently occur for users but is reachable. A behavior that is reachable can happen; even if it does not happen in practice today, it is still a bug.
Absent a qualifier or narrower context, refers to system safety and safety engineering. Safety is always a property of a system or product as a whole, including human factors, etc.
Invariants or limits on program behavior in the face of bugs.
This is a specific subset of safety concerns, and the one we are most often focused on in programming language and library design.
Bugs where some aspect of program behavior has insufficient (often no) invariants or limits.
For example, undefined behavior definitionally has no invariant or limit, and reaching it is always a safety bug.
The first behavior contrary to the author's intent, distinct from subsequent deviations.
The behavior of immediately terminating the program, minimizing any further business logic. This is in contrast to any form of "correct" program termination, continuing execution, or unwinding.
A bug that creates the possibility for a malicious actor to subvert a program's intended behavior in a way that violates a security policy (for example, confidentiality, integrity, availability). Vulnerabilities are often exploitable manifestations of underlying bugs.
The set of strategies and techniques employed to reduce the risks posed by vulnerabilities arising from bugs. These strategies operate at different levels and have varying degrees of effectiveness.
While still leaving the code vulnerable, a defense that attempts to recognize, and potentially track, when a specific bug has occurred dynamically. This requires some invariant or limit, but only a very minimal one.
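One classic detection technique is a canary value placed just past a buffer, in the spirit of heap and stack canaries. The sketch below is a hypothetical illustration (the class and names are invented): the overflowing write still happens, so the code remains vulnerable, but the bug is recognized after the fact.

```python
CANARY = object()  # unique sentinel placed just past the buffer

class DetectedBuffer:
    """Illustrative sketch of *detection*: a canary cell after the
    buffer lets us notice an overflow after it occurs. The overwrite
    still happens -- this defense does not prevent the bug, it only
    recognizes it dynamically."""

    def __init__(self, size):
        self._cells = [0] * size + [CANARY]
        self.size = size

    def raw_write(self, i, value):
        # Deliberately unchecked, modeling a buggy write path.
        self._cells[i] = value

    def overflow_detected(self):
        # If the canary has been overwritten, an out-of-bounds write
        # must have occurred at some point.
        return self._cells[self.size] is not CANARY

buf = DetectedBuffer(4)
buf.raw_write(2, 7)              # in-bounds write: canary intact
assert not buf.overflow_detected()
buf.raw_write(4, 9)              # one past the end: the bug occurs...
assert buf.overflow_detected()   # ...and is detected dynamically
```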
Making a vulnerability significantly more expensive, difficult, or improbable to be exploited.
Making it impossible for a bug to be exploited as a vulnerability without resolving the underlying bug -- the program still doesn't behave as intended; it just cannot be exploited. Often this is done by defining behavior to fail-stop.
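A sketch of prevention-by-fail-stop, with invented names for illustration: every write is bounds-checked, and an out-of-bounds write terminates the process immediately via `os.abort` rather than corrupting neighboring memory. The underlying bug (a caller computing a bad index) still exists and still breaks the caller's intent, but it can no longer be exploited.

```python
import os

class PreventedBuffer:
    """Illustrative sketch of *prevention*: out-of-bounds accesses have
    a defined behavior -- fail-stop -- so the bug cannot corrupt
    adjacent memory or be leveraged by an attacker. The bug itself is
    not fixed; the program still fails, just safely."""

    def __init__(self, size):
        self._cells = [0] * size

    def write(self, i, value):
        if not 0 <= i < len(self._cells):
            os.abort()  # fail-stop: terminate immediately, no unwinding
        self._cells[i] = value

    def read(self, i):
        if not 0 <= i < len(self._cells):
            os.abort()  # same defined, non-exploitable outcome on reads
        return self._cells[i]

buf = PreventedBuffer(4)
buf.write(2, 7)
assert buf.read(2) == 7  # in-bounds accesses behave normally
```

The abort path is deliberately not exercised here, since it would terminate the interpreter; the point is that the out-of-bounds case has exactly one possible, non-exploitable outcome.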
Ensures that if the program compiles successfully, it behaves as intended. This typically prevents a bug from being written and compiled into a program in the first place. For example, statically typed languages typically ensure that the types used in the program are correct.
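A related idea is correctness by construction: design the types so the buggy state cannot be represented at all. The hypothetical class below sketches this in Python, which can only enforce the invariant at construction time; a statically typed language moves the same check to compile time, so the incorrect program never compiles.

```python
class NonEmptyList:
    """Illustrative sketch of ensured correctness: if you hold a
    NonEmptyList, first() cannot fail, because an empty instance can
    never be constructed. The invariant is established once and holds
    for the object's entire lifetime."""

    def __init__(self, first, *rest):
        # Requiring the first element as a separate parameter makes
        # "empty" unrepresentable rather than merely checked.
        self._items = [first, *rest]

    def first(self):
        # No emptiness check needed anywhere this object is used.
        return self._items[0]

names = NonEmptyList("alice", "bob")
assert names.first() == "alice"
```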
Combinations of mitigation, prevention, and ensured correctness to reduce the practical risk of vulnerabilities due to bugs.
Having well-defined and predictable behavior regarding memory access, even in the face of bugs. Memory safety encompasses several key aspects:
Memory accesses occur only within the valid lifetime of the intended memory object.
Memory accesses remain within the intended bounds of memory regions.
Memory is accessed and interpreted according to its intended type, preventing type confusion.
Memory is properly initialized before being read, avoiding the use of uninitialized data.
Memory writes are synchronized with reads or writes on other threads.
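The bounds and initialization aspects above can be illustrated in a language that is itself memory-safe. In this Python sketch, both violations are still bugs, but each has well-defined, predictable behavior rather than exposing whatever bytes happen to sit in adjacent or uninitialized memory.

```python
data = [10, 20, 30]

# Bounds: an out-of-range access raises a well-defined error instead of
# reading past the end of the allocation.
try:
    data[3]
    assert False, "unreachable"
except IndexError:
    pass

def buggy():
    # Initialization: 'total' is read before it is ever assigned. The
    # assignment below makes it a local variable, so the read fails
    # cleanly instead of yielding arbitrary uninitialized data.
    return total
    total = 0

try:
    buggy()
    assert False, "unreachable"
except UnboundLocalError:
    pass
```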
A safety bug that violates memory safety.
A computing platform or execution environment that provides mechanisms to prevent memory safety bugs in programs running on it from becoming vulnerabilities. This is a systems path to memory safety: the execution environment itself supplies the well-defined and predictable behavior. For example, a strongly sandboxed WebAssembly runtime can allow a program that is itself unsafe to be executed safely.
A programming language with sufficient defenses against memory safety bugs that they are not a significant source of security vulnerabilities. This requires preventing vulnerabilities or ensuring correctness; mitigation alone is not sufficient to provide an adequate level of memory safety.
We identify several key requirements for a language to be memory-safe: