Jay Thomas is the director of field engineering at LDRA Technology.
Creating secure software is very similar to creating safety-critical software, observes Jay Thomas at LDRA. If we treat security as an explicit requirement throughout the development life cycle, we can dynamically verify its implementation. Moreover, static analysis helps eliminate known risks and vulnerabilities.
Security is a growing concern for today’s Internet of Things. ‘Smart’ thermostats and point-of-sale devices have given hackers access to credit card databases, and we’ve seen multiple demonstrations of cars and personal medical devices being taken over by outsiders. All these examples represent vulnerabilities in the software components used to secure internet communications. As a result, developers face growing demand not only for reliable, safe embedded software, but also for software systems that are secure.
To be considered secure, software must exhibit three properties. First, it must be dependable, that is, it must execute predictably and operate correctly under all conditions. Second, it must be trustworthy: it must contain few, if any, exploitable vulnerabilities or weaknesses that can be used to subvert or sabotage the software’s dependability. And third, it must be resilient enough to withstand attack and to recover as quickly as possible, and with the least possible damage from those attacks that it can neither resist nor tolerate.
Of course, there’s no silver bullet for mitigating all security threats. But according to research by the National Institute of Standards and Technology in the US, 64 per cent of software vulnerabilities stem from programming errors, so the best time to address them is during the code’s creation.
Building secure code is all about eliminating known weaknesses, including defects. That means secure software must also be high-quality software; even minor defects can result in significant security breaches. And while most quality-driven software is written to satisfy functional requirements, building secure software requires adding security-specific concepts as quality attributes in the software development life cycle. As a result, the same process disciplines required to create safety-critical software can also be used to create secure software.
| Name | Programming language | Date of most recent version | Description |
| --- | --- | --- | --- |
| SecureC | C | 2013 | C Secure Coding Rules (ISO/IEC 17961:2013, or SecureC) provides a set of rules for the creation of secure code in the C programming language. |
| Cert C | C | 2013 | Secure coding standard from the Computer Emergency Response Team (Cert) division of the Software Engineering Institute at Carnegie Mellon University. Provides rules and recommendations (collectively called guidelines) for secure coding in the C programming language. |
| Cert C++ | C++ | 2013 | Provides guidelines for secure coding in the C++ programming language. |
| Cert Java | Java | 2011 | Provides guidelines for secure coding in the Java programming language. |
| Cert Perl | Perl | Under development | Provides guidelines for secure coding in the Perl programming language. |
| Common Weakness Enumeration | Multiple | 2014 | The Common Weakness Enumeration (CWE) dictionary is a formal list of common software weaknesses – flaws, faults, bugs and other errors in software architecture, design, code or implementation – that can lead to exploitable security vulnerabilities and leave systems and networks open to attack. |
Key software security standards provide guidelines for software developers that can be enforced more easily via automated tools.
In the same way that developing safety-critical systems starts with a system safety assessment, a secure software project starts with a security risk assessment – a process that ensures that the nature and impact of security breaches are assessed prior to deployment. Next, the security controls necessary to mitigate any identified impact can be determined, and these become system requirements. This way, you include security in the definition of system correctness that permeates the development process.
Once you’ve introduced security concepts into the requirements process, you can then use dynamic assurance via security-focused testing to verify that the security features have been implemented correctly. The biggest return on investment, however, comes from the enforcement of secure coding rules. Many of the software defects that lead to vulnerabilities stem from common code weaknesses. A growing number of dictionaries, standards and rules have been created to highlight these common weaknesses so they can be avoided, and so that vulnerability defects can be detected and eliminated at the point of injection (see the table).
These programming standards encapsulate the best practices for writing software in a given language for a given domain and provide guidance for creating secure code. Detecting defects at the point of injection, rather than later in the development process, also greatly reduces the cost of remediation and ensures that software quality is not degraded by excessive maintenance.
While it’s possible to enforce these coding standards via manual inspection, the process is slow and inefficient. Even worse, for large and complex software applications it’s not consistent or rigorous enough to uncover the variety of defects that can produce a security vulnerability. As a result, these secure coding standards are best enforced by using static analysis tools, which help to identify both known and unknown vulnerabilities while also eliminating latent errors in the code. Additionally, these tools help even novice secure software developers benefit from the experience and knowledge encapsulated in the standards.
The primary objective of using static analysis tools to build secure software is to identify potential vulnerabilities in the code. Examples of potentially exploitable vulnerabilities that static analysis tools help to identify include the use of insecure functions, array overflows and underflows, and the incorrect use of signed and unsigned data types.
In addition, these tools provide a range of metrics to assess and improve the quality of the code under development, such as the cyclomatic complexity metric that identifies unnecessarily complex software that is difficult to test. Since secure code must, by nature, be high-quality code, you can use these metrics to bolster the quality of the code under development.
Eliminating defects through static analysis does require an increase in the coding effort. However, this is offset many times over by a reduced verification burden. Adopting and enforcing a secure programming standard can greatly help you to identify and remove potentially exploitable vulnerabilities in your embedded system, be they defects or known weaknesses. Using a workflow manager to host the static and dynamic analysis tools throughout the development process makes generating the documentation required for certification extremely straightforward. You can access all of the project artefacts, which helps you prepare your data for the certification authorities.