Secure Your Data Like You Secure Your Cargo Part 2: Assurance
Own a sailboat?
Before you set sail, you do a safety inspection, right? Checking for loose or bent pins, watching out for damaged parts, ensuring the rigging lines are not worn or tangled, and so forth. The bigger the boat, the longer the checklist grows to ensure safety and a smooth cruise. For a large sea-going vessel, the checklist before leaving port can contain dozens or even hundreds of inspection points to assure the safety and security of the crew and cargo. Even narrowed down to the checklist for the engine department, there are nearly 50 things to think about during port departure. A common theme in safety checklists like these is to double-check by inspecting or testing. After loading and securing cargo, we have better confidence that it will stay in place during the voyage if we double-check the strength and fastness of the rigging that lashes it in the hold. In some cases, it is prudent to have an independent crew member perform the inspection, just to get a second pair of eyes on the situation, so that nothing is missed. What about our onboard computer systems and data? How can we ensure they are similarly secure?
Assuring that the information and data in your shipboard systems are secure has some surprising similarities to the securing of physical cargo. As I said in the previous installment of this series, I am not a marine electronics expert. However, my company may have some interesting ideas and technology for this industry to consider when it comes to cybersecurity and trusted computing. A number of the technology trends related to cybersecurity that we have seen in other transportation industries (e.g., aerospace and land vehicles) are appearing in marine electronics: rising concern about hacking, increased functionality consolidated into fewer but more powerful computers, and more internal and external interconnectivity. In the face of these trends, customers have a fundamental need to trust onboard technology. Trust in electronics systems is built on three pillars: assurance, isolation, and monitoring. My previous blog in this series focused on isolation. In this blog, I’ll address assurance, with a later blog on monitoring. The pillar of assurance is analogous to securing cargo with lashing, together with the ship’s procedures for verifying that all cargo has been secured properly.
Assurance increases our confidence and trust in a system because it offers evidence that the system will do what we expect and, just as importantly, will not do what we do not expect. In engineering designs these expectations are typically called requirements or specifications. Our assurance approach should justify a certain level of confidence that the system meets the requirements.
What Needs to be Assured?
Assurance should address all possible failures (as far as an experienced person can reasonably foresee): eliminating failures where possible and, where not, at least reducing the probability that they occur and the impact if they do.
A number of design features can aid in reducing the impact of failures. For example, isolation, the topic of the previous blog in this series, reduces the impact of a software bug or hacking attack on one software function so that it cannot spread to other functions of the system. Another example is the use of redundancy, such as multiple copies of data, so that if one is corrupted, another copy can be used instead. Analysis and testing of these safety features must be particularly rigorous to ensure confidence that they will truly provide the extra protection from failures as advertised.
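To make the redundancy idea concrete, here is a minimal sketch in Python. The function, data, and checksum scheme are all hypothetical illustrations, not any particular product's design: each stored copy of a record carries a known checksum, and a reader falls back through the redundant copies until it finds one that is intact.

```python
import hashlib

def read_with_redundancy(copies, expected_digest):
    """Return the first copy whose SHA-256 digest matches the stored
    checksum, falling back through redundant copies if one is corrupted."""
    for data in copies:
        if hashlib.sha256(data).hexdigest() == expected_digest:
            return data
    raise ValueError("all redundant copies are corrupted")

# A hypothetical navigation record stored in triplicate; the first
# copy has been corrupted, so the reader falls back to an intact one.
good = b"heading=047;speed=12.3"
digest = hashlib.sha256(good).hexdigest()
copies = [b"heading=XXX;speed=0.00", good, good]
print(read_with_redundancy(copies, digest))
```

The same pattern underlies the rigorous testing point above: the fallback path is exactly the kind of safety feature that must itself be exercised, for example by deliberately corrupting copies in test.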
Assurance must also address problems associated with failures introduced by human operators, either by accident or with malicious intent. While it is impossible to eliminate all human error, some designs are more intuitively simple than others, which reduces the likelihood of operator mistakes. This means user testing is vitally important to ensure the typical operator is not easily confused by the computer interface.
What Does Assurance Look Like?
Industry standard assurance of computer systems typically takes the form of documented proof of correctness, called assurance artifacts. These artifacts are akin to the port departure checklist, though more extensive. To give you a better idea of what this entails, let me briefly describe three types of artifacts: review, test, and analysis.
- A review assurance artifact provides documented evidence that a competent reviewer looked over the computer system design. This might take the form of a peer review, such as one software developer looking over the computer program written by another developer. It might be more formal, such as a third-party auditor performing an in-depth review.
- A test assurance artifact records the results of testing. Unit tests verify the proper operation of individual electronic components and individual software programs. System and integration testing verify the correct operation of the overall computer system. Rigorous testing may also include independent review of the test procedures, repetition of tests to ensure consistent results, and a careful process to ensure re-testing any time a component is modified.
- An analysis artifact documents a computation or investigation intended to verify correctness of software or electronic hardware. Although testing is typically preferred, analysis might be used where testing is difficult or impossible. For example, it is hard to test for cybersecurity against hacking threats that are not yet known. However, certain types of analysis can rule out even the possibility of certain types of hacking if the system is properly designed.
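As a small illustration of the test artifact type, here is a sketch in Python using the standard `unittest` module. The checked function and its threshold are hypothetical, chosen only to show the shape of a unit test whose recorded results would serve as evidence:

```python
import unittest

def within_limits(rpm, max_rpm=2400):
    """Hypothetical engine-monitoring check: is the shaft RPM in range?"""
    return 0 <= rpm <= max_rpm

class TestWithinLimits(unittest.TestCase):
    """Each case, once run and recorded, contributes to a test artifact."""
    def test_nominal(self):
        self.assertTrue(within_limits(1800))
    def test_at_boundary(self):
        self.assertTrue(within_limits(2400))  # exactly at the limit is OK
    def test_overspeed(self):
        self.assertFalse(within_limits(2401))
    def test_negative(self):
        self.assertFalse(within_limits(-5))

# Run the suite and capture the result that would be archived as evidence.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestWithinLimits)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Note the boundary and invalid-input cases: rigorous testing, as described above, probes the edges of the specification rather than only the nominal path.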
How Rigorous Should Assurance Be?
The formality and rigor of the artifacts depends on the severity of the impact of a failure. If a failure in the system could potentially cause the loss of human life, the system is considered safety-critical and must be assured with very high levels of rigor. Even where human safety is not at risk, a failure can still cause harm to property and goods, whether financial or environmental, so here too we apply a reasonable level of care to assure the correct operation of the onboard electronics. If a failure would not result in any harm, we can use a much less formal approach to ensure a reasonable level of functionality.
The more rigorous the assurance, the more confidence we can have that the system will operate as needed and expected. More rigor in assurance means a longer “checklist” of artifacts that go much further into the depths and details of the software and electronic hardware to confirm they will do the right thing under all the expected operating conditions.
How Much Confidence is Warranted?
If your instinct is to distrust any newfangled system, then your gut is right: the new system needs to earn your trust. However, if you are not a computer engineer yourself — you are simply purchasing or operating the onboard computer system — how can you be confident that the system will behave correctly? How do you know if your confidence is justified?
There are at least three possible sources to help you evaluate the trustworthiness of a new system. First, you might have some reason to trust the provider themselves, particularly if you have used equipment from them in the past with success. Though not a guarantee, a good track record of producing reliable equipment in the past is a sign that future products will be robust as well. Second, the product or the provider might be endorsed or certified by a reliable and objective third party. Industry standard or quality assurance certification can bolster your confidence in the provider and their products. Third, other operators like you may have done some early field testing of the product you are considering, giving you the pros and cons based on actual experience. However, although a poor review is strong evidence something is wrong, a glowing review from initial field testing does not, by itself, provide strong evidence of correctness. Of course, even better is to have all three sources confirming the product is trustworthy.
If you purchase or operate an onboard computer system with high confidence based on the assurance discussed earlier, can your confidence ever fade? Yes.
Your confidence might fade as the system ages, so you may need to take measures to confirm the system is still reliable. More sophisticated systems might have built-in self-test and self-diagnosis features; for others, you might need to run certain tests manually as part of routine maintenance.
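A built-in self-test can be sketched roughly as follows. This Python example is purely illustrative, with hypothetical channel names, readings, and a made-up 5% tolerance: each sensor reading is compared against a calibrated reference value, and any channel that drifts too far is flagged for maintenance.

```python
def self_test(sensors, tolerance=0.05):
    """Hypothetical self-test: compare each sensor reading with a
    calibrated reference value (assumed nonzero) and report per channel."""
    report = {}
    for name, (reading, reference) in sensors.items():
        deviation = abs(reading - reference) / abs(reference)
        report[name] = "PASS" if deviation <= tolerance else "FAIL"
    return report

# Simulated readings paired with their calibrated reference values.
sensors = {
    "gps_clock_drift": (1.01, 1.00),  # 1% deviation: within tolerance
    "depth_sounder":   (9.2, 10.0),   # 8% deviation: flag for maintenance
}
print(self_test(sensors))  # {'gps_clock_drift': 'PASS', 'depth_sounder': 'FAIL'}
```

A real shipboard system would run such checks on a schedule and log the results, which is exactly the kind of record that keeps confidence from fading silently as equipment ages.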
Your confidence might fade when the manufacturer discovers bugs or cybersecurity vulnerabilities. Even worse, your confidence might be completely misplaced if the supplier is not even aware of the problems until they show up in the field. Your trust can be restored if the supplier promptly provides software updates to your device.
Aside from aging or bugs, your confidence might fade simply because you are using the system in a new situation. For example, perhaps you are traveling for the first time in saltwater rather than freshwater, or into a colder region than usual, either of which puts new environmental stress on the system. Is your system rated for that temperature, or protected from temperature extremes by a heated protective housing? Is the housing sufficiently sealed against the more corrosive effects of saltwater? Any change in the operating environment should prompt us to reconsider the validity of our systems, checking whether any assumptions made by our assurance evidence have been violated.
There is More to the Story of Trust
Assurance of onboard computer systems, including the electronic hardware and software, can improve the reliability and security of marine electronics. However, assurance is not the complete story for trusted computing; it must be combined with isolation and monitoring. The next article in this series will cover that last element, monitoring.