How should a software system be verified? Much research is currently focused on attempts to show that code modules meet their specifications. This is important, but bugs in code are not the weakest link in the chain. The larger problems are identifying and articulating critical properties, and ensuring that the components of a system - not only software modules, but also hardware peripherals, physical environments, and human operators - together establish them. Another common assumption is that verification must take system design and implementation as given. I’ll explain the rationale for, and elements of, a new approach to verification, in which design is driven by verification goals, and verification arguments are structured in a way that exposes the relationship between critical properties and the components that ensure them.
% BibTeX
@inproceedings{JacksonK10,
author = {Daniel Jackson and
Eunsuk Kang},
editor = {Marc Frappier and
Uwe Gl{\"{a}}sser and
Sarfraz Khurshid and
R{\'{e}}gine Laleau and
Steve Reeves},
title = {A Structure for Dependability Arguments},
booktitle = {Abstract State Machines, Alloy, {B} and Z, Second International Conference,
{ABZ} 2010, Orford, QC, Canada, February 22-25, 2010. Proceedings},
series = {Lecture Notes in Computer Science},
volume = {5977},
pages = {1},
publisher = {Springer},
year = {2010},
url = {https://doi.org/10.1007/978-3-642-11811-1\_1},
doi = {10.1007/978-3-642-11811-1\_1},
timestamp = {Tue, 14 May 2019 10:00:50 +0200},
biburl = {https://dblp.org/rec/conf/asm/JacksonK10.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}