VSSE2013


ETAPS 2013


Abstracts

Abstract Semantic Differencing for Numerical Programs (invited talk)
Eran Yahav (Technion)

We address the problem of computing semantic differences between a program and a patched version of the program. Our goal is to obtain a precise characterization of the difference between program versions, or to establish their equivalence when no difference exists. We focus on computing semantic differences in numerical programs where the values of variables have no a priori bounds, and use abstract interpretation to compute an over-approximation of program differences. Computing differences and establishing equivalence under abstraction requires abstracting relationships between variables in the two programs. Towards that end, we first construct a correlating program in which these relationships can be tracked, and then use a correlating abstract domain to compute a sound approximation of these relationships. To better establish equivalence between correlated variables and to precisely capture differences, our domain has to represent non-convex information using a partially disjunctive abstract domain. To balance the precision and cost of this representation, our domain over-approximates numerical information while preserving equivalence between correlated variables, by dynamically partitioning the state according to equivalence criteria. We have implemented our approach in a tool called DIZY, built on the LLVM compiler infrastructure and the APRON numerical abstract domain library, and applied it to a number of challenging real-world examples, including programs from the GNU core utilities, Mozilla Firefox and the Linux kernel. Our evaluation shows that DIZY often manages to establish equivalence, describes a precise approximation of the semantic differences when differences exist, and reports only a few false differences.
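
As a toy illustration of the correlating-program construction (a hedged sketch, not DIZY: versions are lists of straight-line assignments, and the abstraction is a crude symbolic-term domain rather than APRON's numerical domains), one can rename the two versions' variables apart, interpret the combined program once, and ask which correlated output variables are provably equal:

    # Hedged sketch: a correlating program over a toy symbolic-term domain.
    # Statements are (lhs, op, x, y) meaning "lhs = x op y", where x and y
    # are variable names or integer literals.

    def run(stmts, env):
        """Interpret straight-line assignments symbolically: each variable
        is mapped to a canonical term built from the shared inputs."""
        term = lambda v: env.get(v, v) if isinstance(v, str) else v
        for lhs, op, x, y in stmts:
            env[lhs] = (op, term(x), term(y))
        return env

    def rename(prefix, stmts):
        """Rename one version's variables apart so both fit in one program."""
        pre = lambda v: f"{prefix}{v}" if isinstance(v, str) else v
        return [(pre(l), op, pre(x), pre(y)) for l, op, x, y in stmts]

    def semantic_diff(v1, v2, inputs, outputs):
        """Build the correlating program and compare correlated outputs."""
        env = {}
        for v in inputs:                  # both versions read the same inputs
            env[f"a_{v}"] = env[f"b_{v}"] = v
        run(rename("a_", v1) + rename("b_", v2), env)
        return {v: env[f"a_{v}"] == env[f"b_{v}"] for v in outputs}

    # The patch rewrites "x = x + 1; x = x + 1" as "x = x + 2". The toy
    # domain cannot see through the rewrite and conservatively reports a
    # possible difference; DIZY's numerical domains are far more precise.
    v1 = [("x", "+", "x", 1), ("x", "+", "x", 1)]
    v2 = [("x", "+", "x", 2)]
    print(semantic_diff(v1, v2, inputs=["x"], outputs=["x"]))  # {'x': False}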


Formalization and Verification of Behavioral Correctness of Dynamic Software Updates
Min Zhang, Kazuhiro Ogata and Kokichi Futatsugi

Dynamic Software Updating (DSU) is a technique for updating running software systems on the fly. While there are some studies on the correctness of dynamic updating, they focus on how to deploy updates correctly at the code level, e.g., whether procedures refer to data of the correct types. However, little attention has been paid to the correctness of dynamic updating at the behavior level, e.g., whether systems behave as expected after being updated, and whether unexpected behaviors can never occur. We present an algebraic methodology for specifying dynamic updates and verifying their behavioral correctness using off-the-shelf theorem proving and model checking tools. By theorem proving we can show that updated systems indeed satisfy their desired properties, and by model checking we can detect potential errors. Our methodology is general in that: (1) it can be applied to the three updating models mainly used in current DSU systems; and (2) it is not restricted to dynamic updates for particular programming models.
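
The following toy sketch (in Python with invented names; the authors work in an algebraic specification setting with off-the-shelf provers and model checkers, not in Python) makes the behavior-level question concrete: model the old and new systems as transition systems, apply the update in every reachable pre-update state, and exhaustively check that a behavioral property holds afterwards:

    # Hedged sketch of a behavior-level update check via explicit-state
    # exploration. The counter systems and the property are invented.

    def reachable(init, step):
        """All states reachable from `init` under the `step` relation."""
        seen, todo = set(), [init]
        while todo:
            s = todo.pop()
            if s not in seen:
                seen.add(s)
                todo.extend(step(s))
        return seen

    old_step = lambda s: [(s + 1) % 4]   # old system: counter modulo 4
    new_step = lambda s: [(s + 1) % 6]   # new system: counter modulo 6
    transfer = lambda s: s               # state carried over unchanged
    prop = lambda s: s < 6               # desired post-update behavior

    # The update may be applied dynamically in any reachable state, so the
    # property is checked over all post-update behaviors from each of them.
    ok = all(prop(t)
             for s in reachable(0, old_step)
             for t in reachable(transfer(s), new_step))
    print("update behaviorally correct:", ok)   # True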


Towards Incremental Mutation Testing
Mark Anthony Cachia, Mark Micallef and Christian Colombo

Proponents of Agile development processes claim that adhering to Agile principles leads to the delivery of high-quality code in evolutionary increments. Confidence in the resulting systems is mostly gained through unit test suites, which are entrusted to catch regressions as soon as they occur. Consequently, the system can only be as trustworthy as its tests, meaning that measuring the quality of the tests is crucial. Whilst mutation testing has been proposed as a means of uncovering test suite defects, it has not been widely adopted in industry, mainly due to its computational expense and the manual effort required of developers investigating unkilled mutants. To make mutation testing affordable, we propose incremental mutation testing: a variation of mutation testing which leverages the iterative nature of Agile development by limiting the scope of mutant generation to sections of code which have changed since the last mutation run. Preliminary results show that the number of mutants generated is drastically reduced, along with the time required to generate mutants and execute tests against them.
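
A minimal sketch of the scoping idea (not the authors' tool; the single '+'-to-'-' mutation operator is a stand-in for a real mutation engine) diffs the previous and current versions of a source file and generates mutants only for the changed lines:

    import difflib

    def changed_lines(old_src, new_src):
        """Indices of lines in new_src that differ from old_src."""
        ops = difflib.SequenceMatcher(
            None, old_src.splitlines(), new_src.splitlines()).get_opcodes()
        return {j for tag, _, _, j1, j2 in ops
                if tag in ("replace", "insert") for j in range(j1, j2)}

    def mutants_for(src, lines):
        """Yield one toy '+ -> -' mutant per changed line containing '+'."""
        src_lines = src.splitlines()
        for i in sorted(lines):
            if "+" in src_lines[i]:
                mutated = list(src_lines)
                mutated[i] = mutated[i].replace("+", "-", 1)
                yield "\n".join(mutated)

    old = "def price(q, p):\n    total = q * p\n    return total\n"
    new = "def price(q, p):\n    total = q * p + 1\n    return total\n"

    for mutant in mutants_for(new, changed_lines(old, new)):
        print(mutant)   # a real tool runs the test suite against each mutant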


Memoized Symbolic Execution (invited talk)
Corina Pasareanu (NASA Ames and CMU)

We present memoized symbolic execution, a program analysis approach that makes the application of symbolic execution more efficient. The key idea in our work is to reuse symbolic execution results across different runs, without having to re-compute previously computed results as earlier approaches do. Specifically, we build a trie-based data structure that records path exploration information during a run of symbolic execution, and then optimize and reuse the trie in subsequent runs. We show how our approach optimizes symbolic execution in three standard scenarios where it is commonly applied: iterative deepening, regression checking, and heuristic search. We have implemented memoized symbolic execution in the Symbolic PathFinder framework for the symbolic execution of Java programs. Joint work with Guowei Yang and Sarfraz Khurshid.
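
A minimal sketch of the trie idea (assuming a fixed exploration depth and boolean branch outcomes; the Symbolic PathFinder implementation records much richer information): paths explored by an earlier run are stored in a trie, and a later run prunes every subtree the trie marks as already explored:

    class TrieNode:
        def __init__(self):
            self.children = {}   # branch outcome -> TrieNode
            self.done = False    # was this subtree fully explored before?

    def record(root, path):
        """Store one explored path (a tuple of branch outcomes)."""
        node = root
        for outcome in path:
            node = node.children.setdefault(outcome, TrieNode())
        node.done = True

    def explore(node, depth, prefix, work):
        """Re-run exploration, skipping memoized ('done') subtrees."""
        if node is not None and node.done:
            return                      # reuse: nothing new below this node
        if depth == 0:
            work.append(prefix)         # a path that still needs analysis
            return
        for outcome in (True, False):
            child = node.children.get(outcome) if node else None
            explore(child, depth - 1, prefix + (outcome,), work)

    root = TrieNode()
    record(root, (True, True))          # paths covered by the previous run
    record(root, (True, False))

    todo = []
    explore(root, 2, (), todo)
    print(todo)   # [(False, True), (False, False)]: only unexplored paths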


Static Upgrade Checking with Dynamically Generated Assertions
Fabrizio Pastore (UniMiB)

Model checking is extremely useful for verifying whether properties of interest hold for a target program. Unfortunately, model checking is mostly used to verify general properties, such as the absence of deadlocks, rather than specific properties, which must be manually specified by developers; yet a number of relevant program faults can only be detected with specific properties. Dynamically generated assertions are an interesting alternative to manually specified assertions. In this case, assertions are obtained by applying inference engines to traces collected during test case execution. The verification of dynamically generated assertions, in contrast to manually specified assertions, poses interesting challenges, such as the need to limit the false positives that imprecise assertions might generate, and the need to scale the approach to the large populations of assertions that an inference engine might return. This talk presents a static-dynamic analysis technique, developed in the context of the EU project PINCETTE, that identifies faults in program updates by statically checking, on the updated program, the dynamic properties generated from executions of the program before the update.

This strategy can effectively identify faulty updates that violate the program properties derived dynamically. The approach elaborated within the project includes heuristics to reduce false positives, by filtering out spurious assertions, and to improve verification scalability, by exploiting the nature of the change during the verification process. Early experiments conducted with open-source software systems show that the approach can effectively be used to identify regression problems.
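
A hedged sketch of the pipeline (the function names and the range invariant below are invented, and the final check is a dynamic re-execution for brevity, whereas the actual technique checks the generated assertions statically on the updated program):

    def infer_range(fn, inputs):
        """Infer a trivial Daikon-style invariant, lo <= result <= hi,
        from traces of the original program over a test suite."""
        values = [fn(x) for x in inputs]
        return min(values), max(values)

    def original(x):
        return abs(x) % 10

    def updated(x):              # the patch under scrutiny
        return abs(x) % 12       # silently widens the result range

    tests = range(-50, 50)
    lo, hi = infer_range(original, tests)
    print(f"generated assertion: {lo} <= result <= {hi}")

    # The real technique hands such assertions to a static checker; here
    # we merely re-execute to show that the update violates this one.
    violations = [x for x in tests if not lo <= updated(x) <= hi]
    print("inputs violating the assertion:", violations[:4], "...")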


Verification on the fly in Visual Studio
Nassim Seghir (Oxford)

In recent years, automatic software verification has emerged as a complementary approach to program testing for enhancing software quality. Finding bugs is the ultimate aim of software verification tools, but how do we best support the programmer who has to diagnose and understand those bugs? Unfortunately, most existing tools do not offer enough support for error diagnosis. We have developed a plug-in which implements a graphical user interface for the CProver tools within the Visual Studio IDE. Our plug-in enables visual debugging and error trace simulation within C programs, as well as co-debugging C programs in tandem with waveform views of hardware designs. Another feature of our plug-in is background verification: each time a program source is saved, the verification process is silently triggered in the background. If an error is found, its location is highlighted in the program, and the user interacts directly with the program source to obtain information about the error.
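
Outside the IDE, the background-verification behavior can be approximated with a few lines of scripting (a hedged sketch: it polls the file's modification time instead of hooking the editor's save event, assumes a cbmc binary on the PATH as the CProver checker, and prints results instead of highlighting them in the editor):

    import os, subprocess, time

    SOURCE = "program.c"   # hypothetical file under development

    def verify(path):
        """Run the checker silently; surface output only on failure."""
        result = subprocess.run(["cbmc", path], capture_output=True, text=True)
        if result.returncode != 0:
            print("verification failed:\n" + result.stdout)
        else:
            print("no errors found")

    last_mtime = None
    while True:                               # naive stand-in for a save event
        mtime = os.path.getmtime(SOURCE)
        if mtime != last_mtime:               # the file was saved
            last_mtime = mtime
            verify(SOURCE)
        time.sleep(1)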


eVolCheck: Incremental Model Checking for Upgrade Checks
Grigory Fedyukovich (USI)

Software is not created at once. Rather, it grows incrementally, version by version, and evolves long after its first release. To be practical for software developers, software verification tools should be able to cope with changes. In this talk, we present a tool, eVolCheck, that focuses on the incremental verification of software as it evolves. As the software evolves, the tool maintains abstractions of program functions, called function summaries, derived using Craig interpolation. The function summaries are used to localize the verification of an upgrade to the analysis of the modified functions. Experimental evaluation on a range of benchmarks shows that the incremental upgrade checking of eVolCheck achieves a substantial speedup over checking each version from scratch.
Joint work with Ondrej Sery and Natasha Sharygina.
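
A hedged sketch of the localization idea (eVolCheck derives summaries by Craig interpolation and checks them with a model checker; here a summary is just a Python predicate checked on sample inputs, and all names are invented):

    def check_upgrade(summaries, new_impls, callers, modified, samples):
        """Return the set of functions whose old summaries no longer hold,
        propagating failed checks up the call graph."""
        invalid, todo = set(), list(modified)
        while todo:
            f = todo.pop()
            summary, impl = summaries[f], new_impls[f]
            if all(summary(x, impl(x)) for x in samples):
                continue                 # old summary still valid: stop here
            invalid.add(f)               # summary broken: recheck the callers
            todo.extend(c for c in callers.get(f, []) if c not in invalid)
        return invalid

    # `inc` was patched from x + 1 to x + 2; its summary "output > input"
    # still holds, so its caller `main_loop` need not be re-verified.
    summaries = {"inc": lambda x, y: y > x, "main_loop": lambda x, y: True}
    new_impls = {"inc": lambda x: x + 2, "main_loop": lambda x: x}
    callers   = {"inc": ["main_loop"]}

    print(check_upgrade(summaries, new_impls, callers,
                        modified=["inc"], samples=range(10)))   # set()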

