VSSE 2012


Abstracts

Software Upgrade Checking Using Interpolation-based Function Summaries
Ondrej Sery, Charles University, Prague

During its evolution, a typical software system undergoes a myriad of small changes. However, it is very costly or simply infeasible to verify each new version from scratch. As a remedy, we propose using function summaries to enable incremental verification of evolving software. During verification of the original version, our approach computes function summaries using Craig interpolation. Later, these summaries are used to perform an incremental check of the new version. A benefit of this approach is that the cost of the check depends on the extent of the change between the two versions. Thus, checking small changes with little impact is cheap.
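
As a rough illustration of the reuse check (our sketch, not the talk's implementation), the following Python snippet uses the Z3 bindings to test whether an old interpolant-based summary still over-approximates the upgraded body of a function; if it does, the summary can be reused and callers need not be re-analyzed. The summary and body formulas are invented for the example.

    # Sketch: does the old summary still over-approximate the new body of f?
    # Formulas are illustrative; real summaries come from interpolation.
    from z3 import Ints, Implies, Not, Solver, unsat

    x, y = Ints('x y')            # input and output of f
    old_summary = y >= x          # summary extracted from the original version
    new_body = y == x + 2         # symbolic encoding of f's upgraded body

    s = Solver()
    s.add(Not(Implies(new_body, old_summary)))   # search for a violating run
    if s.check() == unsat:
        print("summary still valid -> reuse it; the upgrade check stays local")
    else:
        print("summary violated -> re-verify f and recompute its summary")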

Bio: Ondrej Sery received his PhD from Charles University in Prague in the field of software verification. During his postdoctoral stay at the University of Lugano, he studied function summarization using Craig interpolation and its application to software upgrade checking within the PINCETTE EU project. Currently, he is an assistant professor at Charles University in Prague.


Finding Races in Evolving Concurrent Programs Through Check-in Driven Analysis
Alastair Donaldson, Imperial College London

We have recently designed a tool, KLEE-race, for detecting data races in concurrent C programs using dynamic symbolic execution, building on top of the KLEE tool. Although KLEE-race uses several optimisations to explore the state space of a concurrent program in an intelligent fashion, state-space explosion is still problematic and can hinder the discovery of important bugs. Based on the hypothesis that, in large software projects, bugs are often localised in parts of the software that have recently changed, we are undertaking a study applying KLEE-race to multiple source code revisions. By guiding the tool with heuristics that attempt to explore methods that were modified during recent check-ins, we hope to uncover a larger percentage of bugs than with the tool's default search heuristics. In the talk, I will report the latest findings of this ongoing work.
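
As a concrete (and purely illustrative) version of such a check-in driven heuristic, one can rank source files by the time of their last commit and explore recently changed code first; the sketch below shells out to git for this purpose, and the file names are hypothetical. It is not part of KLEE-race.

    # Sketch: order files so that recently changed code is explored first.
    # Assumes the analysis runs inside a git working copy.
    import subprocess

    def last_change_time(path):
        # Unix timestamp of the most recent commit touching `path` (0 if none).
        out = subprocess.run(
            ["git", "log", "-1", "--format=%ct", "--", path],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return int(out) if out else 0

    def prioritise(paths):
        # Most recently modified files come first.
        return sorted(paths, key=last_change_time, reverse=True)

    if __name__ == "__main__":
        for p in prioritise(["src/scheduler.c", "src/locks.c", "src/util.c"]):
            print(p)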

Bio: Alastair Donaldson is a Lecturer and EPSRC Research Fellow in the Department of Computing at Imperial College London, where he leads the Multicore Programming group. He is also coordinator of CARP: Correct and Efficient Accelerator Programming, an EU-funded STREP project focussing on programming models and verification techniques for effective construction of software for accelerator processors such as GPUs. Alastair has published more than 40 research papers on formal verification, compilers and multicore programming, and has designed several software tools in these areas. Before joining Imperial, Alastair was a Visiting Researcher at Microsoft Research, Redmond; a Research Fellow at the Department of Computer Science, University of Oxford, and a Fellow of Wolfson College; and a Research Engineer at Codeplay Software Ltd. He holds a PhD from the University of Glasgow.


SymDiff: Leveraging and Extending Program Verification Techniques for Comparing Programs
Shuvendu K. Lahiri, Microsoft Research, SymDiff project

Comparing programs has several applications, including ensuring compatibility of evolving programs, comparing independent implementations, and validating compiler transformations. SymDiff is a project for investigating program verification techniques for comparing programs. We describe a language-agnostic semantic diff framework (also called SymDiff) for comparing two programs, which operates at the level of the intermediate verification language Boogie. We describe a contract mechanism called mutual summaries that extends single-program contracts such as pre/postconditions; this is useful for relating programs that have undergone interprocedural transformations. Finally, we will describe an application of SymDiff to the problem of compiler validation, and interesting challenges in scaling it to a large number of examples.

Joint work with Chris Hawblitzel, M. Kawaguchi and H. Rebelo.
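
To give a flavour of what a mutual summary expresses (our simplified sketch, using the Z3 Python bindings rather than Boogie/SymDiff), the snippet below checks a relational contract over two versions of a procedure: equal inputs must yield equal results. The two procedure encodings are invented for the example.

    # Sketch of a mutual summary: a contract relating a *pair* of procedures.
    from z3 import Ints, And, Implies, Not, Solver, unsat

    x1, r1 = Ints('x1 r1')        # input/output of version 1
    x2, r2 = Ints('x2 r2')        # input/output of version 2

    proc1 = r1 == 2 * x1          # version 1: doubles its input
    proc2 = r2 == x2 + x2         # version 2: refactored, same behaviour

    mutual_summary = Implies(x1 == x2, r1 == r2)   # equal inputs -> equal results

    s = Solver()
    s.add(And(proc1, proc2), Not(mutual_summary))
    print("mutual summary holds" if s.check() == unsat else "counterexample found")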

Bio: Shuvendu Lahiri is a researcher in the Research in Software Engineering (RiSE) group at Microsoft Research, Redmond. His research focusses on program verification, symbolic methods and theorem proving. He has worked on verifying heap manipulating C programs, modeling and verifying unbounded systems, predicate abstraction methods and SMT solvers. He obtained a PhD from Carnegie Mellon University and a BTech from Indian Institute of Technology, Kharagpur. His PhD thesis received the ACM SIGDA Outstanding Thesis Award for the year 2004-2005.


Testing Evolving Software
Alex Orso, Georgia Institute of Technology

Software engineers spend most of their time maintaining and evolving software systems to fix faults, improve performance, or add new features. A fundamental activity during software evolution is regression testing: testing modified software to gain confidence that the changed parts of the software behave as expected and that the changes have not introduced new faults in the rest of the code. Regression testing involves a wide range of activities, including (1) regression-test selection, which identifies test cases in an existing test suite that need not be rerun on the new version of the software, (2) test-suite reduction, which eliminates redundant test cases in a test suite according to a given set of criteria, and (3) test-suite augmentation, which determines whether a regression test suite adequately exercises the changed behavior of modified software and, if not, provides suitable guidance for creating new test cases that specifically target such changed behavior. The talk will first provide an overview of regression testing and of the state of the art in this area. It will then discuss, at different levels of detail, a set of techniques that we developed for supporting various regression-testing activities. Finally, we will conclude by discussing some current and future directions for regression-testing research.
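
To make activity (1) concrete, here is a minimal sketch (our illustration, not Orso's tooling) of coverage-based regression-test selection: rerun exactly the tests whose recorded coverage overlaps the entities changed in the new version. The coverage map and change set are hypothetical.

    # Sketch: select the tests that must be rerun after a change.
    def select_tests(coverage, changed):
        # coverage: test name -> set of covered entities (e.g. methods)
        # changed:  entities modified in the new version
        return {test for test, covered in coverage.items() if covered & changed}

    coverage = {
        "test_login":    {"auth.check", "db.read"},
        "test_checkout": {"cart.total", "db.read"},
        "test_profile":  {"user.render"},
    }
    changed = {"cart.total"}                 # touched by the latest check-in

    print(select_tests(coverage, changed))   # -> {'test_checkout'}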

Bio: Alessandro Orso is an Associate Professor in the College of Computing at the Georgia Institute of Technology. He received his M.S. degree in Electrical Engineering (1995) and his Ph.D. in Computer Science (1999) from Politecnico di Milano, Italy. Since March 2000, he has been at Georgia Tech, first as a research faculty member and now as an Associate Professor. His area of research is software engineering, with emphasis on software testing and program analysis. His interests include the development of techniques and tools for improving software reliability, security, and trustworthiness, and the validation of such techniques on real-world systems. Dr. Orso has received funding for his research from government agencies, such as NSF and the Department of Homeland Security, and from industry, including IBM and Microsoft. He serves on the editorial board of ACM TOSEM, served as program chair for ACM-SIGSOFT ISSTA 2010, and serves on the Advisory Board of Reflective Corp. He has also served as a technical consultant to DARPA. Dr. Orso is a member of the ACM and the IEEE Computer Society.


Empirical Analysis of the Evolution of Vulnerabilities
Fabio Massacci, University of Trento, coordinator of SecureChange (Security Engineering for lifelong Evolvable Systems) project

Many conference and journal papers describe laws that should characterize the evolution of software and its vulnerabilities. Vulnerability models operate on the known vulnerability datasets to obtain a quantitative temporal estimation of the vulnerabilities present in the software. Successful models can provide useful hints to both software vendors and users for allocating resources to handle potential breaches and for planning patch updates.

Having a precise vulnerability discovery model (VDM) would also provide useful quantitative insight for assessing software security. Thus far, several models have been proposed, with some evidence supporting their goodness of fit. In this work we describe an independent validation of the applicability of these models to the vulnerabilities of the popular browsers Firefox, Google Chrome and Internet Explorer.

The results show that some VDMs simply do not fit the data (no matter how the data are computed), while for others there is both positive and negative evidence. The major source of problems is the presence of unaccounted-for after-life vulnerabilities.

We also analyze the impact of vulnerability datasets based on different definitions of vulnerability on the VDMs' performance. In terms of databases, we found that using "confirmed" vulnerabilities yields more stable results.
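
For readers unfamiliar with VDMs, the sketch below fits one widely cited model, the Alhazmi-Malaiya logistic model omega(t) = B / (B*C*exp(-A*B*t) + 1), to a cumulative vulnerability count using scipy. The data are synthetic and only illustrate the fitting step; the study itself uses real browser vulnerability databases, and the talk does not prescribe this particular model.

    # Sketch: fit the AML logistic VDM to a (synthetic) vulnerability time series.
    import numpy as np
    from scipy.optimize import curve_fit

    def aml(t, A, B, C):
        # Cumulative vulnerabilities predicted at time t.
        return B / (B * C * np.exp(-A * B * t) + 1.0)

    rng = np.random.default_rng(0)
    months = np.arange(1, 49, dtype=float)                  # months since release
    observed = aml(months, 0.002, 120.0, 0.5) \
               + rng.normal(0.0, 2.0, size=months.size)     # synthetic "dataset"

    params, _ = curve_fit(aml, months, observed, p0=[0.001, 100.0, 1.0], maxfev=20000)
    print("fitted A, B, C:", params)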

Bio: Prof. Massacci graduated from the University of Rome I in 1998. He then did his postdoctoral research at CNR, Rome, worked as an assistant professor at the University of Siena, and was a visiting researcher at IRIT-CNRS, Toulouse, before accepting his current position as a professor at the University of Trento. He has published over 130 peer-reviewed articles in conferences and academic journals and has participated in six international research projects as a principal investigator. Fabio Massacci's main research interests are in the fields of automated reasoning and formal methods, security engineering and verification, and security and trust management for autonomic systems.


Regression Verification for Multi-Threaded Programs
Ofer Strichman, Technion

Regression verification is the problem of deciding whether two similar programs are equivalent under an arbitrary yet equal input, given some definition of equivalence. So far this problem has only been studied for the case of single-threaded deterministic programs. We present a method for regression verification to establish partial equivalence (i.e., input/output equivalence of terminating executions) of multi-threaded programs. Specifically, we develop two proof rules that decompose the regression verification between concurrent programs into regression verification between sequential functions, a more tractable problem. This ability to avoid composing threads altogether when discharging premises, in a fully automatic way and for general programs, uniquely distinguishes our proof rules from others used for classical verification of concurrent programs.
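
For reference, partial equivalence can be stated as follows (the notation is ours, not the talk's): for every input on which both programs terminate, they produce the same output.

    \forall\, \mathit{in}.\;
      \bigl( P_1(\mathit{in})\!\downarrow \;\wedge\; P_2(\mathit{in})\!\downarrow \bigr)
      \;\Longrightarrow\; P_1(\mathit{in}) = P_2(\mathit{in})

Here P_i(in)↓ denotes that version i terminates on input in, and P_i(in) denotes its output.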

Bio: Prof. Strichman graduated from the Weizmann Institute in 2001, where he completed his PhD under the supervision of the late Amir Pnueli. He then spent two years as a postdoc at Carnegie Mellon before joining the Technion as a senior lecturer. He has published a book on decision procedures together with Daniel Kroening, and over 70 peer-reviewed articles in formal verification and operations research.


Automated Continuous Evolutionary Testing
Peter M. Kruse, Berner & Mattner Systemtechnik GmbH, FITTEST (Future Internet Testing) project

Applications that run in the cloud of the Future Internet need to be tested continuously (i.e. post-release) since the application under test does not remain fixed after its release. Services and components could be dynamically added by customers and the intended use could change. Therefore, testing has to be performed continuously after deployment to the customer.

The FITTEST testing environment will integrate, adapt and automate various techniques for continuous FI testing, providing a user-friendly way to activate and parameterize them. This presentation will explain the global picture of this environment and the corresponding project.

Bio: Peter M. Kruse is a software engineer at Berner & Mattner Systemtechnik GmbH, where he works in the domain of testing, including evolutionary testing and the classification tree method. Peter's project experience includes hardware-in-the-loop testing (HiL), model-driven development (MDD), and evolutionary structural and functional testing. He is also responsible for the development of the classification tree editor (CTE XL Professional), a popular tool for systematic black-box test case design. His previous work on evolutionary testing includes participation in the EU-funded EvoTest project from 2006 to 2009, and he currently participates in the FITTEST EU project.

