Listing by author "Ernst, Michael D."
- Conference paper: Collaborative verification of information flow for a high-assurance app store (Software-engineering and management 2015, 2015). Just, René; Ernst, Michael D.; Millstein, Suzanne.
  Abstract: Current app stores distribute some malware to unsuspecting users, even though the app approval process may be costly and time-consuming. High-integrity app stores must provide stronger guarantees that their apps are not malicious. This talk presents a verification model for use in such app stores to guarantee that apps are free of malicious information flows. In this model, the software vendor and the app store auditor collaborate: each does the tasks that are easy for them, reducing the overall verification cost. The software vendor provides a behavioral specification of information flow and source code annotated with information-flow type qualifiers. The talk also presents our flow-sensitive, context-sensitive information-flow type system, which checks those qualifiers and proves that only the information flows in the specification can occur at run time. We have implemented the information-flow type system for Android apps written in Java and evaluated both its effectiveness and its usability in practice. In an adversarial Red Team evaluation, we analyzed 72 apps (576,000 lines of code) for malware. Our information-flow type system was effective: it detected 96% of the malware whose malicious behavior was related to information flow. (A hedged sketch of such information-flow type qualifiers appears after this listing.)
- Conference paper: Mutation analysis for the real world: effectiveness, efficiency, and proper tool support (Software-engineering and management 2015, 2015). Just, René; Ernst, Michael D.; Fraser, Gordon.
  Abstract: Evaluating testing and debugging techniques is important for practitioners and researchers: developers want to know whether their tests are effective in detecting faults, and researchers want to compare different techniques. Mutation analysis fits this need and evaluates a testing or debugging technique by measuring how well it detects seeded faults (mutants). Mutation analysis has an important advantage over approaches that rely on code coverage: it assesses not only whether a test sufficiently covers the program code but also whether that test's assertions are effective in revealing faults. There is, however, surprisingly little evidence that mutants are a valid substitute for real faults. Furthermore, mutation analysis is well established in research but hardly used in practice, owing to scalability problems and insufficient tool support. This talk will address these challenges and summarize our recent contributions in the area of mutation analysis, with a focus on effectiveness, efficiency, and tool support. (A hedged sketch of a seeded mutant and a test that kills it appears after this listing.)
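
The first entry describes source code annotated with information-flow type qualifiers, which a flow-sensitive, context-sensitive type system then checks against a vendor-supplied flow specification. The following is a minimal, self-contained Java sketch of what such annotated code could look like; the @Source/@Sink annotations, the permission strings, and all method names are illustrative assumptions, not the API presented in the talk.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Target;

// Minimal sketch of information-flow type qualifiers. The @Source/@Sink
// annotations and the permission strings are hypothetical stand-ins for the
// qualifiers a vendor would write; a checker (not shown) would verify that
// every flow in the code is covered by the vendor's flow specification.
public class FlowSketch {

    @Target({ElementType.PARAMETER, ElementType.TYPE_USE})
    @interface Source { String value(); }   // where the data may originate

    @Target({ElementType.PARAMETER, ElementType.TYPE_USE})
    @interface Sink { String value(); }     // where the data may end up

    // Vendor-declared policy: location data may flow to the network.
    static void reportLocation(@Source("LOCATION") double lat,
                               @Source("LOCATION") double lon) {
        sendOverNetwork(lat, lon);   // accepted: LOCATION -> INTERNET is in the spec
        // writeToSmsLog(lat, lon);  // would be rejected: LOCATION -> SMS is not declared
    }

    static void sendOverNetwork(@Sink("INTERNET") double lat,
                                @Sink("INTERNET") double lon) {
        // ... network call elided ...
    }

    static void writeToSmsLog(@Sink("SMS") double lat,
                              @Sink("SMS") double lon) {
        // ... SMS write elided ...
    }
}
```

In a setup like this, the vendor's annotations carry the burden of proof: the auditor only has to confirm that the declared flows (here, LOCATION to INTERNET) are acceptable, while the type checker rules out any undeclared flow at compile time.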
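The second entry centers on mutation analysis: seeding small faults (mutants) into a program and measuring how well a test suite detects them. The sketch below illustrates the idea with a hypothetical method, a typical relational-operator mutant, and two tests; all names are invented, and JUnit 4 is an assumed dependency rather than anything prescribed by the talk.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical illustration of mutation analysis; all names are invented and
// JUnit 4 is assumed. A mutation tool seeds a small syntactic fault (a mutant)
// into the code under test and re-runs the tests; the mutant counts as
// detected ("killed") only if at least one test fails on the mutated program.
public class MutationSketch {

    // Code under test. A typical mutant, call it m1, would replace '>'
    // with '<' (relational-operator replacement), turning max into min.
    static int max(int a, int b) {
        return a > b ? a : b;
    }

    @Test
    public void coversButDoesNotKillM1() {
        // Executes the mutated expression, yet passes on the original and on
        // m1 alike (both return 7 for equal arguments): coverage alone does
        // not show that the assertion can reveal the seeded fault.
        assertEquals(7, max(7, 7));
    }

    @Test
    public void killsM1() {
        // Under m1, max(5, 3) would return 3, so this assertion fails on the
        // mutated program and mutant m1 is counted as killed.
        assertEquals(5, max(5, 3));
    }
}
```

This contrast is the advantage the abstract points to: both tests cover the mutated line, but only the second one's assertion distinguishes the mutant from the original program.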