Listing by author "Fraser, Gordon"
1 - 10 of 13
- Conference paper: Automatic Generation of Unit Tests for Classes with Environment Dependencies (Software-engineering and management 2015, 2015) Arcuri, Andrea; Fraser, Gordon; Galeotti, Juan Pablo. Automatic test generation for object-oriented software typically consists of constructing sequences of method calls in order to achieve high code coverage. In practice, the success of this process may be limited when classes interact with their environment, such as the file system, the network, or users. This leads to two major problems.
- Practice paper: Automated Feedback for Block-Based Programming Languages (INFOS 2023 - Informatikunterricht zwischen Aktualität und Zeitlosigkeit, 2023) Obermüller, Florian; Greifenstein, Luisa; Heuer, Ute; Fraser, Gordon. Block-based programming languages such as Scratch or mBlock enable motivating and easy first steps in programming, but without feedback on faulty or cumbersome as well as on well-crafted program sections, many learning opportunities remain unused. To address this problem, we present LitterBox, the automated feedback tool we developed. LitterBox supports learners by checking programs for known code patterns and, for each instance of a pattern it finds, generating explanations that refer directly to the program at hand and visualizing them at the corresponding code section. In addition, LitterBox can help teachers quickly gain an overview of their students' principal learning gaps and progress. In a first evaluation with primary-school student teachers without a university computer science background (n = 142), LitterBox generally proved very helpful for creating functional and readable programs. Criteria for good hints could also be derived from the students' feedback. LitterBox is provided by us as a web front end, or it can be obtained as open source and extended with new code patterns.
- Conference paper: Code Defenders (Software Engineering und Software Management 2018, 2018) Rojas, José Miguel; White, Thomas D.; Clegg, Benjamin S.; Fraser, Gordon. This paper was presented at the 39th International Conference on Software Engineering (ICSE 2017), where it received an ACM SIGSOFT Distinguished Paper Award: Writing good software tests is difficult and not every developer's favorite occupation. Mutation testing aims to help by seeding artificial faults (mutants) that good tests should identify, and test generation tools help by providing automatically generated tests. However, mutation tools tend to produce huge numbers of mutants, many of which are trivial, redundant, or semantically equivalent to the original program; automated test generation tools tend to produce tests that achieve good code coverage, but are otherwise weak and have no clear purpose. In this paper, we present an approach based on gamification and crowdsourcing to produce better software tests and mutants: The Code Defenders web-based game lets teams of players compete over a program, where attackers try to create subtle mutants, which the defenders try to counter by writing strong tests. Experiments in controlled and crowdsourced scenarios reveal that writing tests as part of the game is more enjoyable, and that playing Code Defenders results in stronger test suites and mutants than those produced by automated tools.
- Conference paper: Creating Test-Cases Incrementally with Model-Checkers (Informatik 2007 – Informatik trifft Logistik – Band 2, 2007) Fraser, Gordon; Wotawa, Franz. Test-case generation with model-checkers is a promising field of research in software testing. Model-checker based approaches offer many advantages: they are fully automated, they are flexible due to different concrete techniques, and under certain conditions they are also efficient. There are still many issues that need to be resolved in order to achieve widespread acceptance in industry. Because model-checkers were not originally designed with test-case generation in mind, a large percentage of the test-cases produced are duplicates. Many of the remaining test-cases share identical prefixes that do not contribute to the overall fault sensitivity of a test-suite. Some test criteria also result in large test-suites of rather short test-cases. In this paper, we address these problems and suggest creating test-cases incrementally instead of separately for each test requirement. For this, heuristics based on an estimated distance between a state and a temporal logic formula are presented, which allow choosing which test-case to extend with regard to which test requirement.
- Conference paper: An Empirical Study of Flaky Tests in Python (Software Engineering 2022, 2022) Gruber, Martin; Lukasczyk, Stephan; Kroiß, Florian; Fraser, Gordon. This is a summary of our work presented at the International Conference on Software Testing 2021 [Gr21b]. Tests that cause spurious failures without code changes, i.e., flaky tests, hamper regression testing and decrease trust in tests. While the prevalence and importance of flakiness is well established, prior research focused on Java projects, raising questions about generalizability. To provide a better understanding of flakiness, we empirically study the prevalence, causes, and degree of flakiness within 22 352 Python projects containing 876 186 tests. We found flakiness to be equally prevalent in Python as in Java. The reasons, however, are different: Order dependency is a dominant problem, causing 59% of the 7 571 flaky tests we found. Another 28% were caused by test infrastructure problems, a previously less considered cause of flakiness. The remaining 13% can mostly be attributed to the use of network and randomness APIs. Unveiling flaky tests also requires more runs than often assumed: A 95% confidence that a passing test is not flaky would on average require 170 reruns. Additionally, through our investigations, we created a large dataset of flaky tests that other researchers have already started building on [MM21; Ni21].
- Conference paper: Improving Testing Behavior by Gamifying IntelliJ (Software Engineering 2024 (SE 2024), 2024) Straubinger, Philipp; Fraser, Gordon
- Poster: Music Programming as a Universal Motivator in Programming Education (INFOS 2023 - Informatikunterricht zwischen Aktualität und Zeitlosigkeit, 2023) Graßl, Isabella; Fraser, Gordon. Due to persistent societal stereotypes, building initial motivation for learning to program is particularly difficult, especially among girls. Based on a course evaluation with 118 adolescents, the synthesis of music and computer science, in which fundamental programming concepts are taught through musical compositions, emerges as a universal motivator that is especially popular with girls.
- Conference paper: Mutation analysis for the real world: effectiveness, efficiency, and proper tool support (Software-engineering and management 2015, 2015) Just, René; Ernst, Michael D.; Fraser, Gordon. Evaluating testing and debugging techniques is important for practitioners and researchers: developers want to know whether their tests are effective in detecting faults, and researchers want to compare different techniques. Mutation analysis fits this need and evaluates a testing or debugging technique by measuring how well it detects seeded faults (mutants). Mutation analysis has an important advantage over approaches that rely on code coverage: it not only assesses whether a test sufficiently covers the program code but also whether that test's assertions are effective in revealing faults. There is, however, surprisingly little evidence that mutants are a valid substitute for real faults. Furthermore, mutation analysis is well established in research but hardly used in practice due to scalability problems and insufficient tool support. This talk will address these challenges and summarize our recent contributions in the area of mutation analysis with a focus on effectiveness, efficiency, and tool support.
- Conference paper: Neuroevolution-Based Generation of Tests and Oracles for Games (Software Engineering 2024 (SE 2024), 2024) Feldmeier, Patric; Fraser, Gordon
- Conference paper: Practical Flaky Test Prediction using Common Code Evolution and Test History Data (Software Engineering 2024 (SE 2024), 2024) Gruber, Martin; Heine, Michael; Oster, Norbert; Philippsen, Michael; Fraser, Gordon