
dc.contributor Wang, Xuyu en_US
dc.contributor.advisor Baynes, Anna en_US
dc.contributor.author Kakadiya, Ashray
dc.date.accessioned 2020-02-10T23:04:29Z
dc.date.available 2020-02-10T23:04:29Z
dc.date.issued 2020-02-10
dc.date.submitted 2019-12-04
dc.identifier.uri http://hdl.handle.net/10211.3/215073
dc.description Masters, Engineering and Computer Science, Computer Science en_US
dc.description.abstract This work presents a thorough investigation and a procedure to enhance the usability of current autograding tools. The focus is on improving existing open-source autograding tools through usability studies, heuristic evaluations, and an iterative software engineering improvement process. The first half of this project comprises investigation and research of existing autograding tools and analysis of how users currently interact with them. The second half presents a methodology for improving the usability of an existing open-source autograding tool, Submitty. The improvement process is grounded in iterative human-computer interaction techniques; for example, I used a heuristic evaluation and usability testing. Autograding tools help teachers and students with hassle-free submission of programming assignments and provide meaningful feedback after each submission. They offer robust interfaces for creating assignments and grading posted assignments. Web-based autograding tools securely grade assignments written in Python, C/C++, Java, Scheme, Prolog, SQL, SPIM, and anything else available on GNU/Linux. Instructors have full access to logs for debugging, batch regrading, and custom containers (e.g., Docker). These tools offer many advancements that aid students and teachers in introductory computer science classes. However, there is still a learning curve in utilizing these tools, leading many instructors to avoid them. My motivation in this project is to improve the usability of autograding tools. First, I conducted a heuristic evaluation of existing open-source projects. Next, I designed, developed, and implemented changes to correct the heuristic violations and make the system more robust.
To implement these changes, I used PhpStorm as the IDE, Xdebug for debugging, PostgreSQL as the database, PHPUnit for testing, JavaScript, and Theia (a cloud IDE), with Vagrant managing the Docker container service on a VirtualBox virtual machine (Ubuntu 18.04). The final system, including these changes, remains open source and can be used by any institution or professor to publish programming assignments for automated grading. en_US
dc.description.sponsorship Computer Science en_US
dc.language.iso en_US en_US
dc.subject Heuristic evaluation en_US
dc.subject Usability testing en_US
dc.subject Auto grading tool en_US
dc.subject Heuristic violation en_US
dc.title Usability enhancements on current autograding tools for introductory computer science courses en_US
dc.type Project en_US