
A usability model for software development processes and practices

Material type: Text
Publication details: 2020
Description: 1 file (3.4 MB): color illustrations
Contents:
Chapter 1. Introduction -- 1.1. Motivation -- 1.2. Problem Statement -- 1.3. Objective of the Thesis -- 1.4. Research Strategy -- 1.4.1. Explicate Problem -- 1.4.2. Define Objective and Requirements -- 1.4.3. Design and Develop Artifact -- 1.4.4. Demonstrate Artifact -- 1.4.5. Evaluate Artifact -- 1.5. Research Context -- 1.6. Thesis Outline
Chapter 2. State of the Art -- 2.1. SMS on Process and Practice Usability -- 2.1.1. SMS Planning -- 2.1.1.1. SMS Objective and Research Questions -- 2.1.1.2. Search Strategy -- 2.1.1.3. Inclusion and Exclusion Criteria -- 2.1.1.4. Selection Procedure -- 2.1.1.5. Data Extraction Strategy -- 2.1.1.6. Data Synthesis Strategy -- 2.1.2. SMS Execution -- 2.1.3. SMS Results -- 2.2. Conclusions
Chapter 3. Initial UMP Construction -- 3.1. Selection of Sources -- 3.2. Model Construction -- 3.2.1. Define Initial Usability Characteristics -- 3.2.2. Decompose Characteristics -- 3.2.3. Define Metrics
Chapter 4. UMP Structure Definition -- 4.1. UMP Summary -- 4.2. UMP Detailed Description -- 4.2.1. Self-evident purpose -- 4.2.1.1. Appropriateness of name metric -- 4.2.1.2. Recognized purpose metric -- 4.2.2. Learnability -- 4.2.2.1. Time required to learn to perform metric -- 4.2.2.2. Standard introductory course duration metric -- 4.2.2.3. Number of new concepts metric -- 4.2.3. Understandability -- 4.2.3.1. Conceptual model correspondence metric -- 4.2.3.2. Conceptual model complexity metric -- 4.2.4. Safety -- 4.2.4.1. Cost of incorrect adoption metric -- 4.2.4.2. Reduction in cost of error metric -- 4.2.4.3. Safety perception metric -- 4.2.4.4. Use of restraining functions metric -- 4.2.5. Feedback -- 4.2.5.1. Timeliness of feedback metric -- 4.2.5.2. Feedback richness metric -- 4.2.5.3. People feedback metric -- 4.2.5.4. Automatic feedback metric -- 4.2.6. Visibility -- 4.2.6.1. Defines indicators metric -- 4.2.7. Controllability -- 4.2.7.1. Defines checkpoints metric -- 4.2.7.2. Explicit outcomes metric -- 4.2.7.3. Level of autonomy metric -- 4.2.8. Adaptability -- 4.2.8.1. Defines adaptation points metric -- 4.2.8.2. Ratio of roles allowed to adapt metric -- 4.2.9. Attractiveness -- 4.2.9.1. User attractiveness rating metric -- 4.2.10. User satisfaction -- 4.2.11. User satisfaction rating metric -- 4.3. UMP Evaluation Process -- 4.3.1. Example Evaluation of Continuous Integration -- 4.3.2. UMP Metric Categorization -- 4.4. UMP Usage Modes -- 4.5. UMP Usage Scenarios
Chapter 5. UMP Applications -- 5.1. Feasibility Study -- 5.1.1. Feasibility Study Planning -- 5.1.1.1. Study Preparation -- 5.1.1.2. Participant Selection -- 5.1.2. Feasibility Study Execution -- 5.1.3. Feasibility Study Results -- 5.1.4. Threats to Validity -- 5.2. Usability Profiles for Evaluated Processes and Practices -- 5.3. Conclusions
Chapter 6. UMP Iterative Refinement -- 6.1. Focus Group Study -- 6.1.1. Focus Group Planning and Design -- 6.1.2. Focus Group Session -- 6.1.3. Focus Group Data Analysis -- 6.1.3.1. Data Analysis of UMP Characteristics -- 6.1.3.2. Data Analysis of UMP Metrics -- 6.1.4. Summary of UMP Changes in Version 3.0 after Focus Group -- 6.1.5. Threats to Validity -- 6.2. Conclusions
Chapter 7. UMP Reliability Evaluation -- 7.1. Inter-rater Reliability and Inter-rater Agreement -- 7.2. Scrum Study -- 7.2.1. Study Design and Statistic Selection -- 7.2.1.1. Context Selection -- 7.2.1.2. Subjects -- 7.2.1.3. Statistic and Variable Selection -- 7.2.1.4. Planning -- 7.2.2. Study Execution -- 7.2.3. Data Analysis -- 7.2.4. Results and Conclusions -- 7.2.5. Threats to Validity -- 7.3. TDD-BDD Study -- 7.3.1. Study Design and Statistic Selection -- 7.3.1.1. Context Selection -- 7.3.1.2. Subjects -- 7.3.1.3. Statistic and Variable Selection -- 7.3.1.4. Planning -- 7.3.2. Study Execution -- 7.3.3. Data Analysis -- 7.3.4. Results and Conclusions -- 7.3.5. Threats to Validity -- 7.4. Conclusions
Chapter 8. UMP Utility Evaluation -- 8.1. VMP Study -- 8.1.1. An Introduction to the VMP -- 8.1.2. Case Study Design -- 8.1.2.1. Context Selection -- 8.1.2.2. Participants -- 8.1.2.3. Design -- 8.1.3. Case Study Execution -- 8.1.4. Data Analysis -- 8.1.5. Results and Conclusions -- 8.1.6. Threats to Validity -- 8.2. BDD Study -- 8.2.1. An Introduction to BDD -- 8.2.2. Field Quasi-experiment Planning -- 8.2.2.1. Context Selection -- 8.2.2.2. Subjects -- 8.2.2.3. Variable Selection -- 8.2.2.4. Hypothesis Formulation -- 8.2.2.5. Design -- 8.2.2.6. Procedure, Materials and Tasks -- 8.2.2.7. Analysis Procedure -- 8.2.3. Field Quasi-experiment Execution -- 8.2.4. Data Analysis -- 8.2.4.1. Descriptive Statistics -- 8.2.4.2. Hypothesis Testing -- 8.2.4.3. Qualitative Data Analysis -- 8.2.5. Results and Conclusions -- 8.2.6. Threats to Validity -- 8.3. Conclusions
Chapter 9. Conclusions and Future Work -- 9.1. Thesis Contributions -- 9.2. Achievement of the Thesis Objective -- 9.2.1. Additional Emergent Results -- 9.3. Future Research Lines -- 9.4. Dissemination of Results -- 9.4.1. Thesis Publications -- 9.4.2. Thesis Publications in Progress -- 9.4.3. Other Related Publications -- 9.4.3.1. HELENA Global Survey on Hybrid Methods -- 9.4.3.2. State of Agile Practice -- 9.4.3.3. Using Feedback to Improve Student Practice
Dissertation note: Thesis (Doctorate in Computer Science) - Universidad Nacional de La Plata, Facultad de Informática, 2020.
Holdings:
Item type: Postgraduate thesis. Home library: Biblioteca de la Facultad de Informática. Call number: TES 20/75. Status: Available. Barcode: DIF-05035.
Item type: Postgraduate thesis. Home library: Biblioteca de la Facultad de Informática. Collection: Biblioteca digital. URL: Link to resource. Barcode: Not applicable.
