CSCI-577A Review

  • Lec1

    What is Software Engineering?

    Software Engineering: “Multi-Person Construction of Multi-Version Programs” [Parnas 1975]

    • Scope
      • Study of software process
      • Development principles, techniques, and notations
    • Goal
      • Production of quality software, delivered on time, within budget, satisfying customers’ requirements and users’ needs

    Ever-Present Difficulties

    • Few guiding scientific principles
    • Few universally applicable methods
    • As much managerial / psychological / sociological as technological

    Why These Difficulties?

    • SE is a unique brand of engineering

    • Software is malleable
    • Software construction is human-intensive (at least “has been”)
    • Software is intangible
    • Software problems are unprecedentedly complex
    • Software directly depends upon the hardware

    Software Engineering ≠ Programming

    • Programming

    • Single developer
    • Simple apps
    • Short lifespan
    • Single or few stakeholders
      • Architect = Developer = Manager = Tester = Customer = User
    • One-of-a-kind systems
    • Built from scratch
    • Minimal maintenance

    • Software engineering

    • Teams of engineers with multiple roles
    • Complex systems
    • Indefinite lifespan
    • Numerous stakeholders
      • Architect ≠ Developer ≠ Manager ≠ Tester ≠ Customer ≠ User
    • System families
    • Reuse to amortize costs
    • Maintenance accounts for over 60% of overall development costs

    Economic and Management Aspects of SE

    • Software production = development + maintenance (evolution)
    • Quicker development is not always preferable
      • Higher up-front investment may reduce downstream costs
      • Poorly designed/implemented software is a critical cost factor

    Mythical Man-Month by Fred Brooks

    • Published in 1975, republished in 1995
    • Experience managing development of IBM OS/360 in 1964-65
    • Central argument
      • Large projects suffer management problems different in kind from small ones, due to the division of labor
      • Critical need is the preservation of the conceptual integrity of the product itself
    • Central conclusions
      • Conceptual integrity achieved through chief architect
      • Implementation achieved through well-managed effort
    • Brooks’s Law
      • Adding personnel to a late project makes it later
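
    Brooks attributes this partly to coordination overhead: pairwise communication channels grow quadratically with team size. An illustrative sketch (not from the lecture):

```python
# Illustration of Brooks's Law: pairwise communication channels grow
# quadratically with team size, so adding people adds coordination
# overhead faster than it adds capacity.

def communication_channels(n: int) -> int:
    """Number of pairwise communication paths in a team of n people."""
    return n * (n - 1) // 2

for n in (3, 5, 10, 20):
    print(n, communication_channels(n))  # 3, 10, 45, 190 channels
```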

    Software Development Lifecycle Model

    Software Development Lifecycle Waterfall Model

    img

    Software Development Lifecycle Spiral Model

    img

    Software Development Lifecycle Incremental Commitment Spiral Model

    img

    Other Development Lifecycle Models

    • Agile: Iterative and incremental development
    • V-Shaped: Waterfall-like but emphasis on testing
    • And many more (Every organization has their own)

    Software Qualities

    Qualities (aka “-ilities”) are goals in the practice of software engineering

    External vs. Internal Qualities

    External qualities are visible to the user (Reliability, efficiency, usability)

    Internal qualities are the concern of developers. They help developers achieve external qualities. (Verifiability, maintainability, extensibility, evolvability, adaptability)

    Product vs. Process Qualities

    Product qualities concern the developed artifacts (Maintainability, understandability, performance)

    Process qualities deal with the development activity. Products are developed through process (Maintainability, productivity, timeliness)

    Some Software Qualities

    Correctness

    • Ideal quality
    • Established regarding the requirements specification

    Reliability

    • Statistical property
    • Probability that software will operate as expected over a given period of time
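
    One standard way to make this concrete (an illustrative model, not stated in the lecture) is the exponential reliability function with a constant failure rate λ, where R(t) = e^(−λt) is the probability of failure-free operation over time t:

```python
import math

def reliability(failure_rate: float, t: float) -> float:
    """R(t) = exp(-lambda * t): probability of no failure in [0, t],
    assuming a constant failure rate (exponential model)."""
    return math.exp(-failure_rate * t)

# e.g., 0.001 failures/hour over a 100-hour mission
r = reliability(0.001, 100)
```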

    Robustness

    • “Reasonable” behavior in unforeseen circumstances
    • Subjective
    • A specified requirement is an issue of correctness; An unspecified requirement is an issue of robustness

    Usability

    • Ability of end-users to easily use software
    • Extremely subjective

    Understandability

    • Ability of developers to easily understand produced artifacts
    • Internal product quality
    • Subjective

    Verifiability

    • Ease of establishing desired properties
    • Performed by formal analysis or testing
    • Internal quality

    Performance

    • Equated with efficiency: given resources, how well the software performs
    • Assessable by measurement, analysis, and simulation

    Evolvability

    • Ability to add or modify functionality
    • Problem: evolution of implementation is too easy
    • Evolution should start at requirements or design

    Reusability

    • Ability to construct new software from existing pieces
    • Occurs at all levels: from people to process, from requirements to code

    Interoperability

    • Ability of software (sub)systems to cooperate with others
    • High interoperability means easy integration into larger systems
    • Common techniques: APIs, plug-in protocols, etc.

    Scalability

    • Ability of a software system to grow in size while maintaining its properties and qualities
    • Assumes maintainability and evolvability
    • Goal of component-based development (e.g., microservices)

    Heterogeneity

    • Ability to compose a system from pieces developed in multiple programming languages, on multiple platforms, by multiple developers, etc.
    • Necessitated by reuse
    • Goal of component-based development

    Portability

    • Ability to execute in new environments with “reasonable” effort
    • May be planned for by isolating environment-dependent components
    • Necessitated by the emergence of highly-distributed systems (e.g., the Internet)
    • An aspect of heterogeneity

    Lec2

    Software Engineering Philosophy

    Software Process Models

    Before Software Process Models: “Code-and-Fix”

    • Free-form working environments
    • Usually for hobby projects or school assignments
    • Lack of structure
    • Uncertain design requirements
    • Incompleteness in its nature
    • Simple programming – simple, repetitive code
    • Indie development – video games, solo developer apps, etc.
    • Hackathons

    Software process: A set of related activities, methods, practices, and rules used to develop, operate, maintain, and improve software and its associated artifacts. Suggests how to handle the following activities:

    • Project planning and oversight
    • Operational concepts and business workflow
    • System & software requirements analysis
    • System & software design
    • Software implementation and unit testing
    • System integration and other testing

    Not just “core development” process. Supporting, cross-cutting processes

    • Software configuration management
    • Peer reviews and product evaluation
    • Quality assurance
    • Corrective action
    • Technical and management reviews
    • Risk management
    • Metrics
    • Subcontractor management
    • Independent Verification & Validation
    • Team coordination
    • Process improvement

    Software development process: The subset of the software process that focuses on building, delivering, and maintaining a software product, including concrete activities, roles, artifacts, and tools used by a specific org or project. (The actual set of software development activities performed within an organization; How a team actually develops software in practice)

    Software dev life cycle (SDLC): A conceptual representation of the stages a software system passes through from inception to retirement. “SDLC is a conceptual framework or process that considers the structure of the stages involved in the development of an application from its initial feasibility study through to its deployment in the field and maintenance.”

    Software Development Life Cycle Model: A framework that defines how the SDLC stages are organized, ordered, overlapped, and iterated, incl. feedback & decision points. Generally used to describe the steps that are followed within a software development life cycle. Popular models: Waterfall models, Spiral models, Iterative and incremental models, Rapid app development models, Agile, XP, Lean Dev, ... There is no one perfect model for all software development; These models are like ready-made clothing; You don’t wear a dressy suit at a baseball game; You don’t wear a swimsuit at a steakhouse.

    img

    Spiral Model (1988)

    • Waterfall model: Focus on front-loaded elaboration

    • Spiral model

    • Risk-driven
    • Complete a round by review
    • Round 0 – Feasibility study
    • Round 1 – Concepts of operations
    • Round 2 – Top-level requirements and specification

    WinWin Spiral Model (1994)

    Use the Theory W (win-win) approach to converge on a system’s next level objectives, constraints, and alternatives.

    Anchor Point Milestones (1996)

    Motivation: the original spiral model lacked intermediate milestones

    Anchor Points:

    • LCO: Life Cycle Objectives
    • LCA: Life Cycle Architecture
    • IOC: Initial Operational Capability
    • Concurrent-engineering spirals between anchor points

    Model-Based Architecting and Software Engineering (MBASE)

    • MBASE integrates product models (software architecture), process models (development lifecycle), and property models (performance, security, ...) together to maintain consistency between them

    The Incremental Commitment Model (ICM) (2007)

    • Incremental commitment-based decision making to ensure projects remain viable throughout their lifecycle

    The Incremental Commitment Spiral Model (ICSM) (2010)

    • Risk assessment determines the scope and direction of each cycle
    • Continuous risk evaluation that leads to more granular control over project evolution
    • More suitable than vanilla ICM for rapidly changing environments and domains

    V-Model

    • Extension of the Waterfall model that pairs each development phase with a corresponding test phase
    • Commonly used in systems and safety-critical engineering
    • Emphasizes early test planning but does not explicitly model concurrent engineering
    • Poor fit for evolutionary or highly iterative development
    • Verification and validation are defined early but executed later

    Double-V Model

    • Show concurrent development
    • Supports system of systems

    V with Multiple Deliveries

    • The “V” is divided into smaller chunks
    • You see multiple Vs
    • A subset of the Vs may “go back” concurrently

    Rational Unified Process (RUP)

    • Iterative software development process framework by Rational (an IBM division)
    • Six Best Practices
      • Develop iteratively
      • Manage requirements
      • Use components
      • Model visually
      • Verify quality
      • Control changes

    OpenUP

    • A streamlined, open-source derivative of RUP
    • A lean Unified Process that applies iterative and incremental approaches within a structured lifecycle

    • Keeps RUP’s core philosophy but removes much of its weight, prescriptiveness, and tooling dependency

    Agile Methodologies

    • Scrum
    • eXtreme Programming (XP)
    • Dynamic System Development Method (DSDM)
    • Lean Development
    • Feature-Driven Development (FDD)
    • Crystal
    • Adaptive Software Development (ASD)

    Lean Principles

    • From Toyota Production System – translated into software development
    • Seven lean principles
      • Eliminate waste – anything that does not add value
      • Amplify learning – continuous update about the project
      • Decide as late as possible – delay decisions, gather more information
      • Deliver as fast as possible – daily deliveries, daily standup meeting
      • Empower the team – get good people, listen, communicate
      • Build integrity in – build good products
      • Optimize the whole - “Think big, act small, fail fast; learn rapidly”

    Scrum Process

    img

    Scaling Agile

    • “Vanilla Agile,” e.g., a single Scrum team, does not scale by itself
    • Agile with structures for scaling: SAFe; LeSS; DA; Nexus

    Large Scale Scrum (LeSS)

    Disciplined Agile (DA) Framework

    Nexus: Agile for Large-Scale Projects

    Scrum vs. ICSM

    img

    XP – eXtreme Programming

    • Agile methodology that pushes core engineering practices to their extreme
    • Frequent release
    • Shorter timebox
    • Frequent communication
    • Expecting requirements changes
    • Drawbacks
      • Unstable requirements
      • Requires strong engineering discipline
      • High customer availability

    XP Principles

    img

    Scrum vs. XP

    • Scrum defines how a team organizes and manages work, while XP defines how the team engineers the software with disciplined technical practices
    • Scrum itself defines only
      • Four ceremonies (or “meetings”)
        • Daily scrum
        • Sprint Planning
        • Sprint Review
        • Sprint Retrospective
      • Three artifacts
        • Product Backlog
        • Sprint Backlog
        • Increment (aka “potentially shippable product”)
      • Three roles
        • Product owner
        • Scrum master
        • Development team members
    • XP has more technical practices

    Reducing Waste in Software Development

    • Three types of waste from Toyota production system

    • Muda (無駄) – non-value added tasks
      • E.g., unnecessary gold plating
      • Avoid Muda by using high planning and coordination
    • Mura (斑) – unevenness or irregularity in workflows
      • E.g., one component being produced more than needed
      • Avoid Mura by using pull-based scheduling (such as Kanban)
    • Muri (無理) – overburdening or failure load on people, equipment, or systems
      • Avoid Muri by balancing workloads and standardizing work
    • Lean concepts influenced XP practices and later software Kanban adoption
      • Elimination of excessive planning
      • Reducing “Red” (baseline tests in TDD)
    • This introduces Kanban (meaning “signboard”) to achieve further elimination of waste

    Kanban

    • Software development framework
    • Provides transparency in workflow
    • Focuses on “managing flow”
    • Limits work-in-progress: complete a feature before starting a new one
    • Iteration and estimate are optional
    • Could be used on top of other processes

    Kanban Concepts

    • Visualize workflow
      • More than work, but interaction and coordination
    • Limit work-in-progress
    • Measure and manage flow
      • Use metrics such as throughput, lead/cycle time, work in progress
    • Make process policies explicit
      • Clear on who is doing what and when
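
    The flow metrics above are tied together by Little's Law: average WIP = throughput × average lead time. A minimal sketch with illustrative numbers:

```python
# Little's Law relates the Kanban flow metrics: on average,
# WIP = throughput * lead time, so lead time = WIP / throughput.

def average_lead_time(wip: float, throughput: float) -> float:
    """Average lead time in days, given WIP (items) and
    throughput (items completed per day)."""
    return wip / throughput

# 12 cards in progress, team finishes 3 cards/day -> 4-day lead time;
# halving the WIP limit halves the average lead time.
lt = average_lead_time(12, 3)
```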

    Kanban: Visualize Workflow & Limit WIP

    • Observe workflow
      • What is happening?
      • Where is the bottleneck?
    • Check performance
      • Lead/cycle times, throughput, WIP trends
    • Identify improvement opportunities
      • What can be improved here? Bottlenecks, variability, waste
    • Craftsmanship & leadership
      • Improve the process and use performance data as evidence to support improvements

    Value-based software engineering

    What is Engineering?

    • The science, skill, and profession of acquiring and applying scientific, economic, social, and practical knowledge, in order to design and also build structures, machines, devices, systems, materials and processes

    What is Software Engineering?

    • The application of a systematic, disciplined, quantifiable approach to the design, development, operation, and maintenance of software, and the study of these approaches; that is, the application of engineering to software

    What is Value?

    • The regard that something is held to deserve; the importance or preciousness of something
    • Material/monetary-worth of something
    • The worth of something compared to the price paid or asked for it
    • The usefulness of something considered in respect of a particular purpose
    • Etc.

    What is Value-Based Software Engineering?

    • The goal of software engineering is to create products, services, and processes that add value
    • VBSE brings value considerations to the foreground so that software engineering decisions at all levels can be optimized to meet or reconcile explicit objectives of the involved stakeholders

    Why Should You Care About VBSE?

    • Software has unique internal and external characteristics:
      • Highly flexible and volatile
      • Heavy dependence on collaboration amongst creative and skilled people
      • Necessitates construction and management approaches radically different from traditional engineering
      • Basic engineering principles of discipline, economy, rigor, quality, utility, repeatability and predictability (to some extent) still apply
    • Value considerations affect trade-offs with much more subtlety, severity, and variety as opposed to engineering of tangible products
    • Trade-offs ultimately govern the outcome of the project!
    • VBSE draws attention to these trade-offs
      • Impossible to reason about in value neutral setting

    Who Should Practice VBSE?

    • Just about everyone
      • CTO/CIOs, Product and Project Managers making high-level (value-adding) decisions
      • Process & measurement experts, requirements engineers, business analysts, QA/usability experts, technical leads, etc.
      • Software engineering researchers, educators, and graduate students teaching or studying software processes, evaluating existing and new practices, technologies, methods, or products
    • Basically all academics, managers, practitioners, and students of software engineering who realize that software isn’t created in a void and involves numerous participants

    Theory W: Enterprise Success Theorem

    • It is a management approach in software engineering
      • “Your enterprise will succeed if and only if it makes winners of your success-critical stakeholders”
    • Arguments about “if”:
      • Everyone that counts is a winner
      • Nobody significant is left to complain
    • Arguments about “only if”:
      • Nobody wants to lose
      • Prospective losers will refuse to participate, or will counterattack
      • The usual result is lose-lose

    Theory W: WinWin Achievement Theorem

    • Making winners of your success-critical stakeholders (SCSs) requires:

    • Identifying all of the SCSs
    • Understanding how the SCSs want to win
    • Having the SCSs negotiate a win-win set of product and process plans
    • Controlling progress toward SCS win-win realization, including adaptation to change

    VBSE Theory: 4+1 (Steps)

    img

    VBSE Agenda

    • Objective

    • Integrating value considerations into the full range of existing & emerging software engineering principles in a manner so that they “compatibly” reinforce one another

    • VBSE seven elements (refer to the VBSE book* for details)

    • Benefits Realization Analysis
    • Stakeholder Value Proposition Elicitation & Reconciliation
    • Business Case Analysis
    • Continuous Risk and Opportunity Management
    • Concurrent System & Software Engineering
    • Value-Based Monitoring & Control
    • Change as Opportunity

    Documenting What You Know

    • High concurrency/backtracking when practicing the VBSE 4+1 Model
    • “Tacit Knowledge” generated amongst team members
    • How do the other SCSs know what it is you know and whether you really know what you claim to know?
    • In terms of the CSCI-577a team project:
      • How do the teaching staff know what you know?
      • By documenting the findings/solutions in the appropriate artifacts and validating them during the periodic grading

    Incremental Commitment Spiral Model

    Original Spiral and Misinterpretations

    • Common misconceptions

    • Hack some prototypes
    • Fit spiral into waterfall
    • Incremental waterfalls
    • Suppress risk analysis
    • No concurrency, feedback
    • One-size-fits-all model

    The Four ICSM Principles

    • Stakeholder value-based guidance
    • Incremental commitment and accountability
    • Concurrent multidiscipline engineering
    • Evidence- and risk-based decisions

    Principle 1: Stakeholder Value-Based Guidance (“W”)

    • It is a management approach in software engineering
      • “Your enterprise will succeed if and only if it makes winners of your success- critical stakeholders”
    • Arguments about “if”:
      • Everyone that counts is a winner
      • Nobody significant is left to complain
    • Arguments about “only if”:
      • Nobody wants to lose
      • Prospective losers will refuse to participate, or will counterattack
      • The usual result is lose-lose

    Win-lose Generally Becomes Lose-lose

    Principle 2: Incremental Commitment and Accountability

    • Total Commitment: Roulette
      • Put your chips on a number
        • E.g., a value of a key performance parameter
      • Wait and see if you win or lose
    • Incremental Commitment: Poker, Blackjack
      • Put some chips in
      • See your cards, some of others’ cards
      • Decide whether, how much to commit to proceed
      • You can fold if you don’t think you will win

    Principle 3: Concurrent Multidiscipline Engineering

    Principle 4: Evidence- and Risk-Based Decisions

    • The decision criteria for a “go/no-go” commitment
    • Evidence provided by developer and validated by independent experts that:
      • If the system is built to the specified architecture, it will
      • Satisfy the requirements: capability, interfaces, level of service, and evolution
      • Support the operational concept
      • Be buildable within the budgets and schedules in the plan
      • Generate a viable return on investment
      • Generate satisfactory outcomes for all of the success-critical stakeholders
    • All major risks resolved or covered by risk management plans
    • Serves as basis for stakeholders’ commitment to proceed
      • “Are we ready to commit to go to the next phase?”

    ICSM Meta-Principle: Risk Balancing

    • How much is enough?

    • System scoping, planning, prototyping, COTS evaluation, requirements detail, spare capacity, fault tolerance, safety, security, environmental protection, documenting, configuration management, quality assurance, peer reviewing, testing, use of formal methods, and feasibility evidence

    • Answer

    • Balancing the risk of doing too little and the risk of doing too much will generally find a middle-course sweet spot that is about the best you can do

    Why the Incremental Commitment Spiral Model (ICSM)?

    img

    Agile: Introduction

    Agile Concepts

    What is “Agile?”

    • It is a software development umbrella term and guiding philosophy that comes with a set of principles
    • Agile development promotes: Adaptive planning; Evolutionary development and delivery; Time-boxed iterative approach; Rapid and flexible response to change

    Manifesto for Agile Software Development

    We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

    • Individuals and interactions over processes and tools
    • Working software over comprehensive documentation
    • Customer collaboration over contract negotiation
    • Responding to change over following a plan

    That is, while there is value in the items on the right, we value the items on the left more.

    The 12 Principles of Agile Software Development

    • Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
    • Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
    • Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
    • Business people and developers must work together daily throughout the project.
    • Build projects around motivated individuals. Give them the environment and support they need, and trust
    • The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
    • Working software is the primary measure of progress.
    • Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
    • Continuous attention to technical excellence and good design enhances agility.
    • Simplicity, the art of maximizing the amount of work not done, is essential.
    • The best architectures, requirements, and designs emerge from self-organizing teams.
    • At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

    Agile Methodologies

    • Scrum
    • eXtreme Programming (XP)
    • Dynamic System Development Method (DSDM)
    • Lean Development
    • Feature-Driven Development (FDD)
    • Crystal
    • Adaptive Software Development (ASD)

    Scrum Team

    • Product owner
      • The champions for their product
      • Understanding of business, customer, market requirements
    • Scrum master
      • The champions of scrum within their teams
      • Coaching teams, product owners, and the business on the scrum process
    • Dev team
      • The champions for sustainable development practices
      • The ones who make things work
      • Usually five to seven members

    Sprint Planning

    • The Scrum team holds Sprint Planning

    • Agree upon sprint goal. What the upcoming sprint is supposed to achieve

    • The development team reviews product backlog and determines the high priority items that the team can accomplish in a sprint
    • With sustainable pace

    • A pace at which the team can comfortably work for an extended period of time

    Product Backlog

    • List of requirements (user stories)
    • Priority level
    • May include estimation (story points)
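
    The selection step in sprint planning can be sketched as follows. This is a hypothetical illustration (the backlog items, story names, and the greedy fill strategy are assumptions, not from the lecture): pick the highest-priority stories whose story points fit within the team's velocity, i.e., its sustainable pace.

```python
# Hypothetical sketch: filling a sprint backlog from a prioritized
# product backlog, capped by the team's velocity (story points per
# sprint). Items and numbers below are illustrative only.

def plan_sprint(backlog, velocity):
    """backlog: list of (priority, points, story), where a lower
    priority number means more important. Returns the stories that
    fit within the remaining capacity, in priority order."""
    selected, capacity = [], velocity
    for priority, points, story in sorted(backlog):
        if points <= capacity:
            selected.append(story)
            capacity -= points
    return selected

backlog = [(1, 5, "login"), (2, 8, "search"), (3, 3, "export")]
print(plan_sprint(backlog, 10))  # ['login', 'export'] - "search" doesn't fit
```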

    Sprint Review

    • Inspect the product that is being built
    • Participants: Scrum team, stakeholders, sponsors, customers
    • Focus on reviewing the just-completed features or underlying architecture
    • Bi-directional info flow

    Sprint Retrospective

    • Occurs after the sprint review and before the next sprint planning
    • Focus on inspecting and adapting the process
    • Team, Scrum Master, and Product Owner discuss what is not working
    • Focus on continuous improvement
    • Identify process improvement actions
    • 15 - 30 minutes, up to three hours

    Increment (“Potentially Shippable Product”) vs. Done

    • Sprint results = Increment (aka “potentially shippable product”)
    • Use the “definition of done” (DoD): check whether the increment is “done” against the DoD
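
    A DoD is essentially an explicit checklist applied to every increment. A minimal sketch (the criteria below are hypothetical examples, not a prescribed DoD):

```python
# Illustrative sketch: a "definition of done" as an explicit checklist.
# The criteria listed here are hypothetical examples.

DEFINITION_OF_DONE = ("code reviewed", "tests pass", "documentation updated")

def is_done(completed_criteria) -> bool:
    """An increment counts as 'done' only if every DoD criterion is met."""
    return all(c in completed_criteria for c in DEFINITION_OF_DONE)

is_done({"code reviewed", "tests pass"})                           # not done
is_done({"code reviewed", "tests pass", "documentation updated"})  # done
```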

    Scrum vs. ICSM

    img

    XP – eXtreme Programming

    • Agile methodology that pushes core engineering practices to their extreme
    • Frequent release
    • Shorter timebox
    • Frequent communication
    • Expecting requirements changes
    • Drawbacks
      • Unstable requirements
      • Requires strong engineering discipline
      • High customer availability

    Risks for Agile Software Development: Development and Deployment risks

    img

    Risks for Agile Software Development: Project management risks

    img

    Agile Antipatterns

    What is antipattern?

    • “[A] common response to a recurring problem that is usually ineffective and risks being highly counterproductive.”

    Common Software Engineering Anti-Patterns

    • Brooks’s law

    Adding more resources to a project to increase velocity, when the project is already slowed down by coordination overhead

    • Feature creep

    Uncontrolled changes or continuous growth in a project's scope, or adding new features to the project after the original requirements have been drafted and accepted

    • Gold plating

    Continuing to work on a task or project well past the point at which extra effort is not adding value

    • God object: concentrating too many functions in a single part of the design (class)
    • Dead code: code that is there but no one is sure why
    • Premature optimization: optimizing early and assuming the code is efficient
    • Dependency hell: conflicts among dependency version requirements
    • Soft code
      • Storing business logic or values in external resources such as configuration files, macros, or command-line arguments rather than in source code; abstracting too many values and features leads to complexity and maintenance issues

    Anti-Patterns on New Agile Projects

    img
    img
    img

    Lec3

    Software Analysis and Planning

    Risk analysis

    Definition:

    • “possibility of loss or injury” [Merriam-Webster]
    • Risk is not a problem – it may or may not be realized into a problem
    • Risks involve uncertainties
      • And can be dealt with proactively
      • If the risk is realized, it becomes a problem and is no longer a risk; every such problem was earlier a risk
    • Risk Exposure = Probability of loss × Size of loss: RE = P(L) × S(L)
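
    The RE formula can be sketched directly. Risk reduction leverage (RRL), a companion metric commonly taught alongside RE, compares mitigation options by the exposure reduction they buy per dollar; the dollar figures below are illustrative:

```python
def risk_exposure(p_loss: float, size_of_loss: float) -> float:
    """RE = P(L) * S(L)."""
    return p_loss * size_of_loss

def risk_reduction_leverage(re_before, re_after, mitigation_cost):
    """RRL = (RE_before - RE_after) / cost of mitigation.
    RRL > 1 means the mitigation reduces exposure by more than it costs."""
    return (re_before - re_after) / mitigation_cost

# 30% chance of a $200K loss -> RE = $60K
re = risk_exposure(0.30, 200_000)
# A $10K prototype (buying information) cutting the probability to 5%
# drops RE to $10K, so RRL = (60K - 10K) / 10K = 5
rrl = risk_reduction_leverage(re, risk_exposure(0.05, 200_000), 10_000)
```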

    Risk Management

    • Buying information (reduce uncertainty)
    • Risk avoidance (steer around the risky path)
    • Risk transfer (e.g., insurance or subcontracting)
    • Risk reduction (act to lower the probability or impact)
    • Risk acceptance

    Is Risk Management Fundamentally Negative?

    • It usually is, but it shouldn’t be
    • As illustrated in the Risk Acceptance strategy, it is equivalent to Opportunity Management. Opportunity Exposure OE = P(Gain) * S(Gain) = Expected Value
    • Buying information and the other risk management strategies have their opportunity counterparts
      • P(Gain): Are we likely to get there before the competition?
      • S(Gain): How big is the market for the solution?

    Cost estimation

    What is cost estimation?

    • Prediction of both the person-effort and elapsed time of a project

    Known methods:

    • Algorithmic
    • Expert judgement
    • Estimation by analogy
    • Top-down
    • Bottom-up
    • And many others

    Best approach is a combination of methods

    • Continually compare and iterate estimates, reconciling differences

    • COCOMO (original from 1981)
      • The “COnstructive COst MOdel”
      • Derived from empirical data

    • COCOMO II (released in 2000) is the update to COCOMO 1981

    • COCOMO III in the works

    • COCOMO is the best-known, most thoroughly documented and calibrated cost model

    Software Estimation Accuracy

    img

    Top-Down and Bottom-Up Approaches

    img

    COCOMO II

    img

    COCOMO Effort Formulation

    img

    COCOMO Schedule Formulation

    img
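
    The two formulations can be sketched together. This is a sketch assuming the published COCOMO II.2000 post-architecture calibration constants (A = 2.94, B = 0.91, C = 3.67, D = 0.28); the scale-factor and multiplier values in the example are illustrative, not a real project's ratings:

```python
import math

# Sketch of the COCOMO II post-architecture effort and schedule
# formulas, assuming the published COCOMO II.2000 calibration
# constants. Driver values in the example are illustrative.

A, B, C, D = 2.94, 0.91, 3.67, 0.28

def effort_pm(ksloc, scale_factors, effort_multipliers):
    """Person-months: PM = A * Size^E * product(EM),
    where E = B + 0.01 * sum(scale factors)."""
    E = B + 0.01 * sum(scale_factors)
    return A * ksloc ** E * math.prod(effort_multipliers)

def schedule_months(pm, scale_factors):
    """Calendar months: TDEV = C * PM^F,
    where F = D + 0.2 * 0.01 * sum(scale factors)."""
    F = D + 0.2 * 0.01 * sum(scale_factors)
    return C * pm ** F

# Example: 100 KSLOC, nominal scale-factor total of 18.97, all effort
# multipliers nominal (1.0)
pm = effort_pm(100, [18.97], [1.0])
months = schedule_months(pm, [18.97])
```

    Note how scale drivers sit in the exponent (exponential effort variation) while cost-driver effort multipliers scale the result linearly, matching the distinction drawn under Cost Factors.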

    Cost Factors

    Significant factors of development cost:

    • Scale drivers are sources of exponential effort variation
    • Cost drivers are sources of linear effort variation. Product, platform, personnel and project attributes
    • Effort multipliers associated with cost driver ratings

    Defined to be as objective as possible

    Each factor is rated between very low and very high per rating guidelines

    Relevant effort multipliers adjust the cost up or down

    img
    img

    Technical debt

    What Are the Costs of Software?

    A financial estimate whose purpose is to help consumers and enterprise managers determine the direct and indirect costs of a product or system

    • Including the costs to research, develop, acquire, own, operate, maintain, and dispose of a system

    What is Technical Debt?

    Taking shortcuts to speed up short-term delivery, at the cost of increased future maintenance.

    Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite ... The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented, or otherwise.

    Major Causes of Technical Debt

    • Conspiracy of Optimism
    • Business pressures
    • Easiest-first; neglecting rainy day use cases
    • Delayed Refactoring
    • Neglect of ICSM Principles
      • Stakeholder value-based guidance
      • Incremental commitment and accountability
      • Concurrent system engineering
      • Evidence and risk-driven decisions
    img
    img

    Fixing Technical Debt

    • Big Bang
      • No new features for a month/year?
      • Spend some time cleaning up the mess
    • Dedicated Team
      • Have a separate team dedicated to paying down the debt
    • Boy Scout
      • Remove technical debt little and often
      • If there are no tests, add some. If tests are poor, improve them. If code is bad, refactor it
      • The Boy Scout rule: leave the campground cleaner than you found it
    • Use technical debt tools: SONAR, CAST, SQALE

    The Bottom Line

    • Creating technical debt may be a good practice
      • Meeting market windows
      • Prototyping to determine user needs, satisfaction
      • Short-term fixes, targets of opportunity
    • But needs pay-down later
      • To avoid mounting debt and interest
    • Need balanced investment in fixes, new features

    Architecture-Based Software Engineering

    img

    What is Software Architecture?

    Definition: A software system's architecture is the set of principal design decisions about the system

    • Software architecture is the blueprint for a software system’s construction and evolution
    • A conceptual essence of a software system
    • A living thing that evolves over the lifecycle of a software system
    • Three fundamental understandings
      • Every application has an architecture
      • Every application has at least one architect
      • Architecture is not a phase of development

    Typical Architectural Styles and Patterns

    • Pipe-and-Filter: suited to data-stream processing systems.
    • Layered: e.g., the OSI network model; layers communicate through interfaces.
    • Client-Server: computation is distributed between service providers and requesters.
    • Model-View-Controller (MVC): separates business logic, data, and presentation.
    • Microservices: builds a system from a set of small, independently deployable services.
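To make the Pipe-and-Filter style concrete, here is a minimal sketch (not from the lecture) using Python generators: each filter consumes an upstream data stream and yields a transformed one, and filters compose freely into a pipeline.

```python
# Minimal pipe-and-filter sketch: filters are independent, composable stream
# transformers; the configuration is just the order in which they are wired.

def source(lines):
    yield from lines                      # data source: emits the raw stream

def strip_blank(stream):
    return (s for s in stream if s.strip())   # filter 1: drop empty lines

def to_upper(stream):
    return (s.upper() for s in stream)        # filter 2: normalize case

# Pipeline configuration: source -> strip_blank -> to_upper -> sink
pipeline = to_upper(strip_blank(source(["hello", "", "world"])))
print(list(pipeline))   # → ['HELLO', 'WORLD']
```

Because each filter only depends on the stream interface, filters can be reordered, replaced, or added without touching the others, which is the quality the style is chosen for.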

    What is “Principal?”

    “Principal” implies a degree of importance that grants a design decision “architectural status”

    • Not all design decisions are architectural
    • Not all design decisions necessarily impact a system’s architecture
    • Example of a non-principal design decision: “Do I use an array or a linked list for this data?”

    How one defines “principal” will depend on what the stakeholders define as the system goals

    Software Model and Modeling

    • Definition: An architectural model is an artifact that captures some or all of the design decisions that comprise a system’s architecture.
    • Definition: Architectural modeling is the reification and documentation of those design decisions.

    Software Architecture’s Elements

    • A software system’s architecture typically is not (and should not be) a uniform monolith
    • A software system’s architecture should be a composition and interplay of different elements
      • Processing
      • Data, aka information or state
      • Interaction
    • The above are encapsulated into the following elements:
      • Components
      • Connectors
      • Configurations

    img
    img
    img

    Architectural Style

    • Definition: An architectural style is a named collection of architectural design decisions that
      • are applicable in a given development context,
      • constrain architectural design decisions that are specific to a particular system within that context, and
      • elicit beneficial qualities in each resulting system.
    • Reflect less domain specificity than architectural patterns
    • Example architectural styles: Object-Oriented, Layered, Client-Server, Data-Flow, Batch-Sequential, Pipe and Filter, Blackboard, Microservices, Peer-to-Peer, ...

    Architectural Pattern

    • Definition: An architectural pattern is a named collection of architectural design decisions that are applicable to a recurring design problem, parameterized to account for different software development contexts in which that problem appears.
    • Applied at a low level and within a narrow scope
    • Example architectural patterns: State-Logic-Display (aka Three-Tier), Model-View-Controller (for GUI), Sense-Compute-Control (aka Sensor-Controller-Actuator), ...

    What Is Architectural Analysis?

    • Architectural analysis is the activity of discovering important system properties using the system’s architectural models
      • Early, useful answers about relevant architectural aspects
      • Available prior to system’s construction
    • Important to know: which questions to ask, why to ask them, how to ask them, and how to ensure that they can be answered
    • Note: Not all models will be equally effective in helping to determine whether a given architecture satisfies a certain requirement
      • Informal and formal models

    Architectural Analysis Goals

    • The four “C”s
      • Completeness: “Is the architecture complete?”
      • Consistency: ensures that different model elements do not contradict one another
      • Compatibility: ensures that the architectural model adheres to the guidelines and constraints of (1) a style, (2) a reference architecture, and/or (3) an architectural standard
      • Correctness: ensures that (1) the architectural model fully realizes a system specification and (2) the system’s implementation fully realizes the architecture

    img
    img

    Prototyping and Testing

    • Why do we do prototyping and testing?
    • To reduce risk
      • Help minimize risk by uncovering potential problems before the final product is released
    • To seek feedback
      • Prototyping seeks user or stakeholder feedback on design ideas and features
      • Testing obtains feedback about the correctness, performance, and reliability
    • To inform development decisions
      • Prototyping informs decisions on product scope and usability
      • Testing informs decisions on whether the development is ready to advance

    What is a Prototype?

    • In software engineering, it means “an early sample, model, or release of a product”

    Why Prototype?

    • “Buying information” – risk management
      • Inexpensive, so it’s okay to fail early
    • To gather more accurate requirements
    • To better understand the underlying technicalities (e.g., the algorithm)
    • To explore feasibility for estimation and planning
    • To increase user’s (and other stakeholders’) involvement
    • To resolve conflicts among stakeholders
    • ...

    Types of Prototype

    • Throwaway prototyping
      • Also known as closed-ended prototyping or rapid prototyping
      • Informal, paper prototyping, storyboard, or UI (click dummy)
    • Evolutionary prototyping
      • Build a robust prototype and constantly refine it
      • “First draft,” “second draft,” “third draft,” ....
    • Incremental prototyping
      • “Building block”
      • Add/integrate new component, when all is in place, the solution is complete

    Dimensions of Prototypes

    • Horizontal
      • Broad view of the entire system, focusing on user interaction, but no underlying functionality
    • Vertical
      • Complete elaboration of a single subsystem or function (in-depth functionality, but only for a selected few features)

    img
    img

    What Is “Testing?”

    img
    img

    Terminology: Verification vs Validation

    • Verification: “Are we building the product right?”
      • Demonstrating correctness, completeness, and consistency of the software artifacts
      • To what degree the implementation is consistent with its (formal or semi-formal) specification
    • Validation: “Are we building the right product?”
      • To what degree the software fulfills its (informal) requirements
      • The software should do what the user really requires
    img
    img
    img

    Black Box Testing

    • Generate test cases based on a specification of the software
    • Code is ignored: only use specification documents to develop test cases
    • In the presence of formal specifications, it can be automated
    • Techniques:
      • Equivalence partitioning
      • Boundary conditions
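As a hedged illustration of both techniques, assume a hypothetical specification “a score is valid iff 0 ≤ x ≤ 100” (the function and values below are invented for this example). Equivalence partitioning picks one representative per input class; boundary-condition testing adds values at and just beyond the edges, where defects cluster.

```python
# Black-box test selection for a hypothetical spec: valid iff 0 <= x <= 100.
# The code under test is consulted only through its specification.

def is_valid_score(x):
    return 0 <= x <= 100

cases = {
    -5:  False,   # equivalence class: below the valid range
    50:  True,    # equivalence class: inside the valid range
    200: False,   # equivalence class: above the valid range
    0:   True,    # boundary: lower edge
    100: True,    # boundary: upper edge
    -1:  False,   # boundary: just below the lower edge
    101: False,   # boundary: just above the upper edge
}
for value, expected in cases.items():
    assert is_valid_score(value) == expected
print("all black-box cases pass")
```

Seven cases cover all three partitions and all four boundary neighbors; adding more values from inside a partition would add cost without adding information.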
    img
    img

    White Box Testing

    • Selection of test suite is based on some elements in the code
    • Testing based on control-flow and data-flow criteria
    • Helps to identify design weaknesses in the code
    • Example of white-box testing criteria:
      • Control flow (statement, branch, loop)
      • Condition (basic, multiple)
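A minimal sketch of the branch criterion (the example function is invented for illustration): the function below has one decision with two outcomes, so a branch-adequate test suite needs at least one test driving each outcome.

```python
# White-box branch coverage: every branch outcome must be exercised.

def abs_diff(a, b):
    if a >= b:          # branch outcome 1: condition true
        return a - b
    else:               # branch outcome 2: condition false
        return b - a

# Two tests achieve 100% branch coverage; either one alone would leave
# a branch untested (only 50% branch coverage).
assert abs_diff(5, 3) == 2   # covers a >= b
assert abs_diff(3, 5) == 2   # covers a < b
print("branch-adequate suite passes")
```

Note the contrast with black-box selection: here the tests were chosen by reading the control flow of the code, not its specification.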
    img
    img

    Wrapping Up: Testing

    Q: When do we stop testing?

    A: When the chosen aspects of the program are covered according to the selected criterion

    Q: What about test selection criteria?

    A: We certainly want a criterion that gives test suites that improve confidence

    Use both types of criteria, as they are complementary

    • Black-box testing → based on specification
    • White-box testing → based on the code

    Q: How to generate test cases?

    A: Look at the test requirement of the selected criterion and derive your test cases based on it

    Q: How to check the output of a test?

    A: Compare it with the expected output (the correct result given by the oracle) to determine whether the test passed or failed
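A minimal sketch of that oracle check (the helper name is invented for illustration): the oracle supplies the expected output, and the verdict is just an equality comparison against the actual output.

```python
# Oracle-based pass/fail check: the oracle's expected value is the reference.

def run_test(func, test_input, expected):
    actual = func(test_input)
    return "PASS" if actual == expected else f"FAIL: got {actual!r}"

print(run_test(str.upper, "abc", "ABC"))   # → PASS
```

Real oracles are often the hard part: for many programs the “correct result” is expensive or impossible to compute independently, which is why specifications, reference implementations, or property checks stand in for exact expected values.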

    Q: Thoroughness of testing?

    A: Can be measured in terms of the percentage of “coverable” items (as defined by the criteria) that are covered by the test suite

    • Examples

    • Execution of statements
    • Execution of paths
    • All customer requirements
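In code, this thoroughness measure is simply a ratio over the coverable items defined by the criterion (the item IDs below are hypothetical placeholders for statements, paths, or requirements):

```python
# Thoroughness = percentage of coverable items exercised by the test suite.

coverable = {"s1", "s2", "s3", "s4", "s5"}   # items defined by the criterion
covered = {"s1", "s2", "s3"}                 # items the suite actually hit

coverage = 100 * len(covered & coverable) / len(coverable)
print(f"{coverage:.0f}% statement coverage")   # → 60% statement coverage
```

The same formula applies to any criterion; only the definition of “coverable item” changes (statements, branches, paths, or customer requirements).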

    Testing Levels

    img
    img
    img
    img

    Quality of Service

    img
    img
    img
    img
    img

    DevOps

    • Definition: “DevOps is a set of practices intended to reduce the time between committing a change to a system and the change being placed into normal production, while ensuring high quality.”
    • DevOps combines development (Dev) and operations (Ops) to increase the efficiency, speed, and security of software development and delivery
    • While DevOps aligns well with the Agile principles, any software development process can adopt DevOps practices

    Why DevOps?

    • Align stakeholders’ goals and directions
    • Business, operations, development, security, architecture, compliance, ...
    • Benefits
      • Faster lead time
      • Fewer defects
      • Faster recovery
      • More frequent deployments
    img
    img
    img
    img
    img
    img
    img
    img
    img
    img
    img
    img

    Lec4

    Software Maintenance

    Software maintenance definition and context

    img

    The Iron Law of Software Maintenance

    • Useful software systems will spend twice as much on software maintenance as they did for development

    Software Maintenance Categories

    • Preventive maintenance: Modification of a software product after delivery to detect and correct latent faults in the software product before they become effective faults
    • Corrective maintenance: Reactive modification of a software product performed after delivery to correct discovered problems
    • Perfective maintenance: Modification of a software product after delivery to improve performance or maintainability
    • Adaptive maintenance: Modification of a software product performed after delivery to keep a software product usable in a changed or changing environment

    Designing Maintainable Products

    • Modularize around sources of change
    • Risk-manage choices of COTS products and cloud services
      • Fewer is generally better
      • Compatible, stable, well supported, refreshed
    • Deliver maintenance & diagnostic software
      • Debug aids, test tools and cases, CM support
      • Version control: having many versions to maintain is costly
    • Follow good code and document standards

    Software Maintenance in the AI Era

    • AI is transforming maintenance, but not replacing the need for it
      • AI boosts throughput (+7.5% to +21.8% PRs/week in field trials)
      • Code churn (revisions in 2 wks) is projected to 2x; shows maintenance burden
      • AI adoption has shown a negative relationship with delivery stability
      • Acceleration without robust testing and feedback loops creates instability
    • AI-generated code often violates the DRY principle and lacks architectural awareness, creating long-term maintainability challenges
    • What students can do to adapt
      • Treat AI output as a draft, not final code; always review for architectural fit and coding standards
      • Invest in automated testing (unit/integration/E2E) and CI/CD pipelines to catch regressions early
      • Practice reading unfamiliar code; AI adds code that you did not write but you will need to maintain

    Software Engineering Best Practices

    The triple constraint of project management

    • Time
    • Scope
    • Cost

    Best practices for software development teams (from RUP)

    Rational Unified Process: Best Practices for Software Development Teams

    • Proven approaches to software development
    • “Best Practices” – not so much because you can precisely quantify their value, but rather, because they are observed to be commonly used in industry by successful organizations
      • Develop software iteratively
      • Manage requirements
      • Use component-based architectures
      • Visually model software
      • Verify software quality
      • Control changes to software

    Best Practices 1: Develop Software Iteratively

    Best Practices 2: Manage Requirements

    Best Practices 3: Use Component-Based Architectures

    Best Practices 4: Visually Model Software

    Best Practices 5: Verify Software Quality

    Best Practices 6: Control Changes to Software

    Common Mistakes in Software Engineering

    Classic Mistakes

    • Mistakes about:
      • People
      • Process
      • Product
      • Technology
    • They cause software development projects to fail
    • Avoid making these mistakes to succeed in your project

    Classic Mistakes: People

    • Failure to take action to deal with a problem employee
    • Adding people to a late project
    • Not listening to user input
    • Undermined team member motivation
    • Weak individual capabilities among team members

    Classic Mistakes: Process

    • Wasting time early in the project
      • Fuzzy early scheduling, approval, and budgeting, followed by an aggressive schedule later
    • Human tendency to underestimate and produce overly optimistic schedules
    • Poor time estimates
    • Insufficient risk management
      • Lack of sponsorship, changes in stakeholder commitment, scope creep, and contractor failure

    • Risks from outsourcing and offshoring
      • QA, interfaces, unstable requirements
      • Increased communication and coordination cost
    • Not sharing knowledge with stakeholders
    • Being unable to evaluate mistakes
      • Skipping Sprint reviews, retro, QA
      • Limited code review
      • Not enough testing

    Classic Mistakes: Product

    • Requirements gold-plating
      • Unnecessary product size and/or characteristics
    • Developer gold-plating
      • Developers trying out new and hot technology / features
    • Feature creep
      • Continuous/uncontrolled expansion of features beyond the original scope
    • Relying on temporary solutions
      • Quick fix, mounting technical debt, poor code quality
    • Not considering coding standards, secure coding, cybersecurity threats
    • Useless, outdated, or unrelated documentation
      • E.g., /* NO COMMENTS */

    Classic Mistakes: Technology

    • Silver-bullet syndrome
      • Expect new technology to solve all problems
      • AI, LLM, vibe coding, etc.
    • Not staying current with technology
    • Overestimated savings from new tools or methods
      • Did not account for learning curve and unknown unknowns
      • “AI will make all engineers obsolete!”
    • Switching tools in the middle of a project
      • Version upgrade
      • E.g., operating system update on your MVP demo day

    “Laws” of Software Engineering

    • Daniel Bernoulli’s Principle
      • “Velocity is greatest where density is least.”
    • Barry Boehm’s Law
      • “Errors are more frequent during requirements and design activities and are more expensive the later they are removed.”
    • Fred Brooks’s Law
      • “Adding people to a late software project makes it later.”
    • Melvin Conway’s Law
      • “Any piece of software reflects the organizational structure that produced it.”
    • Phil Crosby’s Law
      • “Quality is free.”
    • Dan Galorath’s Law
      • “Projects that get behind stay behind.”
    • Watts Humphrey’s Law
      • “Users do not know what they want a software system to do until they see it working.”
    • Caper Jones’s Law
      • “Every form of defect removal activity has a characteristic efficiency level, or percentage of bugs actually detected.”
    • Goodhart’s law
      • “When a measure becomes a target, it ceases to be a good measure.”
    • Hofstadter’s Law
      • "It always takes longer than you expect, even when you take into account Hofstadter's Law.”
    • Donald Knuth’s optimization principle
      • “Premature optimization is the root of all evil.”
    • Moore’s Law
      • “The processing speed of computers will double every two years.”
    • Parkinson’s Law
      • “Work expands to fill the time available for completion.”
    • Pareto Principle
      • “More than 80 percent of software bugs will be found in less than 20 percent of software modules.”
    • Ninety-ninety rule, Tom Cargill
      • “The first 90% of the code accounts for the first 90% of the development time. The remaining 10% of the code accounts for the other 90% of the development time.”
    • Facts and Fallacies of Software Engineering
      • “Modification of reused code is particularly error-prone. If more than 20 to 25 percent of a component is to be revised, it is more efficient and effective to rewrite it from scratch.”
    • Facts and Fallacies of Software Engineering
      • “Eighty percent of software work is intellectual. A fair amount of it is creative. Little of it is clerical.”
    • Facts and Fallacies of Software Engineering
      • “Missing requirements are the hardest requirements errors to correct.”
    • Facts and Fallacies of Software Engineering
      • “Rigorous inspections can remove up to 90 percent of errors from a software product before the first test case is run.”
    • Barry Boehm’s Law
      • “Prototyping significantly reduces requirements and design errors, especially for user errors.”

    Software Engineering Ethics

    Definitions and context

    • Power to do public harm or good
    • ACM/IEEE Software Engineering Code of Ethics
    • SW engineers will have vastly increasing power to do public harm or good
      • System safety and security
      • Intellectual property
      • Privacy
      • Quality of work
      • Fairness
      • Liability
      • Risk disclosure
      • Conflict of interest
      • Unauthorized access
    • SW engineers today can now wield AI as a force multiplier

    Principles and examples

    • Rawls’ Theory of Justice: Aims to constitute a system to ensure the fair distribution of primary social goods

    Conclusions

    • SW engineers will have vastly increasing power to do public harm/good
    • Rawls’ Theory of Justice enables constructive approach for integrating ethics into daily software engineering practice
      • E.g., ICSM Stakeholder win-win with least-advantaged system dependents as success-critical stakeholders
    • Responsibility challenges for your future careers
      • Global threats, nanosecond decisions
      • Coping with AI, Internet of Things, DevOps, Systems of Systems
      • Try to build in reversibility of bad decisions

    Future of Software Engineering

    “Software Engineering” Definition

    • Based on Webster definition of “engineering” : The application of science and mathematics by which the properties of software are made useful to people
    • Many different definitions of the term, including the one from the first lecture: Development of software systems whose size/complexity warrants team(s) of engineers
    • Includes computer science and the sciences of making things useful to people: behavioral sciences, economics, management sciences
    img

    The Future of Systems and Software

    • Eight surprise-free trends
      • Increasing integration of systems engineering and software engineering
      • User and value focus
      • Software criticality and dependability
      • Rapid, accelerating change
      • Distribution, mobility, interoperability, globalization
      • Complex systems of systems
      • COTS, open source, reuse, legacy integration
      • Computational plenty
    • Two wild-card trends
      • Autonomy software
      • AI (including AI-assisted software engineering)