How to write your engineering requirements document (with an 18-section template)

Kai Brooks
13 min read · Jan 27, 2022



I’m an electrical engineer who has brought many products to market, serving as a design engineer, project manager, product owner, and “person on the stage with a big pitch deck behind them.” I think Excel holds back progress, and companies throw billions of dollars into the trash fire that is “someone making a worthless spreadsheet about a thing.” However, I still believe every serious project needs a requirements document.

If you’re just here for a template, it’s at the bottom, though I’d make sure you understand the different sections before using it.

Why are requirements documents even useful?

First, a requirements document is simply “a detailed plan on what we want to build.” It’s the engineering equivalent of writing a business plan before throwing time and money into an idea.

  1. Everybody on the project has precisely the same vision before anyone starts working. You avoid meetings where a manager pipes in like, “wait, that’s not what I thought we agreed on building” a month into development.
  2. Writing a structured requirements document makes you consider parts of the project that you wouldn’t have thought of otherwise (see the next point).
  3. If your project has a critical fault, you will see it in the requirements so you can kill the project before you waste energy on a useless idea.
  4. Requirements make your test plan dead simple to write.
  5. You never have to remember some details that somebody said the project needed (Wait, did we actually decide on that, or were people just talking about it?). Just reference the requirements.

I believe that requirements documents are “living” documents: they go through many iterations as a project advances. For example, you might specify a budget and then, while researching materials, realize the project is impossible to build as listed. Or, integration tests might reveal the need to change a communication protocol.

What do you do if you don’t know if something is possible before you start prototyping?

I run into this often. My method is to write a requirements document that considers, “What are the minimum viable elements of this project such that if we can’t meet them, we shouldn’t pursue the project?”

Here’s an example: When designing a (throttled) electric bicycle, how many miles does the vehicle need to travel on a single charge? Let’s say marketing and business research determines users need at least 20 miles — anything less, and users will hate it and the product doesn’t launch. Therefore, the requirement looks like this:

“System shall travel a minimum of 20 miles on a single full charge when traveling on a flat road without user assistance.”

We only addressed the user need (or business case) in this requirement. We didn’t talk about battery capacity, motor controllers, efficiency, power drain, or technical specifications. This language gives us the flexibility to develop a creative engineering solution.

Now that we have a target to aim for, we can determine various methods of meeting the requirement. Maybe we source a big battery, develop extra-efficient custom motor drivers, use tires with decreased rolling resistance, or pursue other strategies. We can explore various options, using the 20-mile range to measure success.
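The trade space above can be sketched with rough numbers. Everything here is invented for illustration: `range_miles`, the battery sizes, and the consumption figures are assumptions, not real product data.

```python
# Rough flat-road range estimate for the e-bike example.
# All numbers are invented for illustration.
def range_miles(battery_wh: float, wh_per_mile: float) -> float:
    """Estimated range from usable battery energy and average consumption."""
    return battery_wh / wh_per_mile

# Three candidate engineering paths to the same 20-mile "shall" requirement:
options = {
    "bigger battery": range_miles(battery_wh=600, wh_per_mile=25),
    "efficient motor driver": range_miles(battery_wh=450, wh_per_mile=20),
    "low-rolling-resistance tires": range_miles(battery_wh=500, wh_per_mile=22),
}

for name, miles in options.items():
    verdict = "meets" if miles >= 20 else "fails"
    print(f"{name}: {miles:.1f} mi -> {verdict} the 20-mile requirement")
```

Because the requirement specifies the outcome rather than the mechanism, any row in that table satisfies it equally well.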

Why don’t we just specify technical criteria instead of user criteria?

What if, instead of specifying user criteria, we established a technical criterion that was “effectively” the same, such as “The battery shall be 12 Amp-hours”? While we may initially end up with the same 20-mile range, this limits us in many ways:

  1. We no longer have control over as many elements of the design process.
  2. We might not reach a 20-mile range, which was the “actual” goal, depending on the other conditions.
  3. As the project progresses, battery capacity might become less relevant. If some future battery system were twice as efficient, we could use a smaller battery and deliver the same user experience.
  4. We can lose sight of why we specified criteria in the first place. If we developed version 2.0 of this product five years later, would anyone remember why the battery was supposed to be 12 Amp-hours?
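Item 3 in the list above can be made concrete with back-of-the-envelope numbers. The 36 V pack voltage and consumption figures here are invented for illustration:

```python
# Why fixing battery capacity (12 Ah) instead of range (20 mi) ages badly.
# Pack voltage and consumption numbers are invented for illustration.
PACK_VOLTS = 36.0

def est_range_miles(capacity_ah: float, wh_per_mile: float) -> float:
    """Estimated range from battery capacity, pack voltage, and consumption."""
    return capacity_ah * PACK_VOLTS / wh_per_mile

today = est_range_miles(12.0, 21.6)    # 432 Wh at today's efficiency
doubled = est_range_miles(12.0, 10.8)  # same battery, twice the efficiency
smaller = est_range_miles(6.0, 10.8)   # half the battery, twice the efficiency

print(f"today: {today:.0f} mi, doubled efficiency: {doubled:.0f} mi, "
      f"half-size battery: {smaller:.0f} mi")
# A range-based requirement lets version 2.0 ship the smaller 6 Ah battery;
# a 12 Ah "shall" would forbid that for no user-facing reason.
```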

Requirements in conflict

If two requirements are in conflict (suppose we also needed the product to cost us less than $200, and we can’t source parts at that price), we simply come up with alternatives and present them to stakeholders (such as project managers or financiers). They either update the requirements with the newest information or decide that it’s no longer “business-feasible” to pursue the product and shelve it.

The requirements document is a “living document” and will go through many iterations as development continues.

The three magic words

All features or capabilities in your requirements use these three words, and these three words only: Shall, Should, and May. Add these definitions to your document.

Shall — Denotes a binding requirement. In other words, a project is a success if it accomplishes all of its “shall” requirements. This term is the only one that’s an actual requirement; the others are all technically optional (though you might upset people if you ignore the rest of them). All “shall” requirements are testable with objective acceptance criteria.

Note: Don’t use “must” in technical requirements documents. Legal scholars and contract attorneys may disagree on terms, but “shall” is the standard binding term according to IEEE and ISO. “Shall” is also advantageous because most people don’t use it in daily speech, so it can’t be mistaken for casual phrasing.

Should — Denotes a desired or preferential outcome. Everyone on the project generally wants to accomplish all of the “should” requirements, but they aren’t technically necessary for success. Should is also used when you want a feature but can’t objectively test that feature.

May — Denotes a suggestion or allowance. Use “may” when suggesting some guidance on possible options or explicitly noting a non-opinion. Anything listed as “may” is of no preference.

I use “System” to mean “the thing I’m making”, unless there’s a reason to specify another term. Capitalize it and use it consistently.

Use “is” or “will” to describe statements of fact. “System will connect to the existing network in the factory.”

Examples of Shall/Should/May

Good examples of “Shall”:

  • System shall output voltages between 3.0V and 6.0V DC. — Objective, testable, clearly defined. Adding the extra zero offers an increased digit of precision, which you may want if your system needs exact ranges.
  • System’s longest dimension shall be between 30cm and 70cm in length. — Objective, testable, clearly defined.
  • System shall be written in the Python programming language — Objective, testable. Note there’s no “version” requirement listed, so any Python version satisfies this requirement. Writing “Python 2.7” or “Python 3.0 or newer” would tighten it if a specific version matters.
  • System shall cost less than $35 when built at a 10,000 unit scale. — Objective, testable.
  • System shall survive a 1-meter drop onto concrete without Catastrophic Damage — Objective, testable, but with a caveat. We need to make sure “Catastrophic Damage” is defined in the document, and capitalizing it lets the user know the word has a specific meaning. It’s helpful to define terms like this if we want to reference them in multiple places. For example, we might want some water submersion requirement that also references “Catastrophic Damage.”
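Because every “shall” above is objective, each maps directly to an automated acceptance check. Here is a minimal sketch for the voltage requirement; `measure_output_voltage` is a hypothetical stub standing in for real instrumentation, not part of any actual test bench.

```python
# Acceptance check for "System shall output voltages between 3.0V and 6.0V DC."
# measure_output_voltage() is a hypothetical placeholder; a real bench test
# would query actual instrumentation (e.g., a multimeter over its API).
def measure_output_voltage() -> float:
    return 5.1  # placeholder reading

def test_output_voltage_in_range() -> None:
    volts = measure_output_voltage()
    assert 3.0 <= volts <= 6.0, f"measured {volts} V, outside 3.0-6.0 V DC"

test_output_voltage_in_range()
print("PASS: output voltage within 3.0-6.0 V DC")
```

The pass/fail criterion comes straight from the requirement's wording, which is the point of writing "shall" items with numeric bounds.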

Good examples of “Should”:

  • System should run well on phones from the past four years. — “Run well” isn’t a testable condition, so it is not a requirement. Nevertheless, it points out that people on the project need to try and target “good” performance, whatever that subjectively means.
  • System screen should be readable in bright sunlight. — “Readable” is a subjective measure since there’s no indication of lumens, light angle, visual acuity, or anything you could precisely test. However, this could guide engineers into making sure the screen interface is readable through choices of colors, layout, size, etc.
  • System code should be thoroughly commented. — This item gets the point across that the project team expects code comments throughout, even though there’s no specifically defined quality or quantity of comments.
  • System should not restrict user movement. — For a wearable, all physical systems technically restrict movement in some capacity, but this statement means to use subjective judgment to determine what counts as a restriction.

Good examples of “May”:

  • System may use a pre-made PCB, or a custom PCB. This allowance notes that the project doesn’t care if the final PCB is an “off-the-shelf” Arduino or completely custom SoM. While this isn’t a requirement or even a preference, it lets everyone know that there isn’t an expectation one way or the other. It explicitly states non-preference to remove any assumption of bias.

Items that are true requirements use “Shall”: each contains a single condition, and each is unambiguous and testable.

Items that need subjective judgment to validate or describe a preference use “Should.”

Items you don’t have a preference for use “May.”

General bad examples of requirements:

System may use any programming language except Python. — “May” is not a requirement, so this statement has no meaning. If you did not want something in Python, make it a “shall.” If you prefer developers avoid Python, make it a “should.”

System shall be painted green. — I have died on this hill in many meetings, but “green” is not an objectively testable condition. “System shall be painted PANTONE Green C,” or “System shall be painted with the green paint supplied by partner X” would be acceptable since anyone can precisely validate the requirement.

System shall be 170cm in length — Not specific enough. Where’s the tolerance on this? 170.000 cm? Up to 170cm? A minimum of 170cm?

System shall be easy to use. — How do you measure “easy”?

Is it required that the system allow ten users to connect simultaneously — Generally good, but “Is it required that” needs to be “System shall.”

System shall allow the user to be able to input a six-digit code — This entry is way too confusing. Is the system enabling the user to have some new “code-entering” capability, or is the system itself supposed to have code input?

System shall be between 160.0 and 170cm in length and weigh less than 11.0lbs — These are all testable, but there are two requirements (length and weight) in one entry. Split this into two separate requirements.
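Several of the failure modes above are mechanical enough to screen for automatically. The sketch below is a toy lint pass: the rules are heuristics drawn from these examples, not a complete or authoritative checker.

```python
import re

# Toy lint pass over requirement strings. The rules are heuristics drawn
# from the examples above, not a complete checker.
VAGUE_WORDS = {"easy", "fast", "readable", "well", "user-friendly"}

def lint(req: str) -> list[str]:
    problems = []
    words = set(re.findall(r"[a-z-]+", req.lower()))
    if not words & {"shall", "should", "may"}:
        problems.append("no shall/should/may keyword")
    if "must" in words:
        problems.append("use 'shall' instead of 'must'")
    if "shall" in words and words & VAGUE_WORDS:
        problems.append("binding requirement uses untestable wording")
    if "shall" in words and " and " in req.lower():
        problems.append("possible compound requirement; split it")
    return problems

print(lint("System shall be easy to use."))
# -> ['binding requirement uses untestable wording']
print(lint("System must be 170cm long and weigh less than 11.0lbs."))
```

A real review still needs human judgment (tolerances, defined terms, acceptance criteria), but a pass like this catches the cheap mistakes before a meeting does.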

Requirements document format template

Some of these items might not be relevant to a specific system but are general guidelines for an arbitrary system.

1 System purpose

  • Why are you building this?

2 System scope

  • What are you making, by name?
  • What problem is the user having?
  • How does this solve the problem?
  • How is the system used?
  • What are the benefits to the user?

3 System overview

3.1 System context

  • What are the primary system elements?
  • How do the elements interact?
  • How does the user interact with them?

Add diagrams or descriptions to clarify, especially if you expect the requirements to move beyond engineering and into the hands of business/marketing/etc.

3.2 System functions

  • What are the major capabilities of the system?

3.3 User characteristics

  • Who are the different groups of users of the system? Don’t forget the maintainers, installers, etc.
  • How many users are there in each group?
  • How does each group interact with the system?
  • How much knowledge/capability does each group have? (This point is roughly similar to how software developers write “user stories” or business managers create “ideal customers”)

4 Functional requirements

  • What are the features or objectives of the system?

This area is the “main” part of the requirements where you list all of “what the system does.” Remember shall/should/may language and specific, measurable conditions for “shall” requirements.

5 Usability requirements

  • How will the users learn to use the system?
  • How is the system usable to differently-abled groups (e.g., color-blindness, dyslexia, wheelchair users, etc.)?
  • How does the system “safeguard” against incorrect use? (e.g., a plug that only fits one way to prevent reverse polarization).
  • How efficient should the system be, or how quickly should the user accomplish the system’s goals?

6 Performance requirements

  • How often do the users interact with the system?
  • What is the life expectancy of the system?
  • If there are different modes for the system, what are their requirements?

Consider elements such as strength, stability, security, noise levels, and other conditions of use (e.g., does a system have a different requirement if it’s submerged in water?).

7 System interface requirements

  • How does the system interface with other systems?
  • How do users interface with the system?
  • Does the system need to use any pre-defined communication protocols or standards?

Include system interface diagrams here if needed, especially in systems with complex communication buses. If the communication or external interfacing is complex (but necessary), it’s okay to reference an external document with the details (e.g., “System shall interface with product Omega per document Omega Link Unified Language”). Remember to put any references in the appendix.

8 System operations

8.1 Human system integration requirements

  • What type of constraints on personnel, operators, or other users does the system need? (e.g., does the system require special training, or do the users require ESD protection or similar?)
  • Where does the user interact with the system (e.g., a particular station or area)?

8.2 Maintainability requirements

  • Who will maintain the system? What type of training or skills do they need?
  • How often does the system need maintenance? How long does maintenance take?
  • Does the system need modifications to allow maintenance (e.g., an access panel)?
  • How much downtime is acceptable? Are there any considerations to redundant systems during downtime?

8.3 Reliability requirements

  • How long is the system designed to last?
  • How much unscheduled downtime is acceptable?
  • What happens to the system during a failure?

This section might use statistics to specify requirements (e.g., 99% uptime, 80% capacity over ten years, etc.)
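A statistical requirement is only useful if everyone can translate it into concrete terms. A quick sketch of that conversion (the percentages are illustrative, not requirements from any real system):

```python
# Convert an uptime percentage into an annual downtime budget, so a
# statistical requirement like "99% uptime" becomes a number the team
# can schedule maintenance against.
HOURS_PER_YEAR = 365 * 24  # 8760, ignoring leap years

def downtime_hours_per_year(uptime_pct: float) -> float:
    """Allowed downtime per year for a given uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime -> {downtime_hours_per_year(pct):.2f} h/year of downtime")
```

Each extra nine shrinks the budget by a factor of ten, which is why "99% uptime" and "99.99% uptime" are very different engineering commitments.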

Failure Mode and Effects Analysis and a detailed quality plan often govern more in-depth reliability planning; reference them in the appendix if using them.

8.4 Other quality requirements

  • How will the system be compatible with other systems?
  • What sort of governing quality standards or plans does the system use? Are there internal quality specifications (e.g., company policy) or external specifications (e.g., IPC, ISO, etc.) in use?

9 System modes and states

If the system has multiple modes, use a flowchart, state diagram, or similar to describe the operations and how the user (or system) switches between these modes.

10 Physical characteristics

10.1 Physical requirements

  • What size is the system?
  • What is the system’s mass, volume, or other dimension?
  • How large is the memory, storage space, or bandwidth (for software)?
  • What materials will the system use?
  • How is the system marked or labeled?
  • Does the system require standard “off-the-shelf” parts, or is it acceptable to use custom fabrication?

Remember to use ranges or tolerances as necessary. Also, avoid over-specifying requirements if they aren’t the principal consideration. For example, a system might require a specific construction material or just require the system to weigh under a specific amount.

10.2 Adaptability requirements

  • How does the system grow in the future or accommodate expanded use (e.g., more users, increased bandwidth, physical throughput, etc.)?
  • How does the system change when it reaches the limits of its use (e.g., adding more systems in parallel, expanding the existing system’s capabilities, etc.)?
  • How does the system change when utilized less than expected?

11 Environmental conditions

  • What types of environments does the system need to survive?
  • How is the system protected from the environment?
  • How does the system interact with its environment?

General environmental considerations are temperature, water, dust, heat, physical shock, mold, salt, radiation, chemicals, pressure, wildlife, humans, etc. These elements will vary depending on the system’s environment. Environment also includes non-physical economic/social requirements. You can also specify a standard, such as IP testing, to govern a specific water/dust requirement.

12 System security requirements

  • How do you secure the system, either physically or through software?
  • What sort of authentication does the user need to operate the system (e.g., keys, passwords, building access, etc.)?
  • What kind of protection does the system have against tampering/sabotage/accidental user error?
  • How does the system report any security violations (e.g., software logs, physical breaks when a panel opens, alarms, etc.)?
  • What are the methods of securing software code (e.g., Git methodology, CI/CD branch testing, pen-testing, disconnecting network access during system failure)?

13 Information management requirements

  • How does the system store information?
  • How is information storage secured?
  • How does the system back up data?
  • What is the storage method for information about the system (e.g., schematics, software code, industrial diagrams, market research information, etc.)?
  • Who has access to the various information?

14 Policy and regulation requirements

  • What health and safety regulations surround the system’s operation (e.g., OSHA Lockout/Tagout, internal company safety procedures, etc.)?
  • What multilingual support does the system use?

15 System life cycle sustainment requirements

  • How will you ensure the system operates satisfactorily?
  • How will you routinely check system operations?
  • Who will monitor and buy spare parts for the system? Where will you store spare parts or specific tools?
  • How will you train personnel on system use and maintenance?

16 Packaging, handling, shipping, and transportation requirements

  • How will you ship or transport the system?
  • Does shipment or transport require special certification (e.g., UN 38.3 for batteries)?
  • What type of materials should you use to package the system (e.g., ESD-safe plastic, JEDEC trays, wooden crates, waterproof containers, etc.)?
  • How will you store the system intermittently? Does storage require a particular environment (e.g., refrigeration)?

17 Verification

The easiest way to write this section is to reference the specific test plan that governs the system (e.g., “System shall be considered successful when it passes all tests described in…”).

An easy way to write a test plan is to go back through the requirements, look for any “shall” requirements, and then quantify them in a test procedure. If you wrote the requirements document correctly, all requirement items have a singular testable condition with clearly-defined acceptance criteria.
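That scrape-the-shalls step can be done mechanically. The sketch below is illustrative: the requirement IDs and wording are invented, and a real document would likely live in a tracked file rather than a string.

```python
import re

# Sketch: pull the binding "shall" requirements out of a document and emit
# test-plan stubs. Requirement IDs and text are invented for illustration.
doc = """\
REQ-001 System shall output voltages between 3.0V and 6.0V DC.
REQ-002 System screen should be readable in bright sunlight.
REQ-003 System shall cost less than $35 at a 10,000 unit scale.
"""

shall_reqs = [line for line in doc.splitlines()
              if re.search(r"\bshall\b", line)]

for req in shall_reqs:
    req_id = req.split()[0]
    print(f"def test_{req_id.lower().replace('-', '_')}():")
    print(f"    # Verify: {req}")
    print("    ...")
```

"Should" items fall out of the scrape on purpose: they go to the human judgment list described below, not the automated test plan.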

For items that aren’t objectively testable but require judgment, specify a person, title, or position of whoever shall qualify that item.

Again, you can simply put all of this information in a separate test plan and then reference the test plan.

18 Assumptions and dependencies

  • What axioms does the system rely on to continue operation?

This section could be infinitely long (e.g., assuming measurement equipment is reliable, universal constants remain universal, P != NP, etc.), so only add items that you feel are relevant, plausible, or critical to the system’s operation.

If you don’t have control over an item, don’t include it. If you could have control over an external dependency and it’s critical to the system, consider adding a requirement to incorporate that dependency. For example, if the system relies on a crucial open-source library, forking the library and maintaining an internal fork might be an option instead of depending on the developers’ upkeep.

A template with just the headers

Front page

Version history

Definitions (don’t forget shall/should/may)

Acronyms and abbreviations

1 System purpose

2 System scope

3 System overview

3.1 System context

3.2 System functions

3.3 User characteristics

4 Functional requirements

5 Usability requirements

6 Performance requirements

7 System interface requirements

8 System operations

8.1 Human system integration requirements

8.2 Maintainability requirements

8.3 Reliability requirements

8.4 Other quality requirements

9 System modes and states

10 Physical characteristics

10.1 Physical requirements

10.2 Adaptability requirements

11 Environmental conditions

12 System security requirements

13 Information management requirements

14 Policy and regulation requirements

15 System life cycle sustainment requirements

16 Packaging, handling, shipping, and transportation requirements

17 Verification

18 Assumptions and dependencies

Appendix
