RFP Process
How to Score and Select References for RFPs
A practical framework for scoring and selecting the most relevant past project references for each RFP—one of the most impactful and underinvested parts of the proposal process.
Summary
Reference selection is one of the highest-leverage, and most underinvested, decisions in an RFP response. Evaluators use your references to answer a simple question: has this team done work like this before? A strong, well-matched reference set directly increases evaluator confidence and scoring, yet most teams choose references based on familiarity rather than fit, and lose points as a result. How well you answer the evaluators' question is almost entirely within your control.
Why reference selection matters more than most teams realize
Past performance sections commonly account for 20–40% of the total evaluation score. A poorly matched reference set, even from a highly capable firm, signals to evaluators that your team lacks direct, relevant experience in this specific context.
Most teams choose references based on what they remember or what's easiest to access. The best teams choose references based on systematically scored fit against each opportunity.
A scoring framework for reference selection
For each potential reference project, score it on five dimensions relative to the current RFP:
| Dimension | What to evaluate | Weight |
|---|---|---|
| Sector match | Is the client type (municipal, federal, private sector) similar? | High |
| Scope similarity | How closely does the work match the current RFP's scope? | High |
| Project value | Is the scale comparable to the current opportunity? | Medium |
| Recency | Was the project completed within the last five years? | Medium |
| Outcome quality | Can you point to a measurable result or positive client reference? | Medium |
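The rubric above can be turned into a simple weighted score so every candidate reference is compared on the same scale. This is a minimal sketch: the 0–5 scale and the numeric weights (3 for High, 2 for Medium) are illustrative assumptions, not part of the framework itself.

```python
# Illustrative weights for the five dimensions in the table above.
# High = 3, Medium = 2 is an assumption; tune to your own evaluation criteria.
WEIGHTS = {
    "sector_match": 3,      # High
    "scope_similarity": 3,  # High
    "project_value": 2,     # Medium
    "recency": 2,           # Medium
    "outcome_quality": 2,   # Medium
}

def score_reference(scores: dict[str, int]) -> float:
    """Weighted average of per-dimension scores, each rated 0-5."""
    total_weight = sum(WEIGHTS.values())
    weighted = sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)
    return weighted / total_weight

# Example: a strong sector/recency match with a slightly smaller project value.
candidate = {
    "sector_match": 5,
    "scope_similarity": 4,
    "project_value": 3,
    "recency": 5,
    "outcome_quality": 4,
}
print(round(score_reference(candidate), 2))  # → 4.25
```

Scoring every candidate this way makes the final selection a sort, not a debate: rank the library by score and pick the top three or four.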
How many references should you include?
Always follow the RFP's specification exactly. If the RFP asks for three references, submit three. If you have flexibility, three to four high-scoring references outperform five or six weaker ones. Evaluators weight quality and relevance far more heavily than volume.
Writing the reference description
For each selected reference, include:
- Client name and type (public/private, sector)
- Project scope in one to two sentences—specific, not generic
- Your firm's specific role (not just "environmental consulting"—what exactly did you deliver?)
- A quantifiable outcome where possible (area remediated, cost savings delivered, timeline achieved)
- Contactable client reference with name and current contact information
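The checklist above is easy to enforce if each reference is captured as structured data rather than free text. A minimal sketch, with illustrative field names mirroring the bullets:

```python
from dataclasses import dataclass

@dataclass
class Reference:
    """One past-project reference; fields mirror the checklist above."""
    client_name: str
    client_type: str   # e.g. "public / municipal"
    scope: str         # one to two specific sentences
    firm_role: str     # what exactly your firm delivered
    outcome: str       # quantifiable result where possible
    contact_name: str
    contact_info: str  # confirmed, current contact details

    def is_complete(self) -> bool:
        """True only if every checklist field is filled in (non-blank)."""
        return all(bool(value.strip()) for value in vars(self).values())
```

A simple `is_complete()` gate before submission catches the most common failure: a reference with a prestigious client name but no role, outcome, or confirmed contact.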
What to avoid
- Listing flagship projects that don't match the scope: A prestigious project in an unrelated sector doesn't impress evaluators scoring against specific criteria.
- Generic descriptions: "Provided environmental services for a regional client" tells evaluators nothing. Be specific.
- Outdated references: Projects more than seven to eight years old raise questions about current capability. Use them only if they're genuinely the best match and clearly frame the recency.
- References you haven't confirmed: Confirm your referee is still reachable and willing to speak positively before including them. A non-responsive or negative reference can cost you a contract you otherwise would have won.
How AI improves reference selection
For firms with large project histories—dozens or hundreds of past projects across multiple sectors—manual reference selection misses the best matches. AI can score your entire project library against each new RFP's scope and requirements in seconds, surfacing high-fit references that memory or manual search would miss.
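To make the idea of library-wide scoring concrete, here is a deliberately crude sketch that ranks past projects by word overlap with an RFP's scope text. This is not how a production system (or Stepscale) works—real matching uses semantic embeddings rather than raw word sets—but it shows the shape of the task: score everything, then sort.

```python
def keyword_overlap(rfp_text: str, project_text: str) -> float:
    """Jaccard overlap of word sets: a crude stand-in for semantic matching."""
    rfp_words = set(rfp_text.lower().split())
    project_words = set(project_text.lower().split())
    if not rfp_words or not project_words:
        return 0.0
    return len(rfp_words & project_words) / len(rfp_words | project_words)

def rank_projects(rfp_text: str, library: dict[str, str]) -> list[tuple[str, float]]:
    """Score every past project against the RFP scope and return best-first."""
    scored = [(name, keyword_overlap(rfp_text, text)) for name, text in library.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

Even this toy version illustrates why automated scoring beats memory: it evaluates every project in the library on every bid, not just the handful someone happens to recall.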
Stepscale's reference scoring engine does exactly this: upload your past project context, and Stepscale ranks your history by fit against each new opportunity. It's one of the fastest ways to improve the quality of your past performance sections across every bid you submit.