How We Test

Our detailed testing methodology for robot vacuums — learn how we evaluate cleaning performance, navigation, noise, and more.
Apr 1, 2026

How We Test Robot Vacuums

Every robot vacuum we review goes through the same rigorous, real-world testing process. This page explains exactly what we do, how we score, and where our data comes from. No black boxes — just transparent methodology you can trust.

Our Testing Philosophy

Spec sheets tell you what a robot vacuum should do. We measure what it actually does. Every model runs through standardized tests in furnished, lived-in rooms so results reflect real-world performance, not lab conditions.


Cleaning Performance Tests

Hard Floor Testing (25% of final score)

We spread measured amounts of three debris types across a 3m x 3m section of hard flooring:

  • Cereal (Cheerios-sized) — tests pickup of common, lightweight debris
  • Coffee grounds — tests fine particle pickup, which separates good suction from great
  • Fine sand — the toughest test; fine sand settles into floor texture and requires strong, consistent airflow

Each debris type is tested in a single pass. We weigh the dustbin before and after to calculate the exact pickup rate as a percentage. A top-scoring vacuum picks up 98%+ of all three types in one pass.
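The before/after weighing above reduces to simple arithmetic. As a sketch (the function name and numbers are illustrative, not from our lab tooling):

```python
def pickup_rate(bin_before_g: float, bin_after_g: float, debris_spread_g: float) -> float:
    """Pickup rate as a percentage: debris collected vs. debris spread.

    All weights are in grams. Illustrative only, not our actual lab software.
    """
    collected = bin_after_g - bin_before_g
    return 100.0 * collected / debris_spread_g

# Example: dustbin goes from 52.0 g to 61.9 g after 10 g of coffee grounds
print(round(pickup_rate(52.0, 61.9, 10.0), 1))  # 99.0
```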

Carpet Testing (20% of final score)

We use a standardized medium-pile carpet section and embed a measured mix of debris (sand, cereal, and coffee grounds) into the fibers using a weighted roller. The vacuum makes two passes. We then weigh the collected debris against the starting amount to calculate the embedded debris pickup rate.

This test reveals how well a vacuum handles the dirt you can't see — the kind that settles deep into carpet over days of foot traffic.

Pet Hair Testing

Pet hair is evaluated as part of both hard floor and carpet scores. We flatten real pet hair (a mix of short and long strands) onto carpet and hard floor surfaces, then measure pickup rate and check the brush roll for tangling. Vacuums with rubber extractors typically outperform bristle brushes here.


Mopping Testing (15% of final score)

For models with mopping capability, we apply dried coffee stains to sealed hard flooring and let them set for 24 hours. The vacuum-mop runs two passes over the stained area. We photograph and grade stain removal on a 1-10 scale, evaluating both first-pass and second-pass results.

We also assess water flow consistency, pad pressure, and whether the mop leaves floors overly wet or streaky.


Navigation Testing (15% of final score)

We run each vacuum in a furnished room (approximately 20 square meters) containing a sofa, dining table, chairs, shelving, and common clutter. We measure two things:


  • Completion time — how long the vacuum takes to clean the entire room
  • Coverage rate — percentage of accessible floor area actually cleaned, verified by placing lightweight paper markers across the floor

LiDAR and structured-light models typically score highest. Camera-based navigation is usually close behind. Random-bounce models rarely score above 6/10.


Obstacle Avoidance

We set up a standardized obstacle course with five common household items:

  1. Shoes (sneakers placed on the floor)
  2. Cables (a phone charger draped across the path)
  3. Small toys (building blocks, ~3cm)
  4. Socks (a single sock laid flat)
  5. Chair legs (a dining chair in the cleaning path)

We record whether the vacuum avoids, bumps into, pushes, or gets stuck on each obstacle. This test is factored into the navigation score.


Noise Testing (10% of final score)

We take decibel readings at a distance of 1 meter using a calibrated sound meter. Measurements are recorded in both standard and max/turbo modes.

Rating      Standard Mode   Max Mode
Excellent   Under 60 dB     Under 68 dB
Good        60-65 dB        68-72 dB
Average     65-70 dB        72-76 dB
Loud        Over 70 dB      Over 76 dB

For context, 60 dB is roughly normal conversation volume. Anything under 65 dB in standard mode is comfortable enough to run while you work from home.


Battery and Runtime Testing

We fully charge each vacuum and run it on standard cleaning mode until it returns to the dock or dies. We record the actual runtime and compare it to the manufacturer's claimed runtime.

Most manufacturers test runtime in an empty room on the lowest power setting. Our numbers are typically 15-30% lower — and more representative of what you will actually experience.
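That shortfall figure is just the percentage gap between claimed and measured runtime. A minimal sketch with made-up numbers (not measurements from any specific model):

```python
def runtime_shortfall_pct(claimed_min: float, measured_min: float) -> float:
    """How far real-world runtime falls short of the manufacturer's claim."""
    return 100.0 * (claimed_min - measured_min) / claimed_min

# Example: 180 minutes claimed, 140 minutes measured in standard mode
print(round(runtime_shortfall_pct(180, 140), 1))  # 22.2
```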


Smart Features and App Evaluation (10% of final score)

We evaluate the companion app across several criteria:

  • Setup ease — how long from unboxing to first cleaning run
  • Map management — room editing, zone cleaning, no-go zones
  • Scheduling — flexibility and reliability of scheduled cleans
  • Voice assistant integration — Alexa, Google Home, Siri support
  • Firmware updates — frequency and whether they actually improve performance

A great app meaningfully improves the ownership experience. A bad app can make an otherwise excellent vacuum frustrating to use.


Maintenance Cost Tracking (5% of final score)

We track the price and recommended replacement frequency of every consumable part:

  • Side brushes
  • Main brush rolls
  • Filters (HEPA or standard)
  • Mop pads
  • Dust bags (for self-emptying docks)

We calculate the estimated annual maintenance cost based on the manufacturer's replacement schedule. Some vacuums cost under $30/year to maintain; others exceed $100. This is a real and often overlooked part of the total cost of ownership.
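The annual estimate is the sum of each part's price times how often it is replaced per year. A rough sketch with invented prices and intervals (not figures for any real model):

```python
# part: (price in USD, replacements per year) -- illustrative values only
consumables = {
    "side brush": (8.00, 2),
    "main brush roll": (15.00, 1),
    "HEPA filter": (6.00, 4),
    "dust bag": (4.00, 6),
}

annual_cost = sum(price * per_year for price, per_year in consumables.values())
print(f"${annual_cost:.2f}/year")  # $79.00/year
```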


Our Scoring System

Every vacuum receives a final score on a 10-point scale, calculated as a weighted average across the following dimensions:

Category              Weight
Hard Floor Cleaning   25%
Carpet Cleaning       20%
Navigation            15%
Mopping               15%
Noise                 10%
Smart Features        10%
Maintenance Cost      5%

For vacuums without mopping capability, we redistribute that 15% proportionally across the other categories so the score remains comparable.
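The weighted average and the proportional redistribution can be sketched as follows (dividing the remaining weights by their new total is equivalent to spreading the mopping 15% proportionally; key names are illustrative):

```python
WEIGHTS = {
    "hard_floor": 0.25,
    "carpet": 0.20,
    "navigation": 0.15,
    "mopping": 0.15,
    "noise": 0.10,
    "smart_features": 0.10,
    "maintenance_cost": 0.05,
}

def final_score(scores: dict) -> float:
    """Weighted average of per-category scores on a 10-point scale.

    If no mopping score is supplied, its 15% weight is redistributed
    proportionally across the remaining categories.
    """
    weights = dict(WEIGHTS)
    if "mopping" not in scores:
        weights.pop("mopping")
        remaining = sum(weights.values())  # 0.85
        weights = {k: w / remaining for k, w in weights.items()}
    return sum(scores[k] * w for k, w in weights.items())

# A vacuum scoring 8.0 in every category lands at 8.0 overall,
# with or without a mopping score.
```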


How We Source Our Data

Our reviews are built on three pillars:

  1. Hands-on testing — Every vacuum we recommend has been tested in-house using the methodology described above.
  2. Cross-referencing trusted sources — We compare our findings against data from Vacuum Wars, RTINGS, and verified Amazon purchase reviews to ensure consistency and catch edge cases.
  3. Real user feedback — We monitor communities like Reddit (r/RobotVacuums, r/Roborock, r/iRobot) for long-term ownership reports, common failure points, and firmware update impacts that only show up after months of use.

No single source tells the full story. By combining lab-style testing with community-sourced long-term data, we give you the most complete picture possible.


Editorial Independence

We take our independence seriously:

  • We buy our own products. The majority of vacuums we test are purchased with our own money. In some cases, we use manufacturer loaner units for early access to new releases — but we always disclose this, and it never affects our scores.
  • No sponsored reviews. We do not accept payment from any brand to review a product or influence a ranking.
  • Affiliate links do not affect scores. We earn commissions through affiliate links (see our Affiliate Disclosure), but our recommendations are based entirely on test results.

If a product is not good enough to recommend, we say so — regardless of the commission rate.


Questions About Our Process?

If you have questions about how we test or want to suggest improvements to our methodology, we are always open to feedback.

Email: hello@bestrobovacuums.com
