How We Test
Our goal is simple: make your first cut match what we measure in the lab. We build repeatable tests that mirror real tasks, capture objective measurements, and translate them into clear scores and cut kits you can trust.
Test Environments: Lab + Field
- Lab: Controlled benches with reference squares, dial indicators, digital calipers, laser alignment, lux meters (for visibility), a tachometer (blade speed where accessible), and particle counters for dust. We use standardized materials (e.g., CDX plywood, AC plywood, melamine-coated board, SPF framing lumber) conditioned to equilibrium moisture content.
- Field: Real setups in small spaces and jobsite scenarios—driveways, apartments, and framing sites—to validate handling, balance, visibility, and dust practicality with common vacs.
Core Metrics We Measure
- Accuracy Deviation: Out-of-square and bevel error over representative cut lengths using reference blocks and calibrated squares (see the worked sketch after this list).
- Speed: Time to complete standardized rips and crosscuts at controlled feed pressure with matched blades.
- Edge Quality / Chip‑Out: Visual scoring and magnified inspection on plywood/melamine; tear‑out quantified against a rubric.
- Dust Capture %: Differential weight/particle counts with and without extraction, using standardized hoses/adapters.
- Vibration & Ergonomics: Perceived vibration, handle geometry, balance, trigger feel, and visibility to the cut line, scored against a consistent rubric.
- Setup / Usability: Blade changes, depth/bevel adjustments, sightline, guard behavior, and fence/track compatibility.
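Two of these metrics reduce to simple arithmetic once the raw measurements are in hand. The Python sketch below shows one plausible way to turn them into the numbers we report; the function names, the per-metre normalization, and the residual-weight formula for dust capture are illustrative assumptions, not a specification of our fixtures.

```python
def accuracy_deviation_per_m(gap_mm: float, cut_length_mm: float) -> float:
    """Out-of-square error normalized to mm of deviation per metre of cut."""
    return gap_mm / cut_length_mm * 1000.0


def dust_capture_pct(residual_with_vac_g: float, residual_without_vac_g: float) -> float:
    """Capture percentage from differential weights of residual dust
    collected with and without extraction (one plausible formula)."""
    return 100.0 * (1.0 - residual_with_vac_g / residual_without_vac_g)


# Example: a 0.4 mm gap over a 600 mm crosscut, and 12 g of residual dust
# with the vac running versus 150 g without it.
print(f"{accuracy_deviation_per_m(0.4, 600):.2f} mm per metre")  # 0.67 mm per metre
print(f"{dust_capture_pct(12.0, 150.0):.1f} % captured")         # 92.0 % captured
```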
Blades, Guides, and System Matching
We test saws with category-appropriate blades (framing, fine‑finish, laminate/melamine) and evaluate with and without guides (fences, tracks, edge guides). If a saw benefits significantly from a specific blade or guide, we document it and reflect it in our cut kits.
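For instance, a with/without-upgrade comparison might be logged along these lines; the threshold and field names are hypothetical, not part of our published rubric.

```python
# Hypothetical record of a blade/guide pairing test: the same cut is scored
# with the stock setup and with the upgrade, and a delta above a working
# threshold gets documented and carried into the cut kit.
DOCUMENT_THRESHOLD = 10  # points of score improvement worth calling out


def worth_documenting(stock_score: float, upgraded_score: float) -> bool:
    return (upgraded_score - stock_score) >= DOCUMENT_THRESHOLD


print(worth_documenting(72, 88))  # True: the pairing goes into the kit notes
```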
Scoring Model and Weights
We publish composite scores and category scores with transparent weights for each use case:
- Finish/Sheet Goods: Accuracy (35%), Edge Quality (25%), Dust (20%), Ergonomics (10%), Speed (10%).
- Framing/Production: Speed (30%), Accuracy (25%), Ergonomics (20%), Visibility/Setup (15%), Dust (10%).
Weights may vary slightly by task; any changes are disclosed on the page.
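A minimal sketch of that arithmetic, assuming each category score is already normalized to a 0-100 scale; the weight tables mirror the two use cases above, and the example scores are hypothetical.

```python
WEIGHTS = {
    "finish_sheet_goods": {
        "accuracy": 0.35, "edge_quality": 0.25, "dust": 0.20,
        "ergonomics": 0.10, "speed": 0.10,
    },
    "framing_production": {
        "speed": 0.30, "accuracy": 0.25, "ergonomics": 0.20,
        "visibility_setup": 0.15, "dust": 0.10,
    },
}


def composite_score(category_scores: dict[str, float], use_case: str) -> float:
    weights = WEIGHTS[use_case]
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1.0
    return sum(weights[k] * category_scores[k] for k in weights)


# Example: hypothetical scores for a track saw judged for sheet goods.
scores = {"accuracy": 92, "edge_quality": 88, "dust": 95, "ergonomics": 80, "speed": 70}
print(round(composite_score(scores, "finish_sheet_goods"), 1))  # 88.2
```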
Battery Platforms and Normalization
For cordless saws, we test on recommended amp‑hour packs and, when possible, a second pack to evaluate voltage sag. Results note battery used, charge cycles, and ambient temperature. Where cross‑platform comparisons are made, we normalize for blade type and material.
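As an illustration of what that normalization could look like, the sketch below divides measured cut time by a baseline factor for the blade/material pairing; the pairings and factors are placeholders, not published constants.

```python
# Hypothetical normalization for cross-platform speed comparisons: divide the
# measured cut time by a baseline factor for the blade/material pairing so
# runs on different stock stay comparable.
BASELINE_FACTORS = {
    ("framing_blade", "spf_2x"): 1.00,
    ("fine_finish_blade", "ac_plywood"): 1.35,
    ("laminate_blade", "melamine"): 1.50,
}


def normalized_cut_time(measured_s: float, blade: str, material: str) -> float:
    return measured_s / BASELINE_FACTORS[(blade, material)]


print(f"{normalized_cut_time(18.0, 'laminate_blade', 'melamine'):.1f} s")  # 12.0 s
```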
Repeatability and QA
Each test is repeated multiple times. Outliers caused by visible user or material defects are documented and excluded with justification. Fixtures and gauges are checked before each test cycle, and we recalibrate when environmental conditions shift.
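A sketch of how repeated runs might be aggregated, with excluded runs carrying their written justification; the field names and the spread statistic are assumptions for illustration.

```python
from statistics import mean, stdev


def aggregate_runs(runs: list[dict]) -> dict:
    """Drop runs that carry an exclusion reason, then report mean and spread."""
    kept = [r for r in runs if not r.get("excluded_reason")]
    values = [r["value"] for r in kept]
    return {
        "n_kept": len(kept),
        "exclusions": [r["excluded_reason"] for r in runs if r.get("excluded_reason")],
        "mean": mean(values),
        "stdev": stdev(values) if len(values) > 1 else 0.0,
    }


runs = [
    {"value": 0.50},
    {"value": 0.60},
    {"value": 2.40, "excluded_reason": "visible knot deflected the blade"},
    {"value": 0.55},
]
print(aggregate_runs(runs))  # mean 0.55, stdev 0.05, one documented exclusion
```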
Safety Protocols
All testing follows conservative feed rates and best practices for kickback prevention, PPE, and dust control. We document any safety concerns (e.g., guard stickiness, brake lag) prominently.
Updates and Re‑tests
When firmware/hardware revisions, blade formulations, or adapters change results, we update the review and timestamp the change. Community requests for re‑tests are tracked via email and prioritized by impact and volume.
From Scores to Cut Kits
After testing, we assemble task‑specific kits that pair the right saw, blade, guide/track, dust adapter, and a simple setup recipe. This system-level recommendation is designed to reproduce our measured outcomes in your space with minimal tuning.
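The record behind a kit might look something like this; the fields and example values are purely illustrative, not a catalogue of actual recommendations.

```python
from dataclasses import dataclass, field


@dataclass
class CutKit:
    """One task-specific pairing of saw, blade, guide, dust adapter, and recipe."""
    task: str
    saw: str
    blade: str
    guide: str
    dust_adapter: str
    setup_recipe: list[str] = field(default_factory=list)


kit = CutKit(
    task="chip-free melamine shelving",
    saw="example 165 mm track saw",
    blade="48T laminate blade",
    guide="1400 mm track with splinter strip",
    dust_adapter="35 mm hose adapter",
    setup_recipe=[
        "Scoring pass at 2 mm depth, then a full-depth pass",
        "Set depth to material thickness plus 3 mm",
        "Run the vac on auto-start",
    ],
)
```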