Graph neural networks (GNNs) have achieved state-of-the-art performance across a wide range of domains. Yet message-passing GNNs suffer from over-squashing, the exponential compression of information arriving from distant nodes, which limits their expressivity. Rewiring techniques can ease this bottleneck, but their practical impact is unclear because no direct empirical metric of over-squashing exists. We propose a topology-focused method for assessing over-squashing between node pairs using the decay rate of their mutual sensitivity, and we extend these pairwise assessments to graph-level statistics. Coupling these metrics with a within-graph causal design, we quantify how rewiring strategies affect over-squashing on diverse graph- and node-classification benchmarks. Our extensive empirical analyses show that most graph classification datasets suffer from over-squashing (to varying extents) and that rewiring effectively mitigates it, although the degree of mitigation, and its translation into performance gains, varies by dataset and method. We also find that over-squashing is less pronounced in node classification datasets, where rewiring often increases over-squashing and performance changes are uncorrelated with changes in over-squashing. These findings suggest that rewiring is most beneficial when over-squashing is substantial and is corrected with restraint; overly aggressive rewiring, or rewiring applied to minimally over-squashed graphs, is unlikely to help and may even harm performance. Our plug-and-play diagnostic tool lets practitioners decide, before any training, whether rewiring is likely to pay off.
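
The abstract does not spell out how mutual sensitivity or its decay rate is computed; the following is a minimal illustrative sketch, assuming a gradient-based notion of node-to-node sensitivity and an exponential-decay fit against hop distance. The class `SimpleGCN` and the helpers `pairwise_sensitivity` and `decay_rate` are hypothetical names introduced here for illustration and are not taken from the paper.

```python
# Minimal illustrative sketch (assumed formulation, not the paper's exact metric):
# treat the "mutual sensitivity" of a node pair (u, v) as the magnitude of the
# gradient of node v's final embedding with respect to node u's input features,
# and estimate a "decay rate" by regressing its log against shortest-path distance.
import networkx as nx
import numpy as np
import torch


class SimpleGCN(torch.nn.Module):
    """Untrained GCN-style message passer, used only to probe sensitivity."""

    def __init__(self, dim: int, layers: int = 6):
        super().__init__()
        self.lins = torch.nn.ModuleList([torch.nn.Linear(dim, dim) for _ in range(layers)])

    def forward(self, x, adj_norm):
        for lin in self.lins:
            x = torch.tanh(adj_norm @ lin(x))
        return x


def pairwise_sensitivity(model, x, adj_norm, u, v):
    """Proxy for ||d h_v / d x_u||: gradient of v's summed embedding w.r.t. x_u."""
    x = x.clone().requires_grad_(True)
    h = model(x, adj_norm)
    (grad,) = torch.autograd.grad(h[v].sum(), x)
    return grad[u].abs().sum().item()


def decay_rate(model, x, adj_norm, G, source):
    """Slope of log-sensitivity vs. hop distance; a more negative slope
    means faster decay, i.e. stronger over-squashing around `source`."""
    dists = nx.single_source_shortest_path_length(G, source)
    hops, log_sens = [], []
    for v, d in dists.items():
        if d == 0:
            continue
        s = pairwise_sensitivity(model, x, adj_norm, source, v)
        if s > 0:  # nodes outside the receptive field contribute exactly zero
            hops.append(d)
            log_sens.append(np.log(s))
    slope, _ = np.polyfit(hops, log_sens, deg=1)
    return slope


if __name__ == "__main__":
    # A path graph is a classic setting where long-range signal gets squashed.
    n, dim = 10, 8
    G = nx.path_graph(n)
    A = torch.tensor(nx.to_numpy_array(G), dtype=torch.float32) + torch.eye(n)
    d_inv_sqrt = A.sum(dim=1).pow(-0.5)
    adj_norm = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

    torch.manual_seed(0)
    model = SimpleGCN(dim)
    x = torch.randn(n, dim)
    print("estimated decay rate from node 0:", decay_rate(model, x, adj_norm, G, 0))
```

Under this assumed formulation, pairwise decay rates could be aggregated (for example, averaged over sampled source nodes) into a graph-level statistic and compared before and after rewiring; the specific aggregation and causal design used in the paper are described in its methodology, not here.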