Peer review, in all its contexts, is widely considered a useful contributor to improving the quality of work. Whether it’s articles for publication or software development, having somebody else look over the work is generally accepted to be better than having nobody (other than the creator) look at it. You could extrapolate that logic and say: well, if one reviewer improves the quality by x, then two reviewers would double the quality assurance. And three reviewers…
See, this might work if each reviewer thought they were the only one reviewing it. Or at least didn’t know whether they were the first reviewer or the last, or who the other reviewers were. The problem is that people are human. Unless they’re exceptionally diligent individuals, it’s not unusual for a person to think “If I miss something, the next person will pick it up” or “The previous reviewer will have already picked out the problems”. I’ve seen it happen.
From the perspective of the Swiss cheese model for hazards, I suspect that each extra person involved at a stage of QA potentially adds more holes to that slice rather than closing them. Not always. But throwing more people and systems at it is no guarantee of quality.
I don’t doubt that a single review adds some value and quality to any process. Humans are best suited to subjective analysis. For reliable QA, though, it’s probably best to lean on objective, independent, and reproducible checks as much as possible. That’s something machines (including software) can be good at.
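As a minimal sketch of what such an objective, reproducible check might look like, here is a small Python example. The specific rules (a maximum line length, a couple of banned markers) are hypothetical stand-ins for whatever standards a real project would enforce; the point is that the same input always yields the same findings, regardless of who runs it or how many reviews came before.

```python
def qa_check(text, max_line_len=100, banned_phrases=("TODO", "FIXME")):
    """Run deterministic checks over text; same input always gives same output."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        # Objective rule 1: flag overly long lines.
        if len(line) > max_line_len:
            findings.append(f"line {lineno}: exceeds {max_line_len} chars")
        # Objective rule 2: flag leftover work markers.
        for phrase in banned_phrases:
            if phrase in line:
                findings.append(f"line {lineno}: contains '{phrase}'")
    return findings

sample = "This line is fine.\nTODO: tidy this up later."
print(qa_check(sample))
```

Unlike a human reviewer, this check never assumes somebody else already caught the problem; it reports the same findings on the first run and the hundredth.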