The reality is a lot messier, and getting into the details makes it clear not only that a Whiggish history of science is mistaken, but that science doesn't proceed through the algorithmic application of "the scientific method"--indeed, that there is no such thing as "the scientific method." Rather, there is a diverse set of methods that are themselves evolving in various ways. Sometimes methods fully endorsed as rational and scientific produce erroneous results, and sometimes methods with no such endorsement--even demonstrably irrational ones--fortuitously produce correct results. For example, Johannes Kepler was a neo-Pythagorean number mystic who arrived at his second law of planetary motion by starting from an incorrect version of the law based on his intuitions and deriving the correct version from it by way of a mathematical argument that itself contained an error. Although he fortuitously got the right answer and receives credit for devising it, he was not justified in believing it to be true on the basis of his erroneous proof. With his first law, by contrast, he followed an almost textbook version of the hypothetico-deductive model of scientific method, formulating hypotheses and testing them against Tycho Brahe's data.
The history of the scientific revolution includes numerous instances of new developments occurring piecemeal, with many prior erroneous notions retained. Copernicus kept not only perfectly circular orbits and celestial spheres, but still needed to add epicycles to get his theory anywhere close to the predictive accuracy of the Ptolemaic models in use. Galileo likewise retained perfect circles, insisting that circular motion was natural motion and refusing to consider Kepler's elliptical orbits. There seems to be a good case for "path dependence" in science. Even the most revolutionary changes actually build on bits and pieces that came before--and sometimes rediscover work that had already been done, like Galileo's derivation of the uniform acceleration of falling bodies, which had already been worked out by Nicole Oresme and the Oxford Calculators. And the social and cultural environment--not just the scientific history--has an effect on what kinds of hypotheses are considered and accepted.
This conservatism of scientific change is a double-edged sword. On the one hand, it suggests that claims purporting to radically overthrow existing theory (that "everything we know is wrong") are unlikely to succeed--even if they happen to be correct. And given that there are many more ways to go wrong than to go right, such radical revisions are very likely incorrect. Even where new theories are correct in some of their more radical claims (like Copernicus' heliocentric model or Wegener's continental drift), it often requires other pieces to fall into place before they become accepted (and before it becomes rational to accept them). On the other hand, this also means that we're likely to be blinded to new possibilities by what we already accept as working well enough, even though it may be an inaccurate description of the world that is merely predictively successful. "Consensus science" at any given time probably includes lots of claims that aren't true.
My inference from this is that we need both visionaries and skeptics, and a division of cognitive labor that's largely conservative, but with tolerance for diversity and a few radicals generating the crazy hypotheses that may turn out to be true. The critique of evidence-based medicine made by Kimball Atwood and Steven Novella--that it fails to consider the prior plausibility of hypotheses to be tested--is a good one: it recognizes how unlikely radical hypotheses are to be correct, and thus that huge amounts of money shouldn't be spent generating and testing them. (Their point is actually stronger than that, since most of the "radical hypotheses" in question are not really radical or novel, but are based on already discredited views of how the world works.) But that critique shouldn't be taken to exclude anyone from generating and testing hypotheses that don't appear to have a plausible mechanism, because there is ample precedent for new phenomena being discovered before the mechanisms that explain them.
I think there's a tendency among skeptics to talk about science as though it's a unified discipline with a singular methodology, one that makes continuous progress and where the consensus at any moment is the most appropriate thing to believe. The history of science suggests, on the contrary, that science is composed of multiple disciplines with multiple methods, that it proceeds in fits and starts, that it has dead ends, that it sometimes rediscovers correct-but-ignored past discoveries, and that it is both fallible and influenced by cultural context. At any given time, some theories are not only well established but unify well with others across disciplines, while others fit less comfortably, or may be idealized models that have predictive efficacy but seem unlikely to be accurate descriptions of reality in their details. To insist on an overly rationalistic and ahistorical model is not just out-of-date history and philosophy of science, it's a "coloring book" oversimplification. While that may be useful for introducing ideas about science to children, it's not something we should continue to hold to as adults.