You have told, or been told, a white lie at some point. Often it goes unnoticed, but on occasion it grows. It grows because explanations have consequences. You tell the lie: “I couldn’t get milk because the store was closed.” The listener finds a consequence that doesn’t match your explanation: “But I just spoke to Becky, who said she’d just got back from the store.” So you figure out an explanation that matches all the data: “I went to the other store, because it was near the gas station, and I was low.” But no deal: “Low? I filled the car on Monday.” Another explanation, covering even more data: “I had to drive back out of state on Tuesday when I did that delivery, because there was a part missing.” And so on.
Eventually this exchange ends in one of three ways: the explainer can’t think of further explanations (and usually goes personal: “I can’t believe you don’t trust me!”); the listener can’t think of further consequences to check and believes the story; or the listener gets bored with the game and moves to discount the explanation (“Okay, fine, whatever you say.”).
The same three endings occur with any bad explanation, including explanations in scholarship. The first two are the best ways to end, of course: the explanation is either accepted or rejected, and both parties agree on the outcome. But often you get the third: the reasons get stretched thinner and thinner until the scholar (or the academy generally) says the equivalent of “Whatever.” This is the source of the myth among creationists, for example, that biologists have no refutation of their argument. It is true; they don’t. There is always a further, even more far-fetched explanation around the corner for any new bit of data. Eventually all scholars give up on these debates. All explainers think they’ve won. And they are all disappointed (or turn to conspiracy theories) when they find they are simply being ignored.