The long-running battle between descriptivists and prescriptivists involves many arguments about whether particular points of usage are right or wrong, as well as meta-arguments about how those arguments all rest on one logical fallacy or another. Linguists love to invoke the etymological fallacy, arguing that historical usage shouldn't determine modern usage. But prescriptivists love to turn this one back on them when linguists argue that a particular usage should be accepted because it has been used by great writers since Shakespeare's time.
On my own blog, I've received plenty of comments that boil down to "Just because everybody does it doesn't make it right!" I've occasionally asked, "So what would make it right?" but I've never received a real answer. Of course, the truth is that it's something of a trick question: strictly speaking, every argument about linguistic right and wrong rests on some fallacy or other.
The problem is that all such arguments run into what philosophers call the is–ought problem. That is, it's not clear how to get from a statement of fact, or an is statement, to a statement of value, or an ought statement. As in other fields, we can make observations to determine what's true or false about language, and this is what linguists and lexicographers do: observe actual usage and describe the system of language. Prescriptivists, on the other hand—writers, editors, English teachers, usage commentators, and many others—create or promote rules to try to foster good language. Linguists and lexicographers are mostly concerned about what is, while prescriptivists are more concerned with what ought to be.
But, again, how do we know how language ought to be? There's a difference between what's true and what's right, though it's not always easy to remember that distinction when discussing language. Many prescriptivists seem to assume that there's an objective or logical right and wrong that exists independent of the way language is used. For example, David Foster Wallace wrote in his essay "Tense Present," "If a physics textbook operated on Descriptivist principles, the fact that some Americans believe that electricity flows better downhill (based on the observed fact that power lines tend to run high above the homes they serve) would require the Electricity Flows Better Downhill Theory to be included as a 'valid' theory in the textbook—just as . . . if some Americans use infer for imply, the use becomes an ipso facto 'valid' part of the language."
This passage is riddled with logical and factual errors, but the worst is this: the behavior of electricity is governed by the laws of physics, which can be induced through observation. Language, on the other hand, is not based on natural laws. Every language is different, and languages change over time, so there is no universal standard from which to derive a valid theory of proper usage.
Some people get around the is–ought problem by talking about goals. For example, we might say that if your goal is to write well, you should follow certain rules of usage. But many popular usage rules conflict with real examples of good writing, as editor Tom Freeman discussed in a recent blog post. If the rules are supposed to improve your language, how do you explain how people can "still speak and write clearly, even powerfully and beautifully" without knowing and following the rules? What gives the rules authority? Freeman concludes, "Any authority has to draw its authority from somewhere else—and so on and so on. The only place to stop is at the bottom, with the community that uses the language."
Many people, however, are uncomfortable with the idea of moral relativism. They think that basing language standards on what people do is synonymous with lowering standards or abandoning them altogether. But linguistic relativism doesn't necessarily lead to the "anything goes" philosophy that is often ascribed to linguists. Linguist David Crystal said it well: "The whole point of sociolinguistics, pragmatics, and the other branches of linguistics which study language in use is actually to show that 'anything does not go'." That is, it's possible to observe that some words and constructions are uniformly rejected, or that they're accepted only in certain situations or by certain people, or that they're widely used in speech but not acceptable in formal writing, and so on. There may not be an objective and monolithic standard of right and wrong, but we can still glean insight from the facts of usage, especially the usage of great writers.
It may technically be a logical fallacy to say that something is correct just because everyone does it, but it's the best we've got. Just because Shakespeare or Austen or Orwell used a particular construction doesn't make it right, but if great speakers and writers have been doing something for centuries, it's hard to say that they're all wrong. In the end, it's the people who make it right.