
The Doubting of Skepticism


ISSUE: Autumn 1987
. . . Having drunk, he put the flies back into the mixing bowl. “I do not like flies,” he explained, “but some of you may.”
Sir Thomas More, epigram, 133 (1518)

In the summer of 1985 the Peking government, concerned at the rise of juvenile crime in the Chinese People’s Republic, announced new compulsory courses for all schoolchildren “to teach them the difference between right and wrong.”

That moral confidence is impressive. Not the difference between the legal and the illegal, you will notice, but the difference between right and wrong. Laws, or at least good laws—and the Peking government, having made them, plainly thinks its own laws good—embody moral principles; and such principles are not merely to be observed. They must be seen to be true.

I imagine that the Western intellectual reaction to such confidence in a Marxist state is likely to be embarrassed and confused. In the non-Communist world we are by now accustomed to being told, even by the clergy, that law is not the same as morality, which may indeed be so; and in literary studies we are so accustomed to the assumption that critical values are a matter of personal judgment that many find it hard to imagine that any other view can be seriously meant. When I presumed, some years ago, to entitle a chapter in The Discipline of English (1978) “Why literary judgments are objective,” the suggestion was considered so laughable that I have since, on visits to schools and colleges, found that discussion commonly begins with a mocking look and a remark like “You are the one who believes that literary judgments are objective.”

Well, yes. I did and I do; and I make no apology here for confusing, so early in the debate, literary and moral judgments—what Peking thinks about right and wrong, and what we think about the greatness of Shakespeare or Dickens—since it is rare, and in common experience simply unknown, for anyone to be an objectivist in the one and not in the other. I do not know what the Peking government thinks about literary judgments but would wager a large sum that, since it is objectivist in moral matters and believes children should be taught the difference between right and wrong, it holds similar views on literature and other arts as well.

And that confidence contrasts sharply with the prevailing orthodoxy of the West. Here we strive to distinguish factual and evaluative issues and imagine that it is possible, even easy, to tell one from another. Teaching in a Midwestern university in the 1950’s, I was told by the chairman of an English department to set students an objective test during a Shakespeare course to see if the prescribed plays had been duly studied; and it was to consist, as he explained, of questions to which there was only one right answer, such as “What is the name of Richard II’s queen?” It took little inquiry to ascertain that the entire department, and perhaps the entire university, believed that an objective question is one to which the answer is certainly known and unanimously agreed, as the very title of the test implied; and that, since literary and moral questions are not usually like that, they fall outside the range of objective inquiry. And the confidence with which these assumptions were held was positively Pekingese in its certitude, though directed toward an opposite conclusion.

For all that, Western subjectivism was muddled as a consensus, and it seldom noticed how muddled it was. A friend in another university, this time in an Eastern state, reported soon afterward that a student expelled for a sexual misdemeanor there had provoked a noisy demonstration in his own support, with banners reading “Morality is a matter of personal choice.” A short time later, the expelled student was discovered to have attempted blackmail. Nobody doubts that blackmail is a moral issue; nobody, equally, whether on that campus or any other, thinks it a matter of personal choice. The slogan so earnestly stitched on banners, and so passionately upheld by scores of eager demonstrators, suddenly looked tattered and in ruins: so much so that the local philosophy department, which took its responsibilities seriously, spent several weeks conscientiously debating the matter.

It seems clear, then, that the whole question of moral and literary skepticism will have to be reopened. It has already been reopened among philosophers; and the 1980’s have lately seen a battery of books about (and commonly against) skepticism as a philosophical idea, or the radical notion, familiar to Ancients as well as to Moderns, that all knowledge is illusory. The title of A.C. Grayling’s Refutation of Scepticism (1985) speaks for itself. Other studies are more remotely anthropological, like some of the essays in The Skeptical Tradition (1984), a collection of papers about ancient philosophy edited by Myles Burnyeat. No need to conclude that, because philosophy is forever turning back on itself and reopening old questions, it is failing to achieve anything. That would be rather like telling a housewife that, because her home will be dirty again tomorrow, there is no point in cleaning it today. Such matters need to be reopened, all the time; and there are forms of intellectual progress other than a simple trajectory or an unending accumulation of evidence.

The internal contradictions of radical skepticism have by now escaped few philosophers: the simple truth that if we cannot certainly know anything, then we cannot certainly know even that; that if all dogmas are ideologically conditioned, and to their discredit, then so is that dogma. In A Map of Misreading (1975), Harold Bloom has restated the familiar old dogma of the undetermined text in as radical a fashion as he dared: all reading is strictly speaking impossible, he argues in his introduction, or rather it is “a belated and all-but-impossible act,” and “if strong is always a misreading.” But a misreading can only confidently be known to be that if a right reading is possible; and if a right reading is possible, it can only be arrived at by reading. So Harold Bloom must believe his own reading habits to be an exception to his own principle. A true skeptic, as David Hume remarked in the Treatise (1739–40), “will be diffident of his philosophical doubts as well as of his philosophical conviction”; and in another recent book, Skepticism and Naturalism (1985), Sir Peter Strawson of Oxford has argued that skepticism about the real world can never be considered as even seriously meant, let alone true. Nobody behaves, that is, whether in moral or aesthetic matters, as if values were merely something we impute, as individuals, to any given behavior or any given work of art. Auschwitz was a given set of behavior, and nobody (to my knowledge) believes that it can be made right or wrong simply by thinking it so. Constrained by popularized versions of 19th-century anthropology as we are, we are entirely used to being told that we should withhold moral condemnation from other civilizations and other ethical systems, but hardly anyone thinks this principle should extend as far as the ethical system of Hitler’s Third Reich in the early 1940’s. A muddle, then, as our anti-skeptical philosophers have rightly noted, is what we are in.

The history of philosophy is less like housework, perhaps, than like a game of ping-pong, where arguments are forever turned back on their originators; and even if the prevailing mood in morality and literary studies is less confident than that of the Chinese government, at least the argumentative ball is being returned, and ever more briskly, into the skeptical court. It is less fun to be a radical skeptic now, in the 1980’s, than it was in the heady days of post-structuralism and dogmas of ideological conditioning 20 years ago, though the aging survivors of deconstruction and feminism still struggle to keep these dilapidated skepticisms afloat. As an increasing number of theorists are coming to see, the arguments of that older tradition of critical theory that once emanated from Paris in the 1950’s and 1960’s were never anything like as good as they needed to be, and it is high time that the stale fug of old-fashioned French marxisant criticism was dispelled by the fresh argument of humanist debate. Our radical skeptics, it now seems clear, were nothing like skeptical enough.

II

In literary terms, the debate about the objectivity of judgment is little more than a century and a half old, and it forms no part of Renaissance or 18th-century critical theory. The terms “objective” and “subjective” entered English, in literary parlance, in the early 19th century, with Coleridge’s Biographia Literaria (1817), inviting the famous mockery of Thomas Carlyle. In its tenth chapter, Coleridge announces that he has decided to import the words out of philosophy into literary theory:

The very words objective and subjective, of such constant recurrence in the schools of yore, I have ventured to reintroduce, because I could not so briefly or conveniently, by any more familiar terms, distinguish the percipere from the percipi,

the act of perceiving from what is perceived; and nearly 40 years later De Quincey, in a note of 1856 to his Confessions, remarked of “objective” that the word had been “so nearly unintelligible in 1821,” when his book first appeared, a mere four years after Coleridge’s Biographia, “so entirely scholastic,” and “so apparently pedantic,” that it had taken a generation or two to render it “too common to need any apology.” In literary theory, then, the debate can be dated to the early 19th century, which makes it a century and a half old. Time to get it right, one might say; and yet, like the busy housewife, we seem little nearer than in Coleridge’s day, or De Quincey’s, to getting it right.

Some of the capacity of late 20th-century literary theorists to get it wrong strikes one as mildly obdurate. There can be no serious excuse, for example, for confusing the claim to objectivity, whether in morality or in the arts, with a claim to certain knowledge. Nobody working in the physical sciences, unless corrupted by bad philosophy, imagines that an inquiry becomes objective only when the answer is found. A scientist performing an observation or an experiment in a laboratory can hardly fail to be aware that his inquiry is objective, even though he does not know the answer to it. (If he knew the answer, why would he bother to conduct the experiment?) To take an observational instance, from astronomy: when John Couch Adams, in 1845–6, noticed irregular movements in the planet Uranus and decided that there must be some hitherto unnoticed planet there to account for them, calling it Neptune, his early observations can only have been based on the twin assumptions that something was there and that he did not yet know what it was; or, at a later stage, that something was probably there and that he did not certainly know what it was. There is nothing exceptional or paradoxical, then, about the suggestion that an inquiry is objective and yet unanswered: that moral and aesthetic values are objective and that, in some given case, we do not know what they are. Richard Rorty has called objectivity “a matter of ability to achieve agreement on whether a particular set of desiderata have or have not been satisfied” (New Literary History 17, 1985), which, if seriously pondered, might rule out much or most of the physical properties even of our own cosmos, let alone others; and yet it seems extravagant to deny that physics and chemistry are objective inquiries. Humanism depends on the claim that the study of the arts, and of the principles of human conduct, is an objective inquiry. The claim to literary objectivism is in no way a claim to knowledge, still less to certainty. It is a comment on the logical status of the question. Like any experimental scientist, a critic could sensibly argue that his inquiry was objective without claiming that he knew the answer to it, and even (perhaps) without claiming that the answer would ever be known. That is why it remains so absurd to imagine that to call literary judgments objective is to claim to know, and for certain, what they are.

Another bit of critical obduracy in our times has been to confuse knowledge with an ability to give an account of what one knows. That demand, at its crudest, expresses itself in an insistence on definition and statable criteria. Consider a recent collection called Modern Literary Theory (1982). In its preface the editors, Ann Jefferson and David Robey of Oxford, complain that literature still lacks “a truly adequate definition of its subject-matter”; and they go on to argue that until that agreed definition is achieved, no progress can be made in literary criticism. After two thousand years or so of literary argument, they imply, we have not even reached the first stage.

But a definition is not the first stage in an argument, and it is not even certainly the last. G.K. Chesterton once remarked that “the man next door is not definable because he is too actual to be defined”: a perceptive remark, especially when one reflects that nobody would think it a ground for denying that one knows the man next door. Knowing literature is like that too. We do not recognize a play to be a tragedy by measuring it against an agreed form of words that defines Tragedy in general: we know that Oedipus and Lear are tragedies before any critical debate about tragedy begins; and any account of what constitutes Tragedy has to conform to those plays. Definition is not where we start, though it may occasionally be where we choose to stop. A recent commission on pornography in Great Britain decided at its first meeting, which was chaired by a philosopher, not to begin its deliberations by trying to define pornography, and it did not even end by doing so. We can perfectly well know what literature is, in the same way, or tragedy, without having a definition that satisfies any one of us; and as for an agreed definition, that is an even remoter absurdity than a definition satisfying only a single mind.

The simple truth is that we do not need agreement, whether in morality or in the arts, in order to know anything at all. No one would think it an objection to believing that the earth is round, rather than flat, to be told that not everyone agrees that it is so. Strindberg thought Shakespeare a poor dramatist, but we do not hesitate long between the two most evident explanations for this—that he was right, or that he did not know much English; and the simple truth, sure enough, is that he did not know much English. Some murderers have continued to believe and to proclaim, even after conviction, that they behaved properly. Franz Stangl, for example, when serving a prison sentence after the war as a Nazi camp commandant, remained fully convinced that, since he had behaved efficiently as an exterminator and under orders not of his making, his sentence was plainly unjust; but it would seem very strange to suppose that his agreement to the proposition that mass murder is wrong would have to be gained before we could certainly say that we knew that it was. Nor is my agreement required before the conclusions reached in a chemical laboratory are accepted, since I know little or nothing about experimental chemistry. There is no substance at all, then, in the initial claim confidently made in Modern Literary Theory that a proposition needs to be agreed on in order to be true.

The demand for a verbal definition, whether agreed or not, is in any case misconceived. We perfectly well know what certain familiar fruits taste like, such as apples and pears—so much so that we could infallibly distinguish them blindfold when laid on the tongue—and yet we cannot give a sufficient account of how we perform that familiar process of distinction. It seems plain, then, that we can know what something is without being able to give an account of it; so that the complaint that we have no agreed definition of literature looks like a very odd ground for denying that we know what literature is. A Russian ballerina, when asked what she meant by a dance, once replied: “If I could say it in so many words, do you think I should take the trouble of dancing it?” And if we could say what Tragedy is, in any reasonable number of words, should we take the trouble to act, read, or watch Oedipus or Lear? It is because, in an absolute sense, we cannot say everything that they are that we want them.

Another theoretical fallacy is to confuse precision with accuracy. An account is precise when it is in no sense vague or blurred: a map of Britain in the form of a perfect triangle, for example, would be precise; so is a newspaper caricature; and so is the ethical injunction “It is wrong to tell lies.” But all such accounts, though precise, are inaccurate. The triangular map fails to describe the ins-and-outs of the British coastline; the caricature of President Reagan or Mrs. Thatcher leaves most facial details out and exaggerates what is left; and the rule against lying, as any thoughtful moralist or casuist could tell you, is subject to exceptions. In many cases like these, the more precise an account is, the less accurate it is likely to be. A map of the Sahara desert, for example, could not fail to grow more inaccurate if its limits were made to look precise: since desert merges into surrounding non-desert, anything with the precision of a line is certain to be misleading. Some phenomena are by their natures blurred at the edges: the Sahara, for example, is not bordered in the sense that France is bordered from Germany. And the demand for definitions takes no account of that difference between precision and accuracy: it rashly assumes, that is, that any account must be the more accurate for being the more precise.

The distinction may be called Wittgenstein’s chief contribution to philosophy, and it is a pity it has been so little noticed by literary critics, and more especially by literary theorists. It was wittily summed up years ago in an article by John Wisdom in Mind (1952) in the question “Can you play chess without the queen?” That is a vivid instance of Wittgenstein’s own famous proposition, “When I tell you to stand about here, that is exactly what I mean.” Both remarks illustrate the distinction between precision and accuracy in a memorable way. In photographing a building, the best place to stand is “about here”; it would not matter to move a step or two to right or left, but it would matter to move 20 yards. If, before beginning a game of chess, we remove the two queens and then play the game according to all the usual rules, are we playing chess or some other game? In all such cases we are in the scrubland, so to speak, that fuzzily divides desert from farmland. And when we ask whether Samuel Beckett’s Waiting for Godot counts as a tragedy, or as a comedy, or as neither, or whether the metre of Milton’s “Lycidas” is a canzone or not—or whether it is always wrong to tell a lie—we are back in the scrubland. It is not that we are failing to find the right or accurate answers to such questions when we decline to give a single and precise answer. It is rather that precision is not there to be found.

The point, Wittgensteinian as it is, is to be found in the writings of Samuel Johnson. In the 28th chapter of Rasselas (1759), the hero hotly debates with his sister the rival claims of marriage and celibacy, and tries to convict Nekayah of a logical fallacy: “Both conditions may be bad, but they cannot both be worst.” And his sister rounds on him in Wittgensteinian style:

“I did not expect,” answered the princess, “to hear that imputed to falsehood which is the consequence only of frailty. To the mind, as to the eye, it is difficult to compare with exactness objects vast in their extent, and various in their parts. Where we see or conceive the whole at once, we readily note the discriminations and decide the preference: but of two systems, of which neither can be surveyed by any human being in its full compass of magnitude and multiplicity of complication, where is the wonder that, judging of the whole by parts, I am alternately affected by one and the other as either presses on my memory or fancy? We differ from ourselves just as we differ from each other when we see only part of the question, as in the multifarious relations of politics and morality; but when we perceive the whole at once, as in numerical computations, we all agree in one judgment, and none ever varies his opinion.”

To the multifarious relations of politics and morality, Nekayah might have added those of the arts. Many critical disagreements, like many moral disagreements, spring from the complexity and lack of precise borders of the questions at stake. To persuade Strindberg on the highly multifarious question of Shakespeare’s greatness as a dramatist, “in its full compass of magnitude and multiplicity of complication,” as Johnson grandly put it, we should have to disentangle at length the causes of Strindberg’s mistake, as I do not hesitate to call it: a lack of English, a bad translation, a false expectation about the nature of drama, or a crippling ignorance of Elizabethan theatrical conditions. Of course Strindberg’s agreement might still be hard to have. But it would be false to suggest that there is no critical argument by which, in principle, it could be had.

III

Beyond and above all this, the most potent enemy of humanism in our times has been the doctrine of conditioning.

In the 20th century, reductive attacks on humanist discourse have most commonly been either racial or class-based: “You only believe that because you are Jewish,” for example, or “You only believe that because you are bourgeois.” Sidney and Beatrice Webb, pro-Soviet though they were when they wrote Soviet Communism (1935), were mildly disturbed to discover that there was an academic slogan in Stalin’s Russia that ran “We stand for Party in Mathematics”; and the Nazis, similarly, as Friedrich Hayek reported years ago in The Road to Serfdom (1944), had a mathematical journal full of “Party in mathematics.” More recently a third, sex-based ground has been offered: “You only believe that because you are a man.” In the 1970’s, for example, when a woman scientist was for the first time elected head of a French polytechnique—one of the great academic institutes of the French Republic, that is to say—the event, though welcomed by some, was publicly rejected by organized feminist opinion on the grounds that she would be teaching what they called “male-dominated science.”

I am less concerned, for the moment, with the elements that distinguish these three forms of antihumanism—racialism, class war, and feminism—than with what they have in common. For what they have in common is the assumption that it is enough to identify the cause of a belief in order to dismiss it. “You only believe that because you are Jewish/bourgeois/male. . . .” And since it is often possible to identify the principal cause of a belief, and always possible to pretend that one has identified it, it follows that any conviction can be briskly and conveniently discredited by arguments in that form.

But to all such simple reductionism, whether Fascist, Marxist, or feminist, there is a brisk and convenient answer. And that answer is that to know anything at all, at least in an articulate sense, is a mental state that is caused—most commonly by observation or by teaching—and that to identify such causes has little to do with any attempt to endorse or discredit such knowledge. To take a simple example: being more literate than numerate, and scarcely scientific at all, I only know that water consists of two parts of hydrogen to one of oxygen because I was taught it. That may be a highly inadequate ground for believing it, but it is not an adequate ground for denying it. Any grounds for believing that water is H2O are simply independent of my personal powers, as a nonscientist, to grasp or justify that formula. To call science male-dominated, in a similar way, or Mendelssohn’s music Jewish, or Victorian fiction bourgeois, might help to identify some features of that science, music, or fiction, and might conceivably help us to discover how they came to be what they are. But such judgments could not, of themselves, prove the achievements of artists and scientists to be anything less than they are; and those who accept the Fascist, Marxist, or feminist view of conditioning have never shown that they do. The value of a belief is not to be confused with the causes, inadequate as they often are, that have led to its acceptance.

Nor have such skeptics ever shown how their own arguments are exempt from themselves. For if all beliefs are conditioned, then racialism, Marxism, and feminism are conditioned, too. Miss Glenda Jackson, it may be presumed, is a feminist because she has been conditioned into it: she has heard people talk feminism and believed them. But that does not help in deciding whether any of her convictions are true: it merely shows that the apostles of easy reduction are the victims of self-contradiction. For if all beliefs are conditioned, then the belief that all beliefs are conditioned is itself conditioned. And if it is an objection to a belief to say that it is conditioned, then it is an objection to the doctrine of conditioning, too.

All such reductions, it seems clear, are subject to convenient exceptions. I have never known a Marxist who believed that Karl Marx believed what he did because he had been conditioned into believing it. We are entitled, surely, to ask on what grounds the exemption is made in his case. If he is exempt, might not others claim exemption too? When a feminist calls science male-dominated, for example, she is plainly implying that she, exceptionally, can discern those features that male scientists have put there specifically because they are men. But if it is possible to discern them, then it cannot be true to say that science is altogether male-dominated. For if that domination were total, we should all be subject to it; and the feminist has plainly implied that she is not.

Samuel Johnson, again, has shown the fly the way out of the fly bottle here. In a conversation reported by Boswell from the Isle of Skye, in September 1773, he insisted that all good manners are acquired—and all virtue and all critical judgment, too, he might have said—and that it is no objection to them to say that they are:

“Common language speaks the truth as to this: we say, a person is well bred.”

And when Lady M’Leod asked if no man were naturally good, Johnson replied categorically: “No, madam, no more than a wolf.” “Nor no woman, sir?” asked Boswell, mischievously. “No, sir,” said Johnson. To which Lady M’Leod remarked shudderingly, and under her breath, “This is worse than Swift.”

But if worse than Swift means misanthropic, then we may doubt that it is. Johnson’s point is common to the Enlightenment of his own century, after all, and to the long humanistic tradition stretching out behind it; and it is not self-evidently pessimistic. Man knows what he knows about science, morality, and the arts because he has learned it. He has watched and listened and tested. He has acquired his mother tongue, in the first instance, by such means, and other languages, too, if at all, by such means. Of ourselves we know nothing: we know because we find out. To object to that cognitive process, whether in literature or elsewhere, that it derives from sources beyond ourselves is to say nothing against its objective standing. It needs to derive from elsewhere.

My own helplessness before the multiplication table or the chemical composition of water was remedied by teachers working through an accumulation of principles built up since the ancient Babylonians, and it is no objection whatever to my belief to say that I could not have thought of any of it for myself—still less that I did not in fact think of any of it for myself. Just as well. . . . Any mathematical or chemical principle I might have conceived of in isolation from that tradition, by now thousands of years old, is highly unlikely to be of any value whatever. In a similar way, what I know about literature is none the worse for being derived from others and may be all the better for being so derived.

When someone years ago objected that we know far more than dead writers, T.S. Eliot, a critic firmly if agonizedly in the humanist tradition, replied “Precisely; and they are that which we know.” That remark from his essay on “Tradition and the Individual Talent” (1919) is by now more than 60 years old. But we have yet to absorb it, and its wisdom; yet to realize its full cogency and force; yet to use all of that force and cogency as if they were there.
