Tuesday, August 04, 2009

Whether true or not

In a comment to his original post, Andrew Seal declines to defend his “very queer” reading of Death Comes for the Archbishop. The blame, he says, is not his. “I’m done arguing with you for entertainment’s sake,” he says. “If I thought you ever might countenance a view that you haven’t already accepted, I’d make an effort.”

This is not a particularly novel approach to refutation. It spices up the huffiness of “I won’t dignify that with a reply” by adding a pinch of argumentum ad hominem. But what happens if I stipulate that Seal is correct about me as a person? Despite evidence to the contrary, I refuse not merely to consider but even to countenance—to put up with—a view that I have not accepted prior to hearing it explained. Okay, this doesn’t make any sense to me either. How can you swallow an opinion before you fully grasp it? Let that pass. What Seal is trying to say is that I am closed-minded, inflexible, mulish. I just will not agree to cultural Marxism, no matter how many times I hear it explained to me. Fine. I stipulate this is true about me too. My mind is made up. Ain’t nothing you—or, at least, Seal—can do to change it.

My question is this. Do any of these malignancies and personality defects on my side absolve someone who has advanced a view from defending it against criticism?

I am assuming, from his comment, that Seal believes it is rationally inadequate to disagree with criticism in advance of reading it. The principle of non-contradiction would seem to suggest, accordingly, that either the critic who offers a “very queer” reading and then confirms it as “significantly more meaningful,” or the critic who denies the reading, is mistaken. Is either critic excused from the intellectual responsibility of correcting error by the miscreancy of the other?

If I am serious about my views, I must defend them against all comers. To complain about how criticism is hurled at me, whether it is rude or aggressive, is to protect my personal dignity, not the validity of my thought. To remark upon the place or rank of my critic (“I will not lower myself to answer a mere undergraduate”) is anxiously to guard my status, which implies that my ideas are advanced, not in pursuit of truth, but to bolster my reputation. And to sneer at the person of my critic is to preserve my identity, because my views contribute to my sense of self, whether they are true or not.

Monday, August 03, 2009

Aesthetic offenses

Nearly everybody understands a legal offense (breaking a law) or a moral wrong (irresponsibility toward another), but what is an aesthetic offense, a crime against art?

Daniel Green is pretty sure it is not the same thing as a moral wrong. Recoiling from the conservatism of Roger Scruton’s essay “Beauty and Desecration” in the spring issue of the City Journal, Green tries to distinguish moral objections, on the one hand, to slicing off a prostitute’s nipples and presenting them to the lead soprano in Mozart’s light-hearted opera Die Entführung aus dem Serail, or to littering the stage with rutting couples, including urination as foreplay and forced oral sex, from, on the other hand, a careful account of an operatic production’s “aesthetic flaws”:

I am quite willing to believe that those responsible for it thought it a clever idea to “set the opera in a Berlin brothel, with Selim as pimp and Konstanze one of the prostitutes,” but ultimately this is just an aesthetically vacuous attempt to “update” Mozart, to run roughshod over Mozart’s original vision of his opera and establish their own overwhelmingly lame one in its place. It is a practice to be found not only in opera but in theater in general, whereby directors and producers with the aesthetic sensibilities of lizards attempt to keep the great works “relevant.” One could, I suppose, call this artistic cluelessness a “moral” problem, but most of what Scruton sees as the unleashing of “moral chaos” is finally just the consequence of the aesthetic incompetence of some of those entrusted with the job of re-presenting the theatrical art of the past.

Green uses the words aesthetic or artistic four times in this passage, but I am no closer to understanding what he means by them.

Scruton is much clearer. Art is the discovery and representation of beauty, and beauty is the affirmation of, and truth to, life. On Scruton’s showing, director Calixto Bieito’s production of Die Entführung aus dem Serail at the Komische Oper Berlin in 2004 was an artistic failure, because it was ugly and false to life.

Green uses the concept of the aesthetic in so many different ways he only ends up confusing the issue. Thus Bieito’s production was “aesthetically vacuous.” (Proposition 1: Art must have a meaning or purpose.) Bieito resembles other champions of Regietheater (German for “director’s theater”): all of them have “the aesthetic sensibilities of lizards.” (Proposition 2: Art requires a warm-blooded—presumably, a human—capacity for perception and feeling.) They exhibit “artistic cluelessness.” (Proposition 3: Art is a knowledge.) Their efforts are examples of “aesthetic incompetence.” (Proposition 4: Art is an ability.)

Set aside his obvious misreading of Scruton. I still don’t see how this is an improvement, how it eliminates the virus of morality to produce a healthy conception of art. Indeed, Green’s principal objection is that Bieito is irresponsible to “Mozart’s original vision,” and this, as I argued yesterday, is a moral failing. Upon closer examination, Green’s understanding of aesthetics turns out to be deeply conventional, and thoroughly confused.

A ramshackle dwelling, with materials borrowed from classical antiquity (art is an ability), the Renaissance (art is a knowledge), the Romantics (art requires a capacity for perception and feeling), and the Victorians (art must have a purpose), it does not provide the secure blockhouse for artistic autonomy that Green hopes it will. All it succeeds in doing is to recapitulate the history of aesthetics without reconciling the various doctrines that have been advanced at various times for various reasons.

More to the point, it leaves the question of aesthetic offense entirely up in the air. Is it an aesthetic affront to violate just one of Green’s implicit propositions, or is it necessary to infringe all four? Ronald Firbank’s fiction is nonsense, but it displays a genius for prose and literary form. And to read it is to acquire a workable knowledge of how fiction goes about achieving its effects. Aesthetic offense, then, or not? Or take the fiction of Louis-Ferdinand Céline. It has a clear purpose (among other things, to call into question the reality of love and to spread hatred of the Jews); it is competently made; its knowledge of a world without love and the fine details of antisemitism is unshakable. But Céline has the aesthetic sensibility of a lizard. Aesthetic offense or not?

I don’t particularly like Scruton’s conception of art either, but the trouble with it is the same as the trouble with Green’s. It conceives art as a category of value, and serves then as a scepter for knighting some works for their great value in order to distinguish them from other works of lesser value or none. But to call something art is to say nothing unless it is immediately clear what standards of artistic acceptability are being invoked. “Beauty” will not do, because it begs the question. Green’s four-part answer simply multiplies the confusion.

The whole conception of art is of limited utility in the study of literature, but insofar as a conception is demanded, what needs to happen is a shift from conceiving art as works of value to thinking of it as a specific kind of mental activity, or what Oakeshott calls a mode of experience. Art is what invites contemplation, whether it is the Rothko Chapel here in Houston or a 1961 Jaguar E-Type, and an offense against art is to do something other than contemplate it. If I use the Rothko Chapel to host my son’s bar mitzvah, I am treating it other than aesthetically, and if I drive a Jaguar to class in College Station, I may be using it for the purpose for which it was intended, but I am hardly driving a work of art.

Sunday, August 02, 2009

Solipsism in interpretation

There is an even greater danger than intellectual error in reading literary texts in the comforting favorable light of current theory. The danger is that the text will be treated, not as belonging to someone else, but as mine. It will be read as confirming my own intellectual persuasions and loyalties of feeling. Any differences will be elided or smoothed over. The text will become the next room of my own moral development; it will be arranged on my shelves alongside the other secondary sources that illuminate and augment the primary source of my mind. Its value will reside in its significance to me, not its meaning in itself. The text will be safely solipsized.

An especially comical version of this habit popped up on the literary blogscape the other day when Andrew Seal read Death Comes for the Archbishop, Willa Cather’s historical novel about Jean Baptiste Lamy of Santa Fe (left) and his vicar Joseph Machebeuf (right), as an “achingly beautiful love story about two men.” (I append the photos of the two men to indicate the prima facie unlikelihood of any such interpretation. You gonna believe Seal, or your lying eyes?) After admitting that the interpretation is achingly common (“I know that Cather is often assumed to have been queer herself”), Seal introduces the distinction that really grabs his attention:

I think it’s completely, 100% intellectually valid to read the novel as a very queer love story. But I also know that the novel doesn’t make this reading necessary, and that arguing someone into a queer reading might be a self-defeating proposition: you haven’t given them the experience of reading the novel this way, just the idea that it can be read this way. And I think being able to share the experience of reading a novel is sometimes much more important than being able to convince someone that your idea of a novel is possible or valid.

By this means Seal seems to believe that he has insulated himself from the refutation that Cather’s homosexuality (and thus a “very queer” reading of the novel) logically cannot be “assumed” if another explanation of the facts is equally plausible. He shrugs that his “revisionary reading” is not supported by the text “very well,” but he prefers it to “a ‘straight’ reading” (ha ha ha), because it draws him “deeper into the book” and renders it “significantly more meaningful.”

He means “significantly more significant.” Meaning is stable; it is assigned forever when an author chooses a distinct and finite set of signs to represent it. Change the signs and the meaning is changed; otherwise it is changeless. Significance, as E. D. Hirsch Jr. says, “names a relationship” between a text and its author, readers, historical era, body of opinion, criteria of value, “or indeed anything imaginable.”[1] Significance varies from reader to reader, era to era. Indeed, there is no gainsaying significance, because there is no probative mechanism for challenging a text’s special relationship to you. All you must do is testify to it.

And that’s how Seal wants his “reading experience” received—as testimony, not as an empirical hypothesis that can be tested and thus falsified. In the terminology of Levinas, he wants his account of her novel to be heard, not as speech for-the-other—not as a statement on Cather’s behalf—but as speech by-the-other, which gives me the responsibility of attending to it as I would to a cry from someone in pain. “It is only in this way,” Levinas says, “that the for-the-other, the passivity more passive still than any passivity, the emphasis of sense, is kept from being for-oneself.”[2]

What Seal fails to notice is that he expects from his own readers what he is unwilling to grant Cather—the charity of respecting his intended meaning. He worries about the “communicative value” of what he is doing to Death Comes for the Archbishop. He is anxious lest sharing his experience of the novel come across as “shallow.” These are the stirrings of conscience.

“Am I,” Seal asks plaintively, “talking to you about the texts, or about myself?” The latter, sir. Rather than seeking evidence that confirms the solipsism of your interpretation you might rummage about for the impervious facts that falsify it. That Cather clearly sympathizes with Bishop Latour’s celibacy over Padre Martínez’s debauchery—that the doctrine of celibacy is crucial to the novel—might give you pause, for example. And in this way you might return from the roadside weeds of autobiography to the garden of knowledge.
____________________

[1] E. D. Hirsch Jr., Validity in Interpretation (New Haven: Yale University Press, 1967), p. 8.

[2] Emmanuel Levinas, Otherwise than Being, or Beyond Essence (1974), trans. Alphonso Lingis (The Hague: Nijhoff, 1981), p. 50.

Friday, July 31, 2009

A writer’s desk


The work week has ended, and the Sabbath queen is on her way. Maybe tomorrow I will clear away some of the clutter. Not now, though; not now.

The question in criticism

Everyone seems to like what I have asserted about literary criticism, at least “on a high level,” as Jake Seliger puts it, but no one really agrees with me.

Litlove, for example, accepts my credo that literary criticism must contribute to the store of human understanding, but then she adds that such contributions emerge “gradually, organically, from the process of continual, profound discussion in which teachers and students explore every angle and aspect of a text and enlighten each other.” What she endorses, in other words, is Mark Bauerlein’s call for fewer scholarly publications and more teaching.

I don’t know what it means, though, to say that new contributions to knowledge grow organically out of discussions between students and teachers. Such discussions may provoke curiosity, but when the research bears fruit, it is because someone has excused herself to go off and inquire into a question. Human knowledge is expanded by inquiry into something that is not yet known and understood. Only if class discussion is organized upon the model of inquiry will it yield organic produce.

And that is the key. The institutional reforms proposed by Bauerlein and Seliger (limit promotion materials to one hundred pages, change peer-reviewed publication into a link to a paper on the author’s website) are sharp-eyed and promising—I hope they will be instituted—and are entirely beside the question.

Nor is the question, as Seliger would have it, “the difficulty of deciding what is good criticism.” Such a question can never be decided before the fact, and even when you have finally hit upon a practitioner of good criticism, the question has still not been decided:

Some writers tell us that this or that historian really did solve the problem, he wrote history as it should be written and all we have to do, if we wish to be good historians, is to copy him. But don’t you believe it. Nobody has solved the problem of how history should be written, and for the same reason that nobody has solved the problem of how poetry should be written, or how chess should be played or how houses should be built—because there is no such problem. We have been told, so often as to be nearly persuaded, that history must be scientific, or it must be imaginative, or it must be impartial, or it must be impersonal. But why all this “must”? Why should there be only one kind of history? And we are particularly puzzled because, as far as we know, there are a great many different kinds of history, and we find it very difficult to say one kind is really so much better than any other that it is the only kind we can allow the name to.[1]

There are good critics who practice moral criticism (Yvor Winters), good critics who practice formalist criticism (Robert Penn Warren), good critics who practice the criticism of political ideas (Irving Howe), good critics who practice biographical criticism (Cynthia Ozick), good critics who practice philological criticism (J. V. Cunningham), good critics who practice the criticism of criticism (Frederick Crews), good critics who exercise criticism in the construction of literary tradition (Ruth R. Wisse), and perhaps even good critics who “do” theory (if I could only think of some. Or even one). What is good criticism can never be decided, because it is not a real question.

The only question is how to enlarge human understanding—how, that is, to return from the lure of career advancement, which encourages the critic to write something that is merely new and different, regardless of its validity, to the professional responsibility of adding to the store of human knowledge. “[M]ost people who are writing just to ‘write something new and different’ would argue they are adding to the store of human knowledge,” Seliger observes. He is right. Who am I to assert otherwise?

The careerist is motivated by “getting on,” while the professional is motivated by a sense of responsibility to the profession. And in the critic’s case, that means responsibility to the growth of literary knowledge. What follows from this, however, is that no one knows but me whether I am a careerist, because no one but me has access to my motivations. But what is more, the shift from careerism to professionalism—from commitment to career to commitment to knowledge—is entirely a matter of motivation, and entirely within my control.

The shift will never occur, though, unless the ideal of contributing to knowledge is clearly and repeatedly enunciated, and is not lost in the swarm of institutional proposals for merely scaling back the demands of a career.
____________________

[1] Michael Oakeshott, “What Do We Look for in an Historian?” (1928), in What Is History? and Other Essays, ed. Luke O’Sullivan (Charlottesville, Va.: Imprint Academic, 2004), p. 135.

Thursday, July 30, 2009

Careers in criticism

The reason so much literary criticism is “crap”—his word, not mine—is careerism, Elberry says. He shares an anecdote:

I was talking to a young, ambitious, power-suited female academic in 2001, who as much as admitted that she didn’t study the literature that interested her, but rather the things she could easily write about, so she could publish, and so get on. She even said something like “you’ve got to play the game.”

But if this is, as he says, the deep problem in criticism (“people like this exist and thrive in academia, like maggots in a carcass,” Elberry adds, and “will never write anything worth reading”), what is the solution? Like Mark Edmundson, whose proposal I dissected earlier, he suggests a “moratorium on academic publications.” But this won’t rid the world of crappy criticism. It will merely delay its reemergence.

What then is to be done? I have an idea or two.

Let me start by focusing for a moment on Elberry’s diagnosis of the problem. He is absolutely right that far too many academics are motivated by career rather than, as I put it the day before yesterday, the ambition to contribute to knowledge.

Career was originally a French word that entered the language in the sixteenth century for a horse-racing track. Within a century, the word came to be applied to what the horses were doing on the track—galloping at top speed. “To pass a career is but to run with strength and courage such a course as is [fit] for [a horse’s] ability,” wrote the poet Gervase Markham in 1671.

Another century, and the word had come to refer to a rapid and continuous course of action; an uninterrupted progress. In 1722, the philosopher William Wollaston warned his readers “not to permit the reins to our passions, or give them their full career.” Two hundred years into the word’s career, and the smell of horses was still upon it.

Not until a hundred and fifty years later was the word career first used in its current sense of a “course of professional employment, which affords the opportunity for advancement in the world” (OED). In this sense the word was first used in Felix Holt, the 1866 novel by George Eliot. The novel opens when Harold Transome, the second-born son of landed gentry, returns to England from the colonies with a self-made fortune. Under these conditions, at that time, Harold would have been expected to settle down to a life of country leisure. He has decided to do otherwise—to run for Parliament, as a radical. He could have had a comfortable life, his mother reflects bitterly. But no: “Harold must go and make a career for himself.”

The distinctly modern notion of a career upsets the apple cart of ancient expectations. In the Laws, Plato explicitly raises the question of what life should be like for “men whose necessities have been moderately provided for.” There is a “double, or more than double, glut of occupation” in such a life, he concludes, because it is “concerned with the practice of every virtue of body and mind” (7.806d–807d). When Abraham is confronted by God at the age of ninety-nine, he is told: Walk in my ways, and be complete [tamim]. God doesn’t say: Go and make a career for yourself.

Not until modernity, in short, was career understood to be separate from and perhaps at odds with life. All career advice is founded upon the assumption that such a separation exists. The current commonplace, for example, advises that you need to balance career and life. You can’t help suspecting that by balancing career and life, most people mean juggling them.

Here, for example, is some advice on how to balance your life and career:

The flight attendant, giving preflight instructions, reminds individuals to first place the oxygen mask on themselves before attempting to assist others. When you are busy bustling through life, how often do you take time to take care of you, first? You owe it to yourself! If you do not take care of you, you will not be around to take care of those who need you the most.

This advice comes from the Institute of Family and Work, but you’d never know it. For it expresses the attitude which is most inimical to family: I have to take care of me first. I owe it to myself!

What is more, this attitude is already the driving force in many careers. I don’t know about you, but I already work with a whole lot of people who don’t need to be reminded to put themselves first. In fact, the habit of putting oneself first is what produces the careerist. Careerism is the self-serving, promotion-oriented behavior that seeks career advancement above all else.

Think of what we say to each other when we meet a new person. “What do you do?” she asks. “I’m a professor [or a doctor or lawyer or candlestick maker],” he answers. She asks what we do, but we reply with our sense of what we are—perhaps because we cannot bear to admit, even to ourselves, what we really do all day long. We’d be bored or appalled. And perhaps we can’t acknowledge that much of what we do is an angling for promotion. To the degree that we want to be rather than do we are all careerists.

Thus the ambitious, power-suited academic in Elberry’s anecdote wants to be published. What she does is to “get on.” At a cocktail party she would have to answer, if she were honest, “I write tedious, niggling papers for scholarly journals that nobody reads, because all I really want to do is advance to the top of my profession. I have no idea what I’ll do once I get there. Check back with me in about ten years—if I’m still talking to you. You may be beneath me by then.”

The careerist avoids risk for fear that a mistake or failure might tarnish his image and put promotion and career progress in jeopardy. You can easily imagine the effect that this risk avoidance, this reluctance to stand for principle, has upon a profession or institution.

In literary scholarship, the effect is pretty obvious. Because promotion is based upon the quantity of the work that is produced, not on its quality, scholars take shortcuts to publication. Hence the enduring popularity of the “application” model. Select a currently dominant figure of thought from column A and a canonical (that is, frequently discussed) text from column B. Now reinterpret text B by rewording it in the vocabulary of theorist A, et voilà! A new publication! Small wonder that so many literary scholars rush into print with work that is shoddy, half-baked, and sometimes even full of untruths. Probative inquiry consists almost entirely of combing through the names in column A. The conclusions yielded by “applying” their concepts and categories to the texts in column B are never examined. They are taken on faith.

It is a maxim with me that ethics place a person at a competitive disadvantage. If your scruples prevent you from doing what is no problem for a careerist then you face an obstacle he does not. There is another way of looking at it, however. If you have a competitive advantage over someone, it may be owing to your lack of scruples.

Careerism, then, does not merely damage professions and institutions. It damages the person. And, indeed, that is a better definition of careerism. It is the practice of advancing a career at the expense of a person’s integrity.

If careerism is the problem in the writing of literary scholarship and criticism the solution is the reinstatement of integrity, but there is of course no professional or institutional mechanism for doing so. It would require the reintegration of life and career. It would require forgoing the lure of personal advancement in favor of responsibility to—to what?

The ancients would not have hesitated to reply. For the Greeks, virtue; for the Jews and Christians, God. The modern, though, believes in work (“Love and work are the cornerstones of our humanness,” Freud says); and for the modern, then, the question becomes this. What is he responsible to on the job? Not in the sense of whom he must answer to, but rather what is he responsive to?

Elberry mistakes my answer to this question where literary critics are concerned. He thinks that I am suggesting that “critics should write about less well-known books,” but I suggest this only as a method, a practical expedient, for undertaking their real responsibility: namely, to contribute to literary knowledge. The demand upon critics (in the university and out) must be, not to “write something new and different,” but to add something new and different to the store of human understanding. If they accepted this as their professional responsibility, who knows? Their careers might even advance.

Tuesday, July 28, 2009

Criticism’s returns

In an essay for the Chronicle of Higher Education, Mark Bauerlein concludes that literary scholarship has reached the point of “diminishing returns,” and a “redistribution” of efforts is in order, “particularly toward teaching” (h/t: Nigel Beale).

The problem, according to Bauerlein, is that an upsurge in scholarly “productivity”—fueled by universities’ policy of rewarding nothing else—obliged young scholars to look everywhere for an angle, a new and untried “approach” (in the English department’s jargon) to old and sorely tried literary texts, in order to satisfy the traditional scholarly requirement of making an original contribution. Over the past five decades, Bauerlein reports elsewhere, scholarly publications in language and literature have increased fivefold from thirteen thousand to seventy-two thousand a year. At the same time, the audience for literary scholarship has shrunk to the vanishing point. Sales of academic books average about three hundred copies per title. What happened? “The audience got bored,” Bauerlein says. But what bored them? Here Bauerlein is somewhat less persuasive.

Some time in the late ’seventies, the conception of criticism as the explication of a difficult text was replaced by the self-inflating notion of criticism as performance. “The old model of the critic as secondary, derivative, even parasitical gave way to the critic as creative and adventuresome,” Bauerlein says. And certainly there is much truth to the complaint that the rise of literary theory led directly to a decline in the quality of literary criticism. Even by the time my own essay on the subject was reprinted in Theory’s Empire in 2004, critics were yawning that the arraignment of literary theorists on the charge of bad writing was fit to drop. Bauerlein himself said the same year that “Outside the tiny group of academic theorists, the question is closed.” Under the influence of theory, literary scholars now write badly, and there’s an end on ’t.

Yet Bauerlein’s history is confused. The real shift that steered criticism away from generally well-educated readers interested in literature but not professionally consumed by it occurred earlier. In fact, the shift can be summed up in Bauerlein’s own phrase: criticism-as-explication. The Anglo-American new criticism, celebrated by John Crowe Ransom as “a kind of literary criticism more intensive than a language has ever known,” spelled the doom of the “life-and-works” essay that was once a staple of general-interest magazines. Historical background and biographical information came to be held as irrelevant to literary criticism, because any claim for their relevance was theoretically incoherent. Prior to a close reading of a text no knowledge of what is necessary for understanding it is even possible, and consequently, the text is the whole context of understanding.

Although the literary theorists who emerged in the late ’seventies declared their independence from the doctrine of “close reading,” and though the next wave of theorists triumphantly announced the arrival of a new historicism in literary criticism, they cleaved as tightly to the close details of a literary text as their predecessors.

It is instructive, for example, to compare the “approaches” of the earliest and most recent full-length critical essays on the same author. In 1923, the University of Chicago professor Percy H. Boynton published a 3,000-word introduction to Edith Wharton in the English Journal. He begins by locating her in the American tradition, relating her to this country’s widening “expression of national self-consciousness”; then he discusses her upbringing, education, travels, and social class and ideals, mentioning four of her first eight books in passing; he devotes about a paragraph apiece in the next section of his essay to The Age of Innocence, The Custom of the Country, Ethan Frome, and The House of Mirth, connecting the books to Wharton’s overarching social themes; and in the last section he delivers a verdict on her “work as a whole,” based primarily on Wharton’s prose style, dialogue, and characterization.

By contrast, last month Nick Bromell published an 8,000-word essay in American Literature comparing Wharton’s House of Mirth to Nella Larsen’s much slighter 1929 novel Passing,

not because they are representative of a historical moment or because they constitute an instructive genealogy, but because they have a striking power to engage readers in a perplexing problem of democracy. That is, they not only depict the “practices of listening” undertaken by their characters, but they also require such practices of their readers, schooling us to perform better than the characters by understanding what they do not: that knowing others means knowing them through—and not just despite or in terms of—their differences. Indeed, it is in the felt experience of their readers that these novels most fully activate their democratic pedagogy.

This propositio is delivered only after Bromell has completed a careful two-page warmup, which establishes his theoretical stance and loyalties. When he finally gets around to it, his section on The House of Mirth is a third again as long as Boynton’s entire essay, and glances at none of Wharton’s other books. (He quotes her handbook on The Writing of Fiction once.)

Both critics attend to the social theme. But where Boynton speaks of Wharton’s uninterest in “social institutions of any kind,” “the social game of hide-and-go-seek as it is played in respectable society,” Undine Spragg’s “meteoric ascent in the social world,” and the novelist’s reluctance to venture “outside the social pale” except to invoke “forces of fate,” Bromell introduces his central term in this way:

Cognizant of the need to understand more richly what it means to know an Other, political theorists are starting to produce phenomenological accounts of intersubjective communication or what I will call “social knowing.”

A footnote directs the reader, if there is any, to a further elaboration of the term. Bromell uses it twelve times in the section on The House of Mirth alone: “The erotics of conjectural social knowing,” “successful social knowing,” “constitutive of social knowing,” “[George Herbert] Mead’s and [Donald] Davidson’s sunny conceptions of the self and social knowing,” “difference as an immutable obstacle to social knowing,” et cetera.

The source of literary criticism’s “diminishing returns,” in short, is not merely an upsurge in “productivity” nor a shift to criticism-as-performance. The problem is with a kind of literary criticism more intensive than any literature has ever known, a kind of criticism that has only grown more intensive since Ransom’s day. Its microscopic focus, the stretching of its subject to excessive lengths, its fervid humorlessness, its exclusive concern with a text’s inner tensions to the utter neglect of literature’s extensions—the references and even the applications to a world outside the text—have narrowed the appeal of literary criticism and quite naturally cost it readers.

The solution, as I have suggested before now, is to return from interpretation (which includes both criticism-as-explication and criticism-as-performance) to a more traditional scholarly conception of literary study. Scholars do not seek merely, in Bauerlein’s phrase, “to write something new and different”; they seek to contribute something new and different to knowledge. They are not satisfied with new “approaches.” They demand new facts, new sources, new intelligence.

Literary criticism has been narrowed, not merely in its “approach,” but in its subject matter. When I took up the question of Richard Russo’s Catholicism in Empire Falls yesterday, I consulted the MLA Bibliography to find the previous criticism on the novel. Eight years after its publication, only three articles on it have been published—and one of those, by Joseph Epstein in Commentary, was a review-essay on it and Jonathan Franzen’s Corrections. In the same period of time, sixty-four publications have appeared just on The Sound and the Fury.

I have come to believe that what literary scholars abuse as the canon—the “established canon,” the “fixed, restrictive canon”—is an image in the mirror. It exists only in the scholars’ own decisions of what to study, teach, and write about. Few literary critics display the scholarly instincts of Miriam Burstein, whom I recommend to all young graduate students as a model of the scholarly life:

DAD THE EMERITUS HISTORIAN OF GRAECO-ROMAN EGYPT: Is that a good book?
ME: No.
DAD: So . . . you’re going to write about it.
The motto of her blog The Little Professor, Burstein quips, is: “I Read These Things So You Don't Have To.” Why should that not be the ambition of literary scholarship in an age of diminished returns? Rather than another intensive examination of The Sound and the Fury, perhaps some news about Faulkner’s contemporaries and the novels they published the same year—Edmund Wilson’s I Thought of Daisy, Cornell Woolrich’s Times Square, Edward Dahlberg’s Bottom Dogs, Wallace Thurman’s Blacker the Berry, Oliver La Farge’s Laughing Boy, Joseph Hergesheimer’s Swords and Roses, Robert Nathan’s There Is Another Heaven, Elizabeth Moorhead’s Clouded Hills, DuBose Heyward’s Mamba’s Daughters, O. E. Rolvaag’s Peder Victorious, Myron Brinig’s Singermann—might increase criticism’s returns.

Monday, July 27, 2009

America’s leading Catholic novelist

Calling it “one of the finest novels of recent years,” Ted Gioia nominates Richard Russo’s novel Empire Falls for the new canon. I agree, and would go one step further. Russo’s 2001 chronicle of a small town in Maine is easily one of the five best American novels from the Twenty Oughts. (What is this decade’s handle?)

Since Gioia summarizes the plot, I am thankfully relieved of that duty. Instead, I’d like to glance at an aspect of the novel that Gioia neglects—its Catholicism. Although I have no rights in the matter, being neither Catholic nor close to one, I have long believed that Russo is—after the deaths of Walker Percy in 1990 and Paul Horgan in 1995—perhaps the leading Catholic novelist in America today. Although his fiction is realistic, Russo is not merely a literary realist, a fifth-generation descendant of Sherwood Anderson and Sinclair Lewis. Like theirs, his realism grows out of an innermost familiarity with small towns, but unlike theirs it takes flight, not upon a wind of revolt, but from acceptance of the lives he finds there. Russo’s fiction is grounded upon the deeply Catholic conviction that the world as it really is—the heavens and earth created by God—deserves the novelist’s deepest respect and closest attention. He is not interested in creating an alternative reality, because he is not interested in exhibiting his creative powers and thus confining himself to them. His sensibility, in George Weigel’s phrase, is “an unmistakably Catholic sensibility: a sacramental sensibility convinced that the ordinary things of this world are the vehicles of grace and the materials of a divinely scripted drama.”

What makes this assertion all the more arresting is that Russo is pretty clearly a lapsed Catholic—a “cradle Catholic,” but no longer practicing. Yet the Catholic theme of Empire Falls is made explicit early on. In the few hours that he can spare from the Empire Grill and his daughter Tick, Miles Roby paints St. Catherine’s Church (“St. Cat’s,” as he calls it fondly). While he still attends Mass there with regularity, he is no longer a militant believer. He prefers “the notion of an all-loving God to that of an all-knowing one.” As they contemplate the church steeple that Miles is about to paint, Father Mark remarks that he “used to think God actually lived up there.” “I was just thinking how far away it is,” Miles replies. He is comforted by the idea of God’s solicitous remoteness:

It pleased him to imagine God as someone like his mother, someone beleaguered by too many responsibilities, too dog-tired to monitor an energetic boy every minute of the day, but who, out of love and fear for his safety, checked in on him whenever she could. Was this so crazy? Surely God must have other projects besides Man, just as parents had responsibilities other than raising their children?

Miles does not waste much time on theology, however. With God far away, he transfers his attachment to his parish church. The rectory is one of his favorite places, and Father Mark is one of his favorite persons. He had considered taking Holy Orders until well into high school, and the “romance of the profession” stayed with him for much longer.

As his daughter points out, everyone in Empire Falls has a secret except for Miles. He is something like the town’s confessor. Where Henry James would have put the “centre of consciousness,” Russo has installed a figure of great stillness and release from striving, who is peacefully at home in the decaying paper-mill town by the polluted Knox River. Charlene, the Grill’s buxom “full service waitress” whom he desires (without taking any steps to fulfill his desire), calls him “a good man, straight and true”; his ex-wife Janine calls him an “enabler.” The truth is that he displays what Sally Fitzgerald, meditating upon a passage by Jacques Maritain, called Flannery O’Connor’s “habit of being.” Despite his failures—he dropped out of college to return home and manage a diner that he does not even own, his wife has left him for a middle-aged fitness guru—Miles has raised the level of his moral existence through the sustained and practically unseen exercise of a quiet will. He is a good man, not because he checks himself constantly against a moral code, but because he has thoroughly internalized his virtue. He doesn’t even think about it, although everyone else in town is fully aware of it.

Empire Falls is not about Miles, though. He is merely the “enabler” of the novel’s action, which occurs because his presence testifies to the reality of grace. One day, for example, he decides to scrape the old paint off the south face of St. Cat’s, even though he has only about an hour’s worth of painting left on the west face. It is, he decides, “more satisfying to be peeling something away, creating ugliness before restoring beauty.” (That’s also the history of the town in a phrase, come to think of it.) After scraping the entire wall, Miles climbs the ladder to the steeple as darkness falls:

He’d felt strangely serene on the ladder, reaching farther and farther out to where the paint had bubbled and cracked. Even as he moved up and out, he felt the opposite sensation, as if he were progressing down and in, through the protective paint and into the soft wood. A powerful and dangerous illusion, he knew, though he couldn’t shake the feeling that if for some reason he were to step off the ladder, he wouldn’t tumble to the ground but step onto the side of the church, as if its pull had supplanted gravity.

The illusion is powerful and dangerous only because Miles does not inhabit a supernatural matrix. The real world, though, contains the certainty of grace, a feeling stronger than gravity. Its operations lead Miles down and in, scraping away lies and misperceptions to get to the truth. He scrapes away, convinced that the world, stripped bare of its cracks and bubbles, will be restored to its original beauty.

Few other American writers, living or dead, have believed as strongly as Richard Russo that the ordinary things of this world, perceived in their ordinariness, are worthy of close attention and perhaps are even redemptive.

Sunday, July 26, 2009

Five Books of immigrants

Yesterday in the Wall Street Journal, Matthew Kaminski selected the five best novels about immigrants to America—three of the five published within the last decade. Pnin, his first choice, is not a novel about an immigrant at all. Timofey Pnin is an émigré—a very different thing. The immigrant comes to America in search of a better life; the émigré comes to escape a far worse one. A more representative novel about Russian immigrants to the U.S. is Gary Shteyngart’s frantic and hilarious Russian Debutante’s Handbook (2002).

Only one novel on Kaminski’s list belongs there, in my opinion. That is Henry Roth’s Call It Sleep. Here are some other titles that scour the immigrant experience in unpredictable terms.

(1.) Pietro di Donato, Christ in Concrete (1939). A proletarian novel in origin and not merely in subject matter—the author was a bricklayer—Christ in Concrete was the first American novel to explore the lives of Italian immigrants. It is episodic rather than traditionally plotted and carelessly written in places, but these qualities only add to its foreignness—as does its portrait of working-class Catholicism, which is mixed heavily with sensuality and paganism. These are not Mario Puzo’s Italians.

(2.) John Okada, No-No Boy (1957). About a young man, the son of immigrants from Japan, who is drafted for the U.S. Army while his family is in a detention camp. He refuses induction, but he also declines to express any loyalty for the Emperor. Hence the title. The novel begins as he returns to Seattle after his imprisonment, finding himself in a unique situation that nevertheless captures something at the heart of the immigrant experience, especially for the second generation: “[I]t is not enough to be American only in the eyes of the law and it is not enough to be only half an American and know that it is an empty half. . . . I am not Japanese and I am not American.”

(3.) Paule Marshall, Brown Girl, Brownstones (1959). Selina Boyce is the daughter of immigrants from Barbados living in Brooklyn. To come of age, to come into her own, she must distance herself from her family’s “Bajan” ways. Marshall’s novel complicates the simplistic notions of race and African-American identity now current, for the Caribbean blacks in the novel have little in common with the Brooklyn blacks whose families had been in this country for more than a century. The terminology of race—“Negro,” “black,” “African-American”—conceals those differences. Marshall exposes them to the light.

(4.) Lore Segal, Her First American (1985). Ilka Weissnix, a young Jewish refugee from Hitler’s Germany, plunges into an affair with an angry middle-aged black intellectual. She is unsure whether Alabama is a Southern state, and is unable to distinguish a Negro from a Chinese. Her lover introduces her to America. You can imagine the racial taboos and linguistic barriers that must be negotiated, but as he says, there is nothing about this country that race and sex will not bring out. The author of the autobiographical Holocaust novel Other People’s Houses (1964), Segal herself emigrated to this country in 1951.

(5.) Chang-rae Lee, Native Speaker (1995). Born in Seoul and brought to the U.S. by his parents when he was three, Lee published this first novel at the age of thirty. It is an astonishingly accomplished and complex work. Based on the boycott of Korean produce stores by black activists in 1990, the novel is a summa of all the themes enunciated by the first four books on this list: the subordination of the first-generation immigrant to his job, the divided loyalties, the sense of belonging to neither nation, the tragic conflict between American blacks and later immigrants. Written in the voice of a man who is unsparingly honest about himself, whose intelligence is the archive of his family’s ambitions, and who insists upon the truth. Compared by reviewers to Ellison’s Invisible Man, it is more realistic and less political. Yet the comparison is not a scandal.

Friday, July 24, 2009

A Happy Marriage

Rafael Yglesias, A Happy Marriage (New York: Scribner, 2009). 369 pp. $26.00.

Few books have disappointed me more than Rafael Yglesias’s novel A Happy Marriage. Its title raised my expectations to probably unreachable heights. I have complained that the tradition of the novel is far more open to adultery than faithful marriage. I have regretted how very little of ordinary life—family life—gets into American writing. A Happy Marriage promised at first glance to reverse those trends. It is touted by Scribner as an “achingly honest story about what it means for two people to spend a lifetime together—and what makes a happy marriage.” But it is none of that. It aches not; neither is it honest. And it is not about what makes a happy marriage.

Yglesias, now fifty-five, first made a splash when he dropped out of his private high school to become a writer, publishing his first novel at seventeen. Hide Fox, and All After (1972) was about a teenager who leaves his private high school to become an actor. His second novel, appearing four years later, told the story of a young novelist who published his first novel in his teens. As the main character of A Happy Marriage ruefully admits, his fiction tends to be autobiographical. To date, the principal exception in Yglesias’s career has been Fearless (1993), about the survivors of an airplane crash, filmed the same year by Peter Weir with a screenplay by the author. In his ninth novel—his first in thirteen years—Yglesias reverts to form, writing a flimsily disguised autobiographical account of his wife Margaret’s death from bladder cancer five years ago.

Something the poet and critic William Logan wrote upon reviewing Yglesias’s second novel The Work Is Innocent in 1976 bears repeating:

When an author hews so closely to the facts of his life, one wonders if any life, merely described, has the drama and integrity a novel demands. Fiction substitutes drama for the sustained personal insight of, say, a memoir. We would insist that autobiography more closely examine acts that stand here more symbolic than understood.

After reading the opening chapters of A Happy Marriage, I began to suspect that Yglesias had chosen the transparent disguise of autobiographical fiction precisely to avoid examining his nearly three-decade marriage to Margaret Joskow more closely.

Narrated in alternating chapters, the book traces the early stages of the romance between Margaret Cohen and Enrique Sabas—the same surname as the hero of Hide Fox, and All After—and the late stages of her terminal metastatic cancer. Enrique is twenty-one when he first meets the four-years-older Margaret and fifty when she dies. When she was first diagnosed, he tried to encourage and cheer her, although he was frightened:

But all those desperate feelings were long ago, two years and eight months ago, one hundred and forty-seven days and nights in the hospital ago, three major surgeries, a half dozen minor surgeries, and fourteen months of chemo ago, two remissions and two recurrences ago. Looking back through the defeated gaze of fatigue, it seemed inevitable now that it would end like this, this inch-by-inch dying, this one-track terminus when hope had become a skeleton’s grin.

Detailing the progress of her disease like this, through the gaze of his own fatigue, is not as narcissistic as it may sound. The terminal cancer patient has the relatively easy part. All she must do is to die. The spouse is left with her permanent absence.

A Happy Marriage, then, might have been written to invoke her presence—to seek an immortality for her, and to share it with her. Yglesias dedicates the novel simply “For her.” But from first to last, his attention is upon himself. Margaret comes to life only as she affects him. Enrique knows as much—when heading out to purchase birthday gifts for her he realizes that he is ignorant of her tastes, when a marriage counselor asks her how she feels about their marriage he realizes that he has never made the same request—but the knowledge never grants him the power to overcome his self-involvement. He accepts it as his fate or donnée. The marriage of the title is merely Margaret’s status. Only once—in a deathbed interview with her mother upon which Enrique eavesdrops—is Margaret glimpsed in relation to someone other than him, although he confides plenty about himself apart from her.

Some such impoverished notion of it may explain the novel’s unsettling approach to the business of telling what makes a marriage happy. The first fourteen chapters—well over half the book—take Enrique and Margaret from first meeting to first night in bed. The next stage of their relationship, or at least the next stage that Yglesias finds worth recording, occurs seven years later when Enrique finds himself passionately involved with another woman. (This flashback immediately follows the eavesdropping scene at which Enrique realizes that Margaret is “so good and so kind” and he is “so mean and so bitter.” Rather than proceeding to describe her goodness, though, he hurries to justify the self-accusation of meanness.)

As this chapter of their lives closes, he and Margaret enter marriage counseling. Fifteen years later they are in Venice for their twentieth wedding anniversary. In the very next chapter—at this point Yglesias discards the mechanical device of alternating chapters—they are back in therapy, fifteen years earlier. To his credit, Enrique realizes that, in “his social class and time, New York 1983,” the conventional wisdom holds that a “bad marriage was worse for a child than a divorce.” He decides to break off the affair and remain with Margaret. Twelve years later, in 1995, his father dies of prostate cancer. In his grief, he accuses Margaret of not loving him. She protests. “I’m never going to stop loving you,” she says. “You’re my life.” Apparently these are the peaks of a happy marriage: courtship, surviving adultery, a memorable anniversary trip, solace during grief.

At one point during their courtship, Margaret tells Enrique about all the classes she has taken—tap dancing, photography, lithography, French, basic acting technique—all for fun. He reflects:

He too wanted to know as much as possible about how the world worked. Not, however, for something as pointless as having fun. He wanted information to impress readers and to burrow into a character’s inner life. Work was the most invested and complicated expenditure of most people’s time; it bothered him to write about characters and not know, in a tactile and intimate way, precisely what they did each day on their jobs.

An excellent point, and perhaps more novelists will begin to write about something other than writing and a writer’s special worries. Yglesias will not be among them, however. He cannot even be bothered to say anything about what his wife Margaret did each day on her job; she worked as the deputy art director of Newsweek. He is rather good, though, at something similar. He captures life on the strange and distant planet of late-stage cancer. Only then, with her tubes and shrunken flesh and consciousness dimmed by drugs, is Margaret truly other—perhaps because the experience is so defamiliarizing, and perhaps because she must go through it alone.

There is little else to recommend the novel. It frequently reads like a roman à clef. Some of the key is included. Yglesias’s father, called Guillermo in the novel, was a Cuban-American novelist, and his mother Helen Yglesias (Rose in the novel) was also a novelist. Margaret’s family is described in terms that make them equally easy to identify. Her father Jules Joskow cofounded National Economics Research Associates, a consulting firm, in 1961; Andrew Joskow, her younger brother, now serves as its senior vice president; her older brother Paul Joskow is a professor of economics at MIT.

Naturally, then, you want to guess the true identity of Bernard Weinstein, the young Cornell graduate who introduces Enrique and Margaret. Thwarted in his ambition to become a novelist,

he had evolved into one of the country’s leading cultural critics, and certainly its most visible. He had reviewed books for the daily New York Times for ten years, movies for The New Yorker for five, was still a columnist for Time as well as the author of two bestsellers of general cultural musings.

Same for Porter Beekman, a New England novelist who is the second-string movie critic for the New York Times. You find your fun, pointless though it may be, where you can.

But after a while the enjoyment pales and the eyewitness account of terminal cancer yields information, but no insight. The reason did not strike me until I did a bit of digging. Although the events of A Happy Marriage antedate the deaths of his mother and half-brother—Helen Yglesias died in April 2008 at the age of ninety-two, while Lewis Cole, a film professor at Columbia University, died six months later at the age of sixty-two from amyotrophic lateral sclerosis—neither of them receives any charity from the author. Enrique describes his mother Rose as unvaryingly self-pitying, while his half-brother Leo is “willfully dense” and useless to Enrique in his ordeal: “[I]n lieu of visiting Margaret at the hospital, [Leo] insisted on inviting Enrique over to his apartment for dinner. . . .” Even his father is cheerfully dismissed as narcissistic. This is not honest. It is merely vicious.

“My father and mother talked about novelists at home, and I thought they were gods,” Yglesias told the New York Times upon the publication of Hide Fox, and All After when he was seventeen. “I wanted to be a god, too, in a sense, to have some power.” Rafael Yglesias appears never to have recovered from the heady arrogance of teenaged authorship, and in A Happy Marriage he has told the story of a marriage that was happy because it outlasted the death of his wife. Margaret Joskow must have been an extraordinary woman, but from this novel the best you can do is to suppose so.

Update: Nancy Connors praises the novel in the Cleveland Plain Dealer. “What glue holds a marriage together despite disloyalties, professional failures, free-floating anger and regret for the life not lived?” she asks, failing to notice that neither she nor Yglesias answers the question. Malena Watrous reviews the novel favorably for the New York Times Book Review, suggesting that Margaret’s cancer is what makes Enrique aware of his marriage’s happiness. At Bookforum.com, Karen Karbo calls A Happy Marriage “beautiful and disturbing,” while the author is “superb and courageous.”

Thursday, July 16, 2009

On vacation

“How perillous vacancie from affaires hath ever bene, may appeare by ancient and moderne examples, whose Tragicall catastrophe wold crave teares immix’d with lines. Let this suffice, there is no one motive more effectually moving, no Rhetoricke more movingly perswading, no Oratorie more perswasively inducing, then what we daily feele or apprehend in our selves. Where every houre not well employed, begets some argument or other to move our corrupt natures to be depraved. Let us then admit of no vacation, save onely vacation from vice. Our lives are too short to be fruitlessly employed, or remissly passed.”—Richard Brathwait, Natures Embassie (1621).

A Commonplace Blog will be on vacation until Thursday, July 23rd.

Wednesday, July 15, 2009

Self-reference and narcissism

It is fast becoming a commonplace of American criticism that frequent use of the first person betokens narcissism. Last month Stanley Fish reported that he had listened closely to President Obama and had detected a growing preference for big I’s over little we’s. “[T]he note of imperial possession, the accents and cadences of a man supremely aware of his authority and more than comfortable with its exercise,” have crept into his speech, Fish concluded. But it was not only the Left devouring one of its own. Last week Wall Street Journal columnist Peggy Noonan—best known as a speechwriter for President Reagan—belittled Governor Sarah Palin for being “self-referential to the point of self-reverence.” In the July 3rd announcement of her resignation from Alaska’s governorship, Palin kept saying “I’m, I’m, I’m,” Noonan complained. Over at the Language Log, Mark Liberman submitted Noonan’s claim to careful scrutiny (h/t: Neil Verma).

Adopting “two crude measures of ego-involvement,” Liberman compared Palin’s announcement to three similar speeches—Richard M. Nixon’s concession in the 1962 California gubernatorial election, Lyndon B. Johnson’s announcement in 1968 that he would not seek reelection to the presidency, and President Nixon’s resignation in 1974—and found that, by these measures, “Palin is more ego-involved than LBJ, but less than Nixon.” She used the various forms of the first-person singular four percent of the time, while Nixon was at 6.1% and 4.6% and Johnson at just 2%. Liberman also calculated the ratio of the first-person plural to the singular, observing that a “higher ratio suggests less ego-involvement,” and found that Johnson had the highest ratio (1.37), but that Palin’s (0.81) was strikingly higher than Nixon’s (0.17, 0.48).

I want to take Liberman’s analysis one step further, not to defend Palin—frankly, she doesn’t need my help—but to show that the folk psychology about frequency of the first person is badly off the mark. In short, self-reference is not evidence of narcissism, because historically even Nixon’s rate of I-talk is within the range of normal.

Plagiarizing Liberman’s method, I examined three English-language classics of the eighteenth century that were written in the first person and three from the nineteenth. Here are the results. (Please forgive the lack of a table.)

Earl of Chesterfield, Letters to His Son (1746–71)
Words = 286,074
1st sing. = 8,636
% 1st sing.= 3.0%
1st pl. = 777
Pl./sing. ratio = 0.089

Laurence Sterne, Tristram Shandy (1759)
Words = 190,268
1st sing. = 6,641
% 1st sing. = 3.5%
1st pl. = 816
Pl./sing. ratio = 0.123

Benjamin Franklin, Autobiography (1771)
Words = 65,935
1st sing. = 2,963
% 1st sing. = 4.5%
1st pl. = 678
Pl./sing. ratio = 0.229

Charles Dickens, David Copperfield (1850)
Words = 362,889
1st sing. = 22,959
% 1st sing. = 6.3%
1st pl. = 2,701
Pl./sing. ratio = 0.118

Mark Twain, Adventures of Huckleberry Finn (1884)
Words = 116,519
1st sing. = 4,914
% 1st sing. = 4.2%
1st pl. = 1,062
Pl./sing. ratio = 0.216

The Personal Memoirs of U. S. Grant (1885–86)
Words = 241,878
1st sing. = 4,692
% 1st sing. = 1.9%
1st pl. = 1,758
Pl./sing. ratio = 0.375
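For readers who want to run the same exercise on other texts, the two measures are simple enough to sketch in code. Neither Liberman’s exact counting criteria nor mine are spelled out above, so the word lists and tokenizer below are assumptions of my own, offered as a minimal illustration rather than a reconstruction of either method:

```python
import re

# Assumed word lists for the first-person forms; any serious count
# would need to settle questions like contractions and possessives.
FIRST_SINGULAR = {"i", "me", "my", "mine", "myself"}
FIRST_PLURAL = {"we", "us", "our", "ours", "ourselves"}

def ego_measures(text):
    """Return (total words, % first-person singular, plural/singular ratio)."""
    # Crude tokenizer: lowercase, keep letters and apostrophes, so a
    # contraction like "I'm" survives as one token ("i'm") and is NOT
    # counted unless added to the word lists above.
    words = re.findall(r"[a-z']+", text.lower())
    sing = sum(1 for w in words if w in FIRST_SINGULAR)
    plur = sum(1 for w in words if w in FIRST_PLURAL)
    total = len(words)
    pct_sing = 100.0 * sing / total if total else 0.0
    ratio = plur / sing if sing else 0.0
    return total, pct_sing, ratio

# 14 words, of which 3 are first-person singular and 2 first-person plural.
sample = "I think we should go. My friends and I agree; our plans are set."
total, pct, ratio = ego_measures(sample)
print(total, round(pct, 1), round(ratio, 2))
```

Fed the full text of any of the books above (Project Gutenberg has most of them), the function yields figures of the same shape as those in the lists, though exact counts will vary with the word list and tokenizer chosen, which is one reason published percentages for the same text can disagree.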

What do these figures prove? Accusations of narcissism cannot be sustained by citing the frequency of self-reference, not even as contrasted with the use of the first-person plural. While Ben Franklin has the second-highest percentage of references to himself, he also manages the second-highest ratio of plural to singular uses. In Liberman’s language, he displays both a relatively high degree of ego-involvement and clear evidence of relatively less ego-involvement. Again, it is not surprising to find that Grant, a military man, has the highest ratio of plurals to singulars among the six authors. What is surprising is that Nixon achieved a higher ratio (0.48 versus Grant’s 0.38) when he told the country at last that he was giving up the presidency.

I am left with two hypotheses, neither of which the folk psychologists and critics of American political discourse have entertained. First, the frequency of the first person is pretty likely to be a product of culture and history. The eighteenth-century British writers use the first-person forms less than the other four on my list. And in the twenty-first century heavy use of the first person is an accepted norm. An accepted grammatical norm, I might add. All it may demonstrate is a preference for constructing sentences in a certain way—a relatively easy way.

Replying to criticisms of first-person narration, the novelist David Isaak points out that the first person encourages a straightforward construction that can wear upon readers:

I’ve heard more than one person comment that first-person narratives tend to start too many consecutive sentences with “I,” giving the impression we are listening to a Mexican folk song (“Ai—Ai—Yi—Ai . . .”). Fine—but I’ve seen just as many third person manuscripts starting paragraph after paragraph with “He.” Is “hee-hee-hee” somehow better?

Isaak doesn’t notice that he starts both of his sentences the same way. But this is not to fault him. Starting sentences with the I is the default construction in current English, especially in informal discourse when the speaker’s (or, as here, the writer’s) mind is not on the form of what is being said.

And thus the second hypothesis. Person reflects genre. Despite the fact that he is an eighteenth-century author like Sterne and Chesterfield, Franklin uses the first person more often because he is writing an autobiography, a literary kind that, except when it is an exercise in long-winded self-concealment, like The Education of Henry Adams, depends helplessly upon the first person. Similarly, to accuse David Copperfield of “ego-involvement”—he uses some form of the first person 6.3% of the time—does not seem quite right. David is as much a “camera” as Christopher in The Berlin Stories; he is at least as interested in the people in his life as in himself. Consider, for example, the passage in which David first studies Uriah Heep in Mr. Wickfield’s office:It so happened that this chair was opposite a narrow passage, which ended in the little circular room where I had seen Uriah Heep’s pale face looking out of the window. Uriah, having taken the pony to a neighbouring stable, was at work at a desk in this room, which had a brass frame on the top to hang paper upon, and on which the writing he was making a copy of was then hanging. Though his face was towards me, I thought, for some time, the writing being between us, that he could not see me; but looking that way more attentively, it made me uncomfortable to observe that, every now and then, his sleepless eyes would come below the writing, like two red suns, and stealthily stare at me for I dare say a whole minute at a time, during which his pen went, or pretended to go, as cleverly as ever. 
I made several attempts to get out of their way—such as standing on a chair to look at a map on the other side of the room, and poring over the columns of a Kentish newspaper—but they always attracted me back again; and whenever I looked towards those two red suns, I was sure to find them, either just rising or just setting.

David refers to himself eleven times in this passage—exactly five percent of the words are first-person forms—while referring to Heep just seven times (ten, if the references to Heep’s eyes almost as impersonal objects are included). Yet his entire attention is on Heep, not himself. The narrative strategy is to register Heep’s effect, because that is how—at least for Dickens—a man is to be judged.

Unless first-person genres and their self-referential purposes are taken into account, complaints like Fish’s and Noonan’s about “self-reverence” and the “imperial possession” are empty moralizing.

Update, I: Here are the numbers for this Commonplace Blog.

Words = 179,427
1st sing. = 2,362
% 1st sing. = 1.3%
1st pl. = 453
Pl./sing. ratio = 0.192

After the basic components of a sentence, the most common words here have been not (1,284), novel or novels (724), literature or literary (706), book or books (531), all the variations on the word Jew (386), and then American (359). No idea what to make of all this.

Update, II: Three more sets of figures.

Wilkie Collins, The Moonstone (1869)
Words = 197,669
1st sing. = 10,827
% 1st sing. = 5.5%
1st pl. = 1,189
Pl./sing. ratio = 0.110

Booker T. Washington, Up from Slavery (1900)
Words = 74,130
1st sing. = 3,120
% 1st sing. = 4.2%
1st pl. = 718
Pl./sing. ratio = 0.230

Woodrow Wilson, Presidential Addresses (1913–18)
Words = 92,886
1st sing. = 1,294
% 1st sing. = 1.4%
1st pl. = 2,069
Pl./sing. ratio = 1.599

The Moonstone is the first example of an unreliable narrative given by Wikipedia. Its numbers are nearly the same as those of Dickens’s novel, published eighteen years earlier. The age? The genre? Washington is close enough to Franklin to suggest that something around 4% is the rate at which autobiographies drop into self-reference. Meanwhile, Wilson’s use of the first person almost exactly mirrors Grant’s. His ratio of plurals to singulars, though, is the highest I have found, identifying an integral element of his political rhetoric.
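For the curious, counts like these are easy to reproduce. The sketch below is my own illustration, not the method actually used for the figures above; in particular, the inventories of first-person forms (I, me, my, mine, myself; we, us, our, ours, ourselves) are an assumption, since the post never lists exactly which forms it counted.

```python
import re

# Assumed inventories of first-person forms (not taken from the post itself)
FIRST_SING = {"i", "me", "my", "mine", "myself"}
FIRST_PL = {"we", "us", "our", "ours", "ourselves"}

def first_person_stats(text):
    """Return (total words, 1st-sing count, 1st-sing %, 1st-pl count, pl/sing ratio)."""
    # Crude tokenization: runs of letters and apostrophes, lowercased
    words = re.findall(r"[a-z']+", text.lower())
    sing = sum(1 for w in words if w in FIRST_SING)
    pl = sum(1 for w in words if w in FIRST_PL)
    total = len(words)
    pct = 100.0 * sing / total if total else 0.0
    ratio = pl / sing if sing else 0.0
    return total, sing, pct, pl, ratio
```

Run over the full text of, say, Up from Slavery, a function like this yields the kind of table given above; the exact figures would of course depend on the tokenizer and the pronoun inventory chosen.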

Tuesday, July 14, 2009

Thicker than Water

Vera Caspary is almost completely forgotten today, but in her day she was something of a literary pioneer. The White Girl (1929), her first book, was one of the earliest American novels about “passing.” Her play One Beautiful Evening, rewritten as Blind Mice with Winifred Lenihan in 1930, was described by the press as “manless”: its cast was composed exclusively of women. Her story “Suburbs,” filmed in 1932 as The Night of June 13th, unmasked the quiet desperation of suburban lives thirty years before Richard Yates got around to it. An unapologetic Leftist, she was one of the few American writers to speak out publicly on behalf of the Scottsboro Boys. In Laura, her best-known book, a 1942 mystery in which the detective falls for a crime victim, filmed by Otto Preminger two years later, she may have invented the genre of the psycho thriller.

Caspary’s most ambitious and unusual project was Thicker than Water (1932), a 425-page chronicle of a Sephardic Jewish family living in Chicago. Although I have been unable to confirm my hunch, chances are that Caspary based the novel on her own family. The daughter of a buyer for a Chicago department store, she was born in November 1899 into a “mixed marriage.” Her father’s father was a German Jew, but her mother’s father was a Sephardi whose family had settled in Amsterdam after being expelled from the Iberian Peninsula. In the novel, a suitor is dismissed with a single scornful phrase: “But he’s a German.” And when a Sephardic child marries “a Jew of humbler stock,” he is “disowned by his parents, mourned as if he were dead.” The family jealously guards “the Portuguese purity” of their blood:

They could trace their history back to those days when their ancestors were rich and powerful, patrons of art, friends of dukes and intimates of princes, holders of vast properties, vintners of fine wines, builders of great castles, and always learned scholars. When, in 1499, they had been forced to flee their home, they had settled in Holland with a group of refugees, as self-conscious and aloof in their poverty as they had been in the days of their grandeur. The refugees had again become prosperous, although they were never granted the privileges of citizens in this land, and they had intermarried among themselves as if they had been the few proud members of a dwindling royalty.

By the time the novel opens in 1885, the family fortune has disappeared, and all that remains is the family pride and the family traits—a thin parrot nose, sunken cheeks, and “inkily shadowed eyes.”

Rosalia Piera is the main character. Well aware that the family traits have prevented her from being a beauty, she brags about being plain and compensates with a hypercritical, biting mind. The first female Jewish intellectual in American literature, Rosalia detests the local Canaan Literary Society—its chatty Oprah Book Club-like atmosphere has nothing to do with the mental life—and yet she attends its meetings, hoping secretly to meet “the tall, fine suitor of Spanish and Portuguese blood, with whom she had carried on so many phantom conversations.” Instead, her heart is captured when, setting out to embarrass a dandy from a rich German Jewish family (at an earlier meeting he had read a sentimental poem in praise of her large-breasted cousin), she is lacerated by pity for him. She “emerge[s] from a dark place where she had been hidden for years.” Within a year she and Adolph Reisinger are married. Thus begins the family whose history Caspary chronicles through three generations.

Caspary has been enrolled among the radical novelists, but in Thicker than Water she raises no placards. Perhaps anti-capitalist pluck emboldens her portraits of the Jewish businessmen who log long hours, earning money and talking about it endlessly and coming home too exhausted to attend to their wives, but far more noticeable is her knowledge of the millinery and department-store trades, which she might well have acquired from her father. The novel spans the rise and fall of the family’s silk-jobbing business—and its eventual sale—while Rosalia’s brother Saul leaves the firm to become a partner in a West Side department store and a relative by marriage leaves to join a brokerage. The third generation abandons business altogether for art, romance, or philanthropy.

Throughout it all, Caspary’s focus remains on the branching and leafing family—its marriages, homes, children—and the changing notions of status. Book One, “Prejudices,” comes to a head when a cousin marries a sharp-eyed salesman named Smith, who turns out to be a Polish Jew originally named Slivowski. “He’s a kike,” says Rosalia’s husband Adolph. “You can always depend upon them to take advantage of a situation.” Caspary elaborates:

Such an attitude was not unusual. No doubt most of the good stolid German-Jewish merchants . . . said the same thing about kikes. . . . With the exodus of Jews from Russia and Polish Russia in the two decades since a bomb had been hurled at Alexander II, bitter prejudice had risen among American Jews against their Russian co-religionists. This bitterness did not ferment so rapidly in the Middle West as in the Eastern cities where the greatest number of immigrants settled. But gradually as they came to Chicago, as the section around Maxwell Street grew crowded with uncouth, unclean strangers, speaking a guttural jargon, the old solid citizens felt their security threatened, their place in the community, the respect of their Gentile neighbors, their social position and their prosperity.

In time, though, Rosalia’s practical-minded advice carries the day: “Perhaps you’re right,” she tells her husband. “Only it was that very thing, that ruthlessness and that quickness at seeing an advantage that made him so valuable to you.” The family “kike” stays in the business, and because of him, the business prospers.

Book Two, “Possessions,” details the family’s prosperity. Through marriage, the Pieras ally themselves to an even more prominent Chicago Jewish family—the descendants of a peddler. Caspary’s moral seems to be that, in America, money creates caste. “Descendant of a family who could trace its adventures from the fifteenth century,” Rosalia is amused that the “sons of meat packers, wheat farmers and steel puddlers had become the nobility of the Western world.” But it might just as easily be said that business knocks down the barriers of prejudice in its ruthless demand for the best talent and the most customers. Caspary does not show that the pursuit of financial success leads to cruelty and unhappiness, if that is what she is trying to do. The lives of the third generation are largely empty, and they are also strangers to Jewish tradition, badly educated, and concerned with little beyond pleasure. If capitalism is to blame, you couldn’t prove it by Thicker than Water.

The family chronicle is a native form of Jewish narrative. In the book of Genesis, the first family chronicle in Jewish literature, Abraham receives three promises from God, but they are not fulfilled in his lifetime. Since then the Jews have understood that several generations may be required for a promise to work its way through the system. Early on, Rosalia reflects that “no one living could remember the day when there had been anything but intelligence and good blood in the dark Piera family,” but by the conclusion of the novel, the family has intermarried, acquired land and valuable possessions, and given birth to many grandchildren. The blood may have thinned, but it is still Thicker than Water.

Monday, July 13, 2009

Five Books of Jewish fiction

Tim Davis asks me to “recommend [a] handful of books (fiction) in which Judaism either is a central theme or is the foundational spirit the author draws upon for the book’s style and tone.”

Sure thing. Before I do, though, let me direct you to the great Ruth R. Wisse’s 2000 book The Modern Jewish Canon. Moreover, the National Yiddish Book Center compiled a longer list of one hundred great Jewish books as selected by Wisse, Hillel Halkin, Robert Alter, and four other critics. In what follows I try my best merely to supplement their canons. I have also restricted myself to American writers, if only to narrow the field that I must survey to compile such a humash. The Amateur Reader can add more Yiddish titles. Perhaps Israeli novels will make up another Five Books some day.

(1.) Isaac Rosenfeld, Passage from Home (1946). The novel might have been entitled Call It Sleep: The Next Generation, if only Henry Roth’s superlative novel had not already disappeared twelve years after its publication. An immigrant Jewish family; the fifteen-year-old son, “sensitive as a burn”; the inevitable conflict with the Old World father, who has submerged his own intellect in ambitions for his son—much that would become familiar is here. The boom in American Jewish fiction was ignited by this book, which explains most of what came after.

(2.) Bernard Malamud, The Assistant (1957). As much as I dislike The Natural do I revere Malamud’s novel about Frank Alpine, the young Italian-American who goes to work in Morris Bober’s grocery after sticking it up. I know all the objections against it: for Malamud the Jews are symbols of metaphysical suffering, the Yiddish-inflected English is artificial and nothing like what any Jew has ever spoken, etc. I could not care less. No one gives a better flavor of the Jewish spirit of hope in the midst of despair.

(3.) Chaim Potok, The Chosen (1967). Another novel that has undergone a deflation in recent years. For a glimpse of Orthodoxy with its glorification of Talmudic study, its intense family life, and its rivalries between modernists and hasidim, no book is a better introduction. When this book made him famous, Potok was liberated from the editors who stayed after him to plane The Chosen into shape. The story is uninvolved, and so is the prose. The people are sharply individuated and yet wholly recognizable. In a real sense, although he published eight novels, Potok was a one-book author.

(4.) Cynthia Ozick, The Cannibal Galaxy (1983). In one of those coincidences that God seems to delight in, it was published the same year as Arthur A. Cohen’s An Admirable Woman, which was also based on the life of Hannah Arendt. In Ozick’s novel, the Arendt figure is a mother in conflict with the Jewish educational establishment. Although I’ve never heard it described this way, it is a novel about the ancient quarrel between official institutional Judaism and the text-centered culture inhabited by deeply religious Jews. Except for Agnon, no other Jewish novelist is so hypertextual with Jewish texts. You may not catch all of the allusions, but you will get a taste for Jewish textualism.

(5.) Zoë Heller, The Believers (2009). I have already reviewed the novel at unconscionable and unbloglike length, and then went on to discuss it further. My addendum is autobiographical. Like Rosa, I am a baal teshuvah, a Jew who “returns” to Orthodoxy—that is, who becomes Orthodox in adulthood. I can say this much: Heller gets it exactly right. You may not feel the rightness for yourself, but believe me, this is how it happens.

The Middle of the Journey

“Yech,” said my neocon friend when I told him that I admired The Middle of the Journey. Yet Lionel Trilling’s 1947 novel—Trilling himself—is more admired on the Right than on the Left these days. Nothing like liberal anti-Communism, which Trilling championed, exists any more; and not just because Communism has been discredited everywhere except on the academic Left. It is not clear any more what tyranny the Left is anti-, other than a firefighter who objects to being discriminated against on the basis of race.

His theme, as Trilling wrote in an Introduction when the novel was reissued in 1975, was “the powerful attraction to Communism felt by a considerable part of the American intellectual class during the Thirties and Forties.” And the equally powerful revulsion from it, he might have added, on the part of ex-Communists. The opposing forces are dramatized in the character of Gifford Maxim, “this huge, dedicated man,” and the reaction he provokes. By now everyone knows that the character was based on Whittaker Chambers, who had “pledged himself to the cause of Communism and had then bitterly repudiated his allegiance.” At the time the novel was published he could “scarcely be called a historical figure,” but less than a year later Chambers testified in a public hearing of the House Un-American Activities Committee that an “underground group” whose purpose was “the Communist infiltration of the American government,” although espionage was “certainly one of its eventual objectives,” included former State Department official Alger Hiss and two other members of the Roosevelt administration. Since that moment Trilling’s novel has been treated as a footnote to history, but there is a case to be made for its literary interest. Especially because readers on the Left will have nothing to do with Chambers’s 1952 autobiography Witness—the greater book on Communism’s powerful attraction—The Middle of the Journey deserves to be more widely read.

Right off it must be admitted that you either put up with Trilling’s style or lose patience with it. Exquisiteness and tact may serve the purposes of criticism, depending as it does on incongruities of wit, but few novelists besides James have succeeded in writing such prose without sounding fussy—particularly when the care is being taken to distinguish between abstractions and approximations. John Laskell, the novel’s protagonist, a young scholar who has written Theories of Housing, has a private income:

It preserved him from the very beginning from the brunt of those problems of “integrity” which, although they had always been of importance in American careers, were just now of more importance to men of talent than ever before. Laskell, with only a little caution and not much sacrifice, could manage to live without a salary. The alternatives to doing what he thought right did not present themselves, as they did to many of his friends, as “selling out” or as “corruption.” He did not have to think of himself in such heroic and tragic language and this suited his reasonable temperament.

In the summer of 1936, Laskell comes to the Connecticut countryside, “the stranger, the outlander, the foreigner from New York,” to stay with his friends Arthur and Nancy Croom after an unspecified illness which had nearly killed him. He comes bearing news, “quite momentous news about Maxim—the grotesque story of his break with the Party.”

The Crooms are radicals. They are “the decent people, the people of good will.” They wear “the armor of idealism.” They belong to “the near future—not the far future when the apocalyptic days would come, but the time now at hand before things got very bad.” They take pains “to think in terms of mankind in general.” They oppose war, but make an exception for revolution. They “grant or refuse requests according to nothing but reason.” Their passion of mind and will is so pure that they “could not believe that anything that opposed it required consideration.”

Their reaction to the news that Gifford Maxim has broken with the Communist Party is incredulity. They stare at Laskell as if he had just told them of the Reichstag fire. Although not themselves members of the Party—they are what used to be called fellow travellers—they assess political action in reference to it. For them “the Party [is] a fixed point from which all deviation implied something wrong with the person deviating.” There are only two possible explanations for Maxim’s break. Either he has gone insane or has “moved so far as to be on the other side,” becoming “the blackest of reactionaries.” Thus, when Laskell appears to take Maxim’s own explanation seriously—when he seems to be saying, in effect, that Maxim is telling the truth about the Party—he becomes, in the Crooms’ eyes, “touched with Maxim’s guilt.”

Maxim’s explanation is simple. He was a Party professional, not an idealist; he cared only for results. “[I]f you take the professional attitude about revolution,” he explains to Laskell, “you don’t permit yourself the luxury of ideas.” At some point, though, the results ceased to please him; they were more than he had bargained for. They were, in fact, evil.

Then as now, the use of the word evil separates Right from Left. The Crooms are not mistaken: Maxim has joined the other side, the side of law in open antagonism to evil. “Is it not strange,” he says,

do you not find it strange that as we become more sensitive to the sufferings of mankind, we become more and more cruel? The more we think of the human body and the human mind as being able to suffer, and the sorrier we feel for that, and the more we plan to prevent suffering, the more we are drawn to inflict suffering. The more tortures we think up. The more people we believe deserve to be tortured. The more we think that people can be ruled by fear of suffering. We have become our brother’s keeper—and we will keep him in fear, we will keep him in concentration camps, we will keep him in straitjackets, we will keep him in the grave.

Then Maxim pays a visit. And despite the Crooms’ initial reluctance even to dine with him, he becomes involved in the human drama of the Connecticut countryside. One thing leads to another, and a child dies. At first it appears that her father has murdered her—a genuine working-class man, a victim of capitalist injustice, who is for the Crooms “not so much a man as a symbol. He was a symbol of something good, of something that deserved to be talked about endlessly. . . .” Although he is cleared of his daughter’s murder, he is no longer a symbol of good. “I can’t stand the idea of having him around me,” Nancy says. “Not that I’d be afraid, but I’d always be thinking that this man killed his child.” Arthur falls back upon Marxist clichés (“social causes, environment, education or lack of education, economic pressure,” yadda yadda), and Nancy agrees that he is not to blame “personally, individually,” but even so she remains adamant about not seeing him again. She is deeply perplexed by her contradictory feelings.

“Nancy’s dilemma is an inevitable one,” Maxim observes. “She refuses to say that Caldwell has any responsibility, any blame or guilt. And then she refuses to allow him to come near her.” He explains the advantage of the system with which he has replaced Communism. To his new way of thinking, the child’s father is “wholly responsible for his acts,” and all men are responsible for one another. To use his exact words: “if we are all members of one another, then each of us is in some part God.” Thus Nancy Croom can embrace the man who killed his daughter only in the abstract, while rejecting him in the flesh. Maxim is able to reconcile the two impulses. “Absolute responsibility,” he concludes: “it is the only way that men can keep their value, can be thought of as other than mere things.”

In the end, then, Maxim triumphs. The Left’s refusal to acknowledge the evils perpetrated in the name of Communism—even to speak the name of evil—divides it against itself. But this is not the claim that Trilling’s novel has on the attention of readers in 2009. Nor is the novel most interesting, as several critics have pointed out, in prophesying the rise of neoconservatism. “The time was getting ripe for a competing system,” Laskell decides. The Middle of the Journey is that rare thing, a successful novel of ideas. And the key to its success is that Trilling takes what Aristotle called dianoia (“thought,” which he defined as a lesser element of tragedy), and makes it indistinguishable from ethos, character. To accept or reject a man is to accept or reject his thinking. At the end of the novel, Laskell includes the radical Crooms among the dangers of the world, and so does Lionel Trilling’s attentive reader.

Thursday, July 09, 2009

New issues, frames, non-existent empires

The second issue of Daniel E. Pritchard’s Critical Flame is up. Nora Delaney looks at Mark McGurl’s Program Era, while three different reviewers examine recent fiction. Pritchard himself takes a gander at D. A. Powell’s “exciting and enjoyable” fourth volume of poetry.

The Amateur Reader begins reading Sholem Aleichem’s Railroad Stories in which a commercial traveler listens to the tales of the passengers in a third-class Ukrainian railroad car. Ruth R. Wisse calls Aleichem’s frame stories “the natural form” of Yiddish literature, creating an “internal dialogue between Jews.”

Roberta Rood praises The Little Stranger by the compulsively readable Sarah Waters. Rood points out that the novel is 463 pages long, but consumes the reader with curiosity and becomes a “real page turner.”

Thirteen writers suggest some beach-bag stuffers in National Review Online’s annual symposium on summer reading.

Ron Slate recommends Kevin Canty’s story collection Where the Money Went. Canty tends to write about the approach of a “revelation which does not occur.” That is his book’s “secret sauce,” according to Slate.

Martin Levin explains his “mixed feelings” for Gore Vidal. He both likes and dislikes that Vidal is a “hyperarticulate critic of the excesses of the American Empire.” Ah. That explains my own 99% pure distilled hatred for Vidal’s writing. No such empire exists.

A. F. Jurek declares that blogging is dying, the victim of Twitter, Facebook, and the difficulty of doing it regularly.

The Los Angeles Times book blog Jacket Copy wishes a happy seventy-sixth birthday to Oliver Sacks.

Matthew Cheney takes seriously two new G. I. Joe books that are “media tie-ins.” “As with Bond,” he concludes, “the ideal audience seems to be adolescent heterosexual boys and maybe some lesbians. . . .”

Carrie Frye has the latest intriguing details on The Original of Laura, the novel left unfinished by Nabokov upon his death in 1977.

Vikram Johri enjoys The Link, Colin Tudge’s account of how Norwegian paleontologist Jorn Hurum acquired “Ida,” the 47-million-year-old “missing link” between primates and man.

Sam Sattler awards Ellen Feldman’s novel about the Scottsboro Boys four out of five stars. Narrated by a “desperately poor white” in Alabama, Scottsboro “makes what happened, in the context of its times, almost understandable.”

Litlove clears some books off her table before leaving for vacation.

R. T. Davis’s book blog Novels, Stories and More seems to have disappeared from the blogscape again. If anyone has heard from R. T., or has any news of him, please leave a comment.