Sunday, February 21, 2010
A Kind and Generous Introduction to Canada from the Americans
Tom Brokaw and NBC, specifically. Very classy.
Saturday, February 20, 2010
Musings of a Man of the Left
I'm left-handed. I've always suspected that also means I think differently from most other people; certainly, I do think differently from other people. There are odd little hints, like being able to read upside down or sideways almost as easily as right-side-up. I remember once, as treasurer of the Editors Association of Canada, sitting at a table with all the other former treasurers of that institution, and we were all left-handed. This seemed to hint at some definite mental characteristic we all shared that was also expressed in our handedness—perhaps a mental versatility, or a readiness to take something on.
It is not so easy to find treasurers among editors.
The research gives some more hints; but we really know nothing. Anything apparently proven by one study is disproven by the next. After a scan of the information available online, though, a few things seem to me fairly solid:
1. Left-handed men, on average, earn more than right-handers—26% more for college graduates. This is not true for left-handed women.
2. At the higher reaches of IQ, left-handers become much more common. They double their proportion at 140 IQ (Mensa level), and outnumber right-handers at the highest levels, although they are only about 10% of the general population.
3. Left-handers do not live as long as right-handers. This has been vehemently denied, but I think most of that denial is denial, so to speak. Not only do studies tracing specific populations suggest this, but as the population ages, the overall proportion of left-handers seems to drop. There are fewer left-handed fifty-year-olds than left-handed thirty-year-olds, and fewer left-handed eighty-year-olds than fifty-year-olds.
4. Among famous people generally, left-handers are disproportionately represented. The list of famous left-handers is almost a historical Who's Who.
5. If lefties stand out in any field more than others, it seems to be in positions of leadership. This has come to the fore recently with the realization that a disproportionate number of recent US presidents are left-handed: Truman, Ford, Reagan, Clinton, Bush Sr., and Obama. Then one looks back into history, and discovers a pretty impressive list of other left-handed world leaders:
Alexander the Great
Julius Caesar
Charlemagne
William the Conqueror
Joan of Arc
Napoleon
Nelson
Bolivar
Victoria
Bismarck
Churchill
Gandhi
Patton
This is especially impressive when one remembers that, until the present century, left-handedness was actively suppressed, and much less common than today.
In a similar category are these left-favouring captains of industry:
John D. Rockefeller
Henry Ford
Bill Gates
Steve Jobs
One other field perhaps stands out above the others for left-handers. Trial lawyers:
Clarence Darrow
F. Lee Bailey
Melvin Belli
Of course, these lists are less impressive when you realize how dominant left-handers are in nearly all fields. Name a famous person, and you name a famous left-hander—or it almost seems so.
You've probably already heard all that left-brain-right-brain crap. Problem is, that's just what it is—crap. Nobody knows what it really means to be “left-brained” or “right-brained,” or if anyone really is either.
A lot of this can be explained by the environmental results of being left-handed. Even if handedness does not relate at all to any mental inclination or brain structure, it takes a certain scrappiness, a determination to stick with your instincts against the social world, to remain left-handed. As a result, possibly, lefties tend to be self-motivated, and thence to have an air of self-possession. They are also, if they stay left-handed, obviously not inclined to shrink from a fight. They must think for themselves, and believe in their own instincts.
This insistence on sticking to one's principles and refusal to back down from a fight probably takes its toll physically: in terms of stress, and in terms of facing down the odd railway locomotive. Hence, perhaps, the shorter lifespan.
Still, not everything fits with this thesis. It doesn't really explain the IQ difference. Nor, as we have noted elsewhere, does high IQ ordinarily correspond with positions of leadership, as it seems to here.
In any case, choosing lefties for leaders is probably a good idea for the rest of us. First, they seem more capable of getting past the prejudice against smart leaders. Second, they seem to demonstrate a certain courage, if not always a moral courage, that is important in leadership. While we might not all approve of every name on the list of famous lefties, it is reassuring to find that it contains none of the true monsters of history: no Hitler, Mussolini, Mao, Pol Pot, Vlad the Impaler; not even a conniving Nixon.
Which brings up a final reason to prefer lefties for leaders: the alternative—the other personality type most inclined to drive for leadership—is the self-aggrandizing psychopath. Not that all right-handers in leadership are this type, but it is an extremely common type in leadership, and going with a lefty is pretty strong assurance that this is not the case.
A few other famous lefties:
Bob Dylan
Charlie Chaplin
Beethoven
Mozart
Aristotle
Newton
Darwin
Goethe
Twain
Michelangelo
Da Vinci
Raphael
Tuesday, February 16, 2010
The Case for Child Labour
Europe is not climbing out of this recession easily. The German economy just tipped back into decline, Greece is on the brink of default, and the big French bank Societe Generale has publicly expressed its opinion that the collapse of the Euro is “inevitable.”
This fits with the thesis of our last post here, that the real problem is the demographic decline. It is worse in Europe than in America. Europe is looking more and more like Japan in 1997: while the crisis might have begun elsewhere, the underlying demographics may prevent Europe, like Japan, from ever rising from its knees.
Time for some creative thinking, surely. We cannot afford to feed any sacred cattle. Everyone is talking about ending pensioned retirement. Sure; but one other possible solution occurs to me; and perhaps a less painful one. Perhaps we have been looking at the wrong end of the spectrum. Perhaps the better option is to end the child labour laws.
This would do three useful things. First, it would expand the active labour force at a stroke, directly and more or less immediately fixing the essential immediate problem of a labour shortage. Second, it would make childbearing and childrearing more attractive: instead of representing a huge loss of income for parents, children could be self-supporting, as children are in less-developed, less urban countries; or were in Europe until the last century. This might reverse the drop in fertility, and so fix the underlying problem. Third, assuming that children would be paid lower wages, it boosts the competitiveness and so the overall economy of the developed world.
Of course, you are horrified. I know this sounds to many as heinous a suggestion as Jonathan Swift’s original “Modest Proposal.” Many are actively campaigning against child labour in the Third World.
But do we really, in this, have the children’s best interests at heart? Were child labour laws in the first place a matter of protecting children; or of boosting the price of labour? As laws were introduced to prevent child labour, and as part of the same movement, laws were also passed to prevent the employment of women. Both were thought to be humanitarian; making either work outside the home was considered exploitation. Assuming we were wrong in thinking this about women, the point logically applies to children too.
Certainly my wife, who grew up in the Third World, deeply resents the efforts of Western “do-gooders” to try to end the “exploitation of children” in the imaginary sweatshops of Asia. “What are the children supposed to do? Starve to death?” Even if literal starvation is not an issue, I think it would be rather better for the health of any child to find paying work than to be aborted.
Or, if we allow them to live, than to be forced to sit on their butts all day in school. You might want to argue that working is not fun for children--but do they usually enjoy school? No. School is unpaid labour, and it is mandatory. Try that with anyone else, and the proper term is slavery.
Do children hate work? Not so much. Those of us who, when young, had a paper route, or every now and then ran a lemonade stand, or had some money-making hobby, I submit, remember this experience with considerable fondness. Nothing beats the feeling of holding your own earned money in your hand. You want self-esteem? That’s self-esteem.
Would we, by allowing children to work, be limiting their future? I’m sure that argument will be made. But is it correct? How much schooling these days ends up being mostly babysitting? We discover that children who are home-schooled are able to cover the same material in about half the time. It follows that children in general could just as easily cover everything they are learning now in half the time, and have half their time free for earning. Do farm kids suffer academically from having their “chores”?
Working half-time could itself be a vital part of a proper education. The world of work changes much faster than the hidebound, bureaucratic world of school. By the time a kid graduates from college, with fourteen or sixteen years of schooling behind them, but no work experience, most of what they have learned is bound to be irrelevant. Had they been working, at least some of their working knowledge would still be current. We must, in any case, replace the current model with one of lifelong learning.
Over their twelve or sixteen years of schooling, it gets hard for some students—indeed, hard for the system—to keep their focus on the relevance of it all. With concurrent work experience, one would have a constant reality check. And anyone who goes to college, as is increasingly required of almost anyone, is all dressed up with nowhere to go for another five or six or more years after that. The frustration of this, I suspect, is what drives the angst, the utter hell, of modern adolescence—which was never seen as a traumatic period in the past, or in non-Western societies. If this enforced poverty and apartheid from the real world, forbidden to do real things, is not deliberate torture, it is certainly torture nevertheless. And in the end, after all this terrible sacrifice, how many graduate from a professional school in their mid or late twenties, only to find themselves locked for life in a job they discover they hate? Better to sample styles of work on the way there.
Yes, some parents may exploit their children for money, confiscating everything as soon as they come home. But consider: are such parents likely to treat their children better simply because they are of no economic value, but instead constantly cost them money? No, logically, the case is the reverse: at least, if the child’s labour is worth money to them, they will take better care of their children’s health. Being of some worth to them is the ultimate guarantee of any child’s safety.
Earning their own money and potentially being self-supporting also allows a child a decent chance of escaping the abusive family altogether. The present system, by contrast, seems perfectly designed to promote and enable child abuse.
Yes, we probably don't want to send children down the mines. In the old days, this was a real possibility, and child labour laws accordingly made more sense. We wouldn't want young bodies doing work that is dirty and dangerous. But with the shift from an industrial to a service economy, and with modern health and safety regimes, there is in principle little such work left anyway.
For poor kids, having some money to call their own may prevent many from dropping schooling sooner than they should, either because their families cannot afford to support them, or because they watch with envy others working and having money to spend. Some may find they hate the work available with relatively little schooling—so much the better. They will stick to their schooling as a result. Others may happen upon a job they really love. Perfect—let them drop out; they have the schooling they need. And if they want more later, the same half-time system remains available.
Nor will starting a family in the meantime make this option no longer possible. The kids can take care of themselves.
Saturday, February 13, 2010
The Real Cause of the Recession
An economist at the Vatican has suggested that the underlying cause of the current recession is not loose credit or real estate. It is infertility. The real estate bubble of the last few years was simply masking a deeper long-term problem which is now revealed.
To call economics an inexact science is to give it far too much credit. But what Signor Gotti Tedeschi of the Institute for the Works of Religion says has, to my mind, the ring of truth. People are our most valuable resource; they produce all other resources. No nation has ever prospered as its population declined. Witness Japan: in a dead stall economically for almost two decades, due to its rapidly falling birthrate and aging population. Welcome to our future. Japan is just two decades ahead of the world curve in this regard.
A home, and then possibly also a vacation or retirement home: these were the two biggest, and last big, expenditures all those baby boomers were going to make. Note that the recent bubble was biggest in retirement destinations: Arizona, Florida, Nevada, the coast of Spain.
That's been done. Now the boomer demographic is barrelling into retirement. They are going to start selling their main homes and downsizing. There are fewer coming in the next few generations. It will be a buyer's market, for real estate, for the foreseeable future. And for everything else--retirees spend less. Of course, they also produce nothing.
And remember this: while all economics may be voodoo science, demographic projections are rather reliable, at least for a few decades into the future. Numbers are numbers, and it takes twenty years to make a human generation.
So the real estate market is not going to pick itself up and resume its onward march anytime soon. And that, according to The Economist and others, was just about all the action in the US and European economies for the last good number of years. Without it, we would probably have been in recession all along. Now we will also have a huge surge in government pension and health costs, coupled with a huge drop in productive capacity, and in consumer buying power, throughout the developed world.
Hard to see how an economy can go through all that without grave economic consequences.
The rising tax burden to pay for entitlements will in turn suppress economic activity; Ibn Khaldun, the great Arab historiographer, posited that all polities eventually collapse through overtaxation. We're about to test his theory; the more so since most governments have responded to the current recession by going heavily into debt.
So, okay, the entire developed world is heading for a long, perhaps permanent, Japan-like recession. What about the developing world? Certainly, the dropoff in consumer spending in the developed world will hurt them. But their own consumers: might they pick up the slack?
It doesn't look all that promising, actually. For the past four decades, most of the Third World has been busy trying to get their fertility rates down. They accordingly face more or less the same crisis. First, forget China replacing the US as the world's dominant power—it has a bigger demographic crisis facing it than anyone, about to hit in just a few years' time. Not only did they stop making children--of the children they still made, a disproportionate number were boys, meaning they will never be able to find a wife and have children of their own. It's going to be like hitting the wall, even if China's political problems do not hogtie it well before this point. Russia? In worse shape demographically than anyone else in Europe. Of the members of the big leagues, India is rather better placed than anyone, though its fertility rate too is lower than it should be and declining. Indonesia looks relatively sound. Some nations of Sub-Saharan Africa are growing faster than anyone--Nigeria, Ethiopia--but they have yet to demonstrate that they have the other requisites for development: stable government, decent education, lack of corruption, and so forth. Among the already-developed nations, the US is doing better than anyone at keeping fertility levels up.
Some nations, including Canada, have thought that the solution to this demographic problem was to open up the borders to increased immigration. The assimilation problems this causes are now, however, beginning to cause severe strains, and the public is turning against the idea throughout Europe. In any case, it doesn't work: it turns out that recent immigrant populations quickly age as well, and their fertility patterns soon match those of the native-born. And, if the problem is worldwide, we are only offshoring it, not solving it.
So, how do we solve it? How can we get the engine running again, if only for our children's children, or our children's children's children? Nobody seems to know, because nobody seems to know, or professes to know, what's causing it. Nobody seems able to explain why fertility rates are dropping so quickly. Because so many women are in the workforce? Then why are Japan and Italy leading the pack? Because of growing wealth? Then why is the US somewhat resisting the trend? Because of religious beliefs? Then why is Iran flagging? Because of urbanization? Then why didn't it happen immediately postwar, when everyone poured into the suburbs? Instead, we had the baby boom.
As it happens, I have a suggestion, which seems to me to account fully for the data. The drop in fertility is proportional to the ready availability of abortion, reliable contraceptives, a social safety net, and old age pensions. Abortion and contraceptives, obviously, allow women to choose to have fewer children. Turns out, given the choice, that most do not want ten or even five. Good solid old age pensions mean less need to have children as an insurance policy against incapacity in old age. A strong social safety net from government means less need to rely on family for one's security in tough times, hence less need to have a large family.
This explains the anomaly of the USA, still at replacement level in defiance of the trend. The USA is simply less "socialized" than Europe or Japan. Communist and ex-Communist nations like Russia, Cuba, and China suffer from having a very strong social safety net, and stand out in terms of population decline.
The cure, then, might be a bitter one: roll back the social safety net. Go back to banning not only abortion, but contraceptives as well--as was the case essentially everywhere until around the 1960s. It seems just conceivable that our grandparents were right after all.
One other possibility also occurs to me. It may be just baby-boomer wishful thinking, but one hears more and more about the possibility of new life-extending medical technologies coming in the near future. A friend of mine who is in the field says something dramatic will probably be available within ten years. If so, the sudden jump in lifespans in the developed world might save us; at least long enough to make the necessary changes to our systems. Notably, this too, to be a solution, would require ending all talk of pensioned retirement at 60 or 65.
If this comes, it will only boost the current American economic dominance. The sudden jump in life expectancy would probably be felt here first, and would add to the already relatively healthy fertility rate. The US has another advantage, that is not always appreciated: while European immigration comes mostly from Muslim lands, and involves a radical change of culture, America is built for immigration, and its immigrant supply comes mostly from Latin America--on the whole, a similar culture.
I think this is enough to make it probable that China will never surpass the USA economically. If the baton passes, as it eventually must, it will take a while.
And it will more likely be to India.
Monday, February 08, 2010
Brains? We Don't Need No Stinkin' Brains!
It was Robert Kennedy's death, as I remember, that made me an atheist at age fourteen.
It was not the problem of evil—not “How could God allow this?” I understood the concept of free will well enough. It was a comment in some news story that, if Kennedy survived, at best, he would be “a vegetable,” given the serious damage to his brain.
This led me to the thought that the spirit or consciousness was too closely tied to the brain to allow for any life after death. Or for beings of pure spirit, like God.
By age eighteen, I had changed my mind completely. I had no choice—by then, I had had direct personal experience of God. But I’d also seen through what I thought by then was an obvious fallacy. So I was a bit surprised recently to hear one of the “new atheists”--I think it was Richard Dawkins--use the same argument: that people cannot think or even be without brains, and this proves that the soul cannot exist without the body.
The fallacy seems so simple, to me at least. The same observed phenomenon, of a “vegetative” state, could be equally accounted for by assuming the brain were the sine qua non of consciousness, or that it was only the conduit or bridge through which the soul was able to influence the body. Calling a lack of response “vegetative” merely illegitimately presupposes the former.
But it turns out I was not quite right. In fact, there seems to be legitimate empirical evidence emerging that the “conduit” or “bridge” hypothesis is the more probable one. For example, an article in yesterday's National Post cites a number of people who “woke up” hours after being declared brain dead, with the “loss of all brain function,” and in at least one case with no blood supply to the brain. This at least makes the link between brain and mind a bit more ambiguous: it seems to me that, even if the brain as an organ survived intact, if all activity ceased and then resumed, and the brain is the creator of consciousness, the consciousness that arose then should be a new one, with no memories and no continuing sense of self.
Otherwise, where did it go in the interim?
An accompanying article reports on a study (http://content.nejm.org/cgi/content/full/NEJMoa0905370) bearing directly on the proverbial “vegetative” state. A British/Belgian team has discovered that, with sufficiently sensitive equipment for detecting brain wave activity, such “vegetables,” despite “devastating brain damage,” can actually answer questions about their past lives, correctly. In other words, they are not vegetables at all. They are perfectly conscious and apparently mentally intact. What is gone is the ability to communicate with their bodies and so with the outside world.
Only some patients have responded in this way. But, knowing that some people with severe brain damage are still conscious, it becomes entirely possible that all the rest are too—but either less able to communicate, or less able to detect our attempts at communication.
Occam’s razor now comes into play: we know that at least some “vegetables” are conscious and mentally intact. We do not know if any are not. The simplest, and therefore more probable, hypothesis is that they all are.
Had enough? For there are also scores of cases of people living normal lives, and scoring within the normal intelligence range—or even above it—with “no detectable brain.”
In other words, the onus seems clearly on those who would deny the possibility of a spirit or mind existing without a brain to demonstrate their case; even the purely empirical evidence, such as it is, seems to go against them.
Even if all this were not true, reducing thought and consciousness to a physical wad of soggy tissue or the electrical impulses coursing through it is a nonsensical concept in philosophical terms: as obvious an error as sitting down in a restaurant and eating the menu instead of ordering the meal, meanwhile looking out the window at the sky in hopes of seeing time flying by. While one may be related to the other in some mysterious way, they obviously exist and are on different planes.
Friday, February 05, 2010
Was Salinger Demonically Possessed?
There seem to be two distinct schools on JD Salinger: those who remember him best for Catcher in the Rye, and those who remember him better for Franny & Zooey. Among Catholics, this division is expressed equally well as: those who detest Salinger (the Catcher crew) and those who see him at least as a spiritual fellow-traveller (the Zooeys). David Warren is of the former party: not to be hyperbolic or anything, he sees Salinger as writing “under direct demonic possession.”
I am in the second camp. Catcher in the Rye left relatively little impression on me. I saw Holden Caulfield as sincere, and his problem as real and fundamental, but I did not really identify with him, because I felt, even as a teenager myself, that he was reacting to it in entirely the wrong way—just thrashing about instead of looking for a solution. I surmised, and still think, the author thought the same, and was ironically distancing himself from Caulfield in many ways—firstly by making him young. I feel a lot of people, including Warren, as well as the several million who have seen Holden Caulfield as themselves, all totally alienated from one another together, have been missing the main point.
As I see it, Salinger was using Catcher in the Rye to lay out the problem—the problem of life in general, or life in what Jesus called “this world,” the objective, physical, shared, social world. Caulfield calls it “phoniness”; Jesus and John the Baptist called it “hypocrisy”; Socrates and Plato called it “sophistry.” Any reasonably intelligent adolescent figures out, by about Holden's age, that things here are rarely what they seem, and rarely what they clearly ought to be. Those in charge are generally liars, and nobody says the truth to anyone else.
But so far, in Catcher in the Rye, he was only setting out the problem. I believe Salinger then created the Glass family to explore and debate possible solutions. Caulfield had nobody to talk to about his perceptions; and this alienation was itself a large part of the problem. In a world of phoniness, whom can you trust? Salinger's Glass family, as a mental experiment, at least solves this problem: because they are all siblings, and already know each other thoroughly well, they are rather more likely to be straight with one another, and permitted to speak at the deepest emotional levels. Being exceptionally intelligent, they are allowed, in ordinary dialogue, to refer to anything and everything that might be relevant, however arcane the scholarship, however subtle the point.
Salinger can then use these characters to each represent a different possible path out of this thicket of phoniness, and have them by their interaction draw out the strengths and weaknesses of each possibility.
Seymour was the first and most obvious option, and so the first one Salinger explored in his stories: suicide. Obviously, that is not the option Salinger himself chose, in the end, since he lived into his nineties.
We know less about Salinger's other characters, because their choices were more complex. One of them, Waker, was already, when Salinger stopped publishing, a Carthusian monk—the religious life. Buddy represents the life of the writer, or artist, standing aloof and commenting on the world. Walt, “the only truly lighthearted member of the family,” represents the option of laughter, of seeing it all as absurd and meaningless and not caring; but Salinger seems to have already dismissed that option by blowing him up. Boo Boo represents the conventional life, trying to “fit in” and be “normal.” Zooey represents the life of an actor, the option of eternally wearing a mask for the world. Franny has an emotional breakdown—she is well-placed, at least, to be elaborated into an investigation of the always-popular option of going mad.
My guess is that it was Salinger’s plan to work out the issue by following through with the life stories of all of the Glasses, not really knowing himself which option would win out in the end. It was his plan for his personal spiritual development, which is one reason he felt no need to publish. My guess is that he has followed through on this, and his posthumous manuscripts will include the full biographies of all the Glasses.
I’m not sure who will win out, or whether that will even be apparent; other than that it’s pretty clearly not Seymour, Walt, or Zooey.
I’m kind of betting on the Carthusian monk. His name seems to suggest he had the inside track as of the 1950s. And Franny & Zooey already ended on a distinctly Christian note.
Written with a child on my lap.
Tuesday, February 02, 2010
What It Takes
The Intercollegiate Studies Institute administers a quiz each year to American college students to test how well they grasp basic civics. They do rather badly. I do not think the test is hard or obscure: I scored 32 of 33 correct myself, and I am neither an American nor trained in the field of political science.
But their test also keeps coming up with other fascinating results.
The latest round of testing, for example, discovered that elected officials actually score worse on their knowledge of how the US government works (44%) than do the general population (49%).
Surprising, isn't it? Why would we actually elect people to make our political decisions for us who know less about politics than we do? But I suspect the explanation is fairly simple. Unfortunately, it shows a flaw in democracy, which is, after all, as Churchill observed, the worst possible way to choose a government—except for all the others.
Pop quiz: do you recall high school? Were the brightest kids the most popular? Would one of them stand a great chance of being elected class president? Not likely—more often, in most schools, they were shunned as “nerds.” The captain of the football team—there's your class president.
So it remains in the grown-up world.
The average person more or less instinctively dislikes anyone smarter than they are. “Hates” might really be the more accurate term. If you don't buy this from your personal experience with the phenomenon of “nerdiness,” consider the history of the Jews: a small, identifiable group with a significantly higher average IQ than the surrounding population. They haven't always been treated terribly well, have they? Has nothing to do with religion: they were hated consistently by Christians, pagans, and Muslims. Has nothing, really, to do with race; they are virtually the same race as the Arabs, who hate them; their suffering is simply that of the unusually intelligent generally. The Hakka Chinese have a quite similar history in Southeast Asia, for the same reason.
Because everyone hates anyone smarter than they are, they are never going to vote for them. Accordingly, to get the vote of a majority of the population, one has to be or at least appear less intelligent than an overall majority of the population.
Think of the most successful politicians, in an electoral sense, and you find a common theme: they almost always have a public image as a dunce. Ralph Klein, Jean Chretien, Ronald Reagan; the mainstream press thought it was doing Reagan harm by making him out to be a fool. They thought the same about George W. Bush, and they think the same about Sarah Palin. They cannot see that this image is helping them with the public. Similarly, British propagandists thought they were doing Hitler harm by referring to him as merely a “little corporal.” But the German people were reassured by this reminder that he was a perfectly ordinary man. Eisenhower beat Stevenson by seeming to be less intelligent; and Truman beat Dewey on the same premise. I doubt anyone would seriously claim FDR was brighter than Herbert Hoover—as someone said, Roosevelt had a “second-class mind, but a first-class personality.”
There are, it is true, exceptions: when a nation is in serious crisis, people can be persuaded to bend their principles just a little, in their desperation. Relatively smarter politicians can also sometimes slip into office on the strength of coming from some social subgroup that the majority just naturally assumes to be less intelligent than they are: Clinton from the Ozarks; Trudeau from Quebec, in his day (but never popular in Quebec, where his intelligence was resented); Napoleon from Corsica; Lloyd George from Wales; John Kennedy as an Irish Catholic; Thatcher, in her day, as a woman. The irony is that most people assume this is a disadvantage; just the reverse. Being black vaulted Obama into office years before he was ready. But will we ever see a Jewish president?
Reagan, too, I think, was smarter than people understood—able to pull off his affable fool persona because he was an experienced actor. Woodrow Wilson, too, was at least an intellectual—but he only had to take 33% of the vote to get in, because the opposition was divided between two strong parties. The relatively smart Clinton similarly benefitted from Perot's two campaigns. Churchill, too, was bright—and only thrust into office by a war. At that, he was helped by his plodding, almost comic demeanour, and could not survive in office past the peace.
It is troubling to think how much better we could do if we could come up with a system that put the best, rather than the sub-par, in command. But nobody has managed it yet.
Perhaps until then, the best thing to do is to limit the powers of government whenever possible.
Monday, February 01, 2010
Good Neighbour Jerry
Here's a glimpse of the real Salinger, as he was known to his neighbours. Accept no BS about him being mad, eccentric, or a recluse.