Whenever I go out alone with my son, I can expect some unpleasantness. He is quite a well-behaved little six-year-old. And there is rarely a problem when his mother is with us. But it seems that when a man cares for children, he is subject to general contempt, most especially from women. Many women seem to think that they have the right to order around any man with a child, and they seem positively eager to complain about any inconvenience they have been caused. They never, in my experience, act this way with a mother. They wouldn’t dare.
Where a woman with a child is helped by all comers, a man seems to be obstructed wherever possible. It is an opportunity for bullies to get in their licks.
And never mind a man paying any attention at all to a child not his own. He is lucky if he is not openly accused of being a pedophile on the make.
This attitude is the furthest thing from equality of the sexes--and yet, while everybody claims to believe in equality of the sexes, nobody seems to remark on it. Equality of the sexes is the last thing “feminism” is about.
Similarly, current “progressive” thought holds that men are supposed to do half the housework. But let’s be frank: there is no equality in the kitchen. What happens if the man decides he wants to store the good plates here rather than there? Who decides?
Never the husband. Men who work in the kitchen inevitably work under the close direction of their wives. The wife is the supervisor; the husband is unpaid labour.
Don’t even get me going on “reproductive freedom.” Why is it that for women, the free choice to have or not have a child is a fundamental right, even if they choose to have sex with all comers, yet men have no say, beyond refusing all sex, in whether they have a child or not?
We will have equality of the sexes when fathers are given custody of children as often as mothers; when women are charged with sexual harassment as often as men; when men can choose as freely as women whether to work outside the home or not; and when there are as many women as men in prison.
Why are the feminists who claim to want sexual equality not lobbying for affirmative action on these fronts?
Wednesday, September 26, 2007
The Crisis in Canadian Manufacturing
It is always amusing to read the Toronto Star—I haven’t seen a paper copy for years, but picked one up while in Canada this summer. It is like stepping into a different world. One that might have been familiar to Alice Liddell, as a little girl, when she sat on the knee of Lewis Carroll.
There was Carol Goar ringing the secular church bells in alarm over the crisis in Canada’s manufacturing sector (“Lean times in industrial heartland,” Star, July 27th). Apparently—who knew?—they are in decline; manufacturing is moving offshore. Something must be done—and it must, she stresses, be done by the federal government specifically. She notes that government has shovelled quite a few million to a few corporations already, speaking approvingly of $150 million given by Ontario to the forestry industry. Never mind that forestry is not actually manufacturing—all government money is good. But this is far too little. Even the billion plus given to the auto sector is, for Goar, far too little. We need more government spending, and more transfer of funds from the poor to big corporations; that goes without saying.
I can’t but feel from this that the Canadian left—the Star’s constituency—is living in the past. The decline in Canadian manufacturing is news? I am in my fifties now, and I have been aware of it all my life. The town I grew up in used to have forty different manufacturing concerns. It now has two, and earns its living from tourism. This fifty-year time lag in reporting the “news” tallies nicely with the recent uproar on the left (recent meaning within the last month, not the last century) about what was in the Catholic Latin Mass before the 1950s. Indeed, the left generally gives the feeling that it has not yet emerged from the Great Depression. Perhaps it never will.
And it is indeed not just Goar, or just the Toronto Star. It must be more than coincidence that, when I spoke on the phone the next day to one of my leftist friends and noted how prosperous Toronto is looking these days, he responded that it was all illusory, because in fact Canada is losing its manufacturing. Manufacturing is apparently the only real source of wealth.
No doubt many thought the same two hundred years ago—that Britain was heading to hell in a handcart with all these new factories, because it was losing its agricultural base. Land was the only real source of wealth. But we used at least to call that kind of thinking what it was—conservatism. To the modern left, doom is always just around the corner, and we must run backwards as fast as we can just to stay in the same place.
And no evidence of increasing Canadian prosperity, no matter how dramatic, can be real. Things must always be going from bad to worse to worst. If it isn’t global cooling, it’s global warming. If it isn’t desertification, it’s the shrinking deserts. All change is frightening. Only the status quo is good.
Of course, there really is a long-term decline in Canadian manufacturing; no dispute there. Canada is broadly moving from manufacturing into services, as manufacturing moves offshore.
To leftists like Goar, of course, “services” means only one thing: flipping burgers at McDonald’s. (As Goar puts it, “low wage jobs in the retail or hospitality sector.”) Not a bad job at all, really, a lot more fun than factory work, with a better chance of advancement; but leftists hate McDonald’s. They have the prejudice of all idle elites against the stain of “commerce.” Pleasant and full of opportunity as a future at McDonald’s really is, though, in the real world “services” also means banking, insurance, medicine, lawyering, accounting—not especially poorly-paying positions. And many good blue-collar jobs in transportation—trucking, for example, Canada’s single most common occupation. Indeed, the more manufacturing moves offshore, the more jobs there will be in transportation. Others of a blue-collar bent might similarly enjoy piloting, air traffic control, barbering, cosmetology, construction, plumbing, electrical work, auto repair, boat repair, and on and on. All experiencing shortages, all well-paying, all services.
And, as Goar does not mention, or does not notice, the same people also have growing opportunities in the resource sector. This cannot move offshore; and Canada’s uranium mines and oil patch are booming.
Leftists will argue, and Goar does argue, that, regardless of what other, better, jobs might be available, the average manufacturing factory worker does not have the skills for them (and, patronizingly enough, they believe the average factory worker does not have the intellect to learn them).
Now, this, if true, is only a problem if the decline in manufacturing is happening faster than the retirement rate can naturally reduce its workforce. Can this be true if the decline in manufacturing has been going on for at least fifty years? No, it cannot—anyone in manufacturing when the trend first became visible is already retired. If there was a problem, by now it is necessarily solved.
Conversely, anyone in manufacturing now has had an entire lifetime to adjust, and has gone in aware of this issue and taking it into account. Are they really, then, in need of and deserving of government help, which is to say, of being supported financially by the rest of us? Are we obliged to save them from their own conscious choices?
Okay, let’s even say yes—and suppose that the money we have available to help others is unlimited, that there is no question even of setting priorities. Even so, if the decline in the manufacturing workforce is really faster than retirement can reduce it without dislocation, and manufacturing workers are really incapable of learning other work, this should logically show up in higher unemployment figures. Instead, the Canadian unemployment rate is down over the past fifty years or so, as Goar herself admits, with no sign of a recent jump. This despite the fact that the mass migration of women into the workforce over the same period ought logically to have required nearly twice as many jobs to be created, even leaving aside the growth in population from about twenty to about thirty million. And all this alongside growth in average income.
Odd, if there is a crisis at hand, that we have absolutely no evidence of it.
Goar and other leftists argue, in fact, that the apparent rise in jobs and employment masks people being forced to take worse jobs—i.e., flipping burgers at McDonald’s. (Perhaps even, God forbid, in the natural progression of things, ending up owning and operating their own McDonald’s outlet, or string of outlets.)
Let us even accept the dubious claim that former manufacturing employees are now commonly flipping burgers. It seems to me unlikely, since the people I see behind the counter at McD’s seem far too young to have had much seniority in any previous job. But let’s assume they are. Let’s even assume that working at McD’s is really a worse job than working in a factory—having worked both in a factory and in retail myself, I find this claim incredible, but let’s allow it. Does that justify the government taking money from the rest of us, and giving it to these former manufacturing workers, in compensation? Or worse, giving it to the shareholders of the corporations who employ them—since that is what Goar and other leftists actually propose?
I’d say just the reverse. For if Goar is right about everything else, she is demanding that the people actually flipping burgers at McDonald’s—the great mass of us—despite their pitiable situation, have their money taken away from them and given to others who are much better off, corporate investors and possibly their employees, in order to protect the latter’s privileged position.
How can this be morally justified? How, indeed, do Goar and the left justify protecting or subsidizing manufacturing in rich Canada by any means, since it necessarily means taking money and jobs away from the poor in other countries—people necessarily poorer and needier than any Canadian?
As Alice might have said, curiouser and curiouser. The modern left seems built on one premise: that by calling anything “left-wing” or “progressive” or “liberal,” you can get away with it, no matter how reactionary, selfish, racist, and classist it is.
Tuesday, September 25, 2007
Thoughts As I Lay Dying
Over the past year, bizarrely, I have suffered all the symptoms of not one but two different fatal illnesses, and twice, briefly, carried a terminal diagnosis.
It kind of makes you think. As Samuel Johnson said, “when a man knows he is to be hanged in a fortnight, it concentrates his mind wonderfully.”
I wouldn’t mind checking out all that much, really—we all die sooner or later, so it makes rather little difference whether it is sooner or later.
And most things in this world, the physical-social world, the world we leave at death, are only delusions. They are really the opposite of what they are commonly thought, and commonly claimed, to be.
Maybe that’s what the world is about. Maybe the world is a test; a puzzle, and it is up to us to solve it. If and when we do, we are ready for the next stage.
Four, or more, noble truths:
1. The world is a cock-up. Expect nothing, and don’t waste energy on it.
2. Trust only God—accept no substitutes.
3. If you’re good with God, it does not matter what anyone else thinks. If you’re not good with God, it does not matter what anyone else thinks.
4. Right and wrong are absolutes. They matter regardless of rewards and punishments.
5. God is God, and you are not.
6. God is in charge; there is no need for you to try to manage things. Things will go just fine without you.
7. It is better to be punished justly than to be rewarded unjustly.
8. When in need, pray. Believe, or not—it still works.
9. Any sin you can forgive in another when done to you must necessarily be forgiven by God when done by you. For God’s mercy is greater than yours.
10. You cannot hide anything from God. Don’t waste energy trying.
11. Religious belief is not a matter of faith or will. It is the best explanation for things as they are.
12. God is never silent. Everything we perceive is his word. Sometimes we do not listen.
By the way, I got better.
Sunday, September 23, 2007
Napoleon's Pyramid
Like most other apparently successful dictatorships, Napoleon’s seems to have been a pyramid scheme. He floated it all on exploiting conquered territories. This meant that continual conquest was necessary to keep the books balanced. Necessarily, this meant eventual collapse.
Hitler did the same; as a new book, Hitler’s Beneficiaries (Götz Aly), outlines, the finances of Nazi Germany were quite mad. Albert Speer states plainly in his memoirs that much of the regime’s early financial success was achieved simply by printing money, while preventing the sort of transparent financial reporting that would have made this apparent.
But this was a short-term solution. Hence Hitler was under a financial necessity to plunder the Jews; quite apart from any racist motive, he needed their money to keep the government afloat. Even the famous Kristallnacht pogrom was actually a matter of an urgent need to cover two billion marks in short-term debt quickly coming due. Fortunately for the regime, the Jews were small in number, clearly identifiable (so that they could be singled out without frightening the larger population), and unusually wealthy. The Knights Templar, in their day, fell to an avaricious and needy government of France in exactly the same way for exactly the same reason. So did the Jesuits, centuries later. It could have been anyone else fitting the description; it just happened to be the Jews.
But it was not enough—couldn’t be, because this was a pyramid scheme. By 1939, the Nazis were spending 36.8 billion marks a year, on weaponry and welfare, and only taking in 17 billion marks in annual revenues.
Hence, as Hitler actually said in so many words at the time, war was inevitable. Hitler needed a larger pool to exploit. He could not possibly back down, no matter what the offered compromise: he needed Czechoslovakia to plunder, and then he needed Poland, and then, when Britain did not fall, he needed the Soviet Union.
This insight explains several other things that would otherwise be puzzling. For example, when the Germans first took the Ukraine, they were hailed as liberators. They had a great opportunity to enlist the Ukraine’s manpower in their support against Russia; and the same was true of most other non-Russian areas they took. Instead, they raped the conquered lands ruthlessly, losing the support of the people and requiring large numbers of soldiers just to keep the areas pacified.
Why? Because they could not afford to do anything else. And by doing this, Hitler probably managed to keep Stalin and Communism in power in the Soviet Union for generations longer than would otherwise have been the case.
Again, this explains the supposed Nazi “prejudice against women” that allows feminists to claim for women the aura of a group opposed by Hitler. Hitler always resisted putting German women into the workforce to replace the German men sent off to war. Officially, this might have been because “women’s place was in the home.” But the financial truth is that women would have had to be paid, and the government had no money. It was therefore far better from their point of view to use slave labour from the occupied territories.
This seems to be the way of most, if not all, dictators. Napoleon III did it, too, if in a smaller way, by confiscating the estates of the followers of the House of Orleans, his predecessor. After that, he relied on shady financing for the reconstruction of Paris, until this scheme collapsed. When Bismarck attacked, France, now short of funds, turned out to have little more than a paper army. Iraq’s Saddam, similarly, was quite overtly forced to invade Kuwait because he owed them too much money as a result of his invasion of Iran. Others, with fewer military capabilities, are limited to exploiting helpless internal “enemies,” just as Hitler did at the start: Mugabe, Amin, Mao, Pol Pot. It’s the economy, stupid.
The most surprising thing about Napoleon is that the French still remember him fondly, despite the ravages his rule caused, despite his brutality, despite his sending hundreds of thousands of young Frenchmen to their deaths unnecessarily, despite the ultimate loss of international position and prestige. This seems to me a specific example of the French preference for illusion over reality. He put on a good party; that’s what matters, no matter how the morning after feels.
But by and large, people do seem to like this sort of thing. People seem too ready to buy cons from their government that are too good to be true. They want a man on a white horse; and they are even inclined to remember the good days fondly after they are gone, not making the connection of the beginning with the inevitable end. The willing suspension of disbelief is a powerful thing.
As for current pyramid schemes, I continue to suspect China of being one. I suspect it will all come apart some day, probably soon. The lack of transparency makes it possible to conceal too much. The current excitement about the safety of Chinese goods may be enough to do it. Not that cutting corners on safety or quality is enough to explain China’s stats—but it might cause a recession sharp enough to cause the whole house to collapse, if it is a Ponzi scheme based on continual expansion.
Friday, September 21, 2007
More on the Threatened Mermaid
One of the problems with ecological studies showing that all kinds of animals and plants are endangered is that they are commonly funded by governments, or by NGOs with a mandate to protect the environment. Government is by far the largest source of funding for research of all kinds.
For a balanced view, there are two better sources: business, and think tanks.
Most of what think tanks do is funded by government or by business, but they also do studies on their own as a means of advertising their services and their abilities.
Corporations have a lot of good reasons for looking into the effect of their actions on the environment—the need to conform to government regulations, PR, and the need for sustainability of their profitable activities. The best thing about corporate research, though, is that it is often the only counterbalance to government-funded research, which tends to dominate the field.
There are any number of examples of hype in the ecology/environmentalism field. It’s hard to figure out what is not hype. According to Dr. Patrick Moore, a co-founder of Greenpeace, for example, the fastest way to increase biodiversity in a rainforest is to clearcut it. Mature growth forest is far from the ideal habitat for all species. The tall trees tend to muscle out a lot of other species. (Dr. Moore is no longer a fan of Greenpeace.)
But more than that—and this is me talking again--the best habitat on land for biomass as well as biodiversity is probably a wildlife park or a zoo; next to that someone’s urban garden; next to that, a farm.
Qatar used to have almost no resident bird species. Now there are many—and all have appeared since the 1980s. Why? With greater population and greater wealth, there is a lot more greenery in the city streets.
Among themselves, the eco-types have a conscious strategy of targeting “charismatic species” for PR purposes. That is, animals that people find good-looking, cute, fuzzy. They look for that stuffed animal quality.
This is obviously clever in propaganda terms. But is it honest? And does it give us a clear view of the situation? If nothing else, this should be a red flag warning us that such organizations indeed have vested interests, and are skewing what they tell us to promote those interests. This sort of thing is advertising, and should be approached with the same caution that we approach all advertising. Except that, unlike business advertisers, NGOs and government agencies are not liable to fraud charges.
Let’s look more closely at the “threatened” polar bear, current poster child for the ecologists:
http://www.csmonitor.com/2007/0503/p13s01-wogi.html?page=1
Here is a study (hosted by the WWF; the file name suggests it is from the IUCN Polar Bear Specialist Group) that relies, like most such studies, on computer projections. Broken down by separate population groups, it seems to indicate that most populations face no real threat:
http://assets.panda.org/downloads/statusofthepolarbear_14thworkingmtg_iucn_pbsg.pdf
We do not know if such computer projections are accurate, mind; they have no history of being so.
Note that the polar regions share the advantage of the oceans for alarmists’ purposes: they are the part of the earth on which we have the least data. Therefore, the worst can be imagined.
The one group that knows best, in terms of actual observation in the field, is the Eskimos (aka Inuit). They have been insisting for years that polar bear populations have been growing rapidly. Of course, they, like government and NGOs, have a vested interest: they want to be free to hunt them. But they also, as hunters, will probably want to be sure their hunt is sustainable.
http://www.propertyrightsresearch.org/2007/articles01/davis.htm
And this looks like an example of the most reliable sort of research, something done by a think tank purely to demonstrate their research abilities:
http://www.ncpa.org/pub/ba/ba551/
Meanwhile, the real story is not that species are disappearing—so far as we can tell, they are not. It is that new species keep being discovered. Not just tiny microorganisms, either. A large creature may be easy to see, if one is around, but by their nature, large creatures, especially large predators, are more likely to be rare throughout their range.
Here’s one specific example that is, literally, close to home. When I was a kid, I was interested in monsters, as many kids are, and I read a few books about such things. One of the monsters I kept coming across, along with Sasquatch, the Loch Ness monster, The Abominable Snowman, and so on, was the celebrated “Eastern cougar.” The claim was that a few people in New Brunswick, in the Canadian Maritimes, kept spotting large wild cats. But there are no big cats in Eastern Canada: the nearest big cat is the cougar or mountain lion, supposedly ranging no further east than the eastern foothills of the Rocky Mountains. And there are no mountains in New Brunswick, 3000 kilometers away.
Then, believe it or not, my own parents one summer insisted they saw a big cat bound across a country road in front of them.
Then, a few years later, someone actually shot a specimen, and dragged it in to the nearest qualified scientist.
In Quebec.
It is no longer a mythical monster. The textbooks now all suddenly say the cougar or mountain lion ranges as far east as the Canadian Maritimes.
Just today, in the local Qatar paper, it was announced that the local natural history group had discovered a previously unknown species of sea slug on one of their weekend hikes.
Sea slugs are not that small.
We know what large species are present in densely populated, scientifically-oriented Europe. It gets iffier and iffier as one moves away from Europe. Amateurs out on weekend strolls are still “finding” new species in much of the world.
Meanwhile, actual, documented extinctions of species—as opposed to computer models and projections—are not common. Most known species seem to stubbornly keep surviving. And formerly “extinct” species are even rediscovered.
Consider this Wikipedia page, a list of extinct animals of Europe:
http://en.wikipedia.org/wiki/List_of_extinct_animals_of_Europe
It lists 72 extinctions in all of Europe since the Pleistocene epoch.
Seventy-two, in one of the world’s most densely populated continents, since the Pleistocene.
Only a minority of these extinctions is recent: just 26 of the 72 are noted to have occurred since 1600.
And two extinct species have been rediscovered in the last few years.
And how quickly are new species being discovered? An MSNBC story reports scientists finding new fish species at a clip of two a week, 106 a year. All would automatically go on the “endangered species list,” of course. Indeed, all new species discovered in, say, the last fifty years or so are probably rare enough to qualify as endangered. That’s five thousand, isn’t it? And that’s fish alone, never mind insects or protozoa—except that the rate of new discoveries is probably accelerating over time.
http://www.msnbc.msn.com/id/6565772/
Another paper lists 57 new neotropical mammals discovered in the eight years between 1992 and 2000. That’s mammals alone, and that’s the neotropics alone. Say seven a year, or roughly 350 over fifty years, all presumably on the endangered list.
http://www.blackwell-synergy.com/doi/pdf/10.1046/j.1472-4642.2000.00080.x?cookieSet=1
The IUCN currently reports 16,119 species of animals and plants “threatened with extinction.” You do the math: newly discovered species alone account for a remarkable share of that list, don’t they?
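To make the back-of-the-envelope arithmetic explicit, here is a quick sketch in Python. The discovery rates are the figures quoted above; the fifty-year window, and the assumption that everything found in that window still counts as rare, are this article’s rough guesses, not measured values.

new_fish_per_year = 106        # MSNBC figure: new fish species found per year
new_mammals_per_year = 57 / 8  # 57 new neotropical mammals over 1992-2000
years = 50                     # assume species found this recently are still rare

recently_discovered = years * (new_fish_per_year + new_mammals_per_year)
iucn_threatened = 16119        # the IUCN count quoted above

print(round(recently_discovered))                          # about 5,656 species
print(round(100 * recently_discovered / iucn_threatened))  # about 35 percent

On these numbers, fish and neotropical mammals alone would account for roughly a third of the threatened list, before counting insects, protozoa, or anything else.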
Next point: compare that figure, of 16,119 species “threatened with extinction,” with the number of those actually documented as extinct in all of Europe since the Pleistocene: 72. Seems as though in the wild, few species take this “threat of extinction” very seriously.
Thursday, September 20, 2007
The Medium is the President
The best thing that has happened to the Hillary Clinton campaign so far is Bill Clinton’s book tour. Perfect for them, just as the campaign season begins. He is really, really good on the stump—one of the best retail politicians ever. He can do folksy sincerity better than anyone. Fake that, and you have it made. Hillary’s strongest argument for being president is that if you vote for her, you also get Bill.
Except—there is a real danger that seeing Bill again may remind voters how different Hillary is in this regard. “I knew Bill Clinton. Bill Clinton was a friend of mine. And Hillary, you’re no Bill Clinton.” Clinton was and is hard to dislike. Now that he has no pressing need to lie, he is incredibly pleasant to listen to. Even his first Secret Service detail spotted that star quality—they gave him the code name “Elvis.” But Hillary? There is just something about her—a sweaty Nixonian, “not comfortable in her own skin” quality—that instinctively turns people off, even people who should support her ideologically.
I think this issue is extremely important to voters, in the end—more important than pundits seem to realize. And the people are right to consider it important. The US President, unlike a Canadian prime minister, is a symbol of the nation. Much of his effectiveness is based on his ability to sway public opinion—otherwise, he’s no more than a super-police-chief. Beyond vetoing it, he has no control over legislation. The “bully pulpit,” as Teddy Roosevelt put it, is the whole office.
This being so, it is important to the health of the US that the president be someone people enjoy seeing on their TV screens, and enjoy listening to. Someone with a “warm fuzzy” quotient. Bill Clinton has that. Hillary Clinton does not. She is somehow cold and sharp.
Cases in point: people trusted FDR’s voice on the radio. People loved the look of JFK on TV. Reagan had the skills of a seasoned actor, making him terribly hard to dislike: “the Teflon president.”
Counterexamples: Nixon, or Johnson, who were universally considered competent, but not personally likeable or physically attractive.
And their presidencies both ended in something like disaster.
Making it worse for Hillary, some others in the present race also have a high warm fuzzy quotient.
John Edwards, granted, is a bit too good-looking. People don’t like that. It makes them feel inadequate. Ann Coulter picked up on that. And all that stuff about haircuts, that’s what it’s really about. (Lucky for Bill Clinton that Monica Lewinsky was not especially good-looking. Unlucky for Gary Hart that Donna Rice was.) Better to have one or two obvious flaws, like Bill’s pudginess and bulbous nose. It helped Adlai Stevenson when he was photographed with a hole in his shoe, countering the general feeling that he was just too darned intelligent to be permitted to be president.
But Barack Obama? Get away. You just want him to succeed, with his eager schoolboy look. But not too good-looking: fortunately, he has big floppy ears.
There is a good chance, if he does not slip up, that he will wipe the floor with Hillary in the primaries. If he does, he will be a much stronger candidate in the general election.
On the Republican side, Mitt Romney has the same problem as John Edwards. Too perfect. He even looks like George Washington. If he wants to be president, he needs to muss up his hair, wear his tie askew, and fart in public.
Rudy Giuliani, with his Pinocchio nose and Obama ears, does not have Romney’s problems. And he is another real charmer—very like Bill Clinton in his style. New York folksy—a guy from the hood. John McCain also has formidable skills—he came very close to getting the Republican nomination in 2000 on sheer charm. Because of it, he is a real threat to be a “comeback kid,” like Clinton in ’92. Charm lets you do that.
And Fred Thompson, of course, has the skills of a successful actor. Not to be sneezed at. A magnificent voice, an impressive height—the taller candidate almost always wins—and pleasantly ugly. Not too good looking, but expressive, fun to look at. (McCain suffers a bit here—his face is less expressive than it might be).
It is impossible to overestimate the importance of this warm fuzzy factor in an actual presidential election. Leaving aside all other considerations whatsoever, and running through all the presidential elections since the Second World War, the more likeable candidate almost always wins.
1948: Truman versus Dewey—Dewey, famously, was doomed by his moustache. Made him look sinister.
1952: Eisenhower versus Stevenson—as my father summarized it, Eisenhower came across as a regular guy, Stevenson as an “egghead.” So he voted Eisenhower.
1956: Stevenson was still an egghead.
1960: Kennedy versus Nixon—as we all know, Kennedy looked better on TV.
1964: Johnson versus Goldwater—this one really tests the thesis. Johnson was not likeable, and as I remember Goldwater, he was. Yet Johnson won in a historic landslide. Why?
There was a catch in Goldwater’s voice, which gave it a harsh and even a desperate-sounding quality. This was crystallized in the public consciousness as a suspicion he might actually be deranged. “In your guts, you know he’s nuts.”
1968: Nixon versus Humphrey—okay, Nixon was not likeable, but nobody in their right mind could have handled listening to Humphrey’s squeaky voice for four years.
1972: Nixon versus McGovern—McGovern’s voice was even more annoying than Humphrey’s. There was something wrong with his adenoids or his testicles, surely.
1976: Ford versus Carter—close call. Both were generally likeable. But Ford acquired an image as a bumbler. Once people start laughing at you, not with you, you lose the gravitas to be president.
1980: Carter versus Reagan—no contest. Reagan could pretty well mop the floor with anyone on personal likeability and stage presence. Meantime, Carter’s constant smile could only work for so long before it started to grate. You kind of wanted to plant your fist in his face after a while.
1984: Reagan versus Mondale. Nobody could stand up to Reagan. But Mondale also had a bit of a pip-squeak to his voice. Must be a Minnesota thing.
1988: Bush I versus Dukakis—close call, on the face of it. But perhaps by bad luck, Dukakis got filmed looking very goofy in a tank, and that image killed him. People start laughing at you, you’re toast. The Willie Horton thing mostly just underlined the impression that Dukakis was a lightweight. But in the end, didn’t this also have a lot to do with his lack of physical stature next to the gangling Bush? A little guy for a big job?
1992: Bush I versus Clinton—Elvis has entered the building. Bush looked wooden by contrast.
1996: Clinton versus Dole—Dole looked a bit like Nixon—too much five o’clock shadow. This made him look sinister. Too much edge to his voice, and too sharp a tongue. He always sounded a bit sarcastic.
2000: Gore versus Bush II—Gore looked like an inflated balloon; he did not know where to put his hands. Bush came across as a regular guy.
2004: Bush II versus Kerry—as above. Kerry’s wheezing and whining and doleful face: who wants that on their TV for four years? Who wants a mortician for president?
I think, based on this, you can pretty much predict the outcome of the presidential race now. If the Democrats nominate Hillary Clinton, she will lose to any likely Republican nominee. If they nominate Obama, it will be a close contest against anyone on the Republican side.
For the Republicans, Thompson is probably their strongest choice--unless his acting skills fail him to the point of actually looking like an actor playing a part.
Except—there is a real danger that seeing Bill again may remind voters how different Hillary is in this regard. “I knew Bill Clinton. Bill Clinton was a friend of mine. And Hillary, you’re no Bill Clinton.” Clinton was and is hard to dislike. Now that he has no pressing need to lie, he is incredibly pleasant to listen to. Even his first Secret Service detail spotted that star quality—they gave him the code name “Elvis.” But Hillary? There is just something about her—a sweaty Nixonian, “not comfortable in her own skin” quality—that instinctively turns people off, even people who should support her ideologically.
I think this issue is extremely important to voters, in the end—more important than pundits seem to realize. And the people are right to consider it important. The US President, unlike a Canadian prime minister, is a symbol of the nation. Much of his effectiveness is based on his ability to sway public opinion—otherwise, he’s no more than a super-police-chief. Beyond vetoing it, he has no control over legislation. The “bully pulpit,” as Teddy Roosevelt put it, is the whole office.
This being so, it is important to the health of the US that the president be someone people enjoy seeing on their TV screens, and enjoy listening to. Someone with a “warm fuzzy” quotient. Bill Clinton has that. Hillary Clinton does not. She is somehow cold and sharp.
Cases in point: people trusted FDR’s voice on the radio. People loved the look of JFK on TV. Reagan had the skills of a seasoned actor, making him terribly hard to dislike: “the Teflon president.”
Counterexamples: Nixon and Johnson, who were universally considered competent, but not personally likeable or physically attractive.
And their presidencies both ended in something like disaster.
Making it worse for Hillary, some others in the present race also have a high warm fuzzy quotient.
John Edwards, granted, is a bit too good-looking. People don’t like that. It makes them feel inadequate. Ann Coulter picked up on that. And all that stuff about haircuts is really about the same thing. (Lucky for Bill Clinton that Monica Lewinsky was not especially good-looking. Unlucky for Gary Hart that Donna Rice was.) Better to have one or two obvious flaws, like Bill’s pudginess and bulbous nose. It helped Adlai Stevenson when he was photographed with a hole in his shoe, countering the general feeling that he was just too darned intelligent to be permitted to be president.
But Barack Obama? Get away. You just want him to succeed, with his eager schoolboy look. But not too good-looking: fortunately, he has big floppy ears.
There is a good chance, if he does not slip up, that he will wipe the floor with Hillary in the primaries. If he does, he will be a much stronger candidate.
On the Republican side, Mitt Romney has the same problem as John Edwards. Too perfect. He even looks like George Washington. If he wants to be president, he needs to muss up his hair, wear his tie askew, and fart in public.
Rudy Giuliani, with his Pinocchio nose and Obama ears, does not have Romney’s problems. And he is another real charmer—very like Bill Clinton in his style. New York folksy—a guy from the hood. John McCain also has formidable skills—he came very close to getting the Republican nomination in 2000 on sheer charm. Because of it, he is a real threat to be a “comeback kid,” like Clinton in ’92. Charm lets you do that.
And Fred Thompson, of course, has the skills of a successful actor. Not to be sneezed at. A magnificent voice, an impressive height—the taller candidate almost always wins—and pleasantly ugly. Not too good looking, but expressive, fun to look at. (McCain suffers a bit here—his face is less expressive than it might be).
It is impossible to overestimate the importance of this warm fuzzy factor in an actual presidential election. Leaving aside all other considerations whatsoever, and running through all the presidential elections since the Second World War, the more likeable candidate almost always wins.
1948: Truman versus Dewey—Dewey, famously, was doomed by his moustache. Made him look sinister.
1952: Eisenhower versus Stevenson—as my father summarized it, Eisenhower came across as a regular guy, Stevenson as an “egghead.” So he voted Eisenhower.
1956: Stevenson was still an egghead.
1960: Kennedy versus Nixon—as we all know, Kennedy looked better on TV.
1964: Johnson versus Goldwater—this one really tests the thesis. Johnson was not likeable, and as I remember Goldwater, he was. Yet Johnson won in a historic landslide. Why?
There was a catch in Goldwater’s voice, which gave it a harsh and even a desperate-sounding quality. This was crystallized in the public consciousness as a suspicion he might actually be deranged. “In your guts, you know he’s nuts.”
1968: Nixon versus Humphrey—okay, Nixon was not likeable, but nobody in their right mind could have handled listening to Humphrey’s squeaky voice for four years.
1972: Nixon versus McGovern—McGovern’s voice was even more annoying than Humphrey’s. There was something wrong with his adenoids or his testicles, surely.
1976: Ford versus Carter—close call. Both were generally likeable. But Ford acquired a reputation for bumbling. Once people start laughing at you, not with you, you lose the gravitas needed to be president.
1980: Carter versus Reagan—no contest. Reagan could pretty well mop the floor with anyone on personal likeability and stage presence. Meantime, Carter’s constant smile could only work for so long before it started to grate. You kind of wanted to plant your fist in his face after a while.
1984: Reagan versus Mondale. Nobody could stand up to Reagan. But Mondale also had a bit of a pip-squeak quality to his voice. Must be a Minnesota thing.
1988: Bush I versus Dukakis—close call, on the face of it. But perhaps by bad luck, Dukakis got filmed looking very goofy in a tank, and that image killed him. People start laughing at you, you’re toast. The Willie Horton thing mostly just underlined the impression that Dukakis was a lightweight. But in the end, didn’t this also have a lot to do with his lack of physical stature next to the gangling Bush? A little guy for a big job?
1992: Bush I versus Clinton—Elvis has entered the building. Bush looked wooden by contrast.
1996: Clinton versus Dole—Dole looked a bit like Nixon: too much five o’clock shadow. This made him look sinister. Too much edge to his voice, and too sharp a tongue. He always sounded a bit sarcastic.
2000: Gore versus Bush II—Gore looked like an inflated balloon; he did not know where to put his hands. Bush came across as a regular guy.
2004: Bush II versus Kerry—as above. Wheezing and whining and a doleful face—who wants that on their TV for four years? Who wants a mortician for president?
I think, based on this, you can pretty much predict the outcome of the presidential race now. If the Democrats nominate Hillary Clinton, she will lose to any likely Republican nominee. If they nominate Obama, it will be a close contest against anyone on the Republican side.
For the Republicans, Thompson is probably their strongest choice--unless his acting skills fail him to the point of actually looking like an actor playing a part.
Tuesday, September 18, 2007
Are Mermaids an Endangered Species?
A friend is worried about the plight of the polar bear. Apparently, with global warming, they are suddenly endangered, at least according to the IUCN and the NRDC. The claim is that, with less ice in the Arctic, they will lose the diving platforms from which they hunt for their prey.
And the “US Commission on Ocean Policy” has come out strongly in favour of a move to “significantly increase federal spending on ocean science and education.”
I can’t get excited.
The IUCN is essentially a consortium of people who have a vested interest in convincing the public that at least some species are endangered. If none are, the IUCN ceases to have any reason to exist or to be funded. The same is true for the self-appointed “Natural Resources Defense Council.” As for the “US Commission on Ocean Policy,” could anyone really expect a commission so named ever to come out with the opposite finding: that the government should give them less money in future? That there is no need for such a commission, and the government has wasted money by funding it? Does that sound even vaguely consistent with human nature?
Rare individuals can sometimes be honest. Committees never are.
This is the basic flaw in too many, and quite possibly all, of the sources calling for more money to be put into the preservation of various species, and into ecological research generally. The calls come from people who stand to gain money and power if we buy the claim.
Of course, this is a bit of an ad hominem argument on my part. The fact that they have an obvious vested interest means that what they say should never be taken at face value; their claims and their methodology must be carefully examined. Nevertheless, their studies might be correct.
The IUCN comments on the polar bear, which tops their list of endangered species worldwide, suggest otherwise, though. They are purely speculative—polar bears _might_ run into a problem in future due to global warming, if there is global warming. They make no reference to the recent studies (e.g., see http://thescotsman.scotsman.com/international.cfm?id=143012005) suggesting that the polar bear population is rising rapidly, not even to explain why they discount them. This suggests intent to deceive.
And it bodes ill for the rest of their list—if the polar bear is their best shot, they don’t have much of a case.
Bearing this bias in mind, it is well to realize how limited our knowledge of what goes on in the biosphere really is; we are not much better at understanding or predicting its complex interrelationships than at predicting the weather, or climate change. All are systems that have so far proven too complex for reliable computer modeling. Because this is so, studies can be made to appear to show almost anything.
Here’s an example: is the number of species known to exist in the world growing or declining? Answer: it is growing. We are still discovering new species all the time, while documented extinctions are rare.
Of course, being Darwinians, we believe these “new” species have always been there; we’ve just never noticed them before, in all of human history.
Still, our practical experience is of a growing, not a shrinking, species list.
But consider this as well: if we are so bad at spotting species that have always been there, how sure can we ever be that a given species is “extinct”? Indeed, many species, once thought to be extinct, have since been spotted again in the wild.
And how about this? Every time a new species is spotted, it goes immediately onto the “endangered species list.” Necessarily—it must be rare, or someone, out of six billion people and over five thousand years of recorded history, would have seen it before.
The natural impression this produces is that more species are becoming endangered every day. My friend’s source, the IUCN, reports that “16,119 species of animals and plants” are “threatened with extinction,” as if this were an important number. Yet much of it may simply represent new species discoveries, not a threatened decline in biodiversity at all.
We know even less, of course, about what goes on in the oceans than about what is happening on the land. This makes the oceans an especially rich field for those who want to fudge data—and it is therefore telling that they seem to get the lion’s share of the attention of the doomsayers.
And next to the oceans, we know least about the polar regions. Intriguing, therefore, that it is from here that warnings come about supposed rapid global warming, ice shelf collapse, and holes in the ozone layer.
Travellers’ tales have always come from such places. A few hundred years ago, the inhabitants of the North Pole had tails, or wore a bright red suit and drove reindeer. Or Saturn ruled a temperate paradise where, beyond a whirlpool, people lived forever. Just as, in the oceans, there were vast sea serpents, below which beautiful mermaids lived in their coral palaces, in a place called Atlantis.
Sunday, September 16, 2007
Some Citizens are More Equal than Others
Canada has joined the US, Australia, and New Zealand in voting against the UN Declaration on the Rights of Indigenous Peoples. There is, of course, a hue and cry about Canada voting against human rights.
Right. When you see the US, Australia, and New Zealand together on one side of an issue, and Saudi Arabia, Cuba, China, Pakistan, Azerbaijan, and Russia—all current members of the UN Human Rights Council—on the other, who is more likely to be defending human rights?
To put it in the most charitable light possible, the proposed charter—now an official UN resolution—is self-contradictory. It requires that indigenous people be treated as equal to all others (Article 2), then immediately contradicts this by giving indigenous peoples a wide range of rights not available to other citizens.
Governments are obliged, for example, to protect indigenous groups from “any action”—not any _government_ action, but any action by anybody—with the “aim or effect” of depriving the aboriginals of their “ethnic identities” or “cultural values.”
Apparently, if an aboriginal person fails to think like all other aboriginal people, a crime has been committed—perhaps by that aboriginal, but certainly by the nearest non-aboriginal. What values one may choose or not choose to hold, as an indigene, are strictly limited. If an evangelist convinces an Indian to convert to a new religion, for example, isn’t he depriving him of some element of his ethnic identity? Many already argue strongly that he is—see the furor about religious residential schools in Canada.
What about a converted aboriginal who manages to convert a second aboriginal? Has he committed a crime? Or is this crime only possible for non-aboriginal persons? How about a teacher who teaches them to read and write? Or even how to use a wheel? None of these are present in traditional Canadian Indian culture. Has a crime been committed?
Indigenous people also now have the right to “practice and revitalize their cultural traditions and customs,” specifically including their “religious and spiritual traditions.” Governments may not interfere.
Okay; but what if their cultural traditions involve a violation of the human rights of others? What if they traditionally practice, say, human sacrifice, or cannibalism, or slavery, or polygamy, or the exposure of unwanted grandparents, or the need to kill another adult human in order to be recognized as a full member of the community? In the past, indigenous groups in Canada did all of these things. It would seem this is now their inalienable right as aboriginals.
Even if this is okay, can the rest of us do the same?
Canada objected in particular to the requirement that no law be passed affecting indigenous people without their prior consent, through their own institutions.
And Canada should know.
Canadian aboriginals are not just one people. They are as different from one another as Scots, Hungarians, and Mauritanians. There are over 600 recognized native governments in Canada. Now, remember how hard it was to get the consent of ten different provinces in order to amend the constitution? Multiply that by sixty—and for any law, not just constitutional amendments.
Now apply the principle to other groups, as equality demands: one would need the consent of all organized groups in Canada in order to pass any law. In effect, any legislation would have to pass unanimously. Pass a law against picking pockets? Not if the pickpockets’ union objected.
Aboriginals are also apparently entitled, as a matter of “equality,” to “special measures” to improve their economic and social conditions.
If this is not special treatment, what meaning is left in the term? And what can “special measures” mean—given that all honest governments, as a matter of course, already seek to improve the economic and social conditions of all citizens as much as they can? Doesn’t it necessarily mean taking money, resources, jobs, and opportunities from other citizens and giving them to aboriginals? Whatever you might want to call this, “equality” is not the right word. This, in fact, is much how Hitler started out with the Jews—partly on the basis that the Aryans were the indigenous people, the natives, and the Jews were foreign colonizers.
Another article in the resolution grants aboriginals the right to “conservation and protection of the environment and the productive capacity of their lands… and resources. States shall establish and implement assistance programmes for indigenous people for such conservation.”
This is an odd right—it looks like a “right” never to fail. Wish we all had that. Given that the indigenes already hold their own lands and resources, who is it but they themselves who could possibly be depleting the productive capacity of their lands and resources? Aboriginals are guaranteed bailouts by the rest of us if they destroy their own property. A non-aboriginal, on the other hand, might instead be charged with arson, or fined for pollution.
Indigenes are also granted “intellectual property” rights over their “cultural heritage.” This “cultural heritage” includes their “genetic resources.”
This implies that aboriginals are personally responsible for their own genetic makeup as well as their entire culture. Do all Italians collect a royalty whenever anyone uses Italian DNA, or views the Mona Lisa? And, if we are really going to hold aboriginals responsible for their own genetic makeup, aren’t we opening a door to a rather unfortunate possibility? That they can just as readily be blamed and punished as rewarded for it?
The resolution also reserves the right of traditional governments not only to select their own membership (Article 33), but also to “determine the responsibilities of individuals to their communities” (Article 35).
Literally, this means any individual can be declared by any aboriginal government to be their rightful subject. Once so designated, they can be legally forced to do whatever that government chooses. And there is no escape.
There is a word for that, when done by non-aboriginals.
The word is slavery.
There are other practical problems. In demanding such special treatment for a special group designated “indigenous,” the UN resolution neglects to define what it means by the term “indigenous.” It is, after all, merely a “social construct”; indeed, a polite fiction. With one hypothetical exception, no country really has an indigenous population. All peoples started out somewhere else; nobody has sprung from the earth where they stand. Conversely, all nations have previous inhabitants, and most, where these were not massacred wholesale, still retain populations of them.
There is, apparently, a reason for this omission, and it is the obvious one; it is not mere oversight. No UN body has ever been able to agree on a definition.
This alone makes the entire resolution frighteningly arbitrary—any government is free to enforce or not enforce it to the benefit or detriment of any group, ensuring unequal protection under the law.
It is, in sum, an instrument of tyranny.
Wednesday, September 12, 2007
On Jihad and the Muslim Oppression of Women
I attended a talk at our campus today by a former Texas Protestant preacher who, quite some years ago, converted to Islam.
He was good, of course: a professional talker. He has his own local TV show. Lots of folksy jokes, only a few of which I had heard before. Granted, no intellectual depth, and a whole lot of repetition. Nor was he honest or clever enough to avoid repeating as truth a typical bit of the Black Legend against the Catholic Church. He claimed, vaguely, that “a Catholic Church council 1600 years ago debated whether women had souls.”
There is a reason why he had to be vague, and it suggests dishonesty on his part: vagueness makes a claim harder to check. The reference seems to be to the Council of Macon. No, you’ve never heard of it, because it does not appear in any list of the Church Councils. Indeed, there is no reference to it of any kind in the Catholic Encyclopedia. This is because it was not an ecumenical Church Council, not what we normally mean by the term “Church Council.” It had no authority to speak for the Church. It was just a small regional gathering of bishops in France, called by themselves—a “council” only in the generic sense of a meeting.
But there is also no evidence that it debated any such question.
We have the records from the actual council. There is no mention in them of either women or souls. All we have to the contrary is a claim by St. Gregory of Tours that at this or some other unidentified French council of about this time, one bishop argued that the term “man” in the Bible did not include women. No more, and no less; and none present, according to Gregory, agreed with him. Gregory reports the incident, apparently, because the bishop’s claim seemed so shocking, so absurd, to Gregory and to his contemporaries.
Now, here’s the great irony: if this means even one bishop believed women did not have souls, it means that feminists today do not believe women have souls—for they make exactly the same claim, that the term "man" does not include women. Yet nobody is shocked when they say it.
In other words, much of what Dr. Yusuf Estes said was probably misinformed. But he was fun to watch, anyway.
The Rev. Dr.’s topic was “Why are women not equal in Islam?” An interesting extension of my recent posting here on abayas.
He pointed out that the majority of converts to Islam are women. And the reason for this is simple: women get a better deal under Islam than men do. They are not equal, because they are superior.
The abaya is only the beginning. In Islam, a man who marries is obliged to fully support his wife financially. She has no financial obligations towards him. Legally, she can do as she pleases: work or not work. If she chooses to work, her money is then her own.
It is an overwhelming advantage for women: the same advantage a slaveowner has over a slave.
What of the man’s status within the family? Dr. Yusuf pointed out that the best the Quran said for the husband and father was that he was “imam” of the family. This does not imply dominance—the word for that would be “malek,” king. “Imam” means simply “leader in prayer.” Moreover, the word derives from the root “Uma,” which is the word for “Mother.”
In other words, it means the father’s position is to be “the mother during prayer.” And “mother” is roughly cognate to “leader.”
How can this be construed as elevating man above woman? Rather, at most, it elevates man to equality with woman.
But only for a few minutes a day.
Dr. Estes also asked the audience to recite in unison—as they were able to do, even in English—one of the best-known Hadith.
Someone asked Mohammed to whom, after God, a mortal owed his obedience. “To the prophets,” Mohammed explained.
“And after that?”
“To their mother.”
“And after that?”
“To their mother.”
“And after that?”
“To their mother.”
“And after that?”
“To their father.”
Kind of makes the point.
Oppression of women? Right.
Of course, all this would be about equally true of Christian traditions—that they give the advantage overwhelmingly to the woman in most things. Which is one reason churches are mostly full of women, not men. The only real difference is that Islam still holds strongly to these values, while the West has mostly fallen away from its religious values.
In passing, Dr. Yusuf made another important point, on the meaning of the word “kaffir.” This is the word usually translated into English as “infidels” or “unbelievers.” People quote passages about fighting kaffirs, about “jihad” against kaffirs, as if they referred to Christians and Jews, as if they justified aggression against them.
They do not.
The proper meaning of the word “kaffir” is “someone who covers.” It is actually etymologically related to the English word “cover.” It is very like the Christian term “hypocrite,” literally, “one who wears a mask.”
It refers properly to someone who accepts the truth of Islam, and yet conceals it, or conceals his own acts counter to it.
No honest Christian can be accused of “covering up” his lack of adherence to Islam. Exactly as no honest Muslim can be accused of being a hypocrite.
And the calumnies the Quran pronounces against kaffirs are very similar to the calumnies the Bible pronounces against hypocrites.
An N-Bomb
Those who follow this blog regularly may have noticed a certain absence of paid advertising recently.
The reason for this, I think, is my recent blatant use of "the n-word": "nigger."
Kind of makes you understand why AP did not dare use it, even though it was the crucial information in a straight news piece.
Kind of makes you understand why there are so many important things the mainstream media dare not say. Their livelihoods, after all, depend on those paid ads pouring in.
Nor are the advertisers to blame here--it's not their business to alienate people. It is just the way the world works, and a measure of how strong the social pressures for conformity can be.
Thank God for the blogosphere.
Tuesday, September 11, 2007
Fun with Abayas
In a world survey of happiness, guess which countries scored highest for the happiness of women relative to men? Nine of the top ten: Afghanistan, Iran, Egypt, Turkey, the UAE, Bangladesh, Palestine, Jordan, and Morocco. All are Muslim.
It is just a lot more fun, it turns out, to be a Muslim woman than a Muslim man.
How does the anti-religious left reconcile this with their claim that women in these countries, and in Islam, are “oppressed” or “repressed”?
I can’t imagine. But did they ever really believe it? No, those who complain about the “treatment of women” in Muslim societies are not opposed to religion because they are concerned about the welfare of women; they are opposed to religion, and “concern about the welfare of women” is a polite fiction, a mental veil, to mask their true focus.
It’s all, after all, about getting naked.
That's all "women's liberation" was ever about.
For women, though, wearing an abaya is lots of fun. It confers the power of invisibility—you can see others, but they cannot see you. This is one of mankind's deepest fantasies. It’s like having tinted windows in your car, or wearing sunglasses, or wearing a mask; only much more so. It gives you a huge strategic advantage over everyone you meet.
Men, of course, are not allowed to wear them. That would be a serious crime.
But for women, wearing an abaya is, contrary to much nonsense in the West, completely voluntary.
They do it because it is fun.
Monday, September 10, 2007
Pyramid Powers
Busy at my own work, I was half-listening to two other college professors chatting. Something about Von Daniken and Velikovsky--they both enjoyed them, and took them quite seriously. One was advising the other what books to read next. Then something about the Mayan pyramids in Mexico being identical to the early pyramids in Egypt. But “the Church had destroyed the Mayan writings.” So we’ll never know the connections.
The International Catholic Conspiracy had struck again.
These were very educated people, holders of advanced degrees. These are the people educating our young.
I have been around many ivied water coolers and faculty lounges, and have heard many such urban legends there. This overheard conversation was about par for the course.
How is it that so much of what such well-educated people “know” is superstition and prejudice?
Let’s look at this bit about the Catholic Church suppressing Native American culture.
In fact, what records we have of the Mayans were preserved by Catholic clergy. The Popol Vuh, the Mayan sacred epic, was preserved and translated into Spanish by Father Francisco Ximénez in 1702. Without him, we would perhaps know nothing of the Mayans.
And this is true throughout the Americas. In most places where the native peoples have a writing system, it was devised for them by missionaries. Where they have any kind of written records of their past, or any literature, these were written down and preserved by missionaries. Where the languages have been preserved, it has been through grammars and dictionaries made by missionaries. Where we know the old legends and beliefs, these were recorded by missionaries. Missionaries worked hard to preserve native culture. In Kamloops, BC, for example, Father Jean-Marie Raphael LeJeune founded and published the Kamloops Wawa from 1891 to 1923: a regular newspaper in the Chinook language, reflecting Native concerns.
Native culture was changed forever (one might say it was “destroyed,” but that is debatable) by plagues, alcohol, and the sudden availability of higher-quality goods and more efficient technologies from abroad. It was not destroyed by Christianity.
Granted, Christianity was one part of this mix. Christianity replaced the native religions, just as it had earlier replaced native religions across Europe. At the same time, it did what it could to protect the natives from alcohol and plague.
And the change from native to Christian beliefs was hardly by compulsion. I, too, would be pretty eager to embrace a new faith that condemned human sacrifice, slavery and torture, preached the equality of man, and protected me from curses, loss of soul, and demonic possession. Never mind that it was the core of the culture that had produced steel, navigation, and so forth.
Was this a great loss for native culture? Surely, any believing Christian must believe it was a great gain, for the culture and for every native individual. Perhaps we Christians are wrong—but the natives of that day were perfectly free to make up their own minds on that, and their own decision ought to be respected. In any case, if in this Europeans “destroyed” native culture, we must believe equally that the culture of Europe itself was earlier “destroyed” by this same foreign doctrine. Some do believe this. The Nazis did.
But why do two college professors believe something so obviously false?
Whenever a great evil or a great good occurs, whenever right and wrong appear in the world in the raw, a standard process begins. All of us who have a vested interest in concealing the truth—which probably means all of us—start to work, spinning silken threads out of our own guts, to conceal matters under thick cobwebs of disinformation. The guilty are exonerated and rewarded, the good punished and blamed. We do this so we can live with ourselves, for we know we are too often not good and not right.
And so the evils of Fascism and Nazism are now commonly blamed on the Catholic Church, which was at the time almost the only voice against them.
And so pedophilia, which was almost accepted by Kinsey and the Sixties “sexual revolution,” is now commonly blamed on the Catholic Church, which was at the time almost the only voice against it.
And so racism is now commonly blamed uniquely on White Anglo-Saxons, on the British and the Americans; note Andrew Young’s famous claim that the English invented racism. But these were, historically, the people who fought hardest against Hitler, against slavery, against caste in India. These were the people who most promoted the doctrines of human rights and the equality of man.
And so racism now commonly masquerades as “anti-racism.” White Anglo-Saxons are now racially stigmatized themselves, by “anti-racists,” in terms hauntingly similar to those once used against Jews. White men are stigmatized in terms hauntingly similar to those once used against blacks.
And so our universities and colleges, because they have been so successful as the bastions of free inquiry and deep thought, are now where the cobwebs most gather. Where thought must stay safe and shallow, and inquiry be carefully proscribed. Where absurdities and plain lies must be embraced as being just as likely to be true as certain truth. Because we are all frightened of truth; we all have something to lose from it.
The truth about pyramids, of course, is that they are simply the easiest high structures to build.
Watch any child piling blocks.
Sunday, September 09, 2007
The Willing Suspension of Disbelief
The most interesting thing about human beings, next to consciousness itself, is what Coleridge called the “willing suspension of disbelief.” As Coleridge pointed out, we are not only able but eager to “pretend” any number of things we know are not true. This is how art works: we knowingly buy into an invented world.
Take, for example, a Hollywood movie. We know perfectly well that it is completely imaginary, that the people we are watching are not real people, but actors acting the parts. We have seen the same actors in other movies, in other parts. We even know more or less how it must end—we know, for example, that the hero is not going to get killed. We know, as Chekhov observed, that a gun visible in a drawer early in the movie will be used by the final curtain. Yet we readily become quite wrapped up in it, feel real emotions for the images on the screen, real fear and concern, “forget” that we know the ending, “forget” everything outside the movie itself.
In fact, mostly we do not really forget. It is a question of splitting off our consciousness, so that one part of our mind knows all this, while another does not.
It is a talent even the smallest children have. It is the ability to “make believe.”
We are the same with sports and games: while they are perfectly useless, and what happens in them means nothing beyond themselves, we become very powerfully wrapped up in their progress and outcome. We yell; we cheer; we demand that the referee be hanged. Then we go home and go about our lives.
And we find this experience infinitely enjoyable.
This is a good thing; and it is a bad thing.
For something like the same thing happens with fanaticism, with true believers of all kinds. We are perfectly able, when presented with an attractive and emotionally satisfying theory, to “believe” things that are obviously false to reason and to evidence. Once people have a comfortable fixed idea, they will positively ignore evidence that contradicts it. As Churchill once observed, people occasionally stumble upon truth. But most of us just stand back up, dust ourselves off, and continue on our chosen way.
But the same ability gives us release, as Aristotle noted, from the stresses and the tribulations of life. When things are bad, we can step into a good book, or a beautiful painting, and escape the horrors of the everyday.
This profound human ability was mistaken, by Freud and Jung, for an “unconscious,” or “subconscious.” But this way of looking at it is completely wrong. For at least the vast majority of us, I would think—certainly for me—we are perfectly able to distinguish the dream from the waking world, and are never fully unconscious of either. We are not genuinely unconscious when we are reading a book; we have simply split our consciousness, so that we are aware of two levels of reality at once. As Blake described it: “For double the vision my Eyes do see / And a double vision is always with me.”
This ability of ours to split our consciousness is of obvious value in explaining the nature of the Trinity. For, if we can split our consciousnesses in such a way, so surely can God, who is much greater than we, creating separate “persons,” including one provisionally unaware of his Godhood in order to share completely in our sufferings. Father, Son, incarnation, Holy Ghost.
It is also helpful in understanding the “problem of evil”—why this world, though created by an all-powerful and good God, has undesirable things in it. As Aristotle explained, in this game of suspended disbelief, we actually enjoy the experience of fear and pity—we go to the theatre or the cinema repeatedly to experience them. Children play tag in part in order to get scared; and no game is fun if you cannot possibly lose. So too, this world can be seen, as Shakespeare said, as a stage, and our little lives as dreams or sports we are enjoying in the manner of an instructive performance.
Life, in the end, is a two-ring, or a three-ring circus. On the one hand, there is our little diurnal performance on the stage. On the other, happening concurrently, is our eternal life in heaven. And also perhaps again, eternally present, the New Jerusalem at the end of time—the perfection, not just of our individual souls, but of all things. Given the latter two, the unhappy events of the former one are, perhaps, sheer playful joy. And perhaps we are all along conscious, in some part, of the latter two, even as we lose ourselves, by willing suspension of disbelief, in the former.
The trick is to remain always somewhat aware of the reality behind it all: of God and of his kingdom. And perhaps the point of life is learning that trick.
Take, for example, a Hollywood movie. We know perfectly well that it is completely imaginary, that the people we are watching are not real people, but actors acting the parts. We have seen the same actors in other movies, in other parts. We even know more or less how it must end—we know, for example, that the hero is not going to get killed. We know, as Chekhov observed, that a gun visible in a drawer early in the movie will be used by the final curtain. Yet we readily become quite wrapped up in it, feel real emotions for the images on the screen, real fear and concern, “forget” that we know the ending, “forget” everything outside the movie itself.
In fact, mostly we do not really forget. It is a question of splitting off our consciousness, so that one part of our mind knows all this, while another does not.
It is a talent even the smallest children have. It is the ability to “make believe.”
We are the same with sports and games: while they are perfectly useless, and what happens in them means nothing beyond themselves, we become very powerfully wrapped up in their progress and outcome. We yell; we cheer; we demand that the referee be hanged. Then we go home and go about our lives.
And we find this experience infinitely enjoyable.
This is a good thing; and it is a bad thing.
For something like the same thing happens with fanaticism, with true believers of all kinds. We are perfectly able, when presented with an attractive and emotionally satisfying theory, to “believe” things that are obviously false to reason and to evidence. Once people have a comfortable fixed idea, they will positively ignore evidence that contradicts it. As Churchill once observed, people occasionally stumble upon truth. But most of us just stand back up, dust ourselves off, and continue on our chosen way.
But the same ability gives us release, as Aristotle noted, from the stresses and the tribulations of life. When things are bad, we can step into a good book, or a beautiful painting, and escape the horrors of the everyday.
This profound human ability was mistaken, by Freud and Jung, for an “unconscious,” or “subconscious.” But this way of looking at it is completely wrong. For at least the vast majority of us, I would think—certainly for me—we are perfectly able to distinguish the dream from the waking world, and are never fully unconscious of either. We are not genuinely unconscious when we are reading a book; we have simply split our consciousness, so that we are aware of two levels of reality at once. As Blake described it:
Now I a fourfold vision see,
And a fourfold vision is given to me;
’Tis fourfold in my supreme delight,
And threefold in soft Beulah’s night,
And twofold always. May God us keep
From single vision, and Newton’s sleep!
This ability of ours to split our consciousness is of obvious value in explaining the nature of the Trinity. For, if we can split our consciousnesses in such a way, so surely can God, who is much greater than we, creating separate “persons,” including one provisionally unaware of his Godhood in order to share completely in our sufferings. Father, Son, incarnation, Holy Ghost.
It is also helpful in understanding the “problem of evil”—why this world, though created by an all-powerful and good God, has undesirable things in it. As Aristotle explained, in this game of suspended disbelief, we actually enjoy the experience of fear and pity—we go to the theatre or the cinema repeatedly to experience them. Children play tag in part in order to get scared; and no game is fun if you cannot possibly lose. So too, this world can be seen, as Shakespeare said, as a stage, and our little lives as dreams or sports we are enjoying in the manner of an instructive performance.
Life, in the end, is a two-ring, or a three-ring circus. On the one hand, there is our little diurnal performance on the stage. On the other, happening concurrently, is our eternal life in heaven. And also perhaps again, eternally present, the New Jerusalem at the end of time—the perfection, not just of our individual souls, but of all things. Given the latter two, the unhappy events of the former one are, perhaps, sheer playful joy. And perhaps we are all along conscious, in some part, of the latter two, even as we lose ourselves, by willing suspension of disbelief, in the former.
The trick is to remain always somewhat aware of the reality behind it all: of God and of his kingdom. And perhaps the point of life is learning that trick.
… be cheerful, sir.
Our revels now are ended. These our actors,
As I foretold you, were all spirits and
Are melted into air, into thin air:
And, like the baseless fabric of this vision,
The cloud-capp'd towers, the gorgeous palaces,
The solemn temples, the great globe itself,
Yea, all which it inherit, shall dissolve
And, like this insubstantial pageant faded,
Leave not a rack behind. We are such stuff
As dreams are made on, and our little life
Is rounded with a sleep.
Friday, September 07, 2007
The N-word
I just read a bizarre piece from Associated Press headlined “Comedian gets hook for using the N-word.”
I had to read the whole article just out of curiosity. What the heck is “the n-word”?
Strictly speaking, I still don’t know. The article itself never dares to even report the word. It is, apparently, just too offensive. It seems, though, to be the word “nigger”—albeit used here by a black comedian, in jest, before a black audience.
And what is wrong with the word “nigger”? The article does not say. Its offensiveness is taken for granted.
It all strikes me as odd. Not to say loopy. Not to say hysterical. Yes, it is a bit of a mispronunciation of the French “nègre,” meaning “black.” And the French are a bit fussy about how foreigners pronounce their language. But offensive? Too offensive to appear in print? This still seems a bit over the top.
I do a dictionary check to find out more.
Random House reports “The term NIGGER is now probably the most offensive word in English.” Wow. Worse, apparently, than “m**********r,” say? Let alone “d**n”? (Although another dictionary insists that “nigra” is “even more offensive than ‘nigger.’”) Where have I been?
Okay—so it is offensive, by dictionary definition. But still, why? For the proper definition of the term, according to the same source, is simply “A. black person. B. a member of any dark-skinned people.”
So here’s what I don’t get. I’m afraid that, if this is the most offensive word in the English language, it follows that it is worse to be a black person than to fornicate with one’s mother, say, or to spend eternity in hell. To have dark skin is the worst possible thing that you could do.
Is that really the message we want to convey?
Let’s say it is. Even so, racial equality demands that we officially consider “whitey” to be equally offensive. It seems to be “nigger’s” exact parallel, right down to the offensive intent. If we object to one, and not the other equally, we are assuming either 1) it is far more creditable to have pale than dark skin, or 2) black people have more rights than white people.
But then, if words that are intrinsically neutral in meaning can become offensive by being used by someone or other with intent to insult, racial equality requires that we consider almost any term for any ethnic group just as offensive. Certainly, “Jew” is commonly used as a pejorative. So nobody must henceforth use the word. Same for “WASP,” or “Yank,” or “Anglo.”
Somehow, though, as a Mick and a Paddy myself, not to mention an Anglo, I just cannot muster a sufficient level of outrage. Who cares? It occurs to me that black people in North America today must have remarkably little to worry about if they can really spare the energy for this crusade against a harmless word. I envy them.
There may be a few words referring to race or ethnicity in English that are genuinely offensive. But I can’t think of a single clear example.
English can insult on other grounds. GI—now there’s a word that genuinely is insulting in its meaning. And yet, ironically, nobody considers it offensive. “Guy” for males is another example—it seems to imply that one is either a stuffed doll to be paraded to general contempt, or a traitor worthy of being hanged, drawn, and quartered.
Other languages can insult on racial grounds. In Chinese, Westerners are commonly referred to as devils or demons; and one common word for “foreign” also means “ugly.”
But English, with its international viewpoint and intercultural history, is actually exemplary in this regard.
And it is worth remembering that, in the end, even a genuinely insulting word cannot hurt us. The matter is trivial.
Nor can banning a word end the insults. If someone wants to insult you, and mere intent to insult makes a word an insult, any conceivable word will do it just as well as “nigger.”
So all objecting to “nigger” as a word really manages to do is to limit everyone’s freedom of speech. And once the principle is solidly established, that language can and should be limited in order to limit or prevent unwanted thoughts or sentiments, the next limitation might not be equally trivial. Legislating language to control thoughts is the nightmare that Orwell conjures up with “Newspeak” in his novel 1984.
And there is another consideration. Every time we change the language, we limit mutual comprehension. This is a grave matter—this is what language is for. We are pulling one more random bolt out of the community-making machine.
Oldspeakers, in particular, unless they happen to go to the right schools and attend the right parties, may miss the change—and be persecuted for it. A great way, in the end, to enforce a class system.
As for anyone who has already died—they are out of luck.
But this also means we are gradually cutting ourselves off from the greatest minds of the past, and from our cultural heritage. For example, this particular word “nigger” was used, without pejorative intent, by such writers as Mark Twain, Joseph Conrad, and Charles Dickens.
Now, let’s suppose we are at least wise enough not to ban these books from our libraries and our schools for the sake of “the n-word.” Even so, we have caused a grave loss. If we insist that “nigger” is pejorative, our children, if not we ourselves, will henceforth quite likely no longer be able to understand these books properly. We will see the word “nigger” and assume it means that Twain, Conrad, and Dickens hated black people. Which is not their point at all.
A similar case, in my own experience: Yeats’s magnificent late poem “Lapis Lazuli,” to my mind one of the very best poems written by the best poet in English, ends with the punchline “Their ancient, glittering eyes are gay.” It was written in the 1930s. It is now almost impossible, I find, for a student to hear it without breaking into a smirk. A masterpiece of our common culture has thereby been destroyed, like an indelible moustache painted on the Mona Lisa.
Losing Twain, Conrad, and Dickens—that is no trivial loss. What could be worth it?
Wednesday, September 05, 2007
Science in the Classroom
Teaching today is entirely scientific. Prospective teachers must learn to do “action research” with their classes, then use it continually to determine what does and does not work.
Unfortunately, the first thing that clearly does not work is action research. Though in theory every single teacher now teaching is doing this much of the time, nothing has ever come of it—no significant evidence that any one teaching technique consistently works better than any other. As I understand it, this is true even of teaching itself: there is no significant evidence from “action research” that students taking a class learn their subject any faster than students studying it on their own.
True, there are at any given time well-known “good” teaching techniques that all teachers are expected to use. Today, we are supposed to be “student-centred.” We are supposed to use a “communicative approach,” and/or a “constructivist approach.” We must be “interactive” and use “intrinsic motivation.” There are even sometimes studies to back some of this up.
But none of these studies ever turn out to be reproducible. It becomes a matter merely of fad or fashion. Invariably, in five years’ time, another study will conclusively disprove the current theory, and “good” teaching will involve rejecting all of the practices popular today.
This dog don’t hunt. This dog is chasing its own tail. This dog has made a dumb error, called “social science.” Teaching can never and should never be “scientific.” Human beings are far too complex and aware for science ever to understand.
Teaching, the New Testament points out, is a “gift of the spirit.” That means good teaching comes direct from God, as an inspiration—like prophecy, or speaking in tongues. You cannot teach a teacher to teach, any more than you can teach a writer to write. One is either a teacher, or one is not.
That also means that all teaching is essentially religious in nature.
We used to know this. Teaching used to be a, if not the, core religious activity. All schools, from Grade One through the Ph.D., were originally run by churches—not just in Christian culture, but in all religions. Buddhist monasteries were the local school, in Buddhist countries. In Muslim countries, the school was attached to the local mosque. In Confucian countries, the Confucian shrine was the local school, and vice versa. A Jewish synagogue is a shul, a school, no more, no less.
Not all great teachers are also religious figures; nor all religious figures also great teachers. But the exceptions are few. We all know Confucius was a teacher. But the Buddha’s essential nature, distinguishing him from uncounted unremembered enlightened beings who came before him, was that he alone took the trouble to teach the dharma to his fellow man—he is defined as a teacher. In the New Testament, Jesus is consistently addressed by his followers as “rebbe”—literally, “teacher.” One’s “guru,” the essential religious role in Hinduism, is one’s teacher; the Brahmin class is at once the priestly and the teacherly caste. It is the same function. So too in Confucianism. Every Catholic bishop is also by definition a teacher—that is his defined role.
One could learn a great deal about teaching techniques by reading the great books of the world’s religions. They often address the issue of form as well as content. Jesus, for example, with his parables; Confucius with his analysis of ritual, example, and ceremony; Buddhism with its koans; the Talmud with its illustrative stories. This, along with the techniques of other great teachers like Socrates, Aristotle, the Baal Shem Tov, St. Ignatius Loyola, Nagarjuna, Hui Neng, Shammai, Rumi, or Hillel, is what student teachers should be studying.
But that is not enough; content is no more arbitrary than form. Only religion ensures that we teach our students the right things—and if we do not, at best, we are wasting their time. How, without religion’s values, do we know what is of value to teach? How can we build a path without knowing its end?
In fact, to sit a child or adolescent in a classroom for five or seven hours a day, and bombard him with all sorts of things, and not touch upon religion, is to do him terrible harm. It is teaching him to avoid religion and leaving him without direction. If the child is at all bright or perceptive, this will cause a crisis some time in adolescence, when he notices that there is no apparent point to anything, no ultimate meaning or goal.
Consider this, and one must conclude that modern, “scientific” teaching is not just a failure.
It is a clear and present danger for our children.
It is teaching the best of them to commit suicide.
Monday, September 03, 2007
Mistah Kurtz--He Dead
This is the way the world ends
This is the way the world ends
This is the way the world ends
Not with a bang but a whimper.
TS Eliot perhaps had it right—the lines are from his grand poem “The Hollow Men”—describing the Modern Era as a slow slide into non-existence.
We hinted at this last post, musing that the “counterculture” was a suppression of emotion in favour of desire. This involved a reduction of both ourselves and other human beings from subjects into objects. It involved a reduction of ourselves and other humans into no more than our physical bodies. And this, we noted, was part of a wider, ongoing movement in this regard.
This is what Eliot was talking about, in “The Hollow Men.”
Back in the Sixties and Seventies, the general claim made by the terminally cool, at least in my earshot, was that we were all “out of touch with our bodies.” The press laments eternally that we are not taking proper care of the earth, or of our earthly flesh—global warming, the “obesity epidemic,” pollution, smoking, and the like.
As ever, this common view is the perfect inverse of the truth. Fitness crisis? So how come we are living much longer than ever before? Ecological collapse? So how come we grow materially wealthier, year by year, and grow more food, while more people have potable water than ever?
The body is in fact the only part of our being that we care about; and “nature” has become the thing we worship. That’s what distinguishes modern Western civilization from all others: its sheer physicality.
“Psychology” means, literally, “knowledge of the soul.” What is our approach to the soul? To stuff the body with chemicals. Are physical approaches likely to be the most direct and most efficient ways to treat spiritual illnesses? Not bloody likely. I hear one mental hospital introduced pets to the wards. From that point, the suicide rate was zero. But such straightforward solutions—a pathetic little bit of love for and from another being—are anathema, because they admit the existence of something more than bread alone. We know nothing, any more, of the soul. Most of us do not even believe it exists.
I recall recently a TV commercial which urged us all to “take care of the inner you.”
How is this to be done? Physical exercise.
There is no longer any inner us. As Eliot observed.
We are the hollow men.
We are the stuffed men.
Leaning together,
Headpiece filled with straw.
Alas!
We know our bodies, though, inch by quivering inch; usually, we indulge their every whim. We refuse our bodies things if and only if it is for their own good. Dieting, for example; or working out in the gym—and this is what we take to be virtue. Merely seeking to rot longer.
I watch proud professionals avoiding the stairs for the elevator, the sidewalk for their car, and then working out for an hour or more on their treadmills, and I cannot but think Eliot—and Jesus—got it right. This materialistic world is very like the classic Hades, the land of the dead without hope of salvation. What do these treadmill walkers resemble, running nowhere forever, but Sisyphus in Tartarus, forced eternally to push a stone up a hill and watch it roll back down again? Whom do these desperate dieters resemble but Tantalus, punished in the underworld by a vast banquet always just beyond his grasp?
It is all slow death by fast desire.
Never mind the suppression of love—which Eliot captures in such lines as:
Waking alone
At the hour when we are
Trembling with tenderness
Lips that would kiss
Form prayers to broken stone.
…We grope together
And avoid speech.
For that’s all others are to us, any more—broken stones. Mere things.
We also, equally, suppress hate and anger.
We fear violence to the point of hysteria, for example—to the point even of banning the spanking of children. To the point of condemning war as war, even a just war in self-defense.
But it is not violence, per se, that we fear, is it? We have no problem at all with abortion, which is violence in the extreme. It is not even a crime; it does not even cost us money to do it. It is the anger and the hate and the courage that can accompany violence—the strong emotions—that we really fear. For we think it important to give stiffer penalties for crimes committed in a state of passion—“hate crimes”—than those done in cold blood, out of pure self-interest or desire. This seems mad, and is a reversal of the wisdom of the ages.
And it is not just emotion, either. It is everything except the body: it is reason, conscience, and imagination, too, that we fear. For they, too, can go counter to our will, to our desires.
The subjugation of reason to will and desire is apparent already in the philosophy of Adolf Hitler; but it is even clearer in postmodernism and in the movement called “human potential.” People maintain nowadays, almost as a matter of course, that we can actually choose our own beliefs; and that the choice we make is perfectly random and not open to dispute. As one young respondent remarked to Ted Byfield, “just because a thing is true does not mean that I have to believe it.”
Reality and reason must bend to our desires, rather than our desires to reality or reason.
Here the stone images
Are raised, here they receive
The supplication of a dead man’s hand
Under the twinkle of a fading star.
This is the dying of the light; this is death’s twilight kingdom. The light of reason has gone out.
It is less obvious that we have also abandoned imagination. The rediscovery of the lost imagination, after all, almost looks like the whole point of the counterculture. Yes, it was recovered by a purely physical means, by drugs, but it was claimed that the trip experience in the end recovered the knowledge that the imagination and its objects were real and powerful.
And yet, we did not really accept, or endorse, their reality. We accepted them as real only in so far as they seemed to correspond with our desires. We did not admit, then or even now, that the trip experience might have a dark side, that it might teach us unhappy things; that it might require things of us; or that the world of the imagination might include monsters and demons as well as angels and cute gnomes. We remained casual tourists in this strange foreign land. It remained, for us, as Freud and Jung call it, "subconscious" or "unconscious."
Just as we have subverted “love” to mean only sex, we have subverted the word “dream” to mean only our desires. When we speak of our “dreams,” we do not really mean our dreams, or even our daydreams, both of which we ignore. We mean our wants, our desires. To us, those who take their dreams seriously, as the early Christians did, are mad, “raving” (the word means, literally, “dreaming”).
And so, as Eliot realizes, we have systematically reduced our being, our consciousness, our selves, to the vanishing point: nothing but a rag stuffed on a stick. Nothing but the body and a bundle of ravening desires.
Unfortunately, of all the things we are, it is the body that knows death; it is the physical world that is subject to time. So, by reducing ourselves to no more, we have committed suicide. We are already in the dead land, the desert of prickly pears.
And death is as much our present as our future: bodies without souls are dead bodies. If we still move, it proves only that we are vampires.
This is the truth we must not speak. And so we fear any mention, any hint of death. We do not want to see it, we do not want to hear of it, we do not want to think about it. We take quite unreasonable steps to delay it as long as possible. Have someone dial 911 in Canada these days, and an ambulance, a police car, and a firetruck all appear with sirens blaring. That speaks of panic.
This is mad. We will all die; it is not something we can avoid. This being so, and the longest lifetime being trivial against eternity, a sane person would instead face death squarely, and make ready for it.
Those who have crossed
With direct eyes, to death’s other Kingdom
Remember us—if at all—not as lost
Violent souls, but only
As the hollow men
The stuffed men.
Sunday, September 02, 2007
Cool
One of the least attractive elements of the ‘60s was the emphasis on being “cool.” Cool sucks. Cool means not having or at least not showing emotion. As in "cold shoulder," "cold fish," “cold-blooded.” A related term is "hung up." In the Sixties, everyone wanted to avoid getting “hung up”—especially, as I recall it, on other people.
It is not hard to trace the origin of this idea. It is from Buddhism; it is from the Second Noble Truth: “Suffering is caused by desire.” Jack Kerouac and Gary Snyder picked this up from their studies of Japanese Zen Buddhism—much of what we know as the “Beat” movement was an attempt by the American WWII generation to assimilate what they had learned of Japanese culture during the war and occupation. But the idea was subtly changed on the way: the hippies, rather than trying to overcome desire, tried instead to overcome emotion.
And there is a difference.
For example, lack of desire obviously argues against free love—as does Buddhism. But lack of emotion is what makes “free love”—indiscriminate sex—possible. A lack of concern for the other person with whom you are having sex is essential—one must not become “hung up” on them, or love is no longer “free.” There are obligations.
And one must not care what happens to any children. One must, if necessary, be prepared to abort them. Otherwise, there are obligations. Yet Buddhism condemns abortion.
Emotion, on the other hand, Buddhism does not condemn. Buddhism glorifies it; Buddhism requires it. The Bodhisattva Avalokiteshvara (aka Kwannon or Guan Yin), the most revered figure after the Buddha himself, is the very personification of compassion. He or she—a bodhisattva is a soul, which can be incarnated as either sex—eternally refuses nirvana for herself in order to help others, out of the perfect fullness of compassion. So, indeed, do all the bodhisattvas, or Buddhist saints—this is what one does to become a Buddhist saint.
One sacrifices one's own desires for the sake of compassion. One does not sacrifice compassion for the sake of one's desires. Up is not down.
Being “cool,” or emotionless, seems to be a good thing in certain situations. “Sang-froid” in the face of peril seems admirable; or at least useful. Many people these days praise "EQ," which is essentially the ability to suppress one's own emotions for the sake of one's self-interest. But do not expect Buddhism (or Christianity) to support even this. There is the tale of a Zen master who was beheaded in the course of an anti-Buddhist purge. A follower criticized him for, at the moment he was about to die, screaming loudly.
He was corrected by his own master. This was no sign of lack of enlightenment. It is perfectly proper to experience fully one’s own emotions—to which one can be either attached or not attached. It is perfectly sensible to speak about desiring a particular emotion: wanting love, for example, or happiness. Hence the two are separate things. Indeed, to seek to avoid emotions is un-Buddhist, as this is itself a desire.
And so, like most good things, Buddhism has been perverted into its opposite. Hippiedom is to Buddhism more or less as Al Qaeda is to Islam.
Had the followers of Kerouac understood, the last thing they would have endorsed was free love. And, indeed, Kerouac himself, who was wiser than his followers, does not endorse it. His protagonist in Dharma Bums, as a proper Buddhist, considers it, quite accurately, “cruel.” And to Kerouac, becoming “hung up” was pretty clearly a good thing, not something to be avoided. One should, in fact, become “hung up” on whatever is there before you to be loved at every moment. This, to Kerouac, is what it is all about.
The systematic lack or suppression of emotion is in fact the technique of the con artist. It seems to me to lead, as night follows day, to the sort of things Charles Manson was capable of, and capable of convincing hippies to do. Cold-bloodedness indeed.
I wonder, too, if people like Janis Joplin died, in the end, of a lack of love, from an emotional emptiness. For man does not live by sex alone.
In the end, this celebration of the “cool” marks hippiedom as a further progression of the tendency, common to the mainstream culture well before the fifties, of reducing human beings to machines or objects of detached scientific study; along the lines of Skinner’s operant conditioning. Indeed, the poor deluded Beats embraced Zen Buddhism largely as a supposedly more “scientific” way to view the world.
It fits with their overall materialism: “love” to them was sex, and sex was love, as the phrase “free love” necessarily implies—for otherwise it is an oxymoron. And they mistook enlightenment for a chemical compound. It was all a dumb brutish blunder from which we have yet to work our way free.