Competition
Left Utopia envisions a world without economic winners and losers, which is to say, a world without competition. This world without competition is sometimes figured as “post-capitalism,” but as Nick Land has said, “post-capitalism has no real meaning except an end to the engine of change.” I’d go further, arguing that non-competitive post-capitalism has no real meaning except as slow destruction of technological society and reversion to pre-industrial lifestyles. Put the other way around, the engine of technological change and growth is competition—creative destruction, capitalism. The inevitable social by-product of growth is economic winners and losers. A healthy society will put a basic welfare system in place to help the losers temporarily, and to channel them back into the growth economy with new skills. A sick society will incentivize the losers’ staying losers, creating a permanent underclass of professional parasites. A terminally ill society will make this permanent underclass the focus of its moral and economic energy.
According to Joel Mokyr (whose work is highly recommended), the old guild system of Europe offers a clear case of a force impeding growth by minimizing competition to keep guild members from economic loss. Mokyr calls the guilds a “conservative” force, but he goes on to compare them implicitly to modern labor unions; clearly, then, the guilds, as proto-unions, were proto-Leftists. (However, one might also frame the guilds—just as one might frame pre-Civil Rights era labor unions—as populists whose concern was the welfare of the artisan and working classes, a Leftism of sorts, but one that is not necessarily, and often not, associated with Leftward movement more generally. I’ll return to this alternate framing at the other end of the long quote.)
As with the rise and fall of Islamic science, the European guilds offer an historical example that informs us about what does or does not facilitate the growth of science and technology. One of the points made in Mokyr’s excellent essay—“Innovation and its Enemies”—is that if a population is dedicated to the maintenance of a safe, non-competitive equilibrium for itself, then science and technology will likewise remain in atrophic stasis. Mokyr writes:
The protection of skills and specific human capital is often combined with other forms of rent-seeking through the creation of barriers to entry and the control of output. This is clearly a widespread interpretation of the European craft-guild system which ruled urban artisans in many areas for many centuries. In pre-modern urban Europe these guilds enforced and eventually froze the technological status quo. Similar phenomena, mutatis mutandis, occurred in China.
. . . Kellenbenz, for example, states that “guilds defended the interests of their members against outsiders, and these included the inventors who, with their new equipment and techniques, threatened to disturb their members’ economic status. They were just against progress.” Much earlier Pirenne pointed out that “the essential aim [of the craft guild] was to protect the artisan, not only from external competition, but also from the competition of his fellow-members.” The consequence was “the destruction of all initiative. No one was permitted to harm others by methods which enabled him to produce more quickly and more cheaply than they. Technical progress took on the appearance of disloyalty.”
In most of Europe, then, craft guilds eventually became responsible for a level of regulation that stifled competition and innovation. They did this by laying down meticulous rules about three elements of production that we might term “the three p’s”: prices, procedures, and participation. As guilds gained in political power, they tried as much as they could to weaken market forces as aggregators and tended increasingly to freeze technology in its tracks. The regulation of prices was inimical to technological progress because process innovation by definition reduces costs, and the way through which the inventor makes his profits is by underselling his competitors. Regulating prices may still have allowed some technological progress because innovators could have realized increased profits through lowering costs even if they could not undersell their competitors. To prevent this, procedures stipulated precisely how a product was supposed to be made and such technical codes, while originally designed to deal with legitimate concerns such as reputation for quality, eventually caused production methods to ossify altogether. Enforcing these procedures, however, was far more difficult than enforcing pre-set prices. Finally, and in the long run perhaps the most effective brake on innovation, was participation: by limiting and controlling the number of entrants into crafts, and by forcing them to spend many years in apprenticeship and journeymanship, guild members infused them with the conventions of the technological status quo and essentially cut off the flow of fresh ideas and the cross-fertilization between branches of knowledge that so often is the taproot of technological change. Exclusion of innovators by guilds did not end with the Middle Ages or even the Industrial Revolution. In 1855, the Viennese guild of cabinetmakers filed a suit against Michael Thonet, who had invented a revolutionary process for making bentwood furniture . . .
In the past century resistance to new production technology has come in part from labor unions. There is no compelling reason why labor unions must always resist technological change: after all, as “encompassing organizations” they ought also to be aware of the undeniable benefits that new technology brings to their members qua consumers. The growth of the labor movement’s power in Britain is often held responsible for the declining technological dynamism of post-Victorian Britain. Resistance of organized labor slowed down technological progress in mining, shipbuilding and cotton weaving. Such resistance was not a hundred percent effective, but Coleman and MacLeod may well be right when they judge that labor’s resistance “reinforced the increasingly apathetic attitude of employers toward technological change.”
As anti-competition, the guilds were essentially anti-capitalist; their story is evidence that any Left Utopian attempts to initiate a non-competitive “post-capitalist” state will likely retard the progress of technology and science.
~~~
However, if you read Steve Sailer, you’ve probably drawn some parallels between the guilds’ attempt to keep outsiders from treading on their territory and possibly ‘takin their jobs!’ and the arguments for immigration restriction. After all, can’t Sailer’s entire argument be construed as wanting to limit who can enter the market, thus limiting competition? (Paging Bryan Caplan . . .)
No. You can’t construe it that way. First of all, neither Sailer nor any ethno-nationalists I know advocate keeping out any and all possible entrants into the various American labor markets, so long as the new (foreign) entrants aren’t legion and so long as they truly bring a competitive edge with them (a degree, a high IQ, a middle-class skill set, etc.). But Steve’s entire point (Cesar Chavez’s, as well) is that the new entrants into the American market a) are WAY TOO MANY and/or b) do not bring a competitive edge with them; they simply bring a willingness to work for cheap, cheap, cheap and without benefits.
To understand the difference between the anti-competition guild mindset and the anti-immigration stance, we can return to the last paragraph of Mokyr’s essay quoted above:
Finally, and in the long run perhaps the most effective brake on innovation, was participation: by limiting and controlling the number of entrants into crafts, and by forcing them to spend many years in apprenticeship and journeymanship, guild members infused them with the conventions of the technological status quo and essentially cut off the flow of fresh ideas and the cross-fertilization between branches of knowledge that so often is the taproot of technological change.
Restricting low-skilled immigration and reasonably regulating higher-skilled immigration will most certainly not “cut off the flow of fresh ideas” that are the “taproot of technological change.” Isolationist policies very well might, but not immigration restriction. Even Caplan and his ilk at their most delusional cannot argue, with a straight face, that curtailing Mexican immigration, not accepting third world refugees, and being stricter about H-1B and H-1B1 visas will have any measurable impact on scientific and technological progress, its speed or its quality. In fact, it’s arguable that inviting waves of labor valued first and foremost for their cheapness is as anti-competitive as the European guild mindset.
Alexander Hamilton says, “No Exit.”
When the U.S. federal Congress was given the power to do something, it clearly possessed the right to draft laws to execute that power. Nevertheless, to ensure that no one got any ideas about who was or was not under the power of those laws, the Necessary and Proper Clause and the Supremacy Clause were included in the U.S. Constitution. In Federalist 33, and in the context of the taxation question, Alexander Hamilton explains in clear terms why the Necessary and Proper Clause was (is) necessary and proper:
But SUSPICION may ask, Why then was it introduced? The answer is, that it could only have been done for greater caution, and to guard against all cavilling refinements in those who might hereafter feel a disposition to curtail and evade the legitimate authorities of the Union. The Convention probably foresaw, what it has been a principal aim of these papers to inculcate, that the danger which most threatens our political welfare is that the State governments will finally sap the foundations of the Union; and might therefore think it necessary, in so cardinal a point, to leave nothing to construction.
The Federalist Papers are nothing less than a Complete Treatise Against Exit. Our founding fathers knew that a Powerful Union would need all conflicts of interest between the states (slavery being the most prominent) to defer always to the interests of the One True State. To put it another way, in the interest of the State, the interests of the State are the only interests that matter. This is the principle upon which America as a nation was founded.
The State’s power is whole and complete, and it is pointed in all directions, especially southward. Indeed, Alexander Hamilton knew that if he wanted to end slavery in the South, he simply needed to make the South part of the Union and enlist slaves to fight for it. In 1779, he wrote:
An essential part of the plan is to give [negroes] their freedom with their muskets. This will secure their fidelity, animate their courage, and I believe will have a good influence upon those who remain, by opening a door to their emancipation. This circumstance, I confess, has no small weight in inducing me to wish the success of the project; for the dictates of humanity and true policy equally interest me in favour of this unfortunate class of men.
Read it once more. If you didn’t catch it, perhaps you will the second time: one of the founding fathers thought that emancipation of slaves and the creation of the Union through revolution were one and the same project—or, at least, that the Revolution could be an opportunity for laying the groundwork for emancipation in the South. He surely was not alone.
I’m not arguing that slavery was moral. It wasn’t. However, it was primarily an economic immorality—it was wrong not because blacks were picking cotton and being mistreated but because blacks were picking cotton, being mistreated, and not being paid for it. Had plantation owners paid their laborers and allowed them some limited freedom of movement (a la the servants of the old English gentry), then slavery would have been no worse than the other forms of hard labor that were eventually ameliorated with technology and labor laws.
The point lies in how the Union solved the slavery problem—not by allowing time, technology, or incentives to do their work, or by leaving the South to its own immoral destruction, but by encompassing the South entirely under its moralistic, universalist, Puritanical power and saying, “You can’t do it that way.” No exit. This method set a dangerous precedent. Post-1865, it unleashed the leftward, ever-democratizing impulse of American politics, which was, of course, dormant but fully formalized in the nation’s embryonic state. I return you to the first Hamilton quote above.
(Had more of our founding fathers possessed the agrarian temperament of a Thomas Jefferson, the federal state might have contained seeds that would have blossomed into an eventual loosening of its own power. But that was an impossibility. Why would anyone drawn to the center want to loosen the power he finds himself embodying? Besides, agrarians don’t do well in the center by definition.)
Systematizing Knowledge
In the comments about my post on Islamic science, Jim sketches an interesting genealogy of mathematical knowledge. While I find the history and transmission of ideas fascinating, I think the more important question is not who came up with something first? or who stole from whom? but rather who put scientific knowledge to the best use?
One of the conclusions drawn in my post is that Europe industrialized first because Europeans figured out how to bring theoretical knowledge together and put it to work for material, practical ends. However, I came across an interesting essay by Joel Mokyr—an historian of economics at Northwestern—in which Mokyr argues that the Enlightenment and the Industrial Revolution were made possible by neither applied technology nor pure science but by a generative relationship between both, a relationship enabled in great part by the printing press and an increased circulation of ideas.
In other cultures and historical periods, some people figured out how certain things worked on the ground while others worked out how the world’s mechanisms and systems operated at a more general or theoretical level. But no one brought the two methods together. It’s well known, for example, that the Romans possessed all the knowledge they needed to build a steam engine: they had cylinders, pistons, and valves scattered across various inventions, and Hero of Alexandria even drew up diagrams for one. (Here’s an interesting post by a high-IQ undergraduate who argues that the Romans also had what they needed to build a simple computing device.) What the Romans—or the Greeks or the Caliphate or the Chinese—didn’t have was a way to circulate ideas and an epistemic base that encouraged practical inventors and theoretical philosophers to bring their knowledge together. The two are clearly related. Mokyr writes:
. . . before the Industrial Revolution all techniques in use were supported by very narrow epistemic bases. That is to say, the people who invented them did not have much of a clue as to why and how they worked. The pre-1750 world produced, and produced well. It made many path-breaking inventions. But it was a world of engineering without mechanics, iron-making without metallurgy, farming without soil science, mining without geology, water-power without hydraulics, dye-making without organic chemistry, and medical practice without microbiology and immunology. The main point to keep in mind here is that such a lack of an epistemic base does not necessarily preclude the development of new techniques through trial and error and simple serendipity. But it makes the subsequent wave of micro-inventions that adapt and improve the technique and create the sustained productivity growth much slower and more costly. If one knows why some device works, it becomes easier to manipulate and debug it, to adapt to new uses and changing circumstances. Above all, one knows what will not work and thus reduce the costs of research and experimentation.
Bringing knowledge together—systematizing fragmented treatises and ideas—was a defining feature of the Enlightenment project: this “combination of different kinds of knowledge supporting one another” laid the groundwork for real technological progress and economic growth. Systematization also means experiments can be more focused and grounded and thus less costly in the long run; and as I mentioned in the last post, one thing Europe did that the Muslim world ceased doing was to fund scientific experimentation. So, with Mokyr’s essay in mind, we can say that a society must not only be willing to fund science (through government or private investment) but it also must know where to channel that funding for the best results.
Islamic science failed to systematize its knowledge across disciplines and never bridged what today we call the pure/applied science gap. It’s probably fair to suggest that this systematization never occurred because the Muslims lacked an adequate means of circulation. Seen in this light, the printing press was perhaps the most important pre-Enlightenment invention—whichever culture developed that first was bound to systematize its fragmented knowledge first. Instead, the Muslim world developed an increasingly fundamentalist and homogenous religion, lost its centrality in the network of global trade, and thus stopped funneling excess capital into scientific and technological development.
Islamic Science
In one of my first posts, I mentioned that studying the rise and fall of medieval Islamic science might provide interesting neoreactionary insights into the way societies optimize or fail to optimize for intelligence and culture. What did the Caliphate do correctly, where did the Islamic world go wrong, and why did it never return to a Golden Age while its neighbors to the West awoke to new heights of enlightenment? Before Christian Europe took up the mantle, the light of rational inquiry sparked by the Greeks and Romans enlightened the Islamic world. Why was it snuffed out and can we learn any relevant lessons for the contemporary West?
Rather than provide a concrete answer just yet, I’ll post some of the best resources and discussions I’ve discovered so far:
1. “The Religious State of Islamic Science.” An interesting interview with the Turkish-American physicist Taner Edis. He cautions against placing medieval Islamic science on a pedestal, reminding us that it was still thoroughly medieval, closer to classical Greek philosophy than to the empirical science developed in Renaissance and Enlightenment Europe. (However, he does recognize that Islamic scientists made prominent discoveries in medicine, astronomy, and optics.) Edis’s general sense is that while Church power in Europe weakened or, at least, while the Church granted a measure of legal autonomy to systems outside the Church, the Islamic world never experienced a “reformation” or formulated a “separation of church and state.” The co-mingling of religious and legal power in the Islamic world was a fertile ground for science so long as the religious leaders were secular; once they became fundamentalist, however, the fertile ground dried up. Edis also talks here about the fundamentalist blinders that keep Islamic science from developing today. (Also enjoy the section in which the interviewer tries to get him to blame Western imperialism for the state of scientific research in Muslim countries; Edis refuses to play that game.)
2. “Tolerance, Religious Competition and the Rise and Fall of Muslim Science.” Written by Harvard economist Eric Chaney, this article argues that Muslim science flourished when theological competition made the study of Aristotelian logic a valuable enterprise, but that logic and rational inquiry became less important as Islam became more fundamentalist and homogenous. (At the same time, European Christianity was becoming more fractured, and theological debate was running rampant.) Islamic fundamentalist rulers, of course, had an interest in keeping their societies fundamentalist and theologically homogenous. Goodbye, rational inquiry. The results of Chaney’s study “highlight how religious groups, like their secular counterparts, can block innovation when they regard it as a threat to their interests.” Look past the eager uses of “diversity” and “tolerance” in this essay; it’s more interesting than its buzzwords indicate. It also demonstrates that religion per se is not a detriment to an advanced, intelligent culture; indeed, Chaney argues that competition among religions (or, more accurately, among sects within religions) encourages people to hone their logic skills, which lays the groundwork for rational inquiry outside of the theological arena. Perhaps it’s no mere correlation that the Enlightenment and the Reformation occurred more or less in tandem; it seems there is an optimal temperature for religious schism that is generative rather than destructive.
3. Here’s a video of George Saliba, a Professor of Arabic Studies at Columbia. According to Saliba, most historians, as well as Neil deGrasse Tyson, assume that the influence of the anti-rational mystic Al-Ghazali (who refused to believe in cause-effect reasoning) in the 12th century and the sacking of Baghdad by the Mongols in the 13th century were the primary causes behind the quick decline of medieval Islamic science. Saliba argues, however, that extant treatises prove the continuation of some level of Islamic scientific development until the 15th and 16th centuries (apparently, an observatory was even built just after the Mongol invasion). So, he posits that the best question to ask is not “what went wrong in Islam?” but “what went right in Europe?” Now, Saliba is clearly a Leftist, but I think he makes an interesting point nonetheless: he argues that the discovery of the New World in 1492 completely re-aligned the trade routes of the Old World. Prior to the Age of Discovery, trade routes went from West to East through the Islamic world, meaning that a lot of human and monetary capital flowed into the Middle East—and excess capital is vital for scientific development. Once the New World was discovered, trade routes shifted into Europe and out into the Atlantic, and capital began to flow through Europe precisely at the same time that religious control was becoming more fundamentalist in the Muslim world and more de-centralized in Europe. Saliba also discusses how Europeans were intelligent enough (my words, not his) to re-invest their new capital into scientific development, both in the New World and in the Old. Saliba, with his Leftist spin, implies that Europeans just stole discoveries made elsewhere, but even his spin can’t hide the fact that what Europe did right was to systematically fund science in order to aggregate, use, and develop the basic discoveries of Islamic science, which never had a practical, experimental edge. (What Saliba tries to do is similar to scholars who say that natives ‘discovered’ penicillin just because they knew fungus could heal wounds. In reality, it takes the funding, empiricism, and high-IQ culture of Western science to isolate and package penicillin.) Also, Saliba overstates the extent to which trade routes ceased running through the Middle East. The slave trade into Arab lands was as extensive as the trans-Atlantic trade into Europe and the New World. Rather, Saliba’s contribution is to demonstrate that it’s not enough for a culture to make theoretical discoveries; it also needs the incentives, the funding, and (as neoreactionaries know) the IQ to figure out how to put those discoveries to work toward practical, material ends.
4. A short but interesting section about the Golden Age under the Abbasid Caliphate from a book by John Esposito, a Georgetown professor whose life seems to have been dedicated to Muslim-Christian relations. The most interesting tidbit from this chapter, from a neoreactionary perspective, is that Persians (i.e., white Muslims) played a major role in the bureaucracy of this period. (Persians also constituted a majority of the period’s prominent Muslim scientists.) Esposito also points out that the Golden Age was made possible, in part, by generous funding of culture, art, and science.
So what’s the take-away at this point? First, the consolidation of religious and state power is detrimental to scientific development because any knowledge that challenges the religion will itself be challenged—e.g., not funded or completely eradicated—by the state. Paging Dr. Jason Richwine. (Religion itself is not the problem; indeed, in both Europe and the Middle East, science flourished amidst vigorous religious debate; it’s no accident that most early scientists in both places were religious.) Second, science needs to be funded so that it can be perfected and put to use. As I’ve mentioned before in comments threads, I honestly have no problem with government expenditures so long as the money is spent on the advancement of science and technology—which will have material and fiscal returns—and not on programs that incentivize social pathology and failure. And finally, and most offensively, it appears as though the Golden Age of Islamic science was actually the Golden Age of Persian science . . . So, one way we can optimize for intelligence is to give more power to high-IQ populations and less power to low-IQ populations. That’s the current trend in America, right?
[Update: Per the comment thread, Islam’s contribution to science in the 21st century will be its continued jihad against the West, which will push the West to develop more and more sophisticated weapons technology for killing all the jihadis.]
Co-opting the Discourse of Oppression
Not Oppressed:
The black aristocracy has fallen a long, long way since the days of Booker T. and Du Bois.
Prometheus
If you haven’t read it already, you need to read John Campbell’s “The Moral Imperative of Our Future Evolution.” Nick Land ends his magnum opus with Campbell. I consider it a foundational text for techno-commercialist neoreactionary thought. I’m curious what ethno-nationalists and Christian traditionalists think about Campbell’s vision, regardless of whether or not they think the technology upon which the vision relies will ever be developed.
Just a taste:
We probably will begin our interventions into brain and embryonic development with drugs and hormones and subsequently engineer the desirable intrusions into the genome. Then, after a further generation of accumulating biological information about individual gene function, developmental pathways, and the neural substrate of brain function, evolutionists probably will write novel genes for these traits from scratch using a DNA synthesizer.
The costs will be enormous, far beyond what most people could afford. This has kept our democratic society from appreciating that these possibilities will be used and will be important. However, their feasibility cannot be judged from what the average person will be willing to pay to procreate. What matters are the resources that the most successful generative lines will be able to apply to their goals. A million dollars per conception seems a great underestimate to me for the beings who hold evolution’s frontier.
Scarcity
Economics begins with the assumption that humans have needs and wants but not an infinite supply of things to fulfill those needs and wants. Almost all Cathedral-era politics can be summed up as a debate over whose needs and wants are most important, whose wants are fulfilled at the expense of others’ needs, and where to channel things to meet different people’s needs and wants.
Infinite needs and wants, but a finite supply to meet demand. The Marxist answer to this dilemma is to posit that needs and wants can be conditioned to match supply; wanting more than you need is a bourgeois bug that can be eradicated with the proper planning and policies—and given that the needs of so many people on the planet are hardly met at all, eradicating it is an urgent moral imperative.
Many Leftist policies essentially posit the same idea, which is why progressive SWPLs really like The Gods Must Be Crazy and why economic debates with progs often involve, in their latter stages, someone’s pointing to the Bushmen of Africa as representatives of what society looks like (“so egalitarian!”) when no one wants more than they need or can reasonably attain. High time we learned from these noble savages. If people in the modern world can’t restrict their extraneous wants, then damn it all, the state had better start redistributing shit so more needs are met across the board.
It might be easy not to want very much when you’re a low-IQ member of a population that never evolved beyond the hunter-gatherer stage. However, for those of us who learned to farm, build cities, and land on the moon, the cat’s out of the bag. Pandora’s box has been opened. Trying to recreate Economic Eden will only lead to social devolution; or, put in more concrete terms, jerry-rigging an economy that attempts to create parity across needs met by keeping people’s wants limited and contained through strict socialist policies will always result in Hugo Chavez’s Venezuela (i.e., in food shortages, runaway inflation, a lack of qualified individuals to run industry, and sci/tech Wikipedia pages that look like this and this). You want to be the Bushmen? Prepare to give up civilization.
Alright. So unless we want bread lines and a mass exodus of entrepreneurs, we can’t keep extraneous “needs” and wants in check through massive redistribution policies; nor can we divvy out basic needs to everyone because, as I said about Pandora’s box, everyone wants more than basic needs. (Try imagining a political platform whose vision of a “social safety net” is nothing more than a cot, a shower, and three daily servings of stew.)
So what’s another option for meeting needs across the board, then?
Post-scarcity is the other option, but it’s way out there. This line of thinking assumes that technology will at some point deliver an infinite supply of everything the billions of people inhabiting our little mote of stellar dust need and want, even the ones who can’t contribute to or make ends meet in our high-tech societies. This would be great fun if it happens. No more starvation, for starters. Fewer robberies—ghetto children can just print out an iPhone if they want one. Everyone’s basic needs and all of their wants are met, and no one has to pay for any of it because there’s an infinite supply of [whatever it may be] thanks to technology. The world’s inhabitants will be free to pursue new space programs or sit around and masturbate, whichever they choose. No more children with bloated bellies because everyone gets what he needs; no more laboring for bourgeois wants because you can press a button and have them met, whatever they are.
I don’t know. Sounds a long way off to me. I think a more attainable version of a post-scarcity society cuts in another direction: sterilize all the people in need, replace the low-wage laborers with robots, and encourage high-IQ individuals to breed more.
Or is that just the scotch talking?
Justified
The Smithsonian has apparently been reading Steve Sailer . . .
The case of Iceland is an extreme one, but the idea that we are all distant cousins, in the scope of human history, is well accepted. A new study, published today in the journal PLOS Biology, explains this degree of relatedness in modern-day Europeans.
Anti-science Fiction
If you enjoy reading too much into things (and I do), you can read Jurassic Park as Luddite, anti-science propaganda. The ethical impulse of the entire film rests on Ian Malcolm’s famous line: “Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.” The movie plays out as a prolonged justification of Malcolm’s ethical wisdom, culminating in several scenes in which the paleontologist heroes essentially decide, nah, screw this research opportunity, it’s too dangerous, let’s go back to digging for bones in the dirt. Much better to be ethical than to extend the bounds of knowledge.
The film is a cautionary tale about scientific hubris, but that gets me thinking about the “scientific hubris” trope more generally. Beginning with Shelley’s Frankenstein, there is a strand in Western film and literature dealing with science or technology, even straight science fiction, that can be described as one giant cautionary tale against scientific hubris. In the past, this anti-science strand was always counteracted by the celebrants of technological progress: Jules Verne comes immediately to mind, the great antidote to the pessimist H.G. Wells. (Verne hated being compared to Wells.) Today, however, you’ll be hard-pressed to find a Jules Verne, especially in Hollywood. From the reboot of The Manchurian Candidate to the Bourne franchise, the bad guys are more and more often rogue scientists (geneticists, usually, the worst kind of rogue scientist) and the good guys are the victims of Science, always ready with moralistic lines about Humanity.
“I’m not a science experiment, doc.”
“Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.”
I’m glad no one had that attitude in previous centuries. We’d still be bleeding patients and traveling by stagecoach.
There aren’t many films or novels these days whose protagonists are headstrong researchers pursuing knowledge and advancement at any cost, even though the achievements of Western civilization rely on headstrong researchers pursuing knowledge and advancement at any cost.* When did you last see a film or read a book in which the bad guys warned against scientific hubris, in which the antagonist was the guy who tries to stop the geneticist from conducting a human experiment?
Not in America. For that, you need to go to Japan:
If you haven’t seen it or read it, I highly recommend Paprika. The story follows two psychiatrists who have invented devices that allow people to watch, record, and even enter one another’s dreams. The antagonist is Dr. Inui, a wheelchair-bound purist who believes the devices are unethical and dangerous, and who will go to great lengths—even murder—to stop the devices from being used. Inui considers himself the “protector of human dreams” from the audacity of science. The book describes him as “obsessed with justice” to a “dangerous” degree. He delivers puritanical speeches against the pro-research protagonists:
You see yourselves as brave pioneers riding on the leading edge of discovery! Where’s your sense of social responsibility? You should be ashamed to call yourselves scientists!
He critiques the Western notion of scientific genius from a Marxist perspective:
You seem to think you developed the devices all by yourself, but that’s merely the product of vanity. You didn’t. What about the investors who provided all the funds, the engineers who installed the equipment, the assistants, even the cleaning ladies? You can’t claim the invention as your own.
In other words, Inui is the Cathedral. And he’s the bad guy. Don’t expect authors outside the Sinosphere to start putting words like the ones above into the mouths of anyone but their holier-than-thou, anti-scientific heroes.
*TV medical dramas are the exception to this new rule. House and Doc Martin are especially enjoyable; both characters care far more about science and medicine than they do about bedside manner or ethics committees.
Violence and History
The more I discourse with Cathedral clerics and catechumens, the more I realize that the moral weight of the Left Narrative rests upon a single historical assumption: In 1492, the white peoples of Europe began to unleash all manner of human horrors into the rest of the world. Genocide, slavery, subjugation, destruction of traditions, imperialism, warfare over resources, so on and so forth. The descendants of those white Europeans are today living a privileged life made possible entirely by the brutality and racism of their ancestors. So, in the name of social justice, a rebalancing of the scales must take place. The white peoples of the world need to seek forgiveness from those who have been subjugated; whites need to live with constant guilt for the world-altering sins of their fathers; most importantly, whites need to pay damages or, at least, invite all the subjugated peoples into their cities and towns with open arms and laws designed to benefit the people they have oppressed for centuries.
To witness this assumption in full force, we can look at the discourse of academic post-colonial theory. I quote liberally from the Wikipedia entry on post-colonialism:
In The Wretched of the Earth (1961), the psychiatrist Frantz Fanon analysed and medically described the nature of colonialism as essentially destructive; that its societal effects — the imposition of a subjugating colonial identity — are harmful to the mental health of the coloured peoples who were subjugated into colonies. That the ideologic essence of colonialism is the systematic denial of “all attributes of humanity” of the colonised people; that such dehumanization is achieved with physical and mental violence, by which the colonist means to inculcate a servile mentality upon the native men and women, and that the native peoples must violently resist colonial subjugation.
Hence, violent resistance to colonialism is a mentally cathartic practice, which purges colonial servility from the native psyche, and restores self-respect to the men and women whom the colonialist subjugated with the epistemic violence that is inherent to the colonial institutions of the Mother Country . . .
. . . Notably, “The West” created the cultural concept of “The East”, which allowed the European suppression of the ability of the peoples of the Middle East, of the Indian Subcontinent, and of Asia, to express and represent themselves as discrete peoples and cultures. Orientalism thus conflated and reduced the non–Western world into the homogeneous cultural entity known as “The East”. Therefore, in service to the colonial type of imperialism, the Us-and-Them Orientalist paradigm allowed European scholars to misrepresent the Oriental World as inferior and backward, irrational and wild, whilst misrepresenting Western Europe as superior and progressive, as rational and civil, as the opposite of the Oriental Other.
The Left Narrative of social justice assumes this one-eye-blind historical vision. It casts history in terms of good “natives” and bad white colonizers, and the Narrative carries its moral weight on this binary frame. However, it retains its moral weight only if the following related conditions hold:
1. The historical vision is, in fact, completely true.
2. The “natives” weren’t unleashing similar evils on one another before the Europeans arrived; oppression was invented in 1492.
To cause the masses to question the Narrative, we simply need to challenge either 1 or 2. Luckily, both are easily challenged. The history of the world did not begin in 1492, and history itself does not come equipped with good guys and bad guys. History is complex. Any Narrative imposed upon it will necessarily select certain elements and deflect dozens of others. So, to reclaim history from the Left Narrative, particularly postcolonialism, we should not try to resurrect any Victorian Narratives about the white man’s burden or anything like that. Rather, we simply need to show that the assumptions of the Left Narrative of History are factually wrong, wrong, wrong. We show it through simple recourse to confirmed historical facts.
A certain body of work—epitomized by Niall Ferguson—attempts to challenge the first condition above, documenting the many positive things accomplished by colonizers, such as curtailing the practice of sati in India, providing writing systems for many indigenous languages, increasing life expectancies with Western medicine, and the like.
This work is effective for challenging the Narrative, but when all is said and done, it doesn’t necessarily change the effects of the Narrative.
“Well, sure,” the Leftist will reply, “colonialism may have left behind a few good things, typically because the oppressed natives were brilliant enough to use their masters’ tools for their own benefit. But this silver lining doesn’t excuse the overwhelming violence that the cancer of white Europe has inflicted upon the world.”
Given the inevitability of this response, I’ve realized that the second condition buttressing the moral weight of the Left Narrative—that the natives were not unleashing evils upon one another, that oppression began in 1492—is a much more central assumption of the Narrative. If this assumption is challenged, what can the progs say in return? “Well, sure, the Indians, Africans, and Mayans were all killing and conquering one another, and fighting over resources before the Europeans arrived, but that doesn’t excuse the Europeans for doing the same thing.”
Mm. Not quite as effective as the first rebuttal. Even if we admit that, no, pre-1492 violence doesn’t excuse the Europeans, it does excuse us from collective guilt and social re-engineering for the sake of the supposed “wretched of the earth.” If two children are fighting over a toy in a sandbox, grabbing the toy back and forth, throwing sand in each other’s faces, and a third child comes along, grabs the toy and throws sand in the other children’s faces . . . why should only the third child be punished or made to feel guilty?
Discussing the creative destruction of techno-commercialism in the comments here, Spandrell writes the following:
Conquistadors won because they had higher IQs. End of story. Ceteris paribus the nastiest and better organized wins.
That usually involves violence and coercion. How many local traditions were destroyed by Rome? By Chingis Khan? How many by European imperialism?
I think this is precisely the tone we need to take. The Ferguson case for imperialism can be made, of course, but it’s much more powerful to admit, for argument’s sake, that, yes, colonial expansion was violent, destructive, fueled by a desire for resources, so on and so forth, before adding, “Just like it has been since Homo sapiens began killing other hominids as they expanded out of Africa. Do you think our species succeeded by always being nice?”
The Cathedral knows that this line of reasoning has serious implications for the moral weight of its Historical Narrative and, by extension, the moral weight of many of its white-guilt policies. This is why anthropology has abandoned its descriptive roots, and why Napoleon Chagnon was attacked so viciously for merely suggesting that the Yanomamö are no angels. His naive defenders respond by pointing out the huge difference between describing the less savory elements of native culture and calling for their genocide or subjugation. However, this naive defense ignores the fact that describing tribal life as anything less than Pure and Noble strikes a blow to the moral center of Leftism. If what happened during the colonial project was not the obliteration of peace-loving, Enya-listening Disney Indians, but more of the same old same old that had gone on for millennia—just on a larger stage—then the Cathedral’s ability to use white guilt as a blunt object for getting things done is no longer guaranteed.
If you take a peek at the Library of the Dark Enlightenment at the top-right of this blog page, you’ll discover an in-progress list of legitimate scholarly work “On Violence.” Again, we needn’t revert to the old Victorian Narrative in order to refute the current Narrative (which has essentially been a recasting of the Backward Savages as the Noble Savages). We merely need to demonstrate the historical fact of violence, warfare, slavery, and conquest, because these facts, in and of themselves, do damage to the moral weight of the Left Narrative.
For example, we Americans are taught to believe that we completely destroyed and conquered peaceful native civilization. Instead of trying to complicate that fact, we should (also) point out that many native tribes were, in their day, just as given to conquest: the Iroquois pushed out the Osage, who then migrated west and pushed out some of the prairie tribes. During the Beaver Wars, the Iroquois attacked French-allied tribes and expanded their territory well beyond their original bounds. Aztec imperialism was so complete that it left genetic traces that can be detected today in the descendants of the peoples they conquered.
Again, we should not take these facts as evidence that the natives were all “backward, raging savages” or anything like that. The Victorians were just as wrong as the contemporary Neo-puritans of the Cathedral. Nor should we deny that European expansionism was at times violent and destructive to local traditions. Instead, we should remind the Cathedral, wherever we can, that the facts of indigenous violence and conquest seriously mitigate the guilt any of us—European or Mayan—should feel about blood shed and peoples conquered throughout the many ages since humans walked out of Africa.