Volume: 6, Issue: 4
Winter 2011-2012
The American Right, the Purpose of Government, and the Future of Liberty
Craig Biddle
Now that the 2012 GOP presidential nominee is almost certain to be either Mitt Romney or Newt Gingrich (who, in terms of policy and lack of principle, are practically indistinguishable), many on the right are turning their attention to the 2012 Senate races. And they are wise to do so. In the 2010 midterm elections, Republicans gained control of the House but failed to secure a majority in the Senate, leaving Democrats with 53 of 100 seats. Of the 33 Senate seats up for election in 2012, 21 are held by Democrats, 2 by independents. Republicans are likely to retain control of the House, and if they manage to gain control of the Senate as well, they will have the opportunity to repeal ObamaCare, Dodd-Frank, and other disastrous laws and regulations, and to begin cutting federal spending. These are crucial short-term goals.

But if we want to return America to the free republic it is supposed to be, we must do more than campaign and vote for Republicans. We must embrace and advocate the only principle that can unify our political efforts and ground them in moral fact. That principle pertains to the purpose of government.

Government is an institution with a legal monopoly on the use of physical force in a given geographic area. What is the proper purpose of such an institution? Why, morally speaking, do we need it? The proper purpose of government is, as the Founding Fathers recognized, to protect people’s inalienable rights to life, liberty, property, and the pursuit of happiness. Government fulfills this vital function, as Ayn Rand put it, by banning the use of physical force from social relationships and by using force only in retaliation and only against those who initiate its use.

Insofar as an individual respects rights—that is, insofar as he refrains from assault, robbery, rape, fraud, extortion, and the like—a proper government leaves him fully free to act on his own judgment and to keep and use the product of his effort. Insofar as an individual violates rights—whether by direct force (e.g., assault) or indirect force (e.g., fraud)—a proper government employs the police and courts as necessary to stop him, to seek restitution for his victims, and/or to punish him. Likewise for international relations: So long as a foreign country refrains from using (or calling for) physical force against our citizens, our government properly leaves that country alone. But if a foreign country (or gang) attacks or calls for others to attack us, our government properly employs our military to eliminate that threat.

As Thomas Jefferson summed up, a proper government “shall restrain men from injuring one another, shall leave them otherwise free to regulate their own pursuits of industry and improvement, and shall not take from the mouth of labor the bread it has earned. This is the sum of good government.”

In order to begin moving America toward good government, we must explicitly embrace this principle, and we must demand that politicians who want our support explicitly embrace it as well. To do so, however, we must understand what the principle means in practice, especially with respect to major political issues of the day, such as “entitlement” programs, corporate bailouts, “stimulus” packages, and the Islamist assault on America. . . .
The Assault on Abortion Rights Undermines All Our Liberties
Ari Armstrong, Diana Hsieh
Surveys the expanding efforts to outlaw abortion in America, examines the facts that give rise to a woman’s right to abortion, and shows why the assault on this right is an assault on all our rights.
The Patience of Jobs
Daniel Wahl
In 1997, John Lilly went to hear Steve Jobs speak in Building 4 of Apple’s headquarters, taking a seat in the auditorium among many of his colleagues. “It was a tough time at Apple,” he remembers. “[W]e were trading below book value on the market—our enterprise value was actually less than our cash on hand. And the rumors were everywhere that we were going to be acquired.”1

But Jobs seemed excited. He told the employees gathered there that they were going to turn around the company. He told them why he thought the company “sucked” and why in the future it would be great. Then someone asked about Michael Dell’s suggestion that Apple shut down and return its cash to shareholders. “Fuck Michael Dell,” replied Jobs. Lilly was flabbergasted. Jobs continued: “If you want to make Apple great again, let’s get going. If not, get the hell out.”2

While Jobs was alive, few people thought of him as a patient man. Indeed, his own biographer concluded that “patience was never one of his virtues,” and there were understandable grounds for this view.3 Lilly recalls, for example, that soon after Jobs returned to Apple he made clear that he would not put up with any employee who was not with him and his vision for the company.

One of the struggles we were going through when he came back was that Apple was about the leakiest organization in history—it had gotten so bad that people were cavalier about it. In the face of all those leaks, I remember the first all-company email that Steve sent around after becoming interim CEO . . . [H]e talked in it about how Apple would release a few things in the coming week, and a desire to tighten up communications so that employees could know more about what was going on—and how that required respect for confidentiality. That mail was sent on a Thursday; I remember all of us getting to work on Monday morning and reading mail from Fred Anderson, our then-CFO, who said basically: “Steve sent [an email] last week, he told you not to leak, we were tracking everyone’s mail, and [four] people sent the details to outsiders. They’ve all been terminated and are no longer with the company.”4

This was just a single instance of Jobs showing an “intolerance or irritability with anything that impedes or delays”—the dictionary definition of impatience. But there were countless others, and although Jobs’s intolerance may have shocked employees new to Apple, it didn’t surprise those who remembered how Jobs had acted in earlier years.5 Back then, Jobs was also famously unwilling to put up with anyone who was not actively adding to the creation of products he envisioned and wanted to use. Then, too, he didn’t want to waste his time on anything that was of secondary importance to him—and he didn’t want people on his payroll wasting their time on such things either. . . .
An Interview with Still-Life Painter Linda Mann
I recently spoke with Linda Mann about how she became a painter, the nature of still lifes, and how she makes pumpkins so intriguing. This interview includes images of several of her paintings. Her full portfolio, including details of the paintings found herein, can be seen at her website, www.lindamann.com, or by appointment. —Craig Biddle

Craig Biddle: Linda, thank you for taking time away from the canvas to chat with me about your work.

Linda Mann: It’s my pleasure.

CB: Having tried my hand at painting, I’ve concluded that great painters fall into the same category as great pianists. They’re superhuman.

LM: [Laughing]

CB: Seriously, though, creating beautiful, engaging, fascinating works on canvas is an extremely difficult process, and I have several questions about it in relation to your work. But let me begin with a few preliminaries. How and when did you become an artist?

LM: In some ways, I’ve always been an artist. Ever since I can remember, I’ve been drawing and painting—before I even really knew what being an artist meant. I took great pleasure in drawing everything around me—there were never enough paper and pencils for me! In school, I took all the art classes I could and thought I would pursue a career as an artist. But in high school I became discouraged, because what was being taught was largely modern or abstract art. I didn’t understand the point of it and began to think that if that was art, that wasn’t what I had in mind. When it came time for me to decide what to study in college, after much debating, I chose industrial design instead of art. It seemed to combine aesthetics and the rationality that I found utterly lacking in modern art. Soon after I got my degree, however, I discovered that I actually wasn’t that good at it because my heart wasn’t really in it. Over the years, I went from one design field to another—from industrial design to interior design to graphic design. I was always dissatisfied. Finally, I ended up in fashion design. While I was sketching costumes at a museum exhibit one afternoon, it struck me how much I’d always loved to draw. That was where my heart was. At that moment I decided to pursue fine arts again. I thought I could figure out a way to avoid the modern art that so discouraged me. I took some classes at the San Francisco Academy of Art and then, after I moved to Seattle, at The Academy of Realist Art. I studied off and on there, taking classes in traditional drawing and painting techniques, anatomy, and portraiture. Most of my learning was actually done studying old art technique books from the turn of the last century. I found that the majority of recent art textbooks were emphasizing modern art and not the classical methods that I wanted to learn.

CB: So before you turned back to fine art, you designed fashion? Did you create any designs that went to market?

LM: No, I didn’t get that far. I was studying pattern-making, draping, sewing, and illustration. I had not gotten so far as to actually produce anything.

CB: So that was school only, not a career spell.

LM: Exactly.

CB: Given the various kinds of paintings that an artist might choose to create—portraits, figures, landscapes, seascapes, cityscapes, and so on—why have you chosen to focus on still lifes? What’s so great about still lifes, as it were?

LM: Something about still lifes is very intimate. Everybody is familiar with a tabletop with things on it—it’s right in front of you.
Landscapes are the world at large, out there; a portrait is involved with a person’s character, and complex issues of psychology, which aren’t what I’m mostly interested in. In a still life, the focus is on light and how we see. In this way, still-life painting seems to be more about epistemology than any other kind of painting. It says, “The world exists and I can know it.” A still life is like a little world more than any other kind of painting. It suits me to be able to contemplate this piece of my world in front of me—and it’s available for study because it’s up close and personal. . . .
Sanctum Sanctorum: The National Gallery of Art in Washington, D.C.
Lee Sandstead
Author’s note: An extensive gallery of original photography of the National Gallery of Art and its superb collection can be found here.

If, like me, you hold that art is a necessity of ardent living, then experiencing art is one of the most crucial aspects of your life. And just as food is not meant only to be looked at in magazines but eaten—so too paintings and sculptures are not meant merely to be looked at in books, but devoured in person. This requires visiting museums, where most great art is housed.

My favorite museum, after years of travel and thousands of hours spent in museums, is the West Building of the National Gallery of Art (NGA) in Washington, D.C. Although there are larger museums with bigger collections—most notably the Metropolitan Museum of Art in New York City—the NGA has a first-rate collection, a top-notch preservation policy, and a spectacular architectural setting.

In America, museums on the East Coast have the strongest collections. Those such as the NGA, Met, and Museum of Fine Arts (MFA), Boston—recipients of the generosity of America’s 19th- and early-20th-century collector-industrialists—contain the largest, most-diverse group of masters and masterpieces. Museum collections in the western and southern parts of the country tend to consist more of second-tier artists and artworks. The J. Paul Getty Museum in Los Angeles, for instance, is a well-endowed museum with a large but relatively weak collection. It has no original Vermeers, merely a copy of one. Likewise, its A Young Girl Defending Herself Against Eros is not the original by 19th-century master William Bouguereau, but rather a much smaller reproduction done mostly by his assistants. Although the Getty does have a Raphael, it is a minor, early portrait, rather than one of his celebrated Madonnas. The museum’s Canaletto is not one of his giant panoramas of Venice, but a minor painting of the Arch of Constantine. By contrast, the NGA has four Vermeers, five Raphaels, and nine paintings by Sir Peter Paul Rubens; no West-Coast museum comes close to having masterworks on this scale.

[Image: Leonardo da Vinci, Ginevra de’ Benci, ca. 1474/1478. Housed at the NGA, this is the only painting by da Vinci in the United States. Although only in his early twenties when he painted Ginevra de’ Benci, Leonardo was at his innovative best in this painting, placing the sitter in an outdoor setting, positioning the body in a three-quarter pose, and using a new medium—oil painting. Photo credit: Lee Sandstead.]

Created by an act of Congress in 1937, the NGA was formed largely from the donated collections of Andrew Mellon and Samuel Kress, and it features a robust collection of Renaissance, Baroque, rococo, neoclassical, Romantic, and American art. It houses the only painting by Leonardo da Vinci in the United States and important works of several masters, including Rembrandt, Boucher, Fragonard, David, and Bierstadt. . . .
2011 Essay Contest Winner: "'Dog Benefits Dog': The Harmony of Rational Men's Interests"
Antonio Puglielli
Responds to the prompt: In Atlas Shrugged, Ayn Rand dramatizes the principle that "there are no conflicts of interest among rational men, men who do not desire the unearned . . . men who neither make sacrifices nor accept them." Elucidate and concretize this principle using examples from both Atlas and real life.
The Help, directed by Tate Taylor
C.A. Wolski
Written and Directed by Tate Taylor
Starring Emma Stone, Viola Davis, Bryce Dallas Howard, Octavia Spencer, Jessica Chastain, Allison Janney, and Sissy Spacek
Distributed by DreamWorks SKG
Rated PG-13 for thematic material
Reviewed by C. A. Wolski
Only a handful of fictional films, among them To Kill a Mockingbird and In the Heat of the Night, have successfully addressed the ugly realities of racism in 20th-century America in compelling, dramatic ways. Tate Taylor's The Help can be added to this list.
Set in the deeply segregated Mississippi of 1963, The Help is, on one level, about a young, privileged white woman's attempts to become a professional writer. Skeeter Phelan, played by Emma Stone, is the daughter of an old, wealthy, socially connected white family in Jackson, Mississippi. After graduating from Ole Miss with an English degree, Skeeter has come home, hoping to pursue her dream of writing literature, taking her first step by writing the housekeeping column for the local paper. Skeeter's career choice is diametrically opposed to those of her lifelong friends and the rest of the Junior League who, at twenty-three, have already settled down and begun having babies. Led by Hilly Holbrook (Bryce Dallas Howard), these would-be Scarlett O'Haras are supported by "the help" of the title, black housekeepers who do the cleaning, shopping, and cooking and, most critically, raise generation after generation of white children, yet are not even allowed to use their employers' bathrooms.
While writing her column, Skeeter seeks the assistance of Aibileen Clark (Viola Davis), the black maid of one of her friends. In so doing, she sees for the first time the ugliness that underlies the system in which she has lived her entire life. Here the story turns to deeper matters and the theme of independence versus conformity. . . .
Steve Jobs by Walter Isaacson
Daniel Wahl
New York: Simon & Schuster, 2011. 656 pp. $35 (hardcover). Reviewed by Daniel Wahl

With the recent passing of Steve Jobs, Walter Isaacson’s biography of the now-legendary businessman was certain to become a best seller. And it has. But not everything that sells well is worth reading. Is this?

In Steve Jobs, Isaacson’s focus is on the choices, actions, and value judgments that Jobs made throughout his life—as well as on how Jobs himself evaluated these choices and actions. The result is that you truly get to know Steve Jobs—to see “what made him tick,” what he did, and how it all worked out for him—from his childhood on. As the only biographer with whom Jobs ever cooperated, Isaacson is able to include a lot of new information. For example, Isaacson tells us that Jobs knew from a very early age that he was adopted and gives us a dramatic moment when he realized what other people might think about his being adopted:

“My parents were very open with me about that,” [Jobs] recalled. He had a vivid memory of sitting on the lawn of his house, when he was six or seven years old, telling the girl who lived across the street. “So does that mean your real parents didn’t want you?” the girl asked. “Lightning bolts went off in my head,” according to Jobs. “I remember running into the house crying. And my parents said, ‘No, you have to understand.’ They were very serious and looked me straight in the eye. They said, ‘We specifically picked you out.’ Both of my parents said that and repeated it slowly for me. And they put an emphasis on every word in that sentence.” (p. 4)

Owing partly to this event, and partly to another—where Jobs noticed how smart he was in comparison with others—Isaacson shows how Jobs began to regard himself highly. He also quotes Jobs showing how he thought later in life of his being adopted:

“There’s some notion that because I was abandoned, I worked very hard so I could do well and make my [biological] parents wish they had me back, or some such nonsense, but that’s ridiculous,” he insisted. “Knowing I was adopted may have made me feel more independent, but I have never felt abandoned. I’ve always felt special.” (p. 5)

Isaacson shows that Jobs was independent to the core, that he never really cared what others thought on any deep level, a trait that Isaacson says often worked in Jobs’s favor, by making him more assertive and less hesitant in going after what he wanted. . . .
This is Herman Cain! My Journey to the White House by Herman Cain
Gideon Reich
New York: Threshold Editions, 2011. 223 pp. $25 (hardcover). Reviewed by Gideon Reich

In This is Herman Cain, Herman Cain attempts to convince the reader to support him in his run for president of the United States by telling the story of his life, with emphasis on his amazing business accomplishments. Although the impressive story is somewhat undercut by Cain’s mixed politics and religious (even superstitious) beliefs, this self-confident, ambitious, and capable business leader appears to be an admirable man.

Cain recounts his early childhood, growing up in segregated Atlanta “po’, which is even worse than being poor” (p. 1). His father “worked three jobs: as a barber, as a janitor at the Pillsbury Company, and as a chauffeur at the Coca-Cola Company”; and his mother worked as a maid (p. 15). Nevertheless, thanks to his father’s influence, Cain had a positive attitude:

My attitude then—as it is to this very day—was that you take a seemingly impossible goal and you make it happen. That was one of the many lessons I learned from Dad: He never allowed his lack of formal education to be a barrier to his success. And he never allowed his starting point in life or the racial conditions of his time to be excuses for failing to pursue his dreams. Dad taught me the value of having dreams, the motivation to pursue them, and the determination to achieve them. (p. 14)

According to Cain, he was ambitious from a young age, pursuing a series of ever more-challenging goals. He studied mathematics in college, then went to work in the U.S. Navy as a mathematician. When he learned that he was being passed over for promotions because he had only a bachelor’s degree, he studied computer science at Purdue University and earned his master’s degree in “one intense, demanding year” (p. 42). He did get promoted, and, at twenty-seven, achieved his first goal—a job that earned more than $20,000 a year (p. 44). . . .
American Individualism—How a New Generation of Conservatives Can Save the Republican Party by Margaret Hoover
Michael A. LaFerrara
New York: Crown Forum, an imprint of the Crown Publishing Group, a division of Random House, Inc. 247 pp. $24.99 (hardcover). Reviewed by Michael A. LaFerrara

While working on the 2004 Bush-Cheney reelection campaign team, Fox News contributor Margaret Hoover came to a stark realization: On gay rights, reproductive freedom, immigration, and environmentalism, the Republican party “was falling seriously out of step with a rising generation of Americans . . . the ‘millennials’” (pp. ix, x). “[B]orn roughly between the years 1980 and 1999 [and] 50 million strong,” this rising new voter bloc, says Hoover, has “yet to solidly commit to a political party” and thus could hold the key to the GOP’s electoral future (p. xi).

Hoover looks back for comparison to 1980, when Ronald Reagan fused a coalition of diverse conservative “tribes” around a central theme: anticommunism (p. 25). If the millennials, who “demonstrate decidedly conservative tendencies” (p. xii), could be united with today’s conservatives under “a new kind of fusionism” (p. 41), the Republican party would be on its way to majority status, she holds.

Hoover sees differences among conservatives and divides the “organized modern conservative coalition in America” (p. 28) into three main categories:

- economic libertarians and fiscal conservatives, led by three “leading lights” who “were . . . not populists [nor] self-described conservatives,” but “thinkers”—Friedrich von Hayek, Milton Friedman, and Ayn Rand;

- social conservatives, traditionalists, and the “Religious Right,” led early on by Russell Kirk, Richard Weaver, and Robert Novak, and later by Jerry Falwell, Pat Robertson, James Dobson, and Phyllis Schlafly; and

- anticommunists and paleocons, led by Whittaker Chambers, John Chamberlain, James Burnham, and Pat Buchanan.

According to Hoover, these three factions have formed the core of the movement that began with the publication of the National Review in November 1955 (p. 28) and have since been joined by neocons (p. 35), Rush Limbaugh’s “Dittoheads,” Sarah Palin’s “Mama Grizzlies,” the Tea Party uprising (pp. 36–37), and the “Crunchy Cons” and “enviro-cons” (p. 37). Hoover’s hope is to find common ground between these conservatives and the millennials. . . .
Disabling America: The Unintended Consequences of the Government's Protection of the Handicapped by Greg Perry
Joshua Lipana
Nashville: WND Books, 2004. 240 pp. $17.99 (hardcover). Reviewed by Joshua Lipana

For the purpose of “helping” the disabled, President George H. W. Bush signed the Americans with Disabilities Act into law in 1990. In Disabling America, Greg Perry tells us that the “ADA infiltrates the lives of average Americans in ways far beyond what we usually think—wheelchair signs in parking lots and grab bars in public restrooms” (p. 2). And as the book shows, the ADA affects virtually everything in the private sector.

Perry, a successful writer and businessman who was born with one leg and only three fingers, explains in chapter 1, “Compassion or Coercion,” why he believes the ADA is immoral. He compares a situation in which a person voluntarily helps an elderly lady cross a street with a situation in which the government forces you to help the lady to cross the street. In the guise of compassion, we get state coercion.

With a legal gun to your head, the government now states that you will be compassionate to the disabled and you must implement that commission exactly [how] the government spells out that you are to do so. Such force is cruel to both the disabled and the non-disabled. (p. 3)

Perry moves on to show the damage that government intervention in the name of the disabled has done to businesses, including forcing some to close down. He reports on how business owners have had to spend hundreds of thousands—in some cases millions—of dollars fighting baseless lawsuits and complying with ADA standards, and how their overall freedom has been diminished. . . .
The Right to Earn a Living: Economic Freedom and the Law by Timothy Sandefur
Loribeth Kowalski
Cato Institute, 2010. 376 pp. $25.95 (hardcover). Reviewed by Loribeth Kowalski

Parents in America typically tell their children that they can be anything they want to be when they grow up, and children tend to believe it and explore the countless possibilities. I recall my own childhood aspirations: imagining myself as an archaeologist, wearing a khaki hat and digging in the desert sun; as a veterinarian, talking to the animals like Dr. Doolittle; as a writer, alone at my desk, fingers poised over a typewriter keyboard. Recently I found an old note in a drawer. It said, “When I grow up, I want to be a doctor. I want to save people. When I grow up, I WILL be a doctor.” Underneath my signature I had written “age 10.”

Unfortunately, in today’s America, a child cannot be whatever he wants to be. Leave aside for the time being the difficulties involved in entering a profession such as medicine. Consider the more man-on-the-street jobs through which millions of Americans seek to earn a living, support their families, and better themselves. Suppose a person wants to drive a taxi in New York City. To do so, he will first have to come up with a million dollars to buy a “medallion.”1 If he wants to create and sell flower arrangements, and lives in Louisiana, he’ll have to pass a “highly subjective, State-mandated licensing exam.”2 If he wants to sell tacos or the like from a “food truck,” and lives in Chicago, he had better keep his business away from competing restaurants, or else face a ticket and fine.3 And a child doesn’t have to wait until he’s an adult to directly experience such limitations on his freedom. Last summer, authorities in various states shut down children’s lemonade stands because they didn’t have vending permits or meet other local regulations.4

In today’s America, it is increasingly difficult to enter various professions, near impossible to enter some, and, whatever one’s profession, it is likely saddled with regulations that severely limit the ways in which one can produce and trade. Timothy Sandefur explores and explains these developments in The Right to Earn a Living: Economic Freedom and the Law. Sandefur addresses this subject in the most comprehensive manner I’ve seen, surveying the history of economic liberty from 17th-century England through the Progressive era in America and up to the present day. He shows how the freedom to earn a living has been eroded in multiple ways throughout the legal system, from unreasonable rules, to licensing schemes, to limitations on advertising, to restrictions on contracts. In The Right to Earn a Living, we see how these and other factors combine to create a system in which it is more and more difficult to support oneself and one’s family in the manner one chooses.
Keynes Hayek: The Clash That Defined Modern Economics by Nicholas Wapshott
Richard M. Salsman
New York: W. W. Norton & Company, 2011. 382 pp. $28.95 (hardcover). Reviewed by Richard M. Salsman

The financial-economic crash of 2008–9, dubbed the “Great Recession” by pundits who have insisted its severity was second only to that of the Great Depression (1930s), has been blamed on “greed,” tax-rate cuts (2003), the GOP, and looser regulations in the prior decade—that is, on what passes today for full, laissez-faire capitalism (the same culprit fingered in the 1930s). The crash has also renewed interest in Keynesian economics, which holds that free markets are prone to failures, breakdowns, and recessions due to excessive production (supply) and can be cured of slumps only by state intervention to boost demand and dictate investment. And the crash has led to the worldwide adoption of two pet policies of John Maynard Keynes (1883–1946): massive deficit spending and inflation to “stimulate” stagnant economies. In fact, economies continue to languish not in spite of Keynesian policies but because of them.

One key factor precipitating the recent revival of Keynes was the awarding of a Nobel prize to Keynesian Paul Krugman in fall 2008, during the worst weeks of the crisis, when the $700 billion bank bailout (TARP) was debated and enacted. A half dozen new books since 2008 also have helped revive Keynesian notions; one is subtitled “return of the master,” another eagerly reports that the crash has “restored Keynes, the capitalist revolutionary, to prominence.” As in the 1930s, when Keynes first exerted strong influence on policy, he is depicted today as capitalism’s savior, favoring a mixed economy to quell popular angst over recessions and prevent more authoritarian alternatives (fascism, communism).

Like most intellectuals today, British journalist Nicholas Wapshott (formerly senior editor at the London Times and New York Sun) falsely attributes the recent financial crisis to overly free markets; he also admires Keynes, his demand-side theories, and his interventionist policies. Yet unlike typical hagiography on Keynes, Wapshott adopts an ideas-oriented approach to Keynes’s revival in his book, Keynes Hayek: The Clash That Defined Modern Economics. Like most interpreters, Wapshott believes that Keynesianism somehow “saves” capitalism from itself and from ultimate political tyranny, although he does not deny (or bother to hide) the many cases where Keynes expresses an unvarnished hatred for individualism and free markets. He acknowledges (and welcomes) the return of Keynesian policies, but he worries they may have been hastily implemented and thus ineffectual, given that multi-trillion-dollar stimulus schemes in the three years since 2008 have not boosted growth or jobs.

Wapshott rightly recounts how Keynesianism was discredited during the 1970s “stagflation” (which it could not explain) and successfully challenged by “efficient market” theorists and classically oriented supply-siders (“Reaganomics”). But he exaggerates the reach of pro-capitalist ideas and policies in recent decades, and pins blame for the recent crash on what is still free about markets, not on the state interventions that necessarily render otherwise efficient markets dysfunctional and destructive. Yet Wapshott’s main goal in Keynes Hayek is to have us understand Keynes’s recent revival in the context of a long-running battle or “clash” between the ideas and policies of Keynes and those of Austrian economist Friedrich Hayek (1899–1992), who is portrayed as the champion of free markets and skeptic toward state intervention.
Wapshott mostly succeeds in achieving his goal, but in the end he draws the wrong conclusion—namely, that the Keynesian revival is warranted—because he believes, not merely with Keynes, but, we see, also with Hayek, that markets fail when left free. In fact, free markets do not fail, but widespread belief that they do has helped revive Keynes. . . .
Capitalist Solutions: A Philosophy of American Moral Dilemmas by Andrew Bernstein
Ari Armstrong
New Brunswick: Transaction Publishers, 2012. 180 pp. $34.95 (hardcover). Reviewed by Ari Armstrong

How often does an author defend the right of citizens to own guns and the right of homosexuals to marry—in the same book chapter? In his new book Capitalist Solutions, Andrew Bernstein applies the principle of individual rights not only to “social” issues such as gun rights and gay marriage but also to economic matters such as health care and education and to the threat of Islamic totalitarianism. Bernstein augments his philosophical discussions with a wide range of facts from history, economics, and science.

The release of Capitalist Solutions could not have been timed more perfectly: It coincides with the rise of the “Occupy Wall Street” movement that focuses on “corporate greed” and the alleged evils of income inequality. Whereas many “Occupiers” call for more government involvement in various areas of the economy—including welfare support and subsidies for mortgages and student loans—Bernstein argues forcefully that government interference in the market caused today’s economic problems and that capitalism is the solution.

The introductory essay reviews Ayn Rand’s basic philosophical theories, with an emphasis on her ethics of egoism and her politics of individual rights. Bernstein harkens back to this philosophical foundation throughout his book, applying it to the issues of the day. . . .
Toyota Under Fire: Lessons for Turning Crisis into Opportunity by Jeffrey K. Liker and Timothy N. Ogden
Daniel Wahl
New York: McGraw-Hill, 2011. 237 pp. $20 (Kindle edition). Reviewed by Daniel Wahl

Already battered by slowing automobile sales due to the 2008 recession, Toyota faced a second crisis: claims that its management had put short-term profits ahead of its customers’ safety. With commentators in the United States harshly criticizing the Japanese car manufacturer, Jeffrey K. Liker felt compelled to rise to Toyota’s defense. Liker is the author of six books on the company, including the international best seller The Toyota Way, which shows readers the principles and operations that enabled Toyota to become both highly regarded by its customers and one of the most consistently profitable companies ever. In short, Liker knows Toyota more intimately than most, and the claims he was hearing in 2009 didn’t correspond to that knowledge.

But before he rushed to defend the company, Liker paused. A friend reminded him that blindly defending the company wasn’t “the Toyota way,” and he had to agree.

The Toyota Way demands that any problem be thoroughly investigated before any conclusions are reached. It demands that problem solvers “go and see” the problem firsthand and not rely on abstract, thirdhand reports. It demands thoughtful and critical reflection to find root causes and develop effective solutions. Most of all, it demands that every team member openly bring problems to the surface and work to continuously improve what is within their control. I wasn’t doing any of these things. Whether Toyota was living up to its principles or not, I wasn’t. (loc 165)

So Liker set aside his defense of Toyota and set out to investigate what happened at Toyota during these crises; Toyota Under Fire: Lessons for Turning Crisis into Opportunity presents his findings. Together, Liker and coauthor Timothy N. Ogden went to plants across America and Japan to see whether Toyota was still the same company that Liker profiled in his earlier books—a company living up to its principles. As it turned out, Liker was glad he paused.
Dare to Stand Alone: The Story of Charles Bradlaugh, Atheist and Republican by Bryan Niblett
Roderick Fitts
Oxford: Kramedart Press, 2011. 400 pp. $32 (hardcover). Reviewed by Roderick Fitts

Dr. Bryan Niblett’s work, Dare to Stand Alone: The Story of Charles Bradlaugh, Atheist and Republican, immerses the reader in the life of a man who courageously fought against the Victorian-era culture of his time—and won. Niblett shows Bradlaugh to be a radical of his time, whose life’s work consisted of passionately arguing in support of his unpopular views, including atheism and individual rights, and against injustices such as the “Oaths Act” of England, which excluded men with certain religious beliefs, including atheism, from taking office as a member of Parliament (MP).

Niblett’s thesis is “that one man, relying on reason, and daring to stand alone, can make a difference in the world” (p. viii). This he shows by surveying the life of Charles Bradlaugh (1833–1891), including personal, family, social, and business matters, but focusing primarily on various legal conflicts that defined Bradlaugh’s career. His life and struggles are presented in a series of short and accessible chapters.

We first see Bradlaugh as a poor young lad with a sense of justice, a desire to gain a wide range of knowledge and skills, and a penchant for conveying his ideas to others by means of logical arguments. He would grow to be one of 19th-century England’s greatest orators, a famous (and detested) atheist, the founder and first president of the National Secular Society, a powerful and distinctive MP, and a prominent opponent of socialism and communism. Niblett shows that Bradlaugh was a consummate individualist, believing that people should never accept the claims of authorities on faith or expect to be taken care of by others, but rather should seek to understand matters for themselves and solve their own problems. And because he held that men should live by the judgment of their own minds, he held that they should be free to do so. . . .