Government: Only Whistleblowers Get Fired

Anthony Salazar worked at the Los Angeles VA medical center in the ‘engineering service.’ In 2013, he reported that 30 of the center’s 88 motor vehicles were ‘unaccounted for.’ He also reported suspected fraudulent purchases on 10 government credit cards.

As a result, the VA convened an Administrative Investigation Board (AIB) in January 2014 to examine the facts and circumstances surrounding stolen agency vehicles, including whether managerial oversight played a part in the theft of such vehicles. The AIB concluded that managerial oversight contributed to the theft of government vehicles, for which [Engineering Service Chief Robert] Benkeser received a letter of counseling.

Mr. Benkeser then turned around and fired Salazar. This happened even though it’s usually very difficult to fire a federal employee. But in this case, the government managed to get the job done with alacrity. Salazar filed an appeal, but it was denied by the Administrative Judge. The taxpayers have to eat the loss of the 30 vehicles.

Salazar got the boot, but Benkeser still has his job. As do a number of other questionable VA employees.

For example, two felons work in management at the San Juan VA hospital, and a worker in the security office came to work each day with a GPS monitor because she had taken part in an armed robbery, which a spokesman said was irrelevant since it occurred off-duty. At another VA hospital, a nurse’s aide remains on the payroll as he awaits a manslaughter trial in the beating death of a patient.

Seems like the only way to get fired from the bureaucracy is to blow the whistle on government corruption. It’s almost as if government is a criminal enterprise or something.

Abolish the VA and give veterans a voucher for private health insurance.

Olive Oil: Buy Domestic?

One thing I noticed while in Italy a few years ago was how much fresher and tastier their olive oil seemed. Even a cheap bottle from an Italian supermarket gives off a wonderful aroma that wafts out of the bottle as soon as one takes off the cap. Tony just got back from visiting Italy this summer and he brought back a huge haul of olive oil–something like 25 bottles–made from scratch by his family in southern Italy.

Given how good the olive oil is in Italy, it would seem that in the U.S. the thing to do would be to buy olive oil exported from Italy. But that might not in fact be a good idea, because Italy apparently uses the U.S. as a dumping ground for bad oil. Italy gets away with this because…wait for it…Americans generally don’t know olive oil from shinola.

“We call the U.S. the world’s dumping ground for rancid and defective olive oil. We don’t know the difference,” said Sue Langstaff, a sensory scientist who consults for the beer, wine and olive oil industries, among others. Studies have shown that even frequent olive oil consumers in the U.S. don’t know what the extra virgin or cold pressed designations mean, let alone have the ability to taste the difference. And in blind taste tests, consumers often prefer lower-quality olive oils.

Rancidity, for example, isn’t generally a sought-after quality in edible products. And yet, when it comes to olive oil in the U.S., people like it. Why? Partly, because rancid olive oil is less bitter than the good stuff. But also, likely because it’s what many of us know and grew up with. It’s what we think olive oil is supposed to taste like.

‘Murica. Where rancid olive oil is actually preferred. And we therefore have a textbook illustration of the gains from trade. Italy keeps the good olive oil, sends Americans the rancid oil, and everyone’s happy!

Anonymous commenter El Sabor Asiático offers some additional information on Italian exports.

Megacorporations like Bertolli are able to bend politicians to their will, which means that much of the “Italian” olive oil that is imported into the U.S. is actually produced in other parts of the world, then passed through Italy to get the “product of Italy” rubber stamp.

Hmmm. The label on my cheap bottle of Kroger brand ‘extra virgin’ oil says: “Packed in Italy with oils of (A) Italy, (B) Spain, (C) Greece, (D) Tunisia. (SEE CAP)” On the cap there’s a code that starts with a number and ends in “ABCD”. So does that mean that the bottle includes oil from all four countries? If so, why mix them? Olive oil is perishable and relatively costly to transport, so why send stuff from Spain to Italy only to be sent ultimately to the USA? Also, what are the proportions from each country? In particular, how much is from Tunisia? Hmmm.

The standard for what can be labeled “extra virgin olive oil” is so lax that it can be cut with low-grade “lamp oil” made from spoiled olives, or even with soy/canola oil. And enforcement of even these lax regulations is so inadequate that fly-by-night producers can pull off fraudulent schemes, make their profit, and disappear.

El Sabor Asiático makes the case for oil from California. The advantage is proximity.

The reason California olive oil is so often superior (when bought in the U.S.) and does well in these kinds of surveys is very simple: olive oil is actually a fruit juice, and extra virgin olive oil is essentially a fresh-squeezed fruit juice. If you think about what happens to fruit juice if it has to travel long distances or is stored for long periods of time, that is similar to what happens to olive oil. (And then imagine how much worse it gets when it’s shipped from South America to Europe and then to the U.S. — and then on top of that the fact that it’s low-grade oil to begin with.) Domestic olive oil producers have a tremendous natural advantage in terms of quality simply because their oil is pressed here and doesn’t need to travel nearly as far as imported oil.

Today in the Kroger I looked for olive oil from California. The selection was rather limited, but Kroger did carry a couple of brands. Prices were reasonable, as a 17 ounce (or so) bottle goes for about 6 or 7 dollars. I picked up a bottle but haven’t tried it yet.

Licensing Locks People Out of Work

Back in the 1950s, only one job in twenty required a license or certification. Now one job in four does. An incompetent doctor or dentist or even truck driver does pose a danger to the public, so there’s an argument for licensure of those professions. The problem, however, is that licensure has expanded to encompass jobs where information asymmetries pose little risk to the consumer. Examples include barbers, pet groomers, and interior decorators. A particularly egregious case concerns hair braiders.

A total of 30 states require braiders to be specifically certified in braiding, or to go even further by obtaining a full license in cosmetology.

In 14 states, aspiring hair braiders are required to attain a license specifically for braiding, and in those states, the number of hours in required training ranges from a low of six in South Carolina to a high of 600 in Oklahoma—nearly four times as many hours as it takes to become an emergency medical technician in the state, according to the Institute for Justice. [Emphasis added.]

That fact is something that should be pondered carefully by anybody who believes that making more laws and regulations will bring more order and reason to society.

Sixteen states, meanwhile, require aspiring hair braiders to attain a cosmetology license, which is often more costly and requires a minimum of 1,000 hours of training or education, as in Massachusetts and Wyoming, for example.

South Dakota requires the highest number of training hours, 2,100, for hair braiders to become licensed cosmetologists and practice their craft.

Evidence suggests that the hair braiding requirements are effectively achieving their true purpose, which of course is not to protect consumers, but to prevent people from entering the occupation. Louisiana, which requires 500 hours of training, has just 47 braiders in the whole state. In contrast Mississippi, which repealed its cosmetology requirement a few years ago, has more than 1,200 braiders.

The good news is that a reform movement is working to repeal licensing requirements for hair braiding. This movement, oddly enough, is supported by both the Koch brothers and the Obama White House. The movement has had considerable success in just the last couple of years.

In 2015 and 2016, nine state legislatures—in Arkansas, Colorado, Maine, Texas, Delaware, Iowa, Kentucky, Nebraska, and West Virginia—eliminated licenses for hair braiders.

The following documentary tells the inspiring story of Melony Armstrong, whose quest to operate a braiding salon in Tupelo, Mississippi, was instrumental in eliminating the cosmetology requirement in that state.

Great Moments in Millennial Journalism


Following this advice would mean the end of roughly 3 out of every 5 marriages in America. Very serious!


After you get to the bottom of that one, let us know how the Clintons came to be worth hundreds of millions of dollars through careers in public service.


What’s amazing is that you can have a Harvard degree and presumably never have heard of Tolstoy, Pushkin, Dostoyevsky, Chekhov, Solzhenitsyn, Pavlov, Lobachevsky, Stravinsky, Rachmaninoff, Borodin, Tchaikovsky and countless other Russians who made monumental contributions to the legacy of human achievement.

We’ll give the last word, so to speak, to the Russian National Orchestra. (Trigger Warning: The name of this piece is “Marche Slave.”)

Are People Getting Dumber?

Psychologist Michael Woodley claims that, since the Victorian Era, average IQ has fallen by 10-15 points, which is as much as a full standard deviation. Woodley came to this conclusion by measuring reaction times, which correlate with IQ, and comparing his results to measurements made by the famous statistician Francis Galton during the 1880s.

Woodley’s conclusion seems to contradict the Flynn Effect, which is the fact that scores on pencil-and-paper IQ tests have increased in the long term by about 3 points per decade. Woodley argues, however, that IQ tests cannot be used to make comparisons in different time periods. Over time, IQ scores on tests have increased, but this doesn’t necessarily imply that people have gotten smarter; it could be that people have just gotten better at taking standardized pencil-and-paper tests.

Woodley says his results on reaction times are reinforced by results using other measures that correlate with IQ such as repeating digits backwards and (oddly enough) distinguishing between fine gradations of color. These criteria seem relatively immune to the sorts of bias that plague written IQ tests.

The cause of the 10-15 point drop remains a mystery. Woodley says that dysgenic fertility (the tendency for low IQ women to have more children than high IQ women) can account for a loss of only 3 or 4 points during the relevant time span. So about two-thirds of the decline remains unaccounted for. He doesn’t mention it, but it seems to me that the drop in childhood mortality should have had some effect as well. Lower childhood mortality probably implies that more relatively low-IQ children survive to adulthood.
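The arithmetic behind that “two-thirds” figure is easy to check. A quick sketch, using only the round numbers cited above (the point estimates themselves are Woodley’s, the bounds arithmetic is just illustrative):

```python
# Rough accounting of the claimed IQ decline, using the figures
# cited above: a 10-15 point total drop, of which dysgenic
# fertility can explain only 3-4 points.

decline_low, decline_high = 10, 15    # claimed total drop since ~1880
dysgenic_low, dysgenic_high = 3, 4    # portion attributable to fertility patterns

# Share of the decline that dysgenic fertility cannot explain,
# at the two extremes of the cited ranges:
unexplained_min = (decline_low - dysgenic_high) / decline_low    # 0.60
unexplained_max = (decline_high - dysgenic_low) / decline_high   # 0.80

print(f"Unexplained share: {unexplained_min:.0%} to {unexplained_max:.0%}")
```

So between 60% and 80% of the claimed decline is left unexplained, consistent with the “about two-thirds” in the text.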

Here’s an interview with Woodley in which he discusses his research. The interview is long and somewhat high-brow, but rewards scrutiny.

The Myth of Food Deserts

By now, most people are familiar with the concept of the ‘food desert,’ a poor neighborhood where residents do not have access to a grocery store selling healthy foods, like fresh fruits and vegetables. Recently, I found myself at a social gathering where the topic of food deserts came up and folks were shocked when I told them that food deserts are a myth. In fact, every serious study that has examined the issue has concluded that food deserts don’t exist.

The theory of food deserts actually has two components. The first is that poor people lack access to stores selling healthy foods. This notion has been promulgated by no less a personage than First Lady Michelle Obama.

“For 10 years,” [Michelle] Obama explained at a speech following her tour of the Fresh Grocer, “folks had to buy their groceries at convenience stores and gas stations, where usually, they don’t have a lot of fresh food—if any—to choose from.”

The second component states that this lack of access accounts for why poor people have unhealthy diets. It follows that if access were expanded, poor people would improve their diets by eating more healthy foods like fruits and vegetables. In short, the ‘food environment’ influences the diets and health outcomes of poor people.

Both components of the food desert theory are contradicted by the evidence. First, it is not true that poor people lack access to grocery stores.

[Dr. Helen] Lee also notes in her study that, on closer inspection, food deserts don’t actually exist in the U.S., at least not as a national problem—on average, poor neighborhoods have more grocery stores than wealthier neighborhoods. Even before Obama’s Healthy Food Financing Initiative was announced in 2010, studies suggested that the food desert explanation for obesity wasn’t right. A report from Department of Agriculture researchers presented to Congress in 2009 also showed more grocery stores in poor neighborhoods. In 2012, USDA researchers crunched the data again and found once more that low-income neighborhoods had more—not fewer—grocery stores.

Second, increasing access to fruits and vegetables doesn’t mean people will buy them. Many studies have in fact found that opening a new store does not cause people to change their diets.

Obesity levels don’t drop when low-income city neighborhoods have or get grocery stores. A 2011 study published in the Archives of Internal Medicine showed no connection between access to grocery stores and more healthful diets using 15 years’ worth of data from more than 5,000 people in five cities. One 2012 study showed that the local food environment did not influence the diet of middle-school children in California. Another 2012 study, published in Social Science and Medicine, used national data on store availability and a multiyear study of grade-schoolers to show no connection between food environment and diet. And this month, a study in Health Affairs examined one of the Philadelphia grocery stores that opened with help from the Fresh Food Financing Initiative. The authors found that the store had no significant impact on reducing obesity or increasing daily fruit and vegetable consumption in the four years since it opened.

That last study refers to the program initiated by Michelle Obama, which apparently accomplished nothing.

The above excerpt was published in Slate in February 2014. This month’s issue of Scientific American reports on yet another study.

Sadler, Gilliland, and Arku examined the impact of a retail-based intervention in a socioeconomically disadvantaged area of Flint…their research found the introduction of a grocery store in the area did not have a significant impact on fruit and vegetable consumption. Further, there was an increase in the amount of prepared and fast foods consumed during the 17 months the grocery store was open.

Poor people just tend to have lousy eating habits. Maybe cooking classes and more information on nutrition would help, but the focus on ‘food deserts’ is misguided.

Question: If food deserts are a myth, why do we keep hearing so much about them?

Answer: The myth is promoted by shady non-profits as a way to get government grant money.

The Fallacy of Paying Workers ‘Enough to Buy the Product’

Alexander O’Keefe brings to our attention this clip of former Labor Secretary Robert Reich propagating a classic economic fallacy.

In 1914, Henry Ford gave his factory workers three times what the typical factory worker was then earning. And the Wall Street Journal, you know, called him a communist, said ‘How can you justify doing this?’…’It’s ruining capitalism’…And what Henry Ford said, ‘Wait a minute. Because when my workers get all this money they can turn around and they can buy Model-T Fords.’ And he was right, because once the workers had that money, and when other factories started emulating exactly what he did–they had to do it to get their workers–then all of a sudden workers had enough money to go and buy cars. And so, everybody who was in the car business got wealthy.

So according to Robert Reich, if you’re running a business and you want to get wealthy, what you should do is overpay your workers. Maybe this explains why Reich makes a living by running only his mouth.

The fallacy Reich is peddling has persisted for a long time. So long, in fact, that Henry Hazlitt spent a chapter refuting it in his classic book, Economics in One Lesson, originally published in 1946, the same year Reich was born!

Hazlitt refers to the idea as the “enough to buy back the product” fallacy.

[T]he only wages that will work, they tell us, the only wages that will prevent an imminent economic crash, are wages that will enable labor “to buy back the product it creates.”

The Buy Back the Product (BBTP) fallacy gets repeated over and over. Here’s a version published in the Washington Post and involving Henry Ford’s son.

Legend has it that while Henry Ford II was giving a tour around a new, highly automated factory to union leader Walter Reuther in the 1960s, Ford joked: “Walter, how are you going to get those robots to pay your union dues?”

Reuther is said to have replied: “Henry, how are you going to get them to buy your cars?”

The Washington Post seems to believe that Reuther’s riposte is devastating, but it’s actually just another version of the BBTP fallacy.

As Hazlitt points out, BBTP makes no sense if taken literally.

But surely they cannot mean that the makers of cheap dresses should get enough to buy back cheap dresses and the makers of mink coats enough to buy back mink coats; or that the men in the Ford plant should receive enough to buy Fords and the men in the Cadillac plant enough to buy Cadillacs.

Nor can it mean that the employees of General Dynamics, a defense contractor, should be paid enough to buy a nuclear submarine.

But without taking BBTP too literally, let’s just assume that it means workers should be paid considerably more than the wages determined by the labor market. The theory seems to exist in two versions: micro and macro. The micro version of BBTP asserts that an individual firm or industry can benefit itself by raising wages above market levels. In the macro version, a benefit accrues to the economy as a whole when wages are raised across industries. Both versions of BBTP are fallacies.

The micro version of BBTP is a fallacy simply because, as Megan McArdle puts it, your “employees are not your customers.” This point seems obvious to us, but somehow McArdle’s readers remained incredulous, and compelled her to provide a numerical example.

[L]et’s run a simple model based on Henry Ford’s legendary $5-a-day wage, introduced in 1914, which more than doubled the $2.25 workers were being paid.

That’s about $700 a year, almost enough to buy a Ford car (the Model T debuted at $825). Now let’s assume, unrealistically, that the workers devoted their extra wages to buying nothing but Model Ts; as soon as they bought the first one, they started saving for the next.

Is Ford making money on this transaction? No. At best, it could break even: It pays $700 a year in wages, gets $700 back in the form of car sales. But that assumes that it doesn’t cost anything except labor to make the cars. Unfortunately, automobiles are not conjured out of the ether by sheer force of will; they require things such as steel, rubber and copper wire. Those things have to be purchased. Once you factor in the cost of inputs, Ford is losing money on every unit.

But can the company make it up in volume, as the old economist’s joke goes? Perhaps by adding the workers to its customer base, Ford can get greater production volume and generate economies of scale. But Ford sold 300,000 units in 1914; its 14,000 employees are unlikely to have provided the extra juice it needed to drive mass efficiencies.

Another source puts Ford’s sales at the time at 200,000, but either way, the point stands: Ford’s employees comprised only a tiny fraction of its customer base, and the same was true for the auto industry as a whole.
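McArdle’s arithmetic can be sketched in a few lines. The inputs below are the rough 1914 figures quoted above (14,000 workers, about $700 a year in extra wages, an $825 Model T, roughly 300,000 units sold); the 40% materials share is a purely illustrative assumption, not a historical number:

```python
# A back-of-the-envelope check of the "buy back the product" claim,
# using the rough 1914 Ford figures quoted above. The materials
# share is an illustrative assumption.

WORKERS = 14_000            # Ford's approximate workforce
EXTRA_WAGE_PER_YEAR = 700   # extra annual pay per worker from the $5/day raise
CAR_PRICE = 825             # Model T debut price, in dollars
UNITS_SOLD = 300_000        # Ford's approximate 1914 unit sales
MATERIAL_SHARE = 0.40       # assumed non-labor cost per car (illustrative)

# Best case for the theory: every extra dollar of wages returns to
# Ford as car purchases.
extra_payroll = WORKERS * EXTRA_WAGE_PER_YEAR   # dollars paid out
revenue_back = extra_payroll                    # dollars back, at most

# Even then Ford at best breaks even on the labor -- and each car
# also consumes steel, rubber, copper, etc., so the "bought back"
# cars are sold at a loss.
materials_cost = revenue_back * MATERIAL_SHARE
net = revenue_back - extra_payroll - materials_cost   # negative

# And the workforce is a sliver of the customer base.
cars_bought = extra_payroll / CAR_PRICE
share_of_sales = cars_bought / UNITS_SOLD

print(f"Extra payroll: ${extra_payroll:,}")
print(f"Net after materials: ${net:,.0f}")
print(f"Cars workers could buy: {cars_bought:,.0f} ({share_of_sales:.1%} of sales)")
```

Under these assumptions the scheme loses money on every “bought back” car, and the workers could absorb only about 4% of Ford’s output in any case.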

The macro version of BBTP is a fallacy because the extra income paid to workers has to come out of the pockets of business owners and shareholders. Workers will be able to spend more, but that extra spending is offset by lower spending from capitalists and others. (This is true even if capitalists intend to save and not spend, because money saved is lent out by banks and financial markets.) As a result, aggregate demand in the economy remains unchanged despite the increase in wages. As Hazlitt puts it,

The national product, it should be obvious, is neither created nor bought by manufacturing labor alone. It is bought by everyone–by white collar workers, professional men, farmers, employers, big and little, by investors, grocers, butchers, owners of small drugstores and gasoline stations–by everybody, in short, who contributes toward making the product.

Worse, as Hazlitt points out, paying artificially high wages would in the long run create an economic distortion that would seriously impair overall production and economic efficiency.

But such a change would mean that the dollar profit margin, representing the income of investors, managers and the self-employed, would then have…[lower]…purchasing power….The long-run effect of this would be to cause a diminution of investment and new enterprise compared with what it would otherwise have been…

But if BBTP is a fallacy, why then did Henry Ford raise wages in 1914? Blogger Mark Hodak explains.

Ford’s stated intent in dramatically raising wages was to reduce the huge turnover his new assembly line process had created, and the high costs of dealing with that turnover.  In other words, it was a bold solution to a novel production problem.

So Robert Reich got not just the economics wrong, but also the history. This is coming from a guy who makes his living as a public intellectual, who is a former Labor Secretary with degrees from multiple elite universities (Dartmouth, Oxford, Yale), and who dated Hillary Rodham in college; in short, a star in the firmament of America’s Ruling Class.

And it’s another example of how the current Ruling Class just plain sucks.

Public Service, Then and Now


1884-85: Ulysses S. Grant, while dying of throat cancer, works desperately during the last year of his life to complete his memoirs. Having given up his military pension in order to become president, he needs to sell his memoirs to provide an income for his wife. The memoirs, completed just a few days before Grant’s death, are now considered one of the greatest works of American non-fiction.

1953: At the termination of his presidency, Harry S. Truman drives himself and his family back to the old family home in Independence, Missouri. He is granted no secret service protection, and declines opportunities to make money by lending his name to corporate boards, since he considers it unseemly to take a no-show job. His income of $13,000 for 1954 does not allow him to hire a secretary to answer mail. Congress, out of concern for Truman’s financial situation, passes a law in 1958 granting a pension to former presidents.



2001-2016: Bill and Hillary Clinton amass a fortune of several hundred million dollars by peddling influence and taking bribes, laundered as ‘speaking fees’ and charitable gifts, from Wall Street and from shady international billionaires.


Not-So-Settled Science: The Radon Scam

An increasingly common tactic employed by political hacks is to overstate the science backing their position, and then to accuse those who disagree of being anti-science. Most prominently, this tactic is used to promote the agenda behind global warming, but the tactic is similarly employed on other issues, such as consumption of salt or of saturated fat.

As another example, consider radon, a radioactive gas that is present in some homes. For years now, federal and state governments have been spending taxpayer money on ‘public service’ ads to warn Americans of the health dangers posed by radon.

The claim that radon in your home increases your risk of lung cancer is repeated by government over and over, without qualification, as if it were a thoroughly well-established scientific fact. The actual science, however, is not nearly so unequivocal. There is considerable evidence that radon in uranium mines significantly increases the incidence of lung cancer, but the level of exposure in a mine exceeds by many times the level in a home. Studies of home exposure are decidedly more mixed, with some carefully executed studies finding no cancer risk from radon. Some studies even find that radon exposure in the home can DECREASE the risk of cancer. As Discover magazine explained in 2002,

Telling people to reduce the radon exposure in their home will actually increase their chances of getting lung cancer. Radiation exposure stimulates the immune system and makes people healthier and live longer. This is called “hormesis”.

From a 1995 study in Health Physics by Bernard L. Cohen:

With or without corrections for variations in smoking prevalence, there is a strong tendency for lung cancer rates to decrease with increasing radon exposure, in sharp contrast to the increase expected from the theory. The discrepancy in slope is about 20 standard deviations. It is shown that uncertainties in lung cancer rates, radon exposures, and smoking prevalence are not important and that confounding by 54 socioeconomic factors, by geography, and by altitude and climate can explain only a small fraction of the discrepancy… In spite of extensive efforts, no potential explanation for the discrepancy other than failure of the linear-no threshold theory for carcinogenesis from inhaled radon decay products could be found.

And here’s a graph based on Cohen’s data showing the inverse relationship between radon and lung cancer. The average home level of radiation is less than two units, and the data appear to show that cancer rates decline up to at least 5 units. This suggests that living in a home with two or three times the average level of radon actually lowers your cancer risk.


Why would government want to scare us needlessly about radon? Well, the all-purpose answer is always ‘follow the money.’ A lot of private companies are making money on radon mitigation at an average cost of $1,200 per home. Tellingly, they even have a trade association: the American Association of Radon Scientists and Technologists (AARST).


The association boasts on its website that

AARST supports you in your business by providing the tools and training you need to do your job. In addition, AARST maintains a strong presence in Washington, D.C., advocating for sound policies designed to open new markets and business opportunities.

You know something smells rotten in the state of Denmark when the way they “open new markets and business opportunities” is by maintaining “a strong presence in Washington, D.C.”

A conversation that will never be heard at the annual International Radon Symposium:

“Have you seen the studies showing that radon in the home is not dangerous and might actually decrease cancer risk?”

“Yes, apparently all this time we’ve been on the wrong track. We’ll have to find another line of work.”