More Fake News On Mariel

It seems that the tremors set off by my Mariel paper (which first circulated privately almost two years ago; here is the published version) are still reverberating. I’m quickly losing track of all the rebuttals. But those critiques, including an early reaction by David Roodman written about a month after the public release of my NBER working paper, the Peri-Yasenov paper that appeared three months after the NBER release, and a recent exercise by Alex Nowrasteh at Cato, have not been able to demolish my evidence.

As anyone involved in the immigration debate well knows, the narrative that immigration is good for everyone must live on. Each time one of these critical appraisals comes out, the reaction is the same. A lot of gloating from the usual suspects in the interwebs about my original paper being proved wrong, etc. But, somehow, the paper refuses to retire peacefully to that burial ground populated by tens of thousands of forgotten and useless academic studies, as additional rebuttals keep appearing to beat up what the gloaters have repeatedly declared to be a dead horse.

So it is not surprising that my inbox is again cluttered with messages about yet another paper that questions my results. And this time the paper comes along with the appearance of paid-for empirical research. This new exercise was funded by a Silicon Valley “philanthropic” organization, Good Ventures. It’s hard to make this stuff up, but Good Ventures, run by Facebook co-founder Dustin Moskovitz, actually lists “love” as its first value. And, as we all know, such organizations, just like pharmaceutical and energy companies, will never fund research that offers anything but a balanced and objective appraisal of their missions.

The main criticism that Michael Clemens and Jennifer Hunt make of my Mariel paper is succinctly stated in their abstract:

We show that conflicting findings on the effects of the Mariel Boatlift can be explained by a sudden change in the race composition of the Current Population Survey extracts in 1980, specific to Miami but unrelated to the Boatlift.

I have not had the time, and most definitely do not have the desire, to go line-by-line through their code. But I can very easily dismiss their entire criticism simply by looking at what happens when I exclude all blacks from my analysis, so that the post-1980 increase in the relative number of blacks cannot possibly play any role in generating the wage drop in Miami. Curiously enough, the evidence resulting from this trivially simple exercise is not reported in the Clemens-Hunt paper.

One crucial caveat: By excluding blacks, the sample size in the March CPS becomes even smaller than it was in my original Mariel analysis. Nevertheless, the results from the larger ORG samples seem similar.

This exercise is extremely easy to do with the programs and data that I put online last year. You only need to add one line to the code: a line that drops blacks from the sample (and here are the new programs). To my surprise, and despite the very small sample sizes, not much happens. Just look at the graph of the three-year moving average of the wage of non-black, non-Hispanic high school dropouts in Miami and in all other cities.

[Figure: March wage]
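For concreteness, the one-line restriction described above can be sketched as follows. This is a hedged illustration in Python rather than the posted programs themselves, and the column names and toy numbers are hypothetical stand-ins, not the actual CPS extract:

```python
# Illustrative sketch: restrict the sample, then smooth the yearly mean
# log wage with a centered 3-year moving average. All names and data
# here are made up for illustration.
import pandas as pd

def smoothed_wage_trend(df, drop_black=True):
    """Mean log wage by year, smoothed with a centered 3-year moving average."""
    sample = df[df["black"] == 0] if drop_black else df  # the "one added line"
    yearly = sample.groupby("year")["log_wage"].mean()
    return yearly.rolling(window=3, center=True).mean()

# Toy data: two workers per year, one black and one non-black.
toy = pd.DataFrame({
    "year":     [1978, 1978, 1979, 1979, 1980, 1980, 1981, 1981],
    "black":    [0, 1, 0, 1, 0, 1, 0, 1],
    "log_wage": [2.0, 1.8, 2.1, 1.7, 1.9, 1.5, 2.0, 1.6],
})

trend = smoothed_wage_trend(toy)  # uses only the black == 0 rows
```

If the wage decline survives with `drop_black=True`, the racial-composition story cannot be driving it, which is the point of the exercise.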

And here’s the same graph with the larger ORG sample:

[Figure: ORG wage]

And for those interested in regressions, these are the regression coefficients and standard errors that go along with those reported in the last column of Table 5 in my paper. As in the original paper, the regression coefficients are smaller and less significant in the ORG, but I showed that some of that arises because the ORG sample excludes many people who happened not to work in the survey’s reference week.

[Figure: Revised regression table]

In short, using the increase in the relative size of Miami’s black workforce after 1980 to dismiss my Mariel evidence only obfuscates the debate further; it does nothing to clarify it.

There is no doubt that the racial composition of the sampled low-skill workforce in Miami changed beginning in 1980 (at least in the March CPS). These are the trends in both the March and ORG samples. (As an aside, there seems to be a problem in Table 1 of the Clemens-Hunt paper that confuses the survey year with the earnings year for the ORG. You can tell their data are wrong because they show the sample size for the ORG increasing in 1980. That increase should have been observed in 1979, the first year of the ORG files.)

[Figure: Percent black]
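The survey-year versus earnings-year mix-up in the aside above is easy to state precisely. A minimal sketch, encoding the standard CPS convention (the March supplement asks about earnings in the previous calendar year, while the ORG asks about earnings at the time of the survey):

```python
# Map a CPS survey year to the calendar year the reported earnings cover.
# This encodes the standard CPS timing convention described in the text.
def earnings_year(survey_year, source):
    if source == "march":
        return survey_year - 1   # March supplement: last year's earnings
    elif source == "org":
        return survey_year       # ORG: earnings in the survey year itself
    raise ValueError(f"unknown source: {source}")
```

Under this convention, an ORG sample-size jump tied to earnings year 1979 should appear in the 1979 files; if a table shows it appearing in 1980 instead, the two year concepts have likely been swapped.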

But the claim that the rising proportion of blacks explains the observed decline in Miami’s low-skill wage is more than just a little misleading; it is downright false. Most obviously, note that the fraction of blacks in Miami’s low-skill workforce is relatively constant between the 1980 and 1984 survey years (representing earnings between 1979 and 1983), which just happen to be the years when the wage of high school dropouts fell most in my original paper! Here’s the graph showing the original year-by-year wage trend in the March CPS including blacks, rather than the 3-year moving average. It is trivially easy to see that the timing is off. There is no connection between the large 1979-83 drop in the low-skill wage and the black share of the workforce. (And here’s the comparable graph in the ORG for the curious geek.)

[Figure: Wage and percent black]
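The timing argument above can be reduced to a simple comparison. The numbers below are made up purely to illustrate the logic; they are not the actual CPS series:

```python
# Hypothetical series over the 1979-83 window, for illustration only:
# a black share that barely moves alongside a wage that falls steadily.
years       = [1979, 1980, 1981, 1982, 1983]
black_share = [0.30, 0.31, 0.30, 0.31, 0.30]   # roughly constant
log_wage    = [2.00, 1.90, 1.80, 1.72, 1.65]   # steady decline

wage_drop   = log_wage[0] - log_wage[-1]            # total decline over the window
share_shift = max(black_share) - min(black_share)   # how much the share moved
```

A large `wage_drop` alongside a near-zero `share_shift` over the same years is exactly the timing mismatch the text describes: a composition shift that does not move cannot account for a wage that does.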

From my perspective, the increased proportion of low-skill black workers beginning in the 1980 survey raises even more questions. Could it be more than just a sampling glitch or a coding problem with the original CPS data? Where did all the white low-skill workers go? Might there be a link between their gradual disappearance in the ORG and the post-Mariel labor market dislocations? What do we make of the very different trends in the racial composition of the workforce in the March and ORG surveys? What does it say about the sacred statistics derived from the CPS?

In the meantime, however, the narrative must live on. And if there are funds to ensure its survival (and there seem to be an awful lot of “charity” organizations out in Silicon Valley trying to reenact the Summer of Love), there will surely be a large supply of researchers with an incentive to use every trick in the book to throw noise into the discussion, and further confuse and obfuscate the issue “with a little help from their friends.” Hopefully, this new Summer of Love will not come crashing down in Altamont.

Update, 5-23-17. About an hour after the blog post went online, I discovered that the specification I used in the ORG regression was not identical to the one I used in the Mariel paper. I have updated the regression table using the same specification, and also updated the programs.

Update, 5-24-17. And here are some more results.