Fish, Numbers, and the Story in Between

A New England groundfisherman sorting his catch.

The other night I was driving home with my son while we listened to a podcast about Carlos Rafael—the New Bedford seafood dealer who built a quiet empire selling fish that, according to official records, did not exist. The story unfolded with federal agents, secret ledgers, and species that seemed to change names somewhere between the deck of a fishing boat and the dock where the fish were sold.

At one point my son asked the question that sits quietly beneath the entire scandal.

“How can someone sell fish that don’t exist?”

The answer, of course, is that the fish always existed.

They were caught, iced, filleted, boxed, and sold just like any other fish moving through the seafood economy. The only thing missing was the official record describing them. Somewhere between the rail of a fishing boat and the government ledger, the fish changed identity. A species became another species. A thousand pounds became five hundred. Sometimes the fish vanished entirely.

On paper, the ocean grows a little smaller.

In Rafael’s case the manipulation was breathtaking. Hundreds of thousands of pounds of fish moved through a parallel accounting system—one set of numbers for regulators and another for the real marketplace. The fish themselves never disappeared. Only the story told about them did.

Stories like this tend to leave people with a lingering suspicion about the science that governs fisheries management. If the numbers can be manipulated so easily, it is tempting to assume the science itself must be broken—that the models and stock assessments guiding fisheries policy are built on fiction.

In one sense, that suspicion is justified.

Fisheries science is only as reliable as the records it receives. If fishermen lie, if dealers misreport landings, if fish move through unofficial channels, or if recreational catches are poorly estimated, the scientific picture assembled from those inputs will begin to drift away from reality. The models may be mathematically sound. The scientists running them may be acting in good faith. But a stock assessment cannot magically correct bad information.

It absorbs the numbers it is given and tries to reconcile them.

Which means that when the reporting system is compromised, the science built on top of it can be compromised too.

The mistake most people make is assuming that corruption, when it exists, must begin with the scientists. In reality it usually begins much earlier, at the point where fish become numbers.

When people imagine fisheries science, they picture government research vessels towing nets across the seafloor while biologists measure fish on stainless steel tables. Those surveys are real, and they matter enormously. NOAA’s bottom trawl survey has been quietly sampling the Atlantic for decades and remains one of the most important long-term ecological datasets in the world.

But those surveys are only one input into the machinery of fisheries science.

Most of the information used to understand American fisheries originates somewhere far less glamorous: in the records created when fishermen catch fish.

Every commercial fishing trip generates a trail of information. Captains file vessel trip reports describing where they fished and what they caught. When fish are landed, seafood dealers submit reports documenting how many pounds of each species entered the marketplace. Observers sometimes ride along to measure fish and estimate how many are discarded at sea.

And the commercial fleet is only part of the story.

In many fisheries—cod among them—the recreational fleet removes an enormous share of the fish. Millions of anglers fish American waters every year from charter boats, private vessels, beaches, jetties, and docks. Those fish have to be counted as well, even though recreational fishermen are not required to file trip reports the way commercial captains are.

To estimate recreational catch, scientists rely on large survey programs designed to reconstruct how many people are fishing, how often they go, and what they catch. Interviewers meet anglers at docks and boat ramps, measure fish, and ask where they were caught. Separate surveys estimate how many fishing trips occur along entire regions of coastline. Together those pieces allow scientists to estimate total recreational harvest.

Over time these fragments accumulate into enormous datasets describing the interaction between people and fish. Survey catches provide one signal. Commercial landings provide another. Recreational estimates add a third. Fish sizes, growth rates, and mortality patterns contribute additional clues. Stock assessment models attempt to reconcile all of these signals at once, searching for a population trajectory that makes sense of the evidence.

In other words, fisheries science is not conducted outside the fishing industry.

It is built largely from the industry’s own record of its activity.

During my career I spent years examining fisheries data with a simple goal: looking for the places where those records begin to drift away from the fish themselves.

When you spend enough time staring at fisheries data, patterns begin to reveal themselves. Fish leave fingerprints. Species appear together in predictable combinations. Sizes follow familiar curves. Fishing effort produces catch compositions that repeat themselves season after season.

When those patterns begin to bend in strange directions, it usually means something has gone wrong with the record describing the fishery.

Sometimes the signals are subtle. A species appears where it rarely occurs. Dealer reports seem unusually large compared to the fishing effort that produced them. Catch begins appearing under names that shift depending on where the fish enter the market.

None of these distortions happen in the ocean.

Fish do not become data when they cross the rail of a boat. They become data only after someone identifies them, weighs them, writes them down, and reports them. Along that chain of decisions there are countless opportunities—sometimes small, sometimes large—for the numbers to wander away from reality.

Fish can disappear from the reporting system through underreporting, mislabeling, or sales that bypass official channels entirely. But distortions run in the opposite direction as well. Catch can be inflated through guesswork or sloppy reporting, creating the illusion of fish that were never there.

Either way, the signal the science is trying to detect becomes blurred.

Over time those distortions ripple through the models used to understand fish populations. If enough fish disappear from the reporting system, the models may conclude the population itself must be smaller than previously believed. If catch is inflated, the opposite distortion can occur.

Eventually that drift works its way into management advice—quotas rise or fall, seasons open or close—and when those outcomes conflict with what fishermen believe they see on the water, the conclusion often drawn is that the science must be wrong.

But there is another possibility that rarely receives much attention.

Sometimes the science is working exactly as designed.

Sometimes the problem is that the data feeding the science were never enforced in the first place.

For decades the United States has invested enormous resources building systems to measure the fishing industry—vessel trip reports, dealer reporting systems, observer programs, and large-scale recreational surveys. Together they form the backbone of how we understand what is happening in the ocean.

Yet enforcement of those reporting requirements has rarely kept pace with the importance of the information itself.

Instead of strengthening these core systems, the policy response has often been to search for new streams of data. Cooperative research programs are launched. Cameras are mounted on vessels. Electronic monitoring systems promise deeper insight into what happens at sea.

Many of these efforts are valuable. Some are genuinely innovative.

But they also reveal something strange.

We continue spending millions of dollars building new data streams while failing to enforce the ones our science already depends on.

At the center of the entire enterprise lies a simple act that rarely feels scientific at all.

Filling out a vessel trip report does not feel like science. For most fishermen it feels closer to filing taxes—another mandatory form to complete between one trip and the next. Dealer reports and recreational surveys can feel like bureaucratic obligations imposed from the outside rather than part of understanding the ocean.

But those records are not administrative paperwork.

They are the foundation of the science.

Long before a research vessel leaves the dock, long before a scientist opens a population model, the process of describing the fishery has already begun. Every trip report, every dealer landing record, every recreational survey interview adds another fragment to the picture scientists are trying to assemble.

In that sense, fisheries science does not begin with NOAA.

It begins with the fishing industry.

The models and surveys that shape fisheries policy are built on information generated by the people who catch fish. If those records faithfully describe what happened on the water, the science has a fighting chance.

But when they drift away from reality—through fraud, mislabeling, weak enforcement, or simple neglect—the science does not collapse in some dramatic fashion.

It does something quieter and more dangerous.

It absorbs the error.

A fish that is misreported does not simply disappear. It becomes a different number in a different column. It alters the catch history the models are trying to reconstruct. The error moves through the system—into mortality estimates, population projections, and management advice.

By the time the models run, the trajectory has already been set.

Caught fish become data.
Data become models.
Models become management decisions.

And everything that follows depends on whether that first translation—from fish to numbers—was honest.
