Here's how scientists are using machine learning to listen to fish

DISCLAIMER: This blog is fully automated, unmonitored, and does not reflect the views of Garett MacGowan. The ML model may produce content that is offensive to some readers.

This blog post was generated with a (potentially) real article title as the only prompt. A link to the original article is below.

Photo by Luca Bravo on Unsplash

Generated: 1/24/2022

A number of companies and research institutions are developing systems to record the sounds of marine animals in a way that could help marine biologists better understand how species communicate, fight and coexist. Credit: Matt Rose for The New York Times

Scientists have long thought of the ocean as the greatest library of information on the planet, but listening to all of that sound has been a long and arduous process, and the ocean is very large.

Even today, only a minority of marine scientists actually spend any time in the field listening to, and noting down, the sounds of life underwater. Many marine biologists, especially those working on the most important questions (say, population dynamics), are content to rely on data collected from fish tanks and small-scale field tests. If the species does not live near shore, data might come from a few scattered nets.

For example, studies of black sea bass, a large warm-water coastal fish of the western Atlantic, have yielded useful data. Biologists have recorded the range of frequencies the fish produces to attract mates and repel competitors. But while the research helped, it was not easy for marine biologists to reproduce these results in situ, making it hard to understand what the black sea bass actually tells other fish in its native waters.

As a result, scientists have been relying on studies of fish in tanks that reproduce only a fraction of what is known about the species in natural environments. This was one of the main reasons cited by two economists in a 2006 study as to why marine conservation was failing.

Now, researchers are using techniques borrowed from modern machine learning to collect new types of data on marine life. And the results suggest that these methods may actually be good at predicting and understanding much more about the ocean than earlier automated approaches, though still not as good as the ear of a trained biologist.

“I’ll admit that I find these tools kind of scary,” Stephen E. Koski, a marine biologist at the Monterey Bay Aquarium Research Institute in California, said of the machines. “But I actually feel as if there’s going to be more of an impact because now they’re a step removed from the original biological process.”

The work has also raised ethical issues.

Researchers are trying to determine how much noise people should be allowed to introduce when recording the sounds of marine life. The machines, originally designed to interpret human language, are making their way into the oceans and into devices that make it easier for people to hear the seas.

And if the machines are wrong, or if people start relying on them too much, there could be serious consequences, said Joseph D. Blumstein, a research scientist at the University of Delaware and a pioneer of underwater sound research. His field, bioacoustics, began largely as a way to listen to the ocean, but in some ways his research has turned into a way of knowing ocean life and even learning about human languages. And while he is concerned about the potential consequences of turning to automated instruments before biologists learn how to interpret what they hear in the ocean, he concedes that if the new tools make progress, it would put pressure on scientists to develop other kinds of techniques and expertise.

There have not been many studies that have looked explicitly at the way machine learning is being used to record sound underwater, but researchers say it is being widely used, and they are beginning to learn about what exactly is happening.

But this new wave of research is often focused on specific organisms in specific ways. It is not the kind of large-scale study that would be necessary to really understand the complexity of marine life, and its potential effects on human society.

Machines, as opposed to biologists, can collect information about individual animals, but many machine learning tools simply extract broad patterns from large amounts of data about fish, for example by analyzing how often, and in which ways, a fish emits different kinds of sounds. Biologists might then take specific aspects, like the pitch of a sound and the context in which it is produced, and try to work out patterns from them, as in the sketch below.
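
To make that concrete, here is a minimal sketch of what such broad pattern extraction might look like, assuming a handful of short, mono WAV clips of fish calls. The file names, the use of dominant frequency as the only feature, and the choice of two call types are all illustrative assumptions, not any particular team's pipeline.

```python
# Hypothetical sketch: estimate the dominant pitch of each clip,
# then group the clips into putative "call types" by clustering.
import numpy as np
from scipy.io import wavfile
from sklearn.cluster import KMeans

def dominant_frequency(path):
    """Return the strongest frequency (Hz) in a short mono WAV clip."""
    rate, samples = wavfile.read(path)
    samples = samples.astype(np.float64)
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    return freqs[np.argmax(spectrum)]

# Hypothetical recordings; one pitch feature per clip.
clips = ["call_001.wav", "call_002.wav", "call_003.wav", "call_004.wav"]
features = np.array([[dominant_frequency(p)] for p in clips])

# Group the clips into two putative call types by pitch alone.
labels = KMeans(n_clusters=2, n_init=10).fit_predict(features)
for path, label in zip(clips, labels):
    print(path, "-> call type", label)
```

A real system would use far richer features than a single pitch estimate, but the shape of the workflow, extracting features and grouping recordings, is the same.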

Researchers say it will be at least a couple of years before the tools can reliably interpret the content of a sound.

“We’re in early stages,” said James A. Estes, director of the Monterey Bay Aquarium Research Institute. “We’re trying to keep machines simple — we don’t want to lose the ability to actually understand what we’re hearing underwater.”

Machine learning, computer science

The tools take advantage of machine learning, a field of computer science loosely inspired by how people learn. Humans learn from each other's voices, gestures, posture and body movements; similar learning-from-examples techniques are now being applied to computers.

The field has become increasingly important as scientists find that humans often fail to use all of the information contained in their environments, or their “visual and audio input,” in communicating or reasoning.

The idea of machine learning is to teach a computer to recognize patterns in how one set of inputs changes in relation to another. You can then use the computer to interpret new input and to infer new outcomes, as in the sketch below.
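
As an illustration of that loop, here is a toy example using scikit-learn. The data is synthetic; the point is only the shape of the process: fit a model on known input/output pairs, then infer outcomes for input it has not seen.

```python
# Toy supervised learning: learn a hidden pattern from examples,
# then predict outcomes for new inputs. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))             # two measured features per example
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # the hidden pattern to recover

model = LogisticRegression().fit(X, y)    # learn the pattern from examples
print(model.predict([[1.5, 0.2], [-2.0, -0.5]]))  # infer new outcomes: [1 0]
```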

Scientists have been trying to make computers do this kind of work for many years, but only recently has the technology become sophisticated enough to make broad inferences that are hard to distinguish from those of a human.

The field grew out of research on how to analyze huge amounts of images and video. And people are now trying similar techniques on audio, a kind of data that is more difficult to work with. It requires a different set of skills and knowledge, which explains why, for a long time, the machines have had little to do with marine life.

This is starting to change. At the Monterey Bay Aquarium, in collaboration with the University of Southern California, Dr. Koski and his colleagues are using machine learning to teach a machine to recognize the sounds of different types of reef fish, so biologists can then figure out how the fish use specific sounds to communicate.
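
The article does not describe the team's actual pipeline, but a common recipe for this kind of task looks roughly like the following: summarize each clip as a fixed-size feature vector (here, averaged MFCCs computed with librosa) and train a classifier to map features to species. The file names and species labels below are hypothetical.

```python
# Hypothetical reef-fish sound classifier: MFCC features + random forest.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def clip_features(path, n_mfcc=20):
    """Average MFCCs over time so every clip becomes one fixed-size vector."""
    y, sr = librosa.load(path, sr=None, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Hypothetical labeled clips; a real set would hold thousands of examples.
training = [
    ("grunt_01.wav", "damselfish"),
    ("grunt_02.wav", "damselfish"),
    ("pop_01.wav", "clownfish"),
    ("pop_02.wav", "clownfish"),
]
X = np.array([clip_features(path) for path, _ in training])
species = [label for _, label in training]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, species)
print(clf.predict([clip_features("unknown_reef_clip.wav")]))  # best-guess species
```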

Some aspects of what they are learning could be predicted. But even within that broad context, there is a lot of variety, and the models struggle to make broad predictions about it.

“We’re still learning how to interpret sounds in the real world,” said Dr. Koski.

This is one of the challenges of working with acoustic data. It is more difficult to get a complete picture, because it often involves making broad assumptions about the context of the data and extrapolating from it. People can develop highly specialized ways of making very precise judgments, but the real world rarely cooperates with that kind of precision.

The tools being used are based on artificial neural networks, a method inspired in part by the way the human brain works. As the name suggests, the basic idea is to build a rough computational analogue of networks of neurons, and then to replicate the brain's success at learning from input.

A computer might, for example, be shown pictures along with data about what a person sees when looking at them. Shown more and more examples, it might learn which combinations of colors make something look orange or pink, and then make more accurate judgments when it encounters a new picture.
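
The color example can be made concrete with a small neural network. In the sketch below, the rule that decides whether a color counts as "orange" or "pink" is a toy stand-in invented for illustration; in the scenario described above, those labels would come from people.

```python
# A tiny neural network, in the spirit of the color example above.
import torch
from torch import nn

torch.manual_seed(0)

def toy_label(rgb):
    """0 = orange-ish, 1 = pink-ish (invented rule: pinks keep more blue)."""
    r, g, b = rgb
    return 1 if b > g else 0

# Generate warm colors (red kept high) and label them with the toy rule.
colors = torch.rand(1000, 3)
colors[:, 0] = 0.8 + 0.2 * colors[:, 0]
targets = torch.tensor([toy_label(c) for c in colors])

# A small network: 3 color channels in, 2 judgments ("orange", "pink") out.
model = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):  # learn which color combinations map to which label
    optimizer.zero_grad()
    loss = loss_fn(model(colors), targets)
    loss.backward()
    optimizer.step()

# Judge a color the network has never seen.
new_color = torch.tensor([[1.0, 0.3, 0.7]])  # reddish, with plenty of blue
print(model(new_color).argmax(dim=1))        # expected: tensor([1]) -> pink-ish
```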

The machine tries to find its own solution to a problem. And if it does succeed using these methods — and it usually does — the result may be the creation of a network that is more efficient, more useful, more flexible and ultimately, better than what a human might come up with on his or her own.

If the process is successful, machines can learn from other machines as well as from the best available human examples, at a speed and scale that could not be duplicated in person.

“It’s a very interesting convergence of the two different fields,” said David M. Kaplan, principal architect in the artificial intelligence systems group at HP Labs in Palo Alto.

That is why artificial neural networks are being applied to problems as diverse as identifying and tracking people by their faces, or automatically driving a car through a large city. But in the field of acoustic signal recognition, the application is very specific.

The tools are able to detect some broad trends in some very specific cases. This does not mean that the machines make reliable predictions. There is still a gap, as in the field of medicine: the fact that a machine identified certain patterns does not mean that it can be trusted to make an accurate diagnosis. And it would rarely lead to reliable predictions about animals in their natural habitats.
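
One way that gap shows up: a model can lean on cues that happen to hold in the lab but vanish in the wild, so strong scores on lab data say little about the open ocean. The sketch below fabricates such a shift; every number in it is made up for illustration.

```python
# Synthetic sketch of the lab-to-ocean gap: a classifier that looks
# reliable on tank-like data degrades when conditions shift at sea.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_clips(n, spurious_corr):
    """Feature 0 carries a weak real cue for the call; feature 1 is a
    background cue that tracks the label in the lab but not at sea."""
    y = rng.integers(0, 2, size=n)
    real_cue = y + rng.normal(scale=1.0, size=n)
    background = np.where(rng.random(n) < spurious_corr,
                          y, rng.integers(0, 2, size=n))
    background = background + rng.normal(scale=0.1, size=n)
    return np.column_stack([real_cue, background]), y

X_lab, y_lab = make_clips(1000, spurious_corr=0.95)  # tank-like conditions
X_sea, y_sea = make_clips(1000, spurious_corr=0.0)   # open-ocean conditions

model = LogisticRegression().fit(X_lab, y_lab)
print("lab accuracy:  ", model.score(X_lab, y_lab))  # looks trustworthy
print("ocean accuracy:", model.score(X_sea, y_sea))  # noticeably worse
```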

But the first steps are being taken, and they are promising. With the right kind of data and a sufficient number of people who are expert at reading it, a machine could learn to recognize patterns in the ocean in much the same way that a person would, said Michael G. Berman, co-director of the Center for Advanced Systems Innovation, an engineering research group at the University of Illinois that studies how machine learning can solve specific problems in the ocean.

Machine learning is being applied to everything from the way the sound of a car engine changes given different kinds of driving conditions to how the sound of air turbulence changes given the wind velocity.

But the machines are not as good as humans at recognizing those acoustic patterns. Teams have tested the machines in lab settings, where they work well, but the machines do not always pick out the patterns that will matter in the oceans.