The Science of Sound Frequencies Explained

What Is Sound Frequency?

In my work developing audio tools for WhiteNoise.top, I deal with sound frequencies every single day, and I have come to appreciate how elegant the underlying physics really is. Sound frequency is the number of complete pressure oscillation cycles that occur per second, measured in hertz (Hz). A tuning fork vibrating at 440 Hz produces 440 complete cycles of compression and rarefaction every second, creating the musical note A above middle C. This simple relationship between vibration rate and perceived pitch is one of the most fundamental concepts in acoustics.
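
To make the relationship concrete, here is a minimal Python sketch (illustrative only, not code from our generators) that synthesizes one second of that 440 Hz tone; the sample rate and duration are arbitrary choices for the demonstration.

```python
import numpy as np

SAMPLE_RATE = 44_100   # samples per second (a common, but arbitrary, choice)
DURATION_S = 1.0
FREQUENCY_HZ = 440.0   # the A above middle C

t = np.arange(int(SAMPLE_RATE * DURATION_S)) / SAMPLE_RATE
tone = np.sin(2 * np.pi * FREQUENCY_HZ * t)   # 440 compression-rarefaction cycles per second

print(f"Samples per cycle: {SAMPLE_RATE / FREQUENCY_HZ:.1f}")   # ~100.2
```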

Sound itself is a mechanical wave that requires a medium, such as air, water, or a solid, to propagate. When a source vibrates, it creates alternating regions of high pressure (compressions) and low pressure (rarefactions) that travel outward from the source. The frequency of these oscillations determines the pitch we perceive, while their magnitude determines the loudness. In air at room temperature, these pressure waves travel at approximately 343 meters per second, a value known as the speed of sound.

The range of frequencies that humans can hear spans roughly from 20 Hz at the low end to 20,000 Hz at the high end, though this range varies significantly between individuals and narrows with age. In my listening tests, I have found that most adults over 30 have difficulty hearing pure tones above 15,000 Hz, and by age 60, the upper limit typically drops to around 12,000 Hz. This age-related hearing loss, called presbycusis, disproportionately affects high frequencies because the hair cells in the cochlea that detect these frequencies are the most vulnerable to cumulative damage.

Wavelength and Its Relationship to Frequency

Frequency and wavelength are inversely related through the speed of sound. The wavelength of a sound wave is the physical distance between two consecutive points of identical phase, such as two adjacent compressions. The formula is straightforward: wavelength equals the speed of sound divided by the frequency. At 343 meters per second in air, a 20 Hz sound has a wavelength of about 17.15 meters, while a 20,000 Hz sound has a wavelength of only 1.7 centimeters.
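
The calculation is simple enough to express in a few lines of Python; this sketch just restates the formula above and reproduces the figures in the previous paragraph.

```python
SPEED_OF_SOUND_M_S = 343.0   # air at room temperature

def wavelength_m(frequency_hz: float, speed_m_s: float = SPEED_OF_SOUND_M_S) -> float:
    """Wavelength = speed of sound / frequency."""
    return speed_m_s / frequency_hz

for f in (20, 440, 20_000):
    print(f"{f:>6} Hz -> {wavelength_m(f):.4f} m")
# 20 Hz -> 17.1500 m, 440 Hz -> 0.7795 m, 20,000 Hz -> 0.0172 m (about 1.7 cm)
```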

In my experience designing acoustic environments and testing audio equipment, wavelength has profound practical implications. Low-frequency sounds, with their long wavelengths, diffract easily around obstacles and are difficult to absorb or contain. This is why bass from a neighbor's subwoofer travels through walls so effectively, and why bass traps in recording studios need to be physically large to be effective. High-frequency sounds, with their short wavelengths, are easily absorbed by soft materials and blocked by thin barriers, which is why closing a door effectively attenuates treble but barely affects bass.

When I design noise generators, I think about wavelength in the context of room acoustics. A room with dimensions comparable to the wavelength of a particular frequency will exhibit standing wave patterns at that frequency, creating positions of constructive and destructive interference. These are called room modes, and they can cause dramatic level variations at low frequencies. A 70 Hz tone, with a wavelength of about 4.9 meters, might be 20 decibels louder at one position in a room than at another position just two meters away. Understanding this interaction between wavelength and room dimensions is essential for anyone working with sound.
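
As a rough illustration, the axial (wall-to-wall) mode frequencies for one room dimension follow f_n = n * c / (2 * L). The sketch below uses hypothetical room dimensions purely for demonstration; real rooms also have tangential and oblique modes that this simple calculation ignores.

```python
SPEED_OF_SOUND_M_S = 343.0

def axial_mode_hz(dimension_m: float, order: int) -> float:
    """Axial standing-wave frequency between two parallel surfaces: f_n = n * c / (2 * L)."""
    return order * SPEED_OF_SOUND_M_S / (2 * dimension_m)

# Hypothetical room dimensions, purely for illustration
for name, length in (("length 5.0 m", 5.0), ("width 4.0 m", 4.0), ("height 2.7 m", 2.7)):
    modes = [round(axial_mode_hz(length, n), 1) for n in (1, 2, 3)]
    print(f"{name}: first axial modes at {modes} Hz")
# The 5 m dimension has a second-order mode near 68.6 Hz, close to the 70 Hz example above
```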

Amplitude, Intensity, and the Perception of Loudness

While frequency determines pitch, amplitude determines how loud a sound is. Amplitude refers to the maximum deviation of the sound pressure from its undisturbed, ambient value, and it is directly related to the energy carried by the wave. Sound intensity, measured in watts per square meter, is proportional to the square of the amplitude: doubling the amplitude quadruples the intensity.

Human hearing operates across an extraordinarily wide range of intensities. The quietest sound a healthy young person can detect, the threshold of hearing at 1 kHz, has an intensity of about one trillionth of a watt per square meter. The threshold of pain occurs at roughly one watt per square meter, a factor of one trillion higher. To manage this enormous range, acousticians use the decibel scale, which compresses the intensity ratio into a more manageable logarithmic range of 0 to about 130 dB SPL.
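
The decibel figures above follow from the standard definition of sound intensity level, 10 log10(I / I0), with I0 equal to the threshold of hearing. A minimal sketch:

```python
import math

I_REF_W_M2 = 1e-12   # reference intensity: threshold of hearing at 1 kHz

def intensity_to_db(intensity_w_m2: float) -> float:
    """Sound intensity level in decibels relative to the hearing threshold."""
    return 10 * math.log10(intensity_w_m2 / I_REF_W_M2)

print(intensity_to_db(1e-12))   # 0 dB   (threshold of hearing)
print(intensity_to_db(1.0))     # 120 dB (around the threshold of pain)
```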

In my measurements, I have observed that the perception of loudness is not uniform across frequencies. Human ears are most sensitive in the range of 2,000 to 5,000 Hz, which corresponds to the resonant frequency of the ear canal. A 1 kHz tone at 40 dB SPL sounds noticeably louder than a 100 Hz tone at the same level. This frequency-dependent sensitivity is captured by equal-loudness contours, originally measured by Fletcher and Munson in the 1930s and later refined by Robinson and Dadson. When I calibrate our noise generators, I take these contours into account to ensure that the perceived loudness remains consistent even when users adjust the spectral shape.
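
The equal-loudness contours themselves are published as measured curves, but the standard A-weighting filter (IEC 61672) is a widely used engineering approximation of the same frequency-dependent sensitivity at moderate listening levels. Here is a small sketch of that curve; it is a stand-in for the contours, not a reproduction of them.

```python
import math

def a_weighting_db(frequency_hz: float) -> float:
    """A-weighting curve (IEC 61672), a rough engineering approximation of the
    ear's frequency-dependent sensitivity at moderate levels."""
    f2 = frequency_hz ** 2
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20 * math.log10(ra) + 2.00

print(round(a_weighting_db(1000), 1))   # ~0 dB reference point
print(round(a_weighting_db(100), 1))    # ~ -19 dB: much less sensitive at 100 Hz
print(round(a_weighting_db(3000), 1))   # ~ +1 dB: near the ear's most sensitive region
```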

Harmonics, Overtones, and Timbre

Pure tones, which consist of a single frequency, are rare in nature. Most real-world sounds are complex waveforms made up of a fundamental frequency plus a series of harmonics, which are integer multiples of the fundamental. A guitar string vibrating at 220 Hz produces harmonics at 440 Hz, 660 Hz, 880 Hz, and so on. The relative amplitudes of these harmonics give each instrument its characteristic timbre, which is why a piano and a violin playing the same note sound fundamentally different.
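
One way to hear this for yourself is to synthesize two complex tones with the same 220 Hz fundamental but different harmonic amplitudes. The amplitude recipes below are hypothetical, chosen only to contrast a brighter and a mellower timbre.

```python
import numpy as np

SAMPLE_RATE = 44_100
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE   # one second

def complex_tone(fundamental_hz: float, harmonic_amplitudes: list[float]) -> np.ndarray:
    """Sum a fundamental and its integer-multiple harmonics; the amplitudes set the timbre."""
    tone = np.zeros_like(t)
    for n, amp in enumerate(harmonic_amplitudes, start=1):
        tone += amp * np.sin(2 * np.pi * n * fundamental_hz * t)
    return tone / np.max(np.abs(tone))   # normalize the peak level

# Hypothetical harmonic recipes: same pitch, different timbres
bright = complex_tone(220.0, [1.0, 0.8, 0.6, 0.5, 0.4])
mellow = complex_tone(220.0, [1.0, 0.3, 0.1, 0.03, 0.01])
```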

In my analysis of natural sound recordings for our platform, I use spectrograms to visualize the harmonic content of different sources. A spectrogram plots frequency on the vertical axis, time on the horizontal axis, and intensity as color or brightness. Tonal sounds like birdsong and engine hum show clear horizontal lines at the fundamental and harmonic frequencies. Broadband sounds like rushing water and wind show continuous energy spread across a wide frequency range, with no distinct harmonic structure.
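
For readers who want to try this, the sketch below (assuming SciPy is available) computes a spectrogram of a toy signal that stands in for a field recording: a 220 Hz tone plus broadband noise.

```python
import numpy as np
from scipy import signal

SAMPLE_RATE = 44_100
t = np.arange(2 * SAMPLE_RATE) / SAMPLE_RATE   # two seconds

# Toy stand-in for a field recording: a tonal component plus broadband noise
x = np.sin(2 * np.pi * 220 * t) + 0.2 * np.random.default_rng(0).standard_normal(t.size)

# Rows are frequency bins, columns are time frames, values are power
freqs, times, sxx = signal.spectrogram(x, fs=SAMPLE_RATE, nperseg=2048)
print(sxx.shape)   # (1025, ...) -> frequency bins from 0 Hz up to 22,050 Hz
```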

Noise signals, by their nature, lack harmonic structure. White noise has energy at all frequencies with random phase relationships, so there is no periodicity and no fundamental pitch. This is precisely what makes noise useful for masking: because it contains no tonal pattern, the auditory system cannot latch onto it the way it latches onto speech or music. It remains in the perceptual background, raising the threshold for detecting other sounds without demanding attention itself.
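
Generating white noise is correspondingly simple; conceptually it is just a stream of independent random samples, as in this minimal sketch.

```python
import numpy as np

SAMPLE_RATE = 44_100
rng = np.random.default_rng(seed=0)

# One second of white noise: independent samples, flat average spectrum,
# no periodicity and therefore no fundamental pitch for the ear to latch onto
white = rng.standard_normal(SAMPLE_RATE)
white /= np.max(np.abs(white))   # normalize the peak level before playback
```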

How the Human Ear Processes Frequencies

The human auditory system performs a remarkable real-time frequency analysis using the cochlea, a fluid-filled spiral structure in the inner ear. Sound enters the ear canal, vibrates the eardrum, and is transmitted through three tiny bones, the malleus, incus, and stapes, to the oval window of the cochlea. Inside the cochlea, the basilar membrane vibrates in response to the incoming sound. Different positions along the membrane respond to different frequencies: the base of the cochlea near the oval window responds to high frequencies, while the apex responds to low frequencies.

This tonotopic organization means the cochlea essentially performs a continuous spectral analysis of incoming sound. Each position along the basilar membrane excites specific hair cells, which convert the mechanical vibration into electrical signals sent to the brain via the auditory nerve. The brain then interprets these signals as pitch, loudness, timbre, and spatial location.

In my work with noise generators, I find it useful to think about the cochlea's frequency resolution, which is described by the concept of critical bands. A critical band is the frequency range within which the ear integrates acoustic energy. At low frequencies, critical bands are narrow in absolute terms, about 100 Hz wide below 500 Hz. At higher frequencies, they widen, reaching roughly 2,500 Hz wide at 10 kHz. This variable resolution reflects the cochlea's roughly logarithmic frequency map, which is also why the ear perceives pitch logarithmically: a frequency change from 100 Hz to 200 Hz sounds like the same musical interval as a change from 1,000 Hz to 2,000 Hz, even though the absolute difference is ten times larger.
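
The exact bandwidth figures depend on which model you use; as one rough illustration, the classic Zwicker and Terhardt analytic fit for critical bandwidth reproduces the trend described above.

```python
def critical_bandwidth_hz(frequency_hz: float) -> float:
    """Approximate critical bandwidth (Zwicker & Terhardt-style analytic fit)."""
    f_khz = frequency_hz / 1000.0
    return 25.0 + 75.0 * (1.0 + 1.4 * f_khz ** 2) ** 0.69

for f in (100, 500, 1000, 4000, 10_000):
    print(f"{f:>6} Hz -> ~{critical_bandwidth_hz(f):.0f} Hz wide")
# Roughly 100 Hz wide at low frequencies, widening to a couple of kilohertz near 10 kHz
```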

Frequency in the Context of Audio Engineering

Audio engineers work with frequency constantly, whether they are designing speakers, mixing music, or building noise generators like ours. The standard audible range of 20 Hz to 20 kHz is divided into conventional sub-bands for convenience: sub-bass (20 to 60 Hz), bass (60 to 250 Hz), low midrange (250 to 500 Hz), midrange (500 Hz to 2 kHz), upper midrange (2 to 4 kHz), presence (4 to 6 kHz), and brilliance (6 to 20 kHz). Each band contributes differently to the overall character of a sound.
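
For reference, here are those conventional sub-bands expressed as a small lookup table, with a helper that reports which band a given frequency falls into; the boundaries are the conventional ones listed above, not hard physical limits.

```python
# Conventional sub-bands as (low_hz, high_hz) pairs
SUB_BANDS = {
    "sub-bass":       (20, 60),
    "bass":           (60, 250),
    "low midrange":   (250, 500),
    "midrange":       (500, 2_000),
    "upper midrange": (2_000, 4_000),
    "presence":       (4_000, 6_000),
    "brilliance":     (6_000, 20_000),
}

def band_of(frequency_hz: float) -> str:
    for name, (low, high) in SUB_BANDS.items():
        if low <= frequency_hz < high:
            return name
    return "outside the conventional audible bands"

print(band_of(3_000))   # upper midrange
```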

In my experience tuning our noise profiles, I pay special attention to the 2 to 4 kHz range because this is where the ear is most sensitive and where speech consonants carry most of their information. Small changes in energy in this range have a disproportionate effect on perceived brightness and intelligibility. When I shape a masking noise to reduce speech perception, I ensure there is adequate energy in this region to interfere with the consonant frequencies that carry meaning.

The Nyquist-Shannon sampling theorem governs how frequencies are captured in digital audio. To accurately represent a signal, the sampling rate must be at least twice the highest frequency present. Standard CD-quality audio uses a 44,100 Hz sample rate, allowing faithful reproduction of frequencies up to 22,050 Hz. Our noise generators operate at this sample rate by default but can be configured for higher rates when users require extended bandwidth for specialized applications such as ultrasonic testing or oversampled processing chains.
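
A quick way to see why the limit matters is to sample a tone above the Nyquist frequency and look at where its energy lands. In this sketch, a 30,000 Hz tone sampled at 44,100 Hz aliases down to 14,100 Hz.

```python
import numpy as np

SAMPLE_RATE = 44_100
NYQUIST_HZ = SAMPLE_RATE / 2        # 22,050 Hz

t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
tone_hz = 30_000                    # deliberately above the Nyquist limit
x = np.sin(2 * np.pi * tone_hz * t)

spectrum = np.abs(np.fft.rfft(x))
peak_hz = np.argmax(spectrum) * SAMPLE_RATE / t.size
print(peak_hz)                      # ~14,100 Hz: the tone aliases to 44,100 - 30,000
```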

Understanding frequency is not just academic; it is the practical foundation upon which all audio tools are built. Every equalizer, filter, compressor, and noise generator operates by manipulating the frequency content of a signal. In my development work, a solid grasp of frequency theory informs every design decision, from the choice of filter topology to the resolution of spectral analysis displays. It is the language of sound, and fluency in it is essential for anyone working in audio engineering.

Frequently Asked Questions

What is the range of human hearing in hertz?

The commonly cited range is 20 Hz to 20,000 Hz, but this varies between individuals. Most adults lose sensitivity to frequencies above 15,000 Hz as they age, and the range continues to narrow over time.

Why do low-frequency sounds travel through walls more easily?

Low-frequency sounds have long wavelengths that are comparable to or larger than typical wall thicknesses. Long wavelengths diffract around obstacles and are not efficiently absorbed by thin barriers, allowing bass to pass through structures that block higher frequencies.

What is the difference between frequency and pitch?

Frequency is a physical property of a sound wave measured in hertz. Pitch is the subjective perception of frequency by the human auditory system. They are closely related but not identical, as pitch perception is influenced by loudness, timbre, and context.

Why does the ear perceive pitch logarithmically?

The cochlea's basilar membrane maps frequencies logarithmically along its length, so equal physical distances correspond to equal octave intervals. This logarithmic mapping means that perceived pitch intervals correspond to frequency ratios, not absolute frequency differences.

What sampling rate is needed to capture the full audible range?

According to the Nyquist-Shannon theorem, the sampling rate must be at least twice the highest frequency of interest. A 44,100 Hz sample rate captures frequencies up to 22,050 Hz, comfortably covering the full audible range.

Leo Chen

Leo Chen is a tool developer and audio enthusiast, focused on building practical online sound and productivity tools.