Written by Petri Kuittinen, 1999
Noise can be found everywhere in nature. In the world of technology and science it is often an unwanted phenomenon. Noise is present in all analog devices, whereas the digital domain is, in principle, noise free: in the digital world there are only combinations of zeros and ones, and everything else is considered an error. Humans, however, are accustomed to processing information that contains noise and errors, so current digital systems can often appear too sterile to the human senses.
There are different kinds of noise. Noise can be coarse or soft, and it is sometimes described with a color (e.g. pink noise, white noise). The most important characteristic of noise is that it is unpredictable. Real, natural noise cannot be fully predicted, but it can be simulated with various mathematical formulas.
Human-generated arts, such as music, contain patterns and elements that have an interesting relation to noise. It seems that the kind of noise that appeals to us in nature is also appealing in man-made art. Carefully generated artificial noise can make images look better and improve the perceived sound quality.
Noise is present in all electronic circuits. In electronics, the term noise can be applied to anything that obscures the desired signal. Noise can itself be another signal (interference): for example, 50 Hz interference from power supplies and ground paths is very common. It sounds like a low hum, or appears as horizontally moving bars on a TV picture. Most often, however, the term noise describes "random" noise of physical (usually thermal) origin. For example, every resistor generates a noise voltage across its terminals, called Johnson noise.
Signal-to-noise ratio (SNR) is defined as the ratio of the signal power to the noise floor, expressed in decibels:

SNR = 10 log10 (V_signal^2 / V_noise^2) dB

The voltages are RMS (root-mean-square) values, and some bandwidth and center frequency are specified.
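As a quick sanity check, the formula can be evaluated directly (the function name below is purely illustrative):

```python
import math

def snr_db(v_signal_rms, v_noise_rms):
    """Signal-to-noise ratio in dB from RMS signal and noise voltages:
    SNR = 10 log10 (V_signal^2 / V_noise^2)."""
    return 10.0 * math.log10(v_signal_rms ** 2 / v_noise_rms ** 2)

# A signal whose RMS voltage is 1000x the noise floor: about 60 dB SNR
print(snr_db(1.0, 0.001))
```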
Noise with a flat spectrum is called "white noise". White noise sounds like the rush of a waterfall, and it looks like the "snow" on a TV set that is not tuned to any station. It can be easily produced with a pseudo-random number generator that has a uniform (flat) distribution: each sample of the white noise signal is an output of the generator. White noise is completely uncorrelated from point to point.
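A minimal sketch of that idea: every sample is an independent draw from a uniform pseudo-random generator (the function name is an illustration, not from the text):

```python
import random

def white_noise(n, seed=None):
    """n samples of uniform white noise in [-1.0, 1.0]; each sample is
    an independent output of the pseudo-random number generator."""
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(n)]

samples = white_noise(44100)  # one second of noise at CD sample rate
```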
1/f-noise is called "pink noise". It has equal energy in each octave, whereas white noise has equal energy in any given bandwidth. In other words, white noise contains an equal amount of all frequency components (a flat frequency response). The frequency response of pink noise falls with frequency: lower frequencies have more energy than higher ones. Pink noise sounds softer and less harsh than white noise, a little like a low hiss.
There are no simple mathematical models that produce 1/f-noise. It is usually generated artificially by filtering white noise. The physical origins of 1/f-noise are mostly unknown, but it is found in many physical systems: in almost all electronic components and semiconductor devices; in all time standards, from the most accurate atomic clocks to quartz oscillators to the sand flow of an ancient hourglass; in the flow of traffic on Japanese expressways; in the small voltages measurable across nerve membranes, caused by sodium and potassium flow; in the speed of ocean currents; and in the yearly flood levels of the Nile (measured over the last 2000 years). 1/f-noise is also found in music.
Pink noise is often used as a test signal in audio measurements. Almost all professional frequency analyzers have logarithmic frequency bands: the scale is finer at lower frequencies than at higher frequencies. Pink noise comes in handy here: an audio frequency analyzer shows a flat picture when the input is pink noise.
There are other colors of noise, e.g. brown (Brownian) noise, which has a 1/f^2 spectrum and can be produced by integrating (summing up) white noise.
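As an illustration, here is one common way to approximate these colors in code. The pink generator uses the Voss algorithm (summing several white-noise rows refreshed at octave-spaced rates), which is a different technique from the filtering mentioned above; the brown generator simply integrates white noise:

```python
import random

def voss_pink(n, octaves=8, seed=None):
    """Approximate 1/f (pink) noise with the Voss algorithm: sum several
    white-noise rows, where row k is refreshed only every 2**k samples."""
    rng = random.Random(seed)
    rows = [rng.uniform(-1.0, 1.0) for _ in range(octaves)]
    out = []
    for i in range(n):
        for k in range(octaves):
            if i % (1 << k) == 0:  # slower rows change less often
                rows[k] = rng.uniform(-1.0, 1.0)
        out.append(sum(rows) / octaves)
    return out

def brown_noise(n, seed=None):
    """1/f^2 (brown) noise: a running sum (integral) of white noise."""
    rng = random.Random(seed)
    level, out = 0.0, []
    for _ in range(n):
        level += rng.uniform(-1.0, 1.0)
        out.append(level)
    return out
```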
Richard F. Voss and J. Clarke made an exciting discovery: almost all musical melodies mimic 1/f-noise. If one takes a musical score and draws lines between successive notes of the melody line, the result is a graph remarkably similar to a graph of 1/f-noise. Voss and Clarke analyzed the pitch variations of different kinds of music, ranging from medieval music to Beethoven, Debussy and R. Strauss, to the traditional music of Japan, the classical ragas of India, the folk songs of Russia, American blues and the Beatles, and all showed this same pattern. A similar 1/f-spectrum can also be found in the loudness fluctuations of all these types of music.
Some very modern composers, such as Stockhausen, Jolet and Carter, are an exception: the melody fluctuations in their music approach white noise at low frequencies. Perhaps this is why their music sounds too random to most listeners?
Not surprisingly, stochastically generated computer music which uses 1/f-noise for note variation sounds closest to real music, compared to white noise (which sounds too random) or brown (1/f^2) noise, in which consecutive notes sound too correlated.
Gottfried Mayer-Kress and S. P. Layne studied the effect of various musical signals on human brain waves (EEG, MEG). There was a resonant response of the brain waves to the complexity of these signals. The response was similar in the cases of very high (periodic signal) and very low (random numbers) predictability, but it was distinctly different for the intermediate predictability of chaotic systems. In other words, a purely random musical signal sounds like noise to our ears, while a too static or periodic signal is just as boring. The aesthetically most interesting signals have a balanced mixture of recognition and surprise.
Dither is calibrated random noise.
When we reduce the number of possible colors in a picture, this is called quantization. Most simply this is done by snapping each color to its nearest remaining equivalent. But when the number of available colors afterwards is orders of magnitude smaller than before, or when the 'nearest' color is very different, we can get ugly viewing artifacts, called bands. This is because the gap between two successive shades of a color has become too big. One solution is called dithering: it smears the sudden color transitions into noise.
The simplest kind of dithering algorithm is random dither. For example, when converting a 256-level grey-scale image to black and white (2 colors): for each pixel in the picture, generate a random number between 1 and 256; if the image value at that point is greater than the random number, plot the point white, otherwise plot it black. This generates a picture with a lot of noise, which looks like TV-picture snow.
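The algorithm above, sketched for a greyscale image stored as a list of rows (the function name is illustrative):

```python
import random

def random_dither(image, seed=None):
    """Convert a 256-level greyscale image (pixel values 0-255) to pure
    black (0) and white (255) using random dither: each pixel is compared
    against a fresh uniformly random threshold in [1, 256]."""
    rng = random.Random(seed)
    return [[255 if pixel > rng.randint(1, 256) else 0 for pixel in row]
            for row in image]

# a mid-grey area: roughly half of the pixels come out white
grey = [[128] * 20 for _ in range(20)]
dithered = random_dither(grey, seed=7)
```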
While random dither adds a lot of high-frequency noise to a picture, it is useful in reproducing very low-frequency images where the absence of artifacts is more important than the absence of noise. For example, a whole screen containing a gradient of all levels from black to white would actually look best with a random dither.
There are other, more sophisticated kinds of dithering algorithms, e.g. pattern, ordered or error-dispersion dither. The last is often called Floyd-Steinberg error diffusion dither. Instead of using completely random noise, these algorithms use noise patterns which approximate the original, more detailed color image when looked at from a distance.
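A minimal sketch of Floyd-Steinberg error diffusion for a greyscale image, assuming pixel values 0-255:

```python
def floyd_steinberg(image):
    """Floyd-Steinberg error diffusion on a greyscale image (0-255).
    Each pixel is snapped to black or white, and the quantization error
    is distributed to the not-yet-processed neighbours with the classic
    7/16, 3/16, 5/16, 1/16 weights."""
    h, w = len(image), len(image[0])
    img = [[float(v) for v in row] for row in image]  # work on a copy
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 255.0 if old >= 128 else 0.0
            img[y][x] = new
            err = old - new
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16
    return [[int(v) for v in row] for row in img]
```

Because the error is carried forward rather than discarded, a flat mid-grey area turns into an alternating pattern whose average brightness stays close to the original.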
The quantization problem is also present in audio. The 16-bit linear PCM used on audio CDs has a theoretical dynamic range of about 96 dB, but current high-end digital recording consoles can do much better (up to 24-bit PCM). Signals below or near this -96 dB threshold are distorted or completely lost. Dither allows us to hear sounds that are below the threshold. Using simple random noise as dither adds a small amount of hiss, at about -96 dB, to the music. Improved dithering techniques, such as Sony's SBM (Super Bit Mapping), try to minimize the audibility of the dither: they use noise-shaping techniques to re-equalize the spectrum of the dither while retaining its average power, moving the noise away from the region where the ear is most sensitive (circa 3 kHz) into the high-frequency region (10-22 kHz).
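A sketch of the basic effect at a hypothetical 8-bit depth (real CDs are 16-bit and real dither is usually noise-shaped; this only shows how plain random dither preserves a sub-threshold signal on average):

```python
import random

def quantize(sample, bits):
    """Round a sample in [-1.0, 1.0] to the given bit depth."""
    scale = 2 ** (bits - 1) - 1
    return round(sample * scale) / scale

def quantize_dithered(sample, bits, rng):
    """Add about one LSB of random dither before quantizing, so that
    very small signals survive on average instead of truncating to 0."""
    scale = 2 ** (bits - 1) - 1
    return round(sample * scale + rng.uniform(-0.5, 0.5)) / scale

rng = random.Random(3)
tiny = 0.2 / 127  # a constant signal below half an LSB at 8 bits
plain = [quantize(tiny, 8) for _ in range(10000)]
dithered = [quantize_dithered(tiny, 8, rng) for _ in range(10000)]
print(sum(plain) / len(plain))        # 0.0: the signal was truncated away
print(sum(dithered) / len(dithered))  # close to the original tiny value
```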
Properly designed dither can greatly improve the perceived image or sound quality, but a poorly selected one can make the quality worse.
Sometimes the picture is already quantized and no additional information is available. If the resulting image has a lot of ugly bands (because of the loss of colors), adding random noise to the picture can actually improve its quality: the additional noise smooths the gaps between two successive shades of a color.
Computer-generated images, such as ray-traced images, often appear too sterile and artificial-looking to the human eye. Noise can be used to add small irregular variations to the surfaces of objects, or to slightly distort the textures used in 3D texture mapping, to make the rendered world look a bit like our own dirty, imperfect world. Noise can also be used to generate textures (imitating clouds, or natural materials such as wood or stone) or to control the flow of particles used in 3D graphics.
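A toy sketch of the texture idea using 2D value noise (a simpler relative of Perlin noise, chosen here purely as an illustration): random values on a coarse lattice, smoothly interpolated in between. Summing a few layers of this at different scales gives cloud-like patterns.

```python
import random

def smoothstep(t):
    """Smooth interpolation weight: flat at t = 0 and t = 1."""
    return t * t * (3.0 - 2.0 * t)

def value_noise(width, height, grid=4, seed=None):
    """Toy 2D value noise: random values in [0, 1) on a (grid+1)^2
    lattice, bilinearly interpolated (with smoothstep) between them."""
    rng = random.Random(seed)
    lattice = [[rng.random() for _ in range(grid + 1)]
               for _ in range(grid + 1)]
    out = []
    for y in range(height):
        fy = y * grid / height
        gy, ty = int(fy), smoothstep(fy - int(fy))
        row = []
        for x in range(width):
            fx = x * grid / width
            gx, tx = int(fx), smoothstep(fx - int(fx))
            top = lattice[gy][gx] + (lattice[gy][gx + 1] - lattice[gy][gx]) * tx
            bot = lattice[gy + 1][gx] + (lattice[gy + 1][gx + 1] - lattice[gy + 1][gx]) * tx
            row.append(top + (bot - top) * ty)
        out.append(row)
    return out
```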
Claude Shannon initiated information theory and noiseless source coding by publishing his ground-breaking paper "A Mathematical Theory of Communication" in 1948. One of his key ideas was the concept of entropy, the information content of a message below which it cannot be compressed losslessly. Entropy can be thought of as a measure of the unpredictability of a message: if a message is very unpredictable, it has high entropy. Shannon showed that for a given message source and channel there is a way of variable-rate block coding where the average code length approaches the entropy. The problem was to find such a coding.
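A sketch of the empirical entropy of a message, computed from its symbol frequencies as H = sum p log2(1/p):

```python
import math
from collections import Counter

def entropy_bits(message):
    """Empirical Shannon entropy of a message, in bits per symbol."""
    counts = Counter(message)
    n = len(message)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

print(entropy_bits("aaaaaaaa"))  # 0.0  (fully predictable)
print(entropy_bits("abababab"))  # 1.0  (one bit per symbol)
```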
Quite quickly, Shannon and Robert M. Fano devised a noiseless variable-rate block code, called Shannon-Fano coding. This method is not optimal, but it approaches optimality as the number of different messages approaches infinity. In 1952 a graduate student of Fano's, David Huffman, invented and published an algorithm, now called Huffman coding, which is optimal under certain requirements. Huffman coding is a variable-rate prefix code, where more common symbols are assigned shorter code words.
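A compact sketch of the algorithm using a priority queue (names are illustrative): repeatedly merge the two least frequent subtrees; each symbol's code word is its path from the root.

```python
import heapq
import itertools
from collections import Counter

def huffman_codes(message):
    """Build a Huffman code for a message. Heap items carry a weight,
    a unique tie-breaker, and a {symbol: code} map for their subtree."""
    tiebreak = itertools.count()
    freq = Counter(message)
    heap = [(w, next(tiebreak), {sym: ""}) for sym, w in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)  # two least frequent subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, next(tiebreak), merged))
    return heap[0][2]

codes = huffman_codes("abracadabra")
# 'a' is the most frequent symbol, so it gets the shortest code word
```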
Later on, various methods of arithmetic coding were invented, which approach the entropy even better than Huffman coding. Jacob Ziv and Abraham Lempel published, in 1977-1978, two methods of noiseless source coding which are the basis of almost all modern lossless data compression methods. These methods take advantage of repeated patterns in the data.
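A toy sketch in the spirit of the 1978 method (LZ78), simplified for illustration rather than a verbatim rendering of the published algorithm: each output pair points back at a previously seen phrase, so repeated patterns shrink.

```python
def lz78_compress(text):
    """Toy LZ78: emit (dictionary_index, next_char) pairs, growing the
    phrase dictionary as the input is scanned."""
    dictionary = {"": 0}
    phrase, out = "", []
    for ch in text:
        if phrase + ch in dictionary:
            phrase += ch  # keep extending a known phrase
        else:
            out.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:  # flush a trailing phrase that ended mid-match
        out.append((dictionary[phrase[:-1]], phrase[-1]))
    return out

def lz78_decompress(pairs):
    """Rebuild the text by replaying the phrase dictionary."""
    phrases = [""]
    for index, ch in pairs:
        phrases.append(phrases[index] + ch)
    return "".join(phrases[1:])

packed = lz78_compress("abracadabra abracadabra abracadabra")
assert lz78_decompress(packed) == "abracadabra abracadabra abracadabra"
```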
Foley James D. & van Dam Andries & Feiner Steven K. & Hughes John F.
Computer Graphics - Principles and Practice (Second Edition in C)
Horowitz Paul & Hill Winfield
The Art of Electronics, Second Edition
Cambridge University Press, 1989
Yli-Harja Olli & Mölsä Jarmo (editors)
Tiedon kompressoinnin seminaari (Seminar on Data Compression)
Huffman-koodauksen alkutaival (The early days of Huffman coding), pages 5-11
Tietojenkäsittelytekniikan laitos (Laboratory of Information Processing Science), TKK-C77, ISBN 951-22-2930-7, 1995
New 20-bit High Resolution Technology
Katz Bob (Digital Domain)
The Secrets of Dither
Kieffer John C.
History of Source Coding
IEEE Information Theory Society
Reflections of Chaos in Music and Art
Art@Science, pages 183-192
edited by Christa Sommerer and Laurent Mignonneau
Sound Check - The Basics of Sound and Sound Systems
Hal Leonard Corporation, 1994
Peitgen Heinz-Otto & Saupe Dietmar
The Science of Fractal Images
Springer-Verlag New York, 1988
Dithering Fully Explained
found in the usenet newsgroup news:comp.graphics.algorithms