Bit

The bit is the most basic unit of information in computing and digital communication. The name is a portmanteau of binary digit.[1] The bit represents a logical state with one of two possible values. These values are most commonly represented as either "1" or "0", but other representations such as true/false, yes/no, on/off, or +/− are also widely used.

A bit may be physically implemented with any two-state device. The relation between these values and the physical states of the underlying storage or device is a matter of convention, and different assignments may be used even within the same device or program.

A contiguous group of binary digits is commonly called a bit string, a bit vector, or a single-dimensional (or multi-dimensional) bit array. A group of eight bits is called one byte, but historically the size of the byte is not strictly defined.[2] Frequently, half, full, double and quadruple words consist of a number of bytes which is a low power of two. A string of four bits is usually called a nibble.
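As a minimal sketch of these groupings, the following Python snippet (the variable names are illustrative, not from the article) extracts the individual bits of one byte and splits it into its two nibbles:

```python
# One byte (8 bits), written as a binary literal
value = 0b10110100

# Individual bits, most significant first, recovered by shifting and masking
bits = [(value >> i) & 1 for i in range(7, -1, -1)]
# bits == [1, 0, 1, 1, 0, 1, 0, 0]

# A nibble is a group of four bits: split the byte into its two halves
high_nibble = value >> 4     # 0b1011 == 11
low_nibble = value & 0x0F    # 0b0100 == 4
```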

In information theory, one bit is the information entropy of a random binary variable that is 0 or 1 with equal probability,[3] or the information that is gained when the value of such a variable becomes known.[4][5] As a unit of information or negentropy, the bit is also known as a shannon,[6] named after Claude E. Shannon. As a measure of the length of a digital string that is encoded as symbols over a 0-1 (binary) alphabet, the bit has been called a binit,[7] but this usage is now rare.[8]
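The definition above can be made concrete with a short sketch of the binary entropy function (the function name here is an assumption for illustration): a variable that is 0 or 1 with equal probability carries exactly one bit, while a biased variable carries less.

```python
import math

def entropy_bits(p):
    """Shannon entropy, in bits, of a binary variable that is 1 with probability p."""
    if p in (0.0, 1.0):
        return 0.0  # a certain outcome conveys no information
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

entropy_bits(0.5)   # a fair coin: exactly 1 bit
entropy_bits(0.9)   # a biased coin: less than 1 bit (about 0.469)
```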

In data compression, the goal is to find a shorter representation for a string, so that it requires fewer bits of storage; the string must be "compressed" before storage and then (generally) "decompressed" before it is used in a computation. The field of algorithmic information theory is devoted to the study of the "irreducible information content" of a string (i.e. its shortest-possible representation length, in bits), under the assumption that the receiver has minimal a priori knowledge of the method used to compress the string.
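The compress-then-decompress round trip can be sketched with Python's standard zlib module (the choice of zlib and the sample string are illustrative assumptions, not part of the article):

```python
import zlib

# A highly repetitive string compresses well
text = b"abcabcabcabcabcabcabcabcabcabc"
compressed = zlib.compress(text)

# The compressed form needs fewer bytes (hence fewer bits) to store...
assert len(compressed) < len(text)

# ...but must be decompressed before the original content can be used
assert zlib.decompress(compressed) == text
```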

The symbol for the binary digit is either "bit", per the IEC 80000-13:2008 standard, or the lowercase character "b", per the IEEE 1541-2002 standard. Use of the latter may create confusion with the capital "B" which is the international standard symbol for the byte.
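The practical consequence of confusing "b" (bit) and "B" (byte) is a factor-of-eight error, as this small sketch shows (the helper name is hypothetical):

```python
def bits_to_bytes(n_bits):
    """Convert a count of bits (symbol "b") to bytes (symbol "B"), at 8 bits per byte."""
    return n_bits / 8

# A "100 Mb/s" (megabit-per-second) link moves only 12.5 MB/s (megabytes per second)
bits_to_bytes(100)  # -> 12.5
```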

  1. ^ Cite error: The named reference Mackenzie_1980 was invoked but never defined.
  2. ^ Cite error: The named reference Bemer_2000 was invoked but never defined.
  3. ^ Cite error: The named reference Anderson_2006 was invoked but never defined.
  4. ^ Cite error: The named reference Haykin_2006 was invoked but never defined.
  5. ^ Cite error: The named reference IEEE_260 was invoked but never defined.
  6. ^ Cite error: The named reference Rowlett was invoked but never defined.
  7. ^ Breipohl, Arthur M. (1963-08-18). Adaptive Communication Systems. University of New Mexico. p. 7. Retrieved 2025-01-07.
  8. ^ "binit". The Free Dictionary. Retrieved 2025-01-07.

From Wikipedia, the free encyclopedia
