A byte is defined as 8 bits and is typically the smallest addressable unit of memory, since 8 bits was historically the amount needed to encode a single character. Because a byte holds 8 bits, it can represent 2^8 = 256 distinct values (0 – 255). This unit of memory is the basis of larger units such as the megabyte and gigabyte.
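As a quick illustration in C (a minimal sketch, assuming a typical platform where a byte is 8 bits; UCHAR_MAX comes from the standard limits.h header):

```c
#include <stdio.h>
#include <limits.h>

int main(void) {
    /* An unsigned char is one byte; with 8 bits it holds the
       256 values 0 through 255 (UCHAR_MAX). */
    printf("values per byte: %d\n", UCHAR_MAX + 1); /* 256 */

    /* Incrementing past the maximum wraps back around to 0. */
    unsigned char b = UCHAR_MAX; /* 255 */
    b++;
    printf("255 + 1 wraps to: %d\n", b); /* 0 */
    return 0;
}
```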
Technipages Explains Byte
A byte is officially defined by ISO and IEC standards as 8 bits. Historically, the byte was the number of bits required to encode a single character of text on a given computer, but that definition proved too loose, with different systems using bytes of different lengths. ASCII, an early text standard, encodes its characters in 7 bits, and various implementations of C and C++ have used bytes of 8, 9, 16, 32, or 36 bits, since those languages only require a byte to be "at least 8 bits".
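This "at least 8 bits" guarantee is visible directly in C, where the limits.h header exposes the platform's byte width as the CHAR_BIT constant; a minimal sketch (on mainstream hardware it prints 8):

```c
#include <stdio.h>
#include <limits.h>

int main(void) {
    /* CHAR_BIT is the number of bits in a byte on this platform.
       The C standard only guarantees it is at least 8. */
    printf("bits per byte: %d\n", CHAR_BIT);
    printf("sizeof(char):  %zu\n", sizeof(char)); /* always 1 byte by definition */
    return 0;
}
```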
The byte is denoted by the symbol B, often prefixed with standard SI units to form multiples such as KB, MB, and GB: one thousand, one million, and one billion bytes respectively. There is continued confusion as to exactly what these multiples mean in terms of bytes. The confusion comes from the fact that computers operate on and store data in binary, and the 1000 of "kilo" is not a power of 2, so it does not fit neatly into that binary structure; the nearest power of 2 is 1024. To resolve this, binary prefixes have been officially defined: kibi (1024), mebi (1024^2), and gibi (1024^3), written KiB, MiB, and GiB respectively. The scale continues up to yotta and yobi (1000^8 vs 1024^8). While these binary units see some usage in technical environments, they have not caught on more widely.
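The practical difference is easy to see in a short C sketch (the "1 TB drive" figure is illustrative; drive makers typically advertise in SI units while operating systems often report binary units):

```c
#include <stdio.h>

int main(void) {
    /* SI (decimal) vs IEC (binary) multiples of the byte. */
    unsigned long long kb  = 1000ULL;               /* kilobyte, KB  */
    unsigned long long kib = 1024ULL;               /* kibibyte, KiB */
    unsigned long long gb  = 1000ULL * 1000 * 1000; /* gigabyte, GB  */
    unsigned long long gib = 1024ULL * 1024 * 1024; /* gibibyte, GiB */

    /* A drive sold as "1 TB" (10^12 bytes) shows up as about 931 GiB. */
    unsigned long long tb = 1000ULL * 1000 * 1000 * 1000;

    printf("1 KB = %llu bytes, 1 KiB = %llu bytes\n", kb, kib);
    printf("1 GB = %llu bytes, 1 GiB = %llu bytes\n", gb, gib);
    printf("1 TB drive = %.1f GiB\n", (double)tb / gib);
    return 0;
}
```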
Common Uses of Byte
- I just bought a terabyte hard drive.
- My download speed is 10 Megabytes per second.
- A single byte is historically defined as the number of bits required to encode one character.
Common Misuses of Byte
- I’ve had eight bytes of pizza.
- This huge download took a byte out of my data plan.