2.1.2 Data Representation

DATA REPRESENTATION

BINARY DIGIT

Computers recognize only two discrete states: on and off. These states can be represented by two digits, 0 and 1. In the binary system, each 0 or 1 is called a bit.

A bit is the smallest unit of data a computer can process. The word bit is short for binary digit. The binary system has a base of 2 and uses only these two digits, 0 and 1. Combinations of 0s and 1s represent larger numbers.
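
To make this concrete, the short Python sketch below (an illustration added here, not part of the original material) reads a few strings of 0s and 1s as base-2 numbers:

    for bits in ["0", "1", "10", "101", "1101"]:
        value = int(bits, 2)                  # interpret the string of 0s and 1s in base 2
        print(bits, "(binary) =", value, "(decimal)")

    # prints: 0 = 0, 1 = 1, 10 = 2, 101 = 5, 1101 = 13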


BIT

A bit is the smallest unit of data that the computer can process. Bit is short for binary digit. A bit is written as either 1 or 0, the two digits of the binary system, and these correspond to the states on and off, true and false, or yes and no.
All digital data use the binary system, whether the information is a letter, a digit or a special character.

BYTE

A byte is a unit of information built from bits: one byte equals 8 bits grouped together as a unit. A byte provides enough different combinations of 0s and 1s to represent 256 individual characters.
One byte represents a single character such as the number 3, the letter b or the $ symbol. Bits and bytes are the basis for representing all meaningful information and programs on computers.
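
As a quick check (an illustrative Python sketch, not part of the original text), 8 bits give 2 ** 8 = 256 distinct patterns, and each pattern can stand for one character:

    bits_per_byte = 8
    print(2 ** bits_per_byte)            # 256 distinct 8-bit patterns
    print(format(98, '08b'), chr(98))    # 01100010 b -- one pattern, one character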

CHARACTER

8 bits = 1 byte

One byte represents one character such as A, 7, 9 or +. Eight bits grouped together form a unit called a byte, and a byte provides enough different combinations of 0s and 1s to represent 256 individual characters.


For example, the capital letter F is represented by the binary code 01000110, which the computer system can understand. Eight bits grouped together as a unit are called a byte, and a byte represents a single character in the computer.
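
This example can be verified in Python (an illustrative sketch, not from the original text): the letter F has ASCII code 70, which is 01000110 in binary.

    code = ord('F')                 # numeric (ASCII) code of the character F -> 70
    print(format(code, '08b'))      # 01000110 -- the 8-bit pattern stored in one byte
    print(chr(0b01000110))          # F -- decoding the same pattern back to a character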



CHARACTER CODES

There are three character coding schemes used to represent characters: ASCII, EBCDIC and Unicode. Each byte contains eight bits, so a byte provides enough different combinations of 0s and 1s to represent 256 characters.
The combinations of 0s and 1s are defined by patterns, and a set of such patterns is called a coding scheme. The 256-character capacity of ASCII and EBCDIC is too small to handle the characters used by other languages such as Arabic, Japanese and Chinese.

The Unicode coding scheme is designed to solve this problem. It uses two bytes (16 bits) to represent one character, which allows more than 65,000 different characters. This is enough to cover the world's languages.
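
For illustration (a Python sketch under the assumption that characters are encoded with a 16-bit Unicode encoding such as UTF-16), a character that falls outside the 256-character range of ASCII and EBCDIC still fits in two bytes:

    ch = '中'                              # a Chinese character, outside the 0-255 range
    print(ord(ch))                         # 20013 -- its Unicode code point
    print(ch.encode('utf-16-be'))          # b'\x4e\x2d' -- exactly two bytes (16 bits)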

