Code and data are all stored in memory as binary; in particular, the only difference between a number and any other value stored in memory is the way those values are interpreted. To store text, we therefore need to assign numerical values to every symbol.
The most naïve way to store text in a computer is to assign a number to every symbol that can appear in your text. This strategy underlies all pre-Unicode systems that I know of.
The ASCII system underlies the encoding of many Latin-based scripts. Every symbol in a string of text is allocated one byte (a byte is 8 bits), which gives 2⁸ = 256 possible values; ASCII proper defines the first 128 of these, and its extended variants use the remaining values for additional symbols.
Many of these symbols are "control characters": rather than printing anything, they control the display in interactive systems. For example, the symbol with decimal value 8 is typically issued with "Control-H" or the backspace key; it is used to delete the previous character.
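As a quick illustration (the examples in this document use Python, purely for demonstration), these are the numeric values behind a few familiar characters:

```python
# Each ASCII character corresponds to a small integer value.
print(ord("A"))   # 65
print(ord("a"))   # 97
print(ord("0"))   # 48

# Decimal value 8 is the backspace control character ("Control-H").
print(repr(chr(8)))   # '\x08'

# The raw bytes of an ASCII-encoded string are exactly those values.
print(list("Hi!".encode("ascii")))   # [72, 105, 33]
```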
You can read more at the [Wikipedia article on ASCII].
There are many other encodings following the same pattern; popular ones for Russian are the KOI8 variants. The only way to correctly interpret binary values as text is if you already know the encoding.
Mistaking one encoding for another was a hugely common problem on the internet until fairly recently.
Further, if you wanted to mix encodings (for example, a Latin and a Cyrillic text), you generally needed special software and a way to indicate where the encoding changes.
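Here is a minimal sketch of the problem, decoding the same bytes with two different single-byte encodings (KOI8-R versus Latin-1, chosen just as an example pair):

```python
# The Russian word "мир" ("peace"/"world") encoded with KOI8-R.
data = "мир".encode("koi8_r")
print(list(data))                 # [205, 201, 210]

# Decoded with the correct encoding, we get the original text back.
print(data.decode("koi8_r"))      # мир

# Decoded with the wrong encoding, the same bytes become gibberish.
print(data.decode("latin-1"))     # ÍÉÒ
```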
Unicode was created to unify all encodings: it assigns a unique number to every symbol. Unicode currently has room for a bit over one million code points; symbols are allocated through an international organisation called the Unicode Consortium. Their goal is to make it possible to encode all human writing systems; many other things can also be encoded with Unicode: mathematics, card games, and emojis.
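For example, `ord` in Python gives the unique number (the "code point") assigned to a symbol, whatever script it comes from:

```python
# Unicode assigns one unique number (a "code point") to every symbol.
for ch in ["A", "é", "ж", "∑", "🂡", "😀"]:
    print(ch, hex(ord(ch)))
# A  0x41
# é  0xe9
# ж  0x436
# ∑  0x2211
# 🂡 0x1f0a1   (a playing-card symbol)
# 😀 0x1f600   (an emoji)
```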
There are several different data formats for Unicode; the simplest is UTF-32, in which every symbol is stored as a 32-bit (4-byte) number. Documents using single-byte encodings, when transcoded to UTF-32, therefore take four times as much space.
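A small sketch of that cost (note that Python's built-in "utf-32" codec also prepends a 4-byte byte-order mark):

```python
text = "hello"
# UTF-32 uses exactly four bytes per symbol, plus a 4-byte byte-order mark here.
print(len(text.encode("utf-32")))     # 24 = 4 (BOM) + 5 * 4
# Requesting an explicit byte order avoids the BOM.
print(len(text.encode("utf-32-le")))  # 20 = 5 * 4
```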
More popular are Unicode's variable-length encodings, in which some symbols receive shorter encodings than others. UTF-16 has 16-bit code units; symbols are either one or two code units long. UTF-16 is used internally by Windows, Java, and JavaScript.
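In Python terms, the variable length looks like this: common symbols take one 16-bit code unit, while others (such as emoji) take two, a so-called surrogate pair:

```python
# UTF-16: one or two 16-bit code units per symbol.
print(len("A".encode("utf-16-le")))   # 2 bytes -> one code unit
print(len("é".encode("utf-16-le")))   # 2 bytes -> one code unit
print(len("😀".encode("utf-16-le")))  # 4 bytes -> two code units (a surrogate pair)
```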
The most popular is UTF-8, which uses 8-bit code units; symbols are between one and four code units long. The single-unit codes were chosen to match ASCII; valid ASCII documents are thus automatically valid UTF-8 also.
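The same kind of check for UTF-8: ASCII characters keep their one-byte codes, while other symbols take two to four bytes:

```python
# Pure ASCII text: the UTF-8 bytes are identical to the ASCII bytes.
print("Hi!".encode("utf-8") == "Hi!".encode("ascii"))   # True

# Non-ASCII symbols take two, three, or four bytes.
print(len("é".encode("utf-8")))    # 2
print(len("€".encode("utf-8")))    # 3
print(len("😀".encode("utf-8")))   # 4
```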
UTF-8 is recommended for use on the web and in e-mail. Most modern systems are UTF-8 native, meaning that text in other encodings is converted to or from UTF-8 before processing. This document is written in UTF-8.