So why all this hassle of dealing with code pages instead of just using UTF-8, which as a Unicode transformation supports all of the world’s modern languages (and some not so modern ones)?
Unicode is necessary when a single file mixes two or more scripts that require different code pages. You can mix Thai and English, because TIS-620 also includes the ASCII characters. But you cannot mix Thai and Greek without Unicode, because Thai and Greek require different code pages.
But Unicode wastes bandwidth when you use only a single language. In TIS-620, the single-byte code page for Thai, every character takes up one byte. In UTF-8, Thai characters take up three bytes each. Many people think UTF-8 is efficient because ASCII characters take up only one byte. In reality, UTF-8 is only efficient if most of your file consists of ASCII. If half your file consists of HTML tags (in ASCII) and the other half is Thai body text, saving the file in UTF-8 makes it twice as large as TIS-620 would. For HTML files, which provide a reliable way to specify the encoding via the meta tag, this is simply a waste of bandwidth.
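You can verify these byte counts yourself. Here is a quick sketch in Python, whose standard library includes a TIS-620 codec; the Thai sample string is just an arbitrary greeting:

```python
# Compare the encoded size of Thai text in TIS-620 vs UTF-8.
thai = "สวัสดี"  # "hello" in Thai: 6 characters

print(len(thai.encode("tis-620")))  # 6 bytes: one byte per character
print(len(thai.encode("utf-8")))    # 18 bytes: three bytes per character

# A snippet that mixes ASCII markup with Thai body text
# nearly doubles in size under UTF-8.
html = "<p>" + thai + "</p>"        # 7 ASCII chars + 6 Thai chars
print(len(html.encode("tis-620")))  # 13 bytes
print(len(html.encode("utf-8")))    # 25 bytes
```

The ASCII tags cost one byte each in both encodings; only the Thai characters triple in size, which is exactly where the overhead comes from.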
It gets worse as the non-ASCII content of your file increases. If your file is half Thai (3 bytes per character in UTF-8) and half Chinese (3 or 4 bytes per character in UTF-8), even UTF-16 is a better choice than UTF-8. While UTF-16 is often seen as inefficient because it uses 2 bytes for ASCII characters, it also uses only 2 bytes for Thai and 2 bytes for most Chinese characters (but 4 bytes for supplementary characters, for which UTF-8 also uses 4 bytes).
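The same kind of measurement shows UTF-16 pulling ahead of UTF-8 for Thai and Chinese text. A sketch with arbitrary sample strings (the `utf-16-le` codec is used so the 2-byte byte-order mark that plain `utf-16` prepends does not skew the counts):

```python
# Compare UTF-8 and UTF-16 sizes for Thai, Chinese (BMP),
# and a supplementary-plane character.
samples = [
    ("Thai", "สวัสดี"),        # 6 Thai characters
    ("Chinese", "你好世界"),    # 4 BMP Chinese characters
    ("Supplementary", "𝄞"),    # U+1D11E, outside the BMP
]

for label, text in samples:
    utf8 = len(text.encode("utf-8"))
    utf16 = len(text.encode("utf-16-le"))  # -le variant omits the BOM
    print(f"{label}: UTF-8 {utf8} bytes, UTF-16 {utf16} bytes")
```

This prints 18 vs. 12 bytes for the Thai string and 12 vs. 8 for the Chinese one, while the supplementary character costs 4 bytes in both encodings, matching the trade-offs described above.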
Unicode certainly offers a lot of flexibility and takes away a lot of problems. But Unicode isn’t free. In situations where bandwidth is at a premium, and only one script (plus ASCII) is used, using a legacy character set can make a lot of sense.