Unicode in Microsoft Windows
Microsoft was one of the first companies to implement Unicode in its products. Windows NT was the first operating system to use "wide characters" in system calls. It used the UCS-2 encoding scheme at first, and was upgraded to UTF-16 starting with Windows 2000, allowing representation of the supplementary planes via surrogate pairs. Nevertheless, Microsoft did not support UTF-8 as a locale code page until 2017. In May 2019, Microsoft reversed course and started recommending that software use UTF-8 exclusively.[1]
In various Windows families
Windows NT-based systems
Current Windows versions, and all versions back to Windows XP and the earlier Windows NT line (3.x, 4.0), ship with system libraries that support two types of string encoding: 16-bit "Unicode" (UTF-16 since Windows 2000) and a (sometimes multibyte) encoding called the "code page" (sometimes incorrectly referred to as the "ANSI" code page). The 16-bit functions have names suffixed with 'W' (from "wide"), such as SetWindowTextW. Code-page-oriented functions use the suffix 'A' for "ANSI", such as SetWindowTextA (some other conventions were used for APIs that were copied from other systems, such as _wfopen/fopen or wcslen/strlen). This split was necessary because many languages, including C, did not provide a clean way to pass both 8-bit and 16-bit strings to the same function.
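A minimal sketch of the split, in C, assuming an existing window handle hwnd and a hypothetical helper name set_caption: both calls set the same window caption, one from a UTF-16 ("wide") literal and one from an 8-bit string interpreted in the current code page.

    #include <windows.h>

    void set_caption(HWND hwnd)
    {
        SetWindowTextW(hwnd, L"Caption");   /* 16-bit (UTF-16) "wide" interface */
        SetWindowTextA(hwnd, "Caption");    /* 8-bit interface, interpreted in the current code page */
    }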
'A' functions are implemented as wrappers that translate the text from the current code page to UTF-16 and then call the corresponding 'W' function. 'A' functions that return strings perform the opposite conversion, turning characters that do not exist in the current locale into '?'.
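A rough illustration of how such an 'A' wrapper behaves (not the actual system implementation; MySetWindowTextA is a hypothetical name): the 8-bit argument is converted from the current code page to UTF-16, and the corresponding 'W' function is called with the result.

    #include <windows.h>
    #include <stdlib.h>

    BOOL MySetWindowTextA(HWND hwnd, const char *text)
    {
        /* Ask for the required length, then translate the current-code-page
           string to UTF-16. */
        int len = MultiByteToWideChar(CP_ACP, 0, text, -1, NULL, 0);
        if (len == 0)
            return FALSE;
        wchar_t *wide = (wchar_t *)malloc(len * sizeof(wchar_t));
        if (wide == NULL)
            return FALSE;
        MultiByteToWideChar(CP_ACP, 0, text, -1, wide, len);

        /* Forward to the corresponding 'W' function. */
        BOOL ok = SetWindowTextW(hwnd, wide);
        free(wide);
        return ok;
    }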
Microsoft attempted to support Unicode "portably" by providing a "UNICODE" switch to the compiler, which switches unsuffixed "generic" calls from the 'A' to the 'W' interface and converts all string constants to "wide" UTF-16 versions.[2][3] In practice this does not make code portable, because it does nothing for 8-bit (e.g. UTF-8) strings outside of string constants; code that passes such strings to the generic calls, for instance to open files, simply fails to compile.
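A sketch of the "generic" interface controlled by that switch: with UNICODE (and the CRT's matching _UNICODE) defined, TCHAR is a wide character, TEXT("...") produces an L"..." literal, and SetWindowText expands to SetWindowTextW; without them, the 8-bit 'A' variants are used.

    #include <windows.h>
    #include <tchar.h>

    void set_caption_generic(HWND hwnd)
    {
        const TCHAR *caption = TEXT("Caption");  /* narrow or wide, depending on UNICODE */
        SetWindowText(hwnd, caption);            /* macro resolving to SetWindowTextW or SetWindowTextA */
    }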
Earlier, and independent of the "UNICODE" switch, Windows also provided the Multibyte Character Sets (MBCS) API switch.[4] This changes some functions that do not work correctly in MBCS, such as strrev, to MBCS-aware ones such as _mbsrev.[5][6]
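A small sketch of the difference, assuming the program runs with a multibyte code page: _strrev reverses a string byte by byte and can corrupt multibyte characters, while the MBCS-aware _mbsrev reverses whole characters.

    #include <mbstring.h>

    void reverse_in_place(char *s)
    {
        /* MBCS-aware reversal; _strrev(s) would reverse individual bytes. */
        _mbsrev((unsigned char *)s);
    }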
Microsoft documentation uses the term "Unicode" to mean the 16-bit (UTF-16) encoding, i.e. "not an 8-bit encoding".
Windows CE
In Windows CE, UTF-16 was used almost exclusively, with the 'A' API mostly missing.[7] A limited set of ANSI APIs is available in Windows CE 5.0, for use on a reduced set of locales that may be selectively built onto the runtime image.[8]
Windows 9x
In 2001, Microsoft released a special supplement for its older Windows 9x systems. It includes a dynamic-link library, unicows.dll (only 240 KB), containing the 16-bit flavor (the functions with the letter 'W' on the end) of all the basic Windows API functions. It is merely a translation layer: SetWindowTextW will simply convert its input using the current code page and call SetWindowTextA.
UTF-8
Microsoft Windows has a code page designated for UTF-8, code page 65001.[9] Prior to Windows 10 insider build 17035 (November 2017),[10] it was impossible to set the locale code page to 65001, leaving this code page available only for (a) explicit conversion functions such as MultiByteToWideChar and (b) the Win32 console command chcp 65001, which translates stdin/stdout between UTF-8 and UTF-16. This meant that "narrow" functions, in particular fopen (which opens files), could not be called with UTF-8 strings; in fact, there was no way to open all possible files using fopen no matter what the locale was set to or what bytes were put in the string, because none of the available locales could produce all possible UTF-16 characters. This problem also applied to every other API that takes or returns 8-bit strings, including Windows ones such as SetWindowText.
Microsoft said that a UTF-8 locale might break some functions, as they were written to assume that multibyte encodings use no more than 2 bytes per character; thus code pages with more bytes per character, such as UTF-8 (and also GB 18030, cp54936), could not be set as the locale.[11]
On all modern non-Windows platforms, the file-name string passed to fopen is effectively UTF-8. This produces an incompatibility between other platforms and Windows. The normal work-around is to add Windows-specific code that converts UTF-8 to UTF-16 using MultiByteToWideChar and calls the "wide" function instead of fopen.[12] Another popular work-around is to convert the name to its 8.3 filename equivalent; this is necessary when the fopen call is inside a library function that takes a string filename, so that calling a different function is not possible. There were also proposals to add new APIs to portable libraries such as Boost to do the necessary conversion, with new functions for opening and renaming files that would pass filenames through unchanged on Unix but translate them to UTF-16 on Windows. Such a library, Boost.Nowide,[13] was accepted into Boost[14] and was included in the 1.73 release. This allows code to be "portable", but requires just as many code changes as calling the wide functions.
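A sketch of that work-around, using a hypothetical helper named fopen_utf8: on Windows the UTF-8 name is converted to UTF-16 with MultiByteToWideChar and passed to the "wide" _wfopen, while on other platforms the name is passed through unchanged.

    #include <stdio.h>

    #ifdef _WIN32
    #include <windows.h>

    FILE *fopen_utf8(const char *name, const char *mode)
    {
        /* Fixed-size buffers keep the sketch short; real code should size
           them dynamically. */
        wchar_t wname[MAX_PATH], wmode[16];
        if (!MultiByteToWideChar(CP_UTF8, 0, name, -1, wname, MAX_PATH) ||
            !MultiByteToWideChar(CP_UTF8, 0, mode, -1, wmode, 16))
            return NULL;
        return _wfopen(wname, wmode);
    }
    #else
    FILE *fopen_utf8(const char *name, const char *mode)
    {
        return fopen(name, mode);   /* file names are effectively UTF-8 already */
    }
    #endif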
In April 2018, with insider build 17035 (nominal build 17134) for Windows 10, a "Beta: Use Unicode UTF-8 for worldwide language support" checkbox appeared for setting the locale code page to UTF-8.[a] This allows calling "narrow" functions, including fopen and SetWindowTextA, with UTF-8 strings. In May 2019, Microsoft added the ability for a program to set the code page to UTF-8 itself, and started recommending that all software do this and use UTF-8 exclusively.[1]
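A minimal sketch of using UTF-8 with the narrow APIs, assuming the process has opted in to the UTF-8 code page (for example through the manifest mechanism described in the cited documentation): once GetACP() reports CP_UTF8, 'A' functions and fopen accept UTF-8 directly.

    #include <stdio.h>
    #include <windows.h>

    void use_utf8_directly(HWND hwnd)
    {
        if (GetACP() == CP_UTF8) {    /* process code page is UTF-8 */
            /* "café" spelled as UTF-8 bytes, passed straight to a narrow API. */
            SetWindowTextA(hwnd, "caf\xC3\xA9");
            FILE *f = fopen("caf\xC3\xA9.txt", "rb");   /* UTF-8 file name */
            if (f) fclose(f);
        }
    }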
Programming platforms
Microsoft's compilers often fail to produce UTF-8 string constants from UTF-8 source files. The most reliable method is to turn off UNICODE, not mark the input file as UTF-8 (i.e. not use a BOM), and arrange for the string constants to contain the UTF-8 bytes. If a BOM is added, a Microsoft compiler will interpret the strings as UTF-8, convert them to UTF-16, and then convert them back into the current locale, thus destroying the UTF-8.[15] Without a BOM and using a single-byte locale, Microsoft compilers leave the bytes of a quoted string unchanged.
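One way to guarantee UTF-8 content in a string constant, regardless of how the compiler interprets the source file, is to spell out the UTF-8 bytes with escape sequences; the snippet below (an illustration of the approach described above, not the only option) writes "café" with "é" encoded as the UTF-8 bytes 0xC3 0xA9.

    /* The escape sequences guarantee these exact bytes in the compiled string,
       independent of source-file encoding or BOM handling. */
    const char *greeting_utf8 = "caf\xC3\xA9";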
See also
- Bush hid the facts, a text encoding mojibake
Notes
- [a] Found under the Control Panel "Region" entry, on the "Administrative" tab, via the "Change system locale" button.
References
- "Use the Windows UTF-8 code page - UWP applications". docs.microsoft.com. Retrieved 2020-06-06.
As of Windows Version 1903 (May 2019 Update), you can use the ActiveCodePage property in the appxmanifest for packaged apps, or the fusion manifest for unpackaged apps, to force a process to use UTF-8 as the process code page. [..]
CP_ACP
equates toCP_UTF8
only if running on Windows Version 1903 (May 2019 Update) or above and the ActiveCodePage property described above is set to UTF-8. Otherwise, it honors the legacy system code page. We recommend usingCP_UTF8
explicitly. - "Unicode in the Windows API". Retrieved 7 May 2018.
- "Conventions for Function Prototypes (Windows)". MSDN. Retrieved 7 May 2018.
- "Support for Multibyte Character Sets (MBCSs)". Retrieved 2020-06-15.
- "Double-byte Character Sets". MSDN. 2018-05-31. Retrieved 2020-06-15.
our applications use DBCS Windows code pages with the "A" versions of Windows functions.
- "_strrev, _wcsrev, _mbsrev, _mbsrev_l". Microsoft Docs.
- "Differences Between the Windows CE and Windows NT Implementations of TAPI". MSDN. Retrieved 7 May 2018.
Windows CE is Unicode-based. You might have to recompile source code that was written for a Windows NT-based application.
- "Code Pages (Windows CE 5.0)". Microsoft Docs. Retrieved 7 May 2018.
- "Code Page Identifiers (Windows)". msdn.microsoft.com.
- "Windows10 Insider Preview Build 17035 Supports UTF-8 as ANSI". Hacker News. Retrieved 7 May 2018.
- MSDN forums
- "UTF-8 in Windows". Stack Overflow. Retrieved July 1, 2011.
- "Boost.Nowide".
- "Boost mailing list".
- UTF-8 Everywhere FAQ: How do I write UTF-8 string literal in my C++ code?