> Windows > about character translation
When I input data via the keyboard, the OS must translate the key signal into the proper character.
How is it done? From my understanding, it goes like this:
- the keyboard interrupts the CPU
- the CPU reads a USB/PS2 register which is mapped to the real keyboard hardware
- the CPU now has a scan code
- the scan code is translated by the OS into a virtual key code, depending on the driver (keyboard layout)
- a virtual key code is a universal name for a key
- now the OS takes the state of the keyboard and translates the virtual key code into a code point using a code page, or holds it if it is a dead key, waiting for another V-code
- how does Windows manage code points? How are they stored in a code page? Is it UTF-16, or something else?
- now the translated character is sent to something (a sketch of this chain follows below)
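A minimal sketch of that chain, replayed in user mode with the Win32 calls MapVirtualKey and ToUnicode (the scan code 0x1E and the faked Shift state are example inputs only; ToUnicode's dead-key behaviour matches the "holds" step above):

Code:
#include <windows.h>
#include <stdio.h>

/* scan code -> virtual key (MapVirtualKey),
   virtual key + keyboard state -> character (ToUnicode);
   ToUnicode returns -1 for a dead key held for the next
   keystroke, 0 for no translation, >0 for produced units */
int main(void)
{
    UINT scan = 0x1E;                       /* 'A' on a US layout */
    UINT vk = MapVirtualKeyW(scan, MAPVK_VSC_TO_VK);

    BYTE state[256] = {0};
    state[VK_SHIFT] = 0x80;                 /* pretend Shift is held */

    WCHAR buf[4];
    int n = ToUnicode(vk, scan, state, buf, 4, 0);
    if (n > 0)
        printf("got %d UTF-16 unit(s), first is U+%04X\n", n, buf[0]);
    else if (n == -1)
        printf("dead key, held for the next keystroke\n");
    else
        printf("no character for this key\n");
    return 0;
}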
WM_CHAR will always contain a UTF-16 character - is that true on both Windows 9x and NT?
When storing a file name in NTFS, what encoding is used? I guess it's UTF-16, am I right? What about FAT in Windows 9x/NT? Is it also UTF-16, or something else with a specified code page?
Where do I find such knowledge about my questions? I believe it's inefficient to ask every time I don't understand something.
22 Dec 2010, 11:30
NTFS internally stores filenames as raw 16-bit Unicode code units (effectively UTF-16, though the file system does not validate surrogate pairs).
FAT short (8.3) names use single-byte characters in the OEM code page; VFAT long filenames (Windows 95 and later) are stored as UTF-16 as well.
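As an illustration (the file name here is just an example): a name passed to a *W file API is already UTF-16, and those 16-bit units are what NTFS, or a VFAT long-name entry, records; only the generated 8.3 alias falls back to the OEM code page. Note that on 9x the W file APIs are mostly unimplemented stubs unless the MSLU layer is installed.

Code:
#include <windows.h>

int main(void)
{
    /* example name containing U+017C (LATIN SMALL LETTER Z WITH DOT ABOVE) */
    HANDLE h = CreateFileW(L"\u017Cyrafa.txt", GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h != INVALID_HANDLE_VALUE) {
        /* NTFS (and a VFAT long-name entry) stores these 16-bit
           units as-is; an 8.3 alias generated for the file uses
           the OEM code page and may lose the non-ASCII letter */
        CloseHandle(h);
    }
    return 0;
}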
WM_CHAR delivers a TCHAR (its size depends on whether the A or W version of the API was used - see the sketch at the end of this post).
NT/2K/XP/++ use Unicode (UTF-16) for all internal functions and are said to be more efficient when running Unicode apps.
95/98/ME use ANSI (code-page) strings for all internal functions.
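A sketch of what that means for WM_CHAR, assuming the window class was registered with RegisterClassW (an A-registered class would receive a code-page byte instead):

Code:
#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_CHAR: {
        /* for a W-registered window, wParam holds one UTF-16 code
           unit; characters outside the BMP arrive as two WM_CHAR
           messages carrying a surrogate pair */
        WCHAR ch = (WCHAR)wParam;
        (void)ch;                   /* consume the character here */
        return 0;
    }
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProcW(hwnd, msg, wParam, lParam);
}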
22 Dec 2010, 12:01
And what about code points in code pages?
When I translate a character from one code page to another, it must go through a code point list. What is used there?
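On Windows the "code point list" is in effect Unicode itself: the usual route is to pivot through UTF-16 with MultiByteToWideChar and WideCharToMultiByte. A minimal sketch - code pages 852 and 1250 are picked only as examples:

Code:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    char in[] = "example text";     /* pretend this is in OEM code page 852 */
    WCHAR wide[64];
    char out[64];

    /* source code page -> UTF-16 (the Unicode pivot) */
    if (MultiByteToWideChar(852, 0, in, -1, wide, 64) == 0)
        return 1;

    /* UTF-16 -> target code page; code points the target page
       cannot represent become the default character (usually '?') */
    if (WideCharToMultiByte(1250, 0, wide, -1, out, 64, NULL, NULL) == 0)
        return 1;

    printf("%s\n", out);
    return 0;
}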
22 Dec 2010, 16:35