How does the event loop handle timing on a single-core CPU with multiple processes?

There are many ways I can think of asking this question. Let’s say we have a single-core CPU on which the OS is running two processes: process 1 is a Node application, and process 2 is something we don’t care about. Since there’s only a single core, the two processes have to share it, which means each process spends roughly half its time not executing.

Now, if we call setTimeout(callback, 1000) in process 1, will the callback execute after exactly 1000 real-world milliseconds, or will there be some unaccounted-for delay from the time process 1 spends not executing?
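For concreteness, this is the kind of drift I mean. Here I force it by blocking the loop inside the process itself (so this snippet doesn’t answer my question, it just shows the effect I’m asking whether OS scheduling also causes):

```javascript
// Observe setTimeout drift: the callback can fire later than requested.
// Here the lateness is forced by keeping the event loop busy; my question
// is whether OS-level descheduling of the whole process adds similar drift.
const requested = 100; // ms
const start = Date.now();
let observedDelay = null;

setTimeout(() => {
  observedDelay = Date.now() - start;
  console.log(`requested ${requested} ms, fired after ${observedDelay} ms`);
}, requested);

// Block the loop synchronously for ~250 ms so the timer cannot fire on time.
const busyUntil = Date.now() + 250;
while (Date.now() < busyUntil) {} // synchronous work starves the timer
```

Running this, the callback reports well over the requested 100 ms, and it never fires early.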

If there’s no extra delay, how can the event loop know precisely when to invoke the callback? Does that mean the event loop schedules the timer against the hardware clock directly, rather than against a process-level abstraction such as loop iterations? And even then, how can the event loop guarantee it has CPU access at the moment the callback is due? Are events scheduled on a CPU-level queue that can prioritise certain processes at certain times? Or would the callback simply execute as soon as process 1 regains the CPU, even if that’s late?
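To illustrate the distinction I’m drawing between “clock-based” and “iteration-based” scheduling, here is a toy loop (purely my mental model, not Node’s or libuv’s actual code): each timer stores an absolute deadline, and every tick compares deadlines against the current time, so a loop that runs late simply fires callbacks late rather than early.

```javascript
// Toy "clock-based" event loop: timers hold absolute deadlines, and each
// iteration checks them against the current time. If the loop is stalled
// (or, in my question, the whole process is descheduled), callbacks fire
// late but never early, and no iteration counting is involved.
const timers = []; // entries: { deadline, callback }

function mySetTimeout(callback, ms) {
  timers.push({ deadline: Date.now() + ms, callback });
}

function runLoop() {
  while (timers.length > 0) {
    const now = Date.now();
    // Iterate backwards so splicing out expired timers is safe.
    for (let i = timers.length - 1; i >= 0; i--) {
      if (timers[i].deadline <= now) {
        const { callback } = timers.splice(i, 1)[0];
        callback();
      }
    }
  }
}
```

In this model, nothing guarantees the loop is running at the deadline; it only guarantees the callback won’t run before it.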

If there is an extra delay, however, what impact can it have on time-sensitive applications that require precise synchronisation, and how could we ensure a Node process has continuous access to a CPU core?
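For the last part, the closest thing I’ve found is pinning the process to a core at the OS level. The sketch below is Linux-specific and assumes `taskset` (from util-linux) is installed; `server.js` is just a placeholder name:

```shell
# Pin a Node process to CPU core 0 so the scheduler keeps it on that core
# (this reserves nothing: other processes can still run on core 0).
taskset -c 0 node -e 'console.log("running pinned to core 0")'

# For stronger guarantees, combine pinning with real-time FIFO priority so
# other processes rarely preempt it (requires root or CAP_SYS_NICE):
#   sudo chrt -f 50 taskset -c 0 node server.js
```

Even then, as far as I understand, the kernel can still preempt the process, so I’m unsure whether this actually gives *continuous* access or just reduces the contention.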