Multithreading explained
Modern applications and games place ever greater demands on hardware. Alongside sophisticated graphics and large memory requirements, processor speed remains one of the most important characteristics of a well-running computer. To keep up with these demands, hardware manufacturers regularly develop new CPU technologies and architectures. One such technology is multithreading, in which the processor works on multiple threads (that is, independent units of execution) more or less simultaneously. How can that work? Keep reading to find out.
What is multithreading?
To increase the speed of a processor core without raising its clock frequency, you can use multithreading to have the CPU work on several tasks at once - or, to be precise, on several threads at once. A thread is a sequence of programmed instructions that’s part of a larger process. Programs can be broken down into processes, and those processes in turn into individual threads. Every process consists of at least one thread.
Processes are usually executed sequentially - one process after the other. However, this can lead to lengthy tasks blocking the hardware, which is less than optimal: if another process needs to be executed, it has to wait its turn. With multithreading, multiple threads are processed more or less simultaneously. Truly simultaneous processing is still the exception, though modern hardware does make it possible.
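To make this concrete, here is a minimal sketch using Python’s built-in threading module. The function name and the one-second wait are invented for illustration; the point is that one process is split into three threads whose waiting times overlap instead of adding up.

```python
# A minimal sketch: one process broken into several threads. The
# "download_part" task simulates waiting on data (network, disk, ...)
# with time.sleep, so the three waits overlap.
import threading
import time

def download_part(part_id: int) -> None:
    print(f"Thread {part_id}: started")
    time.sleep(1)          # stands in for waiting on I/O
    print(f"Thread {part_id}: finished")

# One process, three threads.
threads = [threading.Thread(target=download_part, args=(i,)) for i in range(3)]
start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()               # wait for all threads to finish
print(f"Total: {time.perf_counter() - start:.1f} s (about 1 s rather than 3 s)")
```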
But even this so-called pseudo-simultaneity provides a boost in performance. The system organises and schedules threads so cleverly that the user perceives them as being processed at the same time. This form of simultaneity is not to be confused with what a multicore processor can do: if the system has multiple processor cores, several processes really do run in parallel.
To implement multithreading effectively, you’ll need properly prepared software. If developers don’t (or can’t) split their programs into multiple threads, the method is useless. Some gamers, for example, have noticed that performance actually suffers when multithreading is switched on. That’s usually because the games in question weren’t designed with multithreading in mind, so the system’s attempt to process multiple threads at once has a detrimental effect.
Goals of multithreading
The ultimate goal of multithreading is to increase the computing speed of a computer and thus its performance. To this end, CPU usage is optimised: rather than sticking with a process for a long time, even when it’s waiting on data, the system quickly switches to the next task. This leaves hardly any processor time unused.
At the same time, the system reacts more quickly to changes in priorities. If the user or an application suddenly needs another task handled, the processor can quickly dedicate itself to it, thanks to priority rankings and short threads.
The technology is primarily designed to speed up individual applications that consist of several processes and threads. Multiple tasks from the same software can be processed more or less in parallel. This is useful in video editing, for example: one scene can be rendered in the background while the user edits the next.
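As a rough illustration of that pattern (again in Python, with an invented render_scene function standing in for the real rendering work), a background thread can carry out a long job while the foreground thread stays free for the user:

```python
# Illustrative sketch of the video-editing example: a long-running
# background job runs in its own thread while the main thread keeps
# doing foreground work. Names and timings are invented.
import threading
import time

def render_scene(name: str) -> None:
    time.sleep(2)                       # pretend this is a long render
    print(f"Background: {name} rendered")

renderer = threading.Thread(target=render_scene, args=("scene_01",))
renderer.start()                        # rendering begins in the background

# Meanwhile the foreground thread keeps responding to the user.
for step in range(4):
    print(f"Foreground: editing scene_02, step {step}")
    time.sleep(0.5)

renderer.join()                         # make sure the render has finished
```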
With multithreading, chip manufacturers can speed up their CPUs without significantly increasing energy use. Raising the clock frequency generates extra heat that has to be dissipated at high cost; multithreading avoids this.
How does multithreading work?
Multithreading is the result of interactions between hardware and software. Programs and processes are broken down into individual threads, which are then processed in order to execute the program. We make the distinction between hardware multithreading and software multithreading.
Hardware
For hardware multithreading, the individual programs provide their processes as separate threads. The operating system manages these threads and decides when each one is sent to the CPU. The processor then tackles the threads either simultaneously or pseudo-simultaneously.
In practice, there are various implementations of hardware multithreading.
Switch on Event Multithreading (SoEMT)
Switch on Event Multithreading works with two threads, one in the foreground and one in the background. The change between the two (known as a context switch) is triggered by events such as input from the user or a message that a thread is waiting on data and can’t be processed any further. The system then quickly switches to the second thread and pushes the first one into the background, where it isn’t processed again until the necessary information has arrived. In this way the system reacts quickly and creates pseudo-simultaneity between the foreground and background threads.
“Switch on Event Multithreading” is also known as “coarse-grained multithreading”. The word “coarse” is used because the technology is best suited to long waiting times: while other techniques react even faster, SoEMT works best with larger thread blocks.
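The switching logic can be illustrated with a toy simulation - a simplified software analogy only, since real SoEMT happens inside the CPU rather than in application code. Two instruction streams are modelled as Python generators, and a tiny scheduler runs the foreground stream until it reports that it’s waiting on data, then swaps foreground and background:

```python
# Toy analogy of SoEMT: run the foreground stream until it hits a "wait",
# then context-switch to the background stream. Instruction lists are invented.
def stream(name, steps):
    for i, kind in enumerate(steps):
        yield (name, i, kind)           # kind is either "work" or "wait"

fg = stream("Thread A", ["work", "work", "wait", "work"])
bg = stream("Thread B", ["work", "wait", "work"])

while fg is not None:
    try:
        name, i, kind = next(fg)
    except StopIteration:               # foreground stream is finished
        fg, bg = bg, None
        continue
    print(f"{name}: instruction {i} ({kind})")
    if kind == "wait" and bg is not None:
        print("-- event: waiting on data, context switch --")
        fg, bg = bg, fg                 # push the waiting thread to the background
```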
Time slice multithreading
With SoEMT, the switch between threads is set off by an event; with time slicing, it takes place at fixed time intervals. Even if a thread hasn’t been completed, the processor starts computing another thread and only switches back at the next interval. Any progress made on the thread is saved in RAM so that processing can resume later.
The challenge here is choosing the best interval length. If the timespan is too short, barely any progress can be made on each thread before the next switch; if it’s too long, pseudo-simultaneity is lost and the user notices that the processes are taking place one after another rather than at the same time.
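Incidentally, the CPython interpreter uses a comparable time-slice mechanism when several Python threads share it: a running thread is asked to yield after a fixed switch interval (5 ms by default). The snippet below simply shows this knob as a real-world example of the trade-off described above:

```python
# CPython asks a running thread to give up the interpreter after a fixed
# "switch interval" - a software counterpart to the time slices described above.
import sys

print(sys.getswitchinterval())    # default is 0.005 (5 ms)
sys.setswitchinterval(0.001)      # shorter slices: more responsive, more switching overhead
sys.setswitchinterval(0.05)       # longer slices: less overhead, less apparent simultaneity
```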
Simultaneous multithreading (SMT)
Simultaneous multithreading (SMT) involves true simultaneity. Threads wait to be computed in so-called pipelines, and the processor works on multiple pipelines in parallel. So rather than constantly switching between two threads, the parts of the process are actually handled at the same time. A single physical core thus acts like multiple (logical) processors. In practice, SMT is combined with multicore technology, so that, with two threads per core, a system with two processor cores gives the impression of having four cores.
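You can see this from software by comparing logical and physical core counts. os.cpu_count() is part of Python’s standard library; psutil is a third-party package (pip install psutil) used here only to query the physical count:

```python
# On an SMT-capable CPU the operating system typically reports twice as many
# logical processors as there are physical cores.
import os
import psutil

logical = os.cpu_count()                       # logical processors (hardware threads)
physical = psutil.cpu_count(logical=False)     # physical cores
print(f"{physical} physical cores, {logical} logical processors")
```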
The CPU manufacturer Intel has been very successful with its Hyper-Threading Technology (HTT). Its competitor AMD has also introduced comparable technology in its processors. Both are implementations of SMT.
Software
With software multithreading, the application alone is responsible for breaking processes down into threads. Only the individual threads are delivered to the operating system and processor, so the hardware is not aware of the connections between threads and handles each one individually. The system assigns each thread a priority level, and threads with higher priority are processed sooner. With this method, new tasks that need to be finished quickly can be squeezed in, while for longer-running processes only one thread at a time is completed and the others are placed further back in the queue.
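As a minimal sketch of that priority idea (task names and priority numbers are made up for illustration), worker threads can pull jobs from a shared priority queue so that an urgent job overtakes long-running background work:

```python
# Priority-driven scheduling in software: workers always take the most
# urgent task next (lower number = higher priority).
import queue
import threading
import time

tasks: queue.PriorityQueue = queue.PriorityQueue()
tasks.put((5, "re-index archive"))
tasks.put((1, "respond to user click"))
tasks.put((3, "autosave document"))

def worker() -> None:
    while True:
        try:
            priority, name = tasks.get(timeout=0.5)
        except queue.Empty:
            return                      # no more work: let the thread exit
        print(f"priority {priority}: {name}")
        time.sleep(0.1)                 # pretend to do the work
        tasks.task_done()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```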
Software multithreading is mostly useful for systems with single-core processors. Since modern computers now come with at least a dual-core CPU, this form of multithreading has lost relevance.
Multithreading vs. multitasking
Multithreading and multitasking might look quite similar at first glance, but the two technologies are based on different ideas. Multitasking simply means that multiple programs are running at the same time. The CPU switches between the individual tasks, but the applications aren’t computed simultaneously - neither truly nor pseudo-simultaneously. The operating system usually organises the various tasks and assigns pending processes to the CPU. It appears to the user as if several programs were being processed at once, but in reality the system is switching back and forth between them.
If you open the Task Manager on Windows (or Activity Monitor on a Mac), you can see which processes the system is running side by side.
Multithreading, by contrast, aims for a higher degree of simultaneity and is primarily intended to speed up individual programs. While multitasking lets different programs run side by side, multithreading involves multiple threads from the same program, and processing those threads simultaneously makes the software run faster.
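The difference can be sketched in Python as well: multitasking roughly corresponds to separate processes, each with its own memory, while multithreading means several threads sharing one process. The work function below is just a placeholder:

```python
# Multitasking-style: independent processes scheduled by the OS.
# Multithreading-style: several threads inside one program.
import multiprocessing
import threading

def work(label: str) -> None:
    print(f"{label} running")

if __name__ == "__main__":
    p1 = multiprocessing.Process(target=work, args=("process 1",))
    p2 = multiprocessing.Process(target=work, args=("process 2",))
    t1 = threading.Thread(target=work, args=("thread 1",))
    t2 = threading.Thread(target=work, args=("thread 2",))
    for unit in (p1, p2, t1, t2):
        unit.start()
    for unit in (p1, p2, t1, t2):
        unit.join()
```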
Multithreading is a smart, cost-saving method for increasing processor performance. However, it only works if the software is set up for it. If you want to increase your computer’s performance without implementing multithreading, you also have a number of options. If you overclock the CPU, be sure to pay attention to CPU temperature - otherwise you could bring the whole system down.