Expert Blog: Optimizing CPU Usage and Performance



For most applications, performance is centered around throughput – how much work the server can process in a specific timeframe. That’s why most high-end servers on the market today are designed with throughput in mind.

However, there is no perfectly optimized server for running all types of workloads out-of-the-box. To get the most out of your server’s available resources, you need to understand the CPU’s physical limitations relative to your performance requirements. Read the Expert Blog by Danilo Danicic, Marketing Content Writer at phoenixNAP.

All CPU optimization efforts are essentially software-based. That implies optimizing your workload or other software running on the machine as opposed to upgrading hardware. The process of fine-tuning server performance usually involves tradeoffs – optimizing one aspect of the server at the expense of another.

Define Performance Objectives

Danilo Danicic

When optimizing your workloads for resource usage and performance, tradeoffs between CPU, memory, and storage are inevitable. For example, increasing the size of the cache will improve the overall processing speed, but will result in higher memory consumption.
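The cache-for-memory tradeoff described above can be illustrated with Python's built-in `functools.lru_cache`, which stores computed results in RAM so repeated calls skip recomputation (the function and its `maxsize` here are illustrative placeholders, not from the article):

```python
# Illustration of the cache-size tradeoff: lru_cache trades memory
# (stored results) for speed (skipped recomputation). A larger
# maxsize means more cache hits but higher RAM consumption.
from functools import lru_cache

@lru_cache(maxsize=1024)  # hypothetical size -- tune to your memory budget
def expensive(n: int) -> int:
    # Stand-in for a costly computation.
    return sum(i * i for i in range(n))

expensive(50_000)                    # first call computes the result
expensive(50_000)                    # second call is served from the cache
print(expensive.cache_info().hits)   # -> 1
```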

That’s why determining which part of your system needs to be optimized calls for an in-depth understanding of the workloads running on it.

Since each workload is unique and consumes resources differently, there are no fixed rules for defining hardware requirements or performance targets. SQL database workloads, for example, are heavy on the CPU while 3D rendering workloads gobble up high amounts of memory.

Before deploying or configuring your server infrastructure, you need to assess your requirements through a process called capacity planning. Server capacity planning gives you insight into your current resource usage and helps you develop optimization strategies for present and future performance targets.

For the most common types of workloads such as database processing, app or web hosting, you should consider these fundamental questions when defining CPU performance objectives:

  • What is the number of anticipated users that will be using your application?
  • What is the size of a single request?
  • How many requests do you expect during average load and spikes in demand?
  • What is the desired response-time SLA? (e.g., one second, five seconds)
  • What is your target CPU usage?
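The answers to these questions feed directly into a back-of-envelope capacity estimate. A minimal sketch, with purely hypothetical figures standing in for your own measurements:

```python
# Back-of-envelope CPU capacity estimate. All figures below are
# hypothetical placeholders -- substitute your own measured values.
import math

def cores_needed(requests_per_sec: float,
                 cpu_seconds_per_request: float,
                 target_utilization: float) -> int:
    """Cores required so average load stays at the target utilization."""
    busy_cores = requests_per_sec * cpu_seconds_per_request
    return math.ceil(busy_cores / target_utilization)

# Example: 200 req/s at peak, 30 ms of CPU time per request,
# and an 80% target CPU usage.
print(cores_needed(200, 0.030, 0.80))  # -> 8
```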

You will also need to determine which server components your application relies on the most:

  • Performance/throughput – CPU-based
  • Memory consumption – RAM-based
  • Latency – network-based
  • I/O operations – disk-based

When optimizing for performance, the number of CPU cores plays a crucial role. Multi-threaded jobs benefit from multi-core processors. So, not having enough cores can cause high CPU wait time, increasing the time it takes to process multi-threaded jobs.

Your storage configuration also has an impact on performance. Having the right disk type and properly configured storage resources decreases latency. This is especially important when running latency-sensitive workloads such as database processing. Such workloads need to be optimized to utilize memory resources efficiently.

Therefore, understanding how various types of workloads utilize different server resources can help you achieve your performance objectives.

  • CPU-intensive workloads – video encoding, machine learning, algorithmic calculations, high-traffic web servers
  • Memory-hungry workloads – SQL databases, CGI rendering, processing Big Data in real time
  • Latency-sensitive workloads – video streaming, search engines, mobile text message processing, High Frequency Trading

Monitor Resource Usage

High CPU usage will degrade the overall performance of your servers. When usage exceeds 80% of the CPU’s physical resources, your application will load slower, or the server may stop responding.

There are many causes of high CPU usage. For example, if a database server uses 100% of the CPU capacity, that might be due to the application running too many processor-intensive tasks such as sorting queries. To decrease the load, you would have to optimize your SQL queries to utilize available resources more efficiently.
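One common query optimization is adding an index so the database stops scanning the whole table for every lookup. A minimal sketch using Python's bundled `sqlite3` module (the table and index names are illustrative; the same principle applies to any SQL database):

```python
# Sketch: how an index turns a full-table scan into an index lookup,
# cutting the CPU spent filtering rows. sqlite3 is used purely for
# illustration because it ships with Python.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i % 100) for i in range(10_000)])

query = "SELECT * FROM orders WHERE customer_id = 42"

plan = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()
print(plan[-1])  # e.g. "SCAN orders" -- a full-table scan

conn.execute("CREATE INDEX idx_customer ON orders (customer_id)")
plan = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()
print(plan[-1])  # e.g. "SEARCH orders USING INDEX idx_customer ..."
```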

High CPU usage could also be attributed to poorly coded applications, outdated software, or security vulnerabilities. The only way to precisely determine the root cause of performance issues or bottlenecks is to use analytics and monitoring tools.

While analyzing monitoring data, keep in mind that CPU utilization should never be at full capacity. If your application is consuming 100% of the total processing power under average load, the CPU will be unable to handle sudden spikes in demand, resulting in latency or service unavailability.

As a rule of thumb, latency-sensitive applications should utilize at most 80% of the CPU’s power. And for applications that are less sensitive to latency, CPU utilization can go up to 90%. However, load that is continuously above 80% should be investigated further to determine what is eating up resources.
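The 80% rule of thumb is easy to check with the standard library alone, using the one-minute load average normalized by core count (a rough proxy for sustained CPU usage; `os.getloadavg` is Unix-only):

```python
# Minimal check of the 80% rule of thumb: the 1-minute load average
# divided by the core count approximates sustained CPU saturation.
# Unix-only -- os.getloadavg is not available on Windows.
import os

def cpu_load_ratio() -> float:
    """1-minute load average divided by the number of cores."""
    return os.getloadavg()[0] / os.cpu_count()

ratio = cpu_load_ratio()
if ratio > 0.80:
    print(f"Investigate: sustained load at {ratio:.0%} of capacity")
else:
    print(f"Load OK at {ratio:.0%} of capacity")
```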

To mitigate high CPU usage issues, common solutions include optimizing your application’s code, limiting the number of requests, optimizing the kernel or operating system processes, or offloading the workload to additional servers.
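One of the mitigations listed above, limiting the number of requests, can be sketched with a bounded semaphore that caps how many requests are processed at once, so bursts queue instead of saturating the CPU (the handler and the limit of 4 are hypothetical):

```python
# Sketch: capping concurrent request handling with a semaphore so
# excess requests wait for a free slot instead of piling onto the CPU.
import threading

MAX_CONCURRENT = 4  # hypothetical limit -- tune to your core count
_slots = threading.BoundedSemaphore(MAX_CONCURRENT)

def handle_request(payload: str) -> str:
    with _slots:  # blocks when all slots are busy
        # ... CPU-intensive work would happen here ...
        return payload.upper()

print(handle_request("hello"))  # -> HELLO
```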

Hardware Optimization

Most servers are shipped with pre-built hardware components, especially those offered by service providers. That usually means fixed CPU and RAM resources. Therefore, rarely will you have the option to perform hardware upgrades when your workloads hit maximum CPU capacity. However, there is a solution on the market that comes close.

In collaboration with Intel, phoenixNAP has developed and launched FlexServers — the world’s only dedicated server platform with vertical CPU scaling capabilities. FlexServers come with four predefined CPU configurations in a single chassis. When you need more processing power, a simple server reboot is all it takes to access a more robust physical CPU.

According to the specs, each FlexServer is powered by 2nd Generation Intel Xeon Scalable processors. The basic CPU configuration delivers 20 cores at 2.1GHz clock speed per core, while the most powerful configuration offers 56 cores at a frequency of 2.7GHz per core.

Cores, clock speed, and threads

Depending on your application’s structure and the type of workload, the number of processing units (cores) and the CPU’s frequency (clock speed) will play an important role when performing optimization tasks.

Keep in mind that a higher number of cores doesn’t necessarily mean better performance. A CPU with fewer cores operating at a higher frequency can complete individual tasks faster than a system with a higher core count and a slower clock speed.

Web servers, for example, benefit from multiple cores because they initiate hundreds of threads that require more processing units, whereas hypervisor workloads perform better with faster clock speeds. That’s because the hypervisor distributes physical resources across multiple virtual machines, which can share a single processing unit.

However, hypervisors can also take advantage of multi-core processors. The higher the number of cores, the more virtual machines it can support, all the while keeping CPU wait times low.

The programming language running your application also has an impact on CPU usage. Languages that run close to the metal, such as C or C++, give you more control over hardware operations than interpreted languages such as Python or PHP. That’s why writing applications in C or C++ gives you an obvious advantage if you need granular control over how your application consumes CPU and memory resources.


Since every workload is different, you should use various monitoring tools to determine which parts of your system or application need to be optimized. Almost all your CPU usage optimization efforts will be software-based. That’s because upgrading the server’s physical components to support increases in demand is oftentimes impossible, expensive, or impractical. However, there are solutions on the market such as phoenixNAP’s FlexServers that allow you to change physical CPU configurations to gain access to more processing power when you need it, without upgrading hardware.