CPI Vs. DPI: What’s The Difference?

So you’ve heard of CPI and DPI used in similar contexts, but what are they, and what is the difference? The short answer: not much. When it comes to mice, CPI and DPI ultimately represent the same thing, and both are closely tied to in-game sensitivity, so “CPI vs. DPI” isn’t really a contest at all.

This article aims to help you understand the underlying technical concepts behind CPI/DPI while also getting to the bottom of a few myths.

CPI Vs DPI, What Do They Mean?

While both CPI and DPI measure the same thing when we are talking about mice, they have different backgrounds.

Mouse CPI

CPI stands for “counts per inch”. Counts are the base units a mouse sensor works with, and generally, the higher the CPI, the more sensitive your movements will feel. CPI describes the number of counts registered when the mouse’s sensor is physically moved exactly one inch (the sensor itself works much like the CMOS sensor in a digital camera). A mouse set to 800 CPI will therefore register 800 counts when moved one inch.

Mice these days can come with a CPI of up to 20,000, meaning they register 20,000 counts per inch. Mouse manufacturers usually let gamers change their CPI settings in increments of 100, so you could go as low as 100 if that were your preference.

Now imagine moving your 800 CPI mouse exactly 1/800 of an inch, which results in a single count being registered. This count is first processed by the mouse and then transmitted to the PC, where it is interpreted by your operating system.
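The count arithmetic described above is simple enough to sketch in a few lines of Python (the function name and values here are illustrative, not part of any real mouse API):

```python
def counts_for_distance(inches: float, cpi: int) -> int:
    """Number of counts a sensor registers for a physical move of `inches`."""
    return round(inches * cpi)

# An 800 CPI mouse moved one inch registers 800 counts;
# moved 1/800 of an inch, it registers exactly one count.
print(counts_for_distance(1.0, 800))      # 800
print(counts_for_distance(1 / 800, 800))  # 1
```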

Windows Sensitivity

On the desktop you are working in a 2D environment: the cursor moves along a flat plane with clear boundaries. Once you reach the edge of the screen, the cursor stops moving no matter how far you move the mouse. The cursor in Windows is controlled by the operating system’s pointer settings, which anyone can adjust in the Control Panel. The default slider lets you choose between 11 levels, each representing a different multiplier. At the sixth notch the multiplier is one, giving a one-to-one translation between physical mouse movement and on-screen motion: one count is translated into one pixel. With a multiplier below one, some counts are omitted, whereas a multiplier greater than one causes some counts to be doubled in order to achieve the specified cursor sensitivity.

Example: If the multiplier is set to two, every count will be translated into an on-screen motion of two pixels. Since one count is the smallest possible input from the mouse, a multiplier of two results in every second pixel being skipped. In other words, a multiplier greater than one inevitably leads to a loss of precision, as certain pixels become impossible to reach with the desktop cursor; this is known as pixel skipping.
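A minimal sketch of this effect, assuming a whole-number multiplier and a cursor starting at pixel 0 (the function name is hypothetical):

```python
def reachable_pixels(counts: int, multiplier: int) -> list[int]:
    """Cursor x-positions reachable from 0 when each count moves
    the cursor by `multiplier` pixels."""
    return [c * multiplier for c in range(counts + 1)]

print(reachable_pixels(5, 1))  # [0, 1, 2, 3, 4, 5]  every pixel reachable
print(reachable_pixels(5, 2))  # [0, 2, 4, 6, 8, 10] odd pixels skipped
```

With a multiplier of two, pixel 3 can never be landed on, which is exactly the precision loss described above.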

3D Game Environments

There is a direct relationship between counts and pixels (hence the danger of pixel skipping), and this holds for all 2D environments, whether the desktop or games like StarCraft. It does not hold for 3D environments, however; let’s see why.

When we look at a 3D game, the most basic element is the world in front of you as you see it (your point of view), which restricts what you can see at any given time. The same basic principle applies to movies or photographs: in a movie, the world is fixed while the camera moves, capturing whatever slice of that reality it is pointed at. In games, this principle is reversed: the camera is fixed and the world around you is moved. The technical term for this is an “inverted world transform matrix” (or “view matrix” for short). For convenience, we’ll call any world movement “camera movement” from here on out.

This kind of camera movement can be thought of as rotation, with the camera itself as the axis of rotation. Rotation is based on angles, and the base unit for measuring angles is the radian; for convenience, we’ll use degrees instead below. For rotating the camera, games typically use a base radial unit that corresponds to one count. For each count registered by the mouse, the camera (the view matrix) is rotated by the base radial unit (in degrees) multiplied by the sensitivity. Much like the Windows sensitivity slider explained above, in-game sensitivity acts as nothing but a multiplier here.

To better understand this, take Quake (an early FPS) as an example. Its base radial unit is called yaw (for the x-axis) and pitch (for the y-axis) and measures exactly 0.022°. For every count received from the mouse, the camera is rotated by 0.022°, given an in-game sensitivity of one. At a sensitivity of two the camera rotates by 0.044° per count, whereas at a sensitivity of 0.5 it rotates by 0.011°.

The formula can therefore be defined as follows: effective sensitivity (degrees per count) = yaw/pitch × sensitivity multiplier.

As you can see, the base radial unit merely defines how the sensitivity scales, i.e. using a smaller base radial unit will result in higher sensitivity multipliers, whereas a larger base radial unit will result in lower sensitivity multipliers.

Example: Overwatch uses a yaw/pitch of 0.0066°, which is why its sensitivity multipliers are higher than in CS:GO, which uses a yaw/pitch of 0.022°.
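The formula above also gives a way to convert a sensitivity between games with different base radial units: pick the target-game multiplier that yields the same degrees per count. A small sketch, using the yaw values quoted in the text (function names are hypothetical):

```python
QUAKE_YAW = 0.022       # degrees per count (CS:GO uses the same value)
OVERWATCH_YAW = 0.0066  # degrees per count

def effective_sensitivity(yaw_deg: float, multiplier: float) -> float:
    """Degrees of rotation per count: yaw * sensitivity multiplier."""
    return yaw_deg * multiplier

def convert_sensitivity(mult_from: float, yaw_from: float, yaw_to: float) -> float:
    """Multiplier in the target game that gives the same degrees per count."""
    return mult_from * yaw_from / yaw_to

# A CS:GO sensitivity of 2 corresponds roughly to this Overwatch sensitivity:
print(round(convert_sensitivity(2, QUAKE_YAW, OVERWATCH_YAW), 2))  # 6.67
```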

Where Does CPI Enter The Equation?

As explained earlier, the base radial unit “corresponds” to one count. This means that when you move your 400 CPI mouse exactly one inch, the camera is rotated by exactly 400 × (base radial unit × sensitivity multiplier). So if your base radial unit is 0.022°, your sensitivity multiplier is two, and your CPI is 400, the view matrix will be shifted by 17.6° (400 × 0.022 × 2). We can further calculate 360/17.6 to get the number of inches it takes to perform a full revolution: 20.45 inches (51.95 cm). This is called the turn circumference: the distance of physical mouse movement required to perform a full 360° rotation in-game.
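The turn-circumference arithmetic above fits in a tiny helper. A minimal sketch using the article’s numbers (the function name is made up for illustration):

```python
def turn_circumference_cm(cpi: int, yaw_deg: float, multiplier: float) -> float:
    """Physical mouse travel for a full 360 degree turn, in centimetres."""
    degrees_per_inch = cpi * yaw_deg * multiplier
    inches_per_turn = 360 / degrees_per_inch
    return inches_per_turn * 2.54  # 1 inch = 2.54 cm

# The article's example: 400 CPI, 0.022 degree yaw, sensitivity 2
print(round(turn_circumference_cm(400, 0.022, 2), 2))  # 51.95
```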

The relation between CPI and effective sensitivity (as defined above) is inversely proportional: the lower the CPI, the higher the effective sensitivity needs to be to achieve the same turn circumference, and the higher the CPI, the lower the effective sensitivity needs to be. The important part is that the base radial unit (yaw/pitch) is constant, so the only variable here is the in-game sensitivity. Why is that important? Let’s take a look at the following thought experiment.

Imagine you want your turn circumference to be exactly 10.39 cm (a very high effective sensitivity) in Quake (yaw/pitch of 0.022°), and your mouse only supports 400 CPI. To achieve that turn circumference, you have to set your sensitivity multiplier to 10. Now imagine you want to turn your point of view as little as possible, so you move the mouse just enough for it to register a single count. That single count results in a view matrix shift of 0.22°, which is quite large: the view will ‘jump’ a fair distance on screen in a single step.

The point is that this view matrix shift defines the limit of your possible precision. Since a single count is the smallest possible input from the mouse, it is impossible to aim at a position between the starting and ending point of that ‘jump’. The smallest possible view matrix shift is 0.22°; any smaller shift you may intend to perform is simply unachievable. Compare this to the same situation with a mouse set to 1600 CPI: to achieve the same turn circumference of 10.39 cm, you only need a sensitivity multiplier of 2.5, resulting in a minimum possible view matrix shift of 0.055°, which is a lot finer.
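The minimum view matrix shift in this thought experiment is just one count’s worth of rotation. A quick sketch of the two setups compared above (hypothetical function name):

```python
def min_view_shift_deg(yaw_deg: float, multiplier: float) -> float:
    """Smallest possible rotation: the degrees produced by a single count."""
    return yaw_deg * multiplier

# Same 10.39 cm turn circumference, two different CPI settings:
print(round(min_view_shift_deg(0.022, 10), 3))   # 0.22   (400 CPI, sens 10)
print(round(min_view_shift_deg(0.022, 2.5), 3))  # 0.055  (1600 CPI, sens 2.5)
```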

The phenomenon just described is what people mean when they talk about so-called ‘pixel skipping’. As we have seen, however, no pixels are being ‘skipped’: the set resolution is entirely irrelevant to sensitivity in games using an inverted world transform matrix (a 3D environment). Camera movement is done with angles, not pixels. A view matrix shift of 0.22° is a view matrix shift of 0.22° whether the resolution is 720p or 2160p. The apt term for this phenomenon is therefore angular granularity, which describes how fine or coarse the rotation (the camera movement) will be. Given an identical turn circumference, low CPI with high sensitivity results in coarser angular granularity, whereas high CPI with low sensitivity results in finer angular granularity.

What Does This Mean For You And Your Mouse?

Now, this doesn’t mean that you should turn up your CPI to the highest value possible in order to achieve the highest possible angular granularity.

The following example should make it obvious why. Imagine that you want your turn circumference to be 51.95 cm instead of 10.39 cm in Quake. Given the same 400 CPI mouse as before, your sensitivity multiplier would have to be 2 instead of 10. The resulting minimum view matrix shift would then be 0.044°, which is finer than the minimum view matrix shift in the 1600 CPI example above. The conclusion is that angular granularity becomes less of an issue the higher your desired turn circumference is (or, to put it more simply, the lower your effective sensitivity is). Of course, it is always possible to increase CPI while decreasing sensitivity in order to get even finer (‘smoother’) rotation, but after a certain point, the gain in angular granularity is no longer perceptible (let alone performance relevant).
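This comparison can be checked numerically. A sketch (helper names are made up) that derives the sensitivity multiplier needed for a target turn circumference and the minimum view shift that results:

```python
def required_multiplier(cpi: int, yaw_deg: float, circumference_cm: float) -> float:
    """In-game sensitivity needed to hit a given turn circumference."""
    inches_per_turn = circumference_cm / 2.54
    return 360 / (inches_per_turn * cpi * yaw_deg)

def min_shift(cpi: int, yaw_deg: float, circumference_cm: float) -> float:
    """Smallest possible view shift (one count) at that configuration."""
    return yaw_deg * required_multiplier(cpi, yaw_deg, circumference_cm)

# 400 CPI at a 51.95 cm circumference vs 1600 CPI at 10.39 cm:
print(round(min_shift(400, 0.022, 51.95), 3))   # 0.044
print(round(min_shift(1600, 0.022, 10.39), 3))  # 0.055
```

A low effective sensitivity at modest CPI can thus yield finer granularity than a high effective sensitivity at four times the CPI.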

If you haven’t noticed your rotation being ‘jerky’ or ‘not smooth’ in games so far, chances are your angular granularity is already sufficiently fine. As a general rule of thumb, I’d advise using at least 1600 CPI at a turn circumference below 10 cm, at least 800 CPI between 10 cm and 25 cm, and at least 400 CPI above 25 cm, but these are rough estimates and ultimately subjective. It is absolutely not advisable to increase CPI to unnecessarily high levels just to avoid ‘pixel skipping’ (while possibly increasing sensor smoothing in the process), which, as we have seen, only exists in a specific sense anyway.
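The rule of thumb above is subjective, but for completeness it can be written down directly. A sketch encoding exactly the thresholds from the text, nothing more (the function name is invented):

```python
def suggested_min_cpi(turn_circumference_cm: float) -> int:
    """Minimum CPI suggested by the article's (rough, subjective) rule of thumb."""
    if turn_circumference_cm < 10:
        return 1600
    elif turn_circumference_cm < 25:
        return 800
    return 400

print(suggested_min_cpi(8))   # 1600
print(suggested_min_cpi(15))  # 800
print(suggested_min_cpi(40))  # 400
```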

So What On Earth Is DPI Then?

We will clear up the confusion in a moment, but first, let’s discuss what DPI technically is. DPI is short for “dots per inch” and is actually a measure of the output resolution of a printer.

DPI refers to the number of individual dots that can be placed in a line within the span of one inch. Strictly speaking, it means “printer dots per inch”, and the term is often misused. In this context, DPI (or PPI) describes the resolution, in dots per inch, of a digitally printed item.

So Why Do We Use DPI Instead Of CPI?

CPI vs. DPI: so they are the same thing, right? When we are talking about gaming mice or general office mice, yes, they basically mean the same thing. The use of DPI over CPI seems to have simply been a marketing choice by mouse manufacturers; perhaps they thought the general public could relate to DPI more easily.

Whether you just use a mouse at the office or you love to game, the takeaway from all this is that CPI and DPI ultimately mean the same thing in the world of mice. So stop worrying, and keep using your mouse the same way you always have.