- Hardware specs
- Results and Discussion
In this post I’d like to share a few interesting observations about how the power consumption of the Thinkpad T series (and, by extension, of its individual components) evolved over four generations. I had a T420 as a baseline machine and a T460 to compare it with. Some of the more important specs of the two machines are tabulated below:
|Spec|T420|T460|
|---|---|---|
|CPU|i5-2520M (35 W TDP)|i5-6200U (15 W TDP)|
|GPU|HD Graphics 3000|HD Graphics 520|
|Chipset|Intel Cougar Point QM67|Intel Sunrise Point-LP|
|Memory|8 GB DDR3|16 GB DDR3|
|Display|TN panel|IPS panel|
|WLAN|N6205|AC8260|
One notable thing here is the DDR3 memory used in the T460, which surprised me, as the 6th generation Core (Skylake) processors already support DDR4 memory. Apparently, at least two versions of the T460 are officially available: one equipped with DDR3 and another with DDR4. There is absolutely no difference in the looks or main model number of these machines; both are simply sold as “Thinkpad T460”. To find out which model you have, you actually have to turn the machine on, or open it up and check which memory type it was manufactured with. It would be interesting to know why Lenovo chose to camouflage this rather important difference so well.
These machines mainly incorporate Intel based solutions, and as such I can only discuss what was going on on Intel’s side of the story. Of course, it would also be interesting to see how things developed in the AMD and ARM departments, but this post was inspired mainly by chance (having two such machines at my disposal simultaneously), so trends in those two other companies’ products could not be explored here.
To get a semi-comprehensive view of the trends over the period in which these computers were manufactured, I ran a battery of power consumption tests on both machines. The measurements are by no means terribly accurate, owing to the measurement device at my disposal: a simple digital power meter of the kind sold in just about any household electronics store. Unsurprisingly, it is slow to respond to changes in input, its readout can fluctuate slightly, and it doesn’t resolve anything finer than whole Watts. Knowing all this, I must stress that all data below are for informative purposes only.
I used both Linux (current Gentoo stable, 4.14 kernel series) and Windows (7 and 10) to conduct my measurements. One could argue that not using the same version of Windows on both laptops somewhat reduces the comparability of the results, but I have two objections to that line of thought: 1. Both computers run the version of Windows they were designed for, so it is safe to assume that we are testing them under factory-intended circumstances, which in the end makes the comparison valid. 2. The inaccuracy of the measurement device introduces a far greater error than the one produced by using different versions of Windows. Also, if there really is a difference between the power management schemes of these OSs, Linux used in parallel will reveal such anomalies.
Also, I made sure that no battery was charging and no other resource intensive tasks were running in the background. I’d like to note here that the latter was especially hard to achieve on Windows 10, as it always seems to randomly perform some pretty CPU and disk intensive tasks under the hood, literally for minutes, especially during the first half hour after a fresh boot. Some of these processes go under the names “Windows Modules Installer Worker”, “Office click-to-run (SxS)” or “Compatibility telemetry”, but the virus scanner also likes to update itself in a similarly resource intensive way, often racing the aforementioned processes for CPU. Windows 7 was far better in this regard in my opinion: apart from the casual short Windows update check, it allowed me to start the tests pretty quickly after boot. My Gentoo Linux gave me no such problems, which also happens to be one of the many reasons I prefer it as my primary OS.
Anyway, with all this in mind, here are the scenarios under which I tested the machines:
- idle CPU, full brightness (IC, FB): Complete system idle, with LCD brightness at max
- idle CPU, half brightness (IC, HB): Same as above, with LCD brightness set to half
- idle CPU, min. brightness (IC, MB): Same as the first, but with LCD brightness set to minimum while still on
- idle CPU, full brightness, wifi off (IC, FB, W0): Same as the first, but the wireless card was turned off (not just disconnected from the AP)
- full CPU, full brightness (FC, FB): CPU was working at 100% with full LCD brightness
- full CPU+FPU, full brightness (FCF, FB): Same as above, but the FPU was also stressed
- full mem, full brightness (FM, FB): Intensive memory operations with 100% CPU usage
- full GPU, full brightness (FG, FB): GPU was working at 100% with full LCD brightness
On Windows I used the AIDA64 and FurMark tools to stress the different components, the latter only for the GPU tests. On Linux I used the following command for CPU stressing:
```sh
seq 8 | xargs -P0 -n1 md5sum /dev/zero
```
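As a side note, the hard-coded 8 assumes a specific number of threads to keep busy; a sketch of a more portable variant asks the system for its core count instead (assuming `nproc` from GNU coreutils is available):

```shell
# Variant of the stress one-liner that spawns one md5sum per available
# core instead of hard-coding 8. md5sum never finishes reading /dev/zero,
# so each process pins a core until interrupted with Ctrl-C.
seq "$(nproc)" | xargs -P0 -n1 md5sum /dev/zero
```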
GPU, memory and FPU measurements were not conducted under Linux, as no difference in maximum component consumption was expected between OSs; this assumption is supported by the CPU stress test results. If dissimilarities exist between the OSs, they will be in the low power modes (because of varying power management implementations), and those are revealed by the other tests.
Results and Discussion
Before starting the discussion of the results, it is necessary to highlight that all data discussed below are consumptions under maximal usage, unless otherwise noted. The raw data can be found in the Appendix at the end of this post.
The total power consumption in complete idle is 4-5 W lower for the T460 than for the T420, depending on the OS, which is a ~50% drop. Also, while there seems to be no difference in power consumption between Linux and Windows on the T420, there is one on the T460: under Linux the system draws a whole Watt more. It would be interesting to know why there is a difference between the two OSs on the T460 and why there isn’t one on the older model; in my opinion (which might not be right), it may be due to power management evolving faster under Windows than under Linux. The discrepancy could also be driver related, but this is all just speculation. Anyway, the older Thinkpad’s 4-5 extra Watts are probably mostly due to the higher idle consumption of the i5-2520M, and to a lesser degree to the inferior power efficiency of the chipset or SSD, though I admit these two can contribute a Watt or two.
In the display department we can also see some interesting results. At full brightness I couldn’t detect any difference between the older TN and newer IPS panels, but at half brightness, yet again, something was happening on the T460 when testing different OSs. Under Linux, the IPS panel seems to draw 1 W more. I measured it quite a few times to make sure, but the results were consistent: there really is an extra Watt being used at half brightness under Linux. Again, this was not happening on the T420, so the different LCD panel technologies could somehow contribute to the discrepancy. At minimum brightness, Windows 10 on the T460 was the winner. I can’t even speculate on why these inequalities exist, as I’m totally clueless about how these two OSs implement brightness control, so I’ll leave it to the public to figure out.
Full CPU usage showed something intriguing, but let’s get the obvious out of the way first. As expected, the i5-6200U was more efficient, and this time there was no dissimilarity between Linux and Windows. The puzzling part comes in when we check the total drawn power at 100% usage, which is 21 W for the i5-2520M and 12 W for the i5-6200U. So why is this noteworthy? Simply because the factory TDP is 35 W for the 2520M and 15 W for the 6200U (as indicated in parentheses in the tabulated data), so the measured values are 40% and 20% lower than the factory specified ones, respectively. In all tests I made sure that all cores were being stressed, so these are most intriguing results indeed. I guess Intel left a bit of headroom in the thermal design to be able to handle the other components on the die, but again, this is just my guess.
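For the record, the gap between measured draw and factory TDP can be recomputed with a quick awk one-liner (wattages are the ones quoted above):

```shell
# Percent by which the measured full-load draw undercuts the factory TDP
# (21 W measured vs 35 W TDP, and 12 W measured vs 15 W TDP).
awk 'BEGIN {
  printf "i5-2520M: %.0f%% below TDP\n", (1 - 21/35) * 100
  printf "i5-6200U: %.0f%% below TDP\n", (1 - 12/15) * 100
}'
# prints:
# i5-2520M: 40% below TDP
# i5-6200U: 20% below TDP
```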
Let’s examine these results from a computing power perspective as well. According to passmark.com the single thread rating practically did not increase over the years, although the overall CPU mark (multi thread) score is higher (4016 instead of 3584), a good 12% improvement. By calculating the points per Watt (PPW) values for the overall CPU mark, we get 170.6 (102.4) PPW and 334.7 (267.7) PPW for the 2520M and 6200U, respectively, which is a 96% (161%) increase in efficiency. The single thread PPW values are 71.2 (42.7) and 124.9 (99.9), a 75% (134%) change in efficiency. The values in parentheses are based on the factory provided power consumption, those outside on the measured wattages.
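The PPW arithmetic for the overall CPU mark can be reproduced as follows (scores are the passmark.com marks quoted in the text; the last digit may differ by rounding):

```shell
# Points per Watt for the overall CPU mark: passmark score divided by
# measured draw, and by factory TDP for the parenthesized variants.
awk 'BEGIN {
  printf "i5-2520M: %.1f PPW measured, %.1f PPW by TDP\n", 3584/21, 3584/35
  printf "i5-6200U: %.1f PPW measured, %.1f PPW by TDP\n", 4016/12, 4016/15
  printf "gain: %.0f%% measured, %.0f%% by TDP\n",
         ((4016/12) / (3584/21) - 1) * 100, ((4016/15) / (3584/35) - 1) * 100
}'
```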
The WLAN test also showed something interesting. It would seem that the older N6205 consumes a measurable 1 W of power, while with the newer AC8260 I couldn’t get the meter to show different wattages with the Wifi radio on or off. Another plus point for the newer tech.
The FPU stress tests pretty much showed what was expected: the newer machine drew 5 W less than the older one. What is astounding is that this translates to an almost 50% decrease.
Probably the most surprising result came in the memory department, where the previous trend turned around and the T460 came out behind the T420. The memory stress test produced a significant 3 W higher power consumption on the T460 than what I could measure on the older computer, which is 2.5 times the older value. One could argue that the 20% higher clock rate of the T460’s memory modules may result in an increased wattage drain, but that alone does not explain a 150% increase. But if we assume (I think we safely can) that memory modules of twice the capacity consume twice the power (8 and 16 GB -> 2 and 4 W, respectively), and that a 20% higher frequency results in a comparable increase in power drain as well (which it does: ~12%), then the results above totally make sense. The technology generation didn’t change, so there isn’t any improvement from the power perspective.
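As a quick sanity check of that estimate (assuming the ~2 W baseline for the T420’s 8 GB stated above):

```shell
# Doubling capacity doubles the draw (2 W -> 4 W), then a 20% clock bump
# adds roughly proportionally, landing close to the ~5 W extra draw
# measured on the T460.
awk 'BEGIN { printf "estimated T460 memory draw: %.1f W\n", 2 * 2 * 1.2 }'
# prints: estimated T460 memory draw: 4.8 W
```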
Seeing the previous trends, it doesn’t come as a surprise that, yet again, the newer GPU generation exhibited a lower wattage, by ~14%. What is more eye catching here is the fact that, according to passmark.com, all this was achieved while increasing the G2D and G3D ratings by ~16% and ~164%, respectively. This gives us 13.9 PPW and 9.8 PPW for the HD3000, and 18.5 PPW and 29.6 PPW for the HD520, for G2D and G3D respectively. That is a 33% and 202% efficiency increase for the respective computation methods over 4 generations!
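The efficiency increases follow directly from the PPW figures above:

```shell
# Relative points-per-Watt gain of the HD 520 over the HD 3000,
# from the G2D and G3D PPW values quoted in the text.
awk 'BEGIN {
  printf "G2D: %.0f%% efficiency increase\n", (18.5/13.9 - 1) * 100
  printf "G3D: %.0f%% efficiency increase\n", (29.6/9.8 - 1) * 100
}'
# prints:
# G2D: 33% efficiency increase
# G3D: 202% efficiency increase
```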
The image above provides a partial overview of the findings described earlier in this post. Again, it is clearly seen that Intel did quite a great job of decreasing power consumption while still managing to increase the computing power of both GPUs and CPUs. This resulted in a skyrocketing efficiency boost for these components, which is quite welcome in a mobile environment.
The base system also received some care in this area, resulting in a 50% drop in drained power, meaning that the chipset, CPU and storage media in idle now barely require more than a few Watts to run.
Displays did not seem to evolve much in this respect during these few years, but those 3 Watts at maximum brightness aren’t that much to be concerned about anyway. Still, with the ever increasing efficiency of the base system, displays have become a major hindrance to the battery-powered longevity of such devices, as LCD panels now contribute half of the total power consumed in idle. Hopefully this means that OLED displays will become more commonplace on laptops as well, as even a Watt or two saved can nowadays mean considerably longer battery life.
Since both machines use the same memory technology, I cannot conclude anything of worth here, except that higher memory capacity and frequency mean higher consumption, observations that are almost not worth writing down because of their obvious nature.
I was a bit surprised by the WLAN card performance: the newer technologies in this branch achieved such gains that my measurement device could not register any added drain with the newer AC8260 card in use. Obviously, this is partly down to the inaccuracy of the measurement device, but still, quite impressive.
As always, thanks for reading!
Appendix

|Scenario|T420 (Windows 7)|T420 (Linux)|T460 (Windows 10)|T460 (Linux)|
|---|---|---|---|---|
|Base system (chipset + idle CPU + SSD)| | | | |
|idle CPU, full brightness (IC, FB)| | | | |
|idle CPU, half brightness (IC, HB)| | | | |
|idle CPU, min. brightness (IC, MB)| | | | |
|idle CPU, full brightness, wifi off (IC, FB, W0)| | | | |
|full CPU, full brightness (FC, FB)|21 (35) W|21 (35) W|12 (15) W|12 (15) W|
|full CPU+FPU, full brightness (FCF, FB)| | | | |
|full mem, full brightness (FM, FB)| | | | |
|full GPU, full brightness (FG, FB)| | | | |