Friday, March 30, 2012

Processor Governors & Disk I/O Schedulers in Android

I found this article by the great kernel developer Neo3000, posted in the Samsung Galaxy SII thread on forum.xda-developer.com.
I think it is really useful for me and for anyone who is still confused or doesn't know about processor "governors" on the ARM architecture and I/O (disk) schedulers in Android.
I hope this reproduction of the article is helpful.


Governors

1) Ondemand
2) Lulzactive (default)
3) Performance
4) Lagfree
5) Conservative (module)
6) Lazy (module)
7) Lionheart (tweaked version of the Conservative governor)

I/O Schedulers

1) BFQv3-R1 (Budget Fair Queuing)
2) Noop
3) SIO
4) VR (default)
5) CFQ (Completely Fair Queuing)

Governors Guide:
1) Ondemand
Default governor in almost all stock kernels. Simply put, Ondemand jumps to maximum frequency on CPU load and decreases the frequency step by step on CPU idle. No suspend/wake profiles. Even though many of us consider this a reliable governor, it falls short on battery saving and performance on default settings.
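The "jump to maximum on load, step down on idle" rule above can be sketched in a few lines. This is a minimal illustration, not kernel code; the frequency table and the 80% up threshold are made-up example values, not ondemand's actual defaults.

```python
# Illustrative sketch of the ondemand scaling rule: jump straight to the
# maximum frequency when load crosses the up threshold, otherwise step
# down one frequency at a time. Values are invented examples.

FREQS = [200, 500, 800, 1000, 1200]  # MHz steps, lowest to highest
UP_THRESHOLD = 80                    # percent load that triggers the jump

def ondemand_next(current: int, load: int) -> int:
    idx = FREQS.index(current)
    if load >= UP_THRESHOLD:
        return FREQS[-1]             # jump to max frequency on load
    if idx > 0:
        return FREQS[idx - 1]        # decrease one step on idle
    return current

print(ondemand_next(200, 90))   # heavy load: jumps to 1200
print(ondemand_next(1200, 10))  # idle: steps down to 1000
```

The asymmetry (jump up, step down) is why ondemand feels responsive but is not the most battery-friendly governor.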

2) Lulzactive
This new find from Tegrak is based on interactive & smartass governors and is one of our favorites.
Old version: when the workload is greater than or equal to 60%, the governor scales the CPU up to the next higher step; when the workload is less than 60%, it scales the CPU down to the next lower step. When the screen is off, the frequency is locked to the global scaling minimum frequency.
New version: adds three user-configurable parameters: inc_cpu_load, pump_up_step, pump_down_step. Unlike the older version, this one gives the user more control. We can set the threshold at which the governor decides to scale up/down, and also the number of frequency steps to be skipped while polling up and down.
When the workload is greater than or equal to inc_cpu_load, the governor scales the CPU up pump_up_step steps. When the workload is less than inc_cpu_load, the governor scales the CPU down pump_down_step steps.
Example:
Consider
inc_cpu_load=70
pump_up_step=2
pump_down_step=1
If the current frequency is 200, then every up_sampling_time µs, if CPU load >= 70%, the CPU is scaled up 2 steps - to 800.
If the current frequency is 1200, then every down_sampling_time µs, if CPU load < 70%, the CPU is scaled down 1 step - to 1000.
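The worked example above can be sketched directly. This is an illustration of the rule as described, not Tegrak's actual code; the frequency table is an assumed example of a Galaxy SII-style step table.

```python
# Sketch of the new-version lulzactive rule, using the example values
# from the text. The frequency table is an assumed example.

FREQS = [200, 500, 800, 1000, 1200]  # MHz steps, lowest to highest
inc_cpu_load, pump_up_step, pump_down_step = 70, 2, 1

def lulzactive_next(current: int, load: int) -> int:
    idx = FREQS.index(current)
    if load >= inc_cpu_load:
        idx = min(idx + pump_up_step, len(FREQS) - 1)  # up pump_up_step steps
    else:
        idx = max(idx - pump_down_step, 0)             # down pump_down_step steps
    return FREQS[idx]

print(lulzactive_next(200, 75))   # load >= 70%: up 2 steps -> 800
print(lulzactive_next(1200, 40))  # load < 70%: down 1 step -> 1000
```

Raising pump_up_step makes the governor more aggressive under load; raising pump_down_step makes it drop frequency faster when idle.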

3) Performance
Locks the CPU at full speed by setting the minimum frequency equal to the maximum frequency. Use this while benchmarking!

4) Lagfree
Lagfree is similar to ondemand. The main difference is its optimization to be more battery friendly: frequency is decreased and increased gracefully, unlike ondemand, which jumps to 100% too often. Lagfree does not skip any frequency step while scaling up or down. Remember that if there is a requirement for a sudden burst of power, lagfree cannot satisfy it, since it has to raise the CPU through each higher frequency step from the current one. Some users report that video playback using lagfree stutters a little.

5) Conservative
A slower ondemand which scales up gradually to save battery. Simply put, this governor increases the frequency step by step on CPU load and jumps to the lowest frequency on CPU idle.
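Following the description above, conservative is roughly the mirror image of ondemand: step up under load, jump straight to the minimum when idle. A minimal sketch, with an assumed example frequency table and an assumed 80% up threshold (not the governor's real tunables):

```python
# Sketch of the conservative rule as described in the text: one step up
# under load, jump to the lowest frequency on idle. Values are invented.

FREQS = [200, 500, 800, 1000, 1200]  # MHz steps, lowest to highest
UP_THRESHOLD = 80                    # assumed example threshold

def conservative_next(current: int, load: int) -> int:
    idx = FREQS.index(current)
    if load >= UP_THRESHOLD:
        return FREQS[min(idx + 1, len(FREQS) - 1)]  # climb one step
    return FREQS[0]                                 # jump to lowest on idle
```

Compare with the ondemand sketch: the jump and the step have simply swapped directions, which is why conservative saves battery at the cost of responsiveness.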

6) Lazy
This governor from Ezekeel is basically an ondemand with an additional parameter, min_time_state, to specify the minimum time the CPU stays on a frequency before scaling up/down. The idea here is to eliminate any instabilities caused by ondemand's fast frequency switching. The lazy governor polls more often than ondemand, but changes frequency only after completing min_time_state on a step. Lazy also has a screenoff_maxfreq parameter which can be configured to specify the screen-off maximum frequency.
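The min_time_state idea can be sketched as a dwell-time check in front of an ondemand-style decision. This is an illustration only: dwell is measured here in polling ticks, and the 3-tick minimum and 80% threshold are assumed example values, not Ezekeel's defaults.

```python
# Sketch of lazy's min_time_state idea: scaling decisions are ignored
# until the CPU has spent at least min_time_state on its current step;
# after that it behaves like ondemand. Values are invented examples.

FREQS = [200, 500, 800, 1000, 1200]  # MHz steps, lowest to highest
MIN_TIME_STATE = 3                   # ticks to hold a step before any change

def lazy_next(current: int, load: int, ticks_on_step: int) -> int:
    if ticks_on_step < MIN_TIME_STATE:
        return current               # too soon: hold the current frequency
    idx = FREQS.index(current)
    if load >= 80:
        return FREQS[-1]             # then jump to max, ondemand-style
    return FREQS[max(idx - 1, 0)]    # or step down on idle

print(lazy_next(200, 95, 1))  # held at 200: dwell time not yet reached
print(lazy_next(200, 95, 3))  # dwell satisfied: jumps to 1200
```

The dwell check is what damps the rapid up/down oscillation that plain ondemand can exhibit.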

I/O Schedulers Guide:
Q. "What purposes does an i/o scheduler serve?"
A.
  1. Minimize hard disk seek latency.
  2. Prioritize I/O requests from processes.
  3. Allocate disk bandwidth for running processes.
  4. Guarantee that certain requests will be served before a deadline.

So in the simplest possible terms: the kernel controls disk access using an I/O scheduler.

Q. "What goals does every I/O scheduler try to balance?"
A.
  1. Fairness (let every process have its share of the access to disk)
  2. Performance (try to serve requests close to current disk head position first, because seeking there is fastest)
  3. Real-time (guarantee that a request is serviced in a given time)

Q. "Description, advantages, disadvantages of each I/O Scheduler?"
A.

1) Noop

Inserts all incoming I/O requests into a First In, First Out queue and implements request merging. Best used with storage devices that do not depend on mechanical movement to access data (yes, like our flash storage). The advantage here is that flash storage does not require reordering of multiple I/O requests, unlike normal hard drives.

Advantages:
  • Serves I/O requests with least number of cpu cycles. (Battery friendly?)
  • Best for flash drives since there is no seeking penalty.
  • Good throughput on db systems.

Disadvantages:
  • The reduction in the number of CPU cycles used comes with a proportional drop in performance.
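"FIFO queue with request merging" can be sketched in a few lines: a new request is folded into the previous one only when it is contiguous, otherwise it simply joins the back of the queue. Requests are modeled here as hypothetical (start_sector, length) pairs.

```python
# Sketch of noop's behavior: requests stay in FIFO order, and a new
# request is merged into the previous one only when it is contiguous.
# Requests are modeled as (start_sector, length) tuples.

def noop_enqueue(queue: list, start: int, length: int) -> None:
    if queue:
        prev_start, prev_len = queue[-1]
        if prev_start + prev_len == start:          # contiguous: merge
            queue[-1] = (prev_start, prev_len + length)
            return
    queue.append((start, length))                   # otherwise plain FIFO append

q = []
noop_enqueue(q, 0, 8)
noop_enqueue(q, 8, 8)    # contiguous with the previous request: merged
noop_enqueue(q, 100, 4)  # not contiguous: new queue entry
print(q)                 # [(0, 16), (100, 4)]
```

Because nothing is sorted or reordered, the scheduler does almost no work per request, which is where the CPU-cycle saving comes from.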

2) BFQ
Instead of the time slices allocated by CFQ, BFQ assigns budgets. The disk is granted to an active process until its budget (a number of sectors) expires. BFQ assigns high budgets to non-read tasks. The budget assigned to a process varies over time as a function of its behavior.

Advantages:
  • Believed to be very good for usb data transfer rate.
  • Believed to be the best scheduler for HD video recording and video streaming (because of less jitter as compared to CFQ and others)
  • Considered an accurate i/o scheduler.
  • Achieves about 30% more throughput than CFQ on most workloads.
Disadvantages:
  • Not the best scheduler for benchmarking.
  • A higher budget assigned to a process can hurt interactivity and increase latency.
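The budget idea above can be sketched as: one process keeps dispatching requests until its sector budget runs out, then yields the disk. This is a simplification for illustration; the budget size and request sizes are invented, and real BFQ recomputes budgets dynamically per process.

```python
# Sketch of BFQ's budget idea: a process is served until its budget
# (measured in sectors) expires. All numbers are invented examples.

def bfq_dispatch(process_requests: list, budget: int):
    """Serve one process's requests (sector counts) until the budget expires."""
    served = []
    remaining = budget
    for sectors in process_requests:
        if sectors > remaining:
            break                    # budget expired: yield the disk
        served.append(sectors)
        remaining -= sectors
    return served, remaining

served, left = bfq_dispatch([16, 16, 32, 64], budget=48)
print(served, left)  # [16, 16] 16 -- the 32-sector request exceeds what's left
```

This shows both sides of the trade-off in the lists above: a large budget gives one process long, efficient runs (good throughput), but other processes wait longer (worse latency).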

3) SIO
The Simple I/O scheduler aims to keep overhead to a minimum in order to serve I/O requests with low latency. There is no concept of priority queues, only basic merging. Sio is a mix between noop and deadline: no reordering or sorting of requests.

Advantages:
  • Simple, so reliable.
  • Minimized starvation of requests.

Disadvantages:
  • Slow random-read speeds on flash drives, compared to other schedulers.
  • Sequential-read speeds on flash drives also not so good.
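The "mix between noop and deadline" can be sketched as FIFO service plus a deadline check that prevents starvation. This is a simplified model for illustration only: real sio keeps separate sync/async deadlines, and the two-queue split and tick-based deadlines here are assumptions.

```python
# Simplified sketch of sio's noop/deadline mix: requests are served in
# FIFO order, but each carries a deadline so none starves indefinitely.
# Requests are (name, deadline_tick) pairs; the model is invented.

from collections import deque

def sio_pick(read_q: deque, write_q: deque, now: int):
    """Prefer reads, but serve a write first if its deadline has passed."""
    if write_q and write_q[0][1] <= now:
        return write_q.popleft()     # expired write: serve it to avoid starvation
    if read_q:
        return read_q.popleft()      # otherwise plain FIFO, reads first
    if write_q:
        return write_q.popleft()
    return None

reads = deque([("r1", 10)])
writes = deque([("w1", 2)])
print(sio_pick(reads, writes, now=5))  # ('w1', 2): the write's deadline expired
```

The deadline check is the only extra work over noop, which is why sio stays simple yet minimizes request starvation.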

4) VR

Unlike other schedulers, synchronous and asynchronous requests are not treated separately; instead, a deadline is imposed for fairness. The next request to be served is chosen based on its distance from the last request.

Advantages:
  • May be the best for benchmarking, because at the peak of its 'form' VR performs best.

Disadvantages:
  • Performance fluctuation results in below-average performance at times.
  • Least reliable/most unstable.
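The "distance from the last request" rule can be sketched as a nearest-position selection with a deadline override. This is an illustration under assumptions: the position/deadline fields and tick units are invented, and real VR's deadline handling is more involved.

```python
# Sketch of VR's selection rule as described in the text: serve the
# request closest to the last-served position, unless some request's
# deadline has already expired. All values are invented examples.

def vr_pick(requests: list, last_pos: int, now: int) -> dict:
    expired = [r for r in requests if r["deadline"] <= now]
    pool = expired if expired else requests      # deadline override for fairness
    best = min(pool, key=lambda r: abs(r["pos"] - last_pos))
    requests.remove(best)
    return best

reqs = [{"pos": 50, "deadline": 100}, {"pos": 12, "deadline": 100}]
print(vr_pick(reqs, last_pos=10, now=0)["pos"])  # 12: nearest to last position
```

Chasing the nearest request is what gives VR its peak throughput, and also why its performance fluctuates when the request pattern scatters.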


Just collecting notes and sharing.
Please let me know if there are wrong statements in this article, or if it is a copy-paste without the copyright stated.
