
How Android Uses PSI to Manage Memory

2024/06/25

Android originally relied on the in-kernel lowmemorykiller driver to monitor system memory: when memory ran low, it would proactively kill non-critical processes or apps to relieve memory pressure. The lowmemorykiller driver was removed from the kernel in version 4.12, so Android introduced lmkd (Low Memory Killer Daemon) as a userspace replacement that monitors the system's memory state and, under heavy memory pressure, frees up memory to keep the watermarks at an acceptable level. So how does LMKD learn about the system's memory pressure? That is where the PSI (Pressure Stall Information) kernel module comes in.

Starting with Android 10, LMKD uses PSI (Pressure Stall Information) to detect memory pressure. In short, PSI measures the delays that tasks experience because memory is scarce; these delays represent the system's memory pressure, and PSI exposes interfaces that let userspace processes read this state. Earlier Android versions used the vmpressure mechanism to obtain memory pressure information instead.

In this post we will look at how Android uses PSI to manage memory and how it frees memory when memory pressure builds up.

PSI overview

PSI (Pressure Stall Information) is a kernel facility that tracks pressure on CPU, memory, and IO resources, giving a measure of the overall resource health of the system. When the system is under resource pressure (tasks are delayed because CPU, memory, or IO is scarce), overall throughput drops. Through the PSI interfaces we can read the pressure state of each resource and tune the corresponding resource management policy to improve system efficiency.

The PSI state can be read from the files under /proc/pressure; there are three interfaces:

  • /proc/pressure/cpu: system CPU pressure
  • /proc/pressure/memory: system memory pressure
  • /proc/pressure/io: system IO pressure

All three files share the same output format. The first line, some, means that at least one task is currently stalled on the resource; the second line, full, means that all non-idle tasks are stalled at the same time. avg10, avg60, and avg300 are the percentages of stalled time averaged over the last 10 s, 60 s, and 300 s respectively, and total is the accumulated stall time (in microseconds).

For CPU, the full state does not exist at the system level, because there is always at least one runnable task.


some avg10=0.77 avg60=0.23 avg300=0.14 total=117592248
full avg10=0.00 avg60=0.00 avg300=0.00 total=0
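
As a quick illustration (not part of lmkd or the kernel), the following minimal C++ sketch reads /proc/pressure/memory and parses each line; the field layout matches the sample above, and error handling is kept to a minimum:

// psi_read.cpp - minimal sketch: read and parse /proc/pressure/memory
#include <cstdio>
#include <fstream>
#include <string>

int main() {
    std::ifstream psi("/proc/pressure/memory");
    if (!psi) {
        std::fprintf(stderr, "PSI not available (CONFIG_PSI disabled?)\n");
        return 1;
    }

    std::string line;
    while (std::getline(psi, line)) {
        char type[8];
        float avg10, avg60, avg300;
        unsigned long long total;
        // Each line looks like: "some avg10=0.77 avg60=0.23 avg300=0.14 total=117592248"
        if (std::sscanf(line.c_str(), "%7s avg10=%f avg60=%f avg300=%f total=%llu",
                        type, &avg10, &avg60, &avg300, &total) == 5) {
            std::printf("%s: avg10=%.2f%% avg60=%.2f%% avg300=%.2f%% total=%llu us\n",
                        type, avg10, avg60, avg300, total);
        }
    }
    return 0;
}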

Take the figure below as an example: two tasks run over a 60 s window, and memory pressure causes task B to stall for 30 s while task A runs without any delay. The some value is therefore 50% (30 s / 60 s). In that sense, some reflects the delay that resource pressure imposes on the system.

[figure: "some" stall example]

Similarly, in the next figure both tasks stall because of memory pressure: A stalls for 10 s and B for 30 s. some is still 50%, while full is 16.67% (the 10 s during which both tasks are stalled at the same time, out of the 60 s window). A high full value indicates throughput lost to resource pressure.

[figure: "full" stall example]

Enabling PSI requires the kernel config options CONFIG_PSI, CONFIG_PSI_LMKD and CONFIG_PSI_SYSCALL, which are enabled by default.

The kernel notifies the PSI module whenever a task switch happens or a memory allocation triggers reclaim, so that the current resource pressure can be accounted for. The corresponding kernel interfaces can be found in linux/psi.h; the main ones are:

  • psi_init: initialize the PSI module
  • psi_task_change: notify PSI when a task's state changes, updating the statistics
  • psi_task_switch: notify PSI on a task switch, updating the statistics
  • psi_memstall_tick: notify PSI on a timer tick, updating the memory stall statistics
  • psi_memstall_enter: notify PSI when a memory stall begins, updating the memory stall statistics
  • psi_memstall_leave: notify PSI when the memory stall ends, updating the memory stall statistics

Taking memory as an example, the kernel calls psi_memstall_enter/psi_memstall_leave on the major memory paths, such as reclaim and compaction, to account for memory stalls. For instance, if fragmentation forces the allocation path to run memory compaction, the PSI module is informed so that the delay caused by compaction is counted:


// page_alloc.c

/* Try memory compaction for high-order allocations before reclaim */
static struct page *
__alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
        unsigned int alloc_flags, const struct alloc_context *ac,
        enum compact_priority prio, enum compact_result *compact_result)
{
    struct page *page = NULL;
    unsigned long pflags;
    unsigned int noreclaim_flag;

    if (!order)
        return NULL;

    // compaction starts: begin memory-stall accounting
    psi_memstall_enter(&pflags);
    noreclaim_flag = memalloc_noreclaim_save();

    *compact_result = try_to_compact_pages(gfp_mask, order, alloc_flags, ac,
                                           prio, &page);

    memalloc_noreclaim_restore(noreclaim_flag);

    // compaction done: end memory-stall accounting
    psi_memstall_leave(&pflags);

    /*
     * At least in one zone compaction wasn't deferred or skipped, so let's
     * count a compaction stall
     */
    count_vm_event(COMPACTSTALL);

    /* Prep a captured page if available */
    if (page)
        prep_new_page(page, order, gfp_mask, alloc_flags);

    /* Try get a page from the freelist if available */
    if (!page)
        page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);

    if (page) {
        struct zone *zone = page_zone(page);

        zone->compact_blockskip_flush = false;
        compaction_defer_reset(zone, order, true);
        count_vm_event(COMPACTSUCCESS);
        return page;
    }

    /*
     * It's bad if compaction run occurs and fails. The most likely reason
     * is that pages exist, but not enough to satisfy watermarks.
     */
    count_vm_event(COMPACTFAIL);

    cond_resched();

    return NULL;
}


For more technical details, see linux/stats.h (CPU scheduling) and the code under kernel/mm (memory management). Next, let's look at how Android uses PSI to manage system memory and how it reclaims memory when memory runs low.

How LMKD uses PSI to monitor memory

The figure below shows the architecture of the LMKD memory management service. LMKD exists to reclaim system memory in low-memory situations: once PSI reports that memory pressure has crossed a certain watermark, LMKD picks a low-priority process to kill (based on the process's oom_score_adj value), freeing memory so that the system returns to a healthy watermark.

[figure: lmkd PSI architecture]
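
To make the victim-selection idea concrete, here is a small illustrative C++ sketch (not lmkd's actual implementation) that scans /proc and reads each process's /proc/<pid>/oom_score_adj, the value lmkd uses to rank kill candidates; lmkd's real process registration and kill policy are considerably more involved:

// oom_adj_scan.cpp - illustrative only: list processes ordered by oom_score_adj
#include <dirent.h>
#include <cctype>
#include <cstdio>
#include <cstdlib>
#include <fstream>
#include <functional>
#include <map>
#include <string>

int main() {
    // Higher oom_score_adj (up to 1000) means the process is a better kill candidate.
    std::multimap<int, int, std::greater<int>> candidates;  // adj -> pid

    DIR* proc = opendir("/proc");
    if (!proc) return 1;

    while (struct dirent* de = readdir(proc)) {
        if (!isdigit(static_cast<unsigned char>(de->d_name[0]))) continue;
        int pid = atoi(de->d_name);

        std::ifstream adj_file("/proc/" + std::string(de->d_name) + "/oom_score_adj");
        int adj;
        if (adj_file >> adj) {
            candidates.emplace(adj, pid);
        }
    }
    closedir(proc);

    // The most expendable processes (largest adj) come first.
    for (const auto& [adj, pid] : candidates) {
        std::printf("pid=%d oom_score_adj=%d\n", pid, adj);
    }
    return 0;
}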

When the LMKD low-memory management service starts, it registers for kernel events so it can listen for PSI memory pressure state (PSI is the default mechanism since Android 10):


static bool init_monitors() {
    /* Try to use psi monitor first if kernel has it */
    use_psi_monitors = GET_LMK_PROPERTY(bool, "use_psi", true) &&
        init_psi_monitors();
    /* Fall back to vmpressure */
    if (!use_psi_monitors &&
        (!init_mp_common(VMPRESS_LEVEL_LOW) ||
         !init_mp_common(VMPRESS_LEVEL_MEDIUM) ||
         !init_mp_common(VMPRESS_LEVEL_CRITICAL))) {
        ALOGE("Kernel does not support memory pressure events or in-kernel low memory killer");
        return false;
    }
    if (use_psi_monitors) {
        ALOGI("Using psi monitors for memory pressure detection");
    } else {
        ALOGI("Using vmpressure for memory pressure detection");
    }
    return true;
}

The PSI monitors are initialized according to memory pressure thresholds. There are three pressure levels, low, medium and critical; with the new strategy only VMPRESS_LEVEL_MEDIUM and VMPRESS_LEVEL_CRITICAL are actually used, corresponding to the PSI some and full states respectively.
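
For reference, the threshold table in lmkd.cpp looks roughly like the sketch below (the exact default values may differ between Android releases); each level pairs a PSI stall type with a stall threshold in milliseconds over the 1-second trigger window:

// Sketch of the per-level PSI thresholds in lmkd.cpp (values are illustrative).
// Index order: VMPRESS_LEVEL_LOW, VMPRESS_LEVEL_MEDIUM, VMPRESS_LEVEL_CRITICAL.
struct psi_threshold {
    enum psi_stall_type stall_type;  // PSI_SOME or PSI_FULL
    int threshold_ms;                // stall time within PSI_WINDOW_SIZE_MS (1 s) that fires the trigger
};

static struct psi_threshold psi_thresholds[VMPRESS_LEVEL_COUNT] = {
    { PSI_SOME, 70 },    // low: 70 ms of partial stall
    { PSI_SOME, 100 },   // medium: 100 ms of partial stall -> "some"
    { PSI_FULL, 70 },    // critical: 70 ms of complete stall -> "full"
};

With that table in mind, init_psi_monitors below overrides the thresholds when the new strategy is in use and registers a monitor for each level: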


static bool init_psi_monitors() {
    /*
     * When PSI is used on low-ram devices or on high-end devices without memfree levels
     * use new kill strategy based on zone watermarks, free swap and thrashing stats
     */
    bool use_new_strategy =
        GET_LMK_PROPERTY(bool, "use_new_strategy", low_ram_device || !use_minfree_levels);

    /* In default PSI mode override stall amounts using system properties */
    if (use_new_strategy) {
        /* Do not use low pressure level */
        psi_thresholds[VMPRESS_LEVEL_LOW].threshold_ms = 0;
        psi_thresholds[VMPRESS_LEVEL_MEDIUM].threshold_ms = psi_partial_stall_ms;
        psi_thresholds[VMPRESS_LEVEL_CRITICAL].threshold_ms = psi_complete_stall_ms;
    }

    if (!init_mp_psi(VMPRESS_LEVEL_LOW, use_new_strategy)) {
        return false;
    }
    if (!init_mp_psi(VMPRESS_LEVEL_MEDIUM, use_new_strategy)) {
        destroy_mp_psi(VMPRESS_LEVEL_LOW);
        return false;
    }
    if (!init_mp_psi(VMPRESS_LEVEL_CRITICAL, use_new_strategy)) {
        destroy_mp_psi(VMPRESS_LEVEL_MEDIUM);
        destroy_mp_psi(VMPRESS_LEVEL_LOW);
        return false;
    }
    return true;
}

init_mp_psi then sets up monitoring of the memory pressure state through the PSI interface (which can be watched with the usual poll/epoll mechanisms) and registers a callback; whenever the state changes, mp_event_psi is invoked:



static bool init_mp_psi(enum vmpressure_level level, bool use_new_strategy) {
    int fd;

    /* Do not register a handler if threshold_ms is not set */
    if (!psi_thresholds[level].threshold_ms) {
        return true;
    }

    fd = init_psi_monitor(psi_thresholds[level].stall_type,
        psi_thresholds[level].threshold_ms * US_PER_MS,
        PSI_WINDOW_SIZE_MS * US_PER_MS);

    if (fd < 0) {
        return false;
    }

    vmpressure_hinfo[level].handler = use_new_strategy ? mp_event_psi : mp_event_common;
    vmpressure_hinfo[level].data = level;
    if (register_psi_monitor(epollfd, fd, &vmpressure_hinfo[level]) < 0) {
        destroy_psi_monitor(fd);
        return false;
    }
    maxevents++;
    mpevfd[level] = fd;

    return true;
}
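
register_psi_monitor itself is essentially a thin wrapper around epoll_ctl; a sketch of what it does is shown below (the real implementation lives in lmkd's libpsi and may differ in details), with EPOLLPRI being the event type that PSI triggers deliver:

// Sketch of register_psi_monitor: add the trigger fd to lmkd's epoll set.
#include <sys/epoll.h>

int register_psi_monitor(int epollfd, int fd, void* data) {
    struct epoll_event epev;

    epev.events = EPOLLPRI;  // PSI delivers trigger notifications as POLLPRI/EPOLLPRI
    epev.data.ptr = data;    // per-level handler info, read back when the event fires

    // lmkd logs an error with ALOGE if this call fails
    return epoll_ctl(epollfd, EPOLL_CTL_ADD, fd, &epev);
}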


The core function for registering with the PSI interface is init_psi_monitor: it opens /proc/pressure/memory, writes a trigger string in the standard format to set the stall threshold and the monitoring time window, and returns the file descriptor.


/* trigger format: <some|full> <stall amount in us> <time window in us> */

int init_psi_monitor(enum psi_stall_type stall_type,
                     int threshold_us, int window_us) {
    int fd;
    int res;
    char buf[256];

    fd = TEMP_FAILURE_RETRY(open(PSI_PATH_MEMORY, O_WRONLY | O_CLOEXEC));
    if (fd < 0) {
        ALOGE("No kernel psi monitor support (errno=%d)", errno);
        return -1;
    }

    switch (stall_type) {
    case (PSI_SOME):
    case (PSI_FULL):
        res = snprintf(buf, sizeof(buf), "%s %d %d",
            stall_type_name[stall_type], threshold_us, window_us);
        break;
    default:
        ALOGE("Invalid psi stall type: %d", stall_type);
        errno = EINVAL;
        goto err;
    }

    if (res >= (ssize_t)sizeof(buf)) {
        ALOGE("%s line overflow for psi stall type '%s'",
            PSI_PATH_MEMORY, stall_type_name[stall_type]);
        errno = EINVAL;
        goto err;
    }

    res = TEMP_FAILURE_RETRY(write(fd, buf, strlen(buf) + 1));
    if (res < 0) {
        ALOGE("%s write failed for psi stall type '%s'; errno=%d",
            PSI_PATH_MEMORY, stall_type_name[stall_type], errno);
        goto err;
    }

    return fd;

err:
    close(fd);
    return -1;
}
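
Outside of lmkd, the same trigger protocol can be exercised with a small standalone monitor, along the lines of the example in the kernel's PSI documentation; the threshold and window used below (150 ms of some stall within a 1 s window) are arbitrary values chosen for illustration:

// psi_monitor.cpp - minimal PSI trigger monitor (sketch, based on the kernel PSI docs)
#include <cerrno>
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <poll.h>
#include <unistd.h>

int main() {
    const char trig[] = "some 150000 1000000";  // 150 ms stall threshold in a 1 s window (us)
    struct pollfd fds;

    fds.fd = open("/proc/pressure/memory", O_RDWR | O_NONBLOCK);
    if (fds.fd < 0) {
        std::fprintf(stderr, "/proc/pressure/memory open error: %s\n", std::strerror(errno));
        return 1;
    }
    if (write(fds.fd, trig, std::strlen(trig) + 1) < 0) {
        std::fprintf(stderr, "trigger write error: %s\n", std::strerror(errno));
        return 1;
    }
    fds.events = POLLPRI;

    while (true) {
        int n = poll(&fds, 1, -1);
        if (n < 0) {
            std::fprintf(stderr, "poll error: %s\n", std::strerror(errno));
            return 1;
        }
        if (fds.revents & POLLERR) {
            std::fprintf(stderr, "got POLLERR, monitor is no longer valid\n");
            return 1;
        }
        if (fds.revents & POLLPRI) {
            std::printf("memory pressure event triggered\n");
        }
    }
}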

Once the kernel fires a memory pressure event, it is handled by the registered callback mp_event_psi: the callback first checks the configured memory watermarks to decide whether the system is in a low-memory state (free memory is low or swap/zram is running out); if so, it looks for low-priority background processes and kills one of them to free some memory.

The memory watermarks are derived from the per-zone settings in /proc/zoneinfo. There are three of them, WMARK_MIN, WMARK_LOW and WMARK_HIGH, each representing a memory pressure threshold; the lower the watermark that is breached, the higher the memory pressure.
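
lmkd refreshes these watermarks by parsing /proc/zoneinfo (see calc_zone_watermarks in the code below). As a simplified illustration, not lmkd's actual parser, the following sketch sums the per-zone min/low/high page counts across all zones:

// zone_watermarks.cpp - simplified sketch: sum per-zone min/low/high from /proc/zoneinfo
#include <cstdio>
#include <fstream>
#include <sstream>
#include <string>

struct ZoneWatermarks {
    long min_pages = 0;
    long low_pages = 0;
    long high_pages = 0;
};

int main() {
    std::ifstream zoneinfo("/proc/zoneinfo");
    ZoneWatermarks wm;
    std::string line;

    while (std::getline(zoneinfo, line)) {
        std::istringstream iss(line);
        std::string key;
        long pages;
        if (iss >> key >> pages) {
            // Per zone, /proc/zoneinfo contains lines like "min 11000", "low 13750", "high 16500".
            if (key == "min") wm.min_pages += pages;
            else if (key == "low") wm.low_pages += pages;
            else if (key == "high") wm.high_pages += pages;
        }
    }

    std::printf("min=%ld low=%ld high=%ld (pages, summed over all zones)\n",
                wm.min_pages, wm.low_pages, wm.high_pages);
    return 0;
}

The full decision logic lives in mp_event_psi: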


static void mp_event_psi(int data, uint32_t events, struct polling_params *poll_params) {
    enum reclaim_state {
        NO_RECLAIM = 0,
        KSWAPD_RECLAIM,
        DIRECT_RECLAIM,
    };
    ...
    if (clock_gettime(CLOCK_MONOTONIC_COARSE, &curr_tm) != 0) {
        ALOGE("Failed to get current time");
        return;
    }

    record_wakeup_time(&curr_tm, events ? Event : Polling, &wi);

    bool kill_pending = is_kill_pending();
    if (kill_pending && (kill_timeout_ms == 0 ||
        get_time_diff_ms(&last_kill_tm, &curr_tm) < static_cast<long>(kill_timeout_ms))) {
        /* Skip while still killing a process */
        wi.skipped_wakeups++;
        goto no_kill;
    }
    /*
     * Process is dead or kill timeout is over, stop waiting. This has no effect if pidfds are
     * supported and death notification already caused waiting to stop.
     */
    stop_wait_for_proc_kill(!kill_pending);

    if (vmstat_parse(&vs) < 0) {
        ALOGE("Failed to parse vmstat!");
        return;
    }
    /* Starting 5.9 kernel workingset_refault vmstat field was renamed workingset_refault_file */
    workingset_refault_file = vs.field.workingset_refault ? : vs.field.workingset_refault_file;

    if (meminfo_parse(&mi) < 0) {
        ALOGE("Failed to parse meminfo!");
        return;
    }

    /* Reset states after process got killed */
    if (killing) {
        killing = false;
        cycle_after_kill = true;
        /* Reset file-backed pagecache size and refault amounts after a kill */
        base_file_lru = vs.field.nr_inactive_file + vs.field.nr_active_file;
        init_ws_refault = workingset_refault_file;
        thrashing_reset_tm = curr_tm;
        prev_thrash_growth = 0;
    }

    /* Check free swap levels */
    if (swap_free_low_percentage) {
        if (!swap_low_threshold) {
            swap_low_threshold = mi.field.total_swap * swap_free_low_percentage / 100;
        }
        swap_is_low = mi.field.free_swap < swap_low_threshold;
    }

    /* Identify reclaim state */
    if (vs.field.pgscan_direct > init_pgscan_direct) {
        init_pgscan_direct = vs.field.pgscan_direct;
        init_pgscan_kswapd = vs.field.pgscan_kswapd;
        reclaim = DIRECT_RECLAIM;
    } else if (vs.field.pgscan_kswapd > init_pgscan_kswapd) {
        init_pgscan_kswapd = vs.field.pgscan_kswapd;
        reclaim = KSWAPD_RECLAIM;
    } else if (workingset_refault_file == prev_workingset_refault) {
        /*
         * Device is not thrashing and not reclaiming, bail out early until we see these stats
         * changing
         */
        goto no_kill;
    }

    prev_workingset_refault = workingset_refault_file;

    /*
     * It's possible we fail to find an eligible process to kill (ex. no process is
     * above oom_adj_min). When this happens, we should retry to find a new process
     * for a kill whenever a new eligible process is available. This is especially
     * important for a slow growing refault case. While retrying, we should keep
     * monitoring new thrashing counter as someone could release the memory to mitigate
     * the thrashing. Thus, when thrashing reset window comes, we decay the prev thrashing
     * counter by window counts. If the counter is still greater than thrashing limit,
     * we preserve the current prev_thrash counter so we will retry kill again. Otherwise,
     * we reset the prev_thrash counter so we will stop retrying.
     */
    since_thrashing_reset_ms = get_time_diff_ms(&thrashing_reset_tm, &curr_tm);
    if (since_thrashing_reset_ms > THRASHING_RESET_INTERVAL_MS) {
        long windows_passed;
        /* Calculate prev_thrash_growth if we crossed THRASHING_RESET_INTERVAL_MS */
        prev_thrash_growth = (workingset_refault_file - init_ws_refault) * 100
            / (base_file_lru + 1);
        windows_passed = (since_thrashing_reset_ms / THRASHING_RESET_INTERVAL_MS);
        /*
         * Decay prev_thrashing unless over-the-limit thrashing was registered in the window we
         * just crossed, which means there were no eligible processes to kill. We preserve the
         * counter in that case to ensure a kill if a new eligible process appears.
         */
        if (windows_passed > 1 || prev_thrash_growth < thrashing_limit) {
            prev_thrash_growth >>= windows_passed;
        }

        /* Record file-backed pagecache size when crossing THRASHING_RESET_INTERVAL_MS */
        base_file_lru = vs.field.nr_inactive_file + vs.field.nr_active_file;
        init_ws_refault = workingset_refault_file;
        thrashing_reset_tm = curr_tm;
        thrashing_limit = thrashing_limit_pct;
    } else {
        /* Calculate what % of the file-backed pagecache refaulted so far */
        thrashing = (workingset_refault_file - init_ws_refault) * 100 / (base_file_lru + 1);
    }
    /* Add previous cycle's decayed thrashing amount */
    thrashing += prev_thrash_growth;
    if (max_thrashing < thrashing) {
        max_thrashing = thrashing;
    }

    /*
     * Refresh watermarks once per min in case user updated one of the margins.
     * TODO: b/140521024 replace this periodic update with an API for AMS to notify LMKD
     * that zone watermarks were changed by the system software.
     */
    if (watermarks.high_wmark == 0 || get_time_diff_ms(&wmark_update_tm, &curr_tm) > 60000) {
        struct zoneinfo zi;

        if (zoneinfo_parse(&zi) < 0) {
            ALOGE("Failed to parse zoneinfo!");
            return;
        }

        calc_zone_watermarks(&zi, &watermarks);
        wmark_update_tm = curr_tm;
    }

    /* Find out which watermark is breached if any */
    wmark = get_lowest_watermark(&mi, &watermarks);

    if (!psi_parse_mem(&psi_data)) {
        critical_stall = psi_data.mem_stats[PSI_FULL].avg10 > (float)stall_limit_critical;
    }
    /*
     * TODO: move this logic into a separate function
     * Decide if killing a process is necessary and record the reason
     */
    if (cycle_after_kill && wmark < WMARK_LOW) {
        /*
         * Prevent kills not freeing enough memory which might lead to OOM kill.
         * This might happen when a process is consuming memory faster than reclaim can
         * free even after a kill. Mostly happens when running memory stress tests.
         */
        kill_reason = PRESSURE_AFTER_KILL;
        strncpy(kill_desc, "min watermark is breached even after kill", sizeof(kill_desc));
    } else if (level == VMPRESS_LEVEL_CRITICAL && events != 0) {
        /*
         * Device is too busy reclaiming memory which might lead to ANR.
         * Critical level is triggered when PSI complete stall (all tasks are blocked because
         * of the memory congestion) breaches the configured threshold.
         */
        kill_reason = NOT_RESPONDING;
        strncpy(kill_desc, "device is not responding", sizeof(kill_desc));
    } else if (swap_is_low && thrashing > thrashing_limit_pct) {
        /* Page cache is thrashing while swap is low */
        kill_reason = LOW_SWAP_AND_THRASHING;
        snprintf(kill_desc, sizeof(kill_desc), "device is low on swap (%" PRId64
            "kB < %" PRId64 "kB) and thrashing (%" PRId64 "%%)",
            mi.field.free_swap * page_k, swap_low_threshold * page_k, thrashing);
        /* Do not kill perceptible apps unless below min watermark or heavily thrashing */
        if (wmark > WMARK_MIN && thrashing < thrashing_critical_pct) {
            min_score_adj = PERCEPTIBLE_APP_ADJ + 1;
        }
        check_filecache = true;
    } else if (swap_is_low && wmark < WMARK_HIGH) {
        /* Both free memory and swap are low */
        kill_reason = LOW_MEM_AND_SWAP;
        snprintf(kill_desc, sizeof(kill_desc), "%s watermark is breached and swap is low (%"
            PRId64 "kB < %" PRId64 "kB)", wmark < WMARK_LOW ? "min" : "low",
            mi.field.free_swap * page_k, swap_low_threshold * page_k);
        /* Do not kill perceptible apps unless below min watermark or heavily thrashing */
        if (wmark > WMARK_MIN && thrashing < thrashing_critical_pct) {
            min_score_adj = PERCEPTIBLE_APP_ADJ + 1;
        }
    } else if (wmark < WMARK_HIGH && swap_util_max < 100 &&
               (swap_util = calc_swap_utilization(&mi)) > swap_util_max) {
        /*
         * Too much anon memory is swapped out but swap is not low.
         * Non-swappable allocations created memory pressure.
         */
        kill_reason = LOW_MEM_AND_SWAP_UTIL;
        snprintf(kill_desc, sizeof(kill_desc), "%s watermark is breached and swap utilization"
            " is high (%d%% > %d%%)", wmark < WMARK_LOW ? "min" : "low",
            swap_util, swap_util_max);
    } else if (wmark < WMARK_HIGH && thrashing > thrashing_limit) {
        /* Page cache is thrashing while memory is low */
        kill_reason = LOW_MEM_AND_THRASHING;
        snprintf(kill_desc, sizeof(kill_desc), "%s watermark is breached and thrashing (%"
            PRId64 "%%)", wmark < WMARK_LOW ? "min" : "low", thrashing);
        cut_thrashing_limit = true;
        /* Do not kill perceptible apps unless thrashing at critical levels */
        if (thrashing < thrashing_critical_pct) {
            min_score_adj = PERCEPTIBLE_APP_ADJ + 1;
        }
        check_filecache = true;
    } else if (reclaim == DIRECT_RECLAIM && thrashing > thrashing_limit) {
        /* Page cache is thrashing while in direct reclaim (mostly happens on lowram devices) */
        kill_reason = DIRECT_RECL_AND_THRASHING;
        snprintf(kill_desc, sizeof(kill_desc), "device is in direct reclaim and thrashing (%"
            PRId64 "%%)", thrashing);
        cut_thrashing_limit = true;
        /* Do not kill perceptible apps unless thrashing at critical levels */
        if (thrashing < thrashing_critical_pct) {
            min_score_adj = PERCEPTIBLE_APP_ADJ + 1;
        }
        check_filecache = true;
    } else if (check_filecache) {
        int64_t file_lru_kb = (vs.field.nr_inactive_file + vs.field.nr_active_file) * page_k;

        if (file_lru_kb < filecache_min_kb) {
            /* File cache is too low after thrashing, keep killing background processes */
            kill_reason = LOW_FILECACHE_AFTER_THRASHING;
            snprintf(kill_desc, sizeof(kill_desc),
                "filecache is low (%" PRId64 "kB < %" PRId64 "kB) after thrashing",
                file_lru_kb, filecache_min_kb);
            min_score_adj = PERCEPTIBLE_APP_ADJ + 1;
        } else {
            /* File cache is big enough, stop checking */
            check_filecache = false;
        }
    }

    /* Kill a process if necessary */
    if (kill_reason != NONE) {
        struct kill_info ki = {
            .kill_reason = kill_reason,
            .kill_desc = kill_desc,
            .thrashing = (int)thrashing,
            .max_thrashing = max_thrashing,
        };

        /* Allow killing perceptible apps if the system is stalled */
        if (critical_stall) {
            min_score_adj = 0;
        }
        psi_parse_io(&psi_data);
        psi_parse_cpu(&psi_data);
        int pages_freed = find_and_kill_process(min_score_adj, &ki, &mi, &wi, &curr_tm, &psi_data);
        if (pages_freed > 0) {
            killing = true;
            max_thrashing = 0;
            if (cut_thrashing_limit) {
                /*
                 * Cut thrasing limit by thrashing_limit_decay_pct percentage of the current
                 * thrashing limit until the system stops thrashing.
                 */
                thrashing_limit = (thrashing_limit * (100 - thrashing_limit_decay_pct)) / 100;
            }
        }
    }
    ...
}

References

Author: Jason Wang

Last updated: 2024-07-19, 15:08:46

License: This article is licensed under the Creative Commons Attribution-NonCommercial 4.0 International license.
