
Commit 3465074

ryncsn authored and 1Naim committed
mm/mglru: use the common routine for dirty/writeback reactivation
Currently MGLRU moves dirty/writeback folios to the second-oldest gen instead of reactivating them as the classical LRU does. This might reduce LRU contention, since it skips the isolation, but as a result these folios show up at the LRU tail more frequently, leading to inefficient reclaim. Besides, the dirty/writeback check done after isolation in shrink_folio_list() is more accurate and covers more cases.

So instead, drop the special handling for dirty/writeback folios: use the common routine and reactivate them like the classical LRU. This should in theory improve scan efficiency. These folios will be rotated back to the LRU tail once writeback is done, so there is no risk of hotness inversion, and each reclaim loop now has a higher success rate.

This also prepares for unifying the writeback throttling mechanism with the classical LRU: keeping these folios far from the tail means detecting the tail batch will follow a pattern similar to the classical LRU.

The micro-optimization of avoiding LRU contention by skipping the isolation is gone, which should be fine: compared to the cost of IO and writeback, the isolation overhead is trivial.

Reviewed-by: Axel Rasmussen <axelrasmussen@google.com>
Signed-off-by: Kairui Song <kasong@tencent.com>
1 parent 6d1e871 commit 3465074

1 file changed

Lines changed: 0 additions & 19 deletions

File tree

mm/vmscan.c

@@ -4437,7 +4437,6 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
 		       int tier_idx)
 {
 	bool success;
-	bool dirty, writeback;
 	int gen = folio_lru_gen(folio);
 	int type = folio_is_file_lru(folio);
 	int zone = folio_zonenum(folio);
@@ -4487,21 +4486,6 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
 		return true;
 	}
 
-	dirty = folio_test_dirty(folio);
-	writeback = folio_test_writeback(folio);
-	if (type == LRU_GEN_FILE && dirty) {
-		sc->nr.file_taken += delta;
-		if (!writeback)
-			sc->nr.unqueued_dirty += delta;
-	}
-
-	/* waiting for writeback */
-	if (writeback || (type == LRU_GEN_FILE && dirty)) {
-		gen = folio_inc_gen(lruvec, folio, true);
-		list_move(&folio->lru, &lrugen->folios[gen][type][zone]);
-		return true;
-	}
-
 	return false;
 }
 
@@ -4523,9 +4507,6 @@ static bool isolate_folio(struct lruvec *lruvec, struct folio *folio, struct sca
 	if (!folio_test_referenced(folio))
		set_mask_bits(&folio->flags.f, LRU_REFS_MASK, 0);
 
-	/* for shrink_folio_list() */
-	folio_clear_reclaim(folio);
-
 	success = lru_gen_del_folio(lruvec, folio, true);
 	VM_WARN_ON_ONCE_FOLIO(!success, folio);
 