author | SeongJae Park <sj@kernel.org> | 2024-04-12 16:25:25 -0700 |
---|---|---|
committer | SeongJae Park <sj@kernel.org> | 2024-04-12 16:25:25 -0700 |
commit | 9220e9039e093fca47bb3bc030ad255abfcd5b24 (patch) | |
tree | 1b6bfd71db1b0dc599fe56fe2bba8b129d521659 | |
parent | b0991341743c26771b6828ae1e9ef42881cfbeea (diff) | |
download | damon-hack-9220e9039e093fca47bb3bc030ad255abfcd5b24.tar.gz |
backup damon/next patches
Signed-off-by: SeongJae Park <sj@kernel.org>
39 files changed, 1989 insertions, 0 deletions
diff --git a/patches/next/ACMA.patch b/patches/next/ACMA.patch new file mode 100644 index 0000000..e11ea13 --- /dev/null +++ b/patches/next/ACMA.patch @@ -0,0 +1,17 @@ +From a66d35c8ed2247fef8ea218a1cbd64950da21cb0 Mon Sep 17 00:00:00 2001 +From: SeongJae Park <sj@kernel.org> +Date: Thu, 11 Apr 2024 16:08:52 -0700 +Subject: [PATCH] ==== ACMA ==== + +Signed-off-by: SeongJae Park <sj@kernel.org> +--- + damon_meta_changes/7Q37ed37 | 0 + 1 file changed, 0 insertions(+), 0 deletions(-) + create mode 100644 damon_meta_changes/7Q37ed37 + +diff --git a/damon_meta_changes/7Q37ed37 b/damon_meta_changes/7Q37ed37 +new file mode 100644 +index 000000000000..e69de29bb2d1 +-- +2.39.2 + diff --git a/patches/next/Add-damon-suffix-to-the-version-name.patch b/patches/next/Add-damon-suffix-to-the-version-name.patch new file mode 100644 index 0000000..6915d4b --- /dev/null +++ b/patches/next/Add-damon-suffix-to-the-version-name.patch @@ -0,0 +1,28 @@ +From 14ea3cf07fc9338a49ba320e76e44a327c3faaec Mon Sep 17 00:00:00 2001 +From: SeongJae Park <sj@kernel.org> +Date: Thu, 11 Jan 2024 15:52:04 -0800 +Subject: [PATCH] Add -damon suffix to the version name + +Append -damon suffix to the kernel version name. 
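The effect of the EXTRAVERSION change can be modeled in plain shell (illustrative only; the composition rule mirrors the top-level Makefile's usual `$(VERSION).$(PATCHLEVEL).$(SUBLEVEL)$(EXTRAVERSION)` scheme):

```shell
# Model of how the top-level Makefile composes the kernel version string;
# the variable values mirror this patch's Makefile hunk.
VERSION=6
PATCHLEVEL=9
SUBLEVEL=0
EXTRAVERSION=-rc2-mm-unstable-damon
KERNELVERSION="${VERSION}.${PATCHLEVEL}.${SUBLEVEL}${EXTRAVERSION}"
echo "$KERNELVERSION"
```

With the suffix appended, a booted kernel built from this tree is easy to tell apart from a vanilla -rc2 build.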
+ +Signed-off-by: SeongJae Park <sj@kernel.org> +--- + Makefile | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/Makefile b/Makefile +index 4bef6323c47d..019c6a799c1e 100644 +--- a/Makefile ++++ b/Makefile +@@ -2,7 +2,7 @@ + VERSION = 6 + PATCHLEVEL = 9 + SUBLEVEL = 0 +-EXTRAVERSION = -rc2 ++EXTRAVERSION = -rc2-mm-unstable-damon + NAME = Hurr durr I'ma ninja sloth + + # *DOCUMENTATION* +-- +2.39.2 + diff --git a/patches/next/Add-debug-log-for-PSI.patch b/patches/next/Add-debug-log-for-PSI.patch new file mode 100644 index 0000000..ab06043 --- /dev/null +++ b/patches/next/Add-debug-log-for-PSI.patch @@ -0,0 +1,25 @@ +From ebf29bde9b9a83c55ad39f32ba213c855a2d2ab1 Mon Sep 17 00:00:00 2001 +From: SeongJae Park <sj@kernel.org> +Date: Fri, 16 Feb 2024 15:26:23 -0800 +Subject: [PATCH] Add debug log for PSI + +Signed-off-by: SeongJae Park <sj@kernel.org> +--- + mm/damon/core.c | 1 + + 1 file changed, 1 insertion(+) + +diff --git a/mm/damon/core.c b/mm/damon/core.c +index bce67059c67a..37a19534a6f5 100644 +--- a/mm/damon/core.c ++++ b/mm/damon/core.c +@@ -1196,6 +1196,7 @@ static void damos_set_quota_goal_current_value(struct damos_quota_goal *goal) + now_psi_total = damos_get_some_mem_psi_total(); + goal->current_value = now_psi_total - goal->last_psi_total; + goal->last_psi_total = now_psi_total; ++ pr_info("PSI current value %lu\n", goal->current_value); + break; + default: + break; +-- +2.39.2 + diff --git a/patches/next/DAMOS-filter-type-YOUNG.patch b/patches/next/DAMOS-filter-type-YOUNG.patch new file mode 100644 index 0000000..d9d69c5 --- /dev/null +++ b/patches/next/DAMOS-filter-type-YOUNG.patch @@ -0,0 +1,50 @@ +From 58b33b493f93bfe4e27b7ac31573bee04171d080 Mon Sep 17 00:00:00 2001 +From: SeongJae Park <sj@kernel.org> +Date: Wed, 6 Mar 2024 18:43:00 -0800 +Subject: [PATCH] ==== DAMOS filter type YOUNG ==== + +Subject: [RFC PATCH v3] mm/damon: add a DAMOS filter type for page granularity access recheck + +Changes from RFC v2 
+(https://lore.kernel.org/r/20240311204545.47097-1-sj@kernel.org) +- Add documentation +- Add Tested-by: Honggyu Kim <honggyu.kim@sk.com> + +Changes from RFC v1 +(https://lore.kernel.org/r/20240307030013.47041-1-sj@kernel.org) +- Mark the folio as old if it was young +- Rename __damon_pa_young() to damon_folio_young_one() + +DAMON allows users to specify desired ranges of overhead and accuracy of +the monitoring, and does its best to produce the most lightweight and +accurate results. A recent discussion on tiered memory management +support from DAMON[1] revealed that the best-effort accuracy may not +suffice in some use cases, while increasing the minimum accuracy can +incur too high overhead. The discussion further concluded that finding +memory regions of a specific access pattern via DAMON first, and then +double-checking the access of the region again in a finer granularity, +could help increase the accuracy while keeping the overhead low. + +Add a new type of DAMOS filter, namely YOUNG, for such cases. Like the +anon and memcg types, the filter is applied to each page of the DAMOS +target memory region, and checks whether the page has been accessed +since the last check. Because this filter type is applied in page +granularity, its support depends on the DAMON operations set. Because +the expected usages of this filter are for physical address space +based DAMOS[1], implement the support only in the DAMON operations +set for the physical address space, paddr. 
+ +[1] https://lore.kernel.org/r/20240227235121.153277-1-sj@kernel.org + +Signed-off-by: SeongJae Park <sj@kernel.org> +--- + damon_meta_changes/0tpBKqMR | 0 + 1 file changed, 0 insertions(+), 0 deletions(-) + create mode 100644 damon_meta_changes/0tpBKqMR + +diff --git a/damon_meta_changes/0tpBKqMR b/damon_meta_changes/0tpBKqMR +new file mode 100644 +index 000000000000..e69de29bb2d1 +-- +2.39.2 + diff --git a/patches/next/Docs-ABI-damon-update-for-youg-page-type-DAMOS-filte.patch b/patches/next/Docs-ABI-damon-update-for-youg-page-type-DAMOS-filte.patch new file mode 100644 index 0000000..23d21ed --- /dev/null +++ b/patches/next/Docs-ABI-damon-update-for-youg-page-type-DAMOS-filte.patch @@ -0,0 +1,30 @@ +From 25c47042e36d46375383cebe967c5df2a09355af Mon Sep 17 00:00:00 2001 +From: SeongJae Park <sj@kernel.org> +Date: Wed, 13 Mar 2024 18:20:53 -0700 +Subject: [PATCH] Docs/ABI/damon: update for 'youg page' type DAMOS filter + +Signed-off-by: SeongJae Park <sj@kernel.org> +--- + Documentation/ABI/testing/sysfs-kernel-mm-damon | 6 +++--- + 1 file changed, 3 insertions(+), 3 deletions(-) + +diff --git a/Documentation/ABI/testing/sysfs-kernel-mm-damon b/Documentation/ABI/testing/sysfs-kernel-mm-damon +index dad4d5ffd786..cef6e1d20b18 100644 +--- a/Documentation/ABI/testing/sysfs-kernel-mm-damon ++++ b/Documentation/ABI/testing/sysfs-kernel-mm-damon +@@ -314,9 +314,9 @@ Date: Dec 2022 + Contact: SeongJae Park <sj@kernel.org> + Description: Writing to and reading from this file sets and gets the type of + the memory of the interest. 'anon' for anonymous pages, +- 'memcg' for specific memory cgroup, 'addr' for address range +- (an open-ended interval), or 'target' for DAMON monitoring +- target can be written and read. ++ 'memcg' for specific memory cgroup, 'young' for young pages, ++ 'addr' for address range (an open-ended interval), or 'target' ++ for DAMON monitoring target can be written and read. 
+ + What: /sys/kernel/mm/damon/admin/kdamonds/<K>/contexts/<C>/schemes/<S>/filters/<F>/memcg_path + Date: Dec 2022 +-- +2.39.2 + diff --git a/patches/next/Docs-admin-guide-mm-damon-usage-fix-wrong-example-of.patch b/patches/next/Docs-admin-guide-mm-damon-usage-fix-wrong-example-of.patch new file mode 100644 index 0000000..b55d2a3 --- /dev/null +++ b/patches/next/Docs-admin-guide-mm-damon-usage-fix-wrong-example-of.patch @@ -0,0 +1,36 @@ +From 9aa69986a4875ae890845b8c3469650cf5173d83 Mon Sep 17 00:00:00 2001 +From: SeongJae Park <sj@kernel.org> +Date: Sun, 17 Mar 2024 12:14:07 -0700 +Subject: [PATCH] Docs/admin-guide/mm/damon/usage: fix wrong example of DAMOS + filter matching sysfs file + +The example usage of DAMOS filter sysfs files, specifically the +'matching' file setup for the memcg type filter, is wrong. The intention +is to exclude pages of a memcg that is already getting enough care from a +given scheme, but the example sets the filter to apply the scheme +to only the pages of that memcg. Fix it. 
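The corrected `matching` semantics reduce to a simple equality rule, which can be modeled in plain shell (a simplified sketch; `filter_out` and its arguments are illustrative, not kernel interfaces):

```shell
# Simplified model of one DAMOS filter: 'matching=Y' filters out pages
# that match the filter's type, 'matching=N' filters out pages that do not.
filter_out() {
	page_matches=$1		# Y if the page belongs to the filter's memcg
	matching=$2		# value written to the filter's 'matching' file
	if [ "$page_matches" = "$matching" ]; then
		echo excluded
	else
		echo kept
	fi
}

filter_out Y Y	# memcg page with the fixed setting: excluded, as intended
filter_out N Y	# pages of other cgroups stay eligible: kept
```

A page is excluded exactly when its match result equals the `matching` value, which is why the example needed `Y`, not `N`, to leave the already-cared-for memcg alone.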
+ +Fixes: 9b7f9322a530 ("Docs/admin-guide/mm/damon/usage: document DAMOS filters of sysfs") +Closes: https://lore.kernel.org/r/20240317191358.97578-1-sj@kernel.org +Cc: <stable@vger.kernel.org> # 6.3.x +Signed-off-by: SeongJae Park <sj@kernel.org> +--- + Documentation/admin-guide/mm/damon/usage.rst | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/Documentation/admin-guide/mm/damon/usage.rst b/Documentation/admin-guide/mm/damon/usage.rst +index 69bc8fabf378..3ce3f0aaa1d5 100644 +--- a/Documentation/admin-guide/mm/damon/usage.rst ++++ b/Documentation/admin-guide/mm/damon/usage.rst +@@ -434,7 +434,7 @@ pages of all memory cgroups except ``/having_care_already``.:: + # # further filter out all cgroups except one at '/having_care_already' + echo memcg > 1/type + echo /having_care_already > 1/memcg_path +- echo N > 1/matching ++ echo Y > 1/matching + + Note that ``anon`` and ``memcg`` filters are currently supported only when + ``paddr`` :ref:`implementation <sysfs_context>` is being used. 
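Pulled out of the diff context, the corrected end-to-end setup from the usage document looks as follows (a sketch assuming DAMON sysfs is available; the kdamond/context/scheme indices `0` are illustrative):

```shell
# Corrected filter setup from the usage example: apply the action only to
# non-anonymous pages of every memcg except '/having_care_already'.
cd /sys/kernel/mm/damon/admin/kdamonds/0/contexts/0/schemes/0/filters
echo 2 > nr_filters
echo anon > 0/type
echo Y > 0/matching                       # filter out anonymous pages
echo memcg > 1/type
echo /having_care_already > 1/memcg_path
echo Y > 1/matching                       # filter out that cgroup (the fix)
```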
+-- +2.39.2 + diff --git a/patches/next/Docs-admin-guide-mm-damon-usage-update-for-young-pag.patch b/patches/next/Docs-admin-guide-mm-damon-usage-update-for-young-pag.patch new file mode 100644 index 0000000..30cff4d --- /dev/null +++ b/patches/next/Docs-admin-guide-mm-damon-usage-update-for-young-pag.patch @@ -0,0 +1,51 @@ +From 018b884f2176f821da0ce6866a24208b0e66346a Mon Sep 17 00:00:00 2001 +From: SeongJae Park <sj@kernel.org> +Date: Wed, 13 Mar 2024 18:19:41 -0700 +Subject: [PATCH] Docs/admin-guide/mm/damon/usage: update for young page type + DAMOS filter + +Signed-off-by: SeongJae Park <sj@kernel.org> +--- + Documentation/admin-guide/mm/damon/usage.rst | 26 ++++++++++---------- + 1 file changed, 13 insertions(+), 13 deletions(-) + +diff --git a/Documentation/admin-guide/mm/damon/usage.rst b/Documentation/admin-guide/mm/damon/usage.rst +index 6fce035fdbf5..69bc8fabf378 100644 +--- a/Documentation/admin-guide/mm/damon/usage.rst ++++ b/Documentation/admin-guide/mm/damon/usage.rst +@@ -410,19 +410,19 @@ in the numeric order. + + Each filter directory contains six files, namely ``type``, ``matcing``, + ``memcg_path``, ``addr_start``, ``addr_end``, and ``target_idx``. To ``type`` +-file, you can write one of four special keywords: ``anon`` for anonymous pages, +-``memcg`` for specific memory cgroup, ``addr`` for specific address range (an +-open-ended interval), or ``target`` for specific DAMON monitoring target +-filtering. In case of the memory cgroup filtering, you can specify the memory +-cgroup of the interest by writing the path of the memory cgroup from the +-cgroups mount point to ``memcg_path`` file. In case of the address range +-filtering, you can specify the start and end address of the range to +-``addr_start`` and ``addr_end`` files, respectively. For the DAMON monitoring +-target filtering, you can specify the index of the target between the list of +-the DAMON context's monitoring targets list to ``target_idx`` file. 
You can +-write ``Y`` or ``N`` to ``matching`` file to filter out pages that does or does +-not match to the type, respectively. Then, the scheme's action will not be +-applied to the pages that specified to be filtered out. ++file, you can write one of five special keywords: ``anon`` for anonymous pages, ++``memcg`` for specific memory cgroup, ``young`` for young pages, ``addr`` for ++specific address range (an open-ended interval), or ``target`` for specific ++DAMON monitoring target filtering. In case of the memory cgroup filtering, you ++can specify the memory cgroup of the interest by writing the path of the memory ++cgroup from the cgroups mount point to ``memcg_path`` file. In case of the ++address range filtering, you can specify the start and end address of the range ++to ``addr_start`` and ``addr_end`` files, respectively. For the DAMON ++monitoring target filtering, you can specify the index of the target between ++the list of the DAMON context's monitoring targets list to ``target_idx`` file. ++You can write ``Y`` or ``N`` to ``matching`` file to filter out pages that does ++or does not match to the type, respectively. Then, the scheme's action will ++not be applied to the pages that specified to be filtered out. + + For example, below restricts a DAMOS action to be applied to only non-anonymous + pages of all memory cgroups except ``/having_care_already``.:: +-- +2.39.2 + diff --git a/patches/next/Docs-mm-damon-design-add-API-link-to-damon_ctx.patch b/patches/next/Docs-mm-damon-design-add-API-link-to-damon_ctx.patch new file mode 100644 index 0000000..e08a230 --- /dev/null +++ b/patches/next/Docs-mm-damon-design-add-API-link-to-damon_ctx.patch @@ -0,0 +1,32 @@ +From 740f96e4c6caa1b52416c637be1091bc549bc6b1 Mon Sep 17 00:00:00 2001 +From: SeongJae Park <sj@kernel.org> +Date: Sat, 2 Dec 2023 10:13:53 -0800 +Subject: [PATCH] Docs/mm/damon/design: add API link to damon_ctx + +More link updates is needed before posting patch. 
+ +Signed-off-by: SeongJae Park <sj@kernel.org> +--- + Documentation/mm/damon/design.rst | 6 +++--- + 1 file changed, 3 insertions(+), 3 deletions(-) + +diff --git a/Documentation/mm/damon/design.rst b/Documentation/mm/damon/design.rst +index f2baf617184d..57709ed53220 100644 +--- a/Documentation/mm/damon/design.rst ++++ b/Documentation/mm/damon/design.rst +@@ -12,9 +12,9 @@ Execution Model and Data Structures + + The monitoring-related information including the monitoring request + specification and DAMON-based operation schemes are stored in a data structure +-called DAMON ``context``. DAMON executes each context with a kernel thread +-called ``kdamond``. Multiple kdamonds could run in parallel, for different +-types of monitoring. ++called DAMON :c:type:`context <damon_ctx>`. DAMON executes each context with a ++kernel thread called ``kdamond``. Multiple kdamonds could run in parallel, for ++different types of monitoring. + + + Overall Architecture +-- +2.39.2 + diff --git a/patches/next/Docs-mm-damon-design-document-young-page-type-DAMOS-.patch b/patches/next/Docs-mm-damon-design-document-young-page-type-DAMOS-.patch new file mode 100644 index 0000000..f8f19d3 --- /dev/null +++ b/patches/next/Docs-mm-damon-design-document-young-page-type-DAMOS-.patch @@ -0,0 +1,44 @@ +From 820f7628e75c31532ca8d672c3dfbbca448a8196 Mon Sep 17 00:00:00 2001 +From: SeongJae Park <sj@kernel.org> +Date: Wed, 13 Mar 2024 18:17:39 -0700 +Subject: [PATCH] Docs/mm/damon/design: document 'young page' type DAMOS filter + +Signed-off-by: SeongJae Park <sj@kernel.org> +--- + Documentation/mm/damon/design.rst | 20 +++++++++++--------- + 1 file changed, 11 insertions(+), 9 deletions(-) + +diff --git a/Documentation/mm/damon/design.rst b/Documentation/mm/damon/design.rst +index 5620aab9b385..f2baf617184d 100644 +--- a/Documentation/mm/damon/design.rst ++++ b/Documentation/mm/damon/design.rst +@@ -461,15 +461,17 @@ number of filters for each scheme. 
Each filter specifies the type of target + memory, and whether it should exclude the memory of the type (filter-out), or + all except the memory of the type (filter-in). + +-Currently, anonymous page, memory cgroup, address range, and DAMON monitoring +-target type filters are supported by the feature. Some filter target types +-require additional arguments. The memory cgroup filter type asks users to +-specify the file path of the memory cgroup for the filter. The address range +-type asks the start and end addresses of the range. The DAMON monitoring +-target type asks the index of the target from the context's monitoring targets +-list. Hence, users can apply specific schemes to only anonymous pages, +-non-anonymous pages, pages of specific cgroups, all pages excluding those of +-specific cgroups, pages in specific address range, pages in specific DAMON ++Currently, anonymous page, memory cgroup, young page, address range, and DAMON ++monitoring target type filters are supported by the feature. Some filter ++target types require additional arguments. The memory cgroup filter type asks ++users to specify the file path of the memory cgroup for the filter. The ++address range type asks the start and end addresses of the range. The DAMON ++monitoring target type asks the index of the target from the context's ++monitoring targets list. Hence, users can apply specific schemes to only ++anonymous pages, non-anonymous pages, pages of specific cgroups, all pages ++excluding those of specific cgroups, pages that not accessed after the last ++access check from the scheme, pages that accessed after the last access check ++from the scheme, pages in specific address range, pages in specific DAMON + monitoring targets, and any combination of those. 
+ + To handle filters efficiently, the address range and DAMON monitoring target +-- +2.39.2 + diff --git a/patches/next/Docs-mm-damon-design-use-a-list-for-supported-filter.patch b/patches/next/Docs-mm-damon-design-use-a-list-for-supported-filter.patch new file mode 100644 index 0000000..03741a9 --- /dev/null +++ b/patches/next/Docs-mm-damon-design-use-a-list-for-supported-filter.patch @@ -0,0 +1,74 @@ +From 51300c8c0e07c1828a09cdfcc705aa17248d0b95 Mon Sep 17 00:00:00 2001 +From: SeongJae Park <sj@kernel.org> +Date: Wed, 13 Mar 2024 18:10:21 -0700 +Subject: [PATCH] Docs/mm/damon/design: use a list for supported filters + +Filters section is listing currently supported filter types in a normal +paragraph. Since the number of types are four, it is not easy to read +for specific type. Use a list for easier finding of specific types. + +Signed-off-by: SeongJae Park <sj@kernel.org> +--- + Documentation/mm/damon/design.rst | 46 +++++++++++++++++-------------- + 1 file changed, 26 insertions(+), 20 deletions(-) + +diff --git a/Documentation/mm/damon/design.rst b/Documentation/mm/damon/design.rst +index 57709ed53220..bb82465c83dc 100644 +--- a/Documentation/mm/damon/design.rst ++++ b/Documentation/mm/damon/design.rst +@@ -461,26 +461,32 @@ number of filters for each scheme. Each filter specifies the type of target + memory, and whether it should exclude the memory of the type (filter-out), or + all except the memory of the type (filter-in). + +-Currently, anonymous page, memory cgroup, young page, address range, and DAMON +-monitoring target type filters are supported by the feature. Some filter +-target types require additional arguments. The memory cgroup filter type asks +-users to specify the file path of the memory cgroup for the filter. The +-address range type asks the start and end addresses of the range. The DAMON +-monitoring target type asks the index of the target from the context's +-monitoring targets list. 
Hence, users can apply specific schemes to only +-anonymous pages, non-anonymous pages, pages of specific cgroups, all pages +-excluding those of specific cgroups, pages that not accessed after the last +-access check from the scheme, pages that accessed after the last access check +-from the scheme, pages in specific address range, pages in specific DAMON +-monitoring targets, and any combination of those. +- +-To handle filters efficiently, the address range and DAMON monitoring target +-type filters are handled by the core layer, while others are handled by +-operations set. If a memory region is filtered by a core layer-handled filter, +-it is not counted as the scheme has tried to the region. In contrast, if a +-memory regions is filtered by an operations set layer-handled filter, it is +-counted as the scheme has tried. The difference in accounting leads to changes +-in the statistics. ++For efficient handling of filters, some types of filters are handled by the ++core layer, while others are handled by operations set. In the latter case, ++hence, support of the filter types depends on the DAMON operations set. In ++case of the core layer-handled filters, the memory regions that excluded by the ++filter are not counted as the scheme has tried to the region. In contrast, if ++a memory regions is filtered by an operations set layer-handled filter, it is ++counted as the scheme has tried. This difference affects the statistics. ++ ++Below types of filters are currently supported. ++ ++- anonymous page ++ - Applied to pages that containing data that not stored in files. ++ - Handled by operations set layer. Supported by only ``paddr`` set. ++- memory cgroup ++ - Applied to pages that belonging to a given cgroup. ++ - Handled by operations set layer. Supported by only ``paddr`` set. ++- young page ++ - Applied to pages that are accessed after the last access check from the ++ scheme. ++ - Handled by operations set layer. Supported by only ``paddr`` set. 
++- address range ++ - Applied to pages that belonging to a given address range. ++ - Handled by the core logic. ++- DAMON monitoring target ++ - Applied to pages that belonging to a given DAMON monitoring target. ++ - Handled by the core logic. + + + Application Programming Interface +-- +2.39.2 + diff --git a/patches/next/Revert-kselftest-runner.sh-Propagate-SIGTERM-to-runn.patch b/patches/next/Revert-kselftest-runner.sh-Propagate-SIGTERM-to-runn.patch new file mode 100644 index 0000000..190306c --- /dev/null +++ b/patches/next/Revert-kselftest-runner.sh-Propagate-SIGTERM-to-runn.patch @@ -0,0 +1,33 @@ +From 99600661d7441644254596d1444e1d047763ad86 Mon Sep 17 00:00:00 2001 +From: SeongJae Park <sj@kernel.org> +Date: Thu, 21 Sep 2023 09:41:45 +0000 +Subject: [PATCH] Revert "kselftest/runner.sh: Propagate SIGTERM to runner + child" + +This reverts commit 9616cb34b08ec86642b162eae75c5a7ca8debe3c. + +The commit makes 'stty', which is used by kunit in +damon-tests/corr, hang. Revert the commit as a temporary workaround for now. + +Signed-off-by: SeongJae Park <sj@kernel.org> +--- + tools/testing/selftests/kselftest/runner.sh | 3 +-- + 1 file changed, 1 insertion(+), 2 deletions(-) + +diff --git a/tools/testing/selftests/kselftest/runner.sh b/tools/testing/selftests/kselftest/runner.sh +index 74954f6a8f94..c0070ef649b9 100644 +--- a/tools/testing/selftests/kselftest/runner.sh ++++ b/tools/testing/selftests/kselftest/runner.sh +@@ -37,8 +37,7 @@ tap_timeout() + { + # Make sure tests will time out if utility is available. 
+ if [ -x /usr/bin/timeout ] ; then +- /usr/bin/timeout --foreground "$kselftest_timeout" \ +- /usr/bin/timeout "$kselftest_timeout" $1 ++ /usr/bin/timeout --foreground "$kselftest_timeout" $1 + else + $1 + fi +-- +2.39.2 + diff --git a/patches/next/commit-cleanup.patch b/patches/next/commit-cleanup.patch new file mode 100644 index 0000000..19caead --- /dev/null +++ b/patches/next/commit-cleanup.patch @@ -0,0 +1,17 @@ +From be375fa7e777045d89d7520a5ec415cbe9c4fc54 Mon Sep 17 00:00:00 2001 +From: SeongJae Park <sj@kernel.org> +Date: Thu, 11 Apr 2024 16:04:44 -0700 +Subject: [PATCH] ==== commit cleanup ==== + +Signed-off-by: SeongJae Park <sj@kernel.org> +--- + damon_meta_changes/RYnTeJnM | 0 + 1 file changed, 0 insertions(+), 0 deletions(-) + create mode 100644 damon_meta_changes/RYnTeJnM + +diff --git a/damon_meta_changes/RYnTeJnM b/damon_meta_changes/RYnTeJnM +new file mode 100644 +index 000000000000..e69de29bb2d1 +-- +2.39.2 + diff --git a/patches/next/commits-aiming-not-to-be-posted.patch b/patches/next/commits-aiming-not-to-be-posted.patch new file mode 100644 index 0000000..35fcb79 --- /dev/null +++ b/patches/next/commits-aiming-not-to-be-posted.patch @@ -0,0 +1,17 @@ +From 007c340c4b36c642d56aba9be42f3c6ce30858ea Mon Sep 17 00:00:00 2001 +From: SeongJae Park <sj@kernel.org> +Date: Fri, 30 Jun 2023 19:06:22 +0000 +Subject: [PATCH] === commits aiming not to be posted === + +Signed-off-by: SeongJae Park <sj@kernel.org> +--- + damon_meta_changes/4qolZk6j | 0 + 1 file changed, 0 insertions(+), 0 deletions(-) + create mode 100644 damon_meta_changes/4qolZk6j + +diff --git a/damon_meta_changes/4qolZk6j b/damon_meta_changes/4qolZk6j +new file mode 100644 +index 000000000000..e69de29bb2d1 +-- +2.39.2 + diff --git a/patches/next/docs-improvement.patch b/patches/next/docs-improvement.patch new file mode 100644 index 0000000..284de7b --- /dev/null +++ b/patches/next/docs-improvement.patch @@ -0,0 +1,17 @@ +From cce302bf76207a732a17fbdc9bcbe53f19f561cb Mon Sep 17 00:00:00 
2001 +From: SeongJae Park <sj@kernel.org> +Date: Thu, 11 Apr 2024 16:08:41 -0700 +Subject: [PATCH] ==== docs improvement ==== + +Signed-off-by: SeongJae Park <sj@kernel.org> +--- + damon_meta_changes/MRCkmupO | 0 + 1 file changed, 0 insertions(+), 0 deletions(-) + create mode 100644 damon_meta_changes/MRCkmupO + +diff --git a/damon_meta_changes/MRCkmupO b/damon_meta_changes/MRCkmupO +new file mode 100644 +index 000000000000..e69de29bb2d1 +-- +2.39.2 + diff --git a/patches/next/drivers-virtio-virtio_balloon-integrate-ACMA-and-bal.patch b/patches/next/drivers-virtio-virtio_balloon-integrate-ACMA-and-bal.patch new file mode 100644 index 0000000..86788a6 --- /dev/null +++ b/patches/next/drivers-virtio-virtio_balloon-integrate-ACMA-and-bal.patch @@ -0,0 +1,52 @@ +From 222d8018cc78aa72e39c0c8d4ffb41034e14f3f4 Mon Sep 17 00:00:00 2001 +From: SeongJae Park <sj@kernel.org> +Date: Wed, 28 Feb 2024 16:17:08 -0800 +Subject: [PATCH] drivers/virtio/virtio_balloon: integrate ACMA and ballooning + +This is just an idea. + +Signed-off-by: SeongJae Park <sj@kernel.org> +--- + drivers/virtio/virtio_balloon.c | 26 ++++++++++++++++++++++++++ + 1 file changed, 26 insertions(+) + +diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c +index 1f5b3dd31fcf..a954d75789ae 100644 +--- a/drivers/virtio/virtio_balloon.c ++++ b/drivers/virtio/virtio_balloon.c +@@ -472,6 +472,32 @@ static void virtballoon_changed(struct virtio_device *vdev) + struct virtio_balloon *vb = vdev->priv; + unsigned long flags; + ++#ifdef CONFIG_ACMA_BALLOON ++ s64 target; ++ u32 num_pages; ++ ++ ++ /* Legacy balloon config space is LE, unlike all other devices. */ ++ virtio_cread_le(vb->vdev, struct virtio_balloon_config, num_pages, ++ &num_pages); ++ ++ /* ++ * Aligned up to guest page size to avoid inflating and deflating ++ * balloon endlessly. 
++ */ ++ target = ALIGN(num_pages, VIRTIO_BALLOON_PAGES_PER_PAGE); ++ ++ /* ++ * If the given new max mem size is larger than current acma's max mem ++ * size, same to normal max mem adjustment. ++ * If the given new max mem size is smaller than current acma's max mem ++ * size, strong aggressiveness is applied while memory for meeting the ++ * new max mem is met is stolen. ++ */ ++ acma_set_max_mem_aggressive(totalram_pages() - target); ++ return; ++#endif ++ + spin_lock_irqsave(&vb->stop_update_lock, flags); + if (!vb->stop_update) { + start_update_balloon_size(vb); +-- +2.39.2 + diff --git a/patches/next/hacks-in-progress.patch b/patches/next/hacks-in-progress.patch new file mode 100644 index 0000000..eb63809 --- /dev/null +++ b/patches/next/hacks-in-progress.patch @@ -0,0 +1,17 @@ +From 2b291cb93c79e56709f80677416386d44a769b01 Mon Sep 17 00:00:00 2001 +From: SeongJae Park <sj@kernel.org> +Date: Fri, 30 Jun 2023 19:06:35 +0000 +Subject: [PATCH] === hacks in progress === + +Signed-off-by: SeongJae Park <sj@kernel.org> +--- + damon_meta_changes/IbjSshhs | 0 + 1 file changed, 0 insertions(+), 0 deletions(-) + create mode 100644 damon_meta_changes/IbjSshhs + +diff --git a/damon_meta_changes/IbjSshhs b/damon_meta_changes/IbjSshhs +new file mode 100644 +index 000000000000..e69de29bb2d1 +-- +2.39.2 + diff --git a/patches/next/mark-start-of-DAMON-hack-tree.patch b/patches/next/mark-start-of-DAMON-hack-tree.patch new file mode 100644 index 0000000..4a9570d --- /dev/null +++ b/patches/next/mark-start-of-DAMON-hack-tree.patch @@ -0,0 +1,26 @@ +From eb1710363dda303b07d9789e932e500d05080ee4 Mon Sep 17 00:00:00 2001 +From: SeongJae Park <sj@kernel.org> +Date: Fri, 30 Jun 2023 19:04:43 +0000 +Subject: [PATCH] === mark start of DAMON hack tree === + +Signed-off-by: SeongJae Park <sj@kernel.org> +--- + damon_meta_changes/README | 6 ++++++ + 1 file changed, 6 insertions(+) + create mode 100644 damon_meta_changes/README + +diff --git a/damon_meta_changes/README 
b/damon_meta_changes/README +new file mode 100644 +index 000000000000..e02d9bb18c50 +--- /dev/null ++++ b/damon_meta_changes/README +@@ -0,0 +1,6 @@ ++This is a directory for having fake changes that required to only make valid ++commits. ++ ++To avoid conflict, add a unique file to this directory. E.g., ++ ++$ mktemp damon_meta_changes/XXXXXXXX +-- +2.39.2 + diff --git a/patches/next/mm-damon-Add-debug-code.patch b/patches/next/mm-damon-Add-debug-code.patch new file mode 100644 index 0000000..a489339 --- /dev/null +++ b/patches/next/mm-damon-Add-debug-code.patch @@ -0,0 +1,129 @@ +From 002188309fd15ef525cf146c0b1d5150fa2cbd27 Mon Sep 17 00:00:00 2001 +From: SeongJae Park <sj@kernel.org> +Date: Sun, 14 Aug 2022 16:08:10 +0000 +Subject: [PATCH] mm/damon: Add debug code + +This commit adds verification check code. Those should not be merged in +the final code. Those are not expected to incur high overhead, though. + +Signed-off-by: SeongJae Park <sj@kernel.org> +--- + mm/damon/core.c | 58 +++++++++++++++++++++++++++++++++++++++++++++++++ + 1 file changed, 58 insertions(+) + +diff --git a/mm/damon/core.c b/mm/damon/core.c +index 939ecfcd4641..31ac8e4b1189 100644 +--- a/mm/damon/core.c ++++ b/mm/damon/core.c +@@ -126,6 +126,12 @@ struct damon_region *damon_new_region(unsigned long start, unsigned long end) + if (!region) + return NULL; + ++ if (start >= end) { ++ pr_err("%s called with start %lu and end %lu!\n", __func__, ++ start, end); ++ BUG(); ++ } ++ + region->ar.start = start; + region->ar.end = end; + region->nr_accesses = 0; +@@ -146,6 +152,10 @@ void damon_add_region(struct damon_region *r, struct damon_target *t) + + static void damon_del_region(struct damon_region *r, struct damon_target *t) + { ++ if (t->nr_regions == 0) { ++ pr_err("nr_regions 0 but damon_del_region called\n"); ++ BUG(); ++ } + list_del(&r->list); + t->nr_regions--; + } +@@ -476,8 +486,27 @@ void damon_destroy_target(struct damon_target *t) + damon_free_target(t); + } + ++static void 
damon_nr_regions_verify(struct damon_target *t) ++{ ++ struct damon_region *r; ++ unsigned int count = 0; ++ static unsigned called; ++ ++ if (called++ % 100) ++ return; ++ ++ damon_for_each_region(r, t) ++ count++; ++ ++ if (count != t->nr_regions) ++ pr_err("%s expected %u but %u\n", __func__, count, t->nr_regions); ++ BUG_ON(count != t->nr_regions); ++} ++ + unsigned int damon_nr_regions(struct damon_target *t) + { ++ damon_nr_regions_verify(t); ++ + return t->nr_regions; + } + +@@ -1033,6 +1062,15 @@ static void damos_apply_scheme(struct damon_ctx *c, struct damon_target *t, + DAMON_MIN_REGION); + if (!sz) + goto update_stat; ++ if (sz >= damon_sz_region(r)) { ++ pr_err("sz: %lu, region: %lu-%lu (%lu), quota: %lu, charged: %lu\n", ++ sz, r->ar.start, ++ r->ar.end, r->ar.end - ++ r->ar.start, ++ quota->esz, ++ quota->charged_sz); ++ BUG(); ++ } + damon_split_region_at(t, r, sz); + } + if (damos_filter_out(c, t, r, s)) +@@ -1318,6 +1356,14 @@ static void damon_merge_two_regions(struct damon_target *t, + l->nr_accesses_bp = l->nr_accesses * 10000; + l->age = (l->age * sz_l + r->age * sz_r) / (sz_l + sz_r); + l->ar.end = r->ar.end; ++ ++ if (l->ar.start >= l->ar.end) { ++ pr_err("%s made %lu-%lu region\n", __func__, l->ar.start, ++ r->ar.end); ++ pr_err("r: %lu-%lu\n", r->ar.start, r->ar.end); ++ BUG(); ++ } ++ + damon_destroy_region(r, t); + } + +@@ -1339,6 +1385,12 @@ static void damon_merge_regions_of(struct damon_target *t, unsigned int thres, + else + r->age++; + ++ if (r->nr_accesses != r->nr_accesses_bp / 10000) { ++ pr_err("nr_accesses (%u) != nr_accesses_bp (%u)\n", ++ r->nr_accesses, r->nr_accesses_bp); ++ BUG(); ++ } ++ + if (prev && prev->ar.end == r->ar.start && + abs(prev->nr_accesses - r->nr_accesses) <= thres && + damon_sz_region(prev) + damon_sz_region(r) <= sz_limit) +@@ -1379,6 +1431,12 @@ static void damon_split_region_at(struct damon_target *t, + { + struct damon_region *new; + ++ if (sz_r == 0 || sz_r >= r->ar.end - r->ar.start) { ++ 
pr_err("%s called with region of %lu-%lu and sz_r %lu!\n", ++ __func__, r->ar.start, r->ar.end, sz_r); ++ BUG(); ++ } ++ + new = damon_new_region(r->ar.start + sz_r, r->ar.end); + if (!new) + return; +-- +2.39.2 + diff --git a/patches/next/mm-damon-add-DAMOS-filter-type-YOUNG.patch b/patches/next/mm-damon-add-DAMOS-filter-type-YOUNG.patch new file mode 100644 index 0000000..1571082 --- /dev/null +++ b/patches/next/mm-damon-add-DAMOS-filter-type-YOUNG.patch @@ -0,0 +1,55 @@ +From 7d63e6f16dd2baf3d5cd15083f61c0cb96146a32 Mon Sep 17 00:00:00 2001 +From: SeongJae Park <sj@kernel.org> +Date: Tue, 5 Mar 2024 16:02:29 -0800 +Subject: [PATCH] mm/damon: add DAMOS filter type YOUNG + +Define yet another DAMOS filter type, YOUNG. Like anon and memcg, the +type of filter will be applied to each page in the memory region, and +check if the page is accessed since the last check. + +Note that this commit is only defining the type. Implementation of it +should be made on DAMON operations sets. A couple of commits for the +implementation on 'paddr' DAMON operations set will follow. + +Signed-off-by: SeongJae Park <sj@kernel.org> +Tested-by: Honggyu Kim <honggyu.kim@sk.com> +--- + include/linux/damon.h | 2 ++ + mm/damon/sysfs-schemes.c | 1 + + 2 files changed, 3 insertions(+) + +diff --git a/include/linux/damon.h b/include/linux/damon.h +index 886d07294f4e..f7da65e1ac04 100644 +--- a/include/linux/damon.h ++++ b/include/linux/damon.h +@@ -297,6 +297,7 @@ struct damos_stat { + * enum damos_filter_type - Type of memory for &struct damos_filter + * @DAMOS_FILTER_TYPE_ANON: Anonymous pages. + * @DAMOS_FILTER_TYPE_MEMCG: Specific memcg's pages. ++ * @DAMOS_FILTER_TYPE_YOUNG: Recently accessed pages. + * @DAMOS_FILTER_TYPE_ADDR: Address range. + * @DAMOS_FILTER_TYPE_TARGET: Data Access Monitoring target. + * @NR_DAMOS_FILTER_TYPES: Number of filter types. 
+@@ -315,6 +316,7 @@ struct damos_stat { + enum damos_filter_type { + DAMOS_FILTER_TYPE_ANON, + DAMOS_FILTER_TYPE_MEMCG, ++ DAMOS_FILTER_TYPE_YOUNG, + DAMOS_FILTER_TYPE_ADDR, + DAMOS_FILTER_TYPE_TARGET, + NR_DAMOS_FILTER_TYPES, +diff --git a/mm/damon/sysfs-schemes.c b/mm/damon/sysfs-schemes.c +index 53a90ac678fb..bea5bc52846a 100644 +--- a/mm/damon/sysfs-schemes.c ++++ b/mm/damon/sysfs-schemes.c +@@ -343,6 +343,7 @@ static struct damon_sysfs_scheme_filter *damon_sysfs_scheme_filter_alloc(void) + static const char * const damon_sysfs_scheme_filter_type_strs[] = { + "anon", + "memcg", ++ "young", + "addr", + "target", + }; +-- +2.39.2 + diff --git a/patches/next/mm-damon-core-a-bit-more-cleanup-and-comments.patch b/patches/next/mm-damon-core-a-bit-more-cleanup-and-comments.patch new file mode 100644 index 0000000..d6a8330 --- /dev/null +++ b/patches/next/mm-damon-core-a-bit-more-cleanup-and-comments.patch @@ -0,0 +1,87 @@ +From 919bd1e742788e78e928ac6f9a1cc5d045231546 Mon Sep 17 00:00:00 2001 +From: SeongJae Park <sj@kernel.org> +Date: Tue, 20 Feb 2024 15:59:05 -0800 +Subject: [PATCH] mm/damon/core: a bit more cleanup and comments + +Signed-off-by: SeongJae Park <sj@kernel.org> +--- + mm/damon/core.c | 36 +++++++++++++++++++----------------- + 1 file changed, 19 insertions(+), 17 deletions(-) + +diff --git a/mm/damon/core.c b/mm/damon/core.c +index 9b777b35ac6d..e799318559a5 100644 +--- a/mm/damon/core.c ++++ b/mm/damon/core.c +@@ -813,15 +813,15 @@ static int damon_update_scheme(struct damos *dst, struct damos *src) + dst->pattern = src->pattern; + dst->action = src->action; + dst->apply_interval_us = src->apply_interval_us; ++ ++ dst->quota.reset_interval = src->quota.reset_interval; + dst->quota.ms = src->quota.ms; + dst->quota.sz = src->quota.sz; +- dst->quota.reset_interval = src->quota.reset_interval; ++ damos_update_quota_goals(&dst->quota, &src->quota); + dst->quota.weight_sz = src->quota.weight_sz; + dst->quota.weight_nr_accesses = 
src->quota.weight_nr_accesses; + dst->quota.weight_age = src->quota.weight_age; + +- damos_update_quota_goals(&dst->quota, &src->quota); +- + dst->wmarks = src->wmarks; + + err = damos_update_filters(dst, src); +@@ -933,34 +933,36 @@ static int damon_update_targets(struct damon_ctx *dst, struct damon_ctx *src) + } + + /** +- * damon_update_ctx_prams() - Update input parameters of given DAMON context. +- * @old_ctx: DAMON context that need to be udpated. +- * @new_ctx: DAMON context that having new user parameters. ++ * damon_update_ctx() - Update input parameters of given DAMON context. ++ * @dst: DAMON context that need to be udpated. ++ * @src: DAMON context that having new user parameters. + * + * damon_ctx contains user input parameters for monitoring requests, internal + * status of the monitoring, and the results of the monitoring. This function +- * updates only input parameters for monitoring requests of @old_ctx with those +- * of @new_ctx, while keeping the internal status and monitoring results. This ++ * updates only input parameters for monitoring requests of @dst with those ++ * of @src, while keeping the internal status and monitoring results. This + * function is aimed to be used for online tuning-like use case. + */ +-int damon_update_ctx(struct damon_ctx *old_ctx, struct damon_ctx *new_ctx) ++int damon_update_ctx(struct damon_ctx *dst, struct damon_ctx *src) + { + int err; + +- err = damon_update_schemes(old_ctx, new_ctx); +- if (err) +- return err; +- err = damon_update_targets(old_ctx, new_ctx); ++ err = damon_update_schemes(dst, src); + if (err) + return err; +- err = damon_set_attrs(old_ctx, &new_ctx->attrs); ++ err = damon_update_targets(dst, src); + if (err) + return err; + /* +- * ->ops update should be done at least after targets update, for pid +- * handling ++ * schemes and targets should be updated first, since ++ * 1. damon_set_attrs() updates monitoring results of targets and ++ * next_apply_sis of schemes, and ++ * 2. 
ops update should be done after pid handling is done. + */ +- old_ctx->ops = new_ctx->ops; ++ err = damon_set_attrs(dst, &src->attrs); ++ if (err) ++ return err; ++ dst->ops = src->ops; + + return 0; + } +-- +2.39.2 + diff --git a/patches/next/mm-damon-core-add-debugging-purpose-log-of-tuned-esz.patch b/patches/next/mm-damon-core-add-debugging-purpose-log-of-tuned-esz.patch new file mode 100644 index 0000000..c02fdb5 --- /dev/null +++ b/patches/next/mm-damon-core-add-debugging-purpose-log-of-tuned-esz.patch @@ -0,0 +1,26 @@ +From ead587892c5be0d4966ec919062aba49e5a3d07c Mon Sep 17 00:00:00 2001 +From: SeongJae Park <sj@kernel.org> +Date: Sat, 11 Nov 2023 19:36:03 +0000 +Subject: [PATCH] mm/damon/core: add debugging-purpose log of tuned esz + +Signed-off-by: SeongJae Park <sj@kernel.org> +--- + mm/damon/core.c | 2 ++ + 1 file changed, 2 insertions(+) + +diff --git a/mm/damon/core.c b/mm/damon/core.c +index d2505528bd6d..bce67059c67a 100644 +--- a/mm/damon/core.c ++++ b/mm/damon/core.c +@@ -1256,6 +1256,8 @@ static void damos_set_effective_quota(struct damos_quota *quota) + esz = quota->sz; + + quota->esz = esz; ++ ++ pr_info("esz %lu\n", esz); + } + + static void damos_adjust_quota(struct damon_ctx *c, struct damos *s) +-- +2.39.2 + diff --git a/patches/next/mm-damon-core-add-todo-for-DAMOS-interval-validation.patch b/patches/next/mm-damon-core-add-todo-for-DAMOS-interval-validation.patch new file mode 100644 index 0000000..cdd5309 --- /dev/null +++ b/patches/next/mm-damon-core-add-todo-for-DAMOS-interval-validation.patch @@ -0,0 +1,25 @@ +From f69a427a3af9bbb40703a8016b2ed389286511ae Mon Sep 17 00:00:00 2001 +From: SeongJae Park <sj@kernel.org> +Date: Sun, 3 Sep 2023 05:02:44 +0000 +Subject: [PATCH] mm/damon/core: add todo for DAMOS interval validation + +Signed-off-by: SeongJae Park <sj@kernel.org> +--- + mm/damon/core.c | 1 + + 1 file changed, 1 insertion(+) + +diff --git a/mm/damon/core.c b/mm/damon/core.c +index 31ac8e4b1189..d2505528bd6d 100644 +--- 
a/mm/damon/core.c ++++ b/mm/damon/core.c +@@ -1634,6 +1634,7 @@ static void kdamond_init_intervals_sis(struct damon_ctx *ctx) + ctx->next_ops_update_sis = ctx->attrs.ops_update_interval / + sample_interval; + ++ /* todo: ensure apply_interval_us > sample_interval */ + damon_for_each_scheme(scheme, ctx) { + apply_interval = scheme->apply_interval_us ? + scheme->apply_interval_us : ctx->attrs.aggr_interval; +-- +2.39.2 + diff --git a/patches/next/mm-damon-core-initialize-esz_bp-from-damos_quota_ini.patch b/patches/next/mm-damon-core-initialize-esz_bp-from-damos_quota_ini.patch new file mode 100644 index 0000000..5e4c9cc --- /dev/null +++ b/patches/next/mm-damon-core-initialize-esz_bp-from-damos_quota_ini.patch @@ -0,0 +1,35 @@ +From 8b7585d6f106869f027f0286abe2cf6bb4497c6f Mon Sep 17 00:00:00 2001 +From: SeongJae Park <sj@kernel.org> +Date: Thu, 15 Feb 2024 15:36:41 -0800 +Subject: [PATCH] mm/damon/core: initialize ->esz_bp from + damos_quota_init_priv() + +damos_quota_init_priv() function should initialize all private fields of +struct damos_quota. However, it is not initializing the ->esz_bp field. +This could result in use of an uninitialized variable in +damon_feed_loop_next_input(). + +Note: not Cc-ing stable@ since no DAMON kernel API users are +causing the issue. 
+ +Fixes: 9294a037c015 ("mm/damon/core: implement goal-oriented feedback-driven quota auto-tuning") +Signed-off-by: SeongJae Park <sj@kernel.org> +--- + mm/damon/core.c | 1 + + 1 file changed, 1 insertion(+) + +diff --git a/mm/damon/core.c b/mm/damon/core.c +index 6d503c1c125e..939ecfcd4641 100644 +--- a/mm/damon/core.c ++++ b/mm/damon/core.c +@@ -346,6 +346,7 @@ static struct damos_quota *damos_quota_init(struct damos_quota *quota) + quota->charged_from = 0; + quota->charge_target_from = NULL; + quota->charge_addr_from = 0; ++ quota->esz_bp = 0; + return quota; + } + +-- +2.39.2 + diff --git a/patches/next/mm-damon-core-reduce-fields-copying-using-temporal-l.patch b/patches/next/mm-damon-core-reduce-fields-copying-using-temporal-l.patch new file mode 100644 index 0000000..4d26659 --- /dev/null +++ b/patches/next/mm-damon-core-reduce-fields-copying-using-temporal-l.patch @@ -0,0 +1,65 @@ +From b206b21db787588561fdbcedabc4bb9602487a31 Mon Sep 17 00:00:00 2001 +From: SeongJae Park <sj@kernel.org> +Date: Tue, 20 Feb 2024 15:58:28 -0800 +Subject: [PATCH] mm/damon/core: reduce fields copying using temporal list_head + backup + +Signed-off-by: SeongJae Park <sj@kernel.org> +--- + mm/damon/core.c | 29 ++++++++--------------------- + 1 file changed, 8 insertions(+), 21 deletions(-) + +diff --git a/mm/damon/core.c b/mm/damon/core.c +index 3592e313661f..9b777b35ac6d 100644 +--- a/mm/damon/core.c ++++ b/mm/damon/core.c +@@ -739,12 +739,12 @@ static void damos_update_quota_goals(struct damos_quota *dst, + damos_for_each_quota_goal_safe(goal, next, dst) { + struct damos_quota_goal *src_goal = + damos_nth_quota_goal(i++, src); ++ struct list_head head; + + if (src_goal) { +- goal->metric = src_goal->metric; +- goal->target_value = src_goal->target_value; +- goal->current_value = src_goal->current_value; +- goal->last_psi_total = src_goal->last_psi_total; ++ head = goal->list; ++ *goal = *src_goal; ++ goal->list = head; + continue; + } + damos_destroy_quota_goal(goal); +@@ 
-775,25 +775,12 @@ static int damos_update_filters(struct damos *dst, struct damos *src) + + damos_for_each_filter_safe(filter, next, dst) { + struct damos_filter *src_filter = damos_nth_filter(i++, src); ++ struct list_head head; + + if (src_filter) { +- filter->type = src_filter->type; +- filter->matching = src_filter->matching; +- switch (src_filter->type) { +- case DAMOS_FILTER_TYPE_ANON: +- break; +- case DAMOS_FILTER_TYPE_MEMCG: +- filter->memcg_id = src_filter->memcg_id; +- break; +- case DAMOS_FILTER_TYPE_ADDR: +- filter->addr_range = src_filter->addr_range; +- break; +- case DAMOS_FILTER_TYPE_TARGET: +- filter->target_idx = src_filter->target_idx; +- break; +- default: +- break; +- } ++ head = filter->list; ++ *filter = *src_filter; ++ filter->list = head; + continue; + } + damos_destroy_filter(filter); +-- +2.39.2 + diff --git a/patches/next/mm-damon-implement-DAMON-context-input-only-update-f.patch b/patches/next/mm-damon-implement-DAMON-context-input-only-update-f.patch new file mode 100644 index 0000000..758c161 --- /dev/null +++ b/patches/next/mm-damon-implement-DAMON-context-input-only-update-f.patch @@ -0,0 +1,382 @@ +From a2017f13d9858588a996aedae89c32d6214ca0f8 Mon Sep 17 00:00:00 2001 +From: SeongJae Park <sj@kernel.org> +Date: Mon, 19 Feb 2024 22:00:35 -0800 +Subject: [PATCH] mm/damon: implement DAMON context input-only update function + +work in progress. Only build on test machine confirmed. 
+ +Signed-off-by: SeongJae Park <sj@kernel.org> +--- + include/linux/damon.h | 5 + + mm/damon/core.c | 286 ++++++++++++++++++++++++++++++++++++++++++ + 2 files changed, 291 insertions(+) + +diff --git a/include/linux/damon.h b/include/linux/damon.h +index f7da65e1ac04..961b4df672c5 100644 +--- a/include/linux/damon.h ++++ b/include/linux/damon.h +@@ -714,12 +714,14 @@ void damon_update_region_access_rate(struct damon_region *r, bool accessed, + struct damos_filter *damos_new_filter(enum damos_filter_type type, + bool matching); + void damos_add_filter(struct damos *s, struct damos_filter *f); ++void damos_move_filter(struct damos *s, struct damos_filter *f); + void damos_destroy_filter(struct damos_filter *f); + + struct damos_quota_goal *damos_new_quota_goal( + enum damos_quota_goal_metric metric, + unsigned long target_value); + void damos_add_quota_goal(struct damos_quota *q, struct damos_quota_goal *g); ++void damos_move_quota_goal(struct damos_quota *q, struct damos_quota_goal *g); + void damos_destroy_quota_goal(struct damos_quota_goal *goal); + + struct damos *damon_new_scheme(struct damos_access_pattern *pattern, +@@ -728,11 +730,13 @@ struct damos *damon_new_scheme(struct damos_access_pattern *pattern, + struct damos_quota *quota, + struct damos_watermarks *wmarks); + void damon_add_scheme(struct damon_ctx *ctx, struct damos *s); ++void damon_move_scheme(struct damon_ctx *ctx, struct damos *s); + void damon_destroy_scheme(struct damos *s); + + struct damon_target *damon_new_target(void); + void damon_add_target(struct damon_ctx *ctx, struct damon_target *t); + bool damon_targets_empty(struct damon_ctx *ctx); ++void damon_move_target(struct damon_ctx *ctx, struct damon_target *t); + void damon_free_target(struct damon_target *t); + void damon_destroy_target(struct damon_target *t); + unsigned int damon_nr_regions(struct damon_target *t); +@@ -742,6 +746,7 @@ void damon_destroy_ctx(struct damon_ctx *ctx); + int damon_set_attrs(struct damon_ctx *ctx, struct 
damon_attrs *attrs); + void damon_set_schemes(struct damon_ctx *ctx, + struct damos **schemes, ssize_t nr_schemes); ++int damon_update_ctx(struct damon_ctx *old_ctx, struct damon_ctx *new_ctx); + int damon_nr_running_ctxs(void); + bool damon_is_registered_ops(enum damon_ops_id id); + int damon_register_ops(struct damon_operations *ops); +diff --git a/mm/damon/core.c b/mm/damon/core.c +index 37a19534a6f5..3592e313661f 100644 +--- a/mm/damon/core.c ++++ b/mm/damon/core.c +@@ -299,6 +299,12 @@ static void damos_del_filter(struct damos_filter *f) + list_del(&f->list); + } + ++void damos_move_filter(struct damos *s, struct damos_filter *f) ++{ ++ damos_del_filter(f); ++ damos_add_filter(s, f); ++} ++ + static void damos_free_filter(struct damos_filter *f) + { + kfree(f); +@@ -335,6 +341,12 @@ static void damos_del_quota_goal(struct damos_quota_goal *g) + list_del(&g->list); + } + ++void damos_move_quota_goal(struct damos_quota *q, struct damos_quota_goal *g) ++{ ++ damos_del_quota_goal(g); ++ damos_add_quota_goal(q, g); ++} ++ + static void damos_free_quota_goal(struct damos_quota_goal *g) + { + kfree(g); +@@ -416,6 +428,12 @@ static void damon_del_scheme(struct damos *s) + list_del(&s->list); + } + ++void damon_move_scheme(struct damon_ctx *ctx, struct damos *s) ++{ ++ damon_del_scheme(s); ++ damon_add_scheme(ctx, s); ++} ++ + static void damon_free_scheme(struct damos *s) + { + kfree(s); +@@ -471,6 +489,12 @@ static void damon_del_target(struct damon_target *t) + list_del(&t->list); + } + ++void damon_move_target(struct damon_ctx *ctx, struct damon_target *t) ++{ ++ damon_del_target(t); ++ damon_add_target(ctx, t); ++} ++ + void damon_free_target(struct damon_target *t) + { + struct damon_region *r, *next; +@@ -692,6 +716,268 @@ void damon_set_schemes(struct damon_ctx *ctx, struct damos **schemes, + damon_add_scheme(ctx, schemes[i]); + } + ++static struct damos_quota_goal *damos_nth_quota_goal( ++ int n, struct damos_quota *q) ++{ ++ struct damos_quota_goal *goal; ++ 
int i = 0; ++ ++ damos_for_each_quota_goal(goal, q) { ++ if (i++ == n) ++ return goal; ++ } ++ return NULL; ++} ++ ++ ++static void damos_update_quota_goals(struct damos_quota *dst, ++ struct damos_quota *src) ++{ ++ struct damos_quota_goal *goal, *next; ++ int i = 0, j = 0; ++ ++ damos_for_each_quota_goal_safe(goal, next, dst) { ++ struct damos_quota_goal *src_goal = ++ damos_nth_quota_goal(i++, src); ++ ++ if (src_goal) { ++ goal->metric = src_goal->metric; ++ goal->target_value = src_goal->target_value; ++ goal->current_value = src_goal->current_value; ++ goal->last_psi_total = src_goal->last_psi_total; ++ continue; ++ } ++ damos_destroy_quota_goal(goal); ++ } ++ damos_for_each_quota_goal_safe(goal, next, src) { ++ if (j++ < i) ++ continue; ++ damos_move_quota_goal(dst, goal); ++ } ++} ++ ++static struct damos_filter *damos_nth_filter(int n, struct damos *s) ++{ ++ struct damos_filter *filter; ++ int i = 0; ++ ++ damos_for_each_filter(filter, s) { ++ if (i++ == n) ++ return filter; ++ } ++ return NULL; ++} ++ ++static int damos_update_filters(struct damos *dst, struct damos *src) ++{ ++ struct damos_filter *filter, *next; ++ int i = 0, j = 0; ++ ++ damos_for_each_filter_safe(filter, next, dst) { ++ struct damos_filter *src_filter = damos_nth_filter(i++, src); ++ ++ if (src_filter) { ++ filter->type = src_filter->type; ++ filter->matching = src_filter->matching; ++ switch (src_filter->type) { ++ case DAMOS_FILTER_TYPE_ANON: ++ break; ++ case DAMOS_FILTER_TYPE_MEMCG: ++ filter->memcg_id = src_filter->memcg_id; ++ break; ++ case DAMOS_FILTER_TYPE_ADDR: ++ filter->addr_range = src_filter->addr_range; ++ break; ++ case DAMOS_FILTER_TYPE_TARGET: ++ filter->target_idx = src_filter->target_idx; ++ break; ++ default: ++ break; ++ } ++ continue; ++ } ++ damos_destroy_filter(filter); ++ } ++ ++ damos_for_each_filter_safe(filter, next, src) { ++ if (j++ < i) ++ continue; ++ damos_move_filter(dst, filter); ++ } ++ return 0; ++} ++ ++static struct damos *damon_nth_scheme(int 
n, struct damon_ctx *ctx) ++{ ++ struct damos *s; ++ int i = 0; ++ ++ damon_for_each_scheme(s, ctx) { ++ if (i++ == n) ++ return s; ++ } ++ return NULL; ++} ++ ++static int damon_update_scheme(struct damos *dst, struct damos *src) ++{ ++ int err; ++ ++ dst->pattern = src->pattern; ++ dst->action = src->action; ++ dst->apply_interval_us = src->apply_interval_us; ++ dst->quota.ms = src->quota.ms; ++ dst->quota.sz = src->quota.sz; ++ dst->quota.reset_interval = src->quota.reset_interval; ++ dst->quota.weight_sz = src->quota.weight_sz; ++ dst->quota.weight_nr_accesses = src->quota.weight_nr_accesses; ++ dst->quota.weight_age = src->quota.weight_age; ++ ++ damos_update_quota_goals(&dst->quota, &src->quota); ++ ++ dst->wmarks = src->wmarks; ++ ++ err = damos_update_filters(dst, src); ++ return err; ++} ++ ++static int damon_update_schemes(struct damon_ctx *dst, struct damon_ctx *src) ++{ ++ struct damos *scheme, *next; ++ int i = 0, j = 0, err; ++ ++ damon_for_each_scheme_safe(scheme, next, dst) { ++ struct damos *src_scheme = damon_nth_scheme(i++, src); ++ ++ if (src_scheme) { ++ err = damon_update_scheme(scheme, src_scheme); ++ if (err) ++ return err; ++ continue; ++ } ++ damon_destroy_scheme(scheme); ++ } ++ ++ damon_for_each_scheme_safe(scheme, next, src) { ++ if (j++ < i) ++ continue; ++ damon_move_scheme(dst, scheme); ++ } ++ return 0; ++} ++ ++static int damon_update_target_regions(struct damon_target *dst, ++ struct damon_target *src) ++{ ++ struct damon_addr_range *ranges; ++ int i = 0, err; ++ struct damon_region *r; ++ ++ damon_for_each_region(r, src) ++ i++; ++ if (!i) ++ return 0; ++ ranges = kmalloc_array(i, sizeof(*ranges), GFP_KERNEL | __GFP_NOWARN); ++ if (!ranges) ++ return -ENOMEM; ++ i = 0; ++ damon_for_each_region(r, src) { ++ if (r->ar.start > r->ar.end) { ++ err = -EINVAL; ++ goto out; ++ } ++ ranges[i].start = r->ar.start; ++ ranges[i++].end = r->ar.end; ++ if (i == 1) ++ continue; ++ if (ranges[i - 2].end > ranges[i - 1].start) { ++ err = 
-EINVAL; ++ goto out; ++ } ++ } ++ err = damon_set_regions(dst, ranges, i); ++out: ++ kfree(ranges); ++ return err; ++} ++ ++static struct damon_target *damon_nth_target(int n, struct damon_ctx *ctx) ++{ ++ struct damon_target *t; ++ int i = 0; ++ ++ damon_for_each_target(t, ctx) { ++ if (i++ == n) ++ return t; ++ } ++ return NULL; ++} ++ ++static int damon_update_targets(struct damon_ctx *dst, struct damon_ctx *src) ++{ ++ struct damon_target *t, *next; ++ int i = 0, j = 0, err; ++ ++ damon_for_each_target_safe(t, next, dst) { ++ struct damon_target *src_target = damon_nth_target(i++, src); ++ ++ if (damon_target_has_pid(dst)) ++ put_pid(t->pid); ++ ++ if (src_target) { ++ if (damon_target_has_pid(src)) ++ get_pid(src_target->pid); ++ t->pid = src_target->pid; ++ ++ err = damon_update_target_regions(t, src_target); ++ if (err) ++ return err; ++ continue; ++ } ++ damon_destroy_target(t); ++ } ++ ++ damon_for_each_target_safe(t, next, src) { ++ if (j++ < i) ++ continue; ++ damon_move_target(dst, t); ++ } ++ return 0; ++} ++ ++/** ++ * damon_update_ctx_prams() - Update input parameters of given DAMON context. ++ * @old_ctx: DAMON context that need to be udpated. ++ * @new_ctx: DAMON context that having new user parameters. ++ * ++ * damon_ctx contains user input parameters for monitoring requests, internal ++ * status of the monitoring, and the results of the monitoring. This function ++ * updates only input parameters for monitoring requests of @old_ctx with those ++ * of @new_ctx, while keeping the internal status and monitoring results. This ++ * function is aimed to be used for online tuning-like use case. 
++ */ ++int damon_update_ctx(struct damon_ctx *old_ctx, struct damon_ctx *new_ctx) ++{ ++ int err; ++ ++ err = damon_update_schemes(old_ctx, new_ctx); ++ if (err) ++ return err; ++ err = damon_update_targets(old_ctx, new_ctx); ++ if (err) ++ return err; ++ err = damon_set_attrs(old_ctx, &new_ctx->attrs); ++ if (err) ++ return err; ++ /* ++ * ->ops update should be done at least after targets update, for pid ++ * handling ++ */ ++ old_ctx->ops = new_ctx->ops; ++ ++ return 0; ++} ++ + /** + * damon_nr_running_ctxs() - Return number of currently running contexts. + */ +-- +2.39.2 + diff --git a/patches/next/mm-damon-paddr-check-access-in-page-level-again-for-.patch b/patches/next/mm-damon-paddr-check-access-in-page-level-again-for-.patch new file mode 100644 index 0000000..2e0ef7a --- /dev/null +++ b/patches/next/mm-damon-paddr-check-access-in-page-level-again-for-.patch @@ -0,0 +1,54 @@ +From acc820dc05de5ebd29cd4a7c8c2caf6339c562b2 Mon Sep 17 00:00:00 2001 +From: SeongJae Park <sj@kernel.org> +Date: Fri, 8 Mar 2024 16:06:00 -0800 +Subject: [PATCH] mm/damon/paddr: check access in page level again for pageout + DAMOS + +DAMON does access monitoring in the region granularity in best level. +Hence, a region could be reported as cold, while a few pages in the +region is hot. To fill the gap, DAMOS pageout action implementation of +DAMON operations set for the physical address space (paddr) checks the +access to each page again by setting 'ignore_references' argument of +'reclaim_pages()' as false. Since DAMOS filters of young type is +introduced, doing the recheck always could ignore users' intention. Do +the recheck only if users didn't make specific such request by adding +the young type DAMOS filter. 
+ +Signed-off-by: SeongJae Park <sj@kernel.org> +--- + mm/damon/paddr.c | 12 +++++++++++- + 1 file changed, 11 insertions(+), 1 deletion(-) + +diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c +index 5685ba485097..d5f2f7ddf863 100644 +--- a/mm/damon/paddr.c ++++ b/mm/damon/paddr.c +@@ -244,6 +244,16 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s) + { + unsigned long addr, applied; + LIST_HEAD(folio_list); ++ bool ignore_references = false; ++ struct damos_filter *filter; ++ ++ /* respect user's page level reference check handling request */ ++ damos_for_each_filter(filter, s) { ++ if (filter->type == DAMOS_FILTER_TYPE_YOUNG) { ++ ignore_references = true; ++ break; ++ } ++ } + + for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) { + struct folio *folio = damon_get_folio(PHYS_PFN(addr)); +@@ -265,7 +275,7 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s) + put_folio: + folio_put(folio); + } +- applied = reclaim_pages(&folio_list, false); ++ applied = reclaim_pages(&folio_list, ignore_references); + cond_resched(); + return applied * PAGE_SIZE; + } +-- +2.39.2 + diff --git a/patches/next/mm-damon-paddr-do-page-level-access-check-for-pageou.patch b/patches/next/mm-damon-paddr-do-page-level-access-check-for-pageou.patch new file mode 100644 index 0000000..9e7768e --- /dev/null +++ b/patches/next/mm-damon-paddr-do-page-level-access-check-for-pageou.patch @@ -0,0 +1,55 @@ +From 800b8d2a04b7d1bb275b97ca8f4ffecf660104b2 Mon Sep 17 00:00:00 2001 +From: SeongJae Park <sj@kernel.org> +Date: Fri, 8 Mar 2024 16:21:03 -0800 +Subject: [PATCH] mm/damon/paddr: do page level access check for pageout DAMOS + action on its own + +Signed-off-by: SeongJae Park <sj@kernel.org> +--- + mm/damon/paddr.c | 16 ++++++++++++---- + 1 file changed, 12 insertions(+), 4 deletions(-) + +diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c +index d5f2f7ddf863..974edef1740d 100644 +--- a/mm/damon/paddr.c ++++ b/mm/damon/paddr.c 
+@@ -244,16 +244,22 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s) + { + unsigned long addr, applied; + LIST_HEAD(folio_list); +- bool ignore_references = false; ++ bool install_young_filter = true; + struct damos_filter *filter; + +- /* respect user's page level reference check handling request */ ++ /* check access in page level again by default */ + damos_for_each_filter(filter, s) { + if (filter->type == DAMOS_FILTER_TYPE_YOUNG) { +- ignore_references = true; ++ install_young_filter = false; + break; + } + } ++ if (install_young_filter) { ++ filter = damos_new_filter(DAMOS_FILTER_TYPE_YOUNG, true); ++ if (!filter) ++ return 0; ++ damos_add_filter(s, filter); ++ } + + for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) { + struct folio *folio = damon_get_folio(PHYS_PFN(addr)); +@@ -275,7 +281,9 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s) + put_folio: + folio_put(folio); + } +- applied = reclaim_pages(&folio_list, ignore_references); ++ if (install_young_filter) ++ damos_destroy_filter(filter); ++ applied = reclaim_pages(&folio_list, true); + cond_resched(); + return applied * PAGE_SIZE; + } +-- +2.39.2 + diff --git a/patches/next/mm-damon-paddr-implement-damon_folio_mkold.patch b/patches/next/mm-damon-paddr-implement-damon_folio_mkold.patch new file mode 100644 index 0000000..e0ada8f --- /dev/null +++ b/patches/next/mm-damon-paddr-implement-damon_folio_mkold.patch @@ -0,0 +1,83 @@ +From 385c46a78628d32b8cd5f56f5a0bc4f140226cb0 Mon Sep 17 00:00:00 2001 +From: SeongJae Park <sj@kernel.org> +Date: Fri, 8 Mar 2024 17:54:17 -0800 +Subject: [PATCH] mm/damon/paddr: implement damon_folio_mkold() + +damon_pa_mkold() receives a physical address, finds the folio covering +the address, and makes the folio as old. Split the internal logic for +checking access to the given folio, for future reuse of the logic. 
+Also, change the name of the rmap walker from __damon_pa_mkold() to +damon_folio_mkold_one() for more consistent naming. + +Signed-off-by: SeongJae Park <sj@kernel.org> +Tested-by: Honggyu Kim <honggyu.kim@sk.com> +--- + mm/damon/paddr.c | 27 ++++++++++++++++----------- + 1 file changed, 16 insertions(+), 11 deletions(-) + +diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c +index 25c3ba2a9eaf..310b803c6277 100644 +--- a/mm/damon/paddr.c ++++ b/mm/damon/paddr.c +@@ -16,8 +16,8 @@ + #include "../internal.h" + #include "ops-common.h" + +-static bool __damon_pa_mkold(struct folio *folio, struct vm_area_struct *vma, +- unsigned long addr, void *arg) ++static bool damon_folio_mkold_one(struct folio *folio, ++ struct vm_area_struct *vma, unsigned long addr, void *arg) + { + DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, addr, 0); + +@@ -31,33 +31,38 @@ static bool __damon_pa_mkold(struct folio *folio, struct vm_area_struct *vma, + return true; + } + +-static void damon_pa_mkold(unsigned long paddr) ++static void damon_folio_mkold(struct folio *folio) + { +- struct folio *folio = damon_get_folio(PHYS_PFN(paddr)); + struct rmap_walk_control rwc = { +- .rmap_one = __damon_pa_mkold, ++ .rmap_one = damon_folio_mkold_one, + .anon_lock = folio_lock_anon_vma_read, + }; + bool need_lock; + +- if (!folio) +- return; +- + if (!folio_mapped(folio) || !folio_raw_mapping(folio)) { + folio_set_idle(folio); +- goto out; ++ return; + } + + need_lock = !folio_test_anon(folio) || folio_test_ksm(folio); + if (need_lock && !folio_trylock(folio)) +- goto out; ++ return; + + rmap_walk(folio, &rwc); + + if (need_lock) + folio_unlock(folio); + +-out: ++} ++ ++static void damon_pa_mkold(unsigned long paddr) ++{ ++ struct folio *folio = damon_get_folio(PHYS_PFN(paddr)); ++ ++ if (!folio) ++ return; ++ ++ damon_folio_mkold(folio); + folio_put(folio); + } + +-- +2.39.2 + diff --git a/patches/next/mm-damon-paddr-implement-damon_folio_young.patch b/patches/next/mm-damon-paddr-implement-damon_folio_young.patch 
new file mode 100644 index 0000000..f9e1157 --- /dev/null +++ b/patches/next/mm-damon-paddr-implement-damon_folio_young.patch @@ -0,0 +1,92 @@ +From 2f862e6bd56d36f74fc6b4d93e63d0f7a2cf19e9 Mon Sep 17 00:00:00 2001 +From: SeongJae Park <sj@kernel.org> +Date: Tue, 5 Mar 2024 16:03:19 -0800 +Subject: [PATCH] mm/damon/paddr: implement damon_folio_young() + +damon_pa_young() receives physical address, get the folio covering the +address, and show if the folio is accessed since the last check. Split +the internal logic for checking access to the given folio, for future +reuse of the logic from code that already got the folio of the address +of the question. Also, change the rmap walker function's name from +__damon_pa_young() to damon_folio_young_one(), for consistent naming. + +Signed-off-by: SeongJae Park <sj@kernel.org> +Tested-by: Honggyu Kim <honggyu.kim@sk.com> +--- + mm/damon/paddr.c | 32 +++++++++++++++++++------------- + 1 file changed, 19 insertions(+), 13 deletions(-) + +diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c +index 5e6dc312072c..25c3ba2a9eaf 100644 +--- a/mm/damon/paddr.c ++++ b/mm/damon/paddr.c +@@ -79,8 +79,8 @@ static void damon_pa_prepare_access_checks(struct damon_ctx *ctx) + } + } + +-static bool __damon_pa_young(struct folio *folio, struct vm_area_struct *vma, +- unsigned long addr, void *arg) ++static bool damon_folio_young_one(struct folio *folio, ++ struct vm_area_struct *vma, unsigned long addr, void *arg) + { + bool *accessed = arg; + DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, addr, 0); +@@ -111,38 +111,44 @@ static bool __damon_pa_young(struct folio *folio, struct vm_area_struct *vma, + return *accessed == false; + } + +-static bool damon_pa_young(unsigned long paddr, unsigned long *folio_sz) ++static bool damon_folio_young(struct folio *folio) + { +- struct folio *folio = damon_get_folio(PHYS_PFN(paddr)); + bool accessed = false; + struct rmap_walk_control rwc = { + .arg = &accessed, +- .rmap_one = __damon_pa_young, ++ .rmap_one = 
damon_folio_young_one, + .anon_lock = folio_lock_anon_vma_read, + }; + bool need_lock; + +- if (!folio) +- return false; +- + if (!folio_mapped(folio) || !folio_raw_mapping(folio)) { + if (folio_test_idle(folio)) +- accessed = false; ++ return false; + else +- accessed = true; +- goto out; ++ return true; + } + + need_lock = !folio_test_anon(folio) || folio_test_ksm(folio); + if (need_lock && !folio_trylock(folio)) +- goto out; ++ return false; + + rmap_walk(folio, &rwc); + + if (need_lock) + folio_unlock(folio); + +-out: ++ return accessed; ++} ++ ++static bool damon_pa_young(unsigned long paddr, unsigned long *folio_sz) ++{ ++ struct folio *folio = damon_get_folio(PHYS_PFN(paddr)); ++ bool accessed; ++ ++ if (!folio) ++ return false; ++ ++ accessed = damon_folio_young(folio); + *folio_sz = folio_size(folio); + folio_put(folio); + return accessed; +-- +2.39.2 + diff --git a/patches/next/mm-damon-paddr-support-DAMOS-filter-type-YOUNG.patch b/patches/next/mm-damon-paddr-support-DAMOS-filter-type-YOUNG.patch new file mode 100644 index 0000000..27e66d8 --- /dev/null +++ b/patches/next/mm-damon-paddr-support-DAMOS-filter-type-YOUNG.patch @@ -0,0 +1,34 @@ +From 3af2786e697d4a9a74e238584d677779f9008c7e Mon Sep 17 00:00:00 2001 +From: SeongJae Park <sj@kernel.org> +Date: Tue, 5 Mar 2024 16:04:59 -0800 +Subject: [PATCH] mm/damon/paddr: support DAMOS filter type YOUNG + +DAMOS filter of type YOUNG is defined, but not yet implemented by any +DAMON operations set. Add the implementation to the DAMON operations +set for the physical address space, paddr. 
+
+Signed-off-by: SeongJae Park <sj@kernel.org>
+Tested-by: Honggyu Kim <honggyu.kim@sk.com>
+---
+ mm/damon/paddr.c | 5 +++++
+ 1 file changed, 5 insertions(+)
+
+diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
+index 310b803c6277..5685ba485097 100644
+--- a/mm/damon/paddr.c
++++ b/mm/damon/paddr.c
+@@ -214,6 +214,11 @@ static bool __damos_pa_filter_out(struct damos_filter *filter,
+ 		matched = filter->memcg_id == mem_cgroup_id(memcg);
+ 		rcu_read_unlock();
+ 		break;
++	case DAMOS_FILTER_TYPE_YOUNG:
++		matched = damon_folio_young(folio);
++		if (matched)
++			damon_folio_mkold(folio);
++		break;
+ 	default:
+ 		break;
+ 	}
+--
+2.39.2
+
diff --git a/patches/next/mm-damon-sysfs-Add-a-file-for-simple-checking-memcg-.patch b/patches/next/mm-damon-sysfs-Add-a-file-for-simple-checking-memcg-.patch
new file mode 100644
index 0000000..b5c4af6
--- /dev/null
+++ b/patches/next/mm-damon-sysfs-Add-a-file-for-simple-checking-memcg-.patch
@@ -0,0 +1,52 @@
+From 258286213eb74dc716e1108d4a51bc1504e9c79c Mon Sep 17 00:00:00 2001
+From: SeongJae Park <sj@kernel.org>
+Date: Fri, 18 Nov 2022 23:50:59 +0000
+Subject: [PATCH] mm/damon/sysfs: Add a file for simple checking memcg ids and
+ paths
+
+Signed-off-by: SeongJae Park <sj@kernel.org>
+---
+ mm/damon/sysfs.c | 26 ++++++++++++++++++++++++++
+ 1 file changed, 26 insertions(+)
+
+diff --git a/mm/damon/sysfs.c b/mm/damon/sysfs.c
+index 6fee383bc0c5..bc6204d66755 100644
+--- a/mm/damon/sysfs.c
++++ b/mm/damon/sysfs.c
+@@ -1885,7 +1885,33 @@ static void damon_sysfs_ui_dir_release(struct kobject *kobj)
+ 	kfree(container_of(kobj, struct damon_sysfs_ui_dir, kobj));
+ }
+
++static ssize_t debug_show(struct kobject *kobj,
++		struct kobj_attribute *attr, char *buf)
++{
++#ifdef CONFIG_MEMCG
++	struct mem_cgroup *memcg;
++	char *path = kmalloc(sizeof(*path) * PATH_MAX, GFP_KERNEL);
++
++	if (!path)
++		return -ENOMEM;
++
++	for (memcg = mem_cgroup_iter(NULL, NULL, NULL); memcg; memcg =
++			mem_cgroup_iter(NULL, memcg, NULL)) {
++		cgroup_path(memcg->css.cgroup, path, PATH_MAX);
++		pr_info("id: %u, path: %s\n", mem_cgroup_id(memcg), path);
++	}
++
++	kfree(path);
++#endif
++
++	return 0;
++}
++
++static struct kobj_attribute damon_sysfs_debug_attr =
++		__ATTR_RO_MODE(debug, 0400);
++
+ static struct attribute *damon_sysfs_ui_dir_attrs[] = {
++	&damon_sysfs_debug_attr.attr,
+ 	NULL,
+ };
+ ATTRIBUTE_GROUPS(damon_sysfs_ui_dir);
+--
+2.39.2
+
diff --git a/patches/next/mm-vmscan-remove-ignore_references-argument-of-recla.patch b/patches/next/mm-vmscan-remove-ignore_references-argument-of-recla.patch
new file mode 100644
index 0000000..ef09b37
--- /dev/null
+++ b/patches/next/mm-vmscan-remove-ignore_references-argument-of-recla.patch
@@ -0,0 +1,95 @@
+From c8d9cbaa9ab430e46e774b8075a96cee6b043eb9 Mon Sep 17 00:00:00 2001
+From: SeongJae Park <sj@kernel.org>
+Date: Fri, 8 Mar 2024 16:23:33 -0800
+Subject: [PATCH] mm/vmscan: remove ignore_references argument of
+ reclaim_pages()
+
+All reclaim_pages() callers are setting ignore_references as true.
+Remove the argument to remove any possible confusion.
+
+Signed-off-by: SeongJae Park <sj@kernel.org>
+---
+ mm/damon/paddr.c | 2 +-
+ mm/internal.h    | 2 +-
+ mm/madvise.c     | 4 ++--
+ mm/vmscan.c      | 6 +++---
+ 4 files changed, 7 insertions(+), 7 deletions(-)
+
+diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
+index 974edef1740d..18797c1b419b 100644
+--- a/mm/damon/paddr.c
++++ b/mm/damon/paddr.c
+@@ -283,7 +283,7 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s)
+ 	}
+ 	if (install_young_filter)
+ 		damos_destroy_filter(filter);
+-	applied = reclaim_pages(&folio_list, true);
++	applied = reclaim_pages(&folio_list);
+ 	cond_resched();
+ 	return applied * PAGE_SIZE;
+ }
+diff --git a/mm/internal.h b/mm/internal.h
+index 5d5e49b86fe3..6194592386a6 100644
+--- a/mm/internal.h
++++ b/mm/internal.h
+@@ -1051,7 +1051,7 @@ extern unsigned long __must_check vm_mmap_pgoff(struct file *, unsigned long,
+         unsigned long, unsigned long);
+
+ extern void set_pageblock_order(void);
+-unsigned long reclaim_pages(struct list_head *folio_list, bool ignore_references);
++unsigned long reclaim_pages(struct list_head *folio_list);
+ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
+ 		struct list_head *folio_list);
+ /* The ALLOC_WMARK bits are used as an index to zone->watermark */
+diff --git a/mm/madvise.c b/mm/madvise.c
+index a366ff8e0a6d..40d688c934e9 100644
+--- a/mm/madvise.c
++++ b/mm/madvise.c
+@@ -444,7 +444,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
+ huge_unlock:
+ 	spin_unlock(ptl);
+ 	if (pageout)
+-		reclaim_pages(&folio_list, true);
++		reclaim_pages(&folio_list);
+ 	return 0;
+ }
+
+@@ -558,7 +558,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
+ 		pte_unmap_unlock(start_pte, ptl);
+ 	}
+ 	if (pageout)
+-		reclaim_pages(&folio_list, true);
++		reclaim_pages(&folio_list);
+ 	cond_resched();
+
+ 	return 0;
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index bca2d9981c95..fdea8c663f9c 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -2147,7 +2147,7 @@ static unsigned int reclaim_folio_list(struct list_head *folio_list,
+ 	return nr_reclaimed;
+ }
+
+-unsigned long reclaim_pages(struct list_head *folio_list, bool ignore_references)
++unsigned long reclaim_pages(struct list_head *folio_list)
+ {
+ 	int nid;
+ 	unsigned int nr_reclaimed = 0;
+@@ -2170,11 +2170,11 @@ unsigned long reclaim_pages(struct list_head *folio_list, bool ignore_references
+ 		}
+
+ 		nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid),
+-				ignore_references);
++				true);
+ 		nid = folio_nid(lru_to_folio(folio_list));
+ 	} while (!list_empty(folio_list));
+
+-	nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid), ignore_references);
++	nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid), true);
+
+ 	memalloc_noreclaim_restore(noreclaim_flag);
+
+--
+2.39.2
+
diff --git a/patches/next/patches-written-or-reviewed-by-SJ-but-not-merged-in-.patch b/patches/next/patches-written-or-reviewed-by-SJ-but-not-merged-in-.patch
new file mode 100644
index 0000000..198651a
--- /dev/null
+++ b/patches/next/patches-written-or-reviewed-by-SJ-but-not-merged-in-.patch
@@ -0,0 +1,18 @@
+From 5ff5e5c40bb640ef9be9b66150db82416ab7c390 Mon Sep 17 00:00:00 2001
+From: SeongJae Park <sj@kernel.org>
+Date: Fri, 30 Jun 2023 19:05:31 +0000
+Subject: [PATCH] === patches written or reviewed by SJ but not merged in -mm
+ ===
+
+Signed-off-by: SeongJae Park <sj@kernel.org>
+---
+ damon_meta_changes/GflqK3Cq | 0
+ 1 file changed, 0 insertions(+), 0 deletions(-)
+ create mode 100644 damon_meta_changes/GflqK3Cq
+
+diff --git a/damon_meta_changes/GflqK3Cq b/damon_meta_changes/GflqK3Cq
+new file mode 100644
+index 000000000000..e69de29bb2d1
+--
+2.39.2
+
diff --git a/patches/next/selftests-damon-_damon_sysfs-support-commit_schemes_.patch b/patches/next/selftests-damon-_damon_sysfs-support-commit_schemes_.patch
new file mode 100644
index 0000000..7ed592d
--- /dev/null
+++ b/patches/next/selftests-damon-_damon_sysfs-support-commit_schemes_.patch
@@ -0,0 +1,29 @@
+From f204e433d88834ab93930fcb24670d324b67e1f4 Mon Sep 17 00:00:00 2001
+From: SeongJae Park <sj@kernel.org>
+Date: Mon, 12 Feb 2024 17:41:44 -0800
+Subject: [PATCH] selftests/damon/_damon_sysfs: support
+ commit_schemes_quota_goals
+
+Signed-off-by: SeongJae Park <sj@kernel.org>
+---
+ tools/testing/selftests/damon/_damon_sysfs.py | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+diff --git a/tools/testing/selftests/damon/_damon_sysfs.py b/tools/testing/selftests/damon/_damon_sysfs.py
+index d23d7398a27a..dc6696219387 100644
+--- a/tools/testing/selftests/damon/_damon_sysfs.py
++++ b/tools/testing/selftests/damon/_damon_sysfs.py
+@@ -361,6 +361,10 @@ class Kdamond:
+                 stat_values.append(int(content))
+             scheme.stats = DamosStats(*stat_values)
+
++    def commit_schemes_quota_goals(self):
++        return write_file(os.path.join(self.sysfs_dir(), 'state'),
++                'commit_schemes_quota_goals')
++
+ class Kdamonds:
+     kdamonds = []
+
+--
+2.39.2
+
diff --git a/patches/next/series b/patches/next/series
new file mode 100644
index 0000000..14f9bef
--- /dev/null
+++ b/patches/next/series
@@ -0,0 +1,39 @@
+69f8e5dc5ac5e1ec6fcc97400bc561d179e9501c
+mark-start-of-DAMON-hack-tree.patch
+Add-damon-suffix-to-the-version-name.patch
+temporal-fixes.patch
+Revert-kselftest-runner.sh-Propagate-SIGTERM-to-runn.patch
+patches-written-or-reviewed-by-SJ-but-not-merged-in-.patch
+DAMOS-filter-type-YOUNG.patch
+mm-damon-paddr-implement-damon_folio_young.patch
+mm-damon-paddr-implement-damon_folio_mkold.patch
+mm-damon-add-DAMOS-filter-type-YOUNG.patch
+mm-damon-paddr-support-DAMOS-filter-type-YOUNG.patch
+Docs-mm-damon-design-document-young-page-type-DAMOS-.patch
+Docs-admin-guide-mm-damon-usage-update-for-young-pag.patch
+Docs-ABI-damon-update-for-youg-page-type-DAMOS-filte.patch
+young-filter-followup.patch
+mm-damon-paddr-check-access-in-page-level-again-for-.patch
+mm-damon-paddr-do-page-level-access-check-for-pageou.patch
+mm-vmscan-remove-ignore_references-argument-of-recla.patch
+trivial-fixes.patch
+Docs-admin-guide-mm-damon-usage-fix-wrong-example-of.patch
+mm-damon-core-initialize-esz_bp-from-damos_quota_ini.patch
+commits-aiming-not-to-be-posted.patch
+mm-damon-Add-debug-code.patch
+mm-damon-sysfs-Add-a-file-for-simple-checking-memcg-.patch
+mm-damon-core-add-todo-for-DAMOS-interval-validation.patch
+mm-damon-core-add-debugging-purpose-log-of-tuned-esz.patch
+Add-debug-log-for-PSI.patch
+hacks-in-progress.patch
+tests-improvement.patch
+selftests-damon-_damon_sysfs-support-commit_schemes_.patch
+docs-improvement.patch
+Docs-mm-damon-design-add-API-link-to-damon_ctx.patch
+Docs-mm-damon-design-use-a-list-for-supported-filter.patch
+commit-cleanup.patch
+mm-damon-implement-DAMON-context-input-only-update-f.patch
+mm-damon-core-reduce-fields-copying-using-temporal-l.patch
+mm-damon-core-a-bit-more-cleanup-and-comments.patch
+ACMA.patch
+drivers-virtio-virtio_balloon-integrate-ACMA-and-bal.patch
diff --git a/patches/next/temporal-fixes.patch b/patches/next/temporal-fixes.patch
new file mode 100644
index 0000000..2bfb883
--- /dev/null
+++ b/patches/next/temporal-fixes.patch
@@ -0,0 +1,17 @@
+From 4347dc08a478e7725eab60f6cd856ba485264589 Mon Sep 17 00:00:00 2001
+From: SeongJae Park <sj@kernel.org>
+Date: Fri, 30 Jun 2023 19:05:08 +0000
+Subject: [PATCH] === temporal fixes ===
+
+Signed-off-by: SeongJae Park <sj@kernel.org>
+---
+ damon_meta_changes/M9DezupS | 0
+ 1 file changed, 0 insertions(+), 0 deletions(-)
+ create mode 100644 damon_meta_changes/M9DezupS
+
+diff --git a/damon_meta_changes/M9DezupS b/damon_meta_changes/M9DezupS
+new file mode 100644
+index 000000000000..e69de29bb2d1
+--
+2.39.2
+
diff --git a/patches/next/tests-improvement.patch b/patches/next/tests-improvement.patch
new file mode 100644
index 0000000..de4807b
--- /dev/null
+++ b/patches/next/tests-improvement.patch
@@ -0,0 +1,17 @@
+From 08647047a2c75d0de641f286c97322a0e739cef4 Mon Sep 17 00:00:00 2001
+From: SeongJae Park <sj@kernel.org>
+Date: Thu, 11 Apr 2024 16:08:17 -0700
+Subject: [PATCH] ==== tests improvement ====
+
+Signed-off-by: SeongJae Park <sj@kernel.org>
+---
+ damon_meta_changes/fxg59jjv | 0
+ 1 file changed, 0 insertions(+), 0 deletions(-)
+ create mode 100644 damon_meta_changes/fxg59jjv
+
+diff --git a/damon_meta_changes/fxg59jjv b/damon_meta_changes/fxg59jjv
+new file mode 100644
+index 000000000000..e69de29bb2d1
+--
+2.39.2
+
diff --git a/patches/next/trivial-fixes.patch b/patches/next/trivial-fixes.patch
new file mode 100644
index 0000000..5615128
--- /dev/null
+++ b/patches/next/trivial-fixes.patch
@@ -0,0 +1,17 @@
+From 744118913be23e3b29c7da0e2509cf18730e0faa Mon Sep 17 00:00:00 2001
+From: SeongJae Park <sj@kernel.org>
+Date: Thu, 11 Apr 2024 16:04:34 -0700
+Subject: [PATCH] ==== trivial fixes ====
+
+Signed-off-by: SeongJae Park <sj@kernel.org>
+---
+ damon_meta_changes/xmRVOvHE | 0
+ 1 file changed, 0 insertions(+), 0 deletions(-)
+ create mode 100644 damon_meta_changes/xmRVOvHE
+
+diff --git a/damon_meta_changes/xmRVOvHE b/damon_meta_changes/xmRVOvHE
+new file mode 100644
+index 000000000000..e69de29bb2d1
+--
+2.39.2
+
diff --git a/patches/next/young-filter-followup.patch b/patches/next/young-filter-followup.patch
new file mode 100644
index 0000000..a0ab1e2
--- /dev/null
+++ b/patches/next/young-filter-followup.patch
@@ -0,0 +1,17 @@
+From eb22161272a16d10e5dbf8be17196a3d93a04361 Mon Sep 17 00:00:00 2001
+From: SeongJae Park <sj@kernel.org>
+Date: Thu, 11 Apr 2024 16:09:05 -0700
+Subject: [PATCH] ==== young filter followup ====
+
+Signed-off-by: SeongJae Park <sj@kernel.org>
+---
+ damon_meta_changes/tk1fLQP5 | 0
+ 1 file changed, 0 insertions(+), 0 deletions(-)
+ create mode 100644 damon_meta_changes/tk1fLQP5
+
+diff --git a/damon_meta_changes/tk1fLQP5 b/damon_meta_changes/tk1fLQP5
+new file mode 100644
+index 000000000000..e69de29bb2d1
+--
+2.39.2
+