| author | Darrick J. Wong <djwong@kernel.org> | 2023-07-18 18:10:47 -0700 |
|---|---|---|
| committer | Zorro Lang <zlang@kernel.org> | 2023-07-23 12:56:22 +0800 |
| commit | d28912bad3c00b3a0303d1b62fbbe97c44c578a8 (patch) | |
| tree | 2c58f09e3c7090d5f4ddc37cbcb44d730cfeee5d | |
| parent | 1cd6b612992a59d1a20bcd2c1f072e79ccd6dd60 (diff) | |
| download | xfstests-dev-d28912bad3c00b3a0303d1b62fbbe97c44c578a8.tar.gz | |
generic/558: avoid forkbombs on filesystems with many free inodes
Mikulas reported that this test became a forkbomb on his system when he
tested it with bcachefs. Unlike XFS and ext4, whose inodes are large and
consume hundreds of bytes each, bcachefs inodes are very small. It
therefore reports a huge number of free inodes on a freshly mounted 1GB
filesystem (~15 million), which causes this test to try to create
roughly 15,000 processes.
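The old sizing works out roughly like this; a minimal sketch, with a
hypothetical inode count standing in for what `_get_free_inode` would
report on such a filesystem:

```shell
# Sketch of the old test's sizing. The free-inode figure is a
# hypothetical stand-in for what _get_free_inode would report;
# Mikulas saw roughly 15 million on bcachefs.
free_inode=15000000
file_per_dir=1000

# One background subshell per 1000 files -- this is the forkbomb.
loop=$((free_inode / file_per_dir + 1))
echo "$loop background processes"
```

With ~15 million free inodes that is 15,001 concurrent subshells, far
more than any machine has CPUs.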
There's really no reason to do that -- all this test wants to do is
exhaust the free inodes as quickly as possible using all available CPUs,
then run xfs_repair to try to reproduce a bug. Set the number of
subshells to 4x the CPU count and spread the work among them instead of
forking thousands of processes.
Reported-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Darrick J. Wong <djwong@kernel.org>
Tested-by: Mikulas Patocka <mpatocka@redhat.com>
Reviewed-by: Bill O'Donnell <bodonnel@redhat.com>
Reviewed-by: Zorro Lang <zlang@redhat.com>
Signed-off-by: Zorro Lang <zlang@kernel.org>
| -rwxr-xr-x | tests/generic/558 | 27 |
1 file changed, 18 insertions(+), 9 deletions(-)
```diff
diff --git a/tests/generic/558 b/tests/generic/558
index 4e22ce656b..510b06f281 100755
--- a/tests/generic/558
+++ b/tests/generic/558
@@ -19,9 +19,8 @@ create_file()
 	local prefix=$3
 	local i=0
 
-	while [ $i -lt $nr_file ]; do
+	for ((i = 0; i < nr_file; i++)); do
 		echo -n > $dir/${prefix}_${i}
-		let i=$i+1
 	done
 }
 
@@ -39,15 +38,25 @@ _scratch_mkfs_sized $((1024 * 1024 * 1024)) >>$seqres.full 2>&1
 _scratch_mount
 
 i=0
-free_inode=`_get_free_inode $SCRATCH_MNT`
-file_per_dir=1000
-loop=$((free_inode / file_per_dir + 1))
+free_inodes=$(_get_free_inode $SCRATCH_MNT)
+# Round the number of inodes to create up to the nearest 1000, like the old
+# code did to make sure that we *cannot* allocate any more inodes at all.
+free_inodes=$(( ( (free_inodes + 999) / 1000) * 1000 ))
+nr_cpus=$(( $($here/src/feature -o) * 4 * LOAD_FACTOR ))
+echo "free inodes: $free_inodes nr_cpus: $nr_cpus" >> $seqres.full
+
+if ((free_inodes <= nr_cpus)); then
+	nr_cpus=1
+	files_per_dir=$free_inodes
+else
+	files_per_dir=$(( (free_inodes + nr_cpus - 1) / nr_cpus ))
+fi
 mkdir -p $SCRATCH_MNT/testdir
+echo "nr_cpus: $nr_cpus files_per_dir: $files_per_dir" >> $seqres.full
 
-echo "Create $((loop * file_per_dir)) files in $SCRATCH_MNT/testdir" >>$seqres.full
-while [ $i -lt $loop ]; do
-	create_file $SCRATCH_MNT/testdir $file_per_dir $i >>$seqres.full 2>&1 &
-	let i=$i+1
+echo "Create $((nr_cpus * files_per_dir)) files in $SCRATCH_MNT/testdir" >>$seqres.full
+for ((i = 0; i < nr_cpus; i++)); do
+	create_file $SCRATCH_MNT/testdir $files_per_dir $i >>$seqres.full 2>&1 &
 done
 
 wait
```
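The patch's work-splitting arithmetic can be exercised standalone. In
this sketch the CPU count, LOAD_FACTOR, and free-inode figure are
hypothetical stand-ins for what `$here/src/feature -o`,
`_get_free_inode`, and the harness would supply at run time:

```shell
# Standalone sketch of the new sizing logic, with hypothetical inputs.
free_inodes=15728640        # pretend _get_free_inode reported this
nr_cpus=$((8 * 4 * 1))      # 8 CPUs * 4 * LOAD_FACTOR=1 -> 32 subshells

# Round up to the nearest 1000 so the test still overshoots the
# free-inode count and exhausts the filesystem completely.
free_inodes=$(( ( (free_inodes + 999) / 1000) * 1000 ))

if ((free_inodes <= nr_cpus)); then
	nr_cpus=1
	files_per_dir=$free_inodes
else
	# Ceiling division: split the files evenly across the subshells.
	files_per_dir=$(( (free_inodes + nr_cpus - 1) / nr_cpus ))
fi

echo "$nr_cpus subshells creating $files_per_dir files each"
```

Here 15,728,640 free inodes round up to 15,729,000, which split across
32 subshells gives 491,532 files apiece -- a bounded, CPU-proportional
fan-out instead of the 15,000+ processes the old code would have forked.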