High Initial Memory Consumption of RabbitMQ Nodes on CentOS Stream 9

Team RabbitMQ and community members have recently identified a curious scenario where a freshly started node could
consume a surprisingly high amount of memory, say, 1.5 GiB. We’d like to share our findings with the community
and explain what short-term and longer-term workarounds are available.

Some recent Linux distributions, such as ArchLinux, RHEL 9, and CentOS Stream 9, ship a recent version of systemd
that sets the default open file handle limit to 1073741816, or about one billion.
This is much higher than the default used by older distributions such as CentOS 8.
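
To see what default a given machine uses, one can query the systemd manager configuration directly, or check the hard limit of the current shell:

```shell
# the default open file handle (NOFILE) limit systemd applies to the services it starts
systemctl show --property DefaultLimitNOFILE

# the hard open file handle limit of the current shell session
ulimit -Hn
```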

For a lot of software this doesn’t change anything. However, the Erlang runtime sizes some of its internal data structures, such as the port (file handle and socket) table, based on this limit, and therefore allocates more memory upfront on systems with a very high limit.
This leads to a surprisingly high memory footprint for newly started RabbitMQ nodes without any data or meaningful client activity.
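
One way to see the connection is to inspect the port limit the runtime derives from the open file handle limit at startup. A quick check, assuming `erl` is available on the `PATH`:

```shell
# prints the maximum number of ports (file handles, sockets and so on)
# this runtime instance was started with
erl -noshell -eval 'io:format("~p~n", [erlang:system_info(port_limit)]), halt().'
```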

There are two ways to mitigate this problem:

 * Override the open file handle limit for the `rabbitmq-server` systemd unit to a reasonable value, or
 * Cap the runtime’s port limit directly using the `ERL_MAX_PORTS` environment variable

Both options are sketched below.
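
A minimal sketch of the systemd route, assuming the service unit is named `rabbitmq-server` and that a limit of 64000 suits the workload: run `systemctl edit rabbitmq-server`, add the override, then restart the service.

```ini
[Service]
LimitNOFILE=64000
```

And a sketch of the runtime route: `ERL_MAX_PORTS` caps how many ports (file handles, sockets and so on) the runtime will plan for, regardless of the kernel limit. It can be set, for example, in `rabbitmq-env.conf` (commonly found at `/etc/rabbitmq/rabbitmq-env.conf`, though the exact path varies between installations):

```shell
# caps the number of ports the runtime preallocates table space for
ERL_MAX_PORTS=65536
```

Either change takes effect on node restart.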

What limit value would be more appropriate for a given environment depends on the workload. Values in the 50,000 to 100,000 range
should support plenty of concurrent client connections, queues and streams in most cases without causing
excessive upfront memory allocation by the runtime.
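
To verify that a node picked up the new limit and that its initial footprint is back to normal, the standard CLI tools can be used (the exact output format varies between RabbitMQ versions):

```shell
# reports, among other things, the open file handle limit the node runs with
rabbitmq-diagnostics status

# reports node memory use broken down by category
rabbitmq-diagnostics memory_breakdown
```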