How to know how much memory is available on a virtual server, since MemAvailable isn't present in meminfo?
When running cat /proc/meminfo, you get these 3 values at the top:
MemTotal: 6291456 kB
MemFree: 4038976 kB
Cached: 1477948 kB
As far as I know, the "Cached" value is disk cache maintained by the Linux system that will be freed immediately if any application needs more RAM, so Linux will never run out of memory until both MemFree and Cached are at zero.
Unfortunately, "MemAvailable" is not reported by /proc/meminfo, probably because this is running on a virtual server. (The kernel version is 4.4.)
Thus for all practical purposes, the RAM available for applications is MemFree + Cached.
Is that view correct?
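In code terms, the estimate I'm using amounts to this (a Python sketch using the values above; on a real system the text would come from reading /proc/meminfo):

```python
# Sketch: estimate "available" RAM as MemFree + Cached.
# Values are in kB, copied from the meminfo output above.

SAMPLE = """\
MemTotal:       6291456 kB
MemFree:        4038976 kB
Cached:         1477948 kB
"""

def parse_meminfo(text):
    """Return {field: kB} from /proc/meminfo-style text."""
    fields = {}
    for line in text.splitlines():
        name, _, rest = line.partition(":")
        fields[name.strip()] = int(rest.split()[0])
    return fields

info = parse_meminfo(SAMPLE)
naive_available_kb = info["MemFree"] + info["Cached"]
print(naive_available_kb)  # 5516924
```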
linux memory cache meminfo
I don't want to gold-hammer this closed, but this question is relevant if not a duplicate. I'm surprised you don't have MemAvailable; it was added in 3.14.
– Stephen Kitt
9 hours ago
The accepted answer from that question uses /proc/zoneinfo, which isn't available on my vserver either.
– Roland Seuhs
8 hours ago
What does uname -a output?
– Stephen Kitt
8 hours ago
uname -a: Linux host 4.4.0-042stab134.8 #1 SMP Fri Dec 7 17:16:09 MSK 2018 x86_64 x86_64 x86_64 GNU/Linux
– Roland Seuhs
5 hours ago
I suspect this is an OpenVZ system with a kernel which is really based on 2.6.32, not 4.4.
– Stephen Kitt
4 hours ago
asked 13 hours ago by Roland Seuhs
edited 6 hours ago by Braiam
1 Answer
That view became outdated. The kernel now provides an estimate for available memory, in the MemAvailable field. This value is significantly different from MemFree + Cached.
/proc/meminfo: provide estimated available memory [kernel change description, 2014]
Many load balancing and workload placing programs check /proc/meminfo
to estimate how much free memory is available. They generally do this
by adding up "free" and "cached", which was fine ten years ago, but is
pretty much guaranteed to be wrong today.
It is wrong because Cached
includes memory that is not freeable as page cache, for example shared
memory segments, tmpfs, and ramfs, and it does not include reclaimable
slab memory, which can take up a large fraction of system memory on
mostly idle systems with lots of files.
Currently, the amount of
memory that is available for a new workload, without pushing the
system into swap, can be estimated from MemFree, Active(file),
Inactive(file), and SReclaimable, as well as the "low" watermarks from
/proc/zoneinfo. However, this may change in the future, and user space
really should not be expected to know kernel internals to come up with
an estimate for the amount of free memory. It is more convenient to
provide such an estimate in /proc/meminfo. If things change in the
future, we only have to change it in one place.
...
Documentation/filesystems/proc.txt:
MemAvailable: An estimate of how much memory is available for
starting new applications, without swapping. Calculated from MemFree,
SReclaimable, the size of the file LRU lists, and the low
watermarks in each zone.
The estimate takes into account that the system needs some
page cache to function well, and that not all reclaimable
slab will be reclaimable, due to items being in use. The
impact of those factors will vary from system to system.
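Sketching that calculation (following the arithmetic described in the 2014 commit, not the current kernel source; the watermark and LRU numbers below are made up for illustration):

```python
def mem_available_estimate(memfree, active_file, inactive_file,
                           sreclaimable, wmark_low):
    """Approximate the kernel's MemAvailable estimate (all values in kB),
    per the arithmetic described in the 2014 commit message."""
    available = memfree - wmark_low
    # Not all page cache can be freed; the kernel keeps some around.
    pagecache = active_file + inactive_file
    pagecache -= min(pagecache // 2, wmark_low)
    available += pagecache
    # Likewise, only part of the reclaimable slab is really reclaimable.
    available += sreclaimable - min(sreclaimable // 2, wmark_low)
    return max(available, 0)

# Hypothetical numbers, in kB:
print(mem_available_estimate(memfree=4038976, active_file=800000,
                             inactive_file=600000, sreclaimable=100000,
                             wmark_low=16384))  # 5489824
```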
I expect the points about Cached / the page cache are the most noticeable ones when you look at personal computers. Even in normal operation you can have a fair amount in tmpfs/shmem; this cannot be reclaimed, it can only be moved to swap. This may even include some graphics memory allocations.
Whereas, the point about "lots of files" might not be relevant for many PC workloads. Even so, I currently have 500MB of reclaimable slab memory on my laptop (out of 8GB of RAM). This is due to ext4_inode_cache (over 300K objects). It happened because I recently had to scan the whole filesystem to find what was using my disk space :-). I used the command du -x / | sort -n, but e.g. Gnome Disk Usage Analyzer would do the same thing.
[edit] Memory in control groups
So-called "Linux containers" are built up from namespaces, cgroups, and various other features according to taste :-). They may provide a convincing enough environment to run something almost like a full Linux system. Hosting services can build such containers and sell them as "virtual servers" :-).
Control groups include the ability to set memory limits on the processes inside them. If you run your application inside such a cgroup, then not all of the system memory will be available to the application :-). So, how can we see the available memory in this case?
The interface for this differs in a number of ways, depending on whether you use cgroup-v1 or cgroup-v2.
My laptop install uses cgroup-v1. I can run cat /sys/fs/cgroup/memory/memory.stat. This includes total_shmem. shmem, including tmpfs, counts towards the memory limits. I guess total_rss - (total_cached - total_shmem) is non-reclaimable. Plus the file memory.kmem.usage_in_bytes, representing kernel memory including slabs. Though I'm not sure whether memory.kmem.tcp.* is counted under memory.kmem.*, or independently. The cgroup-v1 documentation says it does not reclaim any slab memory when hitting the limit, and there is no separate counter to view reclaimable slabs.
cgroup-v2 is different. I think the root (top-level) cgroup doesn't support memory accounting. cgroup-v2 still has a memory.stat file. All the fields sum over child cgroups, so you don't need to look for total_... fields. There is a file field, which means the same thing as cache. Surprisingly, I don't see an overall field like rss inside memory.stat; I guess you would have to add up individual fields. There are separate stats for reclaimable and unreclaimable slab memory; I think a v2 cgroup is designed to reclaim slabs when it starts to run low on memory.
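A comparable sketch for cgroup-v2 (again a heuristic of my own, not an official formula; note that memory.max can contain the literal string "max", meaning no limit, which the code has to treat specially):

```python
def v2_headroom(max_text, current_bytes, stat_text, host_total_bytes):
    """Estimate headroom in a cgroup-v2: effective limit minus usage,
    plus inactive_file page cache (a rough reclaimable estimate)."""
    stat = {k: int(v) for k, v in
            (line.split() for line in stat_text.strip().splitlines())}
    # memory.max holds either a byte count or the string "max" (no limit);
    # in the "max" case, fall back to the host's total memory.
    limit = host_total_bytes if max_text.strip() == "max" else int(max_text)
    return limit - current_bytes + stat.get("inactive_file", 0)

# Made-up values; real inputs come from memory.max, memory.current,
# and memory.stat inside the cgroup directory.
v2_sample_stat = "file 104857600\ninactive_file 31457280\nslab_reclaimable 8388608\n"
print(v2_headroom("1073741824", 524288000, v2_sample_stat, 8 * 1024**3))
# 580911104
```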
Linux cgroups do not automatically virtualize /proc/meminfo (or any other file in /proc), so that would show the "host" values. This would confuse VPS customers, but it is also possible to use namespaces to replace /proc/meminfo with a file faked up by the specific container software. How useful the fake values are would depend on what that specific software does.
systemd believes cgroup-v1 cannot be securely delegated, e.g. to containers. I looked inside a systemd-nspawn container on my cgroup-v1 system. I can see the cgroup it has been placed inside, and the memory accounting on that. On the other hand, the contained systemd does not set up the usual per-service cgroups for resource accounting. If memory accounting was not enabled inside this cgroup, I assume the container would not be able to enable it.
I assume if you're inside a cgroup-v2 container, it will look different to the root of a real cgroup-v2 system, and you will be able to see memory accounting for its top-level cgroup. Or if the cgroup you can see does not have memory accounting enabled, you will hopefully be delegated permission to enable memory accounting in systemd (or equivalent).
Official doc elixir.bootlin.com/linux/v5.0-rc5/source/Documentation/…
– stark
12 hours ago
it clicky nao. I use GitHub links because they show the first release containing the commit (similar to git describe --contains). Found it linked as a TL;DR by an SU question, which turned out to be just quoting the section added to proc.txt. But for this question, the commit description is just perfect IMO :-).
– sourcejedi
12 hours ago
MemAvailable doesn't seem to be available on most virtual servers... what to do then?
– Roland Seuhs
10 hours ago
Your Answer
StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "106"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});
function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});
}
});
Sign up or log in
StackExchange.ready(function () {
StackExchange.helpers.onClickDraftSave('#login-link');
});
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2funix.stackexchange.com%2fquestions%2f499649%2fhow-to-know-how-much-memory-is-availabe-on-a-virtual-server-since-memavailable-i%23new-answer', 'question_page');
}
);
Post as a guest
Required, but never shown
1 Answer
1
active
oldest
votes
1 Answer
1
active
oldest
votes
active
oldest
votes
active
oldest
votes
That view became outdated. The kernel now provides an estimate for available memory, in the MemAvailable
field. This value is significantly different from MemFree + Cached
.
/proc/meminfo: provide estimated available memory [kernel change description, 2014]
Many load balancing and workload placing programs check /proc/meminfo
to estimate how much free memory is available. They generally do this
by adding up "free" and "cached", which was fine ten years ago, but is
pretty much guaranteed to be wrong today.
It is wrong because Cached
includes memory that is not freeable as page cache, for example shared
memory segments, tmpfs, and ramfs, and it does not include reclaimable
slab memory, which can take up a large fraction of system memory on
mostly idle systems with lots of files.
Currently, the amount of
memory that is available for a new workload, without pushing the
system into swap, can be estimated from MemFree, Active(file),
Inactive(file), and SReclaimable, as well as the "low" watermarks from
/proc/zoneinfo. However, this may change in the future, and user space
really should not be expected to know kernel internals to come up with
an estimate for the amount of free memory. It is more convenient to
provide such an estimate in /proc/meminfo. If things change in the
future, we only have to change it in one place.
...
Documentation/filesystems/proc.txt:
...MemAvailable
: An estimate of how much memory is available for
starting new
applications, without swapping. Calculated from MemFree,
SReclaimable, the size of the file LRU lists, and the low
watermarks in each zone.
The estimate takes into account that the system needs some
page cache to function well, and that not all reclaimable
slab will be reclaimable, due to items being in use. The
impact of those factors will vary from system to system.
I expect the points about Cached
/ the page cache are the most noticeable ones, when you look at personal computers. Even in normal operation you can have a fair amount in tmpfs/shmem, and this cannot be reclaimed, it can only be moved to swap. This may even include some graphics memory allocations.
Whereas, the point about "lots of files" might not be relevant for many PC workloads. Even so, I currently have 500MB reclaimable slab memory on my laptop (out of 8GB of RAM). This is due to ext4_inode_cache
(over 300K objects). It happened because I recently had to scan the whole filesystem, to find what was using my disk space :-). I used the command df -x / | sort
, but e.g. Gnome Disk Usage Analyzer would do the same thing.
[edit] Memory in control groups
So-called "Linux containers" are built up from namespaces
, cgroups
, and various other features according to taste :-). They may provide a convincing enough environment to run something almost like a full Linux system. Hosting services can build such containers and sell them as "virtual servers" :-).
Control groups include the ability to set memory limits on the processes inside them. If you run your application inside such a cgroup, then not all of the system memory will be available to the application :-). So, how can we see the available memory in this case?
The interface for this differs in a number of ways, depending if you use cgroup-v1 or cgroup-v2.
My laptop install uses cgroup-v1. I can run cat /sys/fs/cgroup/memory/memory.stat
. This includes total_shmem
. shmem, including tmpfs, counts towards the memory limits. I guess total_rss - (total_cached - total_shmem)
is non-reclaimable. Plus the file memory.kmem.usage_in_bytes
, representing kernel memory including slabs. Though I'm not sure if memory.kmem.tcp.*
is counted under memory.kmem.*
, or independently. The cgroup-v1 document says it does not reclaim any slab memory when hitting the limit, and there is not a separate counter to view reclaimable slabs.
cgroup-v2 is different. I think the root (top-level) cgroup doesn't support memory accounting. cgroup-v2 still has a memory.stat
file. All the fields sum over child cgroups, so you don't need to look for total_...
fields. There is a file
field, which means the same thing as cache
. Surprisingly I don't see an overall field like rss
inside memory.stat
; I guess you would have to add up individual fields. There are separate stats for reclaimable and unreclaimable slab memory; I think a v2 cgroup is designed to reclaim slabs when it starts to run low on memory.
Linux cgroups do not automatically virtualize /proc/meminfo
(or any other file in /proc
), so that would show the "host" values. This would confuse VPS customers, but it is also possible to use namespaces to replace /proc/meminfo
with a file faked up by the specific container software. How useful the fake values are, would depend on what that specific software does.
systemd
believes cgroup-v1 cannot be securely delegated e.g. to containers. I looked inside a systemd-nspawn
container on my cgroup-v1 system. I can see the cgroup it has been placed inside, and the memory accounting on that. On the other hand the contained systemd
does not set up the usual per-service cgroups for resource accounting. If memory accounting was not enabled inside this cgroup, I assume the container would not be able to enable it.
I assume if you're inside a cgroup-v2 container, it will look different to the root of a real cgroup-v2 system, and you will be able to see memory accounting for its top-level cgroup. Or if the cgroup you can see does not have memory accounting enabled, you will hopefully be delegated permission to enable memory accounting in systemd
(or equivalent).
1
Official doc elixir.bootlin.com/linux/v5.0-rc5/source/Documentation/…
– stark
12 hours ago
1
it clicky nao. I use GitHub links because they show the first release containing the commit (similar togit describe --contains
). Found it linked as a TL;DR by an SU question, which turned out to be just quoting the section added to proc.txt. But for this question, the commit description is just perfect IMO :-).
– sourcejedi
12 hours ago
MemAvailable doesn't seem to be available on most virtual servers... what to do then?
– Roland Seuhs
10 hours ago
add a comment |
That view became outdated. The kernel now provides an estimate for available memory, in the MemAvailable
field. This value is significantly different from MemFree + Cached
.
/proc/meminfo: provide estimated available memory [kernel change description, 2014]
Many load balancing and workload placing programs check /proc/meminfo
to estimate how much free memory is available. They generally do this
by adding up "free" and "cached", which was fine ten years ago, but is
pretty much guaranteed to be wrong today.
It is wrong because Cached
includes memory that is not freeable as page cache, for example shared
memory segments, tmpfs, and ramfs, and it does not include reclaimable
slab memory, which can take up a large fraction of system memory on
mostly idle systems with lots of files.
Currently, the amount of
memory that is available for a new workload, without pushing the
system into swap, can be estimated from MemFree, Active(file),
Inactive(file), and SReclaimable, as well as the "low" watermarks from
/proc/zoneinfo. However, this may change in the future, and user space
really should not be expected to know kernel internals to come up with
an estimate for the amount of free memory. It is more convenient to
provide such an estimate in /proc/meminfo. If things change in the
future, we only have to change it in one place.
...
Documentation/filesystems/proc.txt:
...MemAvailable
: An estimate of how much memory is available for
starting new
applications, without swapping. Calculated from MemFree,
SReclaimable, the size of the file LRU lists, and the low
watermarks in each zone.
The estimate takes into account that the system needs some
page cache to function well, and that not all reclaimable
slab will be reclaimable, due to items being in use. The
impact of those factors will vary from system to system.
I expect the points about Cached
/ the page cache are the most noticeable ones, when you look at personal computers. Even in normal operation you can have a fair amount in tmpfs/shmem, and this cannot be reclaimed, it can only be moved to swap. This may even include some graphics memory allocations.
Whereas, the point about "lots of files" might not be relevant for many PC workloads. Even so, I currently have 500MB reclaimable slab memory on my laptop (out of 8GB of RAM). This is due to ext4_inode_cache
(over 300K objects). It happened because I recently had to scan the whole filesystem, to find what was using my disk space :-). I used the command df -x / | sort
, but e.g. Gnome Disk Usage Analyzer would do the same thing.
[edit] Memory in control groups
So-called "Linux containers" are built up from namespaces
, cgroups
, and various other features according to taste :-). They may provide a convincing enough environment to run something almost like a full Linux system. Hosting services can build such containers and sell them as "virtual servers" :-).
Control groups include the ability to set memory limits on the processes inside them. If you run your application inside such a cgroup, then not all of the system memory will be available to the application :-). So, how can we see the available memory in this case?
The interface for this differs in a number of ways, depending if you use cgroup-v1 or cgroup-v2.
My laptop install uses cgroup-v1. I can run cat /sys/fs/cgroup/memory/memory.stat
. This includes total_shmem
. shmem, including tmpfs, counts towards the memory limits. I guess total_rss - (total_cached - total_shmem)
is non-reclaimable. Plus the file memory.kmem.usage_in_bytes
, representing kernel memory including slabs. Though I'm not sure if memory.kmem.tcp.*
is counted under memory.kmem.*
, or independently. The cgroup-v1 document says it does not reclaim any slab memory when hitting the limit, and there is not a separate counter to view reclaimable slabs.
cgroup-v2 is different. I think the root (top-level) cgroup doesn't support memory accounting. cgroup-v2 still has a memory.stat
file. All the fields sum over child cgroups, so you don't need to look for total_...
fields. There is a file
field, which means the same thing as cache
. Surprisingly I don't see an overall field like rss
inside memory.stat
; I guess you would have to add up individual fields. There are separate stats for reclaimable and unreclaimable slab memory; I think a v2 cgroup is designed to reclaim slabs when it starts to run low on memory.
Linux cgroups do not automatically virtualize /proc/meminfo
(or any other file in /proc
), so that would show the "host" values. This would confuse VPS customers, but it is also possible to use namespaces to replace /proc/meminfo
with a file faked up by the specific container software. How useful the fake values are, would depend on what that specific software does.
systemd
believes cgroup-v1 cannot be securely delegated e.g. to containers. I looked inside a systemd-nspawn
container on my cgroup-v1 system. I can see the cgroup it has been placed inside, and the memory accounting on that. On the other hand the contained systemd
does not set up the usual per-service cgroups for resource accounting. If memory accounting was not enabled inside this cgroup, I assume the container would not be able to enable it.
I assume if you're inside a cgroup-v2 container, it will look different to the root of a real cgroup-v2 system, and you will be able to see memory accounting for its top-level cgroup. Or if the cgroup you can see does not have memory accounting enabled, you will hopefully be delegated permission to enable memory accounting in systemd
(or equivalent).
1
Official doc elixir.bootlin.com/linux/v5.0-rc5/source/Documentation/…
– stark
12 hours ago
1
it clicky nao. I use GitHub links because they show the first release containing the commit (similar togit describe --contains
). Found it linked as a TL;DR by an SU question, which turned out to be just quoting the section added to proc.txt. But for this question, the commit description is just perfect IMO :-).
– sourcejedi
12 hours ago
MemAvailable doesn't seem to be available on most virtual servers... what to do then?
– Roland Seuhs
10 hours ago
add a comment |
That view became outdated. The kernel now provides an estimate for available memory, in the MemAvailable
field. This value is significantly different from MemFree + Cached
.
/proc/meminfo: provide estimated available memory [kernel change description, 2014]
Many load balancing and workload placing programs check /proc/meminfo
to estimate how much free memory is available. They generally do this
by adding up "free" and "cached", which was fine ten years ago, but is
pretty much guaranteed to be wrong today.
It is wrong because Cached
includes memory that is not freeable as page cache, for example shared
memory segments, tmpfs, and ramfs, and it does not include reclaimable
slab memory, which can take up a large fraction of system memory on
mostly idle systems with lots of files.
Currently, the amount of
memory that is available for a new workload, without pushing the
system into swap, can be estimated from MemFree, Active(file),
Inactive(file), and SReclaimable, as well as the "low" watermarks from
/proc/zoneinfo. However, this may change in the future, and user space
really should not be expected to know kernel internals to come up with
an estimate for the amount of free memory. It is more convenient to
provide such an estimate in /proc/meminfo. If things change in the
future, we only have to change it in one place.
...
Documentation/filesystems/proc.txt:
...MemAvailable
: An estimate of how much memory is available for
starting new
applications, without swapping. Calculated from MemFree,
SReclaimable, the size of the file LRU lists, and the low
watermarks in each zone.
The estimate takes into account that the system needs some
page cache to function well, and that not all reclaimable
slab will be reclaimable, due to items being in use. The
impact of those factors will vary from system to system.
I expect the points about Cached
/ the page cache are the most noticeable ones, when you look at personal computers. Even in normal operation you can have a fair amount in tmpfs/shmem, and this cannot be reclaimed, it can only be moved to swap. This may even include some graphics memory allocations.
Whereas, the point about "lots of files" might not be relevant for many PC workloads. Even so, I currently have 500MB reclaimable slab memory on my laptop (out of 8GB of RAM). This is due to ext4_inode_cache
(over 300K objects). It happened because I recently had to scan the whole filesystem, to find what was using my disk space :-). I used the command df -x / | sort
, but e.g. Gnome Disk Usage Analyzer would do the same thing.
[edit] Memory in control groups
So-called "Linux containers" are built up from namespaces
, cgroups
, and various other features according to taste :-). They may provide a convincing enough environment to run something almost like a full Linux system. Hosting services can build such containers and sell them as "virtual servers" :-).
Control groups include the ability to set memory limits on the processes inside them. If you run your application inside such a cgroup, then not all of the system memory will be available to the application :-). So, how can we see the available memory in this case?
The interface for this differs in a number of ways, depending if you use cgroup-v1 or cgroup-v2.
My laptop install uses cgroup-v1. I can run cat /sys/fs/cgroup/memory/memory.stat
. This includes total_shmem
. shmem, including tmpfs, counts towards the memory limits. I guess total_rss - (total_cached - total_shmem)
is non-reclaimable. Plus the file memory.kmem.usage_in_bytes
, representing kernel memory including slabs. Though I'm not sure if memory.kmem.tcp.*
is counted under memory.kmem.*
, or independently. The cgroup-v1 document says it does not reclaim any slab memory when hitting the limit, and there is not a separate counter to view reclaimable slabs.
cgroup-v2 is different. I think the root (top-level) cgroup doesn't support memory accounting. cgroup-v2 still has a memory.stat
file. All the fields sum over child cgroups, so you don't need to look for total_...
fields. There is a file
field, which means the same thing as cache
. Surprisingly I don't see an overall field like rss
inside memory.stat
; I guess you would have to add up individual fields. There are separate stats for reclaimable and unreclaimable slab memory; I think a v2 cgroup is designed to reclaim slabs when it starts to run low on memory.
Linux cgroups do not automatically virtualize /proc/meminfo
(or any other file in /proc
), so that would show the "host" values. This would confuse VPS customers, but it is also possible to use namespaces to replace /proc/meminfo
with a file faked up by the specific container software. How useful the fake values are, would depend on what that specific software does.
systemd
believes cgroup-v1 cannot be securely delegated e.g. to containers. I looked inside a systemd-nspawn
container on my cgroup-v1 system. I can see the cgroup it has been placed inside, and the memory accounting on that. On the other hand the contained systemd
does not set up the usual per-service cgroups for resource accounting. If memory accounting was not enabled inside this cgroup, I assume the container would not be able to enable it.
I assume if you're inside a cgroup-v2 container, it will look different to the root of a real cgroup-v2 system, and you will be able to see memory accounting for its top-level cgroup. Or if the cgroup you can see does not have memory accounting enabled, you will hopefully be delegated permission to enable memory accounting in systemd
(or equivalent).
That view became outdated. The kernel now provides an estimate for available memory, in the MemAvailable
field. This value is significantly different from MemFree + Cached
.
/proc/meminfo: provide estimated available memory [kernel change description, 2014]
Many load balancing and workload placing programs check /proc/meminfo
to estimate how much free memory is available. They generally do this
by adding up "free" and "cached", which was fine ten years ago, but is
pretty much guaranteed to be wrong today.
It is wrong because Cached includes memory that is not freeable as page cache, for example shared
memory segments, tmpfs, and ramfs, and it does not include reclaimable
slab memory, which can take up a large fraction of system memory on
mostly idle systems with lots of files.
Currently, the amount of
memory that is available for a new workload, without pushing the
system into swap, can be estimated from MemFree, Active(file),
Inactive(file), and SReclaimable, as well as the "low" watermarks from
/proc/zoneinfo. However, this may change in the future, and user space
really should not be expected to know kernel internals to come up with
an estimate for the amount of free memory. It is more convenient to
provide such an estimate in /proc/meminfo. If things change in the
future, we only have to change it in one place.
...
Documentation/filesystems/proc.txt:
...
MemAvailable: An estimate of how much memory is available for
starting new applications, without swapping. Calculated from
MemFree, SReclaimable, the size of the file LRU lists, and the
low watermarks in each zone.
The estimate takes into account that the system needs some
page cache to function well, and that not all reclaimable
slab will be reclaimable, due to items being in use. The
impact of those factors will vary from system to system.
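When MemAvailable itself is missing, the calculation the commit describes can be approximated from user space. The sketch below is my reading of the description above, not the kernel's exact code, and the caller has to supply the summed per-zone "low" watermarks (normally taken from /proc/zoneinfo, which some virtual servers also hide):

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style text into a dict of kB values."""
    info = {}
    for line in text.splitlines():
        if ':' in line:
            key, _, rest = line.partition(':')
            info[key.strip()] = int(rest.split()[0])
    return info

def estimate_available_kb(meminfo, wmark_low_kb):
    """Approximate the MemAvailable estimate described in the commit:
    free memory above the watermarks, plus roughly half the file LRU
    pages and half the reclaimable slab (the halving mirrors the
    kernel's "not all of it is really freeable" heuristic)."""
    available = meminfo['MemFree'] - wmark_low_kb
    pagecache = meminfo['Active(file)'] + meminfo['Inactive(file)']
    pagecache -= min(pagecache // 2, wmark_low_kb)
    available += pagecache
    slab = meminfo['SReclaimable']
    available += slab - min(slab // 2, wmark_low_kb)
    return max(available, 0)
```

On a real system you would feed it `open('/proc/meminfo').read()`; the sample values in the test below are made up for illustration.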
I expect the points about Cached / the page cache are the most noticeable ones, when you look at personal computers. Even in normal operation you can have a fair amount in tmpfs/shmem, and this cannot be reclaimed; it can only be moved to swap. This may even include some graphics memory allocations.
Whereas, the point about "lots of files" might not be relevant for many PC workloads. Even so, I currently have 500MB of reclaimable slab memory on my laptop (out of 8GB of RAM). This is due to ext4_inode_cache (over 300K objects). It happened because I recently had to scan the whole filesystem, to find what was using my disk space :-). I used the command du -x / | sort -n, but e.g. Gnome Disk Usage Analyzer would do the same thing.
[edit] Memory in control groups
So-called "Linux containers" are built up from namespaces, cgroups, and various other features according to taste :-). They may provide a convincing enough environment to run something almost like a full Linux system. Hosting services can build such containers and sell them as "virtual servers" :-).
Control groups include the ability to set memory limits on the processes inside them. If you run your application inside such a cgroup, then not all of the system memory will be available to the application :-). So, how can we see the available memory in this case?
The interface for this differs in a number of ways, depending on whether you use cgroup-v1 or cgroup-v2.
My laptop install uses cgroup-v1. I can run cat /sys/fs/cgroup/memory/memory.stat. This includes total_shmem; shmem, including tmpfs, counts towards the memory limits. I guess the reclaimable part of the usage is total_cache - total_shmem, and the rest is non-reclaimable. There is also the file memory.kmem.usage_in_bytes, representing kernel memory including slabs, though I'm not sure whether memory.kmem.tcp.* is counted under memory.kmem.* or independently. The cgroup-v1 document says slab memory is not reclaimed when hitting the limit, and there is no separate counter to view reclaimable slabs.
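Putting those cgroup-v1 files together, a rough headroom estimate can be sketched. The total_cache and total_shmem names are real memory.stat fields, but treating "unused quota plus cache-minus-shmem" as freeable is my own heuristic, not an official formula:

```python
def parse_memory_stat(text):
    """Parse cgroup-v1 memory.stat text ("name value" per line) into a dict."""
    return {name: int(value)
            for name, value in (line.split()
                                for line in text.splitlines() if line.strip())}

def v1_headroom_bytes(limit_in_bytes, usage_in_bytes, stat):
    """Heuristic headroom inside a cgroup-v1 memory limit: unused quota,
    plus page cache that is not shmem (shmem can only move to swap)."""
    reclaimable_cache = stat['total_cache'] - stat['total_shmem']
    return (limit_in_bytes - usage_in_bytes) + reclaimable_cache
```

In practice the two numbers would come from memory.limit_in_bytes and memory.usage_in_bytes in the same cgroup directory.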
cgroup-v2 is different. I think the root (top-level) cgroup doesn't support memory accounting. cgroup-v2 still has a memory.stat file. All the fields sum over child cgroups, so you don't need to look for total_... fields. There is a file field, which means the same thing as cache. Surprisingly, I don't see an overall field like rss inside memory.stat; I guess you would have to add up individual fields. There are separate stats for reclaimable and unreclaimable slab memory; I think a v2 cgroup is designed to reclaim slabs when it starts to run low on memory.
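The same kind of sketch works for cgroup-v2, using memory.current, memory.max, and the parsed memory.stat. The file, shmem, and slab_reclaimable names are real v2 memory.stat fields, but counting them as freeable on top of the unused quota is again my assumption, mirroring the v1 sketch:

```python
def v2_headroom_bytes(current, maximum, stat):
    """Heuristic headroom under a cgroup-v2 memory.max limit: unused quota,
    plus file cache that is not shmem, plus reclaimable slab."""
    freeable = (stat.get('file', 0) - stat.get('shmem', 0)
                + stat.get('slab_reclaimable', 0))
    return (maximum - current) + freeable
```

Note that memory.max can also contain the literal string "max" (no limit), which a real script would need to special-case before converting to an integer.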
Linux cgroups do not automatically virtualize /proc/meminfo (or any other file in /proc), so it would show the "host" values. This would confuse VPS customers, but it is also possible to use namespaces to replace /proc/meminfo with a file faked up by the specific container software. How useful the fake values are would depend on what that specific software does.
systemd believes cgroup-v1 cannot be securely delegated, e.g. to containers. I looked inside a systemd-nspawn container on my cgroup-v1 system. I can see the cgroup it has been placed inside, and the memory accounting on that. On the other hand, the contained systemd does not set up the usual per-service cgroups for resource accounting. If memory accounting were not enabled inside this cgroup, I assume the container would not be able to enable it.
I assume that if you're inside a cgroup-v2 container, it will look different from the root of a real cgroup-v2 system, and you will be able to see memory accounting for its top-level cgroup. Or, if the cgroup you can see does not have memory accounting enabled, you will hopefully be delegated permission to enable memory accounting in systemd (or equivalent).
answered 12 hours ago (edited 4 hours ago) by sourcejedi
Official doc elixir.bootlin.com/linux/v5.0-rc5/source/Documentation/…
– stark
12 hours ago
it clicky nao. I use GitHub links because they show the first release containing the commit (similar to git describe --contains). Found it linked as a TL;DR by an SU question, which turned out to be just quoting the section added to proc.txt. But for this question, the commit description is just perfect IMO :-).
– sourcejedi
12 hours ago
MemAvailable doesn't seem to be available on most virtual servers... what to do then?
– Roland Seuhs
10 hours ago