Time | Nick | Message
00:28 | MTDiscord | <savilli> Are you sure M_TRIM_THRESHOLD would even help there? According to the man page, the default value of M_TRIM_THRESHOLD is 128*1024 bytes. In the forum thread, the amount of supposedly freed memory is significantly bigger, and yet the memory is not returned.
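The M_TRIM_THRESHOLD tunable being discussed can be set at runtime with glibc's mallopt(). A minimal sketch (the 128*1024 default is the one documented in the man page; the function name is illustrative):

```c
/* Minimal sketch of tuning the glibc trim threshold via mallopt().
   M_TRIM_THRESHOLD only controls automatic trimming at the top of the
   heap, which is one reason changing it may not help when freed chunks
   sit below still-live allocations. */
#include <malloc.h>

/* mallopt() returns nonzero on success, 0 on error (glibc convention) */
static int set_trim_threshold(int bytes) {
    return mallopt(M_TRIM_THRESHOLD, bytes);
}
```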
01:13 | * | proller joined #minetest-dev
01:17 | * | proller joined #minetest-dev
04:00 | * | MTDiscord joined #minetest-dev
05:31 | * | Noisytoot joined #minetest-dev
07:27 | sfan5 | libc tuning is out of scope for MT imo
07:32 | sfan5 | pgimeno: hmmm from the issue report on the LJ repo that wasn't so clear to me
07:35 | sfan5 | ...however people also said it affects android
07:46 | pgimeno | relevant to the memory issue: https://stackoverflow.com/questions/27945855/force-free-to-return-malloc-memory-back-to-os
07:47 | pgimeno | sfan5: yes but it appears to affect Android differently
07:58 | pgimeno | apparently the problem lies in whether the OS is able to mmap the memory within 128 MB of the critical symbols, and that's what varies with the OS
07:59 | pgimeno | (128 MB for ARM obviously)
08:06 | pgimeno | IIUC, the workaround given in the issue (using -Wl,-image_base,<somewhere far away>) works because it places the library at a virtual address that isn't surrounded by occupied addresses
08:09 | pgimeno | unfortunately it also has a side effect, which is to prevent ASLR for the LuaJIT library
09:52 | sfan5 | sounds like a possible short term workaround would be to place vm_exit_handler inside its own section?
11:01 | pgimeno | not sure how that would help; if you mean placing it at the end, then I doubt it would
11:15 | * | SpaceManiac joined #minetest-dev
11:23 | * | behalebabo joined #minetest-dev
11:39 | * | hlqkj joined #minetest-dev
12:37 | * | imi joined #minetest-dev
12:50 | * | Lupercus joined #minetest-dev
14:05 | * | Sokomine joined #minetest-dev
14:08 | sfan5 | the section can be mapped independently with free space around it
14:48 | MTDiscord | <savilli> I tested M_TRIM_THRESHOLD and it doesn't do shit. Gigabytes of memory are still not returned.
14:49 | MTDiscord | <savilli> The only way I found to return memory is to call malloc_trim
14:49 | MTDiscord | <savilli> Probably the same situation as here: https://stackoverflow.com/questions/38644578/understanding-glibc-malloc-trimming
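The malloc_trim() call savilli found is glibc-specific and is effectively a one-liner. A hedged sketch (since glibc 2.8, malloc_trim also releases whole free pages in the middle of the heap via MADV_DONTNEED, not just the heap top, which is what makes it work where M_TRIM_THRESHOLD does not):

```c
/* Sketch: explicitly asking glibc to return freed memory to the OS.
   Since glibc 2.8, malloc_trim() walks all free chunks and releases
   whole pages inside them, not only space at the top of the heap. */
#include <malloc.h>

/* returns 1 if some memory was released back to the system, else 0 */
static int release_free_memory(void) {
    return malloc_trim(0); /* pad = 0: keep no extra slack at the top */
}
```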
15:38 | sfan5 | the thing with mapblocks is they use an allocation of a nice round 16384 bytes, mmap makes the most sense for these and that should be directly freeable
15:39 | sfan5 | looks like glibc.malloc.mmap_threshold is 131072 so nevermind
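Since the default mmap threshold (131072 bytes) sits above the 16384-byte mapblock size, one knob-level experiment would be lowering it so such allocations bypass the heap entirely. A sketch, with caveats: this is not something anyone in the channel proposed shipping, and explicitly setting M_MMAP_THRESHOLD disables glibc's dynamic threshold adjustment and costs a syscall per allocation:

```c
/* Sketch: lowering the glibc mmap threshold so MapBlock-sized (16384
   byte) allocations are served by mmap and returned to the OS
   immediately on free. Trade-off: setting M_MMAP_THRESHOLD explicitly
   disables glibc's dynamic adjustment and adds per-allocation syscall
   overhead, so this trades speed for memory behavior. */
#include <malloc.h>

static int use_mmap_for_mapblock_sized_allocs(void) {
    return mallopt(M_MMAP_THRESHOLD, 16384); /* nonzero on success */
}
```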
15:40 | * | Desour joined #minetest-dev
15:46 | * | Thomas-S_ joined #minetest-dev
17:51 | MTDiscord | <luatic> i see. have we considered switching to a malloc library like jemalloc or tcmalloc? would there be a strong reason against this?
18:00 | sfan5 | by "switching to it" you mean bundling it by default?
18:01 | MTDiscord | <luatic> yes
18:04 | MTDiscord | <andrey2470t> As for my ambient light PR: it has one approval from appgurueu; is a second approval from anyone else mandatory for it to be merged? After that approval the PR is in the same situation as before: it's still waiting on something or someone unknown, while the feature freeze is approaching
18:06 | MTDiscord | <andrey2470t> Moreover, it has already been reviewed multiple times, so does it still make sense to wait for more reviews?
18:06 | MTDiscord | <savilli> I tested a few alternative allocators: tcmalloc and mimalloc also don't return the memory, but jemalloc does. I'm not sure about its performance tho.
18:11 | MTDiscord | <josiah_wi> I believe jemalloc has good performance.
18:11 | MTDiscord | <josiah_wi> I haven't tested it personally, but that's what I've heard.
18:17 | [MTMatrix] | <grorp> andrey2470t: it's neither trivial nor does it fulfill the requirements for the experimental one-approval rule, so yes.
18:17 | [MTMatrix] | <grorp> considering the amount of discussion that's already happened on the PR, giving it a second review should be relatively easy though. I'll have a look in the evening.
18:21 | MTDiscord | <andrey2470t> It is nice to hear, grorp 🙂 Thanks.
18:22 | MTDiscord | <rollerozxa> re: memalloc: interesting 🤔 so is this why RAM usage for minetest servers on linux always steadily rises without freeing any until it OOMs?
18:44 | Mantar | I think that may depend on the game somewhat, I haven't noticed that happening on Exile servers
18:51 | sfan5 | if it's just the mapblock data then a custom allocator for that would be easy
19:22 | * | imi joined #minetest-dev
20:21 | MTDiscord | <savilli> In theory, yes. But it would have to be thread-safe and smart enough to return the excess memory.
20:30 | * | Thomas-S joined #minetest-dev
20:31 | MTDiscord | <luatic> assuming it's just the mapblocks feels like it would end in whack-a-mole. after we "fix" the mapblocks using a custom allocator, the same issue with all kinds of other allocations would probably crop up.
20:43 | sfan5 | well if that fixes 90% of "leaks" then it's probably good enough for a long time
20:43 | sfan5 | but I agree it's better fixed in the allocator (glibc in this case)
20:46 | MTDiscord | <josiah_wi> It should be very low effort to test an allocator such as jemalloc.
20:46 | sfan5 | savilli already did :)
20:47 | MTDiscord | <josiah_wi> Oh, right.
21:09 | sfan5 | @savilli a fixed size arena allocator is really easy. get a big memory block and keep a bitmap of used blocks. big would mean e.g. 16MB
21:10 | sfan5 | this would not necessarily perform well but certainly better than what is happening with glibc. and it's guaranteed to free everything when you reach zero blocks
21:10 | sfan5 | with access to the data structure you could even implement memory compaction so you don't get the worst case of having multiple chunks and all of them 1% used
21:10 | sfan5 | or just call madvise(MADV_DONTNEED) on those parts yourself
21:12 | sfan5 | to be clear I would use this for the MapBlock::data array. I reckon this is what is triggering the bad glibc behavior
21:15 | sfan5 | *or* we could side-step all of this by just calling malloc_trim ourselves
21:19 | Desour | why does the mem usage grow so much in the first place anyways?
21:19 | Desour | is it fragmentation?
21:22 | sfan5 | the server only starts unloading data after 30s; this point is the peak memory usage
21:23 | sfan5 | system bytes   = 1242898432
21:23 | sfan5 | in use bytes   = 1146413296
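The "system bytes" / "in use bytes" figures are the report glibc's malloc_stats() writes to stderr. A sketch of reading roughly the same numbers programmatically (assumes glibc >= 2.33 for mallinfo2; the field arithmetic is an approximation of what malloc_stats() prints, not its exact formula):

```c
/* Sketch: approximating malloc_stats()'s "system bytes" and
   "in use bytes" via mallinfo2() (glibc >= 2.33), so the values can
   be graphed or logged instead of parsed from stderr. */
#include <malloc.h>
#include <stddef.h>

typedef struct {
    size_t system_bytes; /* memory obtained from the OS */
    size_t in_use_bytes; /* memory currently handed out by malloc */
} HeapUsage;

static HeapUsage heap_usage(void) {
    struct mallinfo2 mi = mallinfo2();
    HeapUsage u;
    u.system_bytes = mi.arena + mi.hblkhd;    /* heap + mmap'd blocks */
    u.in_use_bytes = mi.uordblks + mi.hblkhd; /* allocated portions   */
    return u;
}
```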
21:23 | sfan5 | wait let me graph this
21:29 | sfan5 | https://0x0.st/X8Qt.png this is with malloc_trim. also the data only starts once the first unload happens (after 30s like I said)
21:29 | sfan5 | oops, actually malloc_trim doesn't make a difference here
21:29 | sfan5 | but you can see it in external monitoring (e.g. htop)
21:30 | sfan5 | without malloc_trim the RSS memory is basically equal to the "system" line
21:33 | Desour | so server owners can already decrease the server_unload_unused_data_timeout setting
21:39 | sfan5 | sure but that only works around the fact that RSS never decreases from the peak
21:44 | Desour | but it might make the peak at least a little lower in some cases
21:57 | sfan5 | anyway I commented my results in the forum post
22:01 | sfan5 | http://sprunge.us/8wtD9b?diff pushing in 15m
22:28 | * | fluxionary joined #minetest-dev
22:34 | * | panwolfram joined #minetest-dev
22:45 | MTDiscord | <savilli> I replaced the MapBlock::data allocation with mmap and munmap calls and it fixed the problem 🎉. It means a custom allocator would work too.