Time Nick Message
03:48 thelounge650 test
03:48 [MTMatrix]_ Well, it looks like the new matrix bridge is working, on the test run
03:56 MisterE123 test
03:59 MTDiscord oh thats a problem: on matrix, I cant see who is speaking from discord
04:02 MTDiscord Is this any better?
04:02 [MTMatrix] |MisterE| nope
04:06 MTDiscord test again (sorry)
04:12 MisterE123 Matrix bridge is down again due to discord user ids not showing up in matrix
14:17 Krock Sokomine: there are a few open issues about errors and also pull requests on https://github.com/Sokomine/mg_villages/issues . Might you have time to look at them? Alternatively it could also be arranged to move the repository to minetest-mods so that multiple contributors could maintain it
14:17 Krock including you, of course.
14:27 erle public service announcement: mm3d is a good 3d modeling program (but you need to use the C locale, because it is confused about commas and dots in file formats when saving otherwise)
14:28 erle also don't be surprised if installing mm3d gets you blender, it's the stupidest dependency in debian ever lol
15:52 muurkha erle: thanks!
15:52 muurkha erle: glad to hear about your text rendering library and the goatseing
15:56 muurkha mm3d would benefit from a tutorial I think
16:01 erle muurkha i hope kilbith is gone for good?
16:01 erle i mean what i heard was broadly that he was always a total ass to everyone
16:01 ROllerozxa kilbith is almost certainly gone for good
16:01 erle nice
16:02 erle hey, were any of you at ccc camp? if so, did we maybe meet? also, anyone going to be at congress if there is one?
16:02 Malo95 hey has anyone noticed skinsdb is offline? http://minetest.fensta.bplaced.net/
16:02 Malo95 whats the deal with this, has a mirror popped up or are we without a skins database currently?
16:02 muurkha I think you drove away kilbith this time
16:02 muurkha seems unlikely he's gone for good tho
16:03 Malo95 whats going on with kilbith
16:03 ROllerozxa Malo95: it's been down for a while, unfortunate but inevitable considering it's been hosted on a freehost
16:03 Malo95 is there any backup
16:03 muurkha he's been gone since the last time he argued with erlehmann
16:03 Malo95 any mirror
16:03 ROllerozxa muurkha: well he's banned and I don't think a ban appeal is on the table any time soon for him considering his behaviour
16:03 Malo95 whats the original author say, any comment from him?
16:03 muurkha ROllerozxa: that's good to hear
16:04 muurkha but he could join with another name if he wanted to grief people
16:05 Malo95 So currently there is no mirror or any way for people to get skins in skinsdb?
16:05 ROllerozxa Malo95: the hoster has been MIA for a while afaik, so something like this was pretty inevitable coming
16:05 Malo95 The mod itself ships with no skins, so if the db site is down there is also no place to download the skins.
16:05 Malo95 Ah ok so the guy is MIA too, damn.
16:05 Malo95 Someone should have backed it all up but
16:06 Malo95 it's not been down that long
16:06 ROllerozxa I'm sure some people have a backup of the skins database they can share
16:06 MTDiscord I have one, but I don't know if I can share it :P
16:06 ROllerozxa but it was also filled with copyright infringement...
16:18 MTDiscord Malo95: you can also try internet archive's wayback machine
16:33 MTDiscord I also could share a copy, not sure how legal it is though
16:59 Sokomine Krock: my problem is that i really dislike 2fa. i've forgotten my passphrase for the 2nd or 3rd time. i can still log into github but would have to change the key again. for now, i've put mg_villages on a gitlab site hosted by the server admin of the server your land
16:59 Sokomine that's a gitea instance and works password-only.
but i have no idea how to sync the repositories
17:02 MTDiscord if you set up remotes in your local repo, you can do git push gitea && git push github for example
17:03 MTDiscord or if that gitea instance has woodpecker ci or whatever the ci for gitea is called, can make an action on commit that pushes the updates to github
17:03 Sokomine yes. but my problem is that i can't push from my local repo to github anymore as that requires a key. the password for the key is cached locally. thus i type it only once in a while. and i forget it in the meantime
17:04 Sokomine hm. if the site could push the updates to github that'd be very fine. doesn't have to be automatic. i'd be perfectly fine with doing that manually
17:04 Krock Sokomine: hmm .. since when is 2FA enforced? I did not and probably will not set this up
17:05 Sokomine if i could tell github on its website that i want to sync the mg_villages repo with https://gitea.your-land.de/Sokomine/mg_villages that'd be great
17:05 MTDiscord https://github.blog/2023-03-09-raising-the-bar-for-software-security-github-2fa-begins-march-13/
17:05 Sokomine it was for me. i couldn't proceed without setting it up
17:08 Krock hmm.. https://i.postimg.cc/wMVVFrPx/grafik.png
17:13 MTDiscord Sokomine: https://docs.gitea.com/next/usage/repo-mirror#setting-up-a-push-mirror-from-gitea-to-github
17:14 Sokomine krock: hm. strange
17:18 Sokomine bla: that does seem to be a good solution. looking into it right now
17:37 Sokomine ok...says it's syncing now. hope it works! that'd be a great relief
17:53 MTDiscord looks like it worked
17:53 Sokomine HAH! bla solved it! my mg_villages github repository is up to date again :-))))
17:53 Sokomine rubenwardy: mg_villages is available and working again :-)))
17:54 Sokomine that's a huge relief.
pushing from the command line with password is way easier than that *censored* 2fa github uses
17:55 MTDiscord now go fix something to check if the automatic mirroring works too lol
17:55 Sokomine but mg_villages is fixed so far :-) at least the manual update worked :-))
17:57 Krock that's great. thank you for taking care of it.
17:58 rubenwardy awesome! published on cdb again
18:00 Krock I wonder whether there's an option on github to indicate that it's a mirror
18:00 MTDiscord you can turn issues off at least
18:00 Krock well that does not solve the issues though
18:00 Sokomine i was really stuck there. remembering a password that i have to use once every couple of months? not much chance :-(
18:01 rubenwardy is the gitlab repo public?
18:01 Krock Sokomine: use KeepassXC to keep track of your passwords
18:01 Sokomine 2fa is...it has its points, but most of the time it decreases security considerably
18:01 rubenwardy use a password manager, increases security by allowing your passwords to be randomly generated and all different
18:01 Sokomine well, there's some caching of passphrases involved on the command line somehow. but that makes forgetting just easier
18:01 rubenwardy 2fa improves security for those that don't have a password manager
18:01 Sokomine that's a good sum-up, yes
18:04 Sokomine still ought to offer a modpack again. but perhaps with contentdb that is less needed
18:05 rubenwardy CDB automatically installs deps, so there's no need to have a modpack with mg_villages and handle schematics
18:07 Sokomine that's good.
but there are a few optional dependencies that are good to have but not necessary - like cottages (gives way more village types) and some additional village types that can be installed
18:12 MinetestBot [git] RisingLeaf -> minetest/minetest: Do not render objects that are invisble into the shadow map 6601515 https://github.com/minetest/minetest/commit/660151572fdf848ee8416eea480df42a3b317df4 (2023-08-26T18:12:17Z)
18:13 erle i did a talk on build system failure modes at ccc camp 2023 https://media.ccc.de/v/camp2023-57415-fantastic_build_system_failure_modes_and_how_to_fix_them
18:14 erle this is relevant to minetest because it is the project i know which has the most miscompiles i ever encountered
18:14 erle muurkha you might be interested
18:15 erle TL;DR: constructing a DAG for a build, then toposorting and then scheduling build steps is impossible for the general case, because you do not have enough information to construct the full DAG at the time you want to construct it. a build should rather be seen as a function that can be partially evaluated and needs to be iterated until it reaches a fixed point. think: recompile latex until the page numbers and references match up.
18:17 erle this means that no amount of fixing and testing can make DAG-toposort systems (e.g. make) get the general case right. it is easy to avoid all of these problems by using a top-down recursive build system instead.
18:17 erle i don't expect anyone to ditch cmake after the talk, but it can help to figure out when it does not make sense to spend work on fixing cmake problems once you recognize the problem is unfixable.
18:18 muurkha hmm, because for example you don't know which header files something is going to #include until you try to compile it, and some of your header files are dynamically generated?
18:18 erle yes, that is one case
18:18 erle i believe ninja has special-cased c and c++ header files for that
18:18 erle but there is a simpler example
18:18 erle consider a target that needs to be always out of date at the time it is needed
18:19 erle e.g. a website footer that includes the date the page was built
18:19 muurkha that sounds like a hack to get the build system to do non-build-system things
18:19 erle the DAG thing schedules it once and every page has the same timestamp at the bottom
18:19 erle this is trivially wrong
18:19 muurkha do you have an example that wouldn't violate build reproducibility?
18:20 erle nothing violates build reproducibility, as long as you rebuild until you reach the fixed point, which you need to do anyway all the time
18:20 muurkha including the date the page was built violates build reproducibility!
18:20 erle the simple case here, where you build using make, then use the output of gcc -MD -MF to make a new makefile, and then rebuild using that, is a case where you need 2 steps to get to the fixed point
18:21 erle muurkha then have the subtarget include the name of the parent target somewhere or so
18:21 erle i mean then obviously it needs to be rebuilt every single time
18:22 erle in any case, the DAG is a nice abstraction for a problem that is adjacent to, but ultimately not really, the problem a build system usually tries to solve
18:22 muurkha to me it sounds like you're saying "sometimes people want to do arbitrary computation that can access any data in the world as part of their compilation"
18:22 erle that is true
18:22 erle but what about the latex example thing
18:23 muurkha where you have to rebuild until the reference page numbers stop changing?
18:23 erle no external information, but you have to shake it until the page numbers match up with the references
18:23 erle yes
18:23 erle that is intuitively a fixed point
18:23 muurkha it is
18:23 erle and you have to rebuild *every* project until you reach a fixed point, it is only that most of the time you reach it in one or two steps, so the DAG thing kinda seems to work
18:23 erle ultimately it does not though
18:23 muurkha "work" is undefined here
18:23 erle just watch the talk and then talk to me about it afterwards? otherwise i repeat myself a lot
18:24 erle the last part about redo is only so that people know it is possible
18:24 erle to solve the issues with 4 primitives
18:24 muurkha if "work" means "compute what you want" then yes sometimes people want to compute things that aren't a conventional build system
18:24 muurkha I don't usually watch talks
18:24 erle i suggest to read the microsoft paper “build systems a la carte”
18:24 erle for the purpose of my talk, excel is a build system
18:25 erle it has inputs (cells), rules (formulas), outputs (cells)
18:25 erle and incremental rebuilds
18:25 erle also recursion etc.
18:25 erle in any case, the most common answer to my assertion is “then just don't do that” but people are doing it
18:26 erle you don't have enough info to construct a correct DAG in one pass unless you are being very careful about everything, which i have so far only seen in the embedded space (where you have maybe 12 dependencies overall. a modern C hello world has probably more.)
18:27 erle muurkha, are you familiar with DJB redo design (not the apenwarr one, it has a DAG cheating device built-in i think)
18:28 erle if yes, that solves all problems i mention in the talk, so you don't have to watch it
18:31 erle muurkha thanks for the reproducibility thing btw. but i build my webpage using redo, so it's a real problem ;)
18:31 erle muurkha do you have maybe something on build systems in your books?
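[Editorial aside] The "build as a function iterated to a fixed point" idea erle describes, with latex page numbers as the example, can be sketched as a toy model. Everything here is illustrative: `build`, the form-feed page-break convention, and the `\ref{lastpage}` marker are invented for this sketch, not taken from any real build tool.

```python
# Toy model of a build iterated to a fixed point, like re-running latex
# until page numbers stop changing. A single pass resolves references using
# information from the PREVIOUS pass's output, so one pass is not enough.

def build(source: str, prev_output: str) -> str:
    """One build pass: resolve the last-page reference using the page
    count observed in the previous output (form feeds = page breaks)."""
    pages = prev_output.count("\f") + 1
    return source.replace("\\ref{lastpage}", str(pages))

def build_to_fixed_point(source: str, max_passes: int = 10) -> str:
    output = ""
    for _ in range(max_passes):
        new_output = build(source, output)
        if new_output == output:  # fixed point: rebuilding changes nothing
            return output
        output = new_output
    raise RuntimeError("build oscillates, no fixed point reached")

result = build_to_fixed_point("page one\fsee page \\ref{lastpage}")
```

The first pass sees an empty previous output and resolves the reference to 1; the second pass sees a two-page output and resolves it to 2; the third pass produces the same text again, which is the fixed point. A DAG-toposort scheduler that runs each step exactly once stops after the first, wrong pass.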
18:32 erle muurkha, this is the microsoft paper https://www.microsoft.com/en-us/research/uploads/prod/2018/03/build-systems.pdf
18:36 erle muurkha the “wanting to compute arbitrary stuff” thing is very very common, even in reproducible builds. the errors i address are not about reproducibility of a single build, but about getting the same result between an incremental and a full build.
18:36 erle for that you need the repeated evaluation fixed point top-down thing to make it easy
18:37 erle if you don't want incremental builds, you can always rebuild everything from zero every time
18:37 erle which is trivially correct, unless your build system has no fixed point
18:37 erle i have seen makefiles where the correct binary fell out only every second time
18:37 erle because oscillations lol
19:04 muurkha I'm vaguely familiar with DJB's redo, thanks to apenwarr and to you
19:05 muurkha I keep meaning to read the SPJ paper
19:07 muurkha I agree that Excel is a build system; I've written a bit about them (though arguably nothing written on the topic by someone who hasn't read the SPJ paper merits being read)
19:08 muurkha probably the most relevant piece I've written is https://dercuano.github.io/notes/dependency-bibliography.html
19:08 muurkha but there's also https://dercuano.github.io/notes/blob-computation.html and https://dercuano.github.io/notes/blob-computation-notes.html
19:11 muurkha you'll probably also be interested in https://dercuano.github.io/topics/self-adjusting-computation.html
19:23 erle muurkha the apenwarr implementation of redo caches dependency checks in ways that make it easier to implement, but ultimately result in nasty surprises. i wrote my implementation only because apenwarr would not fix these things (because it would make the implementation arguably correct, but slower).
19:31 erle muurkha, ccache is subtly wrong
19:31 erle basically it uses the wrong cache key
19:31 erle i forgot what the correct one was
19:32 erle but you sometimes get false positives in the cache
19:32 erle which means a wrong build or no artifact at all
19:58 muurkha erle: not surprising
19:58 muurkha this reminds me of the C vs. Lisp debates of 35 years ago
20:17 erle muurkha in what ways?
20:17 erle i am 35, so
20:17 erle no memories
20:32 erle muurkha so i came up with this tool yesterday, redo-gcc
20:32 erle basically it invokes gcc and straces the compiler
20:32 erle and records every file gcc did not find as a non-existence dependency and every file it found as a dependency
20:32 erle the latter according to -MD -MF
20:33 erle good idea?
21:15 muurkha erle: yeah, there are a lot of notes on that kind of thing in Dercuano
21:16 muurkha I think it's a good idea
21:23 erle muurkha so how do you deal with non-existence dependencies normally?
21:23 erle e.g. the target being invalidated if a new file or new build rule is created
22:20 MTDiscord does this message show on matrix
22:24 MTDiscord No
22:24 MTDiscord The bridge is currently down. We might have a fix in a few hours. Or not, who knows
22:28 MTDiscord oh i thought the whole thing was supposed to end
22:28 MTDiscord did that change
22:29 MTDiscord Not sure what you mean...
22:29 MTDiscord i thought whatever we were using for the bridge was gonna stop servicing
22:31 MTDiscord https://discord.com/channels/369122544273588224/747163566800633906/1144928932383162379
22:35 muurkha erle: I think nonexistence dependencies are a really interesting question
22:36 [MTMatrix] MisterE | Surprise: the matrix bridge worketh, thanks luatic.
22:36 muurkha thanks luatic!
22:38 muurkha one crude way to deal with them is to treat them as read dependencies on the directory containing the nonexistent file.
this is "sound", in the sense that it will correctly rebuild things every time rebuilding is necessary, but not "complete" in the sense of *only* rebuilding things for which rebuilding is necessary
22:38 muurkha completeness is generally considered an infeasible goal because it requires distinguishing changes to source files that affect the compiler's output from changes that don't, which in general requires solving the halting problem for the compiler
22:41 muurkha but recompiling the entire world every time you add a new file to /usr/include might be too far from completeness to be useful
22:42 MTDiscord sorry, one more test
22:42 muurkha in things like https://www.mail-archive.com/kragen-tol@canonical.org/msg00146.html "Make for URLs, or dependency-directed compositing" nonexistence dependencies are straightforward
22:42 erle muurkha my redo implementation does ne deps correctly i think
22:42 erle you can't really model them properly as normal deps
22:43 erle also just to be clear, make is unfixable
22:44 muurkha yeah, I wasn't suggesting literally using make
22:44 erle every DAG-toposort system i have seen can not really accommodate NE deps
22:44 muurkha but in that setup, GETting a resource that is currently nonexistent produces a representation with a 404 error code, and as long as that representation is a valid representation for that resource, you don't trigger a rebuild
22:44 muurkha it doesn't require any special handling
22:45 erle looks like an adjacent, but different problem
22:45 erle i solved that with redo-stamp
22:45 erle which can declare a target semantically up to date while it is being built
22:45 muurkha how?
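[Editorial aside] The dependency-recording half of erle's proposed redo-gcc above (strace the compiler, record files it found as dependencies and files it failed to find as non-existence dependencies) can be sketched by classifying strace output. The regex here handles only the common `open`/`openat` forms; real strace output has more variants, and `classify_opens` is an invented name for this sketch.

```python
import re

# Classify each path a traced compiler touched: a successful open is a
# normal dependency; an open that failed with ENOENT is a non-existence
# dependency (creating that file later must invalidate the target).
OPEN_RE = re.compile(
    r'open(?:at)?\((?:AT_FDCWD, )?"([^"]+)"[^)]*\)\s*=\s*(-?\d+)')

def classify_opens(trace: str):
    deps, nonexistence_deps = set(), set()
    for line in trace.splitlines():
        m = OPEN_RE.search(line)
        if not m:
            continue
        path, ret = m.group(1), int(m.group(2))
        if ret >= 0:
            deps.add(path)
        else:
            nonexistence_deps.add(path)
    return deps, nonexistence_deps

# Two hypothetical strace lines: one header found, one searched-for header
# that did not exist (e.g. an earlier -I directory in the search path).
sample = '''openat(AT_FDCWD, "config.h", O_RDONLY) = 3
openat(AT_FDCWD, "local/config.h", O_RDONLY) = -1 ENOENT (No such file or directory)'''
deps, ne = classify_opens(sample)
```

A build system consuming this would rebuild the target when any path in `deps` changes, and also when any path in `ne` comes into existence, which is exactly the invalidation case erle asks about.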
22:45 erle this is a bit weird
22:45 erle but exit code 123 means “even though this target was being scheduled for rebuild, during the rebuild it turned out it was actually fine as it is”
22:46 erle so the build system then works with the updated info that the target was up to date all along
22:46 erle p sure you can not do that with the toposort
22:46 muurkha btw, I think it would serve you well rhetorically to stop claiming that system X is "incorrect" because, although it solves problem X' correctly as the author intended, it doesn't solve problem Y' that you think is more important
22:46 erle a simple use case: you have something which is byte-wise different but semantically identical
22:47 erle library got recompiled with or without new symbols stuff or so
22:47 muurkha aha, now I understand. yes, that seems like a good approach
22:47 muurkha although you can do it in a purer DAG system by adding an extra build step which extracts out the semantics into a new bytewise identical file
22:48 erle muurkha in terms of build systems and redo in particular i use “incorrect” usually for optimization attempts that result in build differentials from the most naive implementation.
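[Editorial aside] The exit-code-123 mechanism erle describes (a rebuilt target announcing "I turned out to be up to date after all") amounts to comparing the new output against a recorded fingerprint. A minimal sketch, with `stamps`, `rebuild`, and the string return values all invented for illustration rather than taken from any real redo implementation:

```python
import hashlib

# After a target is rebuilt, hash the new output and compare it with the
# recorded stamp; if they match, report the target as unchanged so that
# dependents need not be rebuilt (the analogue of exit code 123).
stamps = {}  # target name -> hash of last known output

def rebuild(target: str, new_output: bytes) -> str:
    digest = hashlib.sha256(new_output).hexdigest()
    if stamps.get(target) == digest:
        return "unchanged"  # semantically up to date after all
    stamps[target] = digest
    return "changed"        # dependents must be reconsidered
```

This covers the "byte-wise different but semantically identical" case by hashing a normalized form of the output (e.g. a stripped library) instead of the raw bytes.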
22:49 muurkha that approach, although it's more expensive and hard to work into existing systems, does have the advantage that it prevents a build step that incorrectly† returns 123 from incorrectly† preventing later things from being built
22:49 erle wdym
22:49 erle i do not use “incorrect” for stuff that merely takes longer btw
22:49 muurkha well, suppose you have a library which may be built with or without debugging information
22:50 erle yes
22:50 muurkha maybe that debugging information doesn't end up in your final executable and so it doesn't matter
22:50 muurkha as an alternative to having the build step that builds the library return 123 to say that all that has changed is the inconsequential presence of this debugging information
22:51 muurkha you can add another build step that makes a stripped version of the library
22:51 muurkha and then link with the stripped version of the library
22:51 muurkha which will be bytewise identical if all that changed was the no-longer-present debugging information
22:52 muurkha a blob-hash-based rebuild system will detect this and not bother to relink the executable
22:52 erle an example of my usage of “incorrect”: take the apenwarr redo-always. it does, contrary to its name, not declare a target as always out of date in that particular implementation of redo, because of some caching. every implementation of redo that is simpler and every description of the command i have seen though claims that the target is always out of date. the difference is a caching layer that solves problem X' which is
22:52 erle not problem X, which is solved according to the documentation and also solved by every simpler implementation.
22:53 erle apenwarr probably thought that it would not make a difference at first, but the unit tests for apenwarr redo only work because something reaches inside the state database of the build system at runtime and forces it to *actually* rebuild always.
22:53 erle so, do you have a better term for “programmer added more complexity and is now solving a different problem than the original one”
22:53 muurkha but I don't think apenwarr had the objective of behaving indistinguishably from a simple build script in shell; if he did he would have accepted your bug report
22:54 muurkha I think he wanted to write a build system that he liked better than make
22:55 erle yeah, but he ended up trying to write a top-down recursive build system in terms of a DAG-toposort-bottom-up one which is generally not possible
22:55 erle and the reason he does not accept my report is basically “why would anyone do this”
22:56 erle i mean i know why apenwarr does this, parallelism is way easier to do with X' than with X
22:56 erle and you can cheat a lot on dependency checks
22:56 erle but ultimately, it results in incremental builds with apenwarr redo that do not match a full rebuild
22:56 erle don't quote me on that, i'm sleepy
22:57 erle but i consider a tool not doing what its own documentation suggests as incorrect
22:57 erle oh and it's also a composition issue
22:57 erle you basically can not “nest” invocations of redo (which is why the unit testing does not work)
22:58 erle in any case, it does no longer solve the original problem, but a related one that is easier to solve
22:58 erle lots of software does this with good reasons
22:58 erle take fast inverse square root
22:58 erle it's incorrect (and illegal)
22:58 erle but it did exactly that
22:58 erle so how would you call the scenario “programmer chooses to solve X' instead of X” in general?
22:58 erle i mean, it is not always wrong to do so
23:00 erle i suggest to look at the implementation yourself to get the picture btw
23:01 muurkha erle: I think "why would anyone do this" can be translated as "I'm not trying to solve the problem you correctly identify that this software does not solve"
23:01 erle or strace it and find out it does way fewer dependency checks than it logically would have to do
23:02 muurkha I think the general term for that scenario is "Worse is Better"
23:02 erle yeah, but in this place, better is better
23:02 muurkha you could be right about that, but it doesn't go without saying
23:03 muurkha I mean it isn't always true so it requires a convincing demonstration in any particular case
23:04 erle my implementation is a fraction of the code and does not have a dependency on python or sqlite and IIRC it is faster for everything but parallel builds where jobs take less than 1 second.
23:04 erle something like that
23:04 muurkha nice
23:04 erle the funny thing is that if you benchmark it you will notice that apenwarr redo can still be faster
23:05 erle but the reason is because it cheats on dependency checks i think
23:05 erle i can do the same: check no dependency, output an empty file
23:05 erle suddenly i am the fastest
23:06 erle in any case, i think i am a “better correct than fast” person and apenwarr is a “better fast than correct” person and i just lucked into a constructive proof that nope you don't need sqlite or so many lines of python.
23:06 erle i am not so sure about the parallelism though
23:06 erle if you know about the jobserver thing (with the fifo tokens), maybe you could look into it
23:06 erle currently i am busywaiting with sleep 1
23:06 erle which is certainly not optimal
23:07 erle also apenwarr redo has this output interleaving for parallel builds which i find nice
23:08 muurkha not familiar with the jobserver thing
23:08 muurkha have you looked at how GNU tail works on Linux, btw?
23:08 erle oh, this is a trick you'll like
23:08 muurkha tail -f I mean
23:08 erle so you want to schedule N jobs in parallel
23:09 erle you create a fifo, write N tokens into it (bytes? i forgot)
23:09 erle each job starting tries to read a token from the fifo
23:09 erle each job done writes one there
23:09 erle so you start all the jobs
23:09 muurkha that's a good idea. so a pipe is a semaphore
23:09 muurkha (with a maximum capacity of PIPE_BUF, but that's good enough for almost all cases)
23:09 erle i heard a girl i explained this to say the same thing a few days ago
23:10 erle so for recursive builds this is a bit weird, because you don't want to block on subtargets
23:10 erle so sometimes you need additional arithmetic on when to add or remove tokens
23:10 erle muurkha, i wrote this some time back, it might do what i described http://news.dieweltistgarnichtso.net/bin/jobserver
23:12 erle but look into my redo-ifchange to see it in action
23:13 erle i wonder why i did the argument thing that way
23:13 erle i probably had a reason
23:13 erle no idea
23:16 erle muurkha i got the fifo approach from simon richter (?) years ago when we walked through berlin at night. i believe that he mentioned some software uses it. maybe the make jobserver? no idea
23:16 muurkha for a recursive build, when you block a task because it's missing a prerequisite, maybe you could put a token back into the pipe
23:17 muurkha for as long as you're blocked
23:17 erle maybe i am doing that, look at redo-ifchange
23:17 muurkha that way only the actively compiling tasks are counted
23:18 muurkha not familiar with this unlink shell command
23:18 erle it calls unlink(2)
23:18 erle to unlink, and possibly delete, a file
23:18 muurkha well, presumably, but isn't that what rm does?
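[Editorial aside] The fifo-token trick erle describes (and which GNU make's jobserver uses) can be demonstrated with an anonymous pipe, which behaves the same as a named fifo here: the pipe is preloaded with N one-byte tokens, a job blocks reading a token before running and writes one back when done, so at most N jobs run at once. The class name is invented for this sketch.

```python
import os

# A pipe used as a counting semaphore: it holds up to N single-byte
# tokens; reading blocks when the pipe is empty, which is exactly the
# "wait until a job slot is free" behaviour a jobserver needs.
class PipeSemaphore:
    def __init__(self, n_jobs: int):
        self.read_fd, self.write_fd = os.pipe()
        os.write(self.write_fd, b"t" * n_jobs)  # preload N tokens

    def acquire(self):
        os.read(self.read_fd, 1)  # take a token; blocks if none available

    def release(self):
        os.write(self.write_fd, b"t")  # return the token
```

For recursive builds, muurkha's refinement fits naturally: a task that blocks waiting on a subtarget calls `release()` first and `acquire()` again afterwards, so only actively running jobs hold tokens.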
23:19 muurkha we should probably have a place to discuss this where it would be on topic
23:19 muurkha maybe #scannedinavian
23:19 erle it is much safer than rm and also possibly more efficient, because it does not do anything else than accept a single file that can not be named --help or --version
23:20 erle rm -rf argument accidents go wild
23:20 erle unlink at most deletes one file
23:20 muurkha though maybe it's healthy for sfan5 to see you having a conversation where he isn't flaming you ;)
23:20 muurkha apparently unlink is, much to my surprise, in coreutils
23:21 erle i always use it instead of rm -rf if i can do that
23:21 erle i wonder if minetest is still using a vulnerable libpng hehe
23:21 erle i did mention that before some release
23:22 muurkha pretty sure the copy of minetest I'm running on here is linked with the Debian system libpng
23:22 erle ah
23:22 erle only windows users at risk then hehe
23:22 muurkha libpng16.so.16 => /lib/x86_64-linux-gnu/libpng16.so.16 (0x00007fc84b1fa000)
23:22 muurkha yeah, fuck them anyway
23:23 erle muurkha btw, if i did not mention it, uncompressed TGA fed through zlib is one of the best image formats for textures of the type and size of minetest. i found that when writing my text rendering library months ago.
23:23 muurkha that's very surprising, the Paeth compressor in PNG doesn't buy you anything?
23:23 erle it was indeed surprising
23:23 erle and it also holds true for bigger image sizes, up to a limit
23:24 erle i believe there are two reasons for it
23:24 muurkha I mean zlib is written by two of the authors of libpng, which uses it for compression
23:24 erle first, TGA has *way* lower overhead, no checksums etc. we already know that even an uncompressed TGA can beat a small PNG on filesize, but the sizes have to be like 8×8 or so.
23:25 erle second, the compression in PNG is done by chunk if i am not mistaken.
23:25 erle which is obviously suboptimal
23:25 erle oh and third
23:25 erle the whole framing structure of PNG is hard to compress because of the checksums
23:26 erle i think
23:26 muurkha hmm, apparently I was wrong about that; Gailly and Adler aren't credited in libpng
23:26 erle basically, whatever was done in the quake era (put TGA in PK3 archives, i.e. basically zip files) is the best possible solution for small textures (for surprisingly large values of small)
23:27 erle i only ever found that out when i rendered a lot of text to a png and tried pngcrush and zlibbing a tga
23:27 erle sorry
23:27 erle i mean when i rendered a lot of text to a TGA, then converted it to PNG and then both compressed the TGA and used pngcrush on the PNG
23:28 erle it's also a write-speed issue
23:28 muurkha hmm, have you tried feeding uncompressed 8×8 TGAs through something like ncompress?
23:28 erle whatever ncompress is, it is not in the lua api of minetest
23:28 muurkha LZW
23:28 erle also 8×8 is not the size i am going for
23:28 erle give me a moment
23:30 muurkha zlib.compress(b'') in Python 3 gives an 8-byte output
23:31 muurkha compress
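[Editorial aside] The overhead argument above can be made concrete with a size comparison on a small flat-colour texture. Both writers below are deliberately minimal sketches, not full implementations of either format, and the exact byte counts depend on the zlib version and settings; the point is only that PNG's fixed framing (signature, IHDR/IDAT/IEND chunks with CRCs, a filter byte per scanline) costs more than TGA's 18-byte header once both pixel payloads are deflated.

```python
import struct, zlib

def tga_bytes(width, height, bgr_pixels):
    # 18-byte TGA header: uncompressed truecolor (type 2), 24 bits/pixel
    header = struct.pack("<BBBHHBHHHHBB", 0, 0, 2, 0, 0, 0, 0, 0,
                         width, height, 24, 0)
    return header + bgr_pixels

def png_bytes(width, height, rgb_pixels):
    # Minimal PNG: signature + IHDR + one IDAT + IEND, filter type 0 only.
    def chunk(ctype, data):
        return (struct.pack(">I", len(data)) + ctype + data +
                struct.pack(">I", zlib.crc32(ctype + data)))
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 2, 0, 0, 0)  # 8-bit RGB
    stride = width * 3
    raw = b"".join(b"\x00" + rgb_pixels[y*stride:(y+1)*stride]
                   for y in range(height))  # one filter byte per scanline
    return (b"\x89PNG\r\n\x1a\n" + chunk(b"IHDR", ihdr) +
            chunk(b"IDAT", zlib.compress(raw, 9)) + chunk(b"IEND", b""))

pixels = b"\x80\x40\x20" * (16 * 16)  # flat-colour 16x16 texture
zt = len(zlib.compress(tga_bytes(16, 16, pixels), 9))
pn = len(png_bytes(16, 16, pixels))
print(zt, pn)
```

For an image this trivial the deflated pixel data is tiny in both cases, so PNG's roughly 57 bytes of chunk framing dominate and the zlibbed TGA comes out smaller; on large, busy textures the fixed overhead is amortized and the comparison can go either way, matching erle's "up to a limit".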