
IRC log for #minetest, 2022-07-22


All times shown according to UTC.

Time Nick Message
00:21 fling joined #minetest
00:37 Lesha_Vel joined #minetest
00:39 est31 joined #minetest
00:47 fling joined #minetest
01:06 est31 joined #minetest
01:59 Verticen joined #minetest
02:18 toluene joined #minetest
02:35 sparky4 joined #minetest
03:00 muurkha erstazi: what's the platform where you're having the shader size limit?
03:02 muurkha sfan5: I like playing Minetest on my prehistoric GPU, so how Minetest runs on prehistoric GPUs matters to me even if it doesn't matter to you
03:03 muurkha there are a lot of games that won't run at all with my GPU, or run but not in a playable way, so I appreciate that Minetest does
03:03 muurkha erstazi: sorry, missent
03:05 muurkha apparently the platform is OpenGL 1.4 Mesa 20.3.5 with an Intel Mobile 945GM/GMS, if anyone else is wondering
03:11 nuala joined #minetest
03:11 muurkha I'm also on an Intel GPU, an Intel GM45 Express, but with Mesa 21.2.6
03:11 muurkha and OpenGL version 2.1 with Mesa 21.2.6
03:12 muurkha rather than 1.4
03:12 muurkha I haven't tried enabling shadows
03:12 Izaya Is that something along the lines of an Atom netbook?
03:12 Yad joined #minetest
03:18 Verticen joined #minetest
03:23 muurkha no, it's a 2.2GHz dual-core Pentium T4400
03:26 muurkha apparently Intel sold that chip from 02009 to god knows when, probably 02014 or so?
03:30 muurkha presumably since the OpenGL version string says "2.1" I'd still be able to run a hypothetical version of Minetest that required OpenGL 2
03:53 behalebabo joined #minetest
03:56 MTDiscord1 joined #minetest
04:00 MTDiscord1 joined #minetest
05:40 calcul0n_ joined #minetest
05:40 est31 joined #minetest
05:44 fling joined #minetest
05:55 est31 joined #minetest
05:56 cranezhou joined #minetest
06:01 est31 joined #minetest
06:03 erle joined #minetest
06:33 olliy joined #minetest
06:36 est31 joined #minetest
06:39 sfan5 muurkha: if you have OpenGL 2 then it will continue to run but it may run worse in the future
06:41 sfan5 if/when we get our renderer rewrite done keeping support for the legacy pipeline is basically duplicating all of the work for little gain
06:52 MTDiscord joined #minetest
06:54 erle sfan5 at this point i want to re-iterate that shaders were introduced before openGL 2.0. the functions are there, they are just differently named.
06:54 erle but as far as i can see, they work the same
06:55 erle or is there any difference in the *ARB functions and the ones without the suffix?
06:55 erle (internally, it seems that libraries like glew and irrlicht do know that and abstract it away, but of course if one thinks “shaders = opengl 2.0” you will only expose the opengl 2.0 functions)
06:56 erle i suspect that technically, those GPUs are probably DirectX 9 cards
06:56 erle that either got opengl 2.0 drivers or got openGL 1.x drivers + shader extensions later on
06:57 erle this will be a moot point on linux anyway, because wayland
06:58 erle basically almost all GPUs older than the one I am using can neither use shaders nor run wayland
07:00 erle also, having openGL 2.0 or openGL ES is no guarantee minetest will run, given that its shaders appear to exceed the (admittedly quite modest) instruction limit of shader model 2 and the (quite generous) limit of openGL ES
07:02 est31 joined #minetest
07:04 erle yesterday i looked at hecks “openGL 2.0 core binding” and then at irrlicht and i realized why libraries generally don't do this kind of thing
07:12 sfan5 I can't promise you that opengl 1.4 support will keep working even if you have shader extensions
07:16 erle i know
07:17 erle but statically switching between, e.g. glCreateShaderObjectARB(GL_VERTEX_SHADER_ARB) and v = glCreateShader(GL_VERTEX_SHADER) can be done using an ifdef
07:18 erle or even at runtime
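
(A minimal sketch of the switch erle describes, assuming GLEW has already been initialised with glewInit(); the GLEW_VERSION_2_0 and GLEW_ARB_* booleans are GLEW's own, the helper name is invented, and this is not Irrlicht's or Minetest's actual code:)

    /* Sketch only: use the GL 2.0 core shader entry point when available,
     * fall back to the ARB-extension spelling on GL 1.x hardware.
     * Assumes glewInit() has already been called. */
    #include <GL/glew.h>

    static GLuint create_vertex_shader(void)
    {
        if (GLEW_VERSION_2_0)
            return glCreateShader(GL_VERTEX_SHADER);                      /* core name */
        if (GLEW_ARB_shader_objects && GLEW_ARB_vertex_shader)
            return (GLuint)glCreateShaderObjectARB(GL_VERTEX_SHADER_ARB); /* ARB name */
        return 0;                                                         /* no shader support */
    }
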
07:21 cranezhou_mt joined #minetest
07:30 sfan5 I didn't say it can't be done
07:31 Izaya I asked if it was a netbook because I have an atom netbook and minetest is pretty much the only 3D accelerated game it can run :p
07:33 erle Izaya if you have linux, paste the output of “lspci -vv” and ”glxinfo” and someone can tell you a) if it *can* work b) if it is *likely* to work
07:37 Izaya OpenGL 1.4 baybee https://shadowkat.net/tmp/dc18.txt
07:38 muurkha erle: I'm also on an Intel GPU, an Intel GM45 Express, but with Mesa 21.2.6, and OpenGL version 2.1 rather than 1.4
07:38 muurkha I haven't tried enabling shadows
07:38 Izaya It's complete potato but it runs minetest at a solid 30 FPS. And Doom. What more do you need? :D
07:39 muurkha sfan5: I hope I don't have to keep using old versions of Minetest
07:39 erle Izaya, yeah but you have GL_ARB_fragment_shader, GL_ARB_vertex_shader, GL_ARB_shader_objects, GL_ARB_shading_language_100
07:39 erle so you are in exactly the same position as me
07:39 erle > 00:02.1 Display controller: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller (rev 03)
07:39 erle well, no surprises here
07:40 muurkha Izaya: yeah, it's pretty great that Minetest is so undemanding of hardware
07:41 muurkha it seems sort of silly that it's even as demanding of hardware as it is, though, given that the entire world is made out of one-meter cubes
07:41 erle this again shows the difference between developers who want to have as little code as possible and players who want stuff to work on their hardware
07:41 erle muurkha indeed, minetest rendering can be optimized much more. this has been known for a long time actually.
07:42 muurkha I mean it really isn't aiming for photorealism
07:42 erle everyone's favourite example is particles, which each get drawn separately
07:42 erle instead of batching them
07:42 muurkha but the open world makes it a challenge
07:42 muurkha I mean Dark Souls can give you a loading screen in between levels and ensure that no level ever has too many polys visible
07:42 muurkha Minetest can't do that
07:42 erle minetest has view range
07:43 Izaya fancy shadows would be harder than in minecraft even because you can't see all the vertical chunks at once so there *might* be something in the way that you can't see
07:43 erle and it could, indeed, drop polys, but that would probably lead to holes in the world
07:43 Izaya but also you can probably ignore anything you can't see
07:43 muurkha erle: do you think that if you run Mesa 21.2.6 your display controller will get OpenGL 2.1 support?
07:43 Izaya I think that mode I've seen with the low-poly far-view thing is really cool
07:43 erle muurkha i doubt it. but in any case, it would not change the capabilities of the hardware
07:44 Izaya I don't think it's been in Minetest for a long time though, only seen screenshots
07:44 muurkha mine is "Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)" so it might be a little newer than yours
07:45 erle and the hardware capabilities of the intel GM9xx are: exceed the shader instruction limit by even one instruction and instead of realtime rendering you get 4 fps or less
07:45 muurkha Izaya: yeah, I feel like for far view you ought to be able to do a reasonable job with "painted" scenery
07:45 erle like, even mesa software rendering is faster for that case
07:45 muurkha too bad mesa isn't smart enough to switch to software rendering
07:45 erle it would not help
07:45 erle the software rendering also can not keep up
07:46 erle at least not that i have noticed
07:46 erle doubling or even tripling the fps is not helping when you start out at 4fps :P
07:46 muurkha tripling to 12fps makes it look like animation
07:46 Izaya 12 FPS is a lot more playable than 4
07:46 muurkha instead of a slideshow
07:46 muurkha it's not *very* playable but it at least gives you the illusion of movement
07:47 muurkha as for "difference between developers ... and players..." well sure it's unsurprising that I, as a player, would like sfan5 to make his code run on my laptop.  but I'm not a paying customer of his, and if I really want to make it work, nothing is stopping me from putting in the effort to make it work myself
07:47 muurkha (hers? theirs? sorry if misgendered)
07:49 muurkha I can hardly criticize sfan5 if they prefer not to spend their leisure time on things that benefit me but not them
07:51 erle well, the problem is not when devs don't make something new run well
07:51 erle the problem is when they make something run worse on not-their-hardware to make it run better on the-hardware-they-own
07:51 muurkha particularly if, as is claimed, it trades off against being able to do other 3-D work that would make Minetest better in other ways
07:52 erle this has been claimed indeed
07:53 muurkha erle: you can still run the old version of Minetest.  even merge in changes that don't affect GM9xx playability, or make changes that improve it
07:53 muurkha I'd like to reliably get more than 20fps myself
07:54 erle i fully intend to fork minetest as soon as it stops running on the majority of my hardware and do exactly that, try to merge everything that isn't making it worse.
07:54 erle muurkha, have you tried subsampling?
07:54 muurkha subsampling?
07:54 erle sorry
07:54 erle wrong word
07:54 erle undersampling
07:54 muurkha undersampling?
07:54 erle it renders the world at a lower resolution
07:55 erle but the GUI is still crisp
07:55 erle try “undersampling = 2” in your config
07:55 muurkha interesting, let's see
07:55 erle or even “undersampling = 4“
07:55 erle this might lead to modest performance improvements
07:55 erle for me it can be the difference between unplayable and playable in complex scenes
07:55 erle i cap fps to 30 though
07:55 erle so i mostly don't see much of the improvements
07:56 erle muurkha tell me how much it improved pls
07:56 muurkha so, current status: 20-21 fps tramping around in the snow outside my house
07:56 erle without undersampling?
07:56 muurkha yes
07:57 muurkha with undersampling = 4: 27-35 fps
07:57 muurkha this is fantastic and I will do it forever
07:57 erle nice :)
07:57 muurkha thank you!
07:57 erle pixelate the voxel game
07:58 erle sfan5 btw have you tried recent versions of minetest on imx6 or imx8? there seems to be some issue with mesa, maybe only on wayland.
07:58 erle like, i had to fake the openGL version via mesa environment variable so minetest would even start
07:59 erle can't remember the error message rn, i'll post it as soon as i get to it again
07:59 Izaya undersampling eh
08:00 muurkha erle: did my comment above about painted backdrop scenery make sense?
08:00 erle muurkha which one?
08:00 muurkha 07:45 < muurkha> Izaya: yeah, I feel like for far view you ought to be able to do a reasonable job with
08:00 muurkha "painted" scenery
08:01 muurkha basically if there's a mountain 200 meters away you could occasionally render it into a big texture, and then put up a billboard where the mountain is
08:01 muurkha that could reduce the poly count a lot
08:01 muurkha maybe even a few different layers of billboard so that you don't lose all parallax
08:01 Izaya the old implementation did a per-chunk heightmap and then did a super low poly mesh around the rendered terrain, I gather
08:02 Izaya relying on the terrain generator and seed to provide the information
08:02 muurkha huh, interesting!
08:02 erle i suggest you ask someone else about it, not me. i have done stuff on minetest over 10 years ago and then again only became more active since the start of 2020. ask me about clouds or something lol.
08:03 Izaya I have only seen screenshots of it, fwiw, never used it, but it did mean that changes to terrain didn't apply to the far view
08:03 toluene joined #minetest
08:03 erle (i am making cloud granularity adjustable rn)
08:03 muurkha I really enjoyed your bass guitar tune synth btw
08:03 erle oh thanks :)
08:03 erle muurkha have you seen my talk “making music with a c compiler” from the SIGINT conference?
08:04 erle excuse my bad englishing in the talk
08:04 muurkha of course ;)
08:04 erle yay
08:04 muurkha have you tried concurrently synthing two KS strings with very slightly different fundamental frequencies, btw?  it sounds like a piano instead of a guitar
08:04 erle for these kinds of music, everyone who understands how it works ends up making their own little things
08:04 erle which is funny
08:04 erle like, everyone who understands how libglitch works immediately makes their own take on it
08:05 muurkha heh, just like Scheme and Forth
08:05 erle i once met a guy who also had a libglitch
08:05 erle but his one was a 3d library
08:05 erle so we were drunk maybe
08:05 erle and discussed just merging them
08:05 erle to reduce confusion
08:05 muurkha for highish frequencies you might need some version of an allpass filter in order to get the beat frequencies low enough to sound good
08:05 erle i don't think we ever met again
08:06 muurkha pianos typically have three strings for the high frequencies rather than two
08:06 erle muurkha i have never tried to make a piano sound like that. but it shouldn't be too hard, starting from the guitar example. maybe you want to make it? or did you make it already?
08:07 muurkha the trouble I've had with this approach is that it always ends up sounding like a twelve-string guitar instead
08:08 muurkha but less so for bass notes
08:08 muurkha http://canonical.org/~kragen/sw/dev3/ks-string.c was my attempt
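
(For reference, a rough self-contained sketch of the detuned-pair idea discussed above; this is not the code from ks-string.c, and the parameters are made up. It writes raw 16-bit mono samples to stdout, playable with e.g. "aplay -f S16_LE -r 44100 -c 1" as one assumed option:)

    /* Two Karplus-Strong delay lines whose lengths differ by one sample,
     * summed, so the slow beat between them gives a piano-ish chorusing
     * instead of a single plucked string. */
    #include <stdio.h>
    #include <stdlib.h>

    #define RATE 44100

    static float tick(float *buf, int len, int *pos)
    {
        int i = *pos, j = (i + 1) % len;
        float out = buf[i];
        buf[i] = 0.5f * (buf[i] + buf[j]) * 0.996f;  /* average + slight damping */
        *pos = j;
        return out;
    }

    int main(void)
    {
        int len1 = RATE / 110, len2 = len1 + 1;      /* ~A2, detuned by one sample */
        float s1[1024] = {0}, s2[1024] = {0};
        int p1 = 0, p2 = 0;
        for (int i = 0; i < len1; i++) s1[i] = rand() / (float)RAND_MAX - 0.5f;
        for (int i = 0; i < len2; i++) s2[i] = rand() / (float)RAND_MAX - 0.5f;
        for (int n = 0; n < RATE * 2; n++) {         /* two seconds of audio */
            float mix = 0.5f * (tick(s1, len1, &p1) + tick(s2, len2, &p2));
            short sample = (short)(mix * 32000.0f);
            fwrite(&sample, sizeof sample, 1, stdout);
        }
        return 0;
    }
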
08:10 erle if you ever come to berlin, say hi
08:12 muurkha likewise, in Buenos Aires :)
08:13 erle that's a bit far away tbh
08:14 muurkha I haven't been to Europe since 02004
08:15 erle wow cloud rendering sucks
08:15 erle (may even be partially my fault?)
08:16 est31 joined #minetest
08:16 muurkha it seems likely that I'll die before having another chance to visit Europe
08:16 erle why
08:16 erle are you at risk of dying soon or is it exceedingly unlikely that you'll get there?
08:17 muurkha apparently I wrote that synthesis program in 02015
08:17 erle muurkha before you die, i would like your opinion on my implementation of the redo build system. i can not imagine anyone better than you to evaluate it.
08:18 erle http://news.dieweltistgarnichtso.net/bin/redo-sh.html
08:18 muurkha cool!
08:18 erle it's only 400 lines, but faster, more flexible and arguably more powerful than make
08:18 erle given that it is a djb design, that is not surprising
08:19 erle i met djb twice, but he seems kind of not-interested-in-chatting-with-commoners
08:19 muurkha you might want to relink the introduction-to-redo link
08:19 erle oh, it's dead
08:19 erle anyway, i have man pages
08:19 erle https://web.archive.org/web/20160818124645/http://jdebp.eu./FGA/introduction-to-redo.html
08:20 erle the thing is, daniel j bernstein writes makefiles as if he had a redo implementation privately available
08:20 erle and is then just translating the instructions to a makefile
08:20 erle now it may be all in his head, but it is a thing you only realize when you have a redo implementation
08:21 muurkha also you might want to retitle the link to https://github.com/chneukirchen/redo-c as "Leah Neukirchen's redo", and by the way this table is the best thing ever about redo
08:21 muurkha but why is your implementation not in the table?
08:22 muurkha oh, it is, it's the top item
08:22 muurkha that could benefit from clarification
08:22 erle oh i should retitle that indeed
08:22 erle muurkha, this is an interesting exploration of how it works https://web.archive.org/web/20170326021857/http://news.dieweltistgarnichtso.net/posts/redo-gcc-automatic-dependencies.html
08:23 erle also the game “liberation circuit” (an RTS where every unit is scriptable) is built with redo. you can also build it using make, but every second compile was wrong last i checked hehe.
08:23 muurkha oh hey, I didn't know about the inotifywait command, this is awesome
08:24 erle redo is also one of the things where people tend to write their own implementations
08:24 erle but for the most part, they skip the hard challenges
08:24 erle for example, avery pennaruns implementation is not actually implemented as a recursive build system
08:24 erle i'd call it perfect hacker news reader bait
08:24 muurkha this seems like the first implementation of redo I've seen which it would be reasonable to bundle into a software distribution as a build system
08:25 erle there is a smaller one, called do, by avery pennarun
08:25 muurkha yeah, but it's only useful as a bootstrap for his redo
08:25 erle but his python implementation is so broken i'd call it a fraud
08:25 erle i think that internally, it sorts the targets and then does a topological sort
08:25 erle which is oh-so-close to how make works
08:25 muurkha that sounds fine to me, why do you think it's a fraud?
08:26 erle it is almost fine, but it breaks down once you do parallel builds or a target gets invalidated during the build
08:26 erle basically, the toposort is an optimization of the ”better fast than correct” brand
08:26 erle in the end, it leads to a ”redo-always” target not always being rebuilt
08:27 erle there is a code smell in pennarun redo that exposes that this is ultimately cheating
08:27 erle namely, that it can not run its own test suite with redo commands alone
08:27 erle it has to reach into the database to work around this
08:27 muurkha hmm, interesting; I guess I have to refresh my memory on redo to understand this issue
08:27 erle well, think for example of a target that always should reflect some online resource
08:28 erle or a target that simply contains the output of “date -Imin”
08:28 erle you want it to change every minute
08:28 erle in a recursive build system, the dependency check should be at the time of building the target (or skipping that build)
08:28 erle i.e. lazy evaluation
08:28 erle the toposort approach requires eager evaluation though
08:29 erle which means a target can only be built once during a ”run” (a concept that you have to introduce to make the pennarun redo test cases work)
08:30 erle i suspect that people do it for two reasons: a) they do not actually feel they can implement a recursive build system, so they do the classical toposort thing they know from make and cmake and so on b) you can cheat on parallelism with this
08:30 muurkha maybe they just think it's the wrong thing and that what make does is the right thing
08:31 erle oh and c) you can skip dependency checks that are almost always (but not actually always) going to result in “target is up to date”
08:31 erle nah, it's just way easier to implement and results in a speedup
08:31 erle at the cost of the build not being correct sometimes
08:31 erle classic ”better fast than correct”
08:31 erle and it's not about make being right or wrong, it's about being recursive or not
08:32 muurkha not sure that ahead-of-time toposort is actually easier to implement than something like straightforward recursion
08:33 erle well, you can build the graph and then delegate
08:33 erle hmmm, maybe i misremembered it. one other way to do it is to aggressively cache dependency checks.
08:33 erle but that is functionally the same as the ahead-of-time-toposort
08:33 erle i.e. same results of “faster in the usual case, wrong in some cases”
08:34 erle also consider parallelisation
08:34 muurkha I'm reminded of the first time I did a dataflow system and ended up with exponential-time computations
08:35 muurkha because A depended on B and C, which each depended on both D and E, which each depended on F
08:35 erle ah yes, these diamond style dependencies are common, but i actually do handle them well
08:35 muurkha so changing F would recompute A four times, the first three times from inconsistent pictures of the world
08:35 erle in any case i can run circles around an implementation that has to start a python interpreter all the time
08:36 muurkha that is, it would glitch (in the EE sense) to impossible values
08:36 erle by using busybox or dash
08:36 erle oh i see
08:36 erle that won't happen here though
08:36 erle the thing is, in the end pennarun et al will probably claim that “redo-always” is just to make sure a target will be rebuilt “at least once”, instead of “always”, but that introduces two discontinuities:
08:36 muurkha on this laptop starting CPython takes 18 ms and starting dash takes 2 ms
08:38 erle first, redo might erroneously skip rebuilding a target if you are in an environment (script or shell) that was started from a redo process
08:39 erle second, composability of different builds running concurrently is broken in subtle ways
08:39 muurkha I don't think "always" is a thing redo can deliver; at best it can only rebuild things when you run it, not the rest of the time when it isn't running
08:39 erle of course, “redo-always” is simply a dependency for adding an impossible to satisfy dependency
08:40 erle i mean a shortcut for
08:40 erle so if i depend on /dev/null with a hash of XXXXX or so it can never be satisfied
08:40 erle now if you have an impossible to satisfy dependency, the naive view is that ”redo-ifchange $target” will rebuild that target every time
08:40 muurkha and even when it is running, some of the time it won't have finished rebuilding the thing you're trying to always rebuild
08:41 erle and it turns out that the naive view is correct as in “less code, less problems down the road”, but it's harder to reason about it from a toposort POV
08:41 muurkha I feel like rebuilding the same target more than once during a build is generally undesirable, although you correctly point out that there are times when it is actually what you want, like LaTeX page numbers
08:41 erle yeah, or the test suite
08:41 erle in fact, i find the test suite thing hilarious
08:42 muurkha especially if there's the possibility of an exponentially large number of rebuilds of the same build artifact
08:42 erle like, having to edit the database with the dependency information while running test cases is *so* close to being self-aware of the mess you made
08:42 erle but apparently not close enough ;)
08:43 muurkha I'm comfortable with tests using special magic access to a system to verify it's doing the right thing internally
08:43 erle to be clear: the reason why you want it during test cases is because during a test case you would build a target, then change a dependency of it and then build it again
08:43 erle i am not comfortable with it because it is not papering over an issue inherent in testing
08:43 erle but over a real issue
08:43 muurkha sure, that makes sense
08:44 erle for example, i have redo-always targets that download some file and feed it to redo-stamp (a funny shortcut for: treat this target as up to date if the stdin of redo-stamp hashes the same as last time)
08:44 erle so the build rule has to always run to download the file
08:44 erle but then it has to conditionally abort
08:45 erle pennarun redo (and every redo implementation that can't build a target more than once) simply will download that once and pat their backs
08:45 erle but i do it in a loop, obviously
08:46 erle anyways, the issue is not abstract correctness
08:46 erle but very concrete correctness. i wrote my redo *because* of the failings of pennarun redo
08:46 vampirefrog joined #minetest
08:46 erle because i had a podcast static site generator for http://warumnicht.dieweltistgarnichtso.net/
08:47 erle and encoding them takes a very long time
08:47 erle as each file was encoded thrice (mp3, vorbis, opus)
08:47 erle this meant that any kind of incorrect rebuild led to either several hours more of waiting for the encoder
08:48 erle or that something was not updated
08:48 erle muurkha if you end up using redo, please show me some ways you use it. i am always interested in how it works.
08:49 erle for example, for liberation circuit, avery pennarun has a funny thing with precompiled headers (i wrote the dofiles, pennarun improved them)
08:49 erle which cuts down on C++ building time
08:49 erle muurkha, you know this? https://github.com/linleyh/liberation-circuit
08:50 erle here you can see how i strace the compiler to find all dependencies https://github.com/linleyh/liberation-circuit/blob/master/src/default.o.do
08:50 erle and non-existence dependencies
08:52 erle muurkha, btw, what redo can do is ultimately relevant to minetest (as currently incremental builds do not work reliably), but so far no one wanted to abandon make or cmake: https://github.com/minetest/minetest/issues/11749
08:53 sfan5 (that issue is a good example for erlehmann annoying devs to no end for an obscure issue nobody cares about)
08:53 erle given that bisecting is a pain in the ass to the extent that i know about 5 or 6 people who are simply not doing it anymore, i disagree
08:54 erle also i have offered to do all the work of adding a build system that does this correctly and it was rejected
08:55 erle in the end, i think random weirdos like me just value things like correctness and speed of incremental builds much more than you
09:00 specing_ joined #minetest
09:03 erle ultimately, given the amount of “you have to build minetest completely new in a clean build directory” advice that is given as a response to every single miscompile, i am quite sure that people would appreciate it if the thing would … just work. which it kinda obviously does not, unless you are developing really close to git HEAD.
09:04 erle e.g. i remember that switching back and forth between 5.4.1 and 5.5.0 tags could get you into a state where you had to build completely new or you'd get a miscompile or no compile
09:04 erle which is complete garbage tier for a build system
09:05 erle anyway, i know you don't care about it, but acknowledging that more than one person (me) kinda would like fast & correct incremental builds might go a long way
09:14 muurkha I wonder if some example redofiles would make it easier to discuss the pros and cons of different approaches to deciding when to build things
09:15 ROllerozxa I think it's a bit of a chicken and egg situation. If you were to actually link to a repository containing a new finished Minetest build system that is faster or more accurate than the current one then it would almost definitively be merged, and if not people could copy it over to the source tree themselves and build with it that way unofficially.
09:16 muurkha because, in the abstract, arguments like "apenwarr's toposort strategy is fast and therefore bad" are not very convincing
09:16 erle it's not bad because it's fast
09:16 erle it's bad because it is not recursive
09:16 erle which leads to the erroneously-not-rebuilt targets
09:16 erle which is exactly what you want to prevent when not using make
09:17 muurkha it would be helpful to look at particular ways that you are using redo in order to understand when apenwarr's strategy gives worse results than yours
09:18 erle so ”the thing can not pass its own test suite without implementing a workaround for the issue” is not enough?
09:19 sfan5 ROllerozxa: well no not really
09:19 muurkha sfan5: I don't agree that incorrect incremental builds are "an obscure issue nobody cares about", possibly because I spent a year early in my programming career mostly cleaning up after an unreliable build system
09:19 sfan5 a new build system has support, testing and maintenance costs. you don't just add it and it's done
09:20 sfan5 anything written from scratch will almost surely perform worse than the battle-tested cmake code
09:20 erle muurkha, i built my entire website using redo. you can use git to clone it. the website runs test cases for software i have published. i.e. it runs sub-redos.
09:20 muurkha at the time, though, doing a full build from scratch on that project took about a week
09:20 sfan5 "but it can do incremental builds" is in no relation to that
09:20 erle sfan5 you keep saying such things without benchmarking them
09:21 erle i'd not even try it if there was not a huge advantage over cmake
09:21 sfan5 your viewpoint is too narrow, way too narrow
09:21 sfan5 I'm not even talking about build speed
09:21 erle also the thing is 400 lines of code. the BMP decoder is larger.
09:21 erle and the minimal always-build-everything variant from pennarun is 100 lines of code
09:21 erle that did not change in years
09:22 sfan5 I'm talking about whether the build system can correctly detect X11 on a five year old Solaris version
09:22 erle that's simply grunt work
09:22 sfan5 well I'm not doing it
09:23 sfan5 neither is any other coredev
09:23 muurkha but it's grunt work that's already done, isnt it?
09:23 YuGiOhJCJ joined #minetest
09:23 muurkha with the current build system
09:23 erle yeah, but since redo is basically recursive shell scripting, you can just replace parts piece-by-piece
09:23 sfan5 the whole proposal is to get rid of cmake to make incremental builds work
09:23 erle even invoke cmake yourself
09:23 erle no, the proposal is to add a build system
09:23 erle you can happily continue to use cmake
09:23 sfan5 ah no that's even worse, you don't want two potentially incompatible different ways to build Minetest
09:24 erle and enjoy your sometimes-faulty builds that are reasonably fast on your machine, but take 20+ minutes on mine
09:24 muurkha erle: with respect to "the thing can not pass its own test suite" my understanding of what you said was more like "apenwarr's redo test suite cannot be expressed as a build"
09:24 muurkha which seems fine to me
09:24 muurkha there are lots of systems whose test suites can't be written in the system itself
09:25 erle muurkha the problem is that nothing that resembles it in terms of recursive invocations of redo or having targets built multiple times can be evaluated correctly by apenwarr redo. like … my website.
09:25 muurkha I agree that erroneously-not-rebuilt targets are a very bad thing indeed for a build system
09:25 erle the interesting thing is that make erroneously not rebuilding is why people want to switch away from make
09:26 erle giving them then just the same thing dressed up in nicer clothes is peak hipster development
09:26 erle i want to eliminate entire categories of errors
09:26 muurkha hmm, I want to switch away from make because it can't handle spaces in filenames
09:26 erle sfan5 have you looked at redo or at the build rules of liberation circuit actually?
09:26 sfan5 no
09:26 sfan5 and I won't
09:27 YuGiOhJCJ joined #minetest
09:28 muurkha it's true that if you don't tell make about your dependencies then it can erroneously fail to rebuild things
09:28 muurkha or erroneously build them in the wrong order, especially with make -j
09:28 muurkha but that's not really an inherent problem with make that redo solves
09:29 muurkha although, as you pointed out, you can usually solve it (on Linux!) with enough strace hackery
09:30 muurkha but redo will also erroneously fail to rebuild things if you don't tell it about a dependency
09:30 erle sfan5, i guess it is different, then, from you saying ”a TGA can't ever be smaller than a PNG” or rubenwardy claiming i can't have shaders without openGL 2.0 or me claiming a filesystem block is always 4k? i mean, all of those were “mostly true, but not really”, but we looked at the details in the end.
09:31 erle if you don't look at the details, then you will never actually update your beliefs
09:31 muurkha if you want a system which reliably always rebuilds everything whose dependencies have changed, you need something that runs the build steps in an isolated namespace
09:31 lemonzest joined #minetest
09:31 erle muurkha, the difference is that the category of “nonexistence dependencies” can not *ever* be considered by a system like make.
09:31 erle like, conceptually
09:31 erle only systems that produce the dependency information as a side effect of the build can consider it
09:32 erle because you don't have the information before the build
09:32 muurkha like clearmake, nix, guix, or Vesta
09:32 erle i only know nix a bit, but generally, probably yes
09:32 erle it's a category of build systems
09:32 erle redo-sh is only the smallest solution for it
09:33 erle muurkha consider a header file. the preprocessor looks for it in the local directory at path A, then in /usr/local somewhere at path B, then finds it at path C. C is the dependency and can be tracked. if A or B get created, the target is dirty and needs to be rebuilt, but make will never be able to do it.
09:33 erle A and B are non-existence dependencies
09:33 muurkha right, you have to be able to detect that gcc was trying to read A and B
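
(A toy illustration of what considering A and B would mean; the paths and the in-memory record are invented here, and a real implementation would record them as a side effect of watching the compiler, e.g. via strace:)

    /* Toy check: a target is also dirty if a path that was *missing* during
     * the last build (a non-existence dependency) has since appeared and
     * would now shadow the header that was actually used. */
    #include <stdio.h>
    #include <sys/stat.h>

    static int path_exists(const char *path)
    {
        struct stat st;
        return stat(path, &st) == 0;
    }

    int main(void)
    {
        /* recorded from the last build: ./foo.h (A) and /usr/local/include/foo.h (B)
         * were searched and missing; /usr/include/foo.h (C) was the one used */
        const char *nonexistence_deps[] = { "./foo.h", "/usr/local/include/foo.h" };
        int dirty = 0;

        for (int i = 0; i < 2; i++)
            if (path_exists(nonexistence_deps[i]))
                dirty = 1;   /* a header appeared earlier in the search order */

        /* the header C that was actually used still gets the ordinary changed check */
        printf("%s\n", dirty ? "rebuild" : "up to date w.r.t. non-existence deps");
        return 0;
    }
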
09:33 erle the only system i have seen in wide use that actually can handle nonexistence dependencies is udev
09:34 muurkha well, clearmake and Vesta handle them too
09:34 muurkha but in a different way
09:34 erle and that is a build system in the same way that excel is a build system (i.e. in academic terms, but not in practical terms)
09:34 mrgreymatter joined #minetest
09:34 erle yeah, i know. ninja special-cases C header files, but that's cheating too.
09:34 muurkha it's not a special case
09:34 erle it's a special case in the way that you actually need to handle it in a general way. or i do not understand how it is doing it. care to enlighten me?
09:35 muurkha in Vesta or Clearmake or Nix or Guix each build step is run in an isolated filesystem namespace, like a Docker container
09:35 erle uh, i guess that's one way to do it
09:35 erle like, being absolutely sure you'll get all of it
09:36 muurkha this ensures that the only things it can depend on are the things you've specified should be visible
09:36 erle ROllerozxa btw that kind of opposition is why i haven't even started. i offered to make it as a separate repo, but the interest was not even ”not interested”, but more like “cmake does not suck enough for us to justify to even think about it, get lost”.
09:37 erle muurkha interesting, but it seems way too limited. you might not *know* what you want to depend on until you get to the build rule.
09:37 muurkha and integration with a versioning system allows them to know when one of the things you've specified to import into that namespace has changed
09:37 muurkha yes, that's true!  and this is where the divergence happens
09:37 erle yeah, that's a way to make make-style systems work i guess
09:37 erle simply *never* have non-existence dependencies
09:38 muurkha in systems like make or redo, when you get to the build rule with the undetected dependency, you silently get incorrect output
09:38 muurkha in systems like Vesta or Nix, you get a build error saying that it can't find jpeg.h or whatever
09:38 erle true, but you can not detect all dependencies automatically technically
09:38 muurkha sure you can!
09:38 erle i talked to an engineer recently about non-existence dependencies and she was like “wow, good that i always specify all headers with exact paths”
09:39 erle think about stuff like redo-stamp. you can't, unless you add a separate pre-processing step to acquire resources.
09:39 muurkha I mean Vesta runs its own NFS server, so it can detect every time your build step attempts to open a file
09:39 erle that sounds a bit over-the-top tbh
09:39 erle but then again i strace the compiler
09:39 muurkha the strace hack you mentioned is another approach, yes
09:39 erle so i should be silent on over-the-top ;)
09:39 muurkha also you could use FUSE
09:39 erle oh, i have an idea
09:40 muurkha Vesta predates FUSE by many years
09:40 erle you can make a lot of people very angry and confused if you add the output of non-existence dependencies to clang or gcc
09:40 erle simply because most systems, prominently make, will not be able to handle the information, but it would make it more obvious that it is there
09:40 muurkha as for opposition, I'd say, don't worry about opposition
09:41 muurkha if you write a working redo build for Minetest, *you* benefit from it
09:41 muurkha because you get working incremental builds
09:41 erle muurkha, btw, if you use my redo implementation, check out redo-dot. it prints the entire dependency tree including non-existence dependencies and implicit dependencies (e.g. dependencies on build rules and non-existence dependencies on not-yet-written more general build rules that might exist in the future)
09:41 muurkha and then you can tell other people about it, and maybe they'll use it too
09:41 muurkha and maybe they'll get it to work on five-year-old versions of Solaris
09:42 erle i am curious about other systems that can print the dep tree
09:42 muurkha and, maybe more importantly for Minetest, Microsoft Windows, which is not really sh's strong point, or redo's
09:42 erle so far i have not found any
09:42 erle friend of mine installed git on windows and it comes with sh
09:42 muurkha yeah, there's also WSL
09:42 erle if you have git, you have enough of the deps to run redo
09:42 erle i mean, my redo
09:42 muurkha but cmake does a lot more than just run on windows
09:42 erle i know i know
09:42 muurkha as I understand it, it invokes the visual studio compilers in the appropriate way for things to build
09:43 erle as i said, i don't want to replace cmake, even if i think the position “yes we know it does not capture all deps, but we don't care” is not exactly my own
09:43 erle btw, that post-checkout hook from https://github.com/minetest/minetest/issues/11749 was my first idea on how to make builds faster
09:43 erle but it only works if the build system is reliable
09:45 erle so with minetest it has the hilarious effect of fixing one bug (preventing unnecessary rebuilds) that mostly masks another (not all dependencies are tracked, but if you rebuild too much, you won't notice)
09:45 erle also coincidentally, you can actually get into the situation that it creates by hand
09:46 erle (accidentally)
09:46 erle muurkha you might be interested in this discussion of build systems https://www.microsoft.com/en-us/research/uploads/prod/2018/03/build-systems.pdf
09:46 muurkha it's unsurprising for people to be unconvinced to adopt a new, not-yet-written build script using a build system they're unfamiliar with
09:46 muurkha I mean they can't tell how good it is until it exists
09:46 muurkha it might be worse
09:46 erle well, liberation circuit exists and it is a large C++ project
09:47 muurkha ah yes, "build systems a la carte"
09:47 erle look, IMO the problem is not ”i am not convinced”, the problem is “i am not convinced and i will not look at the issue in-depth enough to ever make me reconsider, so don't even try”
09:47 erle that's demotivating
09:47 erle i mean, maybe i misunderstood the stance
09:47 muurkha yes, it seems that sfan5 has a certain dislike for you
09:47 muurkha sometimes that happens, and it can be demotivating
09:47 erle nah, i think it's a dislike for the things i do
09:48 muurkha but it's not the end of the world, you don't have to change their mind
09:48 erle i mean i dislike plenty of things that sfan5 does too, but ultimately, we are able to communicate
09:48 muurkha I mean if what you do is sufficiently useful to other people they will probably change sfan5's mind
09:48 erle and i believe neither of us is trying to be abrasive, like e.g. kilbith
09:48 erle it's just different priorities
09:48 sfan5 I'm not sure what magic insight should convince me that replacing CMake or adding a second build system is a good idea
09:49 sfan5 it's also not that I'm unconvinced that this is possible
09:49 sfan5 it is just not feasible within the practical needs of Minetest
09:49 sfan5 s/feasible/worth it/ perhaps
09:49 muurkha sfan5: it might not be a good idea; it's hard to know until there's a worked example to look at and compare
09:49 muurkha it's certainly possible to have a build system that does reliable incremental builds
09:49 muurkha and in my experience it's usually a very valuable thing
09:49 erle yeah, and it does not have to be redo. redo is just the simplest one.
09:50 erle the difference is between build systems that can do it and that can not do it.
09:50 muurkha hmm, I'm pretty sure CMake can also do reliable incremental builds
09:50 muurkha though I have to admit my acquaintance with CMake is not very deep
09:50 erle and right now, there are a bunch of issues that i can't even meaningfully bisect, because i have to rebuild every time and i just don't want to wait that long.
09:50 muurkha automate the rebuild and bisect it overnight
09:51 erle i think it would take less time to redo-ify the build then lol
09:51 erle muurkha there are two issues here: first, currently cmake does not capture all dependencies. improving that is probably welcome by the coredevs.
09:51 muurkha likely
09:52 muurkha and even if it weren't, who cares?  you still benefit from the improvement yourself
09:52 muurkha and so would anyone else who decided to use it
09:52 muurkha it's free software, you don't need the core devs' permission to change your copy of it
09:52 erle second, the one person i saw trying to improve the non-existence dependency handling of cmake ended up with a solution that was significantly larger than my redo implementation and did much less, in a racy way too.
09:52 muurkha or to share your changes with other people
09:53 erle the problem is ultimately that cmake is a thing that delegates the building to another – possibly less capable – system
09:53 muurkha do you mean "larger than my implementation of redo" or "larger than my new working build system for Minetest written as *.do files"?
09:54 muurkha "larger than my implementation of redo" would be entirely unsurprising, because your implementation of redo is apparently very nicely compact indeed
09:54 muurkha although I have only read a small part of it so far!
09:55 erle let me rephrase “the amount of code that you have to add to cmake to *only* get a badly working racy version of non-existence dependency handling is larger than the entirety of my redo implementation, regardless of which project i am talking about”
09:55 erle and the problem with it is that conceptually, this problem is simply not solvable using cmake
09:55 erle so you can only get so far
09:56 erle any solution will be on a kludge level of “compiling twice with make and using the outputs from the first build as inputs for the second”
09:56 erle like, not that exactly, but the kind of dirty hack that works most-of-the-time-but-not-always
09:56 erle and works-most-of-the-time-but-not-always is the state we have anyway
09:56 muurkha oh, I see
09:56 muurkha I think there are significant differences between degrees of empirical reliability
09:57 muurkha something that never works is fine
09:57 muurkha something that works half the time is annoying but probably also fine
09:57 muurkha something that works 99% of the time is a real pain in the ass
09:57 muurkha something that works 99.9% of the time is probably pretty okay
09:57 erle so cmake works well enough to develop stuff close to git HEAD. but it does not work well enough to switch between far diverging branches or far apart commits – which coincidentally means that bisecting sucks hard.
09:57 muurkha something that works 99.99% of the time is fine
09:58 muurkha something that works 99.999% of the time is difficult to distinguish from being correct
09:58 erle the problem is not as simple
09:58 erle as the non-working is not uniform
09:58 muurkha true
09:59 erle i mean, i totally believe that sfan5 does not run into this very often, while i run into it all the time
09:59 muurkha yeah.  sfan5 probably doesn't know about the Beautiful Gift of Git Bisect
09:59 erle oh he does
09:59 erle but simply having a faster machine cancels the pain out
09:59 muurkha yeah, that does help a lot
10:00 muurkha incremental builds are "just" an optimization
10:00 erle also this is complicated a bit more because the default failure mode of not considering non-existence dependencies in C or C++ builds is having a binary that works, but does not correspond to the current source code
10:00 muurkha but they're a very significant optimization, especially when it comes to C++
10:00 muurkha yeah
10:01 muurkha well, maybe not "works"
10:01 erle which is also a reason why many people claim that ne deps are never a problem for them
10:01 muurkha sometimes it's "segfaults" instead
10:01 erle yeah that too
10:01 erle but usually it's just cobbled together from whatever was erroneously not rebuilt
10:01 erle and that mostly works
10:02 erle i also suspect that this is the reason why switching between far away revisions and rebuilding incrementally rarely works in minetest
10:02 erle it's just that too much changed
10:02 muurkha things that work when they shouldn't is a bigger problem than segfaults
10:02 erle i agree
10:03 erle did you read my article about the hello world example?
10:03 muurkha I don't see it linked on http://news.dieweltistgarnichtso.net/bin/redo-sh.html
10:04 erle the wayback machine might not have it, but these are the deps (solid line) and ne deps (dashed line) of a hello world https://mister-muffin.de/p/x1d1.png
10:04 toluene joined #minetest
10:04 erle i have never seen a makefile that includes all of them (which is impossible anyway, since make can not handle ne deps)
10:05 erle muurkha, ig i should update that page soon anyway, or leah is going to think i'm a total buttface :/
10:05 muurkha hmm, can't you tell make that foo depends on the directory /usr/lib/gcc/i686-linux-gnu/5/include?
10:05 erle yeah, keep trying
10:05 erle if you find a solution, i'd be interested
10:06 muurkha or whatever the lowest existing ancestor is
10:06 erle but so far it takes people 20 minutes to about 3 hours to figure out that make is not fixable
10:06 erle you basically have to rebuild everything every time to be guaranteed a correct build
10:06 muurkha well, it does have some problems
10:06 erle which, coincidentally, is the only way you get a minetest binary that corresponds to the source
10:06 muurkha the biggest one for me is actually that it can't handle filenames with spaces
10:06 erle rebuild every time
10:07 muurkha but #2 is that it can't detect dependencies you don't tell it about
10:07 ShadowBot https://github.com/minetest/minetest/issues/2 -- Burned wood
10:07 erle well, i suggest to try my redo implementation and then tell me what other build systems you have encountered that did something better or different
10:07 erle haha, this is the #1 issue with the bot (glowstone lol)
10:07 ShadowBot https://github.com/minetest/minetest/issues/1 -- GlowStone code by anonymousAwesome
10:07 muurkha which is a problem that redo shares with make, doesn't it?
10:07 muurkha I mean it doesn't go around stracing things normally
10:07 muurkha or running its own NFS server or FUSE server
10:08 muurkha problem #3 is maybe that make can't tell when a file has been rebuilt in an identical way
10:08 ShadowBot https://github.com/minetest/minetest/issues/3 -- Furnace segfault
10:08 muurkha which redo solves with hashes
10:09 erle not all redo implementations do. some only compare the file timestamps lol
10:09 erle i compare timestamps and hash if different
10:09 muurkha oh, I maybe assumed they all followed apenwarr on that
10:09 erle which is guaranteed to be faster
10:09 erle than always hashing
10:09 muurkha well, it's guaranteed to not be slower
10:09 erle not entirely sure
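
(A compressed sketch of the "timestamps first, hash only on mismatch" strategy erle describes; the stored record layout and the toy FNV-1a hash are stand-ins, not what redo-sh actually does:)

    /* Compare the recorded mtime first; only hash the file when the
     * timestamp differs, so the common unchanged case stays cheap. */
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>
    #include <sys/stat.h>

    struct dep_record { time_t mtime; uint64_t hash; };    /* invented layout */

    static uint64_t hash_file(const char *path)            /* toy 64-bit FNV-1a */
    {
        uint64_t h = 0xcbf29ce484222325ULL;
        FILE *f = fopen(path, "rb");
        if (!f) return 0;
        for (int c; (c = fgetc(f)) != EOF; )
            h = (h ^ (uint64_t)c) * 0x100000001b3ULL;
        fclose(f);
        return h;
    }

    static int unchanged(const char *path, const struct dep_record *rec)
    {
        struct stat st;
        if (stat(path, &st) != 0) return 0;                /* missing: changed */
        if (st.st_mtime == rec->mtime) return 1;           /* cheap path */
        return hash_file(path) == rec->hash;               /* slow path */
    }
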
10:10 muurkha also though don't some filesystems have one-second granularity on timestamps?
10:10 muurkha or two seconds for FAT
10:10 erle yeah, but if you are rebuilding that fast, spurious rebuilds are not going to kill your project
10:10 erle oh
10:10 erle i see
10:10 muurkha you might be saving the file that fast instead
10:10 muurkha a source file
10:10 erle yeah, this is a problem that you have anyway
10:11 muurkha well, the Vesta approach doesn't ;)
10:11 erle the practical issue is of course another one
10:11 erle “i am building a 16GB sd card image and don't want to hash it just to figure out if it changes”
10:11 erle so you just look at the inode
10:11 erle the image is of course a sparse file
10:11 erle but the hashing algo does not know that
10:11 erle ninja actually has had a PR that added hashing as a possible strategy
10:11 muurkha yeah, ideally your filesystem would hash your 16GB sd card image as you're building it and deliver the hash to the build system on request
10:11 erle which would have made it much faster
10:12 erle i agree
10:12 muurkha which a lot of filesystems are doing anyway for things like deduplication and media error detection
10:12 muurkha they just don't have a standard way to find out what the hash is
10:12 Yad joined #minetest
10:12 erle the thing is, without that patch, redo-sh (and every other filesystem that does the timestamp-hash check) can run circles around ninja, which i find hilarious
10:13 erle filesystem i meant build system
10:13 erle damn
10:13 erle i need to drink sth
10:13 muurkha well, it depends on the system you're building
10:13 erle ”can”, not “will”
10:13 muurkha right ;)
10:13 muurkha another deficiency in the standard filesystem interface is that a lot of times you'd like to use a stable snapshot of the filesystem for building from
10:14 muurkha and not worry whether the file you saved halfway through the build was used
10:14 muurkha or, perhaps, used in a partly written state
10:14 erle btw, do you maybe have an explanation of why, for small textures (think 16×16 @ 24bpp), zipping up uncompressed bitmaps saves more space than zipping up optimized PNGs? or do you have experience with tilesheets maybe?
10:14 erle i think tilesheets might improve it even more
10:15 erle my suspicion is that the prefilter can't really shine at those sizes and the 69 byte non-negotiable overhead dominates the PNG
10:15 muurkha I was puzzled by that when you mentioned it the other day
10:15 erle it surprised me too
10:15 erle especially since every additional compression step makes it worse
10:15 erle regardless of format
10:15 muurkha your guess about the 69-byte header sounds very plausible
10:15 erle basically, compressing twice is bad
10:16 erle well, it's not a 69 byte header. it's the structure of the file, the framing. including checksums, which hamper compression.
10:16 muurkha the other night I wrote a thing for which I might want to compress small bitmaps
10:16 erle some of it is header
10:16 erle show and tell?
10:16 muurkha namely http://canonical.org/~kragen/sw/dev3/raycast
10:17 erle i think devine (of 100 rabbits) actually chose TGA for uxn, because it also deals with small bitmaps
10:17 muurkha the map is editable, but it's not sharable or savable
10:17 muurkha I was thinking something like http://canonical.org/~kragen/sw/dev3/raycast#!as8dga80g0a8jsdg0aj0gaj
10:17 muurkha but 64×64 bits is rather large to put into the URL
10:18 erle is it lol
10:18 erle the thing is, you can actually save a bit more space with A1R5G5B5 color format
10:18 erle but that actually loses information obviously
10:18 muurkha well, I mean, it's 682 bytes if you base64-encode it
10:18 erle probably even more by using paletted images, hmm
10:19 muurkha so I was thinking maybe I could use the Paeth predictor from PNG and then RLE-encode what was left
10:19 erle wouldn't that be a perfect candidate for just using RLE?
10:19 erle like, it's a maze
10:19 erle it will have long corridors
10:19 erle riiight?
10:19 muurkha RLE doesn't compress vertical walls or corridors well
10:19 muurkha but with Paeth it would
10:20 erle good luck then
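
(The Paeth predictor muurkha refers to is the one from the PNG filter specification; a small standalone version is below, with the surrounding RLE pass left out:)

    /* PNG-style Paeth predictor: predict each cell from its left (a),
     * upper (b) and upper-left (c) neighbours and store only the
     * difference, which makes straight vertical walls compress well. */
    #include <stdlib.h>

    static unsigned char paeth(unsigned char a, unsigned char b, unsigned char c)
    {
        int p  = a + b - c;                       /* initial estimate */
        int pa = abs(p - a), pb = abs(p - b), pc = abs(p - c);
        if (pa <= pb && pa <= pc) return a;
        if (pb <= pc)             return b;
        return c;
    }
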
10:20 erle also wtf, how can this page get past noscript
10:21 muurkha it shouldn't
10:21 muurkha maybe you whitelisted our server previously
10:21 erle hmm
10:21 erle maybe a misclick of mine
10:21 erle it does not execute scripts on reload
10:21 muurkha aha
10:22 muurkha I was going to suggest, or maybe you configured noscript to block WebGL but not JS, and it doesn't use WebGL
10:22 erle i do indeed block webgl, since once on a hacker camp i had a funny experience
10:22 erle we reached the end of a CTF challenge
10:22 erle and some bonkers animation played
10:22 erle and it shut down my computer through overheating
10:22 erle surely that was not intended
10:23 erle but it was in a tent in summer on a very hot day
10:23 muurkha basically I was like "it's goofy that on this 2.2 GHz dual core CPU I can't get more than 15fps reliably out of Minetest, I'll write my own 3-D renderer with hookers and blow"
10:23 muurkha but uh
10:23 muurkha I spent two hours dicking with HTML instead
10:24 muurkha there isn't any way to do the mouse pointer warping capture thing that Minetest does in HTML5, is there?
10:26 muurkha because unbounded mouselook would be a big plus for this kind of thing
10:28 orwell96 joined #minetest
10:28 muurkha https://gitlab.com/kragen/bubbleos/blob/master/yeso/sdf.lua is a different 3-D engine I wrote with another GUI toolkit that also can't do mouse pointer warping or capture and so also can't do unbounded mouselook, but in that case I could just add the capability to Yeso; harder for me to do that with HTML5
10:31 erle no idea. but there is someone doing minetest on a web page, i forgot who.
10:31 erle maybe that person knows
10:33 erle muurkha do you have an opinion on uxn?
10:36 muurkha uxn is cool, and I like its goals, but I would have done a lot of things differently
10:37 muurkha on the other hand, as Christine Lavin sings, "The reality of me cannot compete with the dreams you have of her. And the love you've given me is not as sweet as the feelings that she stirs."
10:38 muurkha hypothetical software can always be better than any software that currently exists :)
10:40 muurkha a lot of the uxn design consists of extra complexity to compensate for the low performance of the virtual machine as it's implemented
10:42 erle explain?
10:42 erle like what
10:45 muurkha pixels, sprites, ADSR envelopes, stuff like that
10:46 muurkha stack machines need to execute about twice as many instructions to do a given job, which doubles the interpretive overhead, which is already punishing in a simple implementation
10:47 muurkha it's hard to write a compiler from uxn code to native machine code
10:47 muurkha in part because of the handling of self-modifying code
10:52 muurkha using a stack machine also increases the difficulty of writing that compiler, because you have to do register allocation
10:53 muurkha with a register VM you can do a simple compiler with a static correspondence of VM registers to physical machine registers and memory locations
10:57 muurkha also, I think 16-bit addressing is a mistake
10:58 muurkha if you look at the platform list in https://github.com/hundredrabbits/awesome-uxn#emulators
10:59 muurkha there are about 20 platforms but the only ones that don't have 32-bit addressing are the Gameboy ("experimental") and the IBM PC ("incomplete")
11:01 erle i think you have different ideas about simplicity than those hackers on a boat
11:01 erle 16 bit ought to be enough for anything i would do with uxn if i did something
11:01 erle so why would anyone want more?
11:01 muurkha no, I'm talking about how to achieve the goals they have set for themselves and are working towards
11:01 erle it's not like you'd use it because it's so capable
11:02 erle so, simplicty of implementation ig
11:02 muurkha in particular there are no uxn/Varvara implementations for C64, NES, Apple ][, etc., so the limitation to 16-bit addressing is not helping them
11:02 erle yet
11:02 muurkha or TI calculator
11:03 erle i may not be very well qualified to talk about it, it's just that mentally i compare it to bullshit like PICO-8 and then it's miles ahead
11:03 erle PICO-8 is like „what if we cargo-cult the demoscene but like, flash games”
11:03 muurkha I don't think it's going to help them.  I have a 32-bit ARM here that runs on 30 picojoules per instruction
11:03 erle and compared to *that*, i think uxn is stellar. but it's a very low bar
11:03 erle the bar is basically ano the ground
11:03 erle on
11:03 muurkha that's at least an order of magnitude lower power than any 8-bit or 16-bit processor I know of
11:04 erle but do you think uxn fares better on “solution fits in head” with its limitation, or worse?
11:04 muurkha PICO-8 (and, more interestingly, TIC-80) are cool, they have drawn lots of people into making games for them
11:05 muurkha they're evidently very accessible
11:05 erle yeah, pico-8 looks cool, but any optimization techniques you are learning there are more for the “i minify my javascript” crowd
11:05 muurkha pixel art is, sure, cargo culting 80s console pixel art
11:05 erle it lives up to its name
11:05 erle a *fantasy* console
11:05 erle minetest is much more impressive imo
11:06 muurkha agreed!  much more
11:06 erle and that even with all its faults!
11:06 erle btw, do or did you develop stuff on/for minetest engine?
11:06 muurkha I've only ever written some stupid tiny mods in Lua
11:07 muurkha and helped my girlfriend write some slightly less stupid ones
11:07 muurkha well, and compile Minetest from source, which was unnecessarily difficult because of not tagging the submodule
11:07 muurkha but Devine and Rekka want "permacomputing": stuff that works forever, doesn't break, runs on minimal energy.  I think this is a very good goal
11:08 erle i agree
11:08 muurkha and making roms that can run on a Nintendo 3DS or a GBA or a Pico or Linux is a good means to that goal
11:08 erle i have that goal for redo as well btw
11:08 Lesha_Vel_ joined #minetest
11:08 erle you could send my implementation back to the 80s and it might work
11:09 muurkha you can try it in SIMH if you want
11:09 erle what's that?
11:09 muurkha on an emulated PDP-11 with 2.9BSD
11:09 muurkha SIMH is a simulator for lots of old computers
11:10 muurkha a big problem with Varvara is that there's so much functionality in the base platform (outside the uxn CPU) that there's going to be a lot of pressure to change it
11:10 muurkha which breaks compatibility between implementations
11:11 muurkha and the fact that that stuff is needed for efficiency means that you're limited by what the I/O devices can do
11:11 erle i doubt you can pressure the 100 rabbits into anything
11:11 erle you can convince them, maybe
11:12 muurkha they'll pressure themselves
11:12 muurkha I didn't mean social pressure
11:12 erle oh i see
11:12 erle this is funny with the scam that is urbit
11:12 muurkha it would be a lot better to have a VM with an efficient enough implementation of the CPU that you can implement sprites and ADSR envelopes and FM synthesis etc. as libraries
11:12 erle so urbit went super big on their VM
11:12 muurkha yeah, Urbit has a similar problem with jets
11:12 erle so big that they counted the version numbers backwards
11:12 erle in kelvin
11:12 erle when they reach 0 kelvin, it is frozen
11:12 erle lol
11:12 erle sooooo
11:13 muurkha but it's worse because jets are not documented
11:13 erle i think at some one-digit-kelvin or so someone noticed a huge bug
11:13 erle and they silently changed it
11:13 muurkha heh
11:13 erle without changing the version number
11:13 erle because they are actually afraid to come ever so closer to zero
11:13 erle or that's my interpretation of it
11:14 muurkha anyway, uxn is designed to be as easy as possible to implement without being painful to program in assembly
11:14 erle urbit is a scam even without jets
11:14 muurkha and I think that's the wrong approach
11:14 muurkha well, that's overstating it
11:14 muurkha I think it's an approach that could be improved on
11:14 erle well, years ago i learned just enough hoon to read it
11:14 erle and i learned that they basically skip *every* hard problem
11:15 erle with smoke and mirrors
11:15 erle and constant renaming of features
11:15 muurkha which hard problems?
11:15 erle for example, the correspondence between their markdown code and some C code that is supposed to be the same is more than questionable
11:15 muurkha improved on by designing it to be as easy as possible to implement *efficiently* without being painful to *use as a compiler target*
11:15 muurkha that's the jets problem
11:15 erle no, it's the entire urbit ecosystem's problem
11:16 erle there is no guarantee that the jet will do what you expect
11:16 erle they simply crap out code in hoon and annotate it with some jet
11:16 muurkha no, I mean, specifically "jet" is the Urbit word for "some C code that is supposed to be the same"
11:16 erle yeah, but the problem is pervasive
11:16 erle like why code hoon at that point
11:16 erle if there is no verification going on
11:16 muurkha is there a different hard problem they're skipping, other than the ones they're doing with jets?
11:17 erle i think the whole “you might have to reboot your universe for hard changes” thing made me stop trying it out
11:17 muurkha I think a better starting point than Forth for uxn/Varvara would probably be something like Wasm or RISC-V
11:17 erle basically, you have to upgrade specific components in lockstep for them to continue to work, not out of laziness or need, but because the lead dev is a monarchist
11:17 erle and believes in a centralized system
11:17 muurkha haha
11:18 muurkha although, without the RISC-V scrambled instruction encoding, which isn't helpful if you're compiling the program to native code
11:18 muurkha and Wasm and even RISC-V are too large, you'd have to cut them down
11:19 erle urbit was designed by mencius moldbug, a type of guy who despises nazis because in his view *obviously* a wise (white, christian, etc.) philosopher king by divine right would be a better leader than some random genocider with a grudge
11:19 erle and it permeates the software he writes, the same way that early GNU software is an expression of stallman's views on politics
11:19 muurkha hmm, are you seriously suggesting that Moldbug wants to be ruled by Christians?  do you not know he's Jewish?
11:20 erle ig i may have misremembered some of it
11:20 erle thanks for calling me out on such stuff
11:20 erle it has been years that i have read some of his writings
11:20 erle and i did not know about it
11:20 muurkha but of course a philosopher king would be a better leader than anyone else or any other form of government, by definition.  the only problem is that they don't exist
11:21 muurkha anyway, back to simple computing systems
11:21 muurkha I think the TIC-80/uxn "rom" idea is great
11:21 erle not his problem. and tbh it's not my problem if he's like that. it's my problem if his software is unnecessarily creating issues for me because it's centralized to a ridiculous extent.
11:21 erle and with urbit, asking everyone to just recreate their stuff is a thing
11:21 muurkha I didn't know that about rebuilding the universe
11:22 muurkha I mean that's specifically what Urbit was supposedly designed to avoid, right?
11:22 erle it's simple: your ships or planets or whatever they call it from years ago will not work anymore
11:22 erle yeah, supposedly
11:22 erle as i said, it's a scam
11:22 erle i am surprised you can get investor money for such stuff
11:22 muurkha right now you can't
11:22 erle but then again, juicero got investor money for squeezing packs
11:22 erle yeah but back then you could
11:22 muurkha yeah
11:23 erle anyways, i am writing this not because i hate moldbug (though someone once told me that dragging the guy to a meeting with investors is “like bringing your racist redneck uncle”) but because urbit is the kind of system that will waste your time, because you need to learn a lot about it to figure out that it does not keep its promises in any way
11:23 muurkha I'd really like to have a "rom"-based system where you could reasonably expect that your existing software "roms" will run on new implementations, even implementations written by people who couldn't test against an existing implementation
11:24 muurkha I don't think uxn meets that bar, in part because of the peripherals in Varvara
11:25 muurkha the closest approaches I know of to that are Brainfuck, Chifir, and the Cult of the Bound Variable
11:25 erle i wrote libglitch with only access to a header file, so i am obviously in favor of simple things like that
11:25 erle what's chifir
11:25 erle also silly brainfuck implementations will have bignum issues
11:25 muurkha Nguyen and Kay's "Cuneiform Tablets" paper
11:25 muurkha yes, also Brainfuck doesn't specify the peripherals either
11:26 muurkha you can emit ANSI escape codes from it if you want
11:26 erle like one of the easiest brainfuck implementations is just using search and replace
11:26 erle to use a very long ringbuffer or so
11:26 erle or a big array
11:26 erle stupid shit like that
11:26 muurkha but there's no way to turn on cbreak mode
11:27 erle wdym
11:27 erle the fun thing about urbit btw is that you actually have to waste all that time that i wasted to verify that what i assert is true
11:27 muurkha Linus Åkesson wrote an implementation of the game of life for BF
11:27 erle like there is no shortcut really
11:27 erle because hoon is so idiosyncratic
11:27 muurkha I sat down and wrote a BF implementation in about 45 minutes in C from the spec
11:27 erle with other stuff you don't really need to know, you can observe the results
11:28 muurkha and then I could run Åkesson's Life
11:28 muurkha but I had to hit Enter for each generation
11:28 erle like “oh this actually builds the same binary in incremental and from-scratch mode”
11:28 erle nice
11:28 muurkha because there's no way to do a nonblocking keyboard read, BF just has stdin and stdout
11:29 muurkha you could define a different kind of terminal that sends it an idle byte 60 times a second or something
11:29 muurkha but all of that terminal stuff is outside the BF spec
11:29 muurkha also, getting BF to perform well is as hard as getting Nock to perform well
11:29 muurkha you need to put jets in your BF
11:30 muurkha but it was still super inspiring to be able to sit down and implement a VM from the spec in under an hour and then just run an interesting program in it
11:30 erle oh. nock is legit except for that kelvin thing
11:30 natewrench joined #minetest
11:30 erle it's just stupidly slow
11:30 erle and i think optimizations could work differently
11:30 muurkha yeah.  also its memory usage isn't well characterized
11:31 muurkha which is another thing BF and uxn and Chifir did right
11:31 erle but i refuse to spend more thought on that when i can improve minetest clouds instead: https://github.com/minetest/minetest/issues/11685#issuecomment-1192440600
11:31 muurkha like, there's no way to say how much memory you need to run a given Nock program, it depends on your GC
11:34 muurkha I think the Cult of the Bound Variable manages memory explicitly too
11:34 erle are you talking about lisp what
11:35 muurkha the UM of the Cult of the Bound Variable is one of the very few virtual machines that have been successfully reimplemented by several different people without access to a working implementation
11:36 muurkha http://www.boundvariable.org/task.shtml
11:37 muurkha specifically, 365 different teams successfully wrote (probably) independent implementations on the same day
11:37 muurkha well, the same three-day weekend
11:38 muurkha some more people have done it since then
11:38 erle more successful than redo i would say
11:38 erle with redo there are different levels of “i implement this in a few hours”
11:38 erle level 1 is not handling non-existence dependencies, but making it recursive
11:38 erle it's the simplest thing that can possibly be useful
11:39 erle level 2 is not implementing stuff like redo-stamp i would say, because it is a bit weird to conditionally build and then notice the target is up to date while you are building it
11:39 muurkha the implementation in https://www.cs.cmu.edu/~crary/papers/2006/bound-variable.pdf is 55 lines of C, and the authors claim that it works
11:40 muurkha that is, it's shorter than my BF implementation, and also a great deal more practical as a compilation target
11:41 erle level 3 is figuring out that your approach inhibits rebuilding targets several times and then claiming stuff like “this implementation passes all test cases of redo-sh” (while silently removing or adjusting the redo-always tests)
11:41 muurkha it's also considerably shorter than any uxn implementation I've found
11:41 erle i'll look at it
11:41 muurkha but it shares the problem with BF that it doesn't specify I/O devices
11:42 muurkha I'm not yet convinced that rebuilding a target more than once is correct behavior
11:42 erle why?
11:43 erle if A depends on B and i change B, what could possibly be the reason for “redo-ifchange A” to not rebuild it?
11:43 muurkha clearly A should be rebuilt in that case; the question is only whether it should be rebuilt more than once
11:44 erle oh, it should only be rebuilt once on that invocation
11:44 erle but that is the thing
11:44 muurkha usually it's a performance problem if it happens, and (depending on exactly when it's rebuilt repeatedly) it can easily be a termination problem --- or, what comes to the same thing in practice, an exponential-factor slowdown
11:44 erle nope
11:45 muurkha you've said that doesn't happen with your implementation
11:45 muurkha but I don't yet understand why
11:45 erle the problem is that it will be rebuilt zero times if you are using pennarun redo and you are inside a script inside a redo “run” that already rebuilt it
11:46 erle so suddenly, whether a target that is clearly out of date gets rebuilt depends on whether some earlier process already rebuilt it while building dependencies for another target that your thing A might also be a dependency of
11:46 erle if that other target gets rebuilt i mean
11:46 muurkha right, which is generally the right way to handle that sort of dependency thing
11:46 erle nope
11:46 muurkha because it avoids nontermination, exponential slowdown, and inconsistency problems
11:46 erle it's generally the right way if you are going the toposort, “i know all my dependencies” route
11:47 erle but you *don't* know all dependencies
11:47 erle i have had a guy tell me that he does engineering and he does probabilistic builds. basically you shake the thing and then see if it satisfies some criterion.
11:47 erle if it does not do that, you build it again
11:47 muurkha hahaha
11:47 erle clearly that will run into an endless loop if the target is only built once with pennarun redo
11:47 muurkha yeah, I've worked on projects that used a "flaky" plugin for pytest
11:48 erle and it will not with my thing
11:48 muurkha if the test fails, it runs it again until it succeeds
11:48 erle yeah but in his case it was more of an optimization problem for an FPGA or so
11:48 muurkha yeah, optimization is often like that
11:48 erle where the probabilistic approach was good enough, but might take 3 tries
11:48 erle i think that's a perfectly good reason for a build process to say, after building something, ”you know, i need that built again”
11:49 erle and it is not only unexpected, but feels downright malicious for a build system to skip that build
11:49 muurkha that's not at all the way I see it
11:49 MinetestBot [git] sfan5 -> minetest/minetest: Use newer NDK r23c for android build 2183b35 https://github.com/minetest/minetest/commit/2183b35ba4cda762e3176a7b442dd786e749b35d (2022-07-22T11:13:35Z)
11:49 erle i mean, i am talking about the end result here, skipping a build
11:49 erle the reason is that apenwarr redo skips dependency checks that technically it has to do
11:49 muurkha how do you avoid the exponential-time blowup I described earlier?
11:50 erle could you describe it again, in more simple terms? i did not understand it
11:50 muurkha digraph problem { F -> {D E} -> {B C} -> A; }
11:51 erle yeah
11:51 erle the classic diamond dependency problem
11:51 muurkha F changed, causing D to be recomputed, which causes B to be recomputed, which causes A to be recomputed, and then D having been recomputed causes C to be recomputed, which causes A to be recomputed again,
11:51 erle though i do the arrows the other way around i think
11:51 muurkha and then F having changed causes E to be recomputed, which causes B to be recomputed again, which causes A to be recomputed a third time
11:51 muurkha yeah, you did, sorry
11:51 erle oh. i see what you mean
11:52 erle the answer is very simple
11:52 cranezhou joined #minetest
11:52 muurkha finally E's recomputation causes C to be recomputed again, which causes A to be recomputed (correctly, at last) a fourth time
11:52 erle in addition to saving “target is up to date” information for every file, you save a list of expected inode timestamps and hashes for each dependency with each thing you build
11:52 erle which leads to the following scenario:
11:53 erle everything is built exactly once
11:54 erle basically, i think that problem only happens if you view “target is up to date” as separate from the dependency tree
11:54 erle which is a very simple view of the world, but it also really fits the erroneous toposort approach
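A rough sketch of what recording that per-dependency state can look like in plain shell; GNU stat and sha256sum are assumed, and the .redo/ file layout is purely illustrative rather than erle's actual format:

    # record_deps TARGET DEP... — after a successful build, note each dependency's
    # mtime and content hash so a later check can tell exactly what changed
    record_deps() {
        target=$1; shift
        mkdir -p .redo
        for dep in "$@"; do
            printf '%s %s %s\n' "$dep" "$(stat -c %Y "$dep")" "$(sha256sum < "$dep" | cut -d ' ' -f 1)"
        done > ".redo/deps.$target"
    }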
11:56 erle muurkha what do you think about my answer?
11:56 erle is it satisfying?
11:56 erle also redo does not work this way. if F changes, then this invalidates A first.
11:57 erle because a dependency check is basically: 1. is this target up to date? 2. are all dependencies of this target up to date? 3. are all non-existence dependencies of this target up to date?
11:57 erle but then again redo-ifchange is a command that does “rebuild this target if it is not up to date”
11:57 erle i can see why you are viewing it the way you presented it though
11:57 erle and why you made the arrows that way
11:58 erle it's the toposort way of thinking about a dependency problem
11:58 erle correct?
11:58 erle basically, what DJB figured out is that if you think about the problem from the other side, i.e. top-down instead of bottom-up, you can sidestep all kinds of problems
11:59 debiankaios joined #minetest
11:59 muurkha how does that end up rebuilding things more than once?
11:59 erle in this case, i see no reason to rebuild more than once
11:59 muurkha in which case does it?
12:00 erle if you have a dependency that can never be satisfied. or if you have a dependency that changes during the build process and is supposed to do so.
12:00 muurkha I wrote the arrows that way because that's the direction of the dataflow and my most painful experience with this was writing a dataflow library
12:00 erle for example, imagine i had a static site generator
12:00 erle a really stupid shell script thingy
12:00 muurkha doesn't rebuilding a dependency cause it to change during the build process?
12:00 erle so i want to include a footer on every page
12:00 erle that includes when the page was rendered
12:01 erle and there i use $(date -Imin)
12:01 erle and redo-always
12:01 erle because every time a page is doing ”redo-ifchange footer.html && cat footer.html” it should have the current time
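As a concrete sketch of that footer rule, assuming the usual .do conventions ($3 is the temporary output file) and invented file names:

    # footer.html.do — always considered out of date, so the timestamp stays current
    redo-always
    printf '<footer>rendered %s</footer>\n' "$(date -Imin)" > "$3"

    # page.html.do — each page pulls the footer in, exactly as described above
    redo-ifchange footer.html content.html
    cat content.html footer.html > "$3"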
12:02 erle there are a ton of problems like that and the usual response to basically all of them from people who use make et al is “don't do that … instead, just rebuild everything every time and inline stuff and so on”
12:02 muurkha yeah, admittedly that is my thought in this case ;
12:02 muurkha ;)
12:02 erle basically, not rebuilding stuff when explicitly asked to hampers composition
12:03 erle well, what if you have a build system that reads a /dev/camera device and should generate a HTML file when the cam changes?
12:03 erle or what if you wanted to implement something like udev in terms of redo? i.e. the existence or non-existence of a device being plugged in or out as a conditional in the build process
12:03 muurkha it should produce an output build that reflects the state of /dev/camera at the beginning of the build
12:03 muurkha generally if I had a file in my build that got changed during the build, and some of the files got built with the old version and some during the new version, that's a serious problem in the build
12:04 erle not for the problem at hand, which is “loop over this for 10 hours and give me all the pictures that were different than the previous one”
12:04 muurkha like if I have a .h file that gets built from a grammar specification or something
12:04 erle nah, it can be entirely legit
12:04 muurkha if I have different parts of my source base that get built with different versions of the same .h file, that's a nasty bug
12:04 erle and trying to find counter-examples for everything i can come up with is pointless anyway
12:05 erle out of curiosity, how would you solve my “webpage is using redo and shows the results of redo test cases” use case?
12:05 muurkha I think the examples you're describing can be handled by doing multiple build runs rather than a single one
12:05 erle with pennarun redo, obviously any test case that checks whether a target is rebuilt when the source changes could fail
12:05 erle look, the concept of a “run” of redo is the problem here
12:06 muurkha I don't know, but I agree that it's an important case, unlike the /dev/camera or udev cases
12:06 erle you only need the concept of a “run” if you try to implement a recursive build system with an approach that is useful for make-style non-recursive builds
12:06 erle but if you are using that approach, you get all kinds of bugs
12:06 erle this is only one of them
12:07 muurkha I would say that you avoid all kinds of bugs: namely, inconsistency, potential non-termination, and exponential-time builds
12:07 muurkha but maybe you have a better way to avoid them
12:07 erle the thing is, apenwarr redo could totally refuse to work if the world changed during the run
12:07 erle but it doesn't
12:08 muurkha unfortunately our filesystems don't provide us that ability
12:08 erle no, but if you are going to cache aggressively and toposort anyway, you could check all the time
12:08 erle or at start and end
12:08 erle that the world is what you expect
12:09 erle the problem is basically, that the toposort/caching approach is not only wrong in the abstract, it relies on assumptions that are not only almost never true, but sometimes go explicitly against user expectations
12:09 erle and the only advantages are a) less dependency checks b) easier parallelization
12:10 muurkha I don't agree, yet
12:10 erle non-termination is not a problem btw, if you want something like a service manager using redo for dependency handling of shellscripts
12:11 erle and there is no guarantee that your build process terminates anyway, except if you do “timeout 120 redo” or so to force it to quit
12:11 muurkha wait, are you saying that it's *desirable* that a redo build can be non-terminating even if all of its steps terminate?
12:11 definitelya joined #minetest
12:11 erle i have a hunch that it will always terminate if all of the steps terminate. care to give a counter-example?
12:12 muurkha well, I don't know of a counter-example, because I don't understand your implementation strategy well enough yet
12:12 erle i can give you something, it is indeed desirable that a build can be non-terminating if some step is not terminating
12:12 erle my prime example for it is using inotify in a loop
12:12 erle so the top-level dofile just redoes stuff when i save files
12:13 erle the common answer to that is “to use this with apenwarr redo, use a wrapper shell script”, but here we are again: composition is hampered
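Roughly what such a dofile looks like, using inotifywait from inotify-tools; the directory and target names are invented:

    # watch.do — deliberately never terminates: every saved file triggers a rebuild
    redo-always
    while inotifywait -qq -r -e close_write src/; do
        redo-ifchange site/index.html
    done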
12:14 muurkha yeah, I definitely don't think that's a desirable behavior for a build system
12:14 erle what exactly?
12:15 erle anyways, i can not ever forgive a build system for not rebuilding a target because the programmer chose to skip some dependency checks in the name of speed
12:15 muurkha having the top-level dofile run inotify in a loop instead of running inotify in a loop in a shell script
12:15 erle but dofiles are (usually) shell scripts
12:15 muurkha that's true
12:15 erle it's just shell scripts with dependencies. or anything that can be executable, really.
12:15 muurkha yes
12:16 muurkha but the important thing here is that they are being run as part of a build system
12:16 muurkha so they should not do that
12:16 erle that's actually only important if you do not care about recursive composition
12:16 erle but having a top-down recursive build system is the entire point of redo
12:16 erle implementing it in terms of a bottom-up make-style system is simply not solving any real problems over make
12:17 definitelya left #minetest
12:17 erle lipstick on a pig and so on
12:17 muurkha hmm, I think having a simple, expressive build system is the entire point of redo
12:17 erle true, but for me, simplicity also means “solution fits in head”
12:17 cranezhou joined #minetest
12:17 definitelya joined #minetest
12:17 muurkha but rebuilding the same target multiple times during a build is a giant red flag for me
12:18 muurkha usually I'm willing to have my build system go to virtually any length to make sure that never happens
12:18 erle look, the bottom-up make-style implementation means that ”if a target is not up to date, it will be rebuilt when you do redo-ifchange $target” changes into “if a target is not up to date, it might or might not be rebuilt, based on the moon phase and what you had for breakfast”
12:18 erle i think i may have found the misunderstanding
12:19 erle you worry about accidentally rebuilding targets multiple times
12:19 erle i have yet to see this happen, look at the parallel builds code for how i prevent it
12:19 erle it is very inefficient and for years i thought it would not be possible
12:20 erle i am not saying it can't happen, but i put great efforts into handling the diamond dependency structure
12:20 erle digraph diamond { A -> B; A -> C; B -> D; C -> D }
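For concreteness, a hypothetical set of .do files with exactly that diamond shape (D is a plain source file; the commands are arbitrary):

    # B.do — C.do is analogous
    redo-ifchange D
    tr a-z A-Z < D > "$3"

    # A.do — depends on both B and C, which both depend on D
    redo-ifchange B C
    cat B C > "$3"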
12:20 muurkha either it's rebuilding the same binary twice, in which case it could have avoided doing it the first time, or it's rebuilding two different versions of the same file.  in the second case, either the first version of the file was used as an input to something else, or it wasn't.  if it was used, then that other thing that it was used for is now out of sync and thus inconsistent with it.  if it wasn't
12:20 muurkha used, then again, building it the first time could and should have been avoided
12:20 muurkha worse, it might have errored out the first time, terminating the whole build process
12:21 erle i am not worrying about accidental rebuilds happening. i have NEVER seen them happen because i try to make sure it never happens *unless it is requested*
12:21 erle whereas apenwarr redo makes sure it never happens by making sure you can't even ask for it
12:21 YuGiOhJCJ joined #minetest
12:21 muurkha asking for it doesn't fit into my mental model of what a build is
12:21 erle which simplifies the code, but extremely complicates the mental model (“runs” are a ridiculous concept if you ask me)
12:22 erle also apenwarr redo could totally error out in this case
12:22 erle but if it did instead of silently not rebuilding, it might look a bit silly
12:23 erle also … how am i going to do a build process with some inotify loop or reading a camera or doing test cases then anyway?
12:23 muurkha outside of redo, from a shell script
12:23 muurkha or runit or systemd etc.
12:23 erle but how do you know that shell script is not called from a parent redo process?
12:23 muurkha I don't run runit or systemd from a parent redo process
12:24 muurkha because if something is doing an inotify loop then what it's doing is not a "build" as I understand the term
12:24 muurkha but I do agree that the LaTeX page number thing is an important example, and including the redo test results in a web page built from redo is an important case
12:25 muurkha I don't agree that Avery's reason for not doing that is that he thought it was less correct but faster
12:25 erle well, then latex is not a ”build“
12:25 muurkha you can definitely run latex as part of your build process
12:25 erle what do you think the reason is then?
12:26 erle it is vastly simpler to implement it in the apenwarr way if you want to optimize the order of things to be built
12:26 erle i.e. for parallel builds
12:26 muurkha probably avoiding inconsistency
12:26 erle na, it's less consistent if you do that
12:26 muurkha "consistency" in the sense I am using the term is not a gradable adjective
12:27 erle okay please explain
12:27 muurkha consistency is when the view of the world as seen by a process represents a single point in conceptual time
12:28 erle also just to be clear, are you okay with an out-of-date target not being rebuilt if a build rule or a user explicitly ask for it to be rebuilt if it is out of date?
12:28 erle because that is what it comes down to for me
12:28 muurkha no, out-of-date targets not being rebuilt are inconsistency
12:28 muurkha *is inconsistency
12:29 erle well, then skipping it because the build system thinks the target is up to date because of toposort or aggressive caching leads to inconsistencies whenever the world does not match the snapshot taken at the beginning of the build
12:29 muurkha if it is skipped, yes, but not if it is rebuilt later on during the build
12:29 muurkha as long as nothing reads its outdated state during the build
12:30 erle but that's exactly the point
12:30 erle the only time you would care about something not being rebuilt when you explicitly ask for it is when you need its state during the build
12:30 erle i mean the only time during the build
12:30 muurkha my conception of the build process is that it produces some set of build artifacts, each of which is a pure function of the state of the source code
12:31 erle yes, that's one of the “nice in theory, but wrong in practice” assumptions
12:31 erle it's a good default
12:31 erle but you need to bend over a lot to make any arbitrary problem fit this mold
12:31 erle and some just won't
12:31 muurkha yes, that is true, but I think it is okay for a build system not to handle any arbitrary problem, if the result is that it handles builds better
12:31 erle especially since not only your dependencies might change during the build
12:32 erle with a recursive implementation you could totally build build rules
12:32 erle during the build
12:32 erle (i do that sometimes)
12:32 muurkha yes, that is true
12:33 erle well, what is ”better”? my implementation is faster and smaller than pennarun's, and in years i have never seen a case where ”do not rebuild a target if explicitly asked to rebuild it every time” was correct.
12:33 erle so i see no metric on which the “wrong” (i.e. not mine) approach wins except “less effort on the part of the programmer”
12:33 erle but a build system is only written once
12:33 erle and executed a lot
12:33 muurkha that is good, but the differences in behavior that you are describing do not sound like improvements to me
12:34 muurkha except for the test-results-in-docs case and the LaTeX page numbers case
12:34 erle as i said, i doubt they are 100% intentional. they are a side effect of faulty skipping of dependency checks.
12:35 muurkha but from my point of view a build system is a way to incrementalize a pure function so that it runs faster
12:35 erle and a side effect of using questionable optimizations for parallel builds
12:35 muurkha and the things you're describing don't sound like pure functions
12:35 erle obviously they are not
12:35 erle for very simple reasons
12:35 erle for example, i may or may not want to rebuild a binary when my compiler is updated
12:36 erle so i can't assume that all inputs have effects i care about
12:36 erle on the other hand, if libpng is updated, i probably want to rebuild
12:36 muurkha I definitely 100% want to make sure that updating my compiler rebuilds all my binaries
12:36 erle yes, you want that
12:37 muurkha otherwise how will I find out that the compiler upgrade introduced a bug in one of my object files?
12:37 erle also then there are tons of cases where you need to speculatively build a target only to say “i was wrong, this target is actually up to date”
12:37 muurkha I won't find out until I edit the corresponding source file six months from now
12:37 muurkha and then I'll forget that I upgraded the compiler
12:38 erle these targets usually use a combination of redo-always (to always build the target) and redo-stamp (to abort prematurely with a state where the target is considered up to date even though the build rule did not build a new target)
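As a sketch, following the apenwarr redo documentation for redo-stamp (erle's implementation may differ in detail); the version-header example is invented:

    # version.h.do — runs every time, but dependents only rebuild if the output changed
    redo-always
    printf '#define VERSION "%s"\n' "$(git describe --always --dirty)" > "$3"
    redo-stamp < "$3"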
12:38 muurkha make doesn't do this, and this is a significant deficiency in make
12:38 erle i am interested in how you would solve it
12:38 muurkha Nix and Guix do
12:38 erle well, it is trivial to do this with redo, but you are not forced to
12:38 erle i have been thinking of adding a helper wrapper though that captures everything
12:39 muurkha that could be helpful!
12:39 erle so about the speculative rebuilding
12:39 erle how would you handle that if you only look at each target one time, even if asked to do it multiple times?
12:41 erle oh, i have another recursive use case: say you have a build rule to log disk utilization and file it away.
12:41 erle like, post it to syslog or some pastebin or append it to an archive
12:41 erle obviously that is not a pure function
12:41 erle but in the apenwarr case, it is impossible to do this several times during the course of another project
12:42 erle to, for example, track disk utilization over the time of the build
12:42 erle because the second time you explicitly ask the build system to do it, it will be like “welp, i already did”
12:42 calcul0n joined #minetest
12:42 erle even though running the exact same commands from outside a dofile would make it generate it
12:42 muurkha hmm, by "speculative rebuilding" I think you mean rebuilding something one of whose inputs has changed, but in a way that didn't affect it?
12:43 erle yes, something like that. but the usual case is that you just always rebuild it by adding a dependency that can never be satisfied, then abort later with your custom check.
12:43 muurkha for example, maybe an executable links with a .a library which changed, but the files it happens to pull in are unchanged, so the binary is the same
12:44 muurkha the standard way to handle that is to wait to rebuild it until all of its dependencies are up to date
12:44 erle yes, but that runs into a halting problem
12:44 erle because strictly speaking, this is not about waiting for dependencies
12:44 erle it's about targets that have prerequisites that you can not express in dependencies or non-existence dependencies
12:45 erle e.g. “at this point in the build, generate a zip file and copy it over to the usb drive, if it is plugged in”
12:45 muurkha I agree that debug messages such as logging disk utilization are not part of a pure function
12:45 erle i think that brings us closer
12:45 muurkha yeah, I'm fine with excluding "copy it over to the usb drive if it is plugged in" from the build system
12:46 erle the thing is, stuff that you care about when rebuilding multiple times is likely a side effect (in terms of pure functions)
12:46 muurkha I really don't want my build system to be able to produce different results depending on whether or not a usb drive is plugged in
12:46 erle well, you can't really choose here
12:46 erle you either get the ”i can do this” or the ”i can not do this, but some targets will unexpectedly not be rebuilt”
12:47 erle it just depends on how many dependency checks you do
12:48 erle anyways, i noticed that you respond to a lot of “well, in this use case the wrong implementation strategy results in havoc” with “well, i don't have that use case”
12:48 muurkha that's not what I'm saying
12:48 erle then please elaborate
12:48 muurkha I'm saying that the behaviors you want in your build system sound like "havoc" to me
12:49 erle it's not that i *want* them
12:49 erle they are a logical result of doing stringent dependency checks
12:49 erle i noticed that they happened and i use them though
12:49 erle and you obviously can't rely on them if your build system skips some checks randomly
12:49 muurkha from my point of view, that means that doing what you are calling "stringent dependency checks" is a bad idea, because it can have profoundly undesirable effects on the build system
12:50 erle which are?
12:50 erle you always have the non-termination issue
12:50 muurkha for example, it might rebuild the same file more than once, or produce different output depending on whether a USB drive is plugged in, or maybe even loop infinitely (though you've said that this, and the exponential-time bogeyman, aren't things that happen in practice)
12:50 erle you never have a pure function. a lot of important things are side effects.
12:51 muurkha non-termination is not an issue for traversing a dependency DAG, except if the computation of one of the nodes is itself nonterminating
12:52 erle so what i do not understand: the ONLY time where I think a build system should rebuild the same file more than once is if you explicitly ask for it and it is out of date. how is it undesirable to silently skip on that?
12:52 erle i mean how is it desirable
12:52 erle like, are you *sure* you don't worry about *accidentally* rebuilding several times?
12:52 erle because that simply does not happen as far as i can see
12:53 muurkha by "explicitly ask for it" do you mean that more than one build artifact {B C} depend on the same other build artifact (D)?
12:53 muurkha *depends
12:53 muurkha if B depends on D, and C depends on D, and B gets rebuilt because D changed, and somehow D changes again, the correct behavior after that is not to rebuild only C
12:53 muurkha it is to rebuild both B and C
12:54 erle i am talking about a case where B and C both depend on D and are not built in parallel and D's dependency checks are explicitly overridden to always return ”out of date”
12:54 muurkha otherwise B is inconsistent with D
12:54 muurkha oh, I see
12:54 erle you can get the consistent behaviour by just not overriding that dependency check
12:54 erle but overriding it means you *don't* want consistency
12:55 erle like in the case of latex or the test cases or the html footer displaying the date
12:55 muurkha that can't possibly be what it means, because consistency is never something I would not want
12:55 erle yeah, you'd never do the footer html thing
12:55 erle i get it
12:56 muurkha yeah, probably not.  or if I did I would put the current date in a file that's tracked as one of the build inputs, a source file
12:56 muurkha otherwise I have an irreproducible build
12:56 erle then think “a gif that shows the current time”
12:56 muurkha which is something I very very much do not want
12:57 erle the last not-reproducible build i made for people was something like
12:57 erle 1. download file from API
12:57 erle 2. generate HTML for that
12:57 muurkha I mean, I have wasted thousands of hours of my life because builds were irreproducible
12:57 erle 3. on each page of the PDF output print the date and time of the current step in the build
12:57 erle this was explicitly requested
12:57 muurkha that sounds inherently irreproducible
12:58 erle yes
12:58 muurkha that's the kind of thing I want my build system to prevent
12:58 erle so what's your response to that, you should never do it?
12:58 muurkha if possible, anyway
12:58 erle the thing is, that file was modified all the time, even while building
12:58 erle so i had a build rule that would always be out of date to produce the file
12:58 erle but it prematurely aborts if the HEAD request for the file reports the same Etag
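A rough sketch of that rule under the usual .do conventions ($1 is the target, $3 the temporary output file); the URL and cache file are placeholders, not erle's actual code:

    # feed.json.do
    redo-always
    url=https://api.example.com/feed.json
    etag=$(curl -sI "$url" | tr -d '\r' | awk 'tolower($1) == "etag:" { print $2 }')
    if [ -e .feed.etag ] && [ "$etag" = "$(cat .feed.etag)" ]; then
        cat "$1" > "$3"                    # upstream unchanged: keep the existing target
    else
        curl -s "$url" > "$3"              # upstream changed: fetch the new version
        printf '%s\n' "$etag" > .feed.etag
    fi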
12:59 muurkha no, I do that kind of thing too, except for modifying the file all the time; I save the downloaded file and the current date in my filesystem, and check them into source control when I'm using source control, so that I can ensure that the build is reproducible
13:00 erle well, the problem here is that such stuff is exceedingly hard unless you go a “i'll do everything all the time” route
13:00 erle at least in shell script territory
13:00 erle but redo adds a few commands that make it trivial
13:01 erle i could of course check that file into git as well
13:01 muurkha well, I might be erring by conceptually trying to force it into my mold of "a build system"
13:01 erle but that would *also* be a side effect
13:01 erle i think your mold of a build system is a bottom-up build system that handles pure functions
13:01 muurkha and there are a number of things that people do in builds that don't fit well into this mold
13:01 muurkha no, it can be top-down too
13:02 muurkha it doesn't matter which direction you start the graph traversal from, much anyway
13:02 muurkha make is top-down
13:02 erle yeah but make can't handle non-existence dependencies
13:02 muurkha right
13:02 muurkha you've mentioned LaTeX
13:02 erle because you can only find them in one direction, top-down
13:02 muurkha I'll add incrementally updating .a files
13:02 erle unless i am mistaken
13:02 erle i haven't drunk enough, brb
13:02 muurkha no, that isn't the reason make doesn't handle non-existence dependencies
13:03 muurkha you could easily imagine extending it with a magic character like ! to specify files that should fail to exist for a Makefile rule to be considered satisfied
13:03 erle nope
13:04 muurkha normally this would only occur in .d files output by makedepend
13:04 erle you could not, because the way it works you would only have that information after the build
13:04 muurkha and updated when you do the build step
13:04 muurkha that's true!
13:04 erle so during the build, it depends on which way you do things
13:04 erle if you do it the wrong way around, you need to build at least twice (and that is assuming your build is reproducible)
13:04 erle and indeed i have seen the ”use make to build twice” strategy in the wild
13:04 muurkha but that's okay because the first time you do a build you don't have any .o files
13:05 muurkha so it needs to build them anyway, and when it builds them it generates the .d files that list the files that it read (.h, etc.) and the files that it tried to read and didn't find
13:05 erle anyways, the issue is that if you build up the graph from the “wrong” end, you are not guaranteed to have enough information at build time to determine up-to-dateness
13:05 muurkha it doesn't matter at all which end you build it from
13:06 muurkha when you evaluate a graph node, you find out all the information you used to evaluate it
13:06 erle it does matter, unless you freeze the world during the build
13:06 sys4 joined #minetest
13:06 muurkha well yes of course
13:06 muurkha I'm assuming you freeze the world during the build
13:06 erle yes, bold assumption
13:06 erle i mean, i made my redo implementation for the podcast thing
13:06 erle build times of hours
13:06 muurkha anyway back to what I was saying
13:07 erle in which times i could totally update the system
13:07 muurkha LaTeX, incrementally updating .a files (for which make has a special case)
13:07 erle could you tell me more about the .a files?
13:07 erle i do not know how this works and how make special-cases it
13:07 muurkha oh, well, a .a file is like a .zip without compression
13:07 muurkha it's a .zip of .o files typically, with a symbol table
13:08 muurkha and the ar command has an option to replace a given member of the .a with a new version
13:08 erle i do appreciate that the redo solution has not needed special-casing so far, except for looking up build rules (naively, when you can't find a build rule, you might look up a rule to build the build rule, but that can go on forever or until you run into the recursion limit)
13:08 erle sounds to me as if anything involving that is not a pure function
13:08 erle right?
13:08 muurkha right!
13:09 erle that's the kind of info why i like talking to people in the know about build systems
13:09 muurkha the idea is that if you have 47 .o files and you have updated one of them, then it will be more efficient to use that ar option to update the .a instead of rebuilding the .a from scratch
13:09 erle so how does make special-case them?
13:09 muurkha I forget the syntax, it's something like libfoo.a(bar.o)
13:09 muurkha this is no longer a useful thing to do I think
13:10 muurkha so it wouldn't surprise me if the functionality has been removed from modern versions of make
13:10 erle btw, i do care about reproducible artifacts
13:10 muurkha maybe you can find it in the GNU make manual
13:10 muurkha or the O'Reilly make book
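From memory of the GNU make manual's "Archives" chapter, the special case looks roughly like this (a sketch to check against the manual, not a verified rule; the recipe line needs a tab in a real Makefile):

    # archive members appear as targets/prerequisites of the form archive(member);
    # make's built-in rule updates just that member, roughly `ar rv libfoo.a bar.o`
    libfoo.a: libfoo.a(bar.o) libfoo.a(baz.o)
        ranlib libfoo.a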
13:10 erle look here http://news.dieweltistgarnichtso.net/bin/tar-create
13:10 erle i made my own tarball creation thingy
13:11 muurkha incremental linkers like purelink are another exception
13:11 erle because a detached PGP signature needs a bit-exact output
13:11 muurkha although very closely analogous to the .a case
13:11 muurkha the incremental linker can update an executable by replacing just one of the .o files in it
13:12 erle which leads me to another wart of make
13:12 muurkha chainsawing out the old version of the .o file, glomming in the new one, and relinking all the symbols that previously pointed into that .o file to point to the new symbols
13:12 erle not writing to a temporary file and then atomically replacing the target
13:12 erle sounds like it is trivial to do in a shell script
13:12 erle so that would be the kind of build rule that is always out of date, right?
13:13 muurkha hmm, why would that always be out of date?
13:13 erle well, what would it depend on?
13:13 muurkha writing to a temporary file and atomically replacing the target just sounds like a thing you should do whenever you're creating a build artifact
13:14 muurkha it doesn't depend on anything any more than /bin/as does
13:14 erle yes, which is why redo does it by default
13:14 muurkha right
13:14 erle well, when in the build process would you do “replace just one of the .o files”?
13:14 muurkha GNU make tries to fake it by deleting the output file if you interrupt it
13:14 erle that sounds like cutting the power leaves you with half a binary
13:14 muurkha yup
13:15 erle one more reason to not use it then ig
13:15 muurkha well, it's one more design defect of make
13:15 erle well, i basically have stopped using make entirely
13:15 erle given that writing build rules for it is difficult
13:15 muurkha it can't really avoid it as long as the compiler is opening the output file with the final result
13:15 erle and you get all those weirdo behaviours
13:16 erle it could totally do it like redo, i.e. provide the compiler with a temporary file
13:16 erle and then mv that once the build succeeded
13:16 erle but why doesn't it?
13:16 erle i mean it's not like it's a big deal, is it?
13:16 erle i would very much prefer for make to be fixable
13:16 erle same for cmake btw
13:16 erle stuff that can be fixed should be fixed wherever
13:17 erle even if the tool is ultimately unfit for my purpose
13:17 muurkha because in many cases the compiler is opening the output file with the final-result filename
13:18 muurkha if your rule says, for example, $(CC) $(CFLAGS) -c $<
13:18 muurkha the output filename doesn't occur in the command line; the compiler computes it from $<, the input filename
13:18 muurkha which is terrible, of course
13:19 muurkha but make can't change that without breaking compatibility with existing Makefiles
13:19 muurkha even if you write $(CC) $(CFLAGS) -c $< -o $@ it doesn't help that much
13:20 muurkha because if make starts interpolating .tmp.output.8023 for $@ instead of sendnam.o, it's going to break Makefile rules that say things like `basename $@`
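What a rule author can do by hand today, without make changing what $@ means, is roughly this; a sketch, not a built-in make feature (recipe indented with a tab in a real Makefile):

    foo.o: foo.c
        $(CC) $(CFLAGS) -c $< -o $@.tmp && mv $@.tmp $@
    # if the compiler is interrupted or fails, the old foo.o is left untouched;
    # only the stray .tmp file needs cleaning up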
13:20 erle i see
13:20 erle oh btw, regarding not breaking compatibility
13:21 erle i think apenwarr redo broke compat with „the basename of the target” at some point
13:21 muurkha you could define a new magic output variable named something like $(dest) that has that behavior
13:21 erle with the justification of “this new shiny thing is cooler”
13:21 erle and it seems to be the case with a lot of stuff popular on hacker news that compatibility is an afterthought
13:21 muurkha I'm extremely skeptical of your guesses at apenwarr's justifications at this point ;)
13:21 erle you can read the mailing list yourself
13:22 erle and more importantly, ask apenwarr
13:22 erle and read the github issues
13:22 muurkha I'm not sure I could find the relevant messages or issues at this point
13:22 erle the actual justification was something like “yes, the documentation said this for years, but no one really cares, right?”
13:22 erle which is more in line with every compat breaking ever
13:22 erle if you then speak up, then ”okay, no one except THAT person cares, right?”
13:22 erle etc.
13:23 erle anyway, you can ask apenwarr and i am pretty sure that you'll get a “building targets multiple times is an abomination, even if the user asks for it”
13:23 erle in fact, i suggest to ask and correct me if i am wrong
13:24 muurkha heh
13:24 erle well, it's not uncommon for people to have exactly that stance
13:24 muurkha oh, so here's another thing that doesn't fit very well into the top-down Makefile mold
13:24 muurkha where you have one build step that produces multiple output files
13:25 erle in fact, i think that the reason i make a stink about it is not because i'm trying to be contrarian, but because i have the use cases where it breaks down
13:25 erle i'd probably not care if i did not
13:25 muurkha one example is where you have a compilation step that produces a .d file for make
13:25 erle same way i don't care that the linux terminal does not support more than 512 glyphs
13:25 erle build steps with multiple outputs can just produce a tarball though
13:25 muurkha so that running the C compiler once produces both foo.o and foo.d
13:26 muurkha yeah, I've thought about that possibility.  or a subdirectory
13:26 erle you can't reasonably depend on a directory though
13:26 erle i mean it might depend on your filesystem
13:26 erle i heard the multiple outputs thing a lot and i also had that problem
13:26 muurkha don't all your files depend on your filesystem?
13:26 erle ba-dum tssss
13:27 erle despite the obvious aesthetic issue of producing an unnecessary archive file, the tarball solution works fine
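A sketch of the tarball trick applied to muurkha's foo.o/foo.d example; the flags assume GCC, and writing the side files next to the sources is a simplification:

    # foo.tar.do — one compiler invocation, two outputs, bundled so redo sees one target
    redo-ifchange foo.c
    gcc -MD -MF foo.d -c foo.c -o foo.o
    tar -cf "$3" foo.o foo.d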
13:27 Fixer joined #minetest
13:27 muurkha why can't you reasonably depend on a directory?
13:27 erle well, to start with: what does that even mean?
13:27 muurkha (usually you really want to depend on things in the directory though)
13:27 erle when is a directory out of date? when is it not?
13:27 erle what do you do if it does not exist?
13:29 muurkha maybe each build step should create a subdirectory
13:29 muurkha to contain its zero or more outputs
13:29 fling joined #minetest
13:29 erle muurkha you have not answered the question though
13:30 erle what makes a directory up to date or not
13:30 erle i have thought about it and not come to a solution that satisfied me
13:30 erle but i am curious about your musings, even if half-finished
13:30 erle half-finished thoughts are sometimes the best
13:30 erle as they show where the mind goes
13:31 muurkha well, one possibility is that you can't depend on the directory, only on things in it; another possibility is that the directory is considered to have changed whenever anything in it changes (which is admittedly hard to implement if it can contain further subdirectories)
13:31 muurkha if it doesn't exist then hopefully the build step that populates it will also create it if it doesn't exist
13:31 muurkha much like a tarfile
13:31 erle btw, apenwarr redo has a feature i like very much, but haven't found important enough to implement: log linearization for parallel builds
13:32 erle basically, tailing all your parallel builds at once
13:32 muurkha linearization?
13:32 erle you build 4 things at once
13:32 erle my implementation will just output everything to stdout
13:32 erle apenwarr redo collects the logs and displays them under each other
13:32 erle belonging to the appropriate target i think
13:33 muurkha oh, so that the messages for a given file occur together?
13:33 muurkha that does sound nice
13:33 erle it is quite useful
13:33 erle if you do parallel builds
13:33 erle which i usually don't
13:34 erle btw, i am very interested in your opinion of my implementation for parallel builds
13:34 erle it is very weird i know
13:34 erle but i could not come up with anything better
13:34 muurkha I will look!
13:34 erle if i could improve on the busy wait lock in it that could speed it up a bit
13:34 erle and i know you are a shell script wizard too
13:34 muurkha but I should probably sleep now
13:34 erle i vaguely remember a story where you wrote an IRC client while being bored during a debian install
13:34 erle that was you, right?
13:34 muurkha yeah
13:34 muurkha I don't know, there's a lot I don't know about shell scripts
13:34 erle hehe
13:35 erle yeah, but you know about algorithms
13:35 erle and you make clever hacks
13:35 muurkha but yeah, the only language I had available was bash
13:35 erle exactly the person i'd like to ask
13:35 muurkha so that's what I wrote the IRC client in
13:35 erle i despise bash. it's too complicated.
13:35 erle POSIX sh is okay, because limited. personally, i use rc shell everywhere except for stuff that needs to be portable.
13:35 erle like as my day-to-day shell
13:36 erle you will find shell cleverness in apenwarr do as well
13:36 erle apenwarr takes much more care to use workarounds for ancient shells than i do
13:37 erle so when i have a project built with redo, i usually just include apenwarr do
13:37 erle it's not like a normal user would build stuff more than once, right? ;)
13:37 erle good night muurkha
13:37 kamdard joined #minetest
13:38 muurkha goodnight!
13:49 Wuzzy joined #minetest
14:03 Yad joined #minetest
14:09 erle joined #minetest
14:10 erle muurkha i looked it up to refresh my memory and moldbug seemed profoundly anti-christian (in particular, anti-protestant); he just wrote about christianity (which he identifies with universalist ideals) enough that it stayed in my mind.
14:20 MinetestBot [git] Wuzzy2 -> minetest/minetest_game: Move Japanese key translations to keys mod b64868e https://github.com/minetest/minetest_game/commit/b64868ef929cc13f3169bd409507278697da112a (2022-07-22T14:19:31Z)
14:20 MinetestBot [git] Wuzzy2 -> minetest/minetest_game: Update translation templates 350c523 https://github.com/minetest/minetest_game/commit/350c52319ea47e0c00ea0cf44fc862cac9b4d41d (2022-07-22T14:19:31Z)
14:21 MinetestBot [git] Wuzzy2 -> minetest/minetest_game: Update German translation e229236 https://github.com/minetest/minetest_game/commit/e229236bc2b2bfd373a5e5eb3686334612b9b17b (2022-07-22T14:19:31Z)
14:34 fling joined #minetest
14:54 fling joined #minetest
15:19 fling joined #minetest
15:25 Yad lua_api.txt says `minetest.register_entity` takes a list called `initial_properties` but I see in games such as Exile that's not required?
15:25 Yad I mean the property declarations there, are not enclosed in an `initial_properties` list and just go directly in the entity definition list
15:26 sfan5 that's deprecated, don't do it
15:30 Yad Neglecting to enclose in `initial_properties` is deprecated?
15:31 Yad sfan5: I'll be sure to enclose then. :) My main question is what values can the `visual` property have in an entity definition?
15:32 Yad Is it the same as in object properties? `visual = "cube" / "sprite" / "upright_sprite" / "mesh" / "wielditem" / "item"`
15:32 sfan5 initial_properties *are* the object properties
15:33 Yad Nice.
15:33 sfan5 it's no different from calling self.object:set_properties({ insert contents here }) in the on_activate callback
15:33 Yad Spiffy. :D
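In Lua, the two spellings sfan5 describes look roughly like this; the mod, entity, and texture names are made up:

    minetest.register_entity("mymod:marker", {
        initial_properties = {
            visual = "cube",
            textures = {"mymod_marker.png", "mymod_marker.png", "mymod_marker.png",
                        "mymod_marker.png", "mymod_marker.png", "mymod_marker.png"},
            visual_size = {x = 0.5, y = 0.5},
            physical = false,
        },
        -- equivalent effect: pass the same table to set_properties() in on_activate
        -- on_activate = function(self)
        --     self.object:set_properties({visual = "cube", --[[ ... ]]})
        -- end,
    })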
15:35 Yad So there's no option for the `drawtype = "nodebox"` concept with entities? I have to simply make the nodebox-shaped shape in e.g. Blender as a mesh?
15:35 ronoaldo joined #minetest
15:38 sfan5 indeed
15:38 Yad sfan5: Thanks. ^^b
15:41 MTDiscord <MisterE> Wait... with ents you can use nodeboxes can't you? Certainly for the selectionbox, I thought also for the 'mesh'. I guess it might be that you can have it mimic a node, and that node can have a nodebox
15:47 erle Yad sfan5 MisterE i am pretty sure that the arrow in mineclone is a fake node entity
15:48 sfan5 you can have an entity look like an item, which works if your node does not have an inventory image
15:48 sfan5 but also several other properties don't apply or don't work in this state
16:00 Taoki joined #minetest
16:53 ___nick___ joined #minetest
16:58 erle vampirefrog i think if this is about mods, lets chat here
16:59 vampirefrog okay first of all I've installed a mod, how can I tell if the server picks it up?
17:00 erle did you install it server-side?
17:00 vampirefrog yes
17:00 sfan5 /mods
17:02 rubenwardy vampirefrog: for the connect thing to listen to events, do you own the server?
17:02 vampirefrog yes
17:02 rubenwardy if you do, then some custom mod using the http API is probably the best idea
17:02 rubenwardy or you could tail the logs
17:02 vampirefrog nah I don't mind writing some lua
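A minimal sketch of the custom-mod-plus-HTTP-API route rubenwardy mentions; the endpoint URL is a placeholder, and the mod must be whitelisted in secure.http_mods before request_http_api() returns anything:

    local http = minetest.request_http_api()
    assert(http, "add this mod to secure.http_mods in minetest.conf")

    minetest.register_on_chat_message(function(name, message)
        http.fetch({
            url = "http://127.0.0.1:8080/chat",   -- placeholder endpoint
            method = "POST",
            extra_headers = {"Content-Type: application/json"},
            data = minetest.write_json({name = name, message = message}),
        }, function(result)
            -- fire and forget; inspect result.succeeded / result.code if delivery matters
        end)
    end)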
17:03 rubenwardy there are chat bridge mods already: IRC, Matrix, and DiscordMT
17:03 vampirefrog yes but I want a chat server, not a bridge
17:04 vampirefrog how can I get minetest client to print to console?
17:04 erle it does that by default i think
17:04 erle maybe only in dev versions?
17:05 vampirefrog https://i.imgur.com/rcQzMF9.png
17:05 vampirefrog im not on a dev version
17:06 vampirefrog so I just want the output of /mods in the console so I can copy and search through it
17:07 natewrench joined #minetest
17:11 natewrench can i buy and sell minetest accounts?
17:12 sfan5 can't sell something that doesn't exist
17:12 vampirefrog I don't think that's going to stop him
17:16 vampirefrog okay so I've tried installing the mod in /var/games/minetest-server/.minetest/mods and /usr/share/games/minetest/mods and it won't show up in the output of the /mods command
17:16 vampirefrog which I assume lists mods alphabetically
17:16 natewrench have you checked the user folder button in game
17:16 natewrench under About at the main menu
17:17 natewrench Open User Data Directory
17:17 vampirefrog okay but what does this have to do with the server?
17:17 vampirefrog the server runs as a unix service, no GUI
17:18 natewrench well usually mods are loaded from the local directory
17:18 erle natewrench please tell me about your offers of accounts
17:18 rubenwardy vampirefrog: ~/.minetest/mods
17:18 natewrench erle: you can't buy and sell accounts because you don't need to make an account to play minetest
17:19 vampirefrog okay so like I said, this is on a Ubuntu server, and I've tried /var/games/minetest-server/.minetest/mods already
17:20 erle is that really the home folder of the user though
17:20 ROllerozxa natewrench: well yes, but it'd have to be a single account on a specific server; whether any Minetest account on a server would be valuable enough for someone to sell and someone else to buy, I don't know
17:21 natewrench https://wiki.minetest.net/Setting_up_a_server < it says check in your local minetest directory
17:23 wallabra_ joined #minetest
17:24 vampirefrog okay it looks like I had to enable the mod in world.mt as well
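For reference: dropping a mod into the server's mods folder is not enough on its own; it also has to be enabled per-world in world.mt, along these lines (the mod name "mymod" is a placeholder, and the entry must match the mod's directory name):

    # in <world path>/world.mt
    load_mod_mymod = true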
17:25 MinetestBot [git] SmallJoker -> minetest/minetest: Mainmenu: Escape server_favorite_delete path 8dcbca1 https://github.com/minetest/minetest/commit/8dcbca1068225b6896c3630b390c64d5946d2c73 (2022-07-22T17:04:19Z)
17:27 wallabra joined #minetest
17:34 Verticen joined #minetest
17:48 wallabra joined #minetest
17:51 Thermoriax joined #minetest
18:01 ___nick___ joined #minetest
18:03 ___nick___ joined #minetest
18:11 vampirefrog trying to make a mod that has an open websocket
18:12 Talkless joined #minetest
18:13 vampirefrog is there one that already does this?
18:14 MinetestBot [git] SmallJoker -> minetest/minetest: Util: Use quotation marks for safe path handling 2351c95 https://github.com/minetest/minetest/commit/2351c9561265d4136f78ce3dd73c0c77acfed711 (2022-07-22T18:13:10Z)
18:15 Krock vampirefrog: irc mod
18:15 vampirefrog interesting
18:15 vampirefrog and it functions as an HTTP server with a websocket endpoint?
18:15 vampirefrog I was talking about a server websocket, not a client one
18:15 Krock https://github.com/minetest-mods/irc
18:16 Krock I just know that it uses something similar to that
18:20 vampirefrog it seems to use luasocket
18:20 vampirefrog which is not the same thing as a websocket
18:22 Krock how so? it's a socket and you need Lua bindings for it
18:22 Krock unless you have FFI to access the C API directly
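A heavily hedged sketch of the pattern the IRC mod uses to reach plain TCP: it pulls LuaSocket in through the insecure environment, which only works if the mod is listed in secure.trusted_mods (and LuaSocket is installed). This gives a raw socket, not a websocket; speaking the websocket protocol on top of it would still need handshake and framing code.

    -- Sketch only: raw TCP from a trusted mod via LuaSocket.
    local ie = minetest.request_insecure_environment()
    assert(ie, "add this mod to secure.trusted_mods in minetest.conf")

    local socket = ie.require("socket")   -- LuaSocket, if installed system-wide

    local conn = socket.tcp()
    conn:settimeout(0)                    -- non-blocking; poll it from a globalstep
    conn:connect("127.0.0.1", 12345)      -- placeholder host and port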
18:48 Sokomine since when is the formspec element "model" supported? it seems very useful to me; but servers may encounter very old clients who probably don't know it. does anyone know which formspec version is needed at minimum?
18:49 Krock Sokomine: 5.4.0 according to quick lua_api.txt browsing
18:50 Krock https://github.com/minetest/minetest/blob/5.4.0/doc/lua_api.txt  <-- replace 5.4.0 with 5.3.0 and notice how Ctrl+F yields no results
18:51 Sokomine yes. i did look and didn't find any particular version. some other elements are listed...so there's probably no formspec_version for it?
18:53 Krock formspec_version is only needed for parameter count changes
18:53 Krock or other non-backwards-compatible stuff
18:54 Sokomine oh! interesting. thanks. what will other clients do? probably nothing?
18:54 Krock you could still use the protocol version to check for compatibility
18:54 Krock yes. they'll ignore it
18:55 Sokomine hmm. right now there's a distinction for formspec version 1, 2 and 3...those handle scrolling a bit differently
19:06 Sokomine very neat feature...
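A hedged sketch of gating model[] on what the client reports, per the suggestion above; the mesh and texture names are placeholders, the model[] parameter list is abbreviated (trailing parameters omitted), and the version threshold is an assumption to verify against lua_api.txt. As noted above, older clients simply ignore the unknown element, so the fallback is only needed if a visible substitute is wanted.

    -- Sketch: send the 3D model[] element only to clients that can render it,
    -- with a flat image[] fallback for everyone else.
    local MIN_FORMSPEC_VERSION = 4   -- assumed threshold; check against target clients

    local function build_preview_formspec(player_name)
        local info = minetest.get_player_information(player_name) or {}
        local fs = "size[6,5]"
        if (info.formspec_version or 1) >= MIN_FORMSPEC_VERSION then
            fs = fs .. "model[0.5,0.5;5,4;preview;mymod_thing.obj;mymod_thing.png]"
        else
            fs = fs .. "image[0.5,0.5;5,4;mymod_thing_flat.png]"
        end
        return fs
    end

    -- usage: minetest.show_formspec(name, "mymod:preview", build_preview_formspec(name))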
19:17 ryzenda joined #minetest
19:18 ryzenda https://pcgamer.com/no-nfts-in-minecraft-mojang-says/ Given that https://nft.gamestop.com/ went live a little over a week ago, will there be any consideration of NFT accessibility within Minetest?
19:18 calcul0n joined #minetest
19:20 Noisytoot ryzenda: What kind of consideration could there be?
19:22 Noisytoot Minetest's license is a free software license (LGPLv2.1+) and so allows using it for any purpose.
19:24 MTDiscord <luatic> There are no "executive" decisions to be made here, this is free software. I doubt that the MT community would fall for NFT crap though.
19:25 ryzenda Noisytoot, I'm not sure exactly, since I personally have practically no involvement in anything NFT beyond being excited about many of the implementable possibilities, but thinking off the top of my head, ... given that "an NFT is essentially a digital certificate of ownership," perhaps assets owned by players could take the form of NFTs, and potentially be transferable for use in any Minetest server environment
19:25 ryzenda or, not transferable, but accessible from any Minetest server
19:28 Noisytoot ryzenda: NFTs are not necessary for that (or for anything, they're just useless wastes of money and energy). You could just have a way for users to upload those assets to a server (although I'm not sure exactly what assets you're talking about).
19:28 MinetestBot [git] Wuzzy2 -> minetest/minetest_game: Update Lojban translation 697b028 https://github.com/minetest/minetest_game/commit/697b028e430a4c92f06960e4a62abe791cc82629 (2022-07-22T19:28:31Z)
19:29 Sokomine perhaps MESE is an nft in the mt world. who knows? the operation of mese is not entirely understood
19:31 ROllerozxa when will c55 sell the minetest NMPR as an NFT
19:31 ryzenda 5 months ago I glanced at https://old.reddit.com/r/Superstonk/comments/sgp0xo/the_metaverse_is_already_here_its_minecraft_a/ and I'm currently watching the video https://youtu.be/_6FDPuvxaKA which also describes similar ideas I was thinking of just now
19:37 definitelya To the Mooooooon!
19:37 definitelya But seriously, what even?
19:41 Noisytoot ryzenda: That just seems to be a dumb, wasteful (of energy), and unnecessarily complicated way of buying in-game items that the server operator could just download gratis.
19:44 vampirefrog joined #minetest
19:53 proller joined #minetest
19:58 natewrench joined #minetest
20:13 natewrench yea, NFTs are like receipts you get when you buy a physical video game at GameStop; they help prove you didn't steal the game, that the transfer of ownership happened from GameStop to you
20:13 natewrench without a receipt, cops think you stole it
20:13 natewrench except it's the opposite when it comes to digital files
20:15 Krock I prefer NTFS over NFTs
20:15 natewrench yea, it's really dumb; maybe useful not for art but for accounts, or licenses like DRM
20:16 natewrench allow you to sell and transfer accounts
20:16 vampirefrog joined #minetest
20:17 MTDiscord <MarkTheSmeagol> except that while they can prove the path from A to B, they cannot prove that point A was the actual, authentic original without using some form of external resource. Keep in mind that NFTs want to keep everything on the blockchain, with no external resources at all.
20:17 natewrench that's impossible
20:17 natewrench how do you stuff an image directly on chain
20:18 natewrench hmm i could see using p2p torrents where everyone has a copy but only one is unique
20:18 sfan5 there is nothing impossible about putting large amounts of data on a blockchain
20:18 sfan5 it just makes the blockchain get too large to be practically usable
20:18 MTDiscord <MarkTheSmeagol> And if you are using an external server to prove authenticity, then there is no reason not to do everything on said server, like pretty much all DRM already does.
20:19 Krock git is a blockchain so eh
20:19 natewrench hmm, what if the DRM is on the chain? you download said chain to unlock your game locally through your private key
20:19 natewrench oh no, always-on NFT DRM, worse than always-online
20:19 natewrench forget it
20:20 definitelya_ joined #minetest
20:21 Krock https://www.youtube.com/watch?v=fC7oUOUEEi4
20:23 muurkha you can put a Merkle tree root for large amounts of data on a blockchain
20:24 muurkha which is pretty much how git works, just without the blockchain
20:24 muurkha since there's no Satoshi consensus for the latest commit
20:24 natewrench i thought git needed a central server?
20:29 muurkha no
20:29 muurkha git is purely peer-to-peer
20:57 natewrench oh i was thinking about central repo
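To illustrate the Merkle-root point above, a toy sketch using minetest.sha1 (a real system would use a stronger hash and a canonical leaf encoding; this only shows the shape of the idea):

    -- Toy Merkle root: hash each leaf, then repeatedly hash adjacent pairs
    -- until a single hash remains. Committing only that root still lets anyone
    -- holding the data prove that a particular leaf belongs to the set.
    local function merkle_root(leaves)
        local level = {}
        for i, data in ipairs(leaves) do
            level[i] = minetest.sha1(data)
        end
        while #level > 1 do
            local next_level = {}
            for i = 1, #level, 2 do
                local left = level[i]
                local right = level[i + 1] or left  -- duplicate the last hash on odd counts
                next_level[#next_level + 1] = minetest.sha1(left .. right)
            end
            level = next_level
        end
        return level[1]
    end

    -- e.g. merkle_root({"commit A", "commit B", "commit C"})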
21:00 specing_ joined #minetest
21:06 natewrench Hi, I'm using Linux Mint 20.3 and the copy of Minetest in the repos is 5.1.1. I downloaded and compiled 5.5.1, but how do I replace the installed Minetest package with the one I compiled?
21:10 rubenwardy Uninstall the Linux Mint package first. Make sure you've compiled with -DRUN_IN_PLACE=0, and then run sudo make install
21:11 rubenwardy this will install system wide
21:11 rubenwardy it's important to uninstall first otherwise there may be some conflicts
21:17 natewrench oh no, that's why it didn't work, I used -DRUN_IN_PLACE=1
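A rough outline of the system-wide install described above (the distro package name and use of apt are assumptions for Mint; build dependencies are assumed to already be installed):

    # from inside the Minetest source tree
    sudo apt remove minetest          # remove the distro package first to avoid conflicts
    cmake . -DRUN_IN_PLACE=0
    make -j$(nproc)
    sudo make install                 # installs system-wide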
21:25 rubenwardy Interesting video about open source game development https://youtu.be/z5sjwqUten0
21:30 sfan5 !title
21:30 MinetestBot No title found.
21:30 sfan5 ugh
21:31 rubenwardy "Space Station 13": Behind one of the largest open source games
21:31 rubenwardy They're insane
21:35 rubenwardy Hahaha, they give their contributors scores. You gain points by making bug fixes, and spend them by making features
21:38 natewrench no, can't be used to play games on Mint
21:47 toluene joined #minetest
21:51 muurkha heh
21:51 muurkha sfan5: I think saxo has working YouTube title code
22:33 kaeza joined #minetest
22:34 panwolfram joined #minetest
22:50 fling joined #minetest
22:55 fling joined #minetest
23:42 cranezhou joined #minetest
