Time | Nick | Message
00:16 | | peterz joined #minetest
00:19 | | clavi joined #minetest
00:19 | | clavi joined #minetest
00:20 | | peterz joined #minetest
01:01 | | smk joined #minetest
01:11 | | peterz joined #minetest
01:16 | | peterz joined #minetest
01:28 | | sparky4 joined #minetest
01:34 | | v-rob joined #minetest
01:35 | | cation joined #minetest
03:15 | | v-rob joined #minetest
03:19 | | fling_ joined #minetest
03:24 | | YuGiOhJCJ joined #minetest
03:35 | | appguru joined #minetest
03:49 | | Thelie joined #minetest
04:25 | | v-rob joined #minetest
04:47 | | Sobinec joined #minetest
05:00 | | MTDiscord joined #minetest
05:10 | | amfl2 joined #minetest
05:18 | | Waffelo joined #minetest
05:22 | | sparky4 joined #minetest
05:37 | MTDiscord | <jordan4ibanez> I have not had much time to do anything but I have found a nice solid base for the loop yielding in cargo that has reached stability
05:37 | MTDiscord | <jordan4ibanez> After I finish this gamejam insanity I will shovel my entire brain into it
05:56 | | TheSilentLink joined #minetest
06:04 | | TheSilentLink joined #minetest
07:07 | | YuGiOhJCJ joined #minetest
07:09 | | YuGiOhJCJ joined #minetest
07:13 | | mrkubax10 joined #minetest
07:22 | | calcul0n_ joined #minetest
07:52 | | Sobinec joined #minetest
08:00 | | diceLibrarian2 joined #minetest
09:22 | | grorp joined #minetest
09:24 | MinetestBot | [git] mazes-80 -> minetest/minetest: Warning: inform about entity name when bug detected about attachement… e7be135 https://github.com/minetest/minetest/commit/e7be135b78444d3241d3bc7938d1faceb6084319 (2023-12-15T09:22:58Z)
09:24 | MinetestBot | [git] sfan5 -> minetest/minetest: Clean up porting.h a bit d4123a3 https://github.com/minetest/minetest/commit/d4123a387c91b8659cbe41ce0c64d734cac74095 (2023-12-15T09:23:19Z)
09:24 | MinetestBot | [git] sfan5 -> minetest/minetest: Improve clock_gettime usage bd06466 https://github.com/minetest/minetest/commit/bd06466d3af4f6a645ff264720eda5137090efbe (2023-12-15T09:23:19Z)
09:24 | MinetestBot | [git] numberZero -> minetest/minetest: Reduce test framework macrosity 64b5918 https://github.com/minetest/minetest/commit/64b59184d1d620040844df73f350abdddc5873ec (2023-12-15T09:23:32Z)
09:24 | MinetestBot | [git] (6 newer commits not shown)
09:42 | | imi joined #minetest
09:45 | MinetestBot | [git] grorp -> minetest/minetest: Remove usage of removed "PP" macro 3c60d35 https://github.com/minetest/minetest/commit/3c60d359edf190116401eab79ba51f796631aaf1 (2023-12-15T09:28:07Z)
10:08 | MTDiscord | <jordan4ibanez> c55 you were talking about "is minetest lua?". Well I've been on a rampage with productivity in typescript. I've been studying our cargo ecosystem. Testing and testing. 4 refactors before even taking the first step. mlua, the one I originally showed, was my final option before I stepped into writing any scripting code. Well mlua has Luau, Roblox's JIT-compiled Lua implementation. TypeScript-style typed Lua. Check this out:
    local s: number = 2
    function add(a: number, b: number): number
        return a + b
    end
    print(add(1, 1), typeof(s)) -- 2 number
10:08 | MTDiscord | <jordan4ibanez> that's really, really cool. A whole plethora of runtime bugs stopped in an instant
10:12 | celeron55 | typed lua doesn't sound like a bad long-term idea as long as a jitted implementation of it has a solid enough maintenance team behind it
10:13 | MTDiscord | <jordan4ibanez> The entirety of Roblox's dev team seems like a pretty big team
10:15 | celeron55 | mlua does seem like a solid choice
10:45 | | s20 joined #minetest
11:37 | | SpaceManiac joined #minetest
12:49 | | calcul0n joined #minetest
13:58 | | appguru joined #minetest
14:59 | | jaca122 joined #minetest
15:02 | | Sobinec joined #minetest
15:23 | | Talkless joined #minetest
15:36 | | s20 joined #minetest
15:43 | MTDiscord | <luatic> celeron55: could just go the TS way: have typed lua transparently compile to lua by removing the type info, then throw luajit at it. no need to go to the great lengths of writing a jit unless you really want to see performance gains from static typing.
15:45 | MTDiscord | <jordan4ibanez> We're not writing a jit, it comes with mlua
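[Editor's note] luatic's "compile typed Lua to plain Lua by removing the type info" approach can be illustrated with a toy type eraser. This is a hypothetical sketch, not how Luau or any real transpiler works — real tools use a full parser rather than a regex:

```python
import re

def strip_type_annotations(src: str) -> str:
    """Toy type eraser: drop Luau-style `: <type>` annotations so the
    result is plain Lua. Caveat: this naive regex would also mangle
    method-call syntax like `obj:method()`; a real tool must parse."""
    return re.sub(r":\s*[A-Za-z_][A-Za-z0-9_]*", "", src)

typed = "local s: number = 2\nfunction add(a: number, b: number): number return a + b end"
print(strip_type_annotations(typed))
# local s = 2
# function add(a, b) return a + b end
```

The point is that type checking happens before erasure, so the emitted plain Lua runs unmodified on a stock LuaJIT.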
15:51 | | definitelya joined #minetest
16:07 | | est31 joined #minetest
16:18 | | jaca122 joined #minetest
16:18 | MTDiscord | <warr1024> Every now and then, I'm getting this error message in my logs: ERROR[CurlFetch]: HTTPFetch for servers.minetest.net/announce failed: Couldn't resolve host name
16:18 | | s20 joined #minetest
16:19 | MTDiscord | <warr1024> It's working like 99% of the time, but these are getting kind of annoying. I don't want to filter the warning out of my monitoring system, though, because it COULD also indicate an ACTUAL problem if it starts happening consistently.
16:19 | MTDiscord | <warr1024> I wonder ... is there some way I can improve the reliability of DNS lookups? I'm running MT inside Docker, and using systemd-resolved for DNS caching at the host level.
16:20 | celeron55 | that seems like a problem generally caused by the client's network connection
16:21 | MTDiscord | <warr1024> The network itself seems to be mostly reliable, so either this is some kind of DNS-specific shenanigan (the initial request is a cache miss due to expired TTL, sends the request, times out, and then doesn't have enough time to retry) or it's some Docker-related bullshit.
16:22 | MTDiscord | <warr1024> One mitigation I had in mind is to just constantly nslookup through the cache, so that the cache entry never gets much of a chance to go stale, maximizing the chance that the request is a cache hit when MT does it, but that seems rather sloppy, and there's a chance the cache doesn't work the way I think it does and I leak a ton of requests onto poor unsuspecting upstream servers.
16:24 | celeron55 | servers.minetest.net is a CNAME record at name.com pointing to kitsunemimi.pw, and the DNS of that is hosted by namecheap (registrar-servers.com). these are not small providers, and on top of that there's probably multiple layers of caching before the records end up on a server that you are actually querying as a client
16:24 | MTDiscord | <warr1024> A more extreme option would be to bypass the TTL entirely and just periodically check upstream servers (on my own schedule) and then inject the result directly into /etc/hosts, which systemd-resolved supposedly parses, so that MT is always working on an eager-cached local entry and never depending on the live network...
16:24 | | flowersandsharks joined #minetest
16:24 | MTDiscord | <warr1024> I'm not worried about bringing them down or anything, lol, I just don't want to end up being seen as abusing the service.
16:25 | celeron55 | well if you want a bomb-proof solution you could just add each dns name to /etc/hosts
16:26 | celeron55 | you'll surely find out when the server is moved. until then it will work amazingly
16:26 | MTDiscord | <warr1024> Right, that's what I'm thinking ... and just having a polling script to account for if sfan5 ever has to move hosts or something, so I don't have to manually fix it...
16:26 | celeron55 | it'll likely move less often than once per many years
16:28 | celeron55 | i could be wrong. not sure what the hosting provider or machine is actually like though
16:28 | MTDiscord | <warr1024> Well, for now, I guess I can just test whether the manual hosts entry fixed the problem (verifying whether Docker is indeed using my system DNS config), and if it does, then I can contemplate the auto-checker/updater. I should know in a few days whether the warning messages have gone away (I usually get like a couple per day).
16:28 | celeron55 | if it's some dyndns style thing, then of course it'll change wildly and often
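[Editor's note] The polling /etc/hosts pin warr1024 describes could be sketched like this. Everything here is illustrative (the marker comment, the use of `socket.gethostbyname`, the file handling); a real deployment would want locking, error handling, and logging:

```python
import socket

MARKER = "# managed-by-announce-pinner"  # tag so we only ever touch our own line

def update_hosts(hosts_text: str, hostname: str, ip: str) -> str:
    """Replace (or append) a pinned entry for `hostname` in /etc/hosts-style
    text. Lines carrying MARKER are ours; all other lines are preserved."""
    kept = [ln for ln in hosts_text.splitlines() if MARKER not in ln]
    kept.append(f"{ip}\t{hostname}\t{MARKER}")
    return "\n".join(kept) + "\n"

def poll_once(hostname: str, hosts_path: str = "/etc/hosts") -> None:
    # Resolve on our own schedule, then eager-cache the answer locally
    # (systemd-resolved reads /etc/hosts, so lookups never hit the network).
    ip = socket.gethostbyname(hostname)
    with open(hosts_path) as f:
        text = f.read()
    with open(hosts_path, "w") as f:
        f.write(update_hosts(text, hostname, ip))
```

Run `poll_once("servers.minetest.net")` from cron; because stale pinned lines are removed before the new one is appended, repeated runs never accumulate entries.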
16:29 | flowersandsharks | how long does it take minetest to compile?
16:31 | MTDiscord | <warr1024> kitsunemimi.pw TTL is an hour, not the minute or so that most dynamic providers use, so it looks like it's not designed to change often.
16:33 | celeron55 | *.minetest.net on the other hand uses 5-minute TTLs, mostly just because it's name.com's default. i guess i should lengthen them for at least some subdomains
16:34 | celeron55 | it should still work fine though. making it longer is essentially just a workaround
16:37 | sfan5 | flowersandsharks: depends on your hardware, could be anything from 3 minutes to one hour
16:50 | MTDiscord | <warr1024> Interesting ... I had always sort of thought of TTLs as just being an indication of how often a thing is expected to change, but not really an authoritative limit on serving stale data; I mean, if you can't get fresh data for whatever reason, you can always fall back on the cached data as a survival strategy. I guess that's maybe a privileged perspective considering that TLS will catch any cases where that assumption breaks, but DNS was originally designed outside of a context where you'd expect TLS on basically everything 🤔
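[Editor's note] The "fall back on cached data as a survival strategy" idea warr1024 describes is essentially serve-stale (standardized for DNS resolvers in RFC 8767). A minimal sketch of such a cache wrapper, with the resolver function and TTL injected so it stays testable:

```python
import time

class StaleOkResolver:
    """Cache wrapper: honor the TTL normally, but if a refresh fails,
    serve the stale cached answer instead of erroring out."""

    def __init__(self, resolve, ttl=3600, now=time.monotonic):
        self._resolve = resolve      # e.g. socket.gethostbyname
        self._ttl = ttl
        self._now = now
        self._cache = {}             # name -> (ip, fetched_at)

    def lookup(self, name: str) -> str:
        entry = self._cache.get(name)
        if entry and self._now() - entry[1] < self._ttl:
            return entry[0]          # fresh cache hit
        try:
            ip = self._resolve(name)
        except OSError:
            if entry:                # refresh failed: serve stale
                return entry[0]
            raise                    # nothing cached to fall back on
        self._cache[name] = (ip, self._now())
        return ip
```

This gets the TTL semantics the conversation is circling: the TTL bounds freshness for correctness, while staleness is tolerated only as a last resort when resolution fails outright.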
16:53 | MTDiscord | <warr1024> I guess since LE issues DV certs at basically the drop of a hat, it's possible that I could still run into trouble by trusting even TLS if I don't pin the pubkey, though ... and pinning the pubkey MIGHT be even more problematic than pinning the IP, since LE doesn't give a ton of incentive not to just rotate pubkeys whenever, like if you move hosts and don't bother to migrate the old keys and such.
16:54 | MTDiscord | <warr1024> Granted, it's just an announce, so about the worst anyone could do is blackhole my announcement; it was destined to be public anyway.
17:00 | | Thelie joined #minetest
17:00 | | fluxionary joined #minetest
17:08 | | mrkubax10 joined #minetest
17:17 | sfan5 | I'm pretty sure some LE implementations default to generating a new key for every cert
17:23 | MTDiscord | <warr1024> Heh, that's fairly cursed. Considering the low level of trust LE establishes, being able to manually pin a pubkey is really the only good option for increasing security above baseline... :-|
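[Editor's note] The pubkey pin being discussed is conventionally an HPKP-style `pin-sha256`: the base64 of the SHA-256 digest of the certificate's DER-encoded SubjectPublicKeyInfo. A minimal sketch of computing one; the sample bytes below are placeholders, not a real key:

```python
import base64
import hashlib

def spki_pin_sha256(spki_der: bytes) -> str:
    """Compute an HPKP-style pin: base64(SHA-256(SPKI DER)).
    In practice the SPKI is extracted from the server's certificate,
    e.g. `openssl x509 -pubkey` piped into `openssl pkey -outform DER`."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

# Placeholder bytes standing in for a real DER-encoded public key:
print(spki_pin_sha256(b"\x30\x82\x01\x22placeholder-not-a-real-key"))
```

Pinning this value survives certificate renewal only if the key pair is reused, which is exactly why per-cert key rotation (as sfan5 notes some LE clients do by default) breaks the approach.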
18:33 | | Sobinec joined #minetest
18:41 | | sparky4 joined #minetest
19:43 | | TheSilentLink joined #minetest
19:49 | | qqq joined #minetest
20:15 | | jaca122 joined #minetest
20:22 | | khimaros_ joined #minetest
20:33 | | khimaros_ joined #minetest
20:48 | | khimaros_ joined #minetest
21:02 | | Talkless joined #minetest
21:15 | | Niklp9 joined #minetest
21:19 | | Niklp9 joined #minetest
21:50 | | sparky4 joined #minetest
21:51 | | CRISPR joined #minetest
22:24 | | khimaros__ joined #minetest
22:47 | | appguru joined #minetest
23:01 | MTDiscord | <warr1024> Warning message I often see when one particular client (who seems to have a high-latency connection) connects: ERROR[Server]: Got packet command: 23 for peer id 4113 but client isn't active yet. Dropping packet
23:01 | MTDiscord | <warr1024> Command is apparently always 23, though peer id obviously changes
23:02 | MTDiscord | <warr1024> Doesn't seem to affect gameplay from what I can tell. Is it diagnostically interesting, though? Anything I can/should try to do about it? Maybe it indicates some kind of underlying bug?
23:16 | | sparky4 joined #minetest
23:32 | | panwolfram joined #minetest