Time | Nick | Message
00:50 |
* mtvisitor |
reviewed the contributor statistics of the minetest (luanti) projects on github.com. |
00:50 |
mtvisitor |
https://github.com/minetest/minetest/graphs/contributors |
01:16 |
|
CRISPR joined #minetest |
01:56 |
|
SFENCE joined #minetest |
02:04 |
|
SwissalpS joined #minetest |
02:38 |
|
SFENCE joined #minetest |
02:56 |
|
SFENCE joined #minetest |
03:22 |
|
SwissalpS joined #minetest |
03:47 |
|
SFENCE_arch joined #minetest |
03:58 |
|
Verticen joined #minetest |
04:15 |
|
SFENCE joined #minetest |
04:20 |
MTDiscord |
<jordan4ibanez> mtvisitor when will you publish the report? |
04:25 |
|
yezgromafic joined #minetest |
04:55 |
|
YuGiOhJCJ joined #minetest |
04:58 |
|
SFENCE joined #minetest |
05:00 |
|
MTDiscord joined #minetest |
05:14 |
|
chilledfrogs joined #minetest |
05:57 |
|
SFENCE joined #minetest |
06:40 |
|
HuguesRoss_ joined #minetest |
06:44 |
|
Verticen joined #minetest |
07:00 |
|
SFENCE joined #minetest |
07:20 |
|
SFENCE joined #minetest |
07:52 |
|
SFENCE_arch joined #minetest |
07:54 |
|
SFENCE joined #minetest |
07:58 |
|
AccSwtch50 joined #minetest |
07:58 |
|
AccSwtch50 left #minetest |
08:02 |
|
cranezhou joined #minetest |
08:10 |
mtvisitor |
i would say: no report and no comment. |
08:10 |
mtvisitor |
swift110-mobile: hi. |
08:10 |
mtvisitor |
🍺 |
08:14 |
|
sinvet joined #minetest |
08:39 |
|
ireallyhateirc joined #minetest |
08:47 |
|
mdhughes_ joined #minetest |
09:37 |
|
tarsovbak joined #minetest |
09:47 |
|
Warr1024 joined #minetest |
09:56 |
|
SFENCE joined #minetest |
10:00 |
|
SFENCE joined #minetest |
10:02 |
|
MacroFaxSax joined #minetest |
10:06 |
|
tarsovbak joined #minetest |
10:09 |
|
mdhughes_ joined #minetest |
10:12 |
|
gregon joined #minetest |
10:20 |
|
SFENCE joined #minetest |
10:35 |
|
SFENCE joined #minetest |
10:39 |
MinetestBot |
[git] cx384 -> minetest/minetest: Fix register_ore ore_type error handling d4378a7 https://github.com/minetest/minetest/commit/d4378a74d3c593c9dd4dfbba30120049e8128102 (2024-11-15T10:37:17Z) |
10:39 |
MinetestBot |
[git] grorp -> minetest/minetest: Get rid of depth buffer workaround in the render pipeline code (#15407) a9fe831 https://github.com/minetest/minetest/commit/a9fe83126a3f5c7240359272495eaf3eecd9a272 (2024-11-15T10:38:56Z) |
10:39 |
MinetestBot |
[git] sfence -> minetest/minetest: Add some info to compiling README 58dd421 https://github.com/minetest/minetest/commit/58dd42166df90425521a09102dc06ef1e9b783d1 (2024-11-15T10:39:08Z) |
10:45 |
|
SFENCE joined #minetest |
11:00 |
MTDiscord |
<_devsh_> @grorp why havent you changed the setRenderTarget to properly make an explicit Framebuffer object? your life would be so much easier |
11:01 |
MTDiscord |
<_devsh_> something like an IFramebuffer with explicit creation parameters |
11:01 |
MTDiscord |
<_devsh_> does minetest shadowmapping do that insanely stupid thing every irrlicht shadowmapping demo/extension did? |
11:03 |
MTDiscord |
<_devsh_> which is rendering the linear depth to color attachment and using that instead of a proper sampler2DShadow ? |
11:04 |
MTDiscord |
<_devsh_> hmmm seems like another gl 3.x thing, cause depth access-as-a-texture is a DX10 thing |
11:07 |
|
SFENCE joined #minetest |
11:20 |
MinetestBot |
[git] SmallJoker -> minetest/minetest: Dynamic shadows: whitelist the 'opengl3' driver 87ac32e https://github.com/minetest/minetest/commit/87ac32edeafafea662b62dc0a96fda9b663737b4 (2024-11-15T11:18:48Z) |
11:20 |
MinetestBot |
[git] SmallJoker -> minetest/minetest: Non-SDL: Add opengl3 support 4838eb2 https://github.com/minetest/minetest/commit/4838eb2f7de8477ef5be7e064d072954ff78e36c (2024-11-15T11:18:48Z) |
11:20 |
MinetestBot |
[git] SmallJoker -> minetest/minetest: IrrlichtMt: Document Driver/Device compatibility 8f03b70 https://github.com/minetest/minetest/commit/8f03b705841e4fbdb11f6c09facdc0a9a6f5abda (2024-11-15T11:18:48Z) |
11:20 |
MinetestBot |
[git] kno10 -> minetest/minetest: Improve documentation of liquid_surface (#15012) 46f0baf https://github.com/minetest/minetest/commit/46f0baff090855d08e512c2913a5f1b9636f5019 (2024-11-15T11:19:41Z) |
11:20 |
sfan5 |
I don't see a sampler2DShadow in our code so I guess that's the case |
11:22 |
MinetestBot |
[git] sfan5 -> minetest/minetest: Remove BMP image support (#15434) 11837d4 https://github.com/minetest/minetest/commit/11837d4623bf0eab0b706ed2cc81bf0bcfb79da8 (2024-11-15T11:21:30Z) |
11:24 |
MTDiscord |
<_devsh_> @sfan5 you do realize that literally slows down the shadowmap drawing 4x? and it's the shitty GPUs that get hurt the most |
11:26 |
MTDiscord |
<_devsh_> when you do shadowmapping by rendering into a depth FBO with no color attachment and no fragment shader set, the GPU knows it doesn't need to run the pixel shader |
11:27 |
MTDiscord |
<_devsh_> depth buffer is literally just HTile and gets dumped to a texture only at the end when you unbind the FBO |
11:27 |
MTDiscord |
<_devsh_> there are special fast paths for this, because so many games do shadowmapping |
11:28 |
MTDiscord |
<_devsh_> also when you use samplerNDShadow[Array] and a comparison sampler object/setting, you get 4 depth comparisons for the price of 1 texture tap (bilinear interpolation hw gets repurposed) |
11:29 |
MTDiscord |
<_devsh_> so instead of getting a depth value, you give it a depth value you want tested against the texels, and you get 0, 0.25, 0.5, 0.75, 1.0 back |
11:30 |
MTDiscord |
<_devsh_> furthermore the depth can be non-linear for the comparison, so there's no linearization math to run |
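[editor's note] The comparison-sampler behaviour described above can be sketched in plain Python (a hypothetical helper, not Luanti or driver code): the hardware compares the reference depth against the 4 nearest texels and blends the 0/1 pass results with the bilinear weights, which is why a single tap at a texel center yields 0, 0.25, 0.5, 0.75, or 1.0.

```python
# Software sketch of what a GLSL sampler2DShadow lookup returns with a
# LEQUAL comparison mode (illustrative only; names are made up).

def shadow_compare_gather(texels, ref_depth):
    """texels: 2x2 grid of stored depths; returns the filtered pass fraction."""
    results = [1.0 if ref_depth <= t else 0.0 for row in texels for t in row]
    # Equal bilinear weights at a texel center: average the four 0/1 results.
    return sum(results) / len(results)

lit = shadow_compare_gather([[0.7, 0.9], [0.8, 0.2]], ref_depth=0.5)
print(lit)  # 3 of 4 texels pass the lequal test -> 0.75
```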
11:31 |
|
jluc joined #minetest |
11:31 |
sfan5 |
I do not realize but also I did not write the code in the first place. the guy who contributed it is not even here anymore. |
11:32 |
sfan5 |
it again boils down to there being no dedicated "graphics person" in MT |
11:32 |
sfan5 |
or anyone with solid graphics programming experience, to be more exact |
11:33 |
MTDiscord |
<_devsh_> also because you only have a D16 or D32F attachment, the modern mobile GPUs benefit because the bpp are 2 or 4 and not 8 if you abuse color attachment for depth, this means the on-chip tile can be 2x larger, so the shadowmap draws in less passes & there are less buckets for the scene geometry |
11:34 |
MTDiscord |
<_devsh_> well if someone wants to learn send them to -> https://discord.gg/B63yyBz8 |
11:35 |
MTDiscord |
<greenxenith> (for those on the IRC side, that is a Graphics Programming Discord server with 16k members) |
11:35 |
MTDiscord |
<_devsh_> where making a minecraft blocky thing is literally a rite of passage |
11:36 |
MTDiscord |
<_devsh_> everyone and their mother is making one in GL |
11:36 |
|
SFENCE joined #minetest |
11:39 |
sfan5 |
since you mentioned it, I'm wondering: does D24 have performance benefits over D32? |
11:41 |
MTDiscord |
<jordan4ibanez> I gave into the temptation |
11:44 |
|
Norkle joined #minetest |
11:49 |
MTDiscord |
<_devsh_> It shouldn't, d24 exists only because d32 only exists as a float and floating point depth is a dx10 thing iirc |
11:50 |
MTDiscord |
<_devsh_> Also d24 left 8 bits free for stencil |
11:50 |
MTDiscord |
<_devsh_> So the aim was for d24+stencil to be faster than d32 + separate stencil while having same cost as d16+stencil |
11:51 |
|
Artea joined #minetest |
11:51 |
MTDiscord |
<_devsh_> Although d16+stencil maaay be marginally faster with separate depth and stencil attachments so that they live separately |
11:52 |
MTDiscord |
<_devsh_> Internally a lot of GPUs do d24 with d32f, because there exist vulkan GPU reports that don't list standalone d24 format support |
11:52 |
MTDiscord |
<_devsh_> Also mantissa on a float32 just happens to be 24bit if you count the implicit 1 |
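[editor's note] The 24-bit-significand point above is easy to verify by round-tripping values through an IEEE-754 binary32 (a small illustrative sketch, not from the chat): every integer up to 2**24 survives exactly, which is why a D24 depth value fits losslessly in a D32F attachment.

```python
import struct

def to_f32(x):
    # Round-trip through an IEEE-754 binary32 to see what precision survives.
    return struct.unpack("<f", struct.pack("<f", x))[0]

# float32 has a 24-bit significand (23 stored bits + the implicit leading 1).
print(to_f32(2**24) == 2**24)          # 16777216 is exactly representable
print(to_f32(2**24 + 1) == 2**24 + 1)  # the first integer that rounds away
```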
11:53 |
sfan5 |
i see, ty |
11:54 |
MTDiscord |
<_devsh_> Modern GPUs give no shits about what format you used, they can promote everything to 32f depth and truncate when writing out the texture |
11:54 |
MTDiscord |
<_devsh_> The GPU doesn't actually update the texels in any render target attachment as it draws, stuff remains on chip and gets flushed out when you unbind |
11:55 |
MTDiscord |
<_devsh_> That's why you can't sample from an image that's attached to the currently bound fbo |
11:55 |
MTDiscord |
<_devsh_> That and race conditions obvs |
12:21 |
|
SFENCE joined #minetest |
12:55 |
MTDiscord |
<jordan4ibanez> I don't know what to talk about in the graphics discord lol |
13:19 |
MTDiscord |
<_devsh_> Vulkan and Fortran, your thing |
13:19 |
MTDiscord |
<_devsh_> I come bearing gifts https://developer.arm.com/documentation/SDEN-3735689/latest |
13:20 |
erle |
> its the shitty GPUs that get hurt the most |
13:20 |
erle |
bold of you to assume that these kind of arguments work here |
13:20 |
|
Thelie joined #minetest |
13:20 |
MTDiscord |
<_devsh_> whatever you're hitting at I'm not getting it |
13:20 |
erle |
_devsh_ if you want a quick win proving that reordering gives performance, look at inventory drawing |
13:21 |
erle |
_devsh_ the “correct” answer to “you made it work worse on shitty hardware” is evidently “your GPU is old enough to drink” ;) |
13:21 |
MTDiscord |
<_devsh_> me and sfan5 were talking about shadowmap drawing |
13:22 |
MTDiscord |
<_devsh_> shadowmap gets drawn with 2 materials tops if you really need to care about that |
13:22 |
MTDiscord |
<_devsh_> anyway thats kinda orthogonal to the draw order / sorting discussion |
13:22 |
erle |
x2048 is no longer here sadly |
13:22 |
|
SFENCE joined #minetest |
13:22 |
erle |
i think? |
13:22 |
erle |
x2048 made the shadow map code |
13:22 |
erle |
IIRC |
13:25 |
erle |
_devsh_ i once suggested faster shadows for entities by simply calling addShadowVolumeSceneNode() and it was summarily dismissed with statements like “you need a newer GPU to get fancy effects like shadows” |
13:25 |
erle |
even though it worked fine on a GMA 950 (?) |
13:26 |
erle |
so *faster* shadow rendering is not exactly a winning argument unless i am very mistaken |
13:27 |
erle |
_devsh_ see result here: https://github.com/minetest/minetest/issues/13164#issuecomment-1780237779 |
13:28 |
erle |
setOptimization(scene::ESV_NONE) needs to be set because the optimization relies on 3d volumes being closed (if you filled it with water, it would not leak) and the player model is not, IIRC |
13:30 |
erle |
_devsh_ https://mister-muffin.de/p/OS2p.png ^_^ |
13:33 |
MTDiscord |
<rollerozxa> I wonder what they think of legacy opengl gore in the graphics programming server |
13:34 |
erle |
tired: opengl core |
13:34 |
erle |
wired: opengl gore |
13:36 |
MTDiscord |
<rollerozxa> haha |
13:47 |
erle |
_devsh_ i think i can summarize what i wanted to say as “arguments that make things work better on gaming GPUs are much better received than stuff that does not matter on powerful hardware, but does matter on potato GPUs”. with inventory rendering it was that i reached the limit of 7000-something drawcalls when the inventory was open and my studs mod was active. old intel GPUs glitch out. but you could have the inventory performance issue with every GPU, it |
13:47 |
erle |
is just particularly bad the less capable the GPU is. |
13:49 |
erle |
and as i said before, i have come to the conclusion that this is normal when you talk to devs who have powerful hardware. if you find performance issues that are negligible on high-spec GPUs, they will dismiss them if only low-spec GPUs are affected, *even if* the performance issue happens regardless of what GPU you have |
13:49 |
erle |
(low-spec GPU just craps out earlier) |
13:50 |
erle |
look at openra if you want to see how it plays out |
13:51 |
erle |
(openra no longer runs on platforms it could run fine on a decade ago or so due to graphics pipeline changes – but due to tight coupling, you can not simply use the old one when everyone else uses the new one) |
13:51 |
erle |
_devsh_ have a link to the games/demos you made? i am very interested |
13:52 |
erle |
rollerozxa will rollertest still be a thing or not? |
13:56 |
yezgromafic |
hey erle have you thought about adopting riscv? im not suggesting that as a fix for what you are saying, im just curious what you think about it seeing as you care about hardware freedom |
13:56 |
erle |
yezgromafic i own a reform 1 (of which there are only 12 or 13 working models) and a reform 2 (for which i am currently waiting for replacement batteries) |
13:56 |
erle |
yezgromafic go over to https://mnt.re if you are interested |
13:58 |
erle |
yezgromafic the ultimate hardware freedom is obviously the MNT amiga graphics card (ZZ 9000). internally it uses an FPGA so you can just … download a new hardware revision hahahahahahaaha |
13:58 |
erle |
well i doubt the xilinx thing is open enough though hmmm |
14:00 |
yezgromafic |
this looks interesting |
14:03 |
yezgromafic |
the 1200e price tag does jump at you a little |
14:04 |
erle |
well if you care about money, buy refurbished thinkpads and use them forever |
14:05 |
yezgromafic |
yeah but you pay for it in other ways haha |
14:06 |
yezgromafic |
i dont think hardware freedom should be something reserved only for old devices so i totally support this mnt stuff |
14:08 |
yezgromafic |
so... can your reform run luanti |
14:08 |
erle |
yezgromafic it could in the past, but surely it won't at some point in the future because niche hardware |
14:08 |
sfan5 |
https://www.raptorcs.com/TALOSII/ by the way I heard this thing can also run on entirely free software |
14:09 |
erle |
blender already doesn't run on it (and many old devices that it used to run fine on) |
14:09 |
erle |
yezgromafic look at the trailer it shows some block game action ;) |
14:10 |
erle |
my friend li0n even made a MNT reform mod |
14:10 |
erle |
where in game you have to get resources |
14:10 |
erle |
and build the computer yourself |
14:10 |
erle |
all nodeboxes |
14:10 |
MTDiscord |
<_devsh_> they are routinely disgusted whenever some student from an eastern country comes to ask questions about glBegin and glEnd and GL < 3.x |
14:11 |
erle |
ah, snobs! |
14:11 |
MTDiscord |
<_devsh_> erle using stencil shadows is not tractable, extracting silhouettes is not fun |
14:11 |
|
SFENCE joined #minetest |
14:11 |
erle |
_devsh_ i have an aesthetic argument for stencil shadows and a performance one: “i can easily have multiple light sources on garbage hardware and have it look okay-ish” |
14:12 |
MTDiscord |
<_devsh_> they are not robust, I guess that's why everyone dismissed stencil shadow volumes |
14:12 |
erle |
_devsh_ not that it matters |
14:12 |
MinetestBot |
[git] alek13 -> minetest/minetest_game: Make walls connect to nodes in a new `wall_connected` group (#3159) 92daf3e https://github.com/minetest/minetest_game/commit/92daf3e6f4900a1245817c7e41e70bca33afaa40 (2024-11-15T14:10:14Z) |
14:12 |
erle |
i am pretty sure people dismissed them because they can't do colored shadows |
14:12 |
MTDiscord |
<_devsh_> if you want stencil-like shadows raytrace them |
14:12 |
MTDiscord |
<_devsh_> 🙃 |
14:12 |
erle |
and because soft stencil shadows have shadow bleeding (particularly bad on self-shadowing) |
14:13 |
erle |
am i wrong? |
14:13 |
MTDiscord |
<_devsh_> stencil shadows are a very particular hack, they make everything else downstream complicated |
14:14 |
MTDiscord |
<_devsh_> stencil shadows work by masking out drawcalls |
14:14 |
erle |
have an example of what becomes more complicated? |
14:14 |
MTDiscord |
<_devsh_> shading |
14:14 |
MTDiscord |
<_devsh_> if you want ambient_light_0+light_2 |
14:16 |
MTDiscord |
<_devsh_> you need to draw your mesh once just for ambient |
14:16 |
MTDiscord |
<_devsh_> then not only do you need to re-set the stencil buffer and rasterize it to get the shadows |
14:17 |
MTDiscord |
<_devsh_> but you also need to draw the mesh again normally, sample the textures again, and do the light calc only for one light |
14:17 |
|
ireallyhateirc joined #minetest |
14:18 |
MTDiscord |
<_devsh_> so 2 lights = 3 drawcalls that do near full pixel shader math + 2 drawcalls to draw stencil shadows |
14:18 |
MTDiscord |
<_devsh_> also stencil shadows by definition involve a shitton of pipeline state switching |
14:18 |
MTDiscord |
<_devsh_> cause you draw front faces with one stencil state and back faces with another |
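[editor's note] The pass count _devsh_ describes can be tallied with a tiny model (hypothetical, not Luanti's renderer): one ambient base pass, then per light a stencil-volume rasterization plus a fully shaded re-draw of the mesh.

```python
# Rough drawcall count for multi-pass forward rendering with stencil shadows.

def stencil_forward_drawcalls(num_lights, meshes=1):
    ambient = meshes               # base pass, ambient term only
    stencil = num_lights * meshes  # rasterize shadow volumes into the stencil buffer
    shading = num_lights * meshes  # re-draw the mesh, shading one light per pass
    return ambient + stencil + shading

print(stencil_forward_drawcalls(2))  # 5: the "3 shading + 2 stencil" from the chat
```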
14:19 |
MTDiscord |
<_devsh_> this is why deferred shading got invented in 2007ish |
14:31 |
MTDiscord |
<_devsh_> at least with deferred shading you can dump all the lighting into an L-Buffer |
14:32 |
MTDiscord |
<_devsh_> then read back when you actually do the shading |
14:32 |
MTDiscord |
<_devsh_> its still not great because you have to draw the scene twice |
14:32 |
MTDiscord |
<_devsh_> Z-prepass to have a depth buffer, and then a EQUAL depth comparison shading pass |
14:33 |
MTDiscord |
<_devsh_> so your little 60k polygon budget turns into 30k |
14:34 |
MTDiscord |
<_devsh_> this is why real deferred shading got invented, where you spit out stuff to a GBuffer, and then the z-prepass becomes optional |
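[editor's note] The budget arithmetic in the messages above, sketched out (illustrative numbers only): if a frame affords roughly 60k polygons of vertex work, rasterizing the scene geometry twice (Z-prepass plus shading pass) halves what is left for content, while a single G-buffer pass does not.

```python
# How many polygons of actual content fit, given how many times the
# scene geometry must be rasterized per frame.

FRAME_POLY_BUDGET = 60_000  # example figure quoted in the chat

def content_budget(geometry_passes, budget=FRAME_POLY_BUDGET):
    return budget // geometry_passes

print(content_budget(2))  # Z-prepass + EQUAL shading pass: 30000
print(content_budget(1))  # deferred, G-buffer rasterized once: 60000
```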
14:35 |
|
jluc joined #minetest |
14:35 |
erle |
_devsh_ so what do you think is the reason it is so much more performant in the minetest/luanti case, then? |
14:36 |
MTDiscord |
<_devsh_> than what? |
14:38 |
MTDiscord |
<_devsh_> https://developer.nvidia.com/gpugems/gpugems/part-ii-lighting-and-shadows/chapter-9-efficient-shadow-volume-rendering |
14:38 |
MTDiscord |
<_devsh_> > Fill rate is the Achilles heel of shadow volumes. Shadow volumes cover many pixels and have a lot of overdraw. This is particularly troublesome for point lights, which create shadows that get bigger the farther they are from the caster. |
14:40 |
MTDiscord |
<_devsh_> the worst part of stencil shadows is the fact you need to draw them every frame and you need to recompute the silhouette meshes every frame |
14:41 |
MTDiscord |
<_devsh_> the last state of the art stencil shadow research was this https://developer.nvidia.com/gpugems/gpugems3/part-ii-light-and-shadows/chapter-11-efficient-and-robust-shadow-volumes-using |
14:41 |
MTDiscord |
<_devsh_> and it required Geometry Shaders to extract the silhouette |
14:42 |
MTDiscord |
<_devsh_> the Intel GPUs are really weird, because they have no VRAM, so dynamically updating the geometry using the CPU doesn't hurt as much as on discrete GPUs, which have their own VRAM and need uploads over PCIe |
14:42 |
MTDiscord |
<_devsh_> also the early Intel GPUs are so shit, that actually offloading some of the vertex shader onto the CPU helps to free them up to do more pixel shading work |
14:43 |
MTDiscord |
<_devsh_> but by catering to them you tank the performance for new Intel GPUs, desktop GPUs and even mobile GPUs |
14:44 |
|
SFENCE joined #minetest |
14:44 |
MTDiscord |
<_devsh_> an RTX 2060 has 336GB/s bandwidth |
14:45 |
MTDiscord |
<_devsh_> but only supports PCIE 3.0 |
14:46 |
MTDiscord |
<_devsh_> and PCIE 3.0 x16 only has 16GB/s in a single direction |
14:49 |
MTDiscord |
<_devsh_> you can hurt a modern integrated GPU (Intel, AMD, mobile ARM, etc.) very little by keeping all your data in a Device Local buffer |
14:49 |
erle |
_devsh_ offloading stuff to CPU is a mesa topic, isn't it? |
14:49 |
MTDiscord |
<_devsh_> whereas on a desktop GPU you can do 20x the damage |
14:49 |
MTDiscord |
<_devsh_> dafuq? |
14:50 |
erle |
well last time i asked about some intel GPU driver issue (it was culling front faces instead of back faces) i was told they *still* maintain support for GMA 950 and have a newer driver than i had |
14:50 |
erle |
that uses CPU rendering for some stuff that the GPU is barely capable of |
14:50 |
erle |
which makes it less accurate but faster in a lot of cases |
14:50 |
MTDiscord |
<_devsh_> P.S. the RTX 4090 has 1008 GB/s bandwidth and people stick them into PCIE 3.0 x16 slots, so there's a 60x slowdown in rendering |
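[editor's note] A back-of-envelope check on the bandwidth figures quoted above (the GPU numbers are from the chat; the comparison itself is an illustrative sketch): how much slower streaming data over the bus is compared to reading it from local VRAM.

```python
# Ratio of on-board VRAM bandwidth to one-directional PCIe 3.0 x16 bandwidth.

PCIE3_X16_GBPS = 16.0  # roughly, in one direction

def vram_vs_pcie(vram_gbps, bus_gbps=PCIE3_X16_GBPS):
    return vram_gbps / bus_gbps

print(round(vram_vs_pcie(336.0)))   # RTX 2060: ~21x
print(round(vram_vs_pcie(1008.0)))  # RTX 4090: ~63x (the chat's "60x")
```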
14:52 |
MTDiscord |
<_devsh_> I had a conversation with @Shlaplombitomous Cranloutmeister in #offtopic about the fact that really old APIs tend to get CPU emulated because people don't have the stomach to maintain 50 different codepaths for GPUs with different ISAs, or even different GPUs behind the same modern API |
14:52 |
MTDiscord |
<_devsh_> as time progresses you can expect most platforms to implement GL 1.4 or even 2.1 on the CPU |
14:52 |
erle |
well in this case it is simply that the thing is dog slow once you go over a limit |
14:53 |
erle |
e.g. you reach a certain number of shader instructions on the GMA 950? 1 fps. mesa software rendering is faster in that case. |
14:53 |
erle |
not that it is playable, but it gets like 5 fps lol |
14:53 |
MTDiscord |
<_devsh_> yes, because thats a Shader Model 2.0 card |
14:53 |
erle |
also it unrolls all loops in the shaders |
14:53 |
MTDiscord |
<_devsh_> the thing is, if you keep bringing up the GMA 950 and the fact that it does something faster/slower and use that to guide your design decisions... minetest deserves every single irrlicht/GL bug it gets |
14:53 |
erle |
which is evidently bad if you are instruction-limited |
14:54 |
erle |
_devsh_ they already broke it |
14:54 |
erle |
it's no longer working |
14:54 |
MTDiscord |
<_devsh_> DX8-9 GPUs can't do flow control, period, SM 2.0 all shaders need to be unrolled |
14:54 |
MTDiscord |
<_devsh_> or unrollable |
14:55 |
erle |
all i'm saying is that for the general case, “this is so bad, performance will crater on old GPUs” will probably not be taken seriously as an argument |
14:55 |
erle |
as long as it works on somewhat-recent ones |
14:55 |
erle |
regardless of how bad it is |
14:55 |
MTDiscord |
<_devsh_> i dont know what you're arguing for in all honesty |
14:56 |
MTDiscord |
<_devsh_> I'd tell you to draw a new baseline at WebGPU / GLES 3.1 and start throwing irrlicht code out the window |
14:56 |
erle |
you already did! |
14:56 |
erle |
_devsh_ now assist the devs, will you? |
14:56 |
erle |
you seem capable |
14:56 |
MTDiscord |
<_devsh_> I'm already assisting, I'm telling you what algos are best |
14:56 |
MTDiscord |
<_devsh_> now you know what to google |
14:57 |
erle |
yeah good luck |
14:57 |
MTDiscord |
<_devsh_> no, you good luck.. I don't have a horse in this race |
14:58 |
erle |
well i got thrown off the horse, to keep the metaphorgotten way of speaking |
14:59 |
ireallyhateirc |
I'd bribe someone to improve 3D stuff in Luanti but I don't have money lol |
15:00 |
erle |
_devsh_ “telling people what to google” is usually not seen as too helpful, even if you are 100% correct |
15:00 |
erle |
i know exactly one dev that listens to me when i do that and that is cora |
15:00 |
erle |
(like, every time) |
15:01 |
MTDiscord |
<_devsh_> @rubenwardy @luatic I forgot to mention something funny, you don't need to worry about dropping users that don't have GLES 3.1: llvmpipe implements GLES 3.2 on your CPU and can be built for any OS |
15:01 |
erle |
everyone else likes working code (for good reasons) |
15:01 |
erle |
like, at least a prototype |
15:01 |
MTDiscord |
<_devsh_> amputating shitty GL gives you working code |
15:01 |
erle |
ireallyhateirc i would bribe people to put back support for my GPU lol or help maintain a fork |
15:02 |
MTDiscord |
<_devsh_> erle are you the sole 950 GMA user? |
15:02 |
erle |
_devsh_ no. i'm the sole user that is complaining. forget about it. |
15:02 |
MTDiscord |
<_devsh_> why are you using a 950 GMA in 2024 ? |
15:03 |
erle |
not this shit again |
15:03 |
|
Thelie joined #minetest |
15:03 |
MTDiscord |
<_devsh_> can I send you my company's trash ? |
15:03 |
MTDiscord |
<_devsh_> what GPU you want |
15:03 |
erle |
i have a reform2 and at some point it will work as well for me as the old thinkpads did |
15:04 |
erle |
(i.e. when the batteries arrive earliest) |
15:04 |
sfan5 |
I just wanted to mention that erle's hardware thankfully does not guide our design decisions at Luanti (formerly Minetest). |
15:04 |
erle |
i already told that |
15:04 |
|
Desour joined #minetest |
15:05 |
MTDiscord |
<_devsh_> I mean if someone has some religious objection to using a GPU with a Unified Shading pipeline and compute shaders, you could always tell them to run llvmpipe |
15:05 |
erle |
LMAO |
15:05 |
MTDiscord |
<_devsh_> then they get GLES 3.2 or Vulkan 1.3 and can call it a day https://vulkan.gpuinfo.org/displayreport.php?id=34287 |
15:05 |
ireallyhateirc |
every time I hear "formerly Minetest" I recall Sneed's Feed and Seed, formerly Chuck's |
15:05 |
erle |
the artist formerly known as prince |
15:06 |
erle |
_devsh_ i appreciate your offer, but better give your old hardware to poor people who have no laptop, like students or so. |
15:06 |
erle |
or refugees |
15:07 |
MTDiscord |
<_devsh_> https://openbenchmarking.org/result/1801028-AL-LLVMPIPIN75 |
15:08 |
erle |
23 fps in openarena? let me channel luatic and say: barely playable lol |
15:09 |
MTDiscord |
<_devsh_> in 1080p |
15:09 |
MTDiscord |
<_devsh_> I'm pretty sure that once you drop to 720p you get 46 fps |
15:10 |
* jluc |
reads https://blog.minetest.net/2024/10/13/Introducing-Our-New-Name/ |
15:12 |
erle |
thanks for introducing my new name btw :3 |
15:12 |
MTDiscord |
<_devsh_> apparently llvm-pipe is stupid fast on the Apple M2 |
15:13 |
erle |
_devsh_ i may be wrong on this, but i kinda doubt llvmpipe is the goose that shits golden apples for underpowered netbooks/thinkpads, cause these don't have that much compute to begin with? |
15:13 |
erle |
do you know? |
15:13 |
MTDiscord |
<_devsh_> ok lets split people into two categories |
15:13 |
MTDiscord |
<_devsh_> GPU Amishes and Poor People |
15:13 |
erle |
lmao |
15:13 |
MTDiscord |
<_devsh_> Poor people can be crowdfunded or sent GPUs with GLES 3.1 / WebGPU support |
15:14 |
MTDiscord |
<_devsh_> or just literally dig them out of the trash |
15:14 |
MTDiscord |
<_devsh_> and GPU Amishes should just get a CPU, that way they get truly Open Source Graphics |
15:14 |
erle |
lmao |
15:14 |
celeron55 |
erle is from germany, have you heard of the economic situation there |
15:14 |
erle |
hahahahahahaha |
15:14 |
MTDiscord |
<_devsh_> yes soon the germans will come to Poland to steal our jobs |
15:15 |
rubenwardy |
Cheap doesn't necessarily mean old. You can get second hand lower tier hardware that'll support opengl 3.2 |
15:15 |
erle |
i'm not poor right now. but the thinkpad i write on was given to me over 10 years ago when i was poor. |
15:15 |
erle |
this is why i suggest to send your old hardware to poor people |
15:15 |
MTDiscord |
<_devsh_> if you wanna game on a 20 year old GPU, then play 20 year old games on it |
15:16 |
erle |
rubenwardy see this is the issue i see. if the second hand HW is powerful enough, llvmpipe is not necessary. if it is not powerful, llvmpipe is not helping (or so i think). |
15:16 |
erle |
_devsh_ do you know magic the gathering? |
15:16 |
rubenwardy |
There's a difference between hardware power and modern standard support |
15:17 |
MTDiscord |
<_devsh_> https://www.ebay.de/sch/i.html?_from=R40&_nkw=geforce&_sacat=0&rt=nc&_udhi=20 |
15:17 |
erle |
i just hope the reform2 (imx8) will not be cut off anytime soon |
15:18 |
erle |
because that's the best laptop i have |
15:19 |
MTDiscord |
<_devsh_> https://www.ebay.de/sch/i.html?_from=R40&_nkw=geforce&_sacat=0&_udhi=20&LH_ItemCondition=1000%7C1500%7C2010%7C2020%7C2030%7C2500%7C3000&rt=nc&LH_BIN=1 |
15:19 |
erle |
(it still gets outperformed by an old thinkpad on some workloads) |
15:19 |
MTDiscord |
<_devsh_> you can get a GeForce 980 for 50 EUR to fix up a fan |
15:20 |
erle |
i think you disregard that every single computer i am talking about is a laptop |
15:21 |
erle |
> GC7000 will support the newly released OpenGL ES 3.1 API |
15:21 |
erle |
well i hope it is better than intel integrated opengl 2.x support lol |
15:21 |
MTDiscord |
<_devsh_> you can get a laptop for < $100 that supports DX11 |
15:22 |
erle |
just stop. minetest/luanti will not put back support for my hardware and i am not a gamer, so will rather stop playing than buy new hardware for gaming. |
15:23 |
erle |
i'll try to make it work on reform/reform2 and maybe fork (together with other people who have other reasons for low-spec GPUs, e.g. lots of old hardware lying around, RYF fetishization) |
15:23 |
erle |
that's the solution |
15:23 |
MTDiscord |
<_devsh_> you can get a GeForce 710 1GB for 20 EUR |
15:23 |
erle |
_devsh_ yeah but i can't cram it into an old thinkpad or a new reform, can i? |
15:24 |
erle |
the only thing i am bemoaning is that up until recently, *every* computer that could run wayland could run luanti/minetest. and i have at least three or four of them lying around here. |
15:24 |
celeron55 |
we already gave about 10 extra years for the GMA 950 compared to every other actively developed game on the planet. at some point, it's time to pick up newer standards that off-the-shelf hardware is designed for, and 10 years late sounds just about right to me |
15:25 |
erle |
the vivante GC3000 will be fine, probably. i'll just not use this computer for gaming anymore. and i doubt anyone is breaking network compat any time soon. |
15:25 |
MTDiscord |
<_devsh_> the correct way to choose is "is this GPU still getting driver updates" |
15:25 |
celeron55 |
there's no point in putting effort in developing essentially a retro game. the potential audience is unnecessarily small |
15:25 |
erle |
_devsh_ that is funny, because the GMA950 *was* still getting driver updates at least as of last year |
15:25 |
erle |
unless i am mistaken |
15:25 |
MTDiscord |
<_devsh_> that way you cut off Windows at 3 year old Intel GPUs and have a longer tail on Linux where the support actually exists |
15:26 |
erle |
like, mesa updates |
15:26 |
ireallyhateirc |
<MTDiscord> <_devsh_> yes soon the germans will come to Poland to steal our jobs |
15:26 |
MTDiscord |
<_devsh_> newer Mesa version doesn't count |
15:26 |
ireallyhateirc |
LMAO |
15:26 |
erle |
_devsh_ why not, if it results in better performance and more capabilities? |
15:26 |
MTDiscord |
<_devsh_> Mesa shares 80% of the driver code between all GPUs |
15:26 |
MTDiscord |
<_devsh_> AMD, Intel, Nvidia, ARm, etc. all use the same shader compiler |
15:26 |
MTDiscord |
<_devsh_> and shader IR |
15:26 |
erle |
is this where llvmpipe comes in again? |
15:27 |
MTDiscord |
<_devsh_> Mesa is the only sanely designed driver on the planet, it actually kinda does what WDDM and DirectX do on Windows |
15:27 |
celeron55 |
guess what hardware i started MT on? well, of course a GMA 950 laptop. that was in 2010 and it was very outdated for any kind of gaming purpose even back then |
15:27 |
erle |
celeron55 that kinda explains why it still runs so well on it! |
15:28 |
MTDiscord |
<_devsh_> DirectX driver quality is much higher than OpenGL/Vulkan on Windows, because Microsoft codes half the implementation |
15:28 |
MTDiscord |
<_devsh_> including the shader compiler |
15:28 |
celeron55 |
i still have the laptop, but that doesn't mean i use it for anything. it's uselessly slow for anything other than running outdated automotive diagnostic software on windows xp |
15:28 |
erle |
i had a dell inspiron 6400 at that time, Integrated on system board - Intel 945 GM |
15:29 |
MTDiscord |
<_devsh_> btw this 25 EUR GPU still gets driver updates |
15:29 |
MTDiscord |
https://www.ebay.de/itm/365211233629 |
15:29 |
MTDiscord |
<_devsh_> also laptops with the MXM slot can have their GPUs updated |
15:30 |
erle |
celeron55 the only thing i found that doesn't work well on old laptops besides 3d gaming is electron apps. youtube/videocalling/vlc/mplayer and non-electron chat apps work fine … IF you put in an SSD |
15:30 |
erle |
like, i stuck SSDs in all old thinkpads actually and that made them work fine |
15:30 |
erle |
you can install debian on it |
15:30 |
MTDiscord |
<_devsh_> look, a 970 with 4 GB of VRAM |
15:30 |
MTDiscord |
https://www.ebay.de/itm/186668013890?_skw=geforce&itmmeta=01JCR6RNSKTAF3FPJQ7XNYYSVF&hash=item2b7647d942%3Ag%3AzoUAAOSwDwdm1wON&itmprp=enc%3AAQAJAAAAwHoV3kP08IDx%2BKZ9MfhVJKncZATOsm9ssEL9y1tOCNZJbpuh0F0a0KH9ZiqLJPLhT3HvYw45L3I4rB%2B4iU7hj0Rt5uGvHZugDFzZf31YjT0JI1JhmWPZAHmFiBSURVWKCFclhPyIL6RDCTK5PgugamR4PgKPlc9AtPDuxwyDQRJ59Sg6WUBvnHf7zl1jT13tF1RLopoqNuni6qLNgsWEnj9yIxbSn%2FZ9YgeyumLurJlUQ%2BIk3Nd8P1eQ3nOhiVHZfQ%3D%3D%7Ctkp%3ABk9SR4Dd4obmZA&LH_BIN=1&LH_ItemCondition=1000%7C1500%7C2010%7C2020%7C2030%7C2500%7C3000 |
15:30 |
erle |
_devsh_ stop posting ebay links |
15:31 |
erle |
_devsh_ the way the people i know who use older hardware (e.g. university students, paupers) do it is go to some refurbished store and choose the thinkpad they are comfortable with |
15:31 |
erle |
the only person i know who buys dedicated GPUs is my bf and he is a gamer |
15:32 |
lemonzest |
I got a couple of laptops and old dells sff, putting ssds in them made them fly; on older systems the cpu was not really the issue, being starved of data was |
15:32 |
MTDiscord |
<_devsh_> else do you want me to find you a $100 laptop ? |
15:33 |
lemonzest |
I got a ThinkPad X270 with 8GB RAM/256GB SSD from eBay for around that (I upgraded it tho afterwards) |
15:33 |
erle |
_devsh_ no i want you to start posting about how the devs can make it work better again instead. and ideally write some code because if your ideas are any good, you'll have to cram them down people's throats (metaphorically) |
15:33 |
MTDiscord |
<_devsh_> ACER Chromebook 13 CB5-311-T3X0 Nvidia Tegra 2GB RAM 16GB is on sale for 75 GBP |
15:33 |
MTDiscord |
<_devsh_> that thing has GL 4.5 desktop ! |
15:33 |
erle |
lemonzest minetest/luanti also has a long-standing “slow i/o can and will degrade your performance” issue IIRC |
15:34 |
MTDiscord |
<_devsh_> erle i will not write code for minetest |
15:34 |
lemonzest |
yeah this X270 was 94 GBP, has an i5-6200U with HD 520 iGPU |
15:34 |
MTDiscord |
<_devsh_> HD520 iGPU is pretty ok |
15:34 |
erle |
_devsh_ then be prepared that people who have less knowledge than you will do it and possibly not do what *you* want |
15:34 |
lemonzest |
yeah has OGL 4.6 and Vulkan 1.3 support |
15:35 |
erle |
lemonzest i find it amazing how computers are so fast that some things are just wasteful nowadays, but you only notice on 10 to 20 year old hardware. |
15:35 |
erle |
or on embedded devices |
15:35 |
lemonzest |
I added 32GB RAM, 2x 512GB SSD (SATA + NVME) Intel AX210 for Wifi 6E |
15:35 |
erle |
like, IIRC the gajim chat client used to re-render the entire conversation |
15:36 |
erle |
which made the chat slower the longer you chatted |
15:36 |
lemonzest |
ouch |
15:36 |
erle |
but only on slower machines this was noticeable, because the devs had faster ones |
15:36 |
erle |
“accidentally quadratic” is the sweet spot for that i think |
15:36 |
lemonzest |
yeah, the dead cells devs code on "potato" hardware and it showed up some stuff in their code |
15:37 |
erle |
you iterate over some things. for each of those things, you accidentally do something that iterates over all of those. and as long as your number of things is low, no one will notice. |
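The pattern erle describes can be sketched in a few lines; the function names and counts are illustrative, not from any real codebase:

```python
# Sketch of the "accidentally quadratic" pattern: an inner scan over the
# whole collection hides inside a loop over the same collection.

def count_ops_quadratic(items):
    """For each item, accidentally scan all items again."""
    ops = 0
    for _item in items:
        for _other in items:  # the hidden full scan
            ops += 1
    return ops

def count_ops_linear(items):
    """The intended cost: touch each item once."""
    ops = 0
    for _item in items:
        ops += 1
    return ops

# With 10 items the difference is invisible (100 vs 10 steps); with
# 10,000 items it is 100,000,000 vs 10,000 -- slow machines notice first.
```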
15:37 |
erle |
lemonzest what is dead cells? |
15:37 |
lemonzest |
indie metroid-style roguelike platformer |
15:38 |
celeron55 |
according to videocardbenchmark.net, my current laptop GPU is 114x faster than the GMA 950 i had back then |
15:38 |
lemonzest |
random generated levels etc |
15:38 |
celeron55 |
well, although the score probably isn't perfectly linear |
15:38 |
MTDiscord |
<_devsh_> lemonzest, this is interesting about the HD 520 that it has Vulkan 1.3 |
15:38 |
lemonzest |
In Linux anyways |
15:38 |
lemonzest |
I run Debian Sid on it |
15:39 |
lemonzest |
it runs proton games pretty good (which now require vulkan 1.3 because of dxvk) |
15:40 |
MTDiscord |
<_devsh_> awesome |
15:40 |
MTDiscord |
<_devsh_> dxvk devs have their heads screwed on right |
15:40 |
MTDiscord |
<_devsh_> vulkan 1.3 >> DirectX 12 |
15:40 |
MTDiscord |
<_devsh_> we have GPU pointers 😄 |
15:41 |
MTDiscord |
<_devsh_> you just ask for vkGetBufferDeviceAddress of a VkBuffer and you can send it as a uint64_t to a shader, and treat it as a pointer |
15:42 |
erle |
debian is pretty nice |
15:42 |
erle |
it runs on potato |
15:42 |
erle |
well, they will eventually drop support for 32-bit x86 i guess |
15:43 |
lemonzest |
Mostly tho for the past 12y I been running Fedora |
15:43 |
erle |
but like, x11, wayland, everything nice |
15:43 |
erle |
fedora is a bit too “stuff breaks easily because it is always the NEWEST SHIT” for me |
15:43 |
erle |
they also deprecate hardware support pretty aggressively |
15:44 |
lemonzest |
had not noticed, I run it on a Dell Latitude E7470 thats about 7-8yo and it runs perfectly fine (for its age) |
15:45 |
erle |
well, e.g. for legacy BIOS they did stuff like “you can boot using your existing installation and we will not break it, but the installer will not support BIOS installation anymore”, which is probably the nicest way to not make people angry, but “engineer” your numbers in a way that at some point you don't have any legacy BIOS users |
15:45 |
erle |
(and then you can drop everything) |
15:45 |
erle |
this also extends to dropping e.g. VESA support |
15:46 |
erle |
the *only* reason i was able to get this laptop (R60) usable was because of VESA support actually – it needed a BIOS update to not hang at boot with non-VESA graphics, but have fun downloading and burning that to a CD without any kind of graphics support |
15:46 |
lemonzest |
Last time I heard of VESA support it was when I was trying to play Duke 3D back in DOS |
15:47 |
erle |
well you need it every time your graphics driver situation is properly fucked |
15:47 |
erle |
otherwise, i think only in early boot |
15:48 |
erle |
with VESA i could at least get like 5fps X11 desktop at 1400x1050 |
15:48 |
erle |
anyway, i would expect fedora to drop support for hardware WAY before luanti/minetest does |
15:48 |
erle |
because they are so aggressive at it |
15:49 |
lemonzest |
Yup |
15:50 |
erle |
this is also why i think debian is super nice. i can remember no surprises like this, except once (ex gf of mine updated, systemd transition made her system unbootable) |
15:51 |
erle |
(well we fixed it) |
15:54 |
|
gregon joined #minetest |
15:56 |
erle |
might have been 1280×1024 actually? i checked and can't find if vesa supports 1400x1050 |
15:56 |
|
fling_ joined #minetest |
15:57 |
erle |
https://en.wikipedia.org/wiki/VESA_BIOS_Extensions#Linux_video_mode_numbers |
15:57 |
|
est joined #minetest |
15:57 |
erle |
> As indicated earlier, the VESA standard defines a limited set of modes; in particular, none above 1280×1024 are covered and, instead, their implementation is completely optional for graphics adapter manufacturers. As vendors are free to utilize whatever additional values they please […] |
16:18 |
|
shaft joined #minetest |
16:28 |
|
SpaceManiac joined #minetest |
16:31 |
MTDiscord |
<jordan4ibanez> I guess it is in fact yolo time |
16:38 |
|
shaft joined #minetest |
16:39 |
shaft |
Thanks for doing the linux release rubenwardy |
16:41 |
shaft |
Something's wrong with the new gltf animation I think. It works beautifully once I figured out how to not export empty STEP type animations from Blender but for some reason higher frame_speed values in set animations make the animation go faster and lower ones make them go slower. The documentation says frame_speed is fps, so it should be the other way around, right? |
16:42 |
MTDiscord |
<luatic> STEP interpolation is not supported indeed |
16:43 |
shaft |
My 24 fps animation roughly matches frame_speed 3 in set_animation() |
16:43 |
MTDiscord |
<luatic> Animation speed linearly grows with frame_speed, that's also as expected. Note that gltf frames are typically in seconds. |
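A hedged model of what luatic is saying (this is an illustration of the described behaviour, not engine code): if glTF keyframe times are in seconds, a frame_speed of 1 plays back in real time, and playback position grows linearly with frame_speed:

```python
# Illustrative model: "frame" position reached after real_time_s seconds
# of playback, assuming frame_speed is a linear time multiplier and the
# glTF timeline is in seconds (so frame_speed 1 == editor speed).

def animation_position(real_time_s, frame_speed):
    return real_time_s * frame_speed
```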
16:44 |
shaft |
*2 |
16:46 |
shaft |
So how do I arrive from 24fps with 80 frames to the value ~2 |
16:46 |
shaft |
? |
16:48 |
MTDiscord |
<luatic> You should usually use a frame_speed of 1 to observe the same animation speed in Luanti as in your model editor, otherwise I'd question the export. |
16:49 |
|
MacroFaxSax joined #minetest |
16:49 |
shaft |
Oh, I'm just confused because the default value is 15 |
16:49 |
shaft |
Yes 1 matches it |
16:49 |
shaft |
Then everything is good |
16:49 |
erle |
why is the default 15 in the first place? |
16:49 |
erle |
luatic did you use afl-fuzz etc. on the gltf parser already? |
16:50 |
erle |
or afl++ |
16:50 |
MTDiscord |
<luatic> erle: no time for that |
16:50 |
MTDiscord |
<luatic> erle: the default is one because i can't break things :'( |
16:50 |
MTDiscord |
<luatic> i'll get a chance to set an (IMO) saner default when i implement a new set_animations method |
16:50 |
erle |
luatic if you have some test case that *only* loads a model (no rendering) i can try to show you how to do it |
16:50 |
erle |
it's super easy |
16:51 |
erle |
well not today |
16:51 |
erle |
but eventually |
16:51 |
MTDiscord |
<luatic> erle: there are basic unit tests for it in the repo. they're not entirely decoupled from irrlicht rendering (they do initialize the SDL device and all that iirc) but they do run headless. |
16:52 |
erle |
luatic the issue is that if you do more work then a) fuzzing is slower than it needs to b) you find bugs in that other code instead of in the parser |
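The harness shape erle is arguing for can be sketched like this: feed raw bytes straight to the parser and nothing else, so afl-fuzz/afl++ spends its iterations in parsing code rather than device setup. `parse_model` below is a hypothetical stand-in for the real glTF loader:

```python
# Minimal parser-only fuzz harness sketch. A clean rejection is expected
# behaviour; any other exception escaping is the crash the fuzzer wants.
import sys

class ModelError(ValueError):
    """Expected rejection of malformed input."""

def parse_model(data: bytes):
    # stand-in: a real harness would call the actual glTF parser here
    if not data.startswith(b"glTF") and not data.lstrip().startswith(b"{"):
        raise ModelError("not a glTF file")
    return data

def harness(data: bytes) -> bool:
    """True if the input parsed, False if it was cleanly rejected."""
    try:
        parse_model(data)
        return True
    except ModelError:
        return False

if __name__ == "__main__":
    harness(sys.stdin.buffer.read())
```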
16:52 |
MTDiscord |
<luatic> remove rendering, insert from before irrlicht |
16:52 |
erle |
needs to be |
16:52 |
MTDiscord |
<luatic> erle: well i think this could be fine because it's just test case setup basically |
16:52 |
MTDiscord |
<luatic> erle: needs to be = needs to b) :D |
16:53 |
MTDiscord |
<jordan4ibanez> GLTF is time based, irrlicht is framerate based |
16:53 |
erle |
well i am only an expert on parser bugs |
16:53 |
MTDiscord |
<jordan4ibanez> we jammed a square peg into the round hole |
16:53 |
erle |
jordan4ibanez read up on apollo 13 air filters for some fun |
16:53 |
MTDiscord |
<jordan4ibanez> I am using one right now |
16:53 |
erle |
you are using an apollo capsule? i press X to doubt |
16:54 |
erle |
https://www.nasa.gov/history/afj/ap13fj/15day4-mailbox.html |
16:54 |
MTDiscord |
<jordan4ibanez> I'll be honest, I am not sure exactly how we would do step based animation in the current engine implementation |
16:54 |
MTDiscord |
<luatic> I am sure |
16:54 |
MTDiscord |
<jordan4ibanez> Yay |
16:54 |
MTDiscord |
<luatic> But I want to unfuck it first |
16:54 |
MTDiscord |
<jordan4ibanez> Sounds like a good plan tbh |
16:54 |
MTDiscord |
<luatic> Mixing refactoring and fixing is already semi-questionable |
16:55 |
MTDiscord |
<luatic> If I was to mix refactoring, fixing and new features reviewers would just drop dead |
16:55 |
erle |
https://spacecenter.org/apollo-13-infographic-how-did-they-make-that-co2-scrubber/ |
16:55 |
MTDiscord |
<luatic> (rightfully) |
16:55 |
erle |
luatic is correct, putting too much stuff in PRs just results in “i hope it works” reviews |
16:55 |
erle |
see, e.g. mineclone2 and successors “armor rewrite” and “mob rewrite” and “nether portal rewrite” |
16:56 |
erle |
(there is a mob rewrite every year or so i guess? :D) |
16:56 |
MTDiscord |
<jordan4ibanez> Yeah no you both make a great point |
16:57 |
MTDiscord |
<jordan4ibanez> So it is, fixes, refactor, then new features |
16:58 |
shaft |
Why is min_minetest_version not in the Luanti documentation? |
16:59 |
shaft |
Is it a contentdb variable? |
17:00 |
shaft |
Oh it is. Never mind |
17:02 |
|
Blockhead256 joined #minetest |
17:04 |
Blockhead256 |
while we're on the topic of animated gltf... |
17:05 |
Blockhead256 |
I figured out how to enable embedded gltf and that you have to use "scene" animation mode, but then |
17:05 |
Blockhead256 |
"unsupported interpolation, only linear is supported". Interpolation of what? Animation frames? Textures? |
17:06 |
MTDiscord |
<luatic> animation frames |
17:06 |
MTDiscord |
<luatic> most probably your exporter is producing STEP (no interpolation) frames somewhere |
17:07 |
MTDiscord |
<luatic> (because i'm yet to see CUBICSPLINE) |
17:07 |
erle |
luatic what exporter do you use or suggest actually? maybe a writeup of how you make gltf models would be useful |
17:08 |
Blockhead256 |
yes my gltf has STEP interpolations.. how do I ask blender to make them linear? |
17:10 |
MTDiscord |
<luatic> simplest thing you can do is just s/STEP/LINEAR in the JSON, if it still works okay chances are it actually doesn't really depend on it being STEP |
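luatic's s/STEP/LINEAR suggestion can be done structurally instead of as a blind text substitution, which avoids touching unrelated strings in the file. A minimal sketch (the file name is hypothetical):

```python
# Rewrite animation sampler interpolation from STEP to LINEAR in a
# text-format .gltf (JSON) document, per the glTF 2.0 layout
# animations[].samplers[].interpolation.
import json

def step_to_linear(gltf):
    """Mutate a parsed glTF dict, converting STEP samplers to LINEAR."""
    for anim in gltf.get("animations", []):
        for sampler in anim.get("samplers", []):
            if sampler.get("interpolation") == "STEP":
                sampler["interpolation"] = "LINEAR"
    return gltf

# Usage sketch:
# with open("model.gltf") as f:
#     gltf = step_to_linear(json.load(f))
# with open("model.gltf", "w") as f:
#     json.dump(gltf, f)
```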
17:10 |
shaft |
I had to (1) select all keyframes of the animation, right click/interpolation mode/Linear (2) disable "Animation/Optimize Animation/Force keeping channels for bones" in the exporter |
17:11 |
MTDiscord |
<luatic> why is "Force keeping channels for bones" on by default :thonking: |
17:11 |
shaft |
It took me hours to figure out (2) but thanks to the text format gltf I was able to see if there were STEP animations remaining |
17:12 |
shaft |
luatic. It always is. I don't know why Blender does this. |
17:12 |
Blockhead256 |
sorry this may be a blender noob question but.. on the animation workspace, I can't actually see my keyframes |
17:13 |
Blockhead256 |
I think this model was re-imported at some point from b3d, which maybe lost that info |
17:13 |
shaft |
You have to select the animation node in the scene tree |
17:14 |
Blockhead256 |
oh yes, if I select the armature it shows up. thanks |
17:15 |
shaft |
Man I'm glad we don't have to deal with b3d anymore |
17:16 |
ireallyhateirc |
Would be great if the workflow with Blender/gltf was documented somewhere (wiki, api?) |
17:17 |
shaft |
Yes. |
17:17 |
Blockhead256 |
what would the process be if I had to convert the interpolations to linear? is there any easy way inside blender, or is it "eyeball the curve by inserting more keyframes"? |
17:17 |
shaft |
Please copy everything I wrote there. |
17:17 |
Blockhead256 |
yep I got my wiki account today, and I intend on writing it up and putting it there |
17:17 |
Blockhead256 |
but I first have to discover how it works myself |
17:18 |
MTDiscord |
<luatic> Blockhead256: You can approximate a STEP (literally 0 to 1 jump) using linear functions in an ugly manner, yes |
17:18 |
|
gregon joined #minetest |
17:19 |
Blockhead256 |
luatic: thankfully, lerp works fine for my animation, in fact it kind of looks that way already |
17:20 |
Blockhead256 |
I'm more concerned for anyone trying to get smooth interp models into Luanti, since e.g. cubic would need to be redone |
17:20 |
MTDiscord |
<luatic> Basically say you want a step from t to t'. You'd insert two more frames at t'' = (t + t')/2 ± epsilon, one with the old value, one with the new one. Basically you'd be doing linear interpolation for a split fraction of a second and hoping nobody notices. I believe that if you choose epsilon small enough, you may have a good chance that the probability of that happening is effectively zero. |
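luatic's epsilon trick, sketched as a helper on (time, value) keyframe pairs (names and the midpoint placement are illustrative):

```python
# Approximate a STEP jump between keyframes (t, old) and (t2, new) with
# LINEAR interpolation: hold the old value until just before the midpoint,
# then ramp to the new value over a 2*epsilon window nobody should notice.

def approximate_step(t, old_value, t2, new_value, epsilon=1e-3):
    mid = (t + t2) / 2
    return [
        (t, old_value),
        (mid - epsilon, old_value),   # hold the old value...
        (mid + epsilon, new_value),   # ...then ramp over 2*epsilon
        (t2, new_value),
    ]
```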
17:20 |
Blockhead256 |
but I guess that can wait/I can write a "good luck eyeballing it lol" comment in the tutorial material |
17:20 |
ireallyhateirc |
I believe you could just bake interpolation as keyframes... |
17:21 |
MTDiscord |
<luatic> I'm yet to see cubicspline, I think step is a more of a problem right now because as we just established Blender unnecessarily uses it by default even if you only have linear animations |
17:21 |
Blockhead256 |
ireallyhateirc: sure, but is there an easy way to ask blender to bake other interps to linear? that's the question |
17:21 |
ireallyhateirc |
no idea |
17:22 |
Blockhead256 |
I suppose we ask modellers to do that process themselves for now and hold our breath for higher-order interps support later in Luanti.. ... |
17:24 |
Blockhead256 |
"right click"... heh, my blender control scheme is a bit wacky: left click select, right-click pan, W for menu |
17:25 |
Blockhead256 |
*right-click orbit. Middle click orbit is unergonomic (moreover, my middle click is broken) |
17:27 |
yezgromafic |
all this blender stuff brings back memories... hopefully i will pick it up once i actually start making mods |
17:27 |
Blockhead256 |
well it works now.. the scale is off by the normal 1:10 but the model's loaded. Thanks everybody |
17:41 |
|
Glaedr joined #minetest |
17:43 |
|
SFENCE joined #minetest |
17:49 |
Blockhead256 |
riddle me this: why is my model out of scale such that the scale factor 0.316 is approximately the original size? |
17:50 |
ireallyhateirc |
aren't Minetest models 5 times bigger/smaller ? |
17:51 |
Blockhead256 |
the ordinary rule for entities is 1:10, not 5 |
17:51 |
Blockhead256 |
at least, it was when I was dealing with OBJ and B3D |
17:51 |
Blockhead256 |
I guess I need to look into my Blender units |
17:52 |
Blockhead256 |
I suppose "you reopen your blend files and re-export as gltf" was just a dream anyway |
17:52 |
ireallyhateirc |
why is the scale 1:10 again? Just to make the life of the 3D artist more painful? |
17:53 |
rubenwardy |
Pretty much |
17:53 |
rubenwardy |
It's literally BS |
17:53 |
rubenwardy |
Block Size |
17:53 |
Blockhead256 |
https://forum.luanti.org/viewtopic.php?p=440597#p440597 chime in if you have comments, I have to go |
17:54 |
ireallyhateirc |
rubenwardy, by block size you mean what, a mapblock? |
17:54 |
ireallyhateirc |
or fool's block - node |
17:55 |
rubenwardy |
I guess it means node |
17:55 |
rubenwardy |
In the engine, BS = 10 which is where the x10 comes from |
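The BS scaling rubenwardy describes comes down to simple arithmetic; a hedged sketch (just the conversion, not engine code):

```python
# With BS = 10, one node spans 10 internal units, so a model authored at
# "1 unit = 1 node/meter" appears at 1/10 the intended size unless its
# visual scale compensates -- the usual 1:10 rule for entity models.
BS = 10

def model_units_to_nodes(model_units):
    return model_units / BS

def nodes_to_model_units(nodes):
    return nodes * BS
```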
17:56 |
ireallyhateirc |
this gives me a headache. if we're larping that 1 node = 1 meter then the engine should work that way |
17:58 |
rubenwardy |
Getting rid of this is planned for 6.0. Who knows when that'll be though |
18:00 |
ireallyhateirc |
I'll simply live in pain until then |
18:00 |
|
gregon joined #minetest |
18:01 |
|
tarsovbak joined #minetest |
18:48 |
|
SFENCE joined #minetest |
19:12 |
erle |
btw, anyone else have this thing that like IBM-era thinkpads are *much* longer living than newer ones? |
19:13 |
erle |
like, i used a p14s for work and … the “6” key broke. just so. touchpads would fail … etc. |
19:14 |
erle |
meanwhile, the only thing that i had ever berak in older models were display (inverter?), battery (put a new one in), hinges (because of particular violence i inflicted on it i guess) |
19:14 |
erle |
berak → break |
19:18 |
ireallyhateirc |
planned obsolescence I guess |
19:18 |
ireallyhateirc |
I have a used T60 from 2008 that works great (for a toaster), while new thinkpads break after 3-5 years |
19:19 |
MTDiscord |
<_devsh_> https://www.furygpu.com/ |
19:19 |
MTDiscord |
<_devsh_> Erle, pure enough for you? |
19:22 |
Desour |
I have the thing that my laptop is quite shit in longevity |
19:24 |
Desour |
it's only a 6 year old laptop (the powerful gaming setup of one of the coredevs), and the keyboard is already breaking, and the battery has also expanded |
19:31 |
ireallyhateirc |
I got a good "made for Windows 2000" office HP keyboard from a trashcan, better than most stuff out there |
19:32 |
ireallyhateirc |
to my surprise it was really clean and had no dust or breadcrumbs inside |
19:33 |
[ |
I broke my old X200's keyboard (kind of, it's still usable) and middle mouse button (which is now unusable) by attempting to remove them so I could clean the keyboard. Also the backlight stopped working and the internal speaker doesn't work (except I don't think it's the speaker itself, since I tried replacing it and it still didn't work) |
19:34 |
ireallyhateirc |
I lost a rubber cap from the keyboard upon cleaning so the "Home" button is down |
19:34 |
[ |
erle: was it actually the display or the backlight like mine? |
19:34 |
ireallyhateirc |
but idk what it's used for in the first place |
19:39 |
erle |
[ oh it's backlight for sure |
19:39 |
erle |
> I have a used T60 from 2008 that works great (for a toaster), while new thinkpads break after 3-5 years |
19:39 |
erle |
my experience |
19:40 |
MTDiscord |
<_devsh_> [_ is fury GPU pure enough for you? |
19:40 |
erle |
x230 and x260 (work machines) also broke within several years |
19:41 |
erle |
> high-end graphics card of the mid 1990s |
19:41 |
erle |
yeah lol |
19:41 |
erle |
_devsh_ this is not a retro competition |
19:43 |
erle |
_devsh_ my rule of thumb for GPUs, they should be able to do modern X11/wayland and be able to display relatively lightweight modern desktop environments (e.g. xfce/i3/sway) without shitting themselves. |
19:44 |
erle |
anything i have seen that is weaker than that will make browsing the modern web a pain |
19:44 |
erle |
(tearing/redrawing issues mostly) |
19:44 |
Krock |
a framebuffer + software rendering can already be enough for that nowadays |
19:47 |
|
SFENCE joined #minetest |
19:49 |
erle |
Krock if you enjoy the smell of burned plastic, everything can be solved with llvmpipe |
19:51 |
erle |
https://ctrl-alt-rees.com/2024-08-13-intel-945gm-express-i915-gma-950-modesetting-driver-llvmpipe-linux.html :D |
19:54 |
|
SFENCE joined #minetest |
19:56 |
|
fluxionary joined #minetest |
20:00 |
MTDiscord |
<jordan4ibanez> My dude, it is useless |
20:03 |
cheapie |
erle: llvmpipe is fun, even Vulkan ray tracing works in software rendering now :D |
20:04 |
MTDiscord |
<_devsh_> "works" is a bit of a stretch |
20:04 |
cheapie |
...granted, even in 640x480 it's well into "hours per frame" territory, but it works |
20:04 |
MTDiscord |
<_devsh_> "I did no perf testing because I didn't have the will to wait for a frame to finish rendering" |
20:05 |
cheapie |
IIRC when I tried Quake II RTX on a V1756B in 640x480, it rendered more or less fine but took about 2.5 hours/frame. |
20:05 |
MTDiscord |
<_devsh_> Quake II RTX is an NV tech demo |
20:05 |
erle |
rubenwardy no idea if you still use reddit, but the sticky needs updating https://old.reddit.com/r/Minetest/ |
20:05 |
cheapie |
Yes, but it runs on non-Nvidia hardware too |
20:06 |
MTDiscord |
<_devsh_> llvmpipe can be a lot faster though |
20:06 |
MTDiscord |
<_devsh_> they just need to go harder on the compiler |
20:06 |
MTDiscord |
<_devsh_> its a pretty dope thing btw |
20:06 |
cheapie |
Luanti is generally somewhat playable with llvmpipe these days, which is neat. |
20:06 |
|
jluc_ joined #minetest |
20:06 |
MTDiscord |
<_devsh_> > Shaders, point/line/triangle rasterization and vertex processing are implemented with LLVM IR which is translated to x86, x86-64, or ppc64le machine code |
20:07 |
MTDiscord |
<_devsh_> its the perfect solution |
20:07 |
erle |
note that the linked blog post shows that you get faster performance out of old hardware *without* llvmpipe (like ~30% or so?) |
20:08 |
MTDiscord |
<_devsh_> 30% gain is laughing stock |
20:08 |
cheapie |
It depends on the hardware, on a box I had with VIA Chrome9 HD graphics llvmpipe in Linux was about the same speed as actually using the GPU in Windows. |
20:09 |
cheapie |
(not RT of course, I don't want to wait all week for one frame) |
20:09 |
MTDiscord |
<_devsh_> like if your GPU can only beat your CPU by 30% its a nighrmare |
20:09 |
MTDiscord |
<_devsh_> you can easily dig a cpu that can do 16x more cores than your CPU thats bundled with the iGPU |
20:10 |
cheapie |
The CPU was a VIA Eden X2 U4200, so it's not like I even had llvmpipe running on something fast :P |
20:10 |
MTDiscord |
<_devsh_> I think there's still room for LLVM-pipe to grow |
20:11 |
MTDiscord |
<_devsh_> not 100% sure if the thing always vectorizes with AVX if it can |
20:11 |
MTDiscord |
<_devsh_> it shoudl |
20:12 |
MTDiscord |
<_devsh_> so that subgroupSize=8 in your shaders |
20:12 |
MTDiscord |
<_devsh_> llvm-pipe should run quite fast on compute shaders |
20:12 |
MTDiscord |
<_devsh_> they kinda map nicely to CPU cores |
20:13 |
MTDiscord |
<_devsh_> workgroup should be just large enough to use up all your AVX/SSE registers |
20:13 |
cheapie |
"use a GPU to emulate a GPU" sounds about as silly as that one time I did GPU compute on software rendering |
20:13 |
|
SFENCE joined #minetest |
20:14 |
cheapie |
(unless you mean "should handle compute shaders well" and not actually "run quite fast on compute shaders") |
20:14 |
MTDiscord |
<_devsh_> I mean run quite fast |
20:14 |
MTDiscord |
<_devsh_> AVX can do 8 invocations at once, and AVX512 can do 16 |
20:14 |
MTDiscord |
<_devsh_> there are about 8 or 16 avx registers |
20:15 |
MTDiscord |
<_devsh_> so you can comfortably emulate a 64-256 wide workgroup |
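_devsh_'s register-budget claim, as back-of-envelope arithmetic (a sketch of the mapping, not an emulator):

```python
# With 8-wide AVX (or 16-wide AVX-512) each vector register holds one
# "subgroup" of shader invocations, so a workgroup of N invocations maps
# to N / lane_count register batches.

def register_batches(workgroup_size, simd_lanes):
    assert workgroup_size % simd_lanes == 0
    return workgroup_size // simd_lanes

# A 64-wide workgroup on AVX needs 8 batches; a 256-wide workgroup on
# AVX-512 needs 16 -- within the ~16 architectural vector registers.
```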
20:15 |
cheapie |
You're still confusing me here, you're saying it would run quite fast on computer shaders (which would make no sense to do) but then still listing CPU stuff, but then denying that you meant the other way around? |
20:15 |
cheapie |
compute* |
20:15 |
MTDiscord |
<_devsh_> I mean using CPU to run a compute shader |
20:15 |
MTDiscord |
<_devsh_> 1 core = 1 execution unit |
20:15 |
cheapie |
OK, that makes more sense |
20:16 |
MTDiscord |
<_devsh_> Haswell (first CPU with AVX) has about 32kb data cache per core |
20:16 |
cheapie |
llvmpipe does already support Vulkan compute shader stuff, it works fine FWIW |
20:16 |
MTDiscord |
<_devsh_> GL/VK only mandate a minimum of 16kb shared memory |
20:16 |
MTDiscord |
<jordan4ibanez> How the heck do you do GPU compute in software rendering |
20:16 |
[ |
_devsh_: I am [, not [_. [_ doesn't ping me |
20:16 |
MTDiscord |
<_devsh_> just compute shaders, not GPU |
20:16 |
[ |
also haswell isn't the first CPU with AVX, haswell is the first with AVX2 |
20:17 |
[ |
AVX is sandybridge I think |
20:17 |
cheapie |
jordan4ibanez: Use something that uses Vulkan for compute, and then force it to use llvmpipe as the "device" - llvmpipe supports this |
20:18 |
MTDiscord |
<jordan4ibanez> You're using the cpu to do cpu things but in an emulator :thonking: |
20:18 |
cheapie |
Yes, it's usually somewhat slower than just using the CPU directly, but not by as much as it sounds. |
20:18 |
MTDiscord |
<_devsh_> anyway any CPU with >16kb L1 cache per core with AVX can nicely spoof a GPU for the purposes of compute |
20:18 |
MTDiscord |
<jordan4ibanez> That's just ridiculous. Nice work |
20:19 |
MTDiscord |
<_devsh_> there's no emulation, the GLSL gets compiled to raw x86 |
20:19 |
MTDiscord |
<_devsh_> shaders/pipelines kinda turn into DLLs |
20:19 |
MTDiscord |
<jordan4ibanez> Hmm, so you can use glsl as the strangest compiled language to exist |
20:19 |
MTDiscord |
<_devsh_> I actually do a similar thing in Nabla, I can run portions of HLSL on the CPU |
20:20 |
MTDiscord |
<jordan4ibanez> Hmm, I see. This is all very fascinating |
20:21 |
MTDiscord |
<_devsh_> 2 years ago we started remaking std:: for HLSL https://www.youtube.com/watch?v=JGiKTy_Csv8 |
20:22 |
MTDiscord |
<_devsh_> then we went "fuck it" and defined a subset of HLSL that also compiles as C++ https://www.youtube.com/watch?v=JCJ35dlZJb4&t=16s |
20:22 |
MTDiscord |
<jordan4ibanez> It's only a matter of time before the Linux kernel just runs on a GPU with power wires connected to it |
20:22 |
MTDiscord |
<_devsh_> has been done already |
20:22 |
MTDiscord |
<jordan4ibanez> The horror |
20:23 |
MTDiscord |
<jordan4ibanez> I'm just kidding, that's pretty cool |
20:23 |
MTDiscord |
<_devsh_> https://blog.pimaker.at/texts/rvc1/ |
20:23 |
MTDiscord |
<_devsh_> pixel shader that emulates a RISC-V CPU |
20:24 |
MTDiscord |
<jordan4ibanez> I wonder why we just haven't using a GPU as the entire computer |
20:24 |
MTDiscord |
<jordan4ibanez> Haven't started* |
20:24 |
MTDiscord |
<_devsh_> because GPUs give no forward progress guarantees |
20:25 |
MTDiscord |
<_devsh_> and absolutely shitty perf |
20:29 |
MTDiscord |
<_devsh_> there's also this https://www.phoronix.com/news/DOOM-ROCm-LLVM-Port |
20:32 |
MTDiscord |
<_devsh_> the TL;DR is that GPUs rely on cooperative scheduling (like DOS) and its easy to fuck up the stability, they're not good for context switching between lots of different kernels that all require a time slice |
20:33 |
MTDiscord |
<_devsh_> they have horrible memory access latencies (you don't notice them because they oversubscribe their cores, Hyperthreading on CPU is 2 threads on one core, GPUs often have 8) |
20:34 |
MTDiscord |
<_devsh_> and there's really shallow pipelining, they don't have branch predictors and shit |
20:34 |
MTDiscord |
<_devsh_> also the caches are muuuch muuuuch smaller |
20:34 |
MTDiscord |
<_devsh_> for a core |
20:34 |
MTDiscord |
<jordan4ibanez> Oh yeah, that is true. Good at the race, but fall on the walk |
20:35 |
|
jluc_ joined #minetest |
20:36 |
MTDiscord |
<_devsh_> like a CPU will have stupid amounts of cache per core, whereas GPU has more cache but also more threads to share it amongst |
20:36 |
MTDiscord |
<_devsh_> the 4090 is a monster though |
20:36 |
MTDiscord |
<_devsh_> nearly 100 MB of cache |
20:37 |
MTDiscord |
<_devsh_> 74 to be exact |
20:39 |
MTDiscord |
<jordan4ibanez> Hyper optimized for the task of parallel computation of matrix math. Shallow. Push those pixels, depth buffer, what goes in front of what, etc etc. But then you try to run synchronization task scheduler and it just farts |
20:40 |
MTDiscord |
<_devsh_> 4090 has 128 AVX512-like cores, thats the same as an AMD EPYC |
20:40 |
MTDiscord |
<jordan4ibanez> 100 mb of cache on a GPU is pretty neat, I would be able to play timberborn at 5 fps with a colony of 2k |
20:40 |
MTDiscord |
<jordan4ibanez> I wish I was joking |
20:41 |
MTDiscord |
<jordan4ibanez> Hmm, so you pair the 4090 with a thread ripper and you get the ultimate gaming machine |
20:42 |
MTDiscord |
<_devsh_> EPYC 9754 has 256 MB of cache 😄 |
20:42 |
MTDiscord |
<jordan4ibanez> Remember, that's per chiplet |
20:42 |
MTDiscord |
<_devsh_> vastly inferior bandwidth to RAM though, 460.8 GB/s |
20:42 |
MTDiscord |
<jordan4ibanez> AMD does their cache info very weirdly |
20:43 |
MTDiscord |
<_devsh_> 9755 tops out near 600 GB/s |
20:43 |
MTDiscord |
<_devsh_> must say a 4090 RTX is great value for money |
20:43 |
MTDiscord |
<_devsh_> 5-6x cheaper than a comparable CPU |
20:44 |
MTDiscord |
<jordan4ibanez> I just use my smol efficient 9700x with my 6800xt tuned way down |
20:45 |
MTDiscord |
<jordan4ibanez> I would consider an Nvidia if their drivers on Linux worked correctly though |
20:46 |
|
SFENCE joined #minetest |
20:46 |
MTDiscord |
<jordan4ibanez> I hop from freebsd to Linux to windows so I kind of need something that allows that |
20:47 |
MTDiscord |
<_devsh_> why the fuck do the consumer GPUs with less cores run crysis better? |
20:47 |
MTDiscord |
<_devsh_> https://cdn.discordapp.com/attachments/749727888659447960/1307084517068247090/121086.png?ex=673904d1&is=6737b351&hm=7143b109fa9ca9ea654a21161ecb7276a5a8b742653f08b65df72d1323725302& |
20:47 |
MTDiscord |
<jordan4ibanez> Probably was tuned on the cpu of the day |
20:48 |
MTDiscord |
<_devsh_> llvmpipe probably doesn't pin affinity to core, so you get a lot of false sharing |
20:48 |
MTDiscord |
<_devsh_> IMHO the depth buffering is probably implemented as atomicMax on 32bit uints |
20:48 |
MTDiscord |
<_devsh_> I'm kinda guessing cause I don't want to go into the rabbithole of reading that source |
20:49 |
MTDiscord |
<_devsh_> the numbers are consistent with that though |
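_devsh_'s guess can be sketched serially: if depth is encoded so that nearer fragments map to larger 32-bit uints, the depth test reduces to a max(), the serial analogue of an atomicMax. This is an illustration of the idea, not llvmpipe's actual implementation:

```python
# Depth test as a max() on uint-encoded depth, with nearer = larger.

def encode_depth(d):
    """Map d in [0, 1] (nearer = smaller) to a uint where nearer = larger."""
    return int((1.0 - d) * 0xFFFFFFFF)

def depth_test(buffer, pixel, d):
    """Keep the nearest fragment; returns True if this fragment won."""
    z = encode_depth(d)
    old = buffer.get(pixel, 0)
    buffer[pixel] = max(old, z)  # atomicMax in the parallel version
    return z > old
```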
20:49 |
MTDiscord |
<_devsh_> they should probably implement a tiler GPU |
20:50 |
MTDiscord |
<_devsh_> 32kb of L1 data means you can probably split it halfsies vertex cache and framebuffer |
20:51 |
MTDiscord |
<_devsh_> that lets you do 64x32 tiles |
20:54 |
MTDiscord |
<_devsh_> the upside is that you don't need to use atomics to rasterize in parallel |
20:55 |
MTDiscord |
<jordan4ibanez> Also on that thing again, the nt kernel might choke when using high thread count cpus |
20:55 |
MTDiscord |
<jordan4ibanez> You're gonna turn the cpu into a huge GPU lol |
20:56 |
MTDiscord |
<_devsh_> the AMD gpus have chiplets |
20:56 |
MTDiscord |
<_devsh_> they have horrible latencies cross-chiplet |
20:57 |
MTDiscord |
<_devsh_> and the worst thing is that you can't pin threads to cores without admin privilege in most OSes |
20:57 |
MTDiscord |
<_devsh_> on PS4 and PS5 it's required |
21:02 |
MTDiscord |
<jordan4ibanez> Oh the cpu |
21:03 |
MTDiscord |
<_devsh_> sorry meant CPU |
21:07 |
MTDiscord |
<jordan4ibanez> I don't really notice any kind of performance impact from the chiplet design. But I'll tell you, when I first got this thing on sale literally a week after it came out somehow. The kernel I was using was definitely not designed for it and the performance was horrendous. I just installed the Liquorix kernel and it mostly fixed it on Mint. But then I went to Arch and it's pretty dang good |
21:08 |
MTDiscord |
<jordan4ibanez> Believe it or not I literally had no idea this thing just came out at that time, so when I checked the release date I was like "day zero tester", aw shit no |
21:09 |
MTDiscord |
<jordan4ibanez> This thing can handle me constantly compiling so I lub it |
21:12 |
cheapie |
I'm on a 5950X here, this thing is pretty nice too even if this is an older platform by now. |
21:12 |
|
Talkless joined #minetest |
21:12 |
MTDiscord |
<jordan4ibanez> 7nm zen 3 will hold up for at least another decade no questions asked |
21:13 |
MTDiscord |
<jordan4ibanez> Never mind the fact you literally have the top of the 5000 series lol |
21:13 |
cheapie |
I was on a 2700X before this, but that wasn't really enough to get any reasonable performance out of Luanti, and this thing was relatively cheap by the time I got it. |
21:14 |
MTDiscord |
<jordan4ibanez> There you go. A good deal is a good deal |
21:15 |
erle |
cheapie what are you doing that makes a 2700X a bad performance option? |
21:15 |
MTDiscord |
<_devsh_> the key to getting good threaded perf is to have the threads not talk to each other |
21:16 |
cheapie |
erle: Luanti is insanely CPU-heavy when you're anywhere near any buildings, with the 2700X I could only pull off a view range of 50-100 depending on how built-up the area was. |
21:16 |
cheapie |
With the 5950X I can do 100-200 |
21:17 |
MTDiscord |
<jordan4ibanez> https://tenor.com/view/u-unlimited-power-star-wars-lightning-gif-14542323 |
21:17 |
erle |
cheapie uh, not my experience. were these buildings plastered with texmod-abusing stuff though? |
21:17 |
MTDiscord |
<jordan4ibanez> You mean, using the game engine? |
21:17 |
MTDiscord |
<_devsh_> most of what I do is compiling, hence I tend to get the highest MHz RAM my mobo and CPU can afford + Pro Evo 990 |
21:17 |
cheapie |
Not a ton, but it seems to be glass (which commercial buildings have a lot of) that kills performance more than texmods anyway. |
21:18 |
erle |
right, multiple giant glass walls are performance killers for me too. |
21:18 |
MTDiscord |
<_devsh_> say hello to lack of early-Z |
21:18 |
cheapie |
Texmods seem to cause stuttering if used a lot, glass is just uniformly slow |
21:18 |
erle |
but most lag on public non-anarchy servers i got from servers that use texmods for signs |
21:18 |
erle |
and then have A LOT OF THOSE SIGNS at spawn |
21:18 |
ireallyhateirc |
in Exile we get terrible lags when semi-transparent ice is on the sea |
21:18 |
MTDiscord |
<jordan4ibanez> This is what I love to do :) They are their process, they can pass messages into queues, it's very neat and tidy |
21:18 |
erle |
it was one of my motivations for making unicode_text actually |
21:18 |
MTDiscord |
<_devsh_> texture upload-update is rearing its head, irrlicht sucks ass on texture uploads |
21:19 |
cheapie |
Moving digiscreen from texmods to [png helped a lot with that mod, but that's not really the most problematic one. |
21:19 |
MTDiscord |
<_devsh_> I did give into some C++20 masturbation once, and made my own ringbuffer + async_future |
21:19 |
ireallyhateirc |
someone hit irrlicht with a shovel |
21:19 |
MTDiscord |
<_devsh_> you know the irrlicht IFile? |
21:19 |
MTDiscord |
<jordan4ibanez> That's okay, at least you had fun doing it |
21:20 |
MTDiscord |
<jordan4ibanez> Well, I hope you had fun doing it lol |
21:20 |
cheapie |
I was able to run the things quite fast with [png too, especially if I disabled Luacontroller overheating: https://cheapiesystems.com/media/2024-09-18%2020-32-25.webm |
21:20 |
MTDiscord |
<jordan4ibanez> That dreaded IFile I've passed it many times in the folder |
21:20 |
MTDiscord |
<_devsh_> in Nabla all file I/O happens on a dedicated thread |
21:21 |
MTDiscord |
<_devsh_> so IFile::read and write return an async future |
21:21 |
MTDiscord |
<_devsh_> thats also cancellable |
21:21 |
cheapie |
That donut thing is "hardware" accelerated with a digistuff GPU, which also improves performance a lot more than it sounds like it would. |
21:22 |
|
jluc joined #minetest |
21:23 |
MTDiscord |
<jordan4ibanez> I can't even like route my way through the existing implementation from the layers of inheritance, I can't imagine what IFile does in its current state |
21:23 |
MTDiscord |
<jordan4ibanez> We gonna duct tape on nabla IFile to minetest lol |
21:24 |
MTDiscord |
<_devsh_> you'd need my ISystem |
21:25 |
MTDiscord |
<jordan4ibanez> https://tenor.com/view/ifht-carbon-dealer-mtb-mountainbike-gif-20969945 |
21:25 |
cheapie |
...one of these days I really need to just write that "demo" for the Luacontroller+digistuff GPU+digiscreen that I keep thinking about, maybe with some music on a digistuff noteblock too if I ever develop any musical skills :P |
21:25 |
cheapie |
I already have quite a few ideas of ways I could do various old-school demo effects in that "GPU" |
21:25 |
MTDiscord |
<jordan4ibanez> There's a digistuff GPU now |
21:26 |
MTDiscord |
<_devsh_> https://github.com/Devsh-Graphics-Programming/Nabla/blob/72f847d0aa649f67d453f9c63cf93185b6c91d9e/include/nbl/system/ISystem.h#L26 |
21:26 |
MTDiscord |
<_devsh_> this took me very near to the end of my comfort zone https://github.com/Devsh-Graphics-Programming/Nabla/blob/72f847d0aa649f67d453f9c63cf93185b6c91d9e/include/nbl/system/IAsyncQueueDispatcher.h |
21:26 |
cheapie |
jordan4ibanez: Only for the last ~4 years |
21:26 |
cheapie |
It doesn't do a ton, it's just a basic 2D accelerator. |
21:27 |
cheapie |
https://github.com/mt-mods/digistuff/blob/master/docs/gpu.txt |
21:27 |
MTDiscord |
<jordan4ibanez> I have spotted a fellow better comments extension user |
21:28 |
cheapie |
Or https://cheapiesystems.com/media/DGL5440.pdf if you prefer the "datasheet-style" page |
21:28 |
MTDiscord |
<jordan4ibanez> How the heck does it accelerate when it's running in lua |
21:28 |
MTDiscord |
<jordan4ibanez> I am, confusion |
21:28 |
MTDiscord |
<jordan4ibanez> But I think I get the gist |
21:29 |
erle |
cheapie check this donut out https://git.minetest.land/erle/tga_encoder/src/branch/master/donut.lua |
21:29 |
cheapie |
Two ways - Luacontrollers disable JIT when running user code (so they can count instructions) while this doesn't have to, and sending tables over digilines is expensive from a Luacontroller |
21:29 |
cheapie |
erle: I did some basic 3D experimentation a while back, with a Luacontroller doing the math and the "GPU" drawing the lines: https://cheapiesystems.com/media/luac-3dcube.webm |
21:30 |
MTDiscord |
<jordan4ibanez> Now, don't tempt me too much before I end up writing a graphical os for the lua controller lol |
21:30 |
MTDiscord |
<jordan4ibanez> How the heck did they get it to not leak memory, is it a bunch of entities? |
21:30 |
cheapie |
With the digiscreen changes (they came after that) and overheating disabled, I can get around 25 FPS out of it. |
21:31 |
cheapie |
The GPU doesn't display anything at all, you connect it to rgblightstone or digiscreen to use as a display. |
21:31 |
erle |
i bet it leaks some amount of memory. concerns about leaking memory were why i made xmaps (or mcl_maps) never realtime. |
21:32 |
cheapie |
rgblightstone (not shown here) uses either param2 coloring in 256-color mode or one entity per pixel in true-color mode. digiscreen (the one in the videos) uses one entity per 16x16 panel. |
21:32 |
erle |
oh neat |
21:32 |
MTDiscord |
<jordan4ibanez> Very smart |
21:32 |
MTDiscord |
<jordan4ibanez> I'd probably crash the game abusing that lol |
21:32 |
cheapie |
But the output from it is just digilines in a rather simple format, you can make other arbitrary displays for it too. |
21:33 |
MTDiscord |
<jordan4ibanez> I will make boom 2 for the digiline thing |
21:33 |
MTDiscord |
<jordan4ibanez> Well, can you link displays together? |
21:33 |
cheapie |
Yes, sort of |
21:33 |
erle |
jordan4ibanez i think luatic or someone else once did bad apple as entity fuckery? |
21:33 |
MTDiscord |
<jordan4ibanez> Hmmmm |
21:34 |
MTDiscord |
<jordan4ibanez> Oh yeah, probably luatic lol |
21:34 |
cheapie |
The GPU has a limit of 64x64 pixels per buffer (but you get more than one). rgblightstone can make panels as big as you want, while digiscreen is just a 16x16 panel but they do tile seamlessly. |
21:34 |
cheapie |
The GPU has a "sendregion" command to make the latter less painful. |
21:36 |
cheapie |
Here's an example of rgblightstone (true-color version) in use, in this case it's logically arranged as 20x1. The LuaC is generating HSV color values and having the GPU turn them into RGB: https://cheapiesystems.com/media/mtwraith.webm |
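The conversion the GPU is doing for the LuaC here can be approximated with Python's stdlib; the actual digistuff command name and value ranges aren't shown in the chat, so this only models the per-pixel math:

```python
import colorsys

def hsv_to_rgb24(h, s, v):
    """h, s, v in [0, 1] -> (r, g, b) bytes, like one true-color pixel."""
    r, g, b = colorsys.hsv_to_rgb(h, s, v)
    return round(r * 255), round(g * 255), round(b * 255)

# Sweeping h from 0 to 1 with s = v = 1 gives the rainbow cycle seen
# on the Wraith-style display in the video.
assert hsv_to_rgb24(0.0, 1.0, 1.0) == (255, 0, 0)      # pure red
assert hsv_to_rgb24(1 / 3, 1.0, 1.0) == (0, 255, 0)    # pure green
```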
21:36 |
MTDiscord |
<jordan4ibanez> Did you make a multi core amd GPU in minetest with a fan and heat sinks lol |
21:36 |
MTDiscord |
<jordan4ibanez> Cpu* |
21:36 |
cheapie |
Just the heatsink, it's meant to look like a Wraith Prism :P |
21:37 |
MTDiscord |
<jordan4ibanez> That it does, that it does |
21:37 |
MTDiscord |
<warr1024> > rgblightstone Nice, blightstone that comes in rg |
21:37 |
cheapie |
I did this at one point too, the GPU makes this sort of thing remarkably simple: https://cheapiesystems.com/media/2024-09-01%2015-46-26.webm |
21:37 |
erle |
regarding color nodes, i once made this thing that generates 16 nodes with 4096 colors https://git.minetest.land/erle/tga_encoder/src/branch/master/colormap_generator.lua |
21:38 |
MTDiscord |
<jordan4ibanez> Dude |
21:38 |
erle |
any comments about perceptions and gamma please leave them at the door |
21:38 |
cheapie |
IIRC it uses buffer 0 as the framebuffer and stores the logo in black+white in buffer 1. To draw it, it fills buffer 2 with a solid color, ANDs buffer 1 with it to get the logo in that color, then copies that back to buffer 0 at a specified offset. |
21:39 |
cheapie |
AND is one of the blitter modes the GPU natively supports |
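The AND trick works because 0xFFFFFF AND color == color while 0x000000 AND color == 0, so a black-and-white mask ANDed with a solid fill yields the logo in that color. A toy model with flat lists of 24-bit ints standing in for GPU buffers (not the real digistuff API):

```python
def blit_and(dst, src):
    """dst[i] &= src[i] for every pixel, like the GPU's AND blitter mode."""
    for i in range(len(dst)):
        dst[i] &= src[i]

mask = [0xFFFFFF, 0x000000, 0xFFFFFF]   # logo mask: on, off, on
buf2 = [0x4080FF] * 3                   # buffer filled with one solid color
blit_and(buf2, mask)
assert buf2 == [0x4080FF, 0x000000, 0x4080FF]   # logo now in that color
```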
21:40 |
MTDiscord |
<jordan4ibanez> Oh yeah, that does make sense, that's why it's so fast |
21:41 |
cheapie |
The donut one (which was an ad screen for the side of a donut shop someone else built in-game) uses the "overlay" blitter mode a lot, which is just a straight copy except that one specific color in the source is taken to mean "don't copy this pixel, leave the destination unchanged" |
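A toy version of that overlay mode, again with flat lists of 24-bit ints as stand-in buffers (the real command format lives in gpu.txt and isn't reproduced here):

```python
def blit_overlay(dst, src, key):
    """Copy src over dst, treating `key`-colored source pixels as transparent."""
    for i, pixel in enumerate(src):
        if pixel != key:
            dst[i] = pixel

dst = [0x111111, 0x222222, 0x333333]
src = [0xFF00FF, 0xAABBCC, 0xFF00FF]    # magenta chosen as the skip color
blit_overlay(dst, src, 0xFF00FF)
assert dst == [0x111111, 0xAABBCC, 0x333333]   # keyed pixels left unchanged
```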
21:43 |
cheapie |
There are enough modes for this thing that I was able to implement Conway's Game of Life in it, with the Luacontroller just sending instructions and having no knowledge of the current state, and the blitter doing the math instead. |
21:46 |
MTDiscord |
<_devsh_> @Shlaplombitomous Cranloutmeister do you know gettext ? |
21:46 |
cheapie |
https://gist.github.com/cheapie/0235bdbb46e401c2eade1900b89d7bcf - line 70 uses a Mooncontroller extension but I think it'll run on a plain LuaC too |
21:51 |
|
Verticen joined #minetest |
21:52 |
cheapie |
It's been a while, but IIRC the way this works is that each pixel can be either ffffff for alive or 000000 for dead, then lines 16-40 build a bunch of copies of this, but at a much lower brightness (only 111111 for alive), shift them in all 8 directions, and add them together, yielding an image in buffer 2 where the brightness of each pixel reflects its neighbor count (111111 = 1 neighbor, 222222 = 2 neighbors, etc.) |
21:55 |
cheapie |
Then lines 45-57 mask out a bunch of colors (by copying over top of black in "overlay" mode and telling it to skip pixels of the color to mask out), 58-64 turn this into two images, one in buffer 4 where all cells with 2 living neighbors are ffffff and one in buffer 5 where all cells with 3 living neighbors are ffffff. |
21:55 |
cheapie |
Then the logic operations on 65-66 turn these (and the current state) into the new state, 67 copies that back to the framebuffer, and 68 sends it to the screen. |
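The pipeline cheapie describes can be checked in plain Python: sum eight shifted copies of the state to get per-pixel neighbor counts, then combine the "exactly 2" and "exactly 3" masks with the current state. Cells here are 0/1 instead of 000000/111111 pixels, and this models the math rather than the blitter commands:

```python
def life_step(grid):
    """One Game of Life generation, structured like the blitter version."""
    h, w = len(grid), len(grid[0])
    # Sum of the eight shifted copies = neighbor count per cell
    # (the 111111-brightness adds in the GPU implementation).
    counts = [[0] * w for _ in range(h)]
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == dx == 0:
                continue
            for y in range(h):
                for x in range(w):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        counts[y][x] += grid[ny][nx]
    # New state = (count == 3) OR (alive AND count == 2),
    # i.e. buffer 5 OR (state AND buffer 4) in the GPU version.
    return [[1 if counts[y][x] == 3 or (grid[y][x] and counts[y][x] == 2)
             else 0 for x in range(w)] for y in range(h)]

blinker = [[0, 0, 0],
           [1, 1, 1],
           [0, 0, 0]]
assert life_step(blinker) == [[0, 1, 0],
                              [0, 1, 0],
                              [0, 1, 0]]   # the blinker rotates as expected
```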
22:03 |
|
Desour joined #minetest |
22:37 |
|
CRISPR joined #minetest |
22:53 |
|
MiniontobyPI joined #minetest |
23:34 |
|
panwolfram joined #minetest |
23:46 |
|
anemofilia joined #minetest |
23:54 |
|
MiniontobyPI joined #minetest |