| blast007 | wow, there's a lot of those :) | 01:14 |
|---|---|---|
| | *** c0pyn1nj4 <c0pyn1nj4!~c0pyn1nj4@user/c0pyn1nj4> has quit IRC (Quit: Leaving) | 01:21 |
| tupone | I have a lot of fixes for those; I'll commit later. I've skipped the plugins for now. | 06:36 |
| | *** Sgeo <Sgeo!~Sgeo@user/sgeo> has quit IRC (Read error: Connection reset by peer) | 07:28 |
| | *** Ribby is now away: Auto-away | 09:11 |
| blast007 | I was also trying a local AI model with opencode to see if it could repeatedly build and fix the warnings. It requires a bit of directing, but it's fixed something like a third of them already. | 09:50 |
| | *** Cobra_Fast is now away: offline | 10:00 |
| | *** Cobra_Fast is back | 10:00 |
| | *** FastLizard4 is back | 10:27 |
| tupone | I'm done; I just want to test it more thoroughly. I know it's only a compiler thing, so if it compiles, it's good. I only left out the plugins, if you can work on them. | 10:29 |
| tupone | I am also adding `final` where I can, so at the leaves of the hierarchy the compiler can use direct calls where possible. | 10:33 |
| tupone | What do you need to run opencode? Does it work locally? Does it need CUDA? | 10:42 |
| blast007 | I'm running qwen3-coder:30b on ollama through rocm. I have an AMD Ryzen AI Max+ 395, which has 128GB of unified memory. So I can allocate around 108GB of RAM to the GPU part of the APU. | 10:57 |
| blast007 | (Then I have opencode on a virtual machine and just have it talk over the network to that ollama instance, because I don't trust opencode or any agent running on my actual computer) | 10:57 |
| blast007 | that model takes around 43GB of VRAM, at least on my setup. | 11:01 |
| blast007 | I'm not actually sure it's saving any time though. It's adjusting whitespace when it isn't necessary, so I'd have to clean that up. And it tends to break stuff a lot, or get stuck. Or it will run a "Final confirmation of successful compilation" and just run a shell script that echoes "Build completed successfully" and be all happy with itself. | 11:06 |
| blast007 | I'm not a huge fan of opencode either. It comes bundled with some online models, but I only wanted to use my local ollama instance. Had to specifically add a "disabled_providers" configuration key that disabled those, or else it would default to one of them. | 11:10 |
| | *** Ribby <Ribby!uid380536@id-380536.helmsley.irccloud.com> has quit IRC (Quit: Connection closed for inactivity) | 11:14 |
| tupone | I'm searching for something cheap to test PyTorch (I maintain PyTorch on Gentoo) and run models. I lost my NVIDIA card (it was retired), so I'm on CPU only :( | 11:19 |
| BZNotify | 2.4 @ bzflag: atupone pushed 1 commit (https://github.com/BZFlag-Dev/bzflag/compare/cc5c4f279be2...090f82d2d7ab): | 11:44 |
| BZNotify | 2.4 @ bzflag: atupone 090f82: Add override and final keyword (https://github.com/BZFlag-Dev/bzflag/commit/090f82d2d7abd111ff1a4a77df077e8952fb6062) | 11:44 |
| BZNotify | bzflag: atupone synchronized pull request #334 "Use glm" (https://github.com/BZFlag-Dev/bzflag/pull/334) | 11:47 |
| BZNotify | bzflag: atupone synchronized pull request #306 "Add plugin system in python" (https://github.com/BZFlag-Dev/bzflag/pull/306) | 11:56 |
| BZNotify | bzflag: atupone synchronized pull request #373 "Use GLAD2 instead of GLEW" (https://github.com/BZFlag-Dev/bzflag/pull/373) | 11:56 |
| BZNotify | bzflag: atupone closed pull request #373 "Use GLAD2 instead of GLEW" (https://github.com/BZFlag-Dev/bzflag/pull/373) | 11:58 |
| BZNotify | bzflag: atupone commented on pull request #373 (https://github.com/BZFlag-Dev/bzflag/pull/373#issuecomment-4154489381): No need to go glad now | 11:58 |
| | *** FastLizard4 is now away: AWAY from keyboard | 12:23 |
| | *** FastLizard4 is now away: GONE - Screen Detached and Disconnected from IRC (I'm probably asleep, at work, or doing something in real life) | 12:59 |
| blast007 | tupone: did something change with GLEW that migrating isn't useful now? I know that GLEW upstream has been talking about some changes/improvements around the GLX vs EGL stuff, but I don't know if anything has actually happened with that yet. | 14:44 |
| | *** Cobra_Fast is now away: offline | 16:55 |
| | *** Cobra_Fast is back | 16:55 |
| | *** Lantizia_ <Lantizia_!~Lantizia@user/lantizia> has joined #bzflag | 19:22 |
| | *** Tobbi_ <Tobbi_!~Tobbi@SuperTux/Tobbi> has joined #bzflag | 19:26 |
| | *** tupone_ <tupone_!~tupone@gentoo/developer/tupone> has joined #bzflag | 19:26 |
| | *** Tobbi <Tobbi!~Tobbi@SuperTux/Tobbi> has quit IRC (*.net *.split) | 19:31 |
| | *** Lantizia <Lantizia!~Lantizia@user/lantizia> has quit IRC (*.net *.split) | 19:31 |
| | *** tupone <tupone!~tupone@gentoo/developer/tupone> has quit IRC (*.net *.split) | 19:31 |
| | *** Ribby <Ribby!uid380536@id-380536.helmsley.irccloud.com> has joined #bzflag | 19:58 |
| tupone_ | blast007: I don't know. I remember a discussion with you that said glad isn't necessary now (well, a year ago) | 20:44 |
| | *** Tobbi_ is now known as Tobbi | 21:05 |
| | *** bozo16 <bozo16!~bozo16@2804:378:9162:e700:3144:94e8:65ad:3829> has quit IRC (Quit: Leaving) | 21:25 |
| | *** bozo16 <bozo16!~bozo16@2804:378:9162:e700:8a0b:5866:4524:6599> has joined #bzflag | 21:40 |
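For context on the workaround blast007 mentions (11:10): opencode is configured through a JSON file, and disabling the bundled online providers so only a local ollama instance is used would look roughly like the sketch below. Only the `disabled_providers` key name comes from the log; the provider names, the ollama entry, and its layout are illustrative assumptions (11434 is ollama's default API port, and the remote host reflects blast007's VM-to-ollama setup):

```json
{
  "disabled_providers": ["anthropic", "openai"],
  "provider": {
    "ollama": {
      "options": { "baseURL": "http://ollama-box:11434/v1" },
      "models": { "qwen3-coder:30b": {} }
    }
  }
}
```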
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!