That’s… actually a controversial opinion in the Java community, but I stand by it. The text editor is good enough - IntelliJ has the best editor, but Eclipse does well enough, especially with a couple of tweaks to the auto-complete settings (see below) and the non-blocking completion added a while back. But it’s everything surrounding the editor that sells it for me. Probably the best thing is just how fast everything is - Eclipse is crammed with features without detriment to performance; it even runs really well on shit craptops where IntelliJ is basically unusable. Eclipse builds projects instantly and keeps them automatically built, so your code is always ready to run. The Maven integration is excellent - it rarely actually runs Maven unless you explicitly ask it to, as it manages that stuff by itself, which helps contribute to the speed. The Git tools are pretty good - good enough that an entire standalone IDE based on them exists. And there’s other minor stuff, like unpinned windows actually working, and automatic hot reloading working out of the box - no need to dive into internal settings. I like Eclipse. It works for me.
It’s not perfect: the dark mode kind of sucks (especially for anything that’s not the code editor - dialogs can look really ugly, and some plugins aren’t fully compatible with it), the macOS interface is a little bit jank, and the Linux interface uses GTK+, which I could probably write an entire rant .plan about on its own. (KDE Plasma’s Breeze theme makes it more than bearable, though.) But despite any of its issues I find it overall a nicer experience than the alternatives.
While I’m talking about it, my Eclipse setup is the Java EE IDE with the following plugins:
Plus auto-complete settings that I find essential (Java > Editor > Content Assist):
.ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz
Half of this post might be better placed on my /uses page, actually…
Yeah, I don’t use this blog much. My usual excuses are that my life isn’t that interesting, and I procrastinate a lot so I rarely have anything to actually write about. Which is funny, because I do tend to like writing things? It’s just that blogging is a lot of effort. I have to create a file! And write front matter! And if I want to add images that’s a sort-of annoying song and dance. Then I have to commit it! After that it’s all automated - after pushing, GitLab CI will build the site and deploy it here. But there’s still a lot of work leading up to that.
I would use Twitter or Mastodon, but the text limit and poor markup is discouraging. They’re also social networks, and I mostly just want to scream into the void without it screaming back, if that makes sense. (Makes you wonder why I set up comments on this blog then…) And Twitter sucks, for obvious reasons.
I am considering setting up IcculusFinger - having a single file I can just update through the day that gets archived automatically appeals to me. No titles, no responses, no categorisation, just thoughts. It’s also very retro - fingering dates back to 1971. Jesus. (And, of course, the innuendo potential is infinite.)
But honestly it might just be best to do that but on this blog. I kind of get it into my head to use this for bigger write-ups, which rarely happen. In reality it’s fine to use this for whatever thoughts come to me. I sort of considered moving back to WordPress - if only because creating posts is slightly easier - but it’s not really worth it.
The only thing that’s happened to this site recently is that I switched it to the auto skin for Minima - basically, it has a dark mode if your system is set to that. You’re welcome. It’s ugly, but dark modes are.
Anyway… here are some things that have happened since the last time I posted:
So - this is something I’d been dealing with since I was working on the Ludum Dare version of Catacombs 51. For some reason, the fog shader I wrote was making the wall tiles darker, and the floor tiles even darker.
It’s not immediately noticeable in the above image, but you might be able to tell that enemies are brighter than the floor, for example.
Clueless as to the actual cause, I just worked around this by making sprites darker to blend in. It works better with 51’s lighting, but compare this damaged wall sprite versus the wall tile it’s based on. The sprite is darkened to compensate for the fact that the fog was making it darker.
Eventually I was tweaking the new lighting system further and ran into this issue again. This was months ago now, so I’ve forgotten the actual problem-solving process. But I eventually threw the game into RenderDoc and saw this:
The layers had DEPTH! They had a Z axis! And my shader was taking that into account when calculating the distance for fog all along.
The fix was simple… if aggressive:
From ecf169888bd4facdcd35d9f79a3c9a52b155baac Mon Sep 17 00:00:00 2001
From: Sean Baggaley
Date: Wed, 27 Dec 2023 22:02:56 +0000
Subject: [PATCH] Lighting tweaks and fixes
diff --git a/shaders/shd_main/shd_main.fsh b/shaders/shd_main/shd_main.fsh
index ed61898..005aab5 100644
--- a/shaders/shd_main/shd_main.fsh
+++ b/shaders/shd_main/shd_main.fsh
@@ -25,7 +25,9 @@ uniform float u_brightness;
void main() {
vec3 lighting = vec3(0, 0, 0);
vec4 albedo = texture2D(gm_BaseTexture, v_vTexcoord);
- float fog = 1.0 - (distance(v_vPosition, u_playerPos) / 320.0);
+ // Fucking GameMaker!! LAYERS ARE LITERALLY 3D!!
+ // LOWER LAYERS ARE LITERALLY BEHIND UPPER LAYERS ON THE Z AXIS!!! WHY?!?!?!?!
+ float fog = 1.0 - (distance(v_vPosition.xy, u_playerPos.xy) / 196.0);
vec3 ambient = albedo.rgb * u_ambientColour;
for (int i = 0; i < u_numLights; i++) {
This is what it looked like immediately after the fix:
Much better.
For good measure, here’s what it looks like today:
It’s not hugely different. But I have returned the decor to regular brightness.
The new revealer powerup will show you the location of the exit ladder, weapons, powerups, and lore items. It has a nice fancy effect when used:
I added controller support to Catacombs Plus. It works reasonably well, I think. The control scheme is a familiar twin-stick shooter layout: left stick moves, right stick aims, trigger to shoot.
There’s also an aim assist which will snap the cursor to enemies in the direction you’re looking. This is pretty helpful, since aiming needs a precision that’s hard to get manually.
An annoying detail of my movement code is that the left stick controls acceleration, not movement speed. So, if your stick is tilted half-way to the left, you will reach max speed slower, but you will still reach max speed. I’m not too sure how to fix this.
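One common fix for this (not what the game does - just a sketch of the usual approach) is to accelerate toward a target velocity that scales with stick deflection, so a half-tilted stick tops out at half the max speed. A minimal Python sketch, with all names illustrative:

```python
import math

def step_velocity(vel, stick, accel, max_speed, dt):
    """Accelerate `vel` toward a target velocity proportional to stick
    deflection, so a half-tilted stick tops out at half speed."""
    # Clamp the stick to the unit circle so diagonals aren't faster.
    mag = math.hypot(stick[0], stick[1])
    if mag > 1.0:
        stick = (stick[0] / mag, stick[1] / mag)
    # The target speed scales with how far the stick is pushed.
    target = (stick[0] * max_speed, stick[1] * max_speed)
    # Move toward the target, limited by accel * dt.
    dx, dy = target[0] - vel[0], target[1] - vel[1]
    dist = math.hypot(dx, dy)
    step = accel * dt
    if dist <= step:
        return target
    return (vel[0] + dx / dist * step, vel[1] + dy / dist * step)
```

With a half-tilted stick the target velocity is half of `max_speed`, so the player converges there instead of creeping up to full speed.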
I’ve worked on some additional level styles, which I’ll keep secret for now.
There’s also boss levels, which currently occur every 5th level. These are levels using existing styles, with a single much more difficult enemy based on the enemies from that level. Killing the enemy drops a ladder to exit the level, and a bunch of items.
The second wave of Splatoon 3 DLC, Side Order, finally released last Thursday - after 18 months of waiting (they announced the DLC like a month before the game even released!) It’s enjoyable, but a bit underwhelming? For the time we’ve waited, I expected more content, especially story-wise. There’s only a limited number of maps, which is fine, but they can get repetitive fairly quickly. Each run takes about 30 minutes, and there’s twelve palettes to get through, so I 100%d them all within 8ish hours (including my few failed runs). There’s only three bosses, so those get old and familiar really quickly.
Despite this, it’s still fun to play. It’s solid Splatoon, accumulating chips and becoming stupidly OP is satisfying. Hacks make it rewarding as your prior runs can influence your future runs, and you can tone down the hacks once you get familiar and develop strategies on completing objectives and maps.
It’s basically the gameplay of my Catacombs mixed with Splatoon and Nintendo levels of polish. Now I just have to think of which ideas to steal back 😉
I’ve been slowly working through the Command & Conquer games again recently, starting with the Tiberian Dawn remaster and now Tiberian Sun.
I really like this game. I love the style and setting, the sort of apocalyptic desolate landscape ruined by Tiberium is served well by the isometric style and lighting. The gameplay is pretty solid C&C, and of course the soundtrack is really good. I’m working through it slowly - currently finished the Nod campaign and I’m about half-done with GDI - but I’m having a good time when I do play.
I probably wanted to talk about some other stuff here but if so then I forgot what it was.
See you next time I post? Which will probably be in three months again. Ugh.
The best thing to do in this situation, really, was submit a bug report… unfortunately, I didn’t want to send my entire project over, and I didn’t know the cause of the bug to make a minimum reproducible example. So, I decided to try and figure it out myself.
My first thought was to throw it into the GameMaker debugger.
While I suspected this was a bug in the runner’s C++ causing the game to go boom, it still seemed worth testing whether the runner has more checks or error handling in debug mode.
Turns out that not only does the debugger not affect error handling, but it’s actually controlled by the gml_release_mode pragma, which I don’t use (so all error handling is on).
However, the run log gave us some hints…
Enacting reset hack
Going to proc room
Game controller created
C:\ProgramData/GameMakerStudio2/Cache/runtimes\runtime-2023.8.2.152/windows/x64/Runner.exe exited with non-zero status (-1073741819)
elapsed time 00:00:05.6052134s for command "C:\ProgramData/GameMakerStudio2/Cache/runtimes\runtime-2023.8.2.152/bin/igor/windows/x64/Igor.exe" -j=20 -options="C:\Users\Sean\AppData\Local\GameMakerStudio2\GMS2TEMP\build.bff" -v -- Windows Run started at 11/03/2023 01:55:41
FAILED: Run Program Complete
For the details of why this build failed, please review the whole log above and also see your Compile Errors window.
So, to make sense of this log, I have to explain how things work. GameMaker uses rooms, which are basically levels. Rooms can be persistent, meaning the objects and state of the room will remain even when the game switches to another room. Catacombs has two main rooms, the menu room and the main game room. The main room is persistent, which facilitates being able to pause the game, as objects in a persistent room will not update while in another room.
When resetting the run, I want to reset everything in the main room. The easiest way to do this is to make the room no longer persistent and then re-load it. There are two problems with this, however: you can only change a room’s persistence while in that room, and you can’t switch to the room you’re already in. To work around this I implemented a hack: when you reset your run, it sets a flag and sends you back from the menu room to the main room. The game controller then notices this flag, unsets the room persistence, then sends you to a different room, whose sole purpose is to send you back to the main room.
So, that explains the debug messages I left.
But there’s another clue in that exit code.
-1073741819 has the hexadecimal form 0xC0000005, which is the Windows error code for an access violation, also known as a segmentation fault.
Basically, the application is trying to access invalid memory.
As this is happening during a reset, it’s likely trying to access an object that no longer exists.
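The conversion from the signed exit code to that hex form is just a matter of reinterpreting the 32-bit value as unsigned - for example, in Python:

```python
# Reinterpret a signed 32-bit exit code as an unsigned NTSTATUS value.
def ntstatus_hex(code):
    return hex(code & 0xFFFFFFFF)

print(ntstatus_hex(-1073741819))  # 0xc0000005 - an access violation
```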
But how could this be happening? If everything’s reset, why would the game be accessing things that don’t exist?
To figure this out we’re going to need a debugger that can understand the native code the runner is blowing up in. For this I’ll use WinDbg, Microsoft’s standalone debugger. Simply run the game and attach WinDbg to it, and we’ll get some juicy data when the game crashes.
The stack trace is usually the first place to look - it shows the functions in which the crash happened.
Ah. That’s not very useful. The Windows runner has no debug symbols, meaning there’s no mapping of a function’s address to its name. But not all hope is lost - the macOS runner does have these symbols, and macOS even shows us a full crash report, no debugger needed!
And it gives us another hint: the game is trying to compute the bounding box of an object, but it appears that object doesn’t exist, so it goes boom.
We’re close now, but we still don’t know what code is checking that bounding box. GameMaker code runs in a virtual machine, instead of native code, so the stack trace before then is just “the runner is running your code”. Not useful.
Fortunately, GameMaker has a feature called the YoYo Compiler, which translates GML into C++. This C++ gets compiled and linked against the rest of the runner code, producing a new executable file. With debugging symbols! Let’s throw it into WinDbg…
And we can see where in my code the crash begins: a function called lights_tick. So it’s to do with the lighting engine: lights_tick is a function that runs every frame to update the state of each light.
Let’s take a look at that function…
function lights_tick() {
    struct_foreach(global.lights, function(i, light) {
        if (light.object != noone) {
            light.pos = [
                light.object.bbox_left + (light.object.bbox_right - light.object.bbox_left) / 2,
                light.object.bbox_top + (light.object.bbox_bottom - light.object.bbox_top) / 2
            ];
        }
    });
}
Lights are stored in a global variable: a struct mapping each light’s ID to its data. Each light may have an object associated with it. If it does, lights_tick will set the position of that light to the centre of that object, calculated from the bounding box.
Now we have all the clues needed to figure out the crash. As I said, lights are stored in a global variable - and global variables always persist across rooms. The mistake is now fairly obvious: I was not resetting the lights variable between rooms. This caused the game to update lights from the previous run, referencing objects that no longer exist, making it crash.
I fixed it by resetting the lights struct when the game controller was created. The longest diagnoses have the simplest solutions…
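The shape of the bug and the fix can be mocked up outside GameMaker. This Python sketch mirrors the bounding-box maths of the GML above, but the `Thing` class and the dict layout are made up for illustration:

```python
class Thing:
    """Stand-in for a GameMaker object with a bounding box (illustrative)."""
    def __init__(self, left, top, right, bottom):
        self.bbox_left, self.bbox_top = left, top
        self.bbox_right, self.bbox_bottom = right, bottom

lights = {}  # light id -> {"object": Thing or None, "pos": [x, y]}

def lights_tick():
    # Same centre-of-bbox calculation as the GML version.
    for light in lights.values():
        obj = light["object"]
        if obj is not None:
            light["pos"] = [
                obj.bbox_left + (obj.bbox_right - obj.bbox_left) / 2,
                obj.bbox_top + (obj.bbox_bottom - obj.bbox_top) / 2,
            ]

def reset_run():
    # The fix: clear the global map so no light outlives its object.
    lights.clear()
```

Without `reset_run()`, the global map keeps pointing at objects the reset destroyed; in Python that’s a dangling reference, in the native runner it’s a segfault.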
But it’s weird because I later realised that if I passed the object’s ID, instead of the object’s struct¹, it would handle the error properly. I suspect it might be due to it being a reference to a struct contained in a struct? I attempted to make a minimal example project that does the same thing, but even there it was handled properly:
I’m not really sure what actually caused it to segfault like this, but at least I fixed it.
¹ GML is a weird language. Referring to an object (such as via self) gets you a struct containing that object’s fields. If you want an actual reference to the object itself, you need self.id. See this page in the manual. This trips me up constantly. ↩
First off, I am still using GameMaker as the engine. With the recent Unity drama I am a little hesitant about basing it on a proprietary engine, but as much as I actually like writing my own engine tech I’m purely focused on making a game for once, and I already got the basics down on this one a year ago. GameMaker have at least promised they won’t do anything similar, and even say they’re taking steps to ensure you can continue to use the license attached to a given version - so that gives me peace, I suppose. At least my perpetual licenses still work after they switched to subscriptions some time back.
Most of the changes are technical, a lot of code refactoring has been happening over the month or so I’ve been working on this. Gameplay-wise, the most immediate thing is that the ten-second countdown is gone! Now you can explore the level and progress at your own pace. The countdown will remain in a separate “51 Mode”.
All of the other changes are technical… Were you to sit down and play it, probably the first thing you’d notice compared to the Ludum Dare version is the framerate. Originally, the game used a fixed time step of 60 fps, meaning it was not only locked at that framerate, but all logic and movement was written with the assumption it was running at 60 Hz. Catacombs Plus can now run at any arbitrary framerate, and most stuff is no longer bound to a fixed timestep. Some code still uses a fixed 60 Hz clock, like weapon cooldowns, but anything that moves - players, enemies, bullets - now uses delta time, meaning it’s smooth regardless of the actual frame rate the game runs at, and won’t go slow-motion when framerate drops.
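The idea behind the change can be sketched like this: express speeds in units per second rather than units per frame, and scale each update by the frame’s delta time, so any frame rate integrates to the same movement. A Python sketch with made-up numbers:

```python
# Speeds in pixels per second instead of pixels per frame: 2 px/frame at
# 60 fps becomes 120 px/s, and each update scales by the frame's delta time.
SPEED = 120.0

def update(pos, dt):
    return pos + SPEED * dt

# Simulating one second at 60 fps and at 144 fps lands in the same place.
p60 = 0.0
for _ in range(60):
    p60 = update(p60, 1 / 60)

p144 = 0.0
for _ in range(144):
    p144 = update(p144, 1 / 144)
```

Both loops end at 120 pixels (up to floating-point error), which is exactly the “smooth at any frame rate, no slow-motion” property described above.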
Also immediately obvious is the lighting system: it actually has one. Catacombs 51 just used a shader that tinted the world a random colour and made the level get darker the further away from the player, but Plus has an actual lighting system. It’s primitive and probably needs more work, but it’s fine right now. It’s also not terribly efficient - making it a post-processing shader would probably be best, as right now it calculates all the lighting every time something is drawn. However, performance is pretty good on my PC and my M1 Pro MacBook, the two things I have to test.
Another nice change is resolution. While the game still uses a 448×252 camera, it will now render that camera at whatever size the window is, whereas Catacombs 51 would always render the game at 896×504. This means that the game will always be nice and crisp instead of becoming a horribly pixelly mess at anything over the default resolution. Additionally, the camera now adjusts to your aspect ratio, instead of being locked at 16:9, meaning it’ll work on whatever crazy display you throw it on:
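The aspect-ratio adjustment presumably boils down to holding one camera dimension fixed and deriving the other from the window. A sketch of that maths (my guess at the approach, not the game’s actual code), keeping the 252 px height constant:

```python
# Hold the camera height at the 16:9 baseline (448x252) and derive the
# width from the window's aspect ratio.
BASE_HEIGHT = 252

def camera_size(window_w, window_h):
    aspect = window_w / window_h
    return (round(BASE_HEIGHT * aspect), BASE_HEIGHT)
```

A 16:9 window gives the familiar 448×252, while an ultrawide window simply gets a wider camera that sees more of the level horizontally.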
I’ve also reworked the enemy AI significantly. Previously, the enemy logic was incredibly simple: it would check line of sight to the player, and if it had seen them at any point, it would continuously plot a path to the player and follow it. The zombies wouldn’t collide with anything, they would phase through not only each other but even the player, which was especially annoying as it wasn’t particularly easy to shoot a zombie that was inside you… Plus has scrapped this old crappy code for an actual AI system based on behaviour trees, making the AI code much more flexible, modular, and overall easier to work with. Right now, regular zombies basically retain the same behaviour, but gun zombies see actual improvement: they now only shoot when the player is in their direct line of sight, and will start strafing around the player when they get close.
Not only that, but zombies and the player have collision now, so they’re less likely to walk through each other. It still happens, unfortunately, as the collision code throughout the entire game is still a mess. But that’s something I’ll work on.
Catacombs Plus won’t just be limited to technical improvements, I also intend to add more content: more levels, more enemies, more weapons, more powerups, and balance everything hopefully nicely. It’d also be nice to add multiplayer - both co-op and maybe even a competitive mode - but that’ll be difficult, so we’ll see.
If I do see this project through to completion then it’ll be a paid product, but I’ll probably only sell it for a fiver at most.
The main reason behind this change is that WordPress is just a big heavy thing that I don’t need most of. It’s also a huge attack surface - I keep it up to date, but the sheer amount of bots and stuff trying to break into wp-login.php and find potential exploits in theme and plugin files is ridiculous. Jekyll has none of these issues because it’s a static site generator: it just spews out HTML files. There’s no server-side code at all. Everything loads instantly and you can host it bloody anywhere.
I have the source for this site set up in a Git repository hosted on my own GitLab instance. I use GitLab CI to automatically build and deploy the site here (drinkybird.net) via rsync when I update the master branch. I may move the site to Amazon S3 + CloudFront eventually, but I still have other stuff here that I’d have to move first.
Jekyll uses raw Markdown (or HTML) files for which I just use a standard text editor (currently, Notepad++) to write.
There’s no web interface like WordPress has, but GitLab does provide a web IDE, which is a mostly-functional in-browser version of VS Code. I have no idea whether changes I make in the web IDE persist between sessions without committing them to the Git repository, but if they do, that’ll be useful whenever I’m in a situation where I can’t use git directly (on my phone or tablet?)
I could even deploy to GitLab Pages as a preview when I don’t have access to the local jekyll serve command.
I’m using the default Minima theme, albeit modified a bit.
Mainly, the colours are different.
I like this purple, but it does look a bit like the default a:visited colour, so I might not keep it.
I dragged the entire theme into my source tree - this is probably a bad idea, but it makes it easier to modify stuff if I need to.
I might switch it to a serif font, I generally prefer those for reading.
Setting up the Archives page was a bit hacky. The pages that page links to (the pages for each month, category, tag, etc.) were generated using the jekyll-archives plugin, but that plugin doesn’t seem to have the ability to generate a list of the archives it creates. The categories and tags sections were easy. The by-month section, not so much:
{% assign date_list = "" %}
{% for post in site.posts %}
  {% capture post_date %}{{ post.date | date: "%m,%Y" }}{% endcapture %}
  {% assign dates_split = date_list | split: ":" %}
  {% unless dates_split contains post_date %}
    {% capture date_list %}{{ date_list }}:{{ post_date }}{% endcapture %}
    <li><a href="{{ site.baseurl }}/{{ post.date | date: "%Y/%m" }}">{{ post.date | date: "%B %Y" }}</a></li>
  {% endunless %}
{% endfor %}
Ugly, but it works! At least it doesn’t run every time you visit the page.
Speaking of, that’s another nice thing Jekyll has: syntax highlighting that doesn’t suck. On WordPress, syntax highlighting is all done by plugins that all suck in one way or another. I used to use one that only supported a few programming languages, before I switched to one that supported a load of languages but required bringing in a giant library (I think it was VS Code’s highlighting engine?). In Jekyll it’s built in, and you just use the usual Markdown syntax to highlight code.
The only thing I’m really missing from WordPress is comments. Nobody commented on the WP site anyway, because nobody reads this site at all nevermind leaves comments, but it was nice to have the feature. I’ve used Disqus before but I think I’d prefer something self-hosted… I’ll sort something out in that regard.
This is all for now. I wish I could use this blog more, but my life is not that interesting. I’m working on a project that might be more blog-worthy, though, so we’ll see.
You know a generation is uninteresting when they open with “smoother animations” as one of their headlining features.
The only thing that could drive me to upgrade is my Series 7’s battery rotting (it’s at 84% health right now, and sometimes it doesn’t last all day anymore.)
3000 nits wtf. I wish my MacBook had that brightness; it reaches like 1200 when not playing HDR, and even then I can barely see it in the sun.
I’m still mostly happy with my 13 Pro Max, but having USB-C would be very convenient. Literally the only things I have that use Lightning are my iPhone and my rarely-used AirPods, and I only have so many chargers - having to swap cables depending on whether I want to charge my iPhone or literally anything that is not my iPhone gets a bit annoying. Still, thank you, EU. I miss you dearly.
The regular iPhone being USB 2.0 only sucks, but I’m a Pro user, and the new Pros use USB 3.0. About bloody time, my 13 Pro Max is only USB 2.0! It cost me £1149!!
The camera stuff is cool, but did they fix the watercolour effect that their sharpening filter exhibits sometimes?
The A-series chips getting real-time raytracing before the M-series is a bit interesting, but we’ll probably see that in the Macs whenever they announce the M3 family.
Also, the silent switch was fun to fidget with, even if I left it on silent most of the time. I’ll miss it.
£30 lmao
Can you blame me for entirely forgetting that Apple made these? I wonder how these end up a tenner cheaper than that Lightning adapter.
Gravatar is also broken right now - the website errors whenever I try to add, delete, or set an avatar. Bit annoying.
Fortunately they have an XMLRPC API, and the Perl module someone wrote for it is among the few that doesn’t point to a dead URL, so I ended up writing some (ech) Perl to set my avatar. And right now, unlike the website, the XMLRPC API is actually functioning.
Now I am an octopus, but probably not the one you’re thinking of. (You get bonus points for guessing the octopus.)
And it only took like 15 minutes more than it should’ve.
If you want to look at the actual code behind this, the game’s source is available on GitHub and all the generation code is in the well-named scr_mapgen.gml. For the screenshots in this post I modified stuff a bit, the changes are in the mapgen-article branch. It looks pretty ugly in the screenshots, but the actual game is prettier, I promise. It’s moody!
The world generator creates maps using pre-set pieces. Each piece is simply a bitmap image, with specific colours representing specific tiles. The pieces are bitmaps because this makes editing them easy: I can just use any image editor (my usual is Paint.NET). Unfortunately, GameMaker’s HTML5 export is unable to read binary files, so I wrote a basic Python script to convert the bitmaps to text files, which can be read in HTML5.
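The conversion script itself isn’t shown here, but the core of such a converter is just a colour-to-tile mapping. A Python sketch of the mapping step - assuming the bitmap has already been decoded into rows of RRGGBB hex strings, and with output characters I’ve made up (the real script’s text format may differ):

```python
# A subset of the colour table below, mapped to one character per tile.
# The output characters are illustrative, not the actual script's format.
COLOUR_TO_CHAR = {
    "000000": ".",  # nothing
    "FFFFFF": "#",  # wall
    "808080": "_",  # floor
    "FF00FF": "1",  # piece connection, 1-high
    "0026FF": "P",  # player spawn
}

def piece_to_text(rows):
    """Turn decoded pixel rows (RRGGBB hex strings) into a text piece."""
    return "\n".join("".join(COLOUR_TO_CHAR[px] for px in row) for row in rows)
```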
player | spawn1 | arena4 |
nothing | 000000 |
wall | FFFFFF |
jail bars | C0C0C0 |
jail bars (horizontal) | C1C1C1 |
toilet (facing down) | C2C2C2 |
toilet (facing up) | C3C3C3 |
bed (facing down) | C4C4C4 |
bed (facing up) | C5C5C5 |
floor | 808080 |
piece connection, 1-high | FF00FF |
piece connection, 2-high | FF00AA |
piece connection, 1-wide | AA00FF |
piece connection, 2-wide | AA00AA |
player spawn | 0026FF |
item spawn | 2200FF |
big zombie | 22AAFF |
decor | 267F00 |
When the game starts, all of the pieces are loaded into memory and stay there throughout the lifetime of the game. It’s only a handful of tiny images, so no worry about resource usage.
After each piece is loaded, a little bit of processing happens: for each possible pixel type there’s a list of the pieces containing that pixel type, and each piece is added to the respective lists if it hasn’t been already. The location of every piece connector in the piece is also stored.
Throughout the generation process, the level is stored in an internal grid of tiles, separate to the actual tilemap that is eventually produced. Placing tiles from pieces into this grid is done by the blit function. This function replaces any tiles in the internal grid with that of a given piece, starting at the given coordinates. Any piece connectors that are placed by the blit function also get added to a queue.
There’s also an additional function canBlit which checks if any tiles would overlap when blitted. This is used to ensure a piece won’t be placed atop existing terrain.
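The two functions can be sketched like so - a Python mock-up of the internal grid, with illustrative names (the real code is GML, and the tile and connector representations here are guesses):

```python
EMPTY = None  # sentinel for "no tile placed yet"

class Grid:
    def __init__(self, w, h):
        self.w, self.h = w, h
        self.tiles = [[EMPTY] * w for _ in range(h)]
        self.connector_queue = []  # consumed by the piece-placement step

    def can_blit(self, piece, ox, oy):
        """True if no non-empty piece tile would overlap existing terrain
        or fall outside the map."""
        for y, row in enumerate(piece):
            for x, tile in enumerate(row):
                if tile is EMPTY:
                    continue
                gx, gy = ox + x, oy + y
                if not (0 <= gx < self.w and 0 <= gy < self.h):
                    return False
                if self.tiles[gy][gx] is not EMPTY:
                    return False
        return True

    def blit(self, piece, ox, oy):
        """Copy a piece into the grid, queueing any connectors it places."""
        for y, row in enumerate(piece):
            for x, tile in enumerate(row):
                if tile is EMPTY:
                    continue
                self.tiles[oy + y][ox + x] = tile
                if tile == "connector":
                    self.connector_queue.append((ox + x, oy + y))
```

The connector queue is what keeps the generation loop in step 2 fed: every blit can enqueue new connectors, and generation stops when the queue drains.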
The first bit is very simple: pick a random spawn piece and plop it right in the middle of the map.
The second part is not so simple. Remember how the blit function adds all connectors to a queue? This step runs for as long as that queue has something in it. For each connector in the queue, it tests every other non-spawn piece that has the same connector in a random order, and sees if there are any configurations in which that piece can be connected to the current connector without overlapping. If so, it blits that piece in, and replaces the connector pieces with floor tiles. Remember, the blit function adds new connectors to the queue, so this step will continue to run. It eventually ends once there’s no more connectors, or there’s no more space in the map to fit any more pieces - the map is only 64x64 tiles large. This is effectively a brute-force method of making a level, and probably isn’t very efficient.
In the first version of the game, this step would only consider a single piece to connect to another. This resulted in a lot of empty levels when that piece wouldn’t fit, so I reworked it to consider every piece.
You probably noticed that step 2 left some holes where other pieces could connect. This step simply checks every connector piece and replaces it with a wall tile. It also checks each empty tile and replaces those with walls if they have any floor tiles adjacent.
This step simply randomly replaces decorative tiles with floor tiles. Jail bars have a 1/5 chance of being replaced, decorative objects have a 1/2 chance, beds and toilets have a 1/3 chance. This makes the level detailing a bit less visually repetitive.
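As a sketch, that’s just a table of 1-in-n odds and a single pass over the tiles (tile names here are illustrative, not the game’s identifiers):

```python
# 1-in-n odds that each decorative tile is swapped for plain floor.
REPLACE_CHANCE = {"jail_bars": 5, "decor": 2, "bed": 3, "toilet": 3}

def thin_out(tiles, rand):
    """`rand(n)` returns an int in [0, n); a roll of 0 means replace."""
    out = []
    for tile in tiles:
        n = REPLACE_CHANCE.get(tile)
        out.append("floor" if n is not None and rand(n) == 0 else tile)
    return out
```

In the game `rand` would be the engine’s random function; taking it as a parameter here just makes the sketch testable.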
This step turns that internal grid into actual game state! Floors and walls are turned into a tile map, while decorative tiles are turned into floors in the tilemap with an object on top. Other special tiles also get turned into floors and their locations saved for later use. Tiles that were always floors in the internal grid have a chance to be flipped and/or mirrored. There’s no image to demonstrate this one, as I’ve needed it to have something to show you in every other step!
The item tiles that were identified in step 6 now get items placed on top of them.
For each item tile, a weapon will attempt to spawn if one hasn’t yet. The weapon that spawns depends on the weapons the player currently owns and the level the player is on. If the player does not own a weapon and the level number is at least that specified, that weapon will spawn, with candidates checked in the order shown in the table below:
Weapon | Level |
---|---|
Rifle | 0 |
Shotgun | 3 |
Chaingun | 5 |
Rocket launcher | 11 |
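So weapon selection is effectively a first-match scan over that table. A sketch (weapon identifiers are illustrative):

```python
# The table above, in checking order: (weapon, minimum level).
WEAPON_UNLOCKS = [
    ("rifle", 0),
    ("shotgun", 3),
    ("chaingun", 5),
    ("rocket_launcher", 11),
]

def pick_weapon(owned, level):
    for weapon, min_level in WEAPON_UNLOCKS:
        if weapon not in owned and level >= min_level:
            return weapon
    return None  # nothing eligible to spawn on this tile
```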
For all other items, it just chooses a random powerup to spawn.
To spawn zombies, it once again considers every single tile of the internal grid. If the tile was once a “big zombie” tile, a big zombie is spawned there with no further consideration. The remaining spawn code will then check:
If any of these fail, a zombie will not be spawned. Otherwise, there is a 1/16 chance a gun zombie will spawn, a 3/16 chance a fast zombie will spawn, and any other result spawns a normal zombie.
For gun zombies, there is a 1/6 chance each for it to spawn with a rifle, shotgun, or chaingun, so long as the current level is at least that needed to spawn it: rifle zombies spawn starting at level 5, shotgun zombies at level 10, and chaingun zombies at level 12.
Every zombie (including big zombies) is given a random rotation after being spawned.
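The spawn roll described above could be sketched like this - `rand(n)` stands in for the engine’s random number call, and since the post doesn’t say what happens when a gun zombie’s weapon roll fails the level gate, this sketch just falls back to a normal zombie (an assumption):

```python
def roll_zombie(rand, level):
    """`rand(n)` returns an int in [0, n)."""
    r = rand(16)
    if r == 0:  # 1/16: gun zombie, weapon gated by level
        gun = rand(6)  # 1/6 each for rifle, shotgun, chaingun
        if gun == 0 and level >= 5:
            return "gun_zombie_rifle"
        if gun == 1 and level >= 10:
            return "gun_zombie_shotgun"
        if gun == 2 and level >= 12:
            return "gun_zombie_chaingun"
        return "zombie"  # assumption: a failed gate falls back to normal
    if r <= 3:  # 3/16: fast zombie
        return "fast_zombie"
    return "zombie"  # remaining 12/16: normal zombie
```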
And after all of that, we have a level. This is actually a fairly simple method for generating levels everything considered, but it worked well enough for this game, I think. It was certainly fun programming it and seeing it actually work.
There was one removed step: opening walls, which sat between resolving connections and sealing holes.
This allowed creating bigger arenas out of conjoined pieces. I forget why I removed it, but there was probably a good reason…
There’s a problem however: there are almost 900 possible patches that can be performed. Some of these patches are actually multi-step patches as well. And as you can imagine, this can take a while. On my Ryzen 9 5900X, it takes around 1 minute and 25 seconds to run all the tests. The Jenkins node uses a Windows Server 2022 virtual machine with 2 cores running on a Xeon E3-1270 v6 machine that runs a bunch of other crap as well, and in this VM it takes about 2 minutes to run all the tests. Okay, not terribly long, but it’d be nice to get it faster…
Fortunately, the test suite code was very easy to multi-thread. It already assembled a list of IWAD pairs to patch between, and the patching code doesn’t rely on any global state other than the list of results, so it was easy enough to tear the loop out into its own function and add a mutex lock around that one shared data access.
After adding the multithreading, the test suite takes only ~12 seconds to run on my 5900X when using 24 threads - just over seven times faster. Unfortunately, I don't know enough about CPUs or profiling to figure out why it isn't at least 12x faster (I doubt it could ever reach 24x, since half of those logical cores come from simultaneous multithreading/hyperthreading). On the Jenkins virtual machine, it takes around 1 minute to run the tests - a much more predictable 2x speedup.
I decided to turn this into a benchmark, writing a simple Python script that runs the tests at every thread count between a minimum and maximum, running each count three times and averaging the results. Here's a graph of the results of running that on my 5900X:
[Removed, this was a Google Sheets chart embed, but I apparently deleted the original sheet from my Google Drive. I’m an idiot, sorry.]
Unfortunately, as I already said, I don't know enough about how CPUs work or profiling to explain that curve.
It is interesting, though, that going past 56 threads - well over twice the number of threads my CPU actually has - shaves off about two more seconds.
(This line of thought also led me to discover that WaitForMultipleObjects only supports up to 64 objects - fortunately, calling WaitForSingleObject in a for loop gets around this limit.) I also discovered a crash bug as the thread count reaches about 256 threads.
I was reading old posts from The Old New Thing a while ago and thought it could be a similar issue to this, but Omniscient doesn't use anywhere near that much memory.
Not too worried about this one though, 256 threads is a bit excessive.
Another thing to note is the sheer amount of disk activity the program generates when multithreaded. Each test involves reading at least one patch file from the executable and the IWAD to patch to, and writes a patched WAD at least once. Some cases involve multi-step patches: for example, patching Doom v1.1 to shareware v1.1 will require going from Doom v1.1 -> Ultimate Doom v1.9 -> shareware v1.9 -> shareware v1.1 - three patches. This is done to reduce the number of permutations of patches, which reduces space in the executable. Each step is written to and then read from a temporary file that gets immediately deleted after use. Hopefully this doesn't leave the filesystem cache, otherwise I've thrashed the hell out of my SSD.
You might've also spotted an optimisation opportunity there: each thread gets a list of IWAD pairs, but each such pair might involve multiple patch steps. It would be better to split the work between threads based on the total number of patches to be done, not the IWAD pairings. Still, even as-is, this is a lot better than it was when single-threaded.
On the back of this work I decided to make patching from the actual application run on a separate thread instead of the UI thread. Patching happens fast enough that nobody is likely to notice unless it’s being run on an absolutely archaic machine - which it might be, Omniscient supports Windows 98!
Bonus bit: Jenkins will display the total time of each test as the time taken for the entire suite, instead of the time value specified in the JUnit XML file. As a result, we end up with tests that appear to take longer to run than the job they were run in:
Bonus bit 2: there was a race condition in the code that caused tests to randomly fail. Can you spot it?
TString GetTemporaryFilePath()
{
    static unsigned int i = 0;
    TCHAR tmpdir[MAX_PATH];
    TCHAR buf[MAX_PATH];
    time_t now = time(NULL);
    GetTempPath(MAX_PATH, tmpdir);
    _sntprintf(buf, MAX_PATH, TEXT("%somn_%u_%u.tmp"), tmpdir, i, now);
    i++;
    return buf;
}
The answer is that i wasn't atomic, so a race condition would occur where two threads would try to open the same file, and whichever wasn't first would fail. The fix was to use InterlockedIncrement to access and increment the counter - I can't use C++11's atomic as I'm using Visual Studio 2005…
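For illustration, here's the same idea in portable form - the real fix uses Win32's InterlockedIncrement on the counter, but C11's stdatomic.h (which, as noted, isn't an option under Visual Studio 2005) expresses the identical concept: every thread gets a unique sequence number, so no two threads can ever build the same filename. The function name here is a hypothetical stand-in, not the actual code.

```c
#include <stdatomic.h>

/* The shared counter, now bumped atomically. atomic_fetch_add returns
   the previous value (InterlockedIncrement returns the new one, so
   it's equivalent to InterlockedIncrement() - 1). */
static atomic_uint counter;

/* Each caller, on any thread, gets a distinct id for its temp filename. */
unsigned int next_temp_file_id(void)
{
    return atomic_fetch_add(&counter, 1);
}
```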
finalize(), but finalizers run on a separate thread owned by the GC, which is problematic for some libraries like OpenGL, where a context can only be used by one thread at a time - which is annoying. So this trick uses finalize to revive the object just so it can have its native resources deleted on the main thread before letting the GC actually take it:
public abstract class NativeResource {
    public abstract void destroy();

    @Override
    protected void finalize() throws Throwable {
        NativeResourceManager.add(this);
        super.finalize();
    }
}
public class NativeResourceManager {
    private static List<NativeResource> resources = new ArrayList<>();

    static void add(NativeResource resource) {
        synchronized (resources) {
            resources.add(resource);
        }
    }

    public static void cleanup() {
        synchronized (resources) {
            for (var resource : resources) {
                resource.destroy();
            }
            resources.clear();
        }
    }
}
And now you have an object that can automatically annoy the garbage collector by bringing itself back from the dead to get rid of its native resources:
public class SomethingThatUsesANativeResource extends NativeResource {
    private long handle;

    @Override
    public void destroy() {
        destroyNativeHandle(handle);
    }
}
This is only actually useful when you can call NativeResourceManager.cleanup() regularly on the main thread, like in a game loop.
Unfortunately, finalize() has been deprecated since Java 9, and deprecated for removal since Java 18, so this trick's time is limited. Perhaps now I'll have to actually manage my memory properly and stop trolling the GC…
]]>