|Update: From Riches to Rags|
April 21, 2018
75 new things. Mostly broken things. Rotting things. Fragile things.
And stone walls, which are the opposite of all of that. And locking doors.
A whole new web of interdependence for farming.
A new mother selection method (temperature, not food).
|Update: The Monument|
April 14, 2018
If you build it...
We crossed one million lives lived inside the game this week. It's kind of mind-boggling. And a group of well-coordinated players also trounced the previous lineage record of 32 generations, making it all the way up to 111 generations with the Lee family. I've posted the 111 names from the matriarch chain here.
I've tied all these trends together in this update, which includes a monument, along with quite a few other changes.
In light of the recent server performance updates, the per-server player cap has been raised.
The experience of being murdered has been dramatically improved. (Yes, that is a peculiar sentence.) It has never actually happened to me in game yet, and I hope it will never happen to you. But if it does, you'll have a small bit of time to get your affairs in order...
Long term food sustainability has been made much harder. Getting up to 30+ generations in one spot should be a pretty substantial challenge, and you won't be able to do it on carrots alone. The top has been hardened.
Short term food availability has been made a bit easier, with the addition of two more edible wild plants. But they don't grow back, and they can't be domesticated, so they don't affect the long term challenge. Being an Eve in the wilderness should be a little bit less stressful. The bottom has been softened.
And there may be one little tiny surprise in there too... the idea came to me while I was falling asleep last night... I just had to put it in.
Also, monument logging is in place server-side, but the processing and display of those logs is just a counter for now. A better monument roster will be implemented next week, assuming that it's needed. And yes, that is supposed to be a challenge.
|Server Performance Improvements are Live|
April 13, 2018
I spent quite a bit of time this week on server performance.
The old database engine, the amazingly fast and compact KISSDB, was not designed for an ever-growing data set where the newest data are accessed more than the oldest.
As players continue exploring new areas of the map, the data from older areas becomes less relevant, but that is the data that is the fastest to access in KISSDB. In fact, we were constantly wading through that old data to get to the latest stuff, which essentially ended up at the end of the list in KISSDB's append-only data structure. It got slower and slower as the data got bigger and bigger.
This drop in performance is expected when a hash table fills up, and thus the KISSDB documentation recommends a table that's "large enough" for the expected data.
But the expected data in this case is unbounded. We cannot pick an appropriate size, because the data will keep growing, and we don't want performance to degrade as that happens.
A stack-based hash table is much better suited for this usage pattern. The latest and most important stuff can remain at the top for fast access. So I wrote a new database engine from scratch on Monday and Tuesday. It helped a lot.
The stack-based implementation that I came up with (thanks Chard for all the thoughtful discussion along the way) is 7x faster on average and even uses a bit less disk space (6% less). But more interestingly, it's entirely disk based, using almost no RAM. 13,000x less RAM than KISSDB on a test data set, in fact. KISSDB holds part of the data structure in RAM for performance, and that RAM usage grows as the data grows, but the stack is so much faster for accessing recent data that it doesn't matter---we can do it all via disk accesses.
The stack database actually has a flat RAM profile regardless of how big the data grows, and CPU usage on recently-used data is flat as well, regardless of how big the entire data set (including old, less-used data) gets.
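The core idea can be sketched in a few lines. This is illustrative Python, not the actual StackDB code; the class name and the move-to-front probe strategy are my own stand-ins for the real design. The point is only that records touched recently stay at the front of each bucket's chain, so hot, recent data is found after one or two probes while cold data drifts toward the back:

```python
class MoveToFrontDB:
    """Toy key-value store: recently accessed records are cheapest to reach."""

    def __init__(self, num_buckets=64):
        self.buckets = [[] for _ in range(num_buckets)]

    def _chain(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        chain = self._chain(key)
        for i, (k, _) in enumerate(chain):
            if k == key:
                del chain[i]
                break
        chain.insert(0, (key, value))  # newest record sits on top

    def get(self, key):
        chain = self._chain(key)
        for i, (k, v) in enumerate(chain):
            if k == key:
                # move-to-front: the next lookup of this key costs one probe
                chain.insert(0, chain.pop(i))
                return v
        return None
```

Under this ordering, map tiles that nobody visits anymore simply sink toward the back of their chains and stop costing anything on the hot path.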
The impact on server CPU usage is quite remarkable, as can be seen in this before-and-after graph (with the same 40 players on server1 the whole time). The new database went live at the 10:00 mark.
I also did some live profiling with Valgrind and found a few more hotspots that could benefit from RAM-based caching of procedurally-generated map data. And since the database now uses almost no RAM, we have RAM to spare for this kind of caching.
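A minimal sketch of that kind of cache, assuming a deterministic generator (the `base_tile` function and its constants are invented for illustration; the real map generator is different): because procedurally-generated tiles are a pure function of their coordinates, a bounded memoizing layer trades a fixed amount of RAM for repeated recomputation:

```python
from functools import lru_cache

@lru_cache(maxsize=100_000)  # bounded: RAM cost stays flat as the map grows
def base_tile(x, y, seed=1337):
    # hypothetical stand-in for the procedural generator:
    # hash the coordinates into one of five biome types
    h = (x * 73856093) ^ (y * 19349663) ^ seed
    return h % 5
```

The bounded `maxsize` matters here: an unbounded cache would just recreate the old problem of memory growing with the explored map.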
Where the server RAM usage used to grow to 300 MiB or more as the map data grew, it now sits steady at only 17 MiB. Yes, that's 17 MiB of RAM total for hosting 40 active players.
What does this mean? First of all, it means the servers are finally lag-free, assuming that you're not experiencing true network lag.
Second, it means we can finally have more players on each server. I'll be upping the number gradually over time and keeping an eye on lag and performance. I expect we can easily get at least 80 on each server, and maybe quite a few more than that.
|Windows read-only bug finally fixed, scheduled downtime tonight|
April 11, 2018
Most important: Main website down at 10pm PST tonight (Wednesday) for maintenance.
I've finally been able to find and fix that troublesome "read-only game folder" issue on Windows. If you're having problems with this, downloading v65b for Windows from your original download link will fix it.
The problem in v65 is triggered whenever the update process includes two binary updates to the client EXE. This can happen if you take a break from the game and return to a backlog of updates. Thus, just because the update process always worked for you in the past does not mean it won't fail for you in the future. This is why the bug was so hard to find and confirm, amid thousands of "hey, it works for me" reports.
So if you ever run into a read-only issue during a future update, installing fresh from v65b should fix it for you.
Next issue: Scheduled down-time, 10pm PST today (Wednesday)
My server provider Linode has been working to patch the recently discovered Spectre vulnerability in modern CPUs. The patch requires a reboot of each server.
For the most part, this is fine for this game, because I'm running 15 game servers anyway, and I can easily take them down in batches while people continue to play.
However, the main web server, onehouronelife.com, needs to be rebooted too. Unfortunately, the home page, forums, purchase system, and client reflector are all hosted on this server. The reflector is the most mission-critical part of all this, because it's what clients talk to first to find out what game server they should go to.
And this important piece needs to go down so Linode can install the update.
The game servers will keep running, and your current lives won't be disrupted, but if you try to get reborn, you won't be able to find a new server to connect to. You can work around this outage temporarily with the "customServerAddress.ini" setting. Take a look in the forums or ask in Discord for help with this.
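As a rough example (the exact file contents are an assumption on my part; confirm in the forums or Discord before relying on this), the override is just a small file in the client's settings folder naming the game server to connect to directly:

```ini
; settings/customServerAddress.ini -- hypothetical example contents
server1.onehouronelife.com
```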
I plan to trigger the down-time at 10pm PST today. I'm not sure how long the patch process will take, but it could be up to two hours.
Hopefully, it will be way shorter than two hours.
Game purchases will be unavailable during the downtime.
Thanks for understanding!
|End of the apocalypse, and lag fixes today|
April 8, 2018
Boy, did that wake everyone up or what?
For those of you who have played 100+ hours and are so mad after one day of change that you're thinking about asking for a refund....
Remember: this is a game that is being actively developed. By one person. Working alone. Doing everything. 12-16 hour days. It's Saturday. My family needs me. But here I am.
So, think for a minute before you jump on the review button and call me LAZY in all caps, please.
I must have the freedom to try things, dangerous things, game-breaking things, in my endless quest to make the game better and more interesting.
I appreciate that you love the game as-is and don't want it to change. But the numbers that I see on my end tell a different story. Yes, there are an impressive number of concurrent players at peak times (150-200). But that number has started to slide, and is nowhere near where it was a few weeks ago. In the meantime, 14,000 people own the game. They are not playing. For a reason.
And it has nothing to do with the apocalypse.
It has to do with the game not being quite good enough yet. The game is interesting and compelling up to the point where established villages achieve a steady, perpetual state. If you have limitless food, there is no challenge, no danger, no drama. Griefers are a symptom, not a cause. If you are struggling to survive, you have no time for griefing.
And this game should always be about struggling to survive, at some level. It should always be possible to fail, both at the individual and village level.
But villages were everywhere. You could always wander into a deserted one and pick right back up. Failure meant nothing.
Thus, the game sorely needed a hard reset. I decided it would be more interesting to put that power into your hands and see what you did with it. I also wanted to create a shared collective event.
Those who witnessed the apocalypse waves first-hand will never forget them. It's over now, but the reset happened.
And the result, for the time being, is a game that is much more interesting again.
Building a village from scratch is the interesting part, and making a contribution that really matters is the most meaningful way to leave a legacy. Making another bearskin rug in a village that already has 20 rugs, because there's nothing else to do, is far less interesting.
In the place of the apocalypse, I have added a new placement algorithm for Eves that will have a similar periodic cleansing effect. Not server-wide, but at the lineage level. Your chance to continue living and working in a given village will end when the lineage in that village dies out. No more wandering back later and starting over in the same spot with everything already done/built for you. Each new line will start in the wilderness.
That said, pilgrimages to the old village locations are still possible, but they will require a concerted group effort to pull off, Oregon-Trail style.
But after I implemented this new Eve placement, which involved only a few lines of code, a strange thing happened.
Server CPU and disk usage rose steadily over the next 16 hours, eventually getting to the point where the servers were so bogged down and laggy that the game was almost unplayable.
If you experienced this today, I'm sorry about that. I've fixed it now, but the source of the problem was surprising.
The underlying databases are hash table based. As more entries are added to these tables, collisions occur, effectively creating a chain of "pages" in the hash table. Lookups for these later entries thus have to step through several pages before finding the matching item.
The general pattern here is that as more of the map is explored and modified, the servers become slower and slower, as hash table collisions become more common, and multi-page lookups are needed.
That has always been the case, throughout the history of the game.
But now suddenly, with the new far-flung Eve placement, it became a serious problem.
It turns out that all those far-flung Eves were exploring more of the world than ever before (whereas previous Eves were placed in the same area, so they kept wandering through already-visited places on the map). This made the underlying databases grow and fill with collisions.
As an example, one of the databases had such long collision chains that the average lookup would need to hop through 175 hash table pages. Not good.
Even worse, the newer entries in the hash table go at the end of these chains. As Eves were placed farther and farther away, this meant that the quickest-to-access entries in the table (the oldest entries) were never being needed again, while the latest entries---the tiles we were looking at around the latest Eves---were at the end of very long chains.
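A toy model makes that growth concrete (this is illustrative Python, not the server's actual paging scheme; the bucket counts and tuple keys are arbitrary): with a fixed number of buckets, the average number of chain hops per lookup grows linearly with the number of stored keys.

```python
def average_chain_probes(num_buckets, num_keys):
    # count how many keys land in each bucket's collision chain
    chains = [0] * num_buckets
    for k in range(num_keys):
        chains[hash(("tile", k)) % num_buckets] += 1
    # a key at depth d in its chain costs d probes to find;
    # averaging over every stored key gives sum of c*(c+1)/2 per chain
    return sum(c * (c + 1) / 2 for c in chains) / num_keys
```

At half occupancy the average lookup is one or two probes; once the key count far exceeds the bucket count, it is dozens, and the newest (most needed) keys are the ones paying full price.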
The game uses an existing database module called KISSDB that is very fast, but probably not designed with this usage pattern in mind.
The long-term solution is to re-write the database from scratch as a stack, so that the most recently-accessed elements are the fastest to access, while the forgotten parts of the map slide to the ends of the chain. I'll be doing that work next week.
In the meantime, I changed the usage patterns for some of the largest databases, resulting in a huge performance increase and reduced RAM footprint.
The servers are lag-free again.
And once I write a new database engine, performance should be even better, allowing me to raise the player caps per server.
So I hope you'll stick with me as I continue working to improve the game.
It's not over yet. We have years to go, together.