New Eve placement algorithm
April 24, 2018

https://onehouronelife.com/newsImages/evePlacement/spiral4.png

The way Eves are placed has a substantial impact on the feeling of the game. Whenever a player joins the server when there is no suitable mother for that player, that player spawns as a full-grown Eve instead of as a baby. Eve serves as the potential root of a new family tree, and her placement determines the opportunities that are available to that family.

Originally, Eves were placed at random inside an arbitrary radius around the world location (0,0).

This worked fine for a while, until that area began to fill up with civilization. Eve is supposed to feel like a fresh start, with maybe a small chance of stumbling into the ruins of a past civilization, or eventually bumping up against a living, neighboring civilization. As the center of the map got full, Eve was always just a stone's throw away from a village. Furthermore, with everyone so close together, there was no danger of losing a village if it died out. Thus, keeping a village alive meant nothing. We could always find our way back to revive the ghost town tomorrow. Even worse, as these areas got ravaged by human activities, the resources that a new Eve needs to bootstrap became more and more scarce. Eve does need a somewhat green pasture to found a new civilization.

The next Eve placement algorithm involved a random walk across the map, looking at the last Eve location and making a random jump 2000 tiles away to place the next Eve. There can be some randomly-occurring back-tracking with this method, which means that Eve can sometimes end up near the ruins of old civilizations, but we expect such a random walk to eventually explore the entire map, so we will also get farther and farther from center over time. And with many Eves dying without founding new civilizations, and also perhaps due to biases in the random coordinate generator, we quickly walked our way from (0,0) out into the 50,000's. This meant that villages were generally too far apart to interact with each other. Still, Eve was usually in an untouched area full of natural resources.

The next Eve placement algorithm was radial, placing Eves randomly at a radius of 1000 from a fixed center point. This put all villages within trading distance of each other, and offered plenty of untouched space for Eve---for a while. But soon, the "rim" of the wheel filled up with civilization, and we were back to where we started---Eves placed in a crowded area that was stripped bare of natural resources.

The latest Eve placement algorithm was suggested long ago by Joriom and maybe a few other people, and involves an ever-growing spiral around a fixed center point. This guarantees that Eve is always in an untouched area, but also that she is never too far from some recent civilizations, so trade can happen.

While a server is running, the placements look like this:

https://onehouronelife.com/newsImages/evePlacement/spiral.png

You can see the three initial Eve placements at the center, which the server permits at startup to ensure that the first few Eves can have a chance to bootstrap a village in that spot. After that, the spiral ensues, and would keep going as long as the server was running.
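
Roughly, the placement logic can be sketched like this. This is just a minimal illustration, not the actual server code: the names, the constants, and the choice of an Archimedean spiral are all assumptions made for the sake of the example.

    #include <math.h>

    struct GridPos { int x, y; };

    static GridPos spiralCenter = { 0, 0 };   // reset at each server startup
    static int eveCount = 0;                  // Eves placed since startup

    GridPos getNextEvePosition() {
        // the first few Eves land right at the center, so they can
        // bootstrap a village in that spot
        if( eveCount < 3 ) {
            eveCount ++;
            return spiralCenter;
            }

        // after that, walk outward along an Archimedean spiral ( r = a * theta ),
        // so each Eve lands in fresh territory a bit farther out than the last
        static double theta = 2 * 3.14159265358979;   // start one full turn out
        double armSpacing = 40;    // tiles between spiral arms (assumed)
        double stepLength = 40;    // tiles between consecutive Eves (assumed)

        double a = armSpacing / ( 2 * 3.14159265358979 );
        double r = a * theta;

        // arc length per radian is roughly r, so this keeps consecutive
        // placements about stepLength tiles apart
        theta += stepLength / r;

        eveCount ++;
        return GridPos{ spiralCenter.x + (int)lround( r * cos( theta ) ),
                        spiralCenter.y + (int)lround( r * sin( theta ) ) };
        }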

What happens when a server shuts down though, as it does every week during updates?

First, the death location of the longest-lineage person at shutdown is remembered. This becomes the "center" of the spiral at the next startup, and the first three Eves are placed near there. After that, a new spiral grows around that new center point, like this:

https://onehouronelife.com/newsImages/evePlacement/spiral2.png
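
In code, the bookkeeping for this is tiny: save one coordinate pair at shutdown, and reset the Eve counter at startup. Here is a rough self-contained sketch; the file name and function names are made up, and the real server may store this differently.

    #include <stdio.h>

    static int centerX = 0, centerY = 0;   // spiral center for this run
    static int eveCount = 0;               // Eves placed since startup

    // at shutdown:  remember where the longest-lineage player died
    void rememberLongestLineageDeath( int x, int y ) {
        FILE *f = fopen( "eveCenter.txt", "w" );   // hypothetical file name
        if( f != NULL ) {
            fprintf( f, "%d %d\n", x, y );
            fclose( f );
            }
        }

    // at startup:  that spot becomes the new spiral center, and the Eve
    // counter resets so the first three Eves land near it again
    void loadSpiralCenter() {
        FILE *f = fopen( "eveCenter.txt", "r" );
        if( f != NULL ) {
            fscanf( f, "%d %d", &centerX, &centerY );
            fclose( f );
            }
        eveCount = 0;
        }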

As this new spiral grows, it will cross through the previous spiral like so:

https://onehouronelife.com/newsImages/evePlacement/spiral3.png

That is okay, because any of the other civilizations that were active at shutdown will potentially be rediscovered by Eves, which makes for an interesting Eve variation. Still, future Eves will not be trapped in these already-developed areas, as the spiral will continue further out into untouched areas again.

Also, some of the older, long forgotten villages in the old spiral will be wiped when the server restarts due to the 24-hour abandonment cut-off. Thus, the new spiral will cross through many now-wild areas, even as it overlaps the old spiral.

As the new spiral grows bigger, it will eventually engulf the old spiral and move beyond it, but the overlapping area will always be substantially less than half of the new spiral. The old spiral covers a fixed area, while the new spiral keeps growing, so the fraction of the new spiral that overlaps old territory only shrinks over time. Thus, even if 100% of the old spiral contained active villages that were not wiped, more than half of the Eves in the new spiral will be placed in untouched wilderness.

https://onehouronelife.com/newsImages/evePlacement/spiral4.png







Update: From Riches to Rags
April 21, 2018

https://i.imgur.com/Dn9qIiz.gif

75 new things. Mostly broken things. Rotting things. Fragile things.

And stone walls, which are the opposite of all of that. And locking doors.

A whole new web of interdependence for farming.

A new mother selection method (temperature, not food).






Update: The Monument
April 14, 2018

https://i.imgur.com/jo2YXu0.gif

If you build it...

We crossed one million lives lived inside the game this week. It's kind of mind boggling. And a group of well-coordinated players also trounced the previous lineage record of 32 generations, making it all the way up to 111 generations with the Lee family. I've posted the 111 names from the matriarch chain here.

I've tied all these trends together in this update, which includes a monument, along with quite a few other changes.

In light of the recent server performance updates, the per-server player cap has been raised.

The experience of being murdered has been dramatically improved. (Yes, that is a peculiar sentence.) It has never actually happened to me in game yet, and I hope it will never happen to you. But if it does, you'll have a small bit of time to get your affairs in order...

Long term food sustainability has been made much harder. Getting up to 30+ generations in one spot should be a pretty substantial challenge, and you won't be able to do it on carrots alone. The top has been hardened.

Short term food availability has been made a bit easier, with the addition of two more edible wild plants. But they don't grow back, and they can't be domesticated, so they don't affect the long term challenge. Being an Eve in the wilderness should be a little bit less stressful. The bottom has been softened.

And there may be one little tiny surprise in there too... the idea came to me while I was falling asleep last night... I just had to put it in.

Also, monument logging is in place server-side, but the processing and display of those logs is just a counter for now. A better monument roster will be implemented next week, assuming that it's needed. And yes, that is supposed to be a challenge.






Server Performance Improvements are Live
April 13, 2018

I spent quite a bit of time this week on server performance.

The old database engine, the amazingly fast and compact KISSDB, was not designed for an ever-growing data set where the newest data are accessed more than the oldest.

As players continue exploring new areas of the map, the data from older areas becomes less relevant, but that is the data that is the fastest to access in KISSDB. In fact, we were constantly wading through that old data to get to the latest stuff, which essentially ended up at the end of the list in KISSDB's append-only data structure. It got slower and slower as the data got bigger and bigger.

This drop in performance is expected when a hash table fills up, and thus the KISSDB documentation recommends a table that's "large enough" for the expected data.

But the expected data in this case is unbounded. We cannot pick an appropriate size, because the data will keep growing, and we don't want performance to degrade as that happens.

A stack-based hash table is much better suited for this usage pattern. The latest and most important stuff can remain at the top for fast access. So I wrote a new database engine from scratch on Monday and Tuesday. It helped a lot.
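
To give a feel for the idea, here is a minimal in-memory sketch (not the real disk-based engine, and the names are made up): each hash bucket is treated as a stack, with the newest record pushed on top, so lookups for recently-written keys finish after a few hops no matter how much stale data is buried below.

    #include <string>
    #include <vector>
    #include <functional>

    struct Record {
        std::string key;
        std::string value;
        Record *next;   // next (older) record in this bucket's stack
        };

    class StackTable {
        std::vector<Record*> buckets;

        size_t bucketFor( const std::string &key ) const {
            return std::hash<std::string>()( key ) % buckets.size();
            }

      public:
        StackTable( size_t numBuckets ) : buckets( numBuckets, nullptr ) {
            }

        // newest record goes on top of its bucket's stack
        // (an existing key is simply shadowed by the newer entry)
        void put( const std::string &key, const std::string &value ) {
            size_t b = bucketFor( key );
            buckets[ b ] = new Record{ key, value, buckets[ b ] };
            }

        // newest-first search:  recent keys are found quickly, even if
        // the bucket is piled high with old, rarely-touched records
        const std::string *get( const std::string &key ) const {
            for( Record *r = buckets[ bucketFor( key ) ];
                 r != nullptr; r = r->next ) {
                if( r->key == key ) {
                    return &( r->value );
                    }
                }
            return nullptr;
            }
        };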

The stack-based implementation that I came up with (thanks Chard for all the thoughtful discussion along the way) is 7x faster on average and even uses a bit less disk space (6% less). But more interestingly, it's entirely disk based, using almost no RAM. 13,000x less RAM than KISSDB on a test data set, in fact. KISSDB holds part of the data structure in RAM for performance, and that RAM usage grows as the data grows, but the stack is so much faster for accessing recent data that it doesn't matter---we can do it all via disk accesses.

The stack database actually has a flat RAM profile regardless of how big the data grows, and CPU usage on recently-used data is flat as well, regardless of how big the entire data set (including old, less-used data) gets.

The impact on server CPU usage is quite remarkable, as can be seen in this before-and-after graph (with the same 40 players on server1 the whole time). The new database went live at the 10:00 mark:

https://onehouronelife.com/newsImages/serverCPU.png

I also did some live profiling with Valgrind and found a few more hotspots that could benefit from RAM-based caching of procedurally-generated map data. And since the database now uses almost no RAM, we have RAM to spare for this kind of caching.
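
The caching itself can be very simple. A rough sketch of the kind of thing that helps here (the function names and the cache size are assumptions, and the real map code is more involved): a small fixed-size table that remembers the results of the expensive procedural call for recently-seen tiles.

    #include <stdint.h>

    // stand-in for the expensive procedural map-generation call (hypothetical)
    int computeMapBiome( int x, int y ) {
        // the real thing does costly noise-based generation
        return (int)( ( (unsigned int)x * 31u + (unsigned int)y * 17u ) % 7 );
        }

    #define MAP_CACHE_SIZE 8192    // assumed size; tune against available RAM

    struct MapCacheEntry {
        int x, y;
        int biome;
        char valid;
        };

    static struct MapCacheEntry mapCache[ MAP_CACHE_SIZE ];

    // direct-mapped cache:  one slot per hashed coordinate, so repeated
    // lookups of recently-visited tiles skip the procedural work entirely
    int getMapBiome( int x, int y ) {
        uint32_t h = ( (uint32_t)x * 73856093u ) ^ ( (uint32_t)y * 19349663u );
        struct MapCacheEntry *e = &( mapCache[ h % MAP_CACHE_SIZE ] );

        if( e->valid && e->x == x && e->y == y ) {
            return e->biome;   // hit
            }

        // miss:  do the expensive computation and remember it
        e->x = x;
        e->y = y;
        e->biome = computeMapBiome( x, y );
        e->valid = 1;
        return e->biome;
        }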

Where the server RAM usage used to grow to 300 MiB or more as the map data grew, it now sits steady at only 17 MiB. Yes, that's 17 MiB of RAM total for hosting 40 active players.

What does this mean? First of all, it means the servers are finally lag-free, assuming that you're not experiencing true network lag.

Second, it means we can finally have more players on each server. I'll be upping the number gradually over time and keeping an eye on lag and performance. I expect we can easily get at least 80 on each server, and maybe quite a few more than that.






Windows read-only bug finally fixed, scheduled downtime tonight
April 11, 2018

Most important: Main website down at 10pm PST tonight (Wednesday) for maintenance.

I've finally been able to find and fix that troublesome "read-only game folder" issue on Windows. If you're having problems with this, downloading v65b for Windows from your original download link will fix it.

The problem in v65 is triggered whenever the update process includes two binary updates to the client EXE. This can happen if you take a break from the game and return to a backlog of updates. Thus, just because the update process always worked for you in the past does not mean it won't fail for you in the future. This is why the bug was so hard to find and confirm, amid thousands of "hey, it works for me" reports.

So if you ever run into a read-only issue during a future update, installing fresh from v65b should fix it for you.



Next issue: Scheduled down-time, 10pm PST today (Wednesday)

My server provider, Linode, has been working to patch the recently discovered Spectre vulnerability in modern CPUs. The patch requires a reboot of each server.

For the most part, this is fine for this game, because I'm running 15 game servers anyway, and I can easily take them down in batches while people continue to play.

However, the main web server, onehouronelife.com, needs to be rebooted too. Unfortunately, the home page, forums, purchase system, and client reflector are all hosted on this server. The reflector is the most mission-critical part of all this, because it's what clients talk to first to find out what game server they should go to.

And this important piece needs to go down so Linode can install the update.

The game servers will keep running, and your current lives won't be disrupted, but if you try to get reborn, you won't be able to find a new server to connect to. You can work around this outage temporarily with the "customServerAddress.ini" setting. Take a look in the forums or ask in Discord for help with this.

I plan to trigger the down-time at 10pm PST today. I'm not sure how long the patch process will take, but it could be up to two hours.

Game purchase will be unavailable during the downtime.

Hopefully, it will be way shorter than two hours.


Thanks for understanding!
Jason





