One Hour One Life Forums

a multiplayer game of parenting and civilization building


#76 2018-07-25 22:33:57

jasonrohrer
Administrator
Registered: 2017-02-13
Posts: 4,804

Re: Welp, found the actual source of lag

And yet another thing:

Linear probing increases the chances of false positives.  KISS has amazing performance when looking for records that aren't present.  It seems like this is mostly due to the lack of clutter in the first-line hash table.


#77 2018-07-26 03:50:58

jasonrohrer
Administrator
Registered: 2017-02-13
Posts: 4,804

Re: Welp, found the actual source of lag

Really proud of this result today.  This is for a fixed-size on-disk linear probing hash table that keeps 16 bits per hash record in RAM (so 60 MiB of RAM for 30 million records), using MurmurHash2:

Opening DB (table size 30000000) used 63760112 bytes, took 0.852000 sec
Sanity check passed
Inserted 15752961
Inserts used 0 bytes, took 247.927000 sec
Flushing system disk caches.

/dev/sda:
Random lookup for 500 batchs of 3000 (1500000/1500000 hits)
Random look used 0 bytes, took 4.964000 sec
Checksum = 186209500
Flushing system disk caches.

/dev/sda:
Last-inserted lookup for 500 batchs of 2401 (1200500/1200500 hits)
Last-inserted lookup used 0 bytes, took 3.845000 sec
Checksum = 283318000
Flushing system disk caches.

/dev/sda:
Random lookup for non-existing 500 repeating batchs of 3000 (0/1500000 hits)
Random look/miss used 0 bytes, took 0.479000 sec
Flushing system disk caches.

/dev/sda:
Random lookup for non-existing 1500000 non-repeating values (0/1500000 hits)
Random look/miss used 0 bytes, took 0.628000 sec
Flushing system disk caches.

/dev/sda:
Inserts for previously non-existing 500 batchs of 3000
Inserts after miss used 0 bytes, took 7.086000 sec
Flushing system disk caches.

/dev/sda:
Iterated 15755962, checksum 1957193177
Iterating used 0 bytes, took 12.888000 sec
Max bin depth = 63

real	4m40.847s
user	0m52.576s
sys	1m57.399s
-rw-r--r-- 1 root root 630000015 Jul 26 01:15 test.db

Here's 32-bit KISSDB for comparison:

Opening DB (table size 30000000) used 5840 bytes, took 0.000000 sec
Adding new page to the hash table
Sanity check passed
Adding new page to the hash table
Adding new page to the hash table
Inserted 15752961
Inserts used 360001536 bytes, took 188.042000 sec
Flushing system disk caches.

/dev/sda:
Random lookup for 500 batchs of 3000 (1500000/1500000 hits)
Random look used 0 bytes, took 5.152000 sec
Checksum = 186209500
Flushing system disk caches.

/dev/sda:
Last-inserted lookup for 500 batchs of 2401 (1200500/1200500 hits)
Last-inserted lookup used 0 bytes, took 0.836000 sec
Checksum = 283318000
Flushing system disk caches.

/dev/sda:
Random lookup for non-existing 500 repeating batchs of 3000 (0/1500000 hits)
Random look/miss used 0 bytes, took 1.186000 sec
Flushing system disk caches.

/dev/sda:
Random lookup for non-existing 1500000 non-repeating values (0/1500000 hits)
Random look/miss used 0 bytes, took 6.163000 sec
Flushing system disk caches.

/dev/sda:
Adding new page to the hash table
Inserts for previously non-existing 500 batchs of 3000
Inserts after miss used 120000512 bytes, took 8.031000 sec
Flushing system disk caches.

/dev/sda:
Iterated 15755962, checksum 1957193177
Iterating used 0 bytes, took 38.021000 sec
Max bin depth = 4

real	4m8.763s
user	0m52.005s
sys	2m14.527s
-rw-r--r-- 1 root root 795119272 Jul 25 23:30 test.db

Mine is faster on everything but initial insert.

On random NULL-result lookups, mine is 10x faster.  And that's the most important case for this application, because most map cells are empty and not in the database.

As for why initial insert is slower, I think it's because KISS appends all bulk data to the end.  The records are 20 bytes in the case of KISSDB and 21 bytes in the case of my new database (there's a "present" byte at the beginning of each record).  But during insert, KISSDB is appending all of the records to the end of the file, whereas mine is jumping all around inside the file, randomly inserting records directly into the hash table on disk.  KISS is also randomly inserting file pointers in the hash-table-portion of its file.

Not really sure why sequential writes are faster than random-access writes, though I suppose similar caching principles apply in reverse.

Will take a look with a profiler tomorrow and see if I can figure anything out about the initial insert.

Also, found this option today for valgrind callgrind, which rocks:

valgrind --tool=callgrind --collect-systime=yes

This records system call times, so it can show you which parts of your code are blocking on IO the most.  Wish I had been using this all along.  Normal profiling counts instruction fetches, which doesn't show the slow parts that are actually blocked on IO.
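For reference, a full run looks something like this (the binary name here is just a placeholder), and then callgrind_annotate can be pointed at the output file to see per-function costs:

valgrind --tool=callgrind --collect-systime=yes ./myServer
callgrind_annotate callgrind.out.<pid>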

Really thrilling when I found that my random lookup of NULL-result values resulted in ZERO system call time (because it never touched the disk, using just the in-RAM 16-bit fingerprint map for all cases).

Oh, and here's a version of my new linear-probing disk-based table that uses DJB2 instead of MurmurHash2:

Opening DB (table size 30000000) used 63760112 bytes, took 0.796000 sec
Sanity check passed
Inserted 15752961
Inserts used 0 bytes, took 190.721000 sec
Flushing system disk caches.

/dev/sda:
Random lookup for 500 batchs of 3000 (1500000/1500000 hits)
Random look used 0 bytes, took 5.107000 sec
Checksum = 186209500
Flushing system disk caches.

/dev/sda:
Last-inserted lookup for 500 batchs of 2401 (1200500/1200500 hits)
Last-inserted lookup used 0 bytes, took 3.444000 sec
Checksum = 283318000
Flushing system disk caches.

/dev/sda:
Random lookup for non-existing 500 repeating batchs of 3000 (0/1500000 hits)
Random look/miss used 0 bytes, took 0.483000 sec
Flushing system disk caches.

/dev/sda:
Random lookup for non-existing 1500000 non-repeating values (0/1500000 hits)
Random look/miss used 0 bytes, took 0.616000 sec
Flushing system disk caches.

/dev/sda:
Inserts for previously non-existing 500 batchs of 3000
Inserts after miss used 0 bytes, took 7.524000 sec
Flushing system disk caches.

/dev/sda:
Iterated 15755962, checksum 1957193177
Iterating used 0 bytes, took 12.643000 sec
Max bin depth = 12

real	3m43.509s
user	0m48.033s
sys	1m46.764s
-rw-r--r-- 1 root root 630000015 Jul 26 03:47 test.db

This brings the inserts on par with KISS-32.  Though I worry that djb2 is just exhibiting "better" behavior because of the toy key examples that I'm using here.
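For reference, the classic djb2 recurrence (shown here in its textbook string form, not necessarily byte-for-byte what I'm running) is just hash * 33 + c:

unsigned long djb2( unsigned char *inString ) {
    // classic djb2:  start at 5381, then hash = hash * 33 + c for each byte
    unsigned long hash = 5381;
    int c;

    while( ( c = *inString++ ) != 0 ) {
        hash = ( ( hash << 5 ) + hash ) + c;
        }

    return hash;
    }

With small, mostly-sequential keys like the toy ones in this test, that multiply-and-add tends to spread consecutive keys into consecutive bins, which may be exactly the flattering behavior I'm worried about.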

But KISS-32 was not using Murmur for the above results.  It couldn't without overflowing RAM due to too many collisions.

Mine is also much more graceful when there are lots of collisions.


#78 2018-07-26 05:12:25

sc0rp
Member
Registered: 2018-05-25
Posts: 740

Re: Welp, found the actual source of lag

jasonrohrer wrote:

Not really sure why sequential writes are faster than random-access writes, though I suppose similar caching principles apply in reverse.

Because you cannot write just 21 bytes to disk.  You must write a whole sector, usually 4k.  That turns the write into a read-modify-write.  If the sector is not in the cache, it first needs to be brought in.

Also, for sequential writes, the C library will buffer some data first, so it takes fewer write syscalls, which are quite costly.  It may not need as many file seek syscalls either.
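Toy illustration of the difference (not your actual write path, names made up):

#include <stdio.h>
#include <stdlib.h>

// toy comparison of append-style vs. seek-scattered 21-byte record writes
void writeRecords( FILE *file, int numRecords, int scattered ) {
    char record[21] = {0};
    for( int i = 0; i < numRecords; i++ ) {
        if( scattered ) {
            // every fseek flushes the stdio buffer, so each record costs its own
            // seek + write syscall, and the kernel read-modify-writes a whole page
            fseek( file, (long)( rand() % numRecords ) * 21L, SEEK_SET );
            }
        // without the seeks, stdio coalesces many small fwrites
        // into a few large write() syscalls
        fwrite( record, 21, 1, file );
        }
    fflush( file );
    }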

Last edited by sc0rp (2018-07-26 05:18:36)


#79 2018-07-26 17:56:34

jasonrohrer
Administrator
Registered: 2017-02-13
Posts: 4,804

Re: Welp, found the actual source of lag

So.... the way that I'm doing this, to avoid touching the disk at all during the linear probing, is to keep a 16-bit hash fingerprint of each key that's in each slot in the table.  This fingerprint is derived from MurmurHash2 with a different seed, and it XORs the 64-bit hash down to 16 bits.

I store a map of these fingerprints in RAM that matches the current size of the table.

This is similar to what KISS does, but KISS stores a full table of 64-bit file pointers, and that doesn't even help you determine whether you have a match (you still have to hit the disk to check the key).  What I'm doing is 4x smaller and prevents disk hits in most cases.  The 16-bit fingerprint hash is close to collision-proof, because we only check it IF we've already done a full modded hash that lands in this bin (or we've linear probed our way to this bin).  So, effectively, the fingerprint is way bigger than 16 bits.  And in the extremely rare case of both a hash table collision and a key fingerprint collision, we can check the true key in the file.
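Roughly, the probe loop is shaped like this (just a sketch with made-up names, not the actual source; it assumes a fingerprint of 0 is reserved to mean "empty slot" and that the table never fills completely):

#include <stdint.h>

// hypothetical helper:  seeks to slot bin in the table file and compares the stored key
int diskKeyMatches( uint64_t bin, const void *key, int keyLen );

// returns the slot holding key, or -1 if the key is definitely not in the table
int64_t findSlot( uint16_t *fingerprintMap, uint64_t tableSize,
                  const void *key, int keyLen,
                  uint64_t keyHash, uint16_t keyFingerprint ) {
    uint64_t bin = keyHash % tableSize;

    while( fingerprintMap[ bin ] != 0 ) {
        if( fingerprintMap[ bin ] == keyFingerprint ) {
            // probable hit:  only now do we touch the disk to check the true key
            if( diskKeyMatches( bin, key, keyLen ) ) {
                return (int64_t)bin;
                }
            // extremely rare double collision:  keep probing
            }
        // linear probing:  step to the next bin
        bin = ( bin + 1 ) % tableSize;
        }

    // hit an empty slot without ever touching the disk
    return -1;
    }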

As for why no bloom filter, it seemed a bit more complicated, and it wasn't clear how it could be resized as the number of elements grows.  And it only checks for existence.  It doesn't help with linear probing, where we want a non-false-negative-test to see "is this key possibly here in this slot?"


All that said, I'm now wondering if the cache locality of linear probing helps anymore.  I mean, there's RAM cache locality, but I think this whole thing is IO bound on the final seek and read (or write) that happens once linear probing finds a match.

Though maybe no other method would be any better, since it's all IO-bound anyway, and the NULL-results are fast enough now that they don't really even matter... I guess some other method might make them slightly faster?



The only place where my implementation lags behind KISS is in repeated lookup of batches of values that DO exist in the database.  KISS does this in a flash because all the values are in the same place on the disk (the test is looking up the most recently inserted values, which should all be together at the end of the file).  My code is randomly jumping around in a hash table that's the full size of the file to find these values.

I can't think of an obvious way to improve this, other than by storing 64-bit file locations in RAM like KISS does, and then having the file records be written sequentially in order of creation instead of in hash-table order.

Another idea is to keep the hash table of 64-bit file pointers on disk in a separate file.  Random access would only happen in that file, whereas sequential access would be more likely to happen in the separate "data block" file where the records are stored in order of creation.

But anyway, what I have is good enough for now, I think.

And I haven't added the linear hashing table expansion part yet.  Wanted to get fixed table size working and optimized first.


#80 2018-07-26 20:13:21

jasonrohrer
Administrator
Registered: 2017-02-13
Posts: 4,804

Re: Welp, found the actual source of lag

An interesting issue came up:

I'm using standard linear probing with one record per bucket.

I'm trying to blend this with linear hashing, without resorting to an overflow area or separate chaining or anything else.

It seems to me that when we need to split a bucket and rehash it with the new, 2x modulus, we also need to consider the effects of previous linear probing inserts around that bucket.  Elements that should be "in" this bucket can also be further to the right, due to linear probing.

Even the element in this bucket might not really belong to this bucket.  It may belong further to the left, but ended up here due to linear probing.

We also don't want to move elements out of this bucket and leave "holes" that will trip up future linear probing searches.  Linear probing assumes no removal.

So, it seems that all elements in the contiguous element segment to the left and right of this bucket (up to the first empty cell to the left or right) need to be rehashed.  They don't all need to be rehashed using the new 2x mod.  They need to be rehashed according to the latest placement rule based on the new split point.

I've tried to come up with some way to rehash them one at a time, while leaving the rest in place.

I'm pretty sure that rehashing them from left to right would be safe, because after you remove a given item, there's no way that it will end up reinserted further to the right than its old, empty location.  I'm also not sure that the cells to the left of the bucket need to be rehashed at all; it seems like they don't.

And I have an intuition that this method will also not leave any holes that will trip up linear probing, but I have no proof prepared.  There may be a proof by induction on the length of the segment of filled cells.


It also seems that, due to wrap-around issues with linear probing, the contiguous segment of cells starting with the first cell of the table always needs to be rehashed when the table is expanded.  Previous expansions could have moved an unbounded number of cells into the new bin at the end, and many of these could have wrapped around to the first bin via linear probing.  The latest expansion could leave an empty cell at the end of the table, creating a hole for future linear probes.


#81 2018-07-26 21:04:44

sc0rp
Member
Registered: 2018-05-25
Posts: 740

Re: Welp, found the actual source of lag

jasonrohrer wrote:

It seems to me that when we need to split a bucket and rehash it with the new, 2x modulus, we also need to consider the effects of previous linear probing inserts around that bucket.  Elements that should be "in" this bucket can also be further to the right, due to linear probing.

Even the element in this bucket might not really belong to this bucket.  It may belong further to the left, but ended up here due to linear probing.

Correct.

jasonrohrer wrote:

We also don't want to move elements out of this bucket and leave "holes" that will trip up future linear probing searches.  Linear probing assumes no removal.

We cannot do this blindly, but sometimes a split will create a hole that is legit.

jasonrohrer wrote:

So, it seems that all elements in the contiguous element segment to the left and right of this bucket (up to the first empty cell to the left or right) need to be rehashed.  They don't all need to be rehashed using the new 2x mod.  They need to be rehashed according to the latest placement rule based on the new split point.

You don't need to bother yourself with elements to the left - you've already handled them in previous splits.  And yes, you need to handle the whole island to the right, possibly moving some elements back.  They cannot be moved into an arbitrary bucket, though, only as far left as their primary bucket (the one they would hash into without collisions/overflows).  This process may create holes, or some elements may jump over the ones that cannot move.

IMO the best way to handle it is to split all the buckets in an island in one go, as in the sketch below.  You need to analyze them anyway, figure out where they should go, write the ones that move, and create some holes.  Otherwise, on the next split you will be analyzing it all over again.  So better to just split all the buckets, up to and including the trailing empty one, in one go.
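In-memory toy of that, names made up (and it assumes the table always has at least one empty cell, which you need anyway for linear probing to terminate):

#include <stdint.h>
#include <stdlib.h>

typedef struct {
    int present;
    uint64_t key;
    // value bytes omitted for the sketch
    } Record;

// hypothetical placement rule:  old modulus or new 2x modulus,
// chosen from the split point exactly as linear hashing normally does
uint64_t placementBin( uint64_t key, uint64_t tableSize, uint64_t splitPoint );

// lift the whole contiguous run of occupied cells starting at islandStart out of
// the table, then reinsert each record under the new placement rule with plain
// linear probing.  The cells left empty are exactly the legit holes.
void splitIsland( Record *table, uint64_t tableSize,
                  uint64_t splitPoint, uint64_t islandStart ) {
    // measure the island (stop at the first empty cell)
    uint64_t count = 0;
    while( table[ ( islandStart + count ) % tableSize ].present ) {
        count++;
        }

    // pull the records out
    Record *lifted = malloc( count * sizeof( Record ) );
    for( uint64_t i = 0; i < count; i++ ) {
        uint64_t cell = ( islandStart + i ) % tableSize;
        lifted[ i ] = table[ cell ];
        table[ cell ].present = 0;
        }

    // put them back:  each one lands in its primary bucket,
    // or in the first free cell after it
    for( uint64_t i = 0; i < count; i++ ) {
        uint64_t bin = placementBin( lifted[ i ].key, tableSize, splitPoint );
        while( table[ bin ].present ) {
            bin = ( bin + 1 ) % tableSize;
            }
        table[ bin ] = lifted[ i ];
        }

    free( lifted );
    }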

jasonrohrer wrote:

I've tried to come up with some way to rehash them one at a time, while leaving the rest in place.

You can, by marking elements as deleted so linear probing knows not to stop there.  But it's completely counterproductive - you'll end up with longer and longer chains.  Instead, you want to move elements back closer to their primary bucket and create holes that stop linear probing.

jasonrohrer wrote:

It also seems that, due to wrap-around issues with linear probing, the contiguous segment of cells starting with the first cell of the table always needs to be rehashed when the table is expanded.  Previous expansions could have moved an unbounded number of cells into the new bin at the end, and many of these could have wrapped around to the first bin via linear probing.  The latest expansion could leave an empty cell at the end of the table, creating a hole for future linear probes.

The best way to handle it is to avoid it altogether - never wrap around.  When a collision/overflow happens in the last bucket, just create a new one following it.  It should be marked as special, because it's not a normal split bucket yet (so you know where the next split should happen even if you quit the app and restart later).  On the next split, you need to take the overflowed elements into consideration, possibly creating the next overflow bucket(s).

Last edited by sc0rp (2018-07-26 21:08:12)


#82 2018-07-27 01:46:17

sc0rp
Member
Registered: 2018-05-25
Posts: 740

Re: Welp, found the actual source of lag

jasonrohrer wrote:

So.... the way that I'm doing this, to avoid touching the disk at all during the linear probing, is to keep a 16-bit hash fingerprint of each key that's in each slot in the table.  This fingerprint is derived from MurmurHash2 with a different seed, and it XORs the 64-bit hash down to 16 bits.

16 bits is probably overkill here.  8 would do: <1% chance of false positives.  That's for a single lookup, not a whole chain.  But keeping chains short is necessary anyway.  With linear probing, the probability of extending a chain grows linearly with its length.  So when the load gets high, you won't end up with many long chains - you'll get a few huge ones.  That's why keeping the load below 50% is essential.
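Back-of-envelope, assuming the fingerprints look uniform and independent:

P(false positive, one 8-bit compare)   =  1/2^8   ≈  0.39%
P(false positive over a chain of k)    =  1 - (1 - 1/256)^k  ≈  k/256 for small k
P(false positive, one 16-bit compare)  =  1/2^16  ≈  0.0015%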

jasonrohrer wrote:

As for why no bloom filter, it seemed a bit more complicated, and it wasn't clear how it could be resized as the number of elements grows.

Rebuild a bigger one during splits and query both until the old one can be dropped.

jasonrohrer wrote:

And it only checks for existence.  It doesn't help with linear probing, where we want a non-false-negative-test to see "is this key possibly here in this slot?"

In this case, having the fingerprint is better.  But Bloom DOES help with linear probing.  If you get "no" from the Bloom filter, you don't need to do any probing - the key is nowhere to be found, chains or not.

jasonrohrer wrote:

All that said, I'm now wondering if the cache locality of linear probing helps anymore.  I mean, there's RAM cache locality, but I think this whole thing is IO bound on the final seek and read (or write) that happens once linear probing finds a match.

Though maybe no other method would be any better, since it's all IO-bound anyway, and the NULL-results are fast enough now that they don't really even matter... I guess some other method might make them slightly faster?

Other methods like double hashing keep chains shorter.  But they are incompatible with linear hashing - there is no way to extend the table in small increments.

jasonrohrer wrote:

The only place where my implementation lags behind KISS is in repeated lookup of batches of values that DO exist in the database.  KISS does this in a flash because all the values are in the same place on the disk (the test is looking up the most recently inserted values, which should all be together at the end of the file).  My code is randomly jumping around in a hash table that's the full size of the file to find these values.

I can't think of an obvious way to improve this, other than by storing 64-bit file locations in RAM like KISS does, and then having the file records be written sequentially in order of creation instead of in hash-table order.

Write big chunks of data: all the info about a 16x16 block of tiles in one record.  You'll get hundreds of reads' worth of data with just one read.
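E.g. key the database on the chunk instead of the tile (made-up names; the shift by 4 is floor division by 16, and the mask picks the tile's position inside its chunk):

#include <stdint.h>

// made-up sketch:  one record per 16x16 chunk of tiles
typedef struct {
    int32_t chunkX;
    int32_t chunkY;
    } ChunkKey;

ChunkKey chunkKeyForTile( int32_t tileX, int32_t tileY ) {
    ChunkKey k;
    // arithmetic shift by 4 == floor( coord / 16 ) on the usual two's-complement
    // compilers, so negative coordinates land in the right chunk too
    k.chunkX = tileX >> 4;
    k.chunkY = tileY >> 4;
    return k;
    }

// position of the tile inside its chunk's record
int tileIndexInChunk( int32_t tileX, int32_t tileY ) {
    return ( tileY & 15 ) * 16 + ( tileX & 15 );
    }

Then one read pulls in a whole neighborhood at once, and empty tiles inside a stored chunk cost nothing extra.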

jasonrohrer wrote:

Another idea is to keep the hash table of 64-bit file pointers on disk in a separate file.  Random access would only happen in that file, whereas sequential access would be more likely to happen in the separate "data block" file where the records are stored in order of creation.

Then you need some way to access it directly.  Otherwise you'll have a random read plus a sequential one - hardly better than just one random read.

Last edited by sc0rp (2018-07-27 01:50:47)


#83 2018-07-27 02:10:58

sc0rp
Member
Registered: 2018-05-25
Posts: 740

Re: Welp, found the actual source of lag

I find one analogy very useful and helpful in deciding how to optimize code.  Convert the times that the processor sees to a human scale.  It goes like this:  Getting data from a register - just recalling something quickly - 1 second.  Getting data from L1 cache - finding and reading some number on screen - a few seconds.  Getting data from L2 cache - reading a few sentences - 15 seconds.  Getting data from main memory - going to another room, finding a book and the right chapter, reading a page or two, going back - 5 minutes.  Getting data from an SSD - jumping into a car, driving to a nearby state, reading some document there and driving back home - 10 hours.  Getting data from a spinning disk - flying to India, going on a long trip to many different places, talking to many people in search of some old manuscript, finally finding it, reading it and flying home - 1 year.  Getting data from a user in Europe across the Internet - launching a space probe to Jupiter, taking a photo there and flying back - 20 years.

So far you've figured out that sending a novel to users a word at a time wouldn't work.  They'd get lost in the plot.  So when they send a request, you prepare a full page and send it.  But you still think that driving 10 hours to a nearby state and back to get one word at a time works great.  "It just takes a couple of months per user, no big deal.  Look, I have this awesome system: for every word I bring, I print a page with all the information required to know where it came from: chapter, page, sentence, and word number in the sentence.  I have lots and lots of stacks of papers like that.  And I also found out that very frequently I'm asking about a word far into a sentence, but the sentence is too short.  So I also print out papers that let me know that in chapter 7, on page 21, in sentence 15, there is no word number 17!  This system works amazingly well!"

I've been suggesting for some time that if you're driving to the other state, you shouldn't bring back just a single word every day.  Bring a xerox of the whole page.  In one day you'll do several months' worth of work.  But now I have an even better idea.  Bring the whole fricking book home!  Then when you need to read something, you just go to the other room.  -Fit the whole book in my home?  Impossible, look at all those stacks of papers.  I constantly run out of space and need to throw some of them out.  -Well, if you don't keep a single word per page, I'm sure it'll fit.  -But I have to amend this book every day!  What if a tornado hits my home?  All that work would be lost!  -Send a letter to the other state every day listing all the changes you've made.  Then if something goes wrong, the latest version of the book can be reconstructed.  -But that will generate an infinite number of papers with changes.  I couldn't store that!  -Then from time to time let someone apply all the changes and bind a new edition of the book.  Then you only need to store the recent changes.

No, this proposition is not a joke.  Keep the state in RAM.  Currently, for every 21 bytes you write, you store only maybe 2 bytes of actual data.  That's like 90% overhead.  The keys also carry some information, but they are highly correlated; they hardly add more than a bit or two of entropy.  And half of the cells are empty.  So your complete state will be smaller than the table of fingerprints you build just to know that "nothing is there".  Open an append-only file and log every change that should persist across reboots.  On restart, parse the change log and rebuild the state.  Then write a snapshot of the state, so you don't always have to start from the beginning.  You can also do it incrementally: every 100k changes, switch to a new file, and let some background process go through the log entries and dump the resulting state - a shorter list of changes.  No database schemas to keep.  No migrations.  -What if I run out of RAM?  -Pay 5 bucks, double the RAM, reboot.
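Skeleton of it (a toy, names made up): keep the live map in RAM, append every persistent change to a log, replay the log on restart, snapshot in the background:

#include <stdio.h>
#include <stdint.h>

// one persistent change to the map
typedef struct {
    int32_t x, y;
    int32_t objectID;
    } MapChange;

// whatever in-RAM structure you like (hash map keyed on coordinates, etc.)
void applyToRAMState( MapChange c );

// every change goes to RAM and gets appended to the log
void recordChange( FILE *changeLog, MapChange c ) {
    applyToRAMState( c );
    fwrite( &c, sizeof( MapChange ), 1, changeLog );
    fflush( changeLog );   // or batch the flushes if losing a few seconds is acceptable
    }

// on restart:  replay the log to rebuild the RAM state
void replayLog( FILE *changeLog ) {
    MapChange c;
    while( fread( &c, sizeof( MapChange ), 1, changeLog ) == 1 ) {
        applyToRAMState( c );
        }
    }

// every 100k changes, switch to a new log file;  a background process can fold
// the old logs into a compact snapshot, after which they can be dropped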

Last edited by sc0rp (2018-07-27 02:13:22)


#84 2018-07-27 10:18:13

PunkyFickle
Member
Registered: 2018-07-13
Posts: 69

Re: Welp, found the actual source of lag

sc0rp wrote:

I find one analogy very useful and helpful in deciding how to optimize code. [...]

I have no clue about memory management for online programs and the validity of your statements, but your little narration is captivating.


#85 2018-07-27 18:08:08

jasonrohrer
Administrator
Registered: 2017-02-13
Posts: 4,804

Re: Welp, found the actual source of lag

I estimate that what you are proposing would take me two months of full-time programming work.  I mean, to really do it right.


Even replacing the database engine, as simple as it is, with something with a similar structure, has taken me about two weeks.

So, at least for the time being, with the promise of weekly content updates in a one-person project, there are other constraints in place besides finding the solution with the highest performance.


Also, my general approach to "getting things done," as in shipping 19 games in 15 years, or making this crazily complicated thing by myself in three years, is to find the simplest possible solution, the one that's "good enough for now," and just go with it.

The problem that I had at hand originally was this:

1.  The map is practically infinite.  All areas of it should be buildable.  The whole thing will not fit in any imaginable RAM or disk that will likely ever be invented.  Even at only one byte per tile, it would take 16 million terabytes of storage.  And we need way more than one byte per tile.

2.  The representation has to be sparse (see 1) on top of a proc-genned base map.

3.  Sparse representations are troublesome to store and index as pure data without storing coordinates and other information along with each bit of data.  The tiles can't be stored in a simple array of objectIDs, for example.  A hash table or some other data structure is needed.

4.  I had no way to envision how much sparse data would need to be stored.  Would it fit in RAM?  Not in all cases.

5.  I envisioned that servers would sometimes crash or be taken offline unexpectedly.  The world that players are building should generally persist no matter what.  So far, it has NEVER been lost unexpectedly in five months of operation.

6.  In general, I'm hesitant to write a piece of software that is a RAM hog, or that has RAM usage that grows substantially over time.  I use enough pieces of software on a daily basis that do that, and I hate them.  Thrashing is the worst possible case in your analogy.  It's like boxing up parts of your car engine and shipping them back and forth to the nearby state as you try to drive around.  And causing every other car on the road to start doing the same thing.

7.  I also envisioned that there would be a publicly released version of the server that would run on people's desktops.  I'd want that to be lean and mean and use almost no RAM.  Even more important than on a dedicated linode where you can throw pretty much the entire RAM pool at the server.


All of these combined, in my mind at the time, led me to a disk-based database indexed by sparse map coordinates.  It was the simplest solution that solved the problem, and the easiest one to design and build.


You could imagine solving your travel-distance analogy simply by keeping the entire database in RAM, using exactly the same database engine structure.  That would be the simplest way to essentially achieve what you are suggesting, though obviously a huge waste of RAM.  As it stands, the overhead that you point out (17 bytes of a 21-byte record) is a huge waste of disk space, but that's less of a concern.  The linodes have 20x more disk space than RAM.


So, given all of the practical constraints (on me, as a programmer), I'm taking a kind of incremental approach.  If 95% of our accesses are NULL results, and those are the slowest operations in STACKDB, what's the simplest way to make those fast?  Since random disk seeks are so slow, what's the simplest way to reduce random disk seeks to the bare minimum of one seek per matching record?


And yes, when fetching someone's birth map chunk (the biggest chunk we ever fetch for them), the above approach means we could be doing 100s of random seeks to fill out their map chunk.  Your chunk-based proposal would reduce that down to just a handful of seeks.

BUT, we must also remember that an over-full KISSDB was doing 1000s of seeks to fulfill the same request, and the same with STACKDB.  Not only to walk through the hash pages or stacks, but also to do the same thing for all the empty tiles on the map.

So a 10-20x performance boost from a simple change is worth pursuing.

A 100-200x performance boost (storing map data as chunks on disk) would involve a much more complicated change.

A 1000x performance boost (your final suggestion of not touching the disk at all to fetch map data) would raise, for me, the spectre of running out of RAM, and is also a really complicated change.


#86 2018-07-28 07:06:07

jasonrohrer
Administrator
Registered: 2017-02-13
Posts: 4,804

Re: Welp, found the actual source of lag

Another great result:

The test:  load the huge databases in server1, spawn in a random wilderness location, and walk to the right for 1 minute.  This is for a single player (me) walking to the right alone on the server.

Ran with valgrind profiler.

Old code (stackdb):
38% of system call time spent in fseek
3,874,434 calls to fread
1,992,547 calls to fseek


New code (lineardb):
0.21% of system call time spent in fseek
1,593 calls to fread
4,665 calls to fseek

In terms of profiled system time improvement, that's 180x less system call time.

In terms of call counts, that's 2400x fewer fread and 427x fewer fseek calls.


The trade-off here is that the old server code used only 25 MiB of RAM to have these huge maps loaded, and got up and running, from zero to ready to accept connections, in less than 1 second.

The new server code uses 282 MiB of RAM to load these huge maps, and takes 30 seconds to get up and running.

Furthermore, disk usage for map storage has gone from 1.1 GiB to 1.6 GiB.

