a multiplayer game of parenting and civilization building
I spent quite a bit of time refactoring the transition code for objects with a certain number of uses.
That code now works fine even if both actor and target have a certain number of uses (using a mallet on a chisel, for example, correctly decrements the use count on both the mallet and the chisel).
But the underlying implementation still makes objects with 100s or 1000s of uses intractable. Two 100-use objects that interact would result in 10,000 auto-generated transitions.
Someone pointed out recently that the point of item decay was to complicate the process of long-term, trans-generational survival, not to burden individual lives with repetitive busy-work. If you make a tool, you should probably be able to use it for the rest of your life without making another. But the existence of your tool shouldn't exempt the village from making that tool forever. The forge shouldn't lay dormant, but you shouldn't have to keep going back to the forge yourself, during your life.
This means that tools need to last 100s of uses. The other option is to have them decay over time like baskets, regardless of use, but that feels weird. A tool on the ground suddenly breaks? And it also motivates people to use the tool like crazy while they can. Chop as many trees as you can, there's only 5 minutes left before this ax is toast.
The moment where a tool finally breaks from extended use is a great moment, and it feels right that it breaks during a "chop." I don't want it suddenly breaking while you're simply carrying it.
This could be accomplished with a purely random breakage system, but that also feels weird, because brand new tools can break sometimes.
The solution, without completely reimplementing the underlying transition engine, is a hybrid approach.
Each tool can now have a small number of true useCount states, along with a probability of transitioning between those use states.
So, an ax might have 4 uses, with a 0.1 chance of transitioning each time it is used, giving it 40 chops on average, but 4 chops in the absolute worst case. If the probability was instead 0.01, the ax would have 400 uses. 0.001 would give it 4000 uses. All without introducing any additional complexity blow-up in the underlying, auto-generated transitions.
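The scheme described above can be sketched in a few lines. This is a hypothetical Python illustration of the idea, not the game's actual transition code:

```python
import random

def chops_until_break(states=4, p=0.1):
    """Simulate the hybrid scheme: the tool has a small number of true
    use states, and each use advances to the next state with
    probability p. Returns the total number of uses before breaking."""
    chops = 0
    while states > 0:
        chops += 1
        if random.random() < p:
            states -= 1
    return chops
```

On average this gives states/p uses (40 for the ax above), with a hard floor of exactly states uses in the worst case.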
A berry bush has 6 uses, with a 1.0 use chance, meaning that it is decremented every time it is used. But I could also give it a 0.75 chance, meaning that 25% of the time, you get a free berry when you pick one.
The use engine already supports sprites that appear or vanish with use (like berries on the bush or cracks on a tool). For tool usage, those cracks can also coincide with the true state transitions. A 4-use ax can get a crack each time it gets closer to breaking, for example, but the number of uses between cracks appearing is uncertain.
Offline
Thank you! Much appreciated. I guess the rare axe that gets 4 uses had flaws smithed into the steel lol
Be kind, generous, and work together my potatoes.
Offline
It's a little unclear which solution you decided to implement. I can't say I like the idea of it being random, and the way you explained it made it sound like it's only random because that's easier to code, not because it's better game design.
Offline
Looks like he chose the one I was writing about, so you can have another look at that. It’s not a bad solution at all. The way I read Jason’s explanation, he is saying that both fully random and fully determined are worse solutions than what a combination of them is. I for one agree with that. For one thing, it’s more realistic. Let’s try it out and see if it feels good. Otherwise I’m sure there will be new updates.
Offline
When I read the thread yesterday, I also thought that the combination of randomness and different sprites is the best solution. No need to change the whole engine, no big mess, no tracking, just modify what already exists: a fast implementation.
Then again, I wonder: for more advanced objects, what is the absolute minimum number of uses you could implement before it becomes a mess again?
A better tool, or a car for example, that breaks after 4 uses would be insane.
But you could get around this problem by making repairs easier. I think that's what Gus also wrote yesterday?
I just wonder what your thoughts are about this?
It's a rough world - keep dying until you live <3
Offline
That was the idea I suggested in his other thread. The problem was that the way he was doing it wasn't very efficient code-wise and would be a pain to fix. Doing it completely at random was really easy and efficient, but there was a chance of a tool breaking on the first use, which would really suck. So the hybrid method negates the worst-case scenario and gives the tool a minimum number of uses, while remaining easy and efficient to code into the game.
Offline
My hammer was frail because I was the chick blacksmith at the tiny camp.
I made axes, shovels, and chisels.
And when I tried to make a blank file, my hammer broke. *Clink*
I had an adze that never grew into a blank file.
I learned a lesson, but I was 60 years old.
Last edited by JS (2018-04-28 02:37:16)
Offline
Well, the hybrid approach is even better in terms of worst cases, because of the magic of independent events.
If an ax has a 0.01 chance of failing, you expect it to last 100 chops. But out of 100 axes, you expect one to fail on the first chop.
If you split the chops up into four batches and have a 1/25 chance of transitioning to the next batch, you still expect 100 chops on average before failure, but you only expect a 4-chop ax once in about 400,000 axes. It would have to fail on the "first" chop four times in a row, which has a (1/25)^4 chance of happening.
And little realizations like this are the kind of stuff that I live for as a programmer and game designer.
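The worst-case figure above is just arithmetic, and easy to sanity-check:

```python
# Chance of advancing to the next batch on the very first chop,
# four batches in a row: (1/25)^4.
worst_case = (1 / 25) ** 4
axes_per_4_chop_failure = 1 / worst_case
# 25**4 = 390,625, i.e. roughly one 4-chop ax in 400,000
```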
Offline
jasonrohrer wrote: If an ax has a 0.01 chance of failing, you expect it to last 100 chops.
That's actually not the right way to read the math for an axe with a 0.01 chance of failing; I would say it's wrong to expect 100 chops out of it.
On average you can expect 100 chops - that's true, but averages aren't a great metric for a single use case.
Out of 100 chops, the probability of none of the chops failing would be 0.99^100 - that's approx 36.6%.
So the probability of at least one (one or more) chop failing would be approx 1 - 36.6% - so that's 63.4%.
With a 63.4% chance of failure, I wouldn't really be counting on my axe actually lasting 100 chops; the opposite is more probable.
If I wanted an axe that lasts at least 100 or more chops, with a 0.01 failure chance per chop, I'd probably make multiple axes...
http://www.pstcc.edu/facstaff/jwlamb/Ma … sch4.5.pdf
P.S. Otherwise I think the idea is great, just wanted to clarify that averages may be misleading.
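The 36.6% figure above is straightforward to verify (plain arithmetic, not anything from the game's code):

```python
p_fail = 0.01
# Probability that none of 100 independent chops triggers a break:
survive_100 = (1 - p_fail) ** 100
fail_within_100 = 1 - survive_100
# survive_100 is approximately 0.366, fail_within_100 approximately 0.634
```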
Last edited by KucheKlizma (2018-04-28 06:56:32)
Offline
jasonrohrer wrote: The solution, without completely reimplementing the underlying transition engine, is a hybrid approach.
I did not read the source code, I have no idea how your game works, and I am not a perfect programmer.
But this sounds like a dangerous problem to me.
Whenever I decided not to rewrite code in the past (because it's super annoying and takes a lot of time), I regretted it, and in the end I rewrote the code anyway.
But this made me invest a lot of time in dealing with a bad system, time that I could have saved if I had rewritten it immediately.
For me personally it's no problem to wait several months before you make the next update (but I know a lot of people see this differently).
Offline
This solution is brilliant in its simplicity and embraces the best of the current system and RNG. I don't think you need to panic about Jason paving over cracks and having to refactor everything at increased cost later. This change is miniscule in terms of code change, but adds a whole lot of depth to the game. It's almost cheating, really. But worst case, if he has to come up with a better system later, it won't be any harder to refactor anything.
Edit: TMPL (today my phone learned) refactor is a word.
Last edited by Uncle Gus (2018-04-28 09:03:40)
Offline
Uncle Gus wrote: This solution is brilliant in its simplicity and embraces the best of the current system and RNG. [...] But worst case, if he has to come up with a better system later, it won't be any harder to refactor anything.
From his initial explanation, I was actually more worried about efficiency (whether generating a bunch of "dummy objects", as explained, could throttle performance) than about functionality or structure. Does the entire object have to be loaded into memory multiple times? Then again, what matters more is how it looks post-compile, as the compiler DGAF most of the time anyway.
I wish I wasn't a complete amateur and could just check it myself; I'll prolly give it a prod at some point anyway, just for fun.
Offline
jasonrohrer wrote:If an ax has a 0.01 chance of failing, you expect it to last 100 chops.
That's actually not the correct math for an axe with 0.01 chance of failing, I would say it's wrong to expect 100 chops out of it.
I was using "expect" in the mathematical sense, as in "expected value."
Which, for an ax with a 0.01 chance of failing on each chop, is exactly 100 chops.
If you roll a d20 die, and you keep rolling until it rolls the number 13, you expect to roll it 20 times before the number 13 comes up.
Now, in terms of it matching our intuitive understanding of what we "expect" the ax to do.... it's in the right ballpark, but the distribution is skewed: the median is about 69 chops, so more than half of the axes fail before 100 chops, balanced by a minority that last much longer.
I think what you're getting at is that with a geometric probability mass function, so much more of the "weight" is below the expected value of 100. We can experiment with this tool:
https://homepage.divms.uiowa.edu/~mbogn … /geo1.html
The distribution has a very long, thin tail above 100, which actually never goes to 0 all the way to an infinite number of chops. That tail balances a very thick head below 100 chops.
This means that any particular number of chops below 100 is way more likely than any particular number of chops above 100. For example, 50 chops is 3x as likely as 150 chops.
Offline
The decay seems much better so far. Still have to see................................................. the future.
I got huge ballz.
Offline
What you said is absolutely correct, I was referring to "expect" as a non-mathematical concept for when you have a single axe. Practical application.
Anyway this is already addressed by having the chances change between 4 batches of chops, so I'm kinda going off on a tangent here...
With a single axe, the odds are still working against you, and it's an unrealistic expectation to get a full 100 uses out of it.
But, for example, if you target 67-68 chops, you'll be close to a 50/50 chance, even though the mean is 100 chops.
It's similar if you have a coin flip to work with. If you flip the coin once, you can't realistically expect the average outcome of 50% tails and 50% heads. The coin has to land on one side, and either outcome is away from the average.
If you have two coin flips and the first already landed heads, again you shouldn't expect the average outcome overall: the second flip still has the same 50% chance of landing on either side, so it's entirely possible it lands heads too.
The probability of getting an exactly even split from 1 coin flip is 0% - impossible.
The probability of getting an exactly even split from 2 coin flips is 50%.
Basically, all I'm getting at is that averages work best when they have room to work with. In a single random roll, you can truly expect outcomes across the full spectrum of possibilities, and the average is not necessarily the most probable outcome. That's why a lot of people get really angry at RNG: they unrealistically expect an immediately average outcome, which is not how averages work.
Offline
Looking at this further, each "batch" of uses is an independent geometric random variable. If p is the chance of moving on to the next batch with each use, we expect 1/p uses before moving on to the next batch. Our variance is (1 - p) / (p^2). And that is where the magic happens.
For independent random variables, the expectation of the sum of the variables is the sum of each variable's expectation. Same with the variance.
We can achieve the same expected value of "100 chops" with a single batch with 1/100 chance of failure, four batches with 1/25 failure, or ten batches with 1/10 failure. They all have the same expected value, due to the summing of the expected value of each batch. p gets bigger and bigger with the number of batches, but the expected value of the total sum does not change.
However, because the variance includes a p^2 term in the denominator, as p gets bigger, the sum of the independent variances shrinks.
For p = 1/100, Var = 9,900 (std deviation = 99.498 )
For four batches of p = 1/25, each batch has Var = 600, for a sum Var over all four batches of 2400 (sum std deviation = 48.989).
For 10 batches of p = 1/10, each batch has Var = 90, for a sum Var over all 10 batches of 900 (sum std deviation = 30).
In the most extreme case, we could have 100 batches of p = 1/1, where our expected value is still 100, but our Var shrinks to 0.
I recently shipped the iron mine with a single batch and a 1/10 chance of failure. It was supposed to feel like a gamble, but an expected value of 10 sounded like a lot of iron. But with a variance of 90 (std deviation = 9.48), we can see that we expect a very wide range. I'm fixing the iron mine to have more batches now.
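Those batch numbers are easy to confirm empirically. Here is a quick Monte Carlo sketch (hypothetical Python, not the game's code):

```python
import random
import statistics

def lifetime(batches, p):
    """Total uses until all batches are exhausted: the sum of
    independent geometric draws, one per batch."""
    total = 0
    for _ in range(batches):
        while True:
            total += 1
            if random.random() < p:
                break
    return total

random.seed(0)
for batches, p in [(1, 1 / 100), (4, 1 / 25), (10, 1 / 10)]:
    draws = [lifetime(batches, p) for _ in range(50_000)]
    print(batches, statistics.mean(draws), statistics.stdev(draws))
# The three means all come out near 100, while the standard
# deviations come out near 99.5, 49.0, and 30, as computed above.
```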
Offline
For future reference, this ends up being a Pascal random variable. We're waiting to hop through all n states, where we hop between states with probability p. Our expected value, for number of hops before we reach the end, is:
n / p
And our variance is:
n * ( 1-p ) / ( p^2 )
https://www.probabilitycourse.com/calculator/pascal.php
If we want to hold our expected value constant---say we want an axe to last 100 swings on average---then we can vary n and p to achieve this while changing the variance. Obviously, p can never be greater than 1, which means that n can't be greater than the average we're trying to achieve.
But as n goes up while our expected value remains constant, p must go up as well, and our variance goes down. It's pretty amazing how much control this gives us over variance. For example, we can have 98 hops, and p = 0.98, and our expected value is still 100, but our variance shrinks to 2.04.
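Plugging the numbers from this post into the formulas (straight arithmetic):

```python
def pascal_ev_var(n, p):
    """EV and variance of the number of trials until the n-th success
    (Pascal / negative binomial distribution, counting total trials)."""
    return n / p, n * (1 - p) / p ** 2

ev, var = pascal_ev_var(98, 0.98)
# ev is approximately 100, var approximately 2.04
```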
This is also like a simple Markov chain, where you have n nodes, and each node only connects to the next node or itself, and we're asking how long it takes before the final node is reached.
In revisiting this today, five years later (wow!), I couldn't find anyone else discussing this method of controlling variance for game events, like for loot drops, and so on. I also didn't realize, at the time, that the result is a Pascal random variable---at least in terms of measuring "how many steps until" the event happens.
For loot drops, you might be interested in how many drops occur in a given number of total steps (like if a monster drops an item 1/100 times, and you kill 1000 monsters, you expect 10 items, but this is a Binomial distribution, so the variance is 9.9).
With the Markov chain implementation, you can vary chain length and probability to keep the 10 item average while reducing variance, but it's not clear to me how to calculate the resulting variance.
Offline
jasonrohrer wrote: I couldn't find anyone else discussing this method of controlling variance for game events, like for loot drops, and so on. [...] For loot drops, you might be interested in how many drops occur in a given number of total steps (like if a monster drops an item 1/100 times, and you kill 1000 monsters, you expect 10 items, but this is a Binomial distribution, so the variance is 9.9).
Thing is, this method doesn't really work for loot. You will need to kill at least n monsters to get the desired item, so you are trading variance for grind.
Where it does work is durability; I think I saw the exact same method used in some Minecraft mods.
Also, something similar was used in Dwarf Fortress, I think, to determine a dwarf's mood. The mood has a chance to change, let's say every second, and can either rise or fall depending on recent events. Somewhat similar to a random walk.
Weighted loot tables provide much more flexibility and are the perfect tool for the task: bad luck protection (gradually increasing chances for better rarity), dynamic drop rates (to give archers more arrows, or evil-themed items to a player with 'bad karma'), item duplication prevention (just zero out recent drops), and many more ways to manipulate probability.
jasonrohrer wrote: but it's not clear to me how to calculate the resulting variance.
n * ( 1-p ) / ( p^2 )
Are you looking for proof of this formula?
X is a Pascal random variable; it can be represented as a sequence of independent Bernoulli experiments B, each with probability of success p, repeated until the n-th success.
Let X_i be the number of times B has to be performed to get the i-th success after having i-1 successes.
The X_i are all independent. Each X_i is a geometric random variable with probability of success p, and we know Var(X_i) = (1-p) / (p^2) (it takes quite a bit to prove rigorously; left for the reader).
Var(X) = Var(\sum X_i) = \sum Var(X_i) = n * (1-p) / (p^2)
Offline
Well, the interesting thing about all the "tweaks" to loot drop probabilities is why they are needed in the first place: variance. If you want an item to be very rare, dropping only once per 500 kills, and you roll a D500 to determine when it drops, the variance is going to be huge. In fact, the standard deviation in that case is +/- 499. And in terms of "grinding"... well, the average player has to kill 500 monsters to get the item, but a good portion have to kill 1000 to get it. And yes, lucky ones will get the rare item after only a few kills, but that also kinda breaks the game for them, and it doesn't make up for the horrible experience that other players have when variance breaks the other way.
So you implement "bad luck protection" and all the rest. Which, by the way, only deals with bad luck. It doesn't help smooth out godlike luck, which is just as much of a problem when variance is high.
But if you could simply control variance in the first place, you wouldn't need other tricks.
Obviously, SOME variance is okay.... since you don't want players simply counting up to 500 and seeing the rare item drop like clockwork. But what if you thought +/- 499 was too big, and you wanted to try 500 +/- 250? Or 500 +/- 100? So the lucky players get it in 400 kills, and the unlucky ones get it in 600 kills?
I've never heard anyone talking about specific methods to control variance in loot drops, as a parameter.
If you implement it with Markov Chains instead of just weighted coin flips, you can control variance precisely.
For example, if you want the average drop to happen in 500 kills, with standard deviation of 100, I can plug those values into my formula here and tell you that you can have a chain with 24 hops in it, and 1/21 chance in going on to the next hop for each kill. Your EV is then 504 kills, with a standard deviation of 100.4 kills.
On the other hand, if you wanted 500 kills +/- 50 kills, you could have a chain with 83 hops in it, with a 1/6 chance of going on to the next hop for each kill. This gives you an EV of 498 kills, with a standard deviation of 49.9 kills.
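Inverting the two formulas gives a direct recipe for numbers like these. Setting EV = n/p and Var = n(1-p)/p^2 and solving yields p = EV/(EV + Var) and n = EV * p. This is my own back-derivation, not something stated in the thread, but it reproduces both examples above:

```python
def chain_for(target_ev, target_sd):
    """Pick a chain length n and per-kill hop probability p so the
    Pascal variable has approximately the requested mean and standard
    deviation. Derived by solving EV = n/p and Var = n*(1-p)/p**2."""
    var = target_sd ** 2
    p = target_ev / (target_ev + var)
    n = round(target_ev * p)
    return n, p

chain_for(500, 100)  # n = 24, p is about 1/21: EV 504, sd about 100.4
chain_for(500, 50)   # n = 83, p is about 1/6: EV 498, sd about 49.9
```

Because n gets rounded to a whole number of hops, the achieved EV (n/p) drifts slightly from the target, which matches the 504 and 498 figures above.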
The other cool thing about this is that when testing, you can force yourself to play as the "average" player by putting variance temporarily to 0, and taking luck out of the picture. Then you can see how it actually "feels" to be the average player. Later, after you're done testing, you can reintroduce whatever variance you want.
For example, it might be that 500 kills on average to get the rare item is way too long. But it's normally hard to feel this when testing, because sometimes you get lucky, etc.
Offline