One Hour One Life Forums

a multiplayer game of parenting and civilization building


#1 2023-05-16 15:25:39

jasonrohrer
Administrator
Registered: 2017-02-13
Posts: 4,803

Why new content pull requests don't work

I see that Tarr spent a ton of time making 32 well-thought-out content updates, and posting them as pull requests.

Looking at them today, I realized, with a heavy heart, that almost all of them won't work.

The problem is inherent to the editing workflow of OHOL, which was, perhaps foolishly, not designed to handle two people editing in parallel (at least not without heavy coordination), even though it's stored on the seemingly-collaborative git.

Each object has a unique ID number, baked into its file name.  These ID numbers grow sequentially as new objects are added.  And these ID numbers are fixed, and used all over the place (in animations, in transitions, in the protocol between client and server, etc).
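
Roughly, the allocation behaves like this (a minimal sketch, assuming a hypothetical objects/<id>.txt layout; this is not the actual editor code):

    # Sketch only: mimic "next sequential ID" allocation over a folder
    # of numbered object files.
    import os
    import re

    def next_object_id(objects_dir="objects"):
        max_id = 0
        for name in os.listdir(objects_dir):
            m = re.match(r"(\d+)\.txt$", name)
            if m:
                max_id = max(max_id, int(m.group(1)))
        return max_id + 1

    # Two people editing in parallel checkouts both see the same max ID,
    # so they both hand out the same "next" number to different new objects.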

Tarr, diligently working in his own parallel editing universe, added 694 new objects, starting with object ID 4750, and ending with object 5056.

There are several problems here:

1.  In the main OHOL content, we're already up to object 4751 (the objects added for the new tarry-spot anti-grief fix).  Thus, objects 4747, 4748, 4749, 4750, and 4751 conflict with Tarr's object universe.  Fixing all of Tarr's patches would require pushing all the ID numbers up consistently, across all the transitions, animations, etc. that he submitted (see the renumbering sketch after this list).  Or at least skipping the first few, to get up past object 4751.

2.  I would essentially be forced to incorporate all 32 of these pull requests, or none of them, or at least some chronological sequence of them, to cover the cases where some of the new ones refer to other new ones.

3.  I couldn't progressively incorporate these while also working on my own fixes, additions, and updates in parallel.  I.e., I can't squeeze any other new objects "in between" the ones that Tarr made.  Again, this forces me into an "all or nothing" situation.  Like, I can't fix any other bugs, or add a few of Tarr's changes each week.

4.  The longer I wait to look at these, the worse the situation gets, as more objects get added that conflict with Tarr's object universe.  I have to incorporate all of Tarr's stuff NOW, before more objects are added (by me) and the situation gets even worse.  But my plan was to fix the bugs first...

5.  I can't take a look at each pull request in isolation, test it, and see how it works/looks.  I pretty much need to pull them all in, and then somehow observe them all together.  I could pull them in one-by-one, in order.  But if I find one along the way that I don't like, and want to remove that one before incorporating the others, it'd risk breaking the others (if any of the later ones referred to an object ID that I decided to skip including).
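
For reference, here's roughly what the renumbering mentioned in point 1 would involve, under the same hypothetical objects/<id>.txt layout (the real content also embeds IDs in transitions, animations, categories, and so on; none of this is actual OHOL tooling, and OFFSET and FIRST_PARALLEL_ID are made-up values):

    # Sketch only: shift every ID created in the parallel branch up by a
    # fixed offset so it clears the IDs already taken in master.
    import os
    import re
    import shutil

    OFFSET = 5                 # hypothetical: how far the IDs must move
    FIRST_PARALLEL_ID = 4747   # hypothetical: first ID in the parallel branch

    def remap(object_id):
        return object_id + OFFSET if object_id >= FIRST_PARALLEL_ID else object_id

    def renumber_object_files(objects_dir="objects"):
        ids = []
        for name in os.listdir(objects_dir):
            m = re.match(r"(\d+)\.txt$", name)
            if m:
                ids.append(int(m.group(1)))
        # Rename highest IDs first so a renamed file never lands on a
        # name we still need to read.
        for old_id in sorted(ids, reverse=True):
            new_id = remap(old_id)
            if new_id != old_id:
                shutil.move(os.path.join(objects_dir, f"{old_id}.txt"),
                            os.path.join(objects_dir, f"{new_id}.txt"))

    # Every other file that mentions an old ID (transitions, animations,
    # etc.) would need the same remap() applied to its contents, and in
    # some cases to its file name -- which is exactly the error-prone part.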


The core problem here is that OHOL content is incremental, and builds on what came before.  In fact, the editor has several safeguards to prevent you from accidentally deleting old things that other things still depend on.  You can't just go in and delete the shovel without first deleting all the transitions that use it, for example.
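
The check behind that safeguard is conceptually simple, something like this (hypothetical transitions/<actorID>_<targetID>.txt naming and a crude text scan; the real editor's checks are more thorough):

    # Sketch of a "can I delete this object?" guard.
    import os
    import re

    def transitions_referencing(object_id, transitions_dir="transitions"):
        hits = []
        for name in os.listdir(transitions_dir):
            with open(os.path.join(transitions_dir, name)) as f:
                text = name + " " + f.read()
            # crude: any whole-number token equal to the ID counts as a reference
            if re.search(rf"\b{object_id}\b", text):
                hits.append(name)
        return hits

    def safe_to_delete(object_id):
        return len(transitions_referencing(object_id)) == 0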

Thus, "merging" content using git is extremely messy and error-prone.  I'm pretty sure I've shot myself in the foot at least once in the past when editing from different machines, and forgetting to push, and then having to undo a bunch of stuff to fix all of the conflicting object IDs.


There are a few types of content "updates" that do work okay for merges... but mostly just little fixes.  Adding a new transition that refers to existing objects is safe.  So is changing an existing transition.  Or adding an additional existing sprite to the layers of an existing object.  Or changing an object's animation.

But adding new sprites, adding new sounds, or adding new objects....
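
A crude way to pin down that distinction (a made-up helper and an assumed objects/ sprites/ sounds/ transitions/ layout, just to state the rule):

    # A patch is "safe" here if it adds no new numbered content files, and
    # any transitions it adds name only objects that already exist in master.
    # (Special placeholder IDs in transition names would need extra handling.)
    import re

    def is_safe_patch(added_paths, existing_object_ids):
        for path in added_paths:
            if path.startswith(("objects/", "sprites/", "sounds/")):
                return False          # brand-new IDs: the hard case
            if path.startswith("transitions/"):
                for token in re.findall(r"\d+", path):
                    if int(token) not in existing_object_ids:
                        return False  # names an object master doesn't have
        return True                   # only edits to existing files remain

    print(is_safe_patch(["transitions/1234_5678.txt"], {1234, 5678}))   # True
    print(is_safe_patch(["objects/4750.txt"], {1234, 5678}))            # False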


If I had to do this over again, I might make longer guaranteed-unique IDs for each item, instead of short incremental ID numbers.  That would help a little bit.

But I chose the short ID numbers for a reason:  they are used constantly in the protocol, when communicating map chunks to the client, including the contents of containers on the map.  One map chunk might have hundreds or even thousands of object IDs in it.  The protocol is human-readable text, so having these object IDs be only 4 ascii digits long (max) is nice for bandwidth.  Making the object IDs 5x longer would make the map chunks and other messages 5x bigger.
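
Rough numbers, just to make the bandwidth point concrete (message framing and all the other fields are ignored here):

    # Back-of-envelope: bytes spent on object IDs in one plain-text map chunk.
    ids_per_chunk = 1000                 # "hundreds or even thousands"
    short_id_len = 4 + 1                 # up to 4 ASCII digits plus a separator
    long_id_len = 20 + 1                 # a 5x-longer unique ID plus a separator

    print(ids_per_chunk * short_id_len)  # ~5,000 bytes of IDs per chunk
    print(ids_per_chunk * long_id_len)   # ~21,000 bytes for the same chunk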

But even if we used longer guaranteed-unique IDs instead, which would allow us to add new objects in any order, we'd still have a problem with parallel merging of content updates:  transitions would still refer to these unique object IDs, which means patches couldn't be selectively merged easily (you might decide to not merge some patch that defines an object that another patch, which you want to merge, refers to).
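
Put another way, even with globally-unique IDs the cross-references form a dependency graph between patches, so skipping one patch can silently break another.  A tiny illustration with made-up patch data:

    # Each patch defines some object IDs and references some object IDs
    # (its own or another patch's).  A selection of patches only merges
    # cleanly if every referenced ID is in master or in the selection.
    master_ids = {"shovel", "stone"}

    patches = {
        "A": {"defines": {"fish"},       "references": {"shovel", "fish"}},
        "B": {"defines": {"fish_taco"},  "references": {"fish", "fish_taco"}},
        "C": {"defines": {"new_basket"}, "references": {"stone", "new_basket"}},
    }

    def can_merge(selected):
        available = master_ids | {i for p in selected for i in patches[p]["defines"]}
        return all(patches[p]["references"] <= available for p in selected)

    print(can_merge({"C"}))        # True: independent of the others
    print(can_merge({"B"}))        # False: needs "fish" from patch A
    print(can_merge({"A", "B"}))   # True: consistent when merged together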


But it's also that git really isn't designed to work with "structured" file content like this.  Git assumes that two remote people would be very unlikely to add different files with the same name in parallel.  This is true for programmers.  We're not going to both suddenly decide to create GameSaveButton.cpp in parallel, and have a conflict when we push.  Or if we ever did end up with a name conflict, we probably wouldn't want to keep the content of BOTH same-named CPP files.  But OHOL content files are automatically named by the editor.  And two editors working in parallel are going to create new files with the same names.  And those same-named files will both contain new, valuable, and different content.  I.e., file name conflicts don't imply file-content conflicts.

If I wanted to support parallel editing from the start, I couldn't rely on git alone as the back-end.  I'd need my own protocol, where some kind of editing client submits updates to a central editing server, and those updates are "normalized" and merged in a sane way with the existing content base.  But that sounds like a mighty hard, unsolved problem to me.  Kinda like coding a special-purpose versioning/merging system from scratch.  Yikes.
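
For what it's worth, the "normalization" part of that hypothetical server might boil down to the server owning the ID counter: clients submit new objects under temporary IDs, and the server rewrites them to canonical ones at commit time.  A sketch under those assumptions (none of this exists):

    # Hypothetical central content server: it is the only thing that
    # assigns real IDs, so parallel submissions can't collide.
    class ContentServer:
        def __init__(self, next_id):
            self.next_id = next_id      # the single, central ID counter

        def commit(self, submission):
            # submission: {"objects": {tempID: data}, "transitions": [[ids...]]}
            id_map = {}
            for temp_id, data in submission["objects"].items():
                id_map[temp_id] = self.next_id
                self.next_id += 1
                # ...write objects/<real id>.txt from data here...
            normalized = []
            for trans in submission["transitions"]:
                # Temporary IDs get rewritten to the freshly assigned real
                # IDs; IDs of existing objects pass through unchanged.
                normalized.append([id_map.get(i, i) for i in trans])
            return id_map, normalized

    server = ContentServer(next_id=4752)
    print(server.commit({"objects": {"tmp_fish": "..."},
                         "transitions": [["tmp_fish", 1234]]}))
    # ({'tmp_fish': 4752}, [[4752, 1234]])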


Anyway, I'm looking at the patches submitted by AlecYawata back in 2019, and I see that most of them are the "safe" kind (they only change transition files).  But even there, I opted to manually fix these bugs in the editor myself instead of merging, just to be safe and avoid any unforeseen issues.  It's hard to look at the text of a transition file and be 100% sure of what it's doing, or that it's doing the right thing, or that it doesn't refer to some object that no longer exists.

I mean, that's also an issue, even for a simple patch:  the patch was submitted at some point in the past, and who knows what has changed since then?  Is the patch still sane, relative to the latest "master" content?


The only way Tarr's update patches would work is if he were the sole person adding content to OHOL, and he had final say over what to add, and no one else was editing in parallel.  If he plays his cards right, and reserves his seat on that rocket....  maybe someday!



BUT, I realize that even then, Tarr won't suddenly "have his day" and get to add this long-dormant content.  Because again, all this content was made in a parallel editing universe, referring to things that may or may not exist anymore when he goes to merge it.

In fact, Tarr's edits, even on his own computer today, have been broken by the tarry-spot changes that I pushed.  The only way he can see his 32 content changes is to keep living in the exact "git moment" in which he created them, and never pull anything new down to conflict with what he made.


I'm sad about this.


#2 2023-05-16 15:32:07

jasonrohrer
Administrator
Registered: 2017-02-13
Posts: 4,803

Re: Why new content pull requests don't work

Also, I'm leaving these pull requests open for now.  I might be missing something, and maybe someone can come up with a clever solution to this problem.


#3 2023-05-16 18:23:17

selalov734
Member
Registered: 2021-06-01
Posts: 77

Re: Why new content pull requests don't work

I don't completely understand this, but here is what I would do:

1. Write a specific program for this problem.
2. Run the program on the current version to identify all the valid IDs.
3. Merge all of Tarr's PRs.
4. Run the program again; it now detects all the wrong IDs and fixes them.
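
For what it's worth, a rough sketch of what steps 2 and 4 might look like, assuming a hypothetical objects/<id>.txt layout and a crude text rewrite (nothing here is actual OHOL tooling):

    import os
    import re

    def used_ids(objects_dir="objects"):
        # Step 2: scan the current content for every object ID already taken.
        ids = set()
        for name in os.listdir(objects_dir):
            m = re.match(r"(\d+)\.txt$", name)
            if m:
                ids.add(int(m.group(1)))
        return ids

    def fix_patch_ids(patch_ids, taken):
        # Step 4a: give every conflicting incoming ID a fresh, unused number.
        next_free = max(taken) + 1
        fixes = {}
        for old in sorted(patch_ids):
            if old in taken:
                fixes[old] = next_free
                taken.add(next_free)
                next_free += 1
        return fixes

    def rewrite_file(path, fixes):
        # Step 4b: rewrite references; the object files themselves would
        # also need renaming from <old>.txt to <new>.txt.
        text = open(path).read()
        for old, new in fixes.items():
            text = re.sub(rf"\b{old}\b", str(new), text)
        open(path, "w").write(text)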

Writing that program is work, but is it more work than creating the content that Tarr did?
In the future you might have the same problem and can just reuse the program.

But like I said I don't fully understand all the code and work behind this.


#4 2023-05-16 19:25:43

Tarr
Banned
Registered: 2018-03-31
Posts: 1,596

Re: Why new content pull requests don't work

So what I've done is labeled all of the transitional changes on the GitHub, since these are going to be the safe and easy type of pulls: they're not going to mess anything up, and they can easily be approved or denied based on whether you like the idea or not.

With the content updates, I'm absolutely willing to redo things that are approved if needed to get the object numbers to line back up. For example, the current things needing to be skipped are my protofishing hole (I changed it but left the old ones in the files, which are 4748/4749), and 4750 is the lathe head fix, which can just be redone. If needed, I can either make a video of the content, to both show it works and get it approved, or screenshots. I can remove several of the joke pulls, like the alphabet soup/ridable farm animals or the collection of egg babies and skeleton puppets.

For some of the content, such as the fish and shrimp tacos at 4754-4759, I could always just create a few simple objects to fill the gap, and then those transitions/objects wouldn't need to be remade, unlike some of the objects that are higher up on the list. Hopefully we can figure something out so you can worry about fixes and I can try to pump out some fun little bits of content to complement the fixes.


fug it’s Tarr.


#5 2023-05-16 22:43:41

OneOfMany
Member
Registered: 2019-06-10
Posts: 125

Re: Why new content pull requests don't work

Welcome back, Jason!  I might have to change my profile pic now.

If I had any advice for you, I would say to utilize Tarr. They seem quite willing to help out with coding and you can't beat free help.

Anyways, happy to see you back. I can't wait to see what you come up with in future updates.


I am a dirty, dirty roleplayer. I roleplay in the game, sometimes on the forum and if I'm being honest, a bit in real life. I can't help myself. I'm a dirty, dirty roleplayer. Don't hate the player, hate the game. smile


#6 2023-05-17 14:53:23

Coniculls13
Member
From: Australia
Registered: 2018-03-10
Posts: 42

Re: Why new content pull requests don't work

Here's how CCM handled this back in the day. (The original source was removed; I've archived it within 2HOL.)
https://github.com/twohoursonelife/arch … equests.md

My understanding is there were ~5 frequent contributors to their Data7, but it's been about 3-4 years since anyone I know of used this process in full.

We infrequently use these scripts around 2HOL, but we don't have the process down pat for multiple regular contributors.

I'd be interested in contributing or supporting attempts at improving this problem.


Maintainer of Two Hours One Life - a curated OHOL server. Discord https://discord.gg/atEgxm7


#7 2023-05-17 16:38:05

Arcurus
Member
Registered: 2020-04-23
Posts: 1,004

Re: Why new content pull requests don't work

Hey Jason!
What a surprise! Nice to see you again!
Yeah, the object IDs are quite problematic.
Just as an idea: maybe use unique IDs in the editor, and then make a short script to "compile" the object data down to vanilla OHOL IDs.
In case you are interested, Open Life Reborn currently creates one file out of all the object files to drastically speed up client and server restart. I guess it would not be that hard to use that to generate OHOL object IDs from new unique object IDs. So new objects could use these unique IDs, which can then be "compiled" to vanilla OHOL IDs.
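
If it helps, that "compile" step could be as small as a counter over the unique IDs, something like this (sketch only, made-up names, not Open Life Reborn code):

    # Sketch: "compile" editor-side unique IDs (e.g. UUIDs) down to
    # sequential vanilla-style numeric IDs at build time.
    import uuid

    def compile_ids(unique_ids, first_vanilla_id):
        # In practice, already-assigned numbers would have to be remembered
        # between runs so earlier objects keep their IDs; this sketch only
        # handles one batch.
        mapping = {}
        next_id = first_vanilla_id
        for uid in sorted(unique_ids):   # sorted, so output is deterministic
            mapping[uid] = next_id
            next_id += 1
        return mapping

    new_objects = {str(uuid.uuid4()) for _ in range(3)}
    print(compile_ids(new_objects, first_vanilla_id=4752))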

Another problem I currently face is that custom servers like Open Life Reborn will not be compatible with a new client data version until they are updated. Which is kind of messy, since there is no way to get the timing right when the vanilla client is updated.

By the way, what is the right way to update the data version of a custom server? I guess the dummy object IDs will be messed up badly (they just use the next unused IDs). So I guess I will have to add the count of newly used IDs to the dummy object IDs? I wonder how you handle this in the vanilla server.

Last edited by Arcurus (2023-05-17 16:39:42)

