InterShardSegments for each shard rather than a shared one.



  • My feature request would be for each shard to have an inter-shard memory segment that it can write to.

    These segments would be accessible read-only from every other shard.

    Currently we have 3 shards, so there would be 3 interShard memory segments. Each shard would only be able to write to its own, but would be able to read from all of them.

    This way players wouldn't need to implement their own mutex locking mechanism on RawMemory.interShardSegment.



  • Oh yes please. I'm looking at inter-shard operations right now and the single interShardSegment just makes communication between shards unnecessarily complicated and incredibly slow.

    All shards should have their own interShardSegment that is read-only for other shards. This would make everything so much easier and faster.


  • Dev Team

    This is in our backlog. We're going to implement it this way:

    InterShardMemory.getLocal()
    InterShardMemory.setLocal(string)
    InterShardMemory.getRemote(shardName)
    

    No ETA though.
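
    For example, usage might look something like this (the shard names and the JSON payload format are just for illustration):

    // Publish this shard's status and read what the other shards report.
    const status = { time: Game.time, creeps: Object.keys(Game.creeps).length };
    InterShardMemory.setLocal(JSON.stringify(status));
    
    for (const shardName of ['shard0', 'shard1', 'shard2']) {
        if (shardName === Game.shard.name) {
            continue;
        }
        const raw = InterShardMemory.getRemote(shardName);
        if (raw) {
            const remote = JSON.parse(raw);
            console.log(shardName + ' reported ' + remote.creeps + ' creeps at tick ' + remote.time);
        }
    }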



  • For anyone else who is interested, I've implemented a polyfill for the new API:

    // Polyfill for new InterShardMemory API.
    // InterShardMemory.init() must be called at the beginning of the main loop.
    // InterShardMemory.update() must be called at the end of the main loop.
    let SHARDS = null;
    
    function _load() {
        if (SHARDS) {
            return;
        }
        // The segment is an empty string until something has been written to it.
        SHARDS = JSON.parse(RawMemory.interShardSegment || '{}') || {};
    }
    
    global.InterShardMemory = {
        getLocal: function () {
            let shard = Game.shard.name;
            if (Memory.interShard) {
                return Memory.interShard[1];
            }
            return InterShardMemory.getRemote(shard);
        },
    
        setLocal: function (data) {
            if (typeof data !== 'string') {
                throw new Error('InterShardMemory must be a string');
            }
            // Stage the write locally; update() pushes it into the shared
            // segment at the end of the tick as [tick, data].
            Memory.interShard = [Game.time, data];
        },
    
        getRemote: function (shard) {
            _load();
            if (! SHARDS[shard]) {
                return '';
            }
            return SHARDS[shard][1];
        },
    
        init: function () {
            // clear the cache each tick
            SHARDS = null;
        },
    
        update: function () {
            let pending = Memory.interShard;
            if (! pending) {
                return;
            }
            _load();
            let shard = Game.shard.name;
            let entry = SHARDS[shard] || {};
            if (entry[0] != pending[0]) {
                // Try to update the shared interShardSegment
                SHARDS[shard] = pending;
                RawMemory.interShardSegment = JSON.stringify(SHARDS);
            }
        },
    };
    

    Example usage:

    require('./intershard');
    exports.loop = function () {
        InterShardMemory.init();
        // your code
        InterShardMemory.update();
    };
    

    NOTE: It is impossible to guarantee that RawMemory.interShardSegment isn't overwritten by another shard with a slow tick rate. No amount of locking can fix this - the existing API is fundamentally broken (it is impossible to implement locking when your only shared memory is a single asynchronous block). As a workaround, each shard's inter-shard memory will also be stored in Memory.interShard, which will be used to update the RawMemory.interShardSegment if it gets overwritten.



  • @systemparadox said in InterShardSegments for each shard rather than a shared one.:

    NOTE: It is impossible to guarantee that RawMemory.interShardSegment isn't overwritten by another shard with a slow tick rate. No amount of locking can fix this - the existing API is fundamentally broken (it is impossible to implement locking when your only shared memory is a single asynchronous block).

    I would like to say that this is incredibly false. You can use TDMA with large enough time slots and guard periods for each shard to access and save it safely. The synchronization between shards would be based on Date.now() calls, verifying the current time on the system. I doubt that any of the game's servers will be so skewed that it becomes desynced.
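
    For example, a slot check could look something like this (the shard list, slot length and guard period are just illustrative numbers):

    // TDMA-style access: each shard only writes RawMemory.interShardSegment
    // during its own wall-clock time slot, minus a guard period at the end.
    const SHARD_ORDER = ['shard0', 'shard1', 'shard2'];
    const SLOT_MS = 20 * 1000;   // time slot per shard
    const GUARD_MS = 10 * 1000;  // buffer at the end of each slot
    
    function canWriteInterShardSegment() {
        const cycleMs = SHARD_ORDER.length * SLOT_MS;
        const positionMs = Date.now() % cycleMs;
        const slotIndex = Math.floor(positionMs / SLOT_MS);
        const offsetInSlot = positionMs % SLOT_MS;
        return SHARD_ORDER[slotIndex] === Game.shard.name
            && offsetInSlot < SLOT_MS - GUARD_MS;
    }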

    That being said, I am looking forward to the new API, as this solution seems excessive for its intended purpose and adds far more complexity than should be needed.

    Edit: spelling is hard...



  • The primary issue with TDMA is "large enough time slots". Ticks are several seconds so giving enough time to make that even vaguely safe is completely untenable. Also, it's still not safe even with an arbitrary amount of time.

    The key problem is that shards read interShardSegment at the beginning of the tick and write to it at the end, which introduces a substantial delay between reading and writing. Since shards sometimes get very slow, it's entirely possible that one shard could have several ticks during this time. When the slow shard finally catches up, it might write something into the shard memory that was based on a version several ticks ago from the perspective of the other shards.

    You can fix this by having a locking system that requires all shards to acknowledge the lock before continuing, or by passing control to the next shard (which has to happen continuously, even if no other changes are being made). The issue with these approaches is that you now have to give a list of all the shards you're actually using. And then you have the issue of dealing with shards which are hanging or stuck, which brings us back to the whole problem again, because you have no true guarantee that a shard marked as stuck won't come back and write something really old into your memory.

    That said, so far I've been thinking about this in terms of the entire tick length. If the reading/writing happens individually before/after each player's code then that would significantly reduce the problematic delay, although I'm still not sure we can place any guarantees on it.



  • @systemparadox said in InterShardSegments for each shard rather than a shared one.:

    That said, so far I've been thinking about this in terms of the entire tick length. If the reading/writing happens individually before/after each player's code then that would significantly reduce the problematic delay, although I'm still not sure we can place any guarantees on it.

    I do know that the inter-shard segment is saved right after the user code runs. I don't know when it's loaded, but the private server loads the rest of its memory right before the user code runs, so I assume the inter-shard segment is loaded at the same time on the public server.

    If this is correct, then extremely long ticks shouldn't be much of an issue - the user code will automatically time out after the CPU is used up.



  • I've gamed out on paper that 20 second timeslots with 10 second guard periods would make it safe enough. Even at 10 sec/tick, that would allow one tick to request and a second tick to access/save in each timeslot. The guard period would just be a buffer between accesses. If you really wanted to be paranoid, 40/20 would be overkill but would almost guarantee safety. Regardless, it's not like you need any immediate traffic between them. If you've got a creep traveling between shards, there is plenty of time between when it initially spawns (you should know that it's intended to go cross-shard at that point) and when it reaches the portal.



  • Could we just scrap the inter-shard memory and implement a log-like system?
    Basically something like shardMsg(shard, msg string).
    There would be a Game.shardMessages array where you can read the messages received that tick.
    You can then handle saving them yourself.



  • I can see uses for both approaches. Sometimes you want to send a one-off message to a specific shard or broadcast to all shards. But I think there is a strong case for just keeping some information available that other shards can read when they like.

    Given a shared memory system you can implement messaging fairly easily. You can also do it the other way around, but I think this is more of a pain.
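
    For example, with the new per-shard API a simple messaging layer could look roughly like this (the state layout and helper names are just illustrative; pruning acknowledged messages from the outbox is left out):

    // Each shard publishes { outbox: { destShard: [{id, payload}, ...] },
    //                        acked:  { srcShard: highestIdProcessed } }.
    function loadState() {
        if (!Memory.isc) {
            Memory.isc = { nextId: 0, outbox: {}, acked: {} };
        }
        return Memory.isc;
    }
    
    function send(toShard, payload) {
        const state = loadState();
        const box = state.outbox[toShard] = state.outbox[toShard] || [];
        box.push({ id: ++state.nextId, payload: payload });
    }
    
    function receive(fromShard) {
        const state = loadState();
        const raw = InterShardMemory.getRemote(fromShard);
        if (!raw) {
            return [];
        }
        const remote = JSON.parse(raw);
        const inbox = (remote.outbox && remote.outbox[Game.shard.name]) || [];
        const fresh = inbox.filter(m => m.id > (state.acked[fromShard] || 0));
        if (fresh.length) {
            state.acked[fromShard] = fresh[fresh.length - 1].id;
        }
        return fresh;
    }
    
    function publish() {
        // Call at the end of the tick so other shards see our outbox and acks.
        InterShardMemory.setLocal(JSON.stringify(loadState()));
    }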



  • There's already an event log, so if ISC (inter-shard communication) pipes leveraged it, creating an event handler for messages would be trivial, and far saner than async access to shared memory. (Not to mention that message-passing is widely viewed as safer than synchronous shared memory access...)



  • @systemparadox said in InterShardSegments for each shard rather than a shared one.:

    it is impossible to implement locking when your only shared memory is a single asynchronous block

    That's absolutely false. Writes to the single asynchronous block are atomic, so it's 100% possible to do locking albeit with an infinite time horizon. The biggest issue is that a shard might stop running while you're in the middle of a locking operation. The second biggest issue is that there are no guarantees about the length of time between ticks - it's conceivable that one shard might have a glitchy tick that takes 1 minute.



  • @wtfrank Technically you could: you simply let a shard update every few ticks, e.g. S1: true, then when it's done it changes S2: true.



  • So to sum up, the current method is slow and unreliable.

    Suggestions to fix that are:

    • Private memory section for each shard that is readable by the other shards.
    • Message/log-based system

    I have to agree with @SystemParadox that both are useful: the message system, e.g. for inter-shard travel, to announce an incoming creep.
    The memory segment for static data like market information that can be read at will by other shards.



  • @communication I'm not sure what you mean exactly, but it's possible to do this lockless style (eventually correct, but you don't know whether it's up to date or when it will become correct), mutex style (either guaranteed correct, or you know that it's not correct), or even with some kind of time-sharing to manage concurrency.

    The thing all of these methods have in common is that none of them are very fast.



  • @wtfrank That's true



  • @wtfrank said in InterShardSegments for each shard rather than a shared one.:

    That's absolutely false. Writes to the single asynchronous block are atomic, so it's 100% possible to do locking albeit with an infinite time horizon

    There are two points that everyone keeps missing when I discuss this:

    1. Atomic writes are not the only requirement for safe locking. You also have to make sure that no one else can read or write the lock in between. You can do this in a CPU environment, but not in Screeps.
    2. Writes are atomic, but you can only write to the entire block. If shardA tries to change a lock flag and shardB tries to write some data, and shardA wins, the data that shardB wrote is gone. At first glance it seems like locking should prevent this situation from occurring at all, but you have to account for slow or glitchy ticks.

    Thus, you cannot implement an independent locking system that is 100% safe, because we do not have the guarantees required for this. As previously stated, locking systems which require cooperation between shards are possible, but will block communication if one shard is down (and any attempt to resolve this makes it unsafe again).

    I contend that anyone who thinks they have implemented a 100% safe locking system has made one of the following assumptions:

    1. That shards all run at the same rate and no shard is ever down (not true)
    2. That only one shard is executing at a time (not true, shards are executing simultaneously on different servers)
    3. That you can read and write the segment atomically without any other shard reading or writing in between (not guaranteed, since shards are executing simultaneously)
    4. That there is a limit to the time between reading the segment and writing it (not true, there are no guarantees here)
    5. That you can atomically update one part of the segment (e.g. a flag) without changing the rest of the segment (not true, it's all one block)

    Now I'm sure many people have locking systems implemented and working fine 99.9% of the time. The point is that it's not 100% safe, which means you have to think about what happens when it goes wrong. I suspect for most people the answer is that they don't care (so long as their locking system recovers without blocking everything). They'll lose some data, but it either wasn't important or will be re-written again soon enough. This is perfectly valid, but working 99.9% of the time and not caring when it goes wrong is not the same as provably 100% safe.

    The interesting thing about this is that any one of the assumptions above would allow us to implement 100% safe locking. It's entirely possible that assumption 4 is actually already true, but we would need guarantees from the devs before we could rely on it.

    Of course, the new InterShardMemory scheme is much simpler and solves all these problems.



  • @SystemParadox I believe you are correct for a system based on locking. You don't have to use locks though. There are 100% safe lockless systems.

    With the caveat that I've not actually implemented it, here is my (what I believe to be) 100% safe system:

    1. Partition the segment into sub-segments. Each shard owns a sub-segment.
    2. If we want to make a modification, read the entire segment, modify our sub-segment, write to the segment.
    3. On future ticks we know what our sub-segment should look like. Read the sub-segment and check. If it doesn't look right, that means we have a contended write which the other guy won. Skip a random number of ticks (or a random amount of wall-clock time) and go to 2. The skip probably isn't necessary unless you're on more than 2 shards.

    This method works around contention by brute force and can handle a shard just disappearing. High contention will cause delays but the random time length skipping should ensure that all writes go through without too much latency. It depends a bit on how much you're willing to tolerate. It's also possible that a shard will see another shard's subsegment go back in time causing some confusion. That can be sorted by having a timestamp in each subsegment.

    If you're putting a lot in the segment then checking that our section is correct every tick may be a bit expensive. If you're only using it for small messages it shouldn't be.
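
    A rough sketch of what I mean (the layout, helper names and back-off window are just illustrative):

    // The single segment holds one sub-segment per shard:
    // { [shardName]: { t: tickOfLastWrite, data: string } }
    function readSegments() {
        return JSON.parse(RawMemory.interShardSegment || '{}') || {};
    }
    
    function writeLocal(data) {
        const segments = readSegments();
        segments[Game.shard.name] = { t: Game.time, data: data };
        RawMemory.interShardSegment = JSON.stringify(segments);
        Memory.iscExpected = segments[Game.shard.name]; // what we expect to see later
    }
    
    function verifyLocal() {
        // Call every tick: if another shard's contended write clobbered ours,
        // back off for a random number of ticks and then rewrite it.
        const expected = Memory.iscExpected;
        if (!expected) {
            return;
        }
        const current = readSegments()[Game.shard.name];
        if (current && current.t === expected.t) {
            delete Memory.iscRetryAt; // our write is visible, all good
            return;
        }
        if (!Memory.iscRetryAt) {
            Memory.iscRetryAt = Game.time + Math.floor(Math.random() * 10) + 1;
        } else if (Game.time >= Memory.iscRetryAt) {
            delete Memory.iscRetryAt;
            writeLocal(expected.data);
        }
    }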



  • Yes that's basically what I've done in the polyfill above, although I haven't done anything with high contention so it's possible that one shard might block the others if it writes every tick.

    I've got it checking against a tick number so it doesn't have to compare the data itself.



  • Here is a pure mutex system (no backwards writes). The key point I think you missed is that it's based on time, not ticks.

    1. Claim the segment by writing your shard number and time + 3*N seconds
    2. Wait till you've claimed the segment for N seconds.
    3. If N < time < 2*N seconds then write data (also increment a sequence counter).
    4. Do not reclaim the segment for M seconds.

    During all this, every shard keeps its own copy of the shared memory, which it refreshes every tick. If its copy is more up to date, it overwrites the segment.

    • N should be your slowest tick time plus the clock skew between machines.
    • The safest M is (number of shards) * N. Smaller values allow write starvation of slow shards.

    The easiest way to break this system is having a shard's write get committed long after the script's tick is over (like > 3N after).

    It is also possible for this scheme to break when ticks are > N. But even then a single slow tick doesn't break the system. Consistently slow ticks also don't break the system. You need inconsistent tick times.

    1. Shard A has a 2N slow tick while Shard B claims.
    2. Shard B has a 3N slow tick while trying to write.
    3. Shard A claims and writes during B's slow tick.
    4. Both Shard A and Shard B think they have the next sequence.

    Choosing N and M can be tricky but picking large values like 1 minute and 5 minutes respectively for the MMO server will just work.

    The big problem with this scheme is it's SLOW. A shard could wait 100 ticks or more before acquiring the shared segment.
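
    Roughly, the claim/wait/write cycle could look like this (the segment layout and the N/M values are just illustrative; keeping and restoring the per-shard backup copy is omitted):

    // Time-based mutex over the single shared segment.
    // Segment layout: { lock: { shard, claimedAt, expires }, seq: n, data: {...} }
    const N_MS = 60 * 1000;       // slowest tick + clock skew
    const M_MS = 5 * 60 * 1000;   // back-off after a successful write
    
    function tryWrite(myData) {
        const now = Date.now();
        const mem = Memory.iscMutex = Memory.iscMutex || {};
        if (mem.cooldownUntil && now < mem.cooldownUntil) {
            return; // step 4: don't reclaim the segment for M seconds
        }
        const seg = JSON.parse(RawMemory.interShardSegment || '{}') || {};
        const lock = seg.lock;
        if (!lock || now > lock.expires) {
            // step 1: claim by writing our shard name and an expiry of now + 3N
            seg.lock = { shard: Game.shard.name, claimedAt: now, expires: now + 3 * N_MS };
            RawMemory.interShardSegment = JSON.stringify(seg);
            return;
        }
        if (lock.shard !== Game.shard.name) {
            return; // someone else holds the claim
        }
        const held = now - lock.claimedAt;
        if (held > N_MS && held < 2 * N_MS) {
            // steps 2-3: held the claim for N..2N seconds, safe to write
            seg.data = seg.data || {};
            seg.data[Game.shard.name] = myData;
            seg.seq = (seg.seq || 0) + 1;
            delete seg.lock;
            RawMemory.interShardSegment = JSON.stringify(seg);
            mem.cooldownUntil = now + M_MS;
        }
    }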



  • @artch I think this discussion shows that you should pull this a bit up on your priority list 🤣
    But I love these kinds of discussions; they are clear indicators of where something is lacking 🙂