Heap Problems are back with 50 Claimed Rooms



  • Summary of some memory optimizations:

    1. Use a Map for spatial and findCache instead of {}. Likely improves performance; the impact on memory consumption is unknown. Easy to implement.
    2. Replace the elements of spatial (new Array(2500)) with a Map. Might improve performance; the impact on memory consumption is unknown. Easy to implement.
    3. Create spatial lazily and subtract the cpu cost from user cpu time. Can decrease engine cpu consumption for bots which don't depend on look. Decreases memory consumption.
    4. Compress terrain data from 2500 bytes down to 625 bytes (see the sketch after this list). Decreases memory consumption and may increase cpu cost a little; it's a breaking change for Room.Terrain.getRawBuffer, but that's already documented.
    5. Replace pos on all RoomObjects with a property that wraps the new integer field __packedPos into a RoomPosition. Just like the previous RoomPosition change, this decreases memory consumption a little and adds a little more cpu cost for accessing pos.
    6. Write about memory problems with large empires in the documentation, so that players are prepared for intersharding being a 'must have' for large empires.
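
    A minimal sketch of what point 4 could look like, assuming terrain values fit into two bits per tile (plain, wall, swamp); the function names are illustrative, not part of the engine:

    ```javascript
    // Pack a 2500-byte terrain buffer (one byte per tile, values 0-3) into
    // 625 bytes, four tiles per byte.
    function packTerrain(raw) {                  // raw: Uint8Array of length 2500
        const packed = new Uint8Array(625);
        for (let i = 0; i < 2500; i++) {
            packed[i >> 2] |= (raw[i] & 0x03) << ((i & 0x03) * 2);
        }
        return packed;
    }

    // Read a single tile back out of the packed buffer.
    function unpackTile(packed, x, y) {
        const i = y * 50 + x;
        return (packed[i >> 2] >> ((i & 0x03) * 2)) & 0x03;
    }
    ```

    Callers of Room.Terrain.getRawBuffer would then have to unpack, which is where the breaking change and the small extra cpu cost come from.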

    Each of these points can be done independently. @o4kapuk Which changes would you support?



  • @Xenofix In V8, new Array(N) only pre-allocates when N < 2^31-1. Try again with a billion and you should see the memory show up.

    Replacing new Array(2500) with a plain array will reduce memory by a little less than 200KiB per room. It's a tiny PR. Worst-case scenario is I'm wrong and memory usage doesn't decrease.
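
    If anyone wants to eyeball that claim outside of Screeps, here is a rough sketch for plain Node (heap numbers depend on the V8 version, so treat the result as a ballpark, not a benchmark):

    ```javascript
    // Run with `node --expose-gc` for cleaner numbers.
    const COUNT = 10000;

    global.gc && global.gc();
    const before = process.memoryUsage().heapUsed;

    const holders = [];
    for (let i = 0; i < COUNT; i++) {
        holders.push(new Array(2500));   // swap in [] to compare with a plain array
    }

    const after = process.memoryUsage().heapUsed;
    console.log(`~${((after - before) / COUNT / 1024).toFixed(1)} KiB per array`);
    ```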

    Some thoughts on your suggestions.

    1 and 2: Changing all the sparse arrays to use Map instead looks like a good idea (a sketch of the idea is at the end of this post). I don't know if the ROI is worth the dev time. I'd hope @o4kapuk would be willing to merge a PR that did the work for them. But you also have to remember that the current code works; changing the code at every point that touches an index is a riskier change. There is nothing we can do to make that easier on o4kapuk short of getting screepsplus to run the patch for a few weeks to prove it's stable.

    3: I looked into the code after your previous post. Player cpu usage doesn't account for Game initialization (at least in the open-source screeps driver). That makes this change much trickier from the PR perspective: it would reduce overall CPU usage but increase individual player cpu usage.

    4: I don't think compressing the terrain buffer will help much. Due to how it's set up, the terrain buffer RAM is shared across the entire runner, and that shared RAM isn't counted against the per-player RAM limit.

    5: I suspect any gains from not storing a RoomPosition object will get washed out by recreating one every time a player accesses it (also sketched at the end of this post).

    6: Maybe not official documentation (it only affects a very small subset of players), but a statement from the devs about the expected room count would probably be a good idea. If I were them I'd just say up to 45 rooms will work; more than that and you might need to get creative with memory.
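
    To make points 1 and 2 concrete, here is a hedged before/after sketch of the kind of change being discussed; the names are illustrative, not the engine's actual code:

    ```javascript
    const x = 10, y = 20;                         // example coordinates
    const obj = { id: 'abc123', type: 'creep' };  // stand-in for a room object

    // Before: 2500 slots exist whether or not anything occupies them.
    const spatialArray = new Array(2500);
    (spatialArray[y * 50 + x] = spatialArray[y * 50 + x] || []).push(obj);

    // After: only tiles that actually hold objects take up memory.
    const spatialMap = new Map();
    const key = y * 50 + x;
    if (!spatialMap.has(key)) spatialMap.set(key, []);
    spatialMap.get(key).push(obj);

    // Lookups mirror the array version, which is why every call site has to change.
    const objectsAt = spatialMap.get(key) || [];
    console.log(objectsAt.length);
    ```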
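
    And a sketch of the trade-off in point 5: storing only a packed integer and rebuilding a position object on every access. The field name __packedPos and the bit layout are assumptions made for illustration:

    ```javascript
    class PackedPosHolder {
        constructor(x, y) {
            // Keep a small integer instead of a full RoomPosition.
            this.__packedPos = x | (y << 8);
        }
        get pos() {
            // A fresh object is built on every access, which is where the extra
            // cpu cost and the short-lived garbage come from.
            return { x: this.__packedPos & 0xff, y: (this.__packedPos >> 8) & 0xff };
        }
    }

    const holder = new PackedPosHolder(25, 30);
    console.log(holder.pos.x, holder.pos.y);   // 25 30, but each .pos access allocates
    ```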



  • I am running 50 claims with ~205 rooms of vision, and my heap jumps by around 100~150mb each tick. With a 608mb limit and around 250mb used on the first tick, there is barely any slack left for garbage collection. It basically flip-flops from tick to tick between 420mb and 550mb.

    100~150mb of released heap space each tick is a lot of RAM; divided by the 205 rooms I have visible, that works out to roughly 600kb of memory per room. I'm not sure a 2500-element array is significant in this regard. Why is the server shoving this much data into my IVM each tick, and can't it hold on to some of this stuff (assuming it's largely the same data every tick anyway, especially if terrain data is the main contributor)?
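
    For reference, here is a small sketch of how one could log the per-tick heap swing described above, assuming the isolated-vm runtime where Game.cpu.getHeapStatistics() is available:

    ```javascript
    let lastUsed = 0;   // module-level, so it survives between ticks until a VM reset

    module.exports.loop = function () {
        if (Game.cpu.getHeapStatistics) {
            const heap = Game.cpu.getHeapStatistics();
            const usedMb = heap.used_heap_size / 1024 / 1024;
            const limitMb = heap.heap_size_limit / 1024 / 1024;
            console.log(`heap ${usedMb.toFixed(0)}/${limitMb.toFixed(0)} MiB, ` +
                        `delta ${(usedMb - lastUsed).toFixed(0)} MiB since last tick`);
            lastUsed = usedMb;
        }
        // ... rest of the tick ...
    };
    ```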



  • Running 52 rooms now and I cannot use full CPU anymore due to heap limitations (can't set up new remotes). I disabled observer rescans yesterday to get some more headroom, but it's not helping as much as I was hoping.



  • So is there any reason we can't have a slider for heap per shard? Why do players who spread across all shards get 4x the combined heap I get?



  • No, that's exactly the wrong direction. Memory should not be another limiting factor. We are not writing code in C++ here, where we have full control over memory. If it's only limited because of the node servers' limitations, then that's a pity for you, but it's not a reason to make memory a limiting factor for all the other players!

    Btw, intersharding is more complex than staying on one shard, and it also adds more overhead for each shard. The extra memory can be seen as an incentive for intersharding. Maybe you'll take up the challenge? What about this idea: every player gets 20+GCL*10 cpu without a cap, but can only assign a maximum of 300 cpu to one shard (e.g. at GCL 40 that's 420 cpu in total, of which at most 300 can go to a single shard). Would that be incentive enough to go intershard? Then players wouldn't hit the memory limit, since they would spread their empires across shards to receive their full cpu.

    One last thing I would like to know from you, Totalschaden: how much code can be executed per tick? Is the first line of your main.js executed? Is the first line of your loop executed? How much memory is used before you initialize any modules, at the very first lines of your main.js? That first line should be executed only once per VM reset, and the memory consumption there, before you initialize anything, shows how much the engine itself really takes.
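
    Something like this at the very top of main.js is what I mean (a sketch, assuming the isolated-vm runtime where Game.cpu.getHeapStatistics() exists):

    ```javascript
    // Module scope: this runs once per VM reset, before any of your own modules
    // are required, so it approximates the engine's baseline footprint.
    const bootHeapMb = Game.cpu.getHeapStatistics
        ? Game.cpu.getHeapStatistics().used_heap_size / 1024 / 1024
        : null;
    console.log('heap before requiring any modules:',
                bootHeapMb === null ? 'n/a' : bootHeapMb.toFixed(0) + ' MiB');

    // ... require your modules here ...

    module.exports.loop = function () {
        console.log('first line of the loop reached, tick', Game.time);
        // ... rest of the tick ...
    };
    ```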



  • @totalschaden If there were a slider to allocate heap between shards, it would presumably cap out at the current heap limit for obvious reasons 😉



  • @xenofix I think you miss the point that memory already is a limiting factor: when you keep claiming rooms with your 300 cpu, eventually a significant portion of that cpu gets eaten by garbage collection. It's the Game object graphs the game creates each tick that no longer fit in the heap, or that fill it so much that the garbage collector enters a more aggressive, and very costly, schedule. Partially migrating to another shard creates more heap slack and restores the garbage collector to an easier, more CPU-friendly schedule.

    A user on 3 shards gets 1800MB of RAM for his code. I only get 600mb because I stay on one. This is where I agree with Totalschaden: it's not entirely fair. What I also don't understand is why someone with, say, 16 rooms gets the same 608mb, where in that case it's a massive over-allocation.

    A cleaner alternative is perhaps a hard cap on claims, possibly with a slider per shard, and scaling heap according to what is chosen there.

    Either way, let's wait and see what they come up with; I understand they are changing some things around in the architecture.



  • @xenofix When it's in the state where I took the screenshot, no code is executed at all; it's completely stuck.



  • The memory limit is there because of physical restrictions. That has nothing to do with fairness. Everyone can go intershard to prevent memory problems; that is fair. While I agree that it would be optimal to just have enough cheap memory (after all, nowadays 8 GB is nothing), I assume we won't get it for economic reasons.

    A new architecture could also solve these problems. But I am, still and again, very much against a stricter memory cap. The original topic was that memory is not enough for 50+ rooms, not that other people's memory is too much.



  • It would be much easier to go intershard than to deal with the current heap situation. I already hate the fact that there are multiple worlds/shards now; when I started, the game was a single-shard game, and that's what I loved in the first place and what got me into it.

    The current state of my codebase should handle intersharding without problems, and if not, not much adjustment would be required. I would greatly prefer to pretend there is only the shard I'm currently on and ignore the rest. I am pretty sure that by the time I'm forced into intersharding, I would quit the game shortly after.



  • @xenofix That's pretty much an "it doesn't affect me, so I am against it" argument, which isn't really contributing anything to this thread, sorry. There is nothing economic about it, apart from maybe that giving me 100mb would mean giving 300mb to some of those who are already multi-sharding. There is also nothing anywhere that states you should at some point go multi-shard if you want to scale out further; it's just an emergent constraint that was not directly intentional. We are looking to be enabled, not to disable anyone else, if that is your fear...

    Additionally, since the game already limits CPU and Memory, why would it not also limit heap memory now that it has become such an attractive asset with IVM? Of course, currently my heap use is just a drop in the ocean compared to the Game object graphs, which is ultimately the issue: the game doesn't support its own weight.



  • @tun9an0 said in Heap Problems are back with 50 Claimed Rooms:

    Additionally since the game does already limit CPU and Memory

    FYI: Memory is the same as heap, per shard.



  • @gimmecookies Memory doesn't hit its limit from stuff the game puts in there automatically. I am fine with whatever; I just need more heap, which is the core point of it all.



  • @tun9an0 You've got me wrong. It will affect me, so I support getting more memory, but I'm also against restricting memory more than necessary.

    Your argument seems to be: you hit the memory limit with many rooms, so everyone else should also hit the memory limit with fewer rooms.

    I'm against that for obvious reasons.



  • @xenofix That's not what I said at all, but whatever...