PTR Changelog 2016-09-29
-
There is a reason I ask the question so many times: you don't seem to see the problems the players are currently facing, and you are dismissing our concerns in a condescending/hostile way.
I didn't mean to offend you or be hostile, and I'm sorry if it looks that way. You already have all the answers we can give, and asking again doesn't really help. We understand your concerns; that is why we have deployed this change on the PTR two weeks in advance.
Game.flags['flag'].memory might be touched during initialization for all I know
It is not touched.
How can I test the changes made to my code when I don’t even have my normal setup in a test environment?
Unfortunately, we cannot help with this currently. It would require a lot of new, expensive hardware to scale the PTR to the size where it could handle the live world data.
How do you reliably test the change if CPU varies so wildly?
CPU varies on the PTR only due to its specific environment. It should not be the case when this change is deployed to live (the runtime workers there don't have any background processes). If it is, we'll figure it out then. It is not like memory parsing; flags parsing is a more stable algorithm.
Why would rooms count even matter to flags?
Flags are serialized and deserialized on a per-room basis. There are two nested loops in the flags parsing routine - one per room and one per flag in the room. Otherwise it would be a lot more expensive than 0.005 CPU per flag.
-
Using your same benchmarking style of test for Memory on the production systems, I see these results:
[9:05:55 AM] Tick 14065006 Memory parse time result: 30.2501
[9:05:58 AM] Tick 14065007 Memory parse time result: 9.3850
[9:06:00 AM] Tick 14065008 Memory parse time result: 6.9617
[9:06:03 AM] Tick 14065009 Memory parse time result: 9.7145
[9:06:06 AM] Tick 14065010 Memory parse time result: 10.6524
[9:06:09 AM] Tick 14065011 Memory parse time result: 11.7271
[9:06:12 AM] Tick 14065012 Memory parse time result: 7.3918
[9:06:15 AM] Tick 14065013 Memory parse time result: 11.2062
[9:06:18 AM] Tick 14065014 Memory parse time result: 11.3516
[9:06:21 AM] Tick 14065015 Memory parse time result: 26.5043
[9:06:24 AM] Tick 14065016 Memory parse time result: 50.3858
[9:06:27 AM] Tick 14065017 Memory parse time result: 7.6152
[9:06:30 AM] Tick 14065018 Memory parse time result: 11.9079
[9:06:33 AM] Tick 14065019 Memory parse time result: 10.3699
[9:06:36 AM] Tick 14065020 Memory parse time result: 31.6772
Do the production servers also have background processes running? Because this is a fluctuation between 7.6 cpu and 50.4 cpu, just to access memory. This is the test code:
module.exports.loop = function () {
    // console.log(`------------------- tick start: ${Game.time} -------------------`);
    let preMemCpu = Game.cpu.getUsed();
    Memory; // the first access in a tick triggers the memory parse
    let postCpu = Game.cpu.getUsed() - preMemCpu;
    console.log(`Tick ${Game.time} Memory parse time result: ${postCpu.toFixed(4)}`);
    ...
}
Let's just call this what it is - variability due to system overhead, maybe garbage collection, I don't know for sure. If you can see this kind of variability in production, why is the PTR written off as an invalid case when it shows those numbers? I'm happy to post more from production; maybe I'll get lucky again and have the Memory test report 200 cpu as I have seen in the past. Please take this seriously, because right now it just feels like you are writing our concerns off. Flag processing plus memory parsing on a single tick could (given only the numbers I've posted in this thread) cost 80 + 50 = 130 cpu, which is my current limit.
Oh, and some more results from the memory timing test:
[9:14:38 AM] Tick 14065185 Memory parse time result: 9.6542
[9:14:41 AM] Tick 14065186 Memory parse time result: 72.8470
[9:14:44 AM] Tick 14065187 Memory parse time result: 86.3037
[9:14:47 AM] Tick 14065188 Memory parse time result: 17.5044
What is the reason for this?
-
Actually, it won’t hurt if I show the flag parsing snippet, we’re going to opensource it soon anyway. Here it is:
serializedFlags.forEach(flagRoomData => {
    var data = flagRoomData.data.split("|");
    data.forEach(flagData => {
        if(!flagData) {
            return;
        }
        var info = flagData.split("~");
        var id = 'flag_' + info[0];
        register._objects[id] = new globals.Flag(info[0], info[1], info[2], flagRoomData.room, info[3], info[4]);
    });
});
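In other words, each room's flags live in one string - something like this (field order assumed from the constructor call above: name, color, secondaryColor, then x and y):
// one entry of serializedFlags:
// { room: "W12N1", data: "flag1~1~2~10~20|flag2~3~4~30~40" }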
-
Let’s just call this what it is - variability due to system overhead, maybe garbage collection, I don’t know for sure.
…
What is the reason for this?
GC is most likely. You may be hit by it at any point of execution - not only in Memory or flags parsing, but in an empty while loop as well. But the PTR has more than that, which is why it has a lot more spikes than the live server. It's a single machine with everything running on it - mongodb, redis, node processes, cronjobs, everything. The infrastructure of the live server is much more separated.
-
That's the problem, though, Artem. I was certain it was GC, and thank you for confirming that, but the GC cost is something that is not necessarily related to our code - it could also come from the game engine, previous players' code, or elsewhere - yet we have to pay for it on our own ticks and our own cpu time. I know that's a hard problem to solve, and my complaint isn't with having to pay it; it's with the inconsistent timing of *everything* when GC hits. Adding another parsing step to our cpu costs will cause ticks to vary even more wildly than they do now, and if you have a "bad" tick you can hit the tick limit before you finish processing your code. Isn't there a way to record the aggregate flag parsing cost for each tick, then average it? You could take that number, add 20%, and call that the new "constant" for per-flag parse time. You could even make it a moving average - I just think it's unfair to have a 4x difference in CPU cost between ticks and expect us to be able to code around that. I'm not looking for cheap/free flags again - hell, make the parsing cost a full CONST 0.2 for all I care, but please work with us to find a way to make it consistent.
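Something like this, for instance - just a sketch, with made-up names and smoothing weights:
// server side (hypothetical): feed the measured cost into a smoothed
// per-flag constant instead of billing the raw, GC-spiked number
let avgPerFlagCost = 0.005; // seed with the advertised per-flag cost

function chargeFlagParsing(measuredCpu, flagCount) {
    // exponential moving average, weighted toward history to absorb spikes
    avgPerFlagCost = 0.95 * avgPerFlagCost + 0.05 * (measuredCpu / flagCount);
    // bill the smoothed cost plus the 20% margin
    return 1.2 * avgPerFlagCost * flagCount;
}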
-
GC spikes are the reason why all players have the bucket and 500 CPU tick limit. It affects all players exactly the same eventually.
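For reference, the bucket mechanics are roughly this (a simplified sketch, not the actual engine code; the 10000 bucket cap is assumed):
function nextTick(bucket, cpuLimit, usedLastTick) {
    // unused CPU accumulates; overruns drain the bucket
    bucket = Math.min(10000, Math.max(0, bucket + cpuLimit - usedLastTick));
    // a GC-spiked tick can draw on the bucket, up to the 500 CPU hard cap
    return { bucket: bucket, tickLimit: Math.min(500, bucket) };
}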
Anyway, let's just wait and see what it will look like on the live server. If it's too inconsistent, then we'll consider other options.
-
Is it possible to hook up user methods that return flag objects?
If we could somehow return an array of Flag[] to the game engine:
ScreepsEngine.hook('loadFlags', someFunctionWhichLoadsFlagsFromMemoryOfPlayer)
ScreepsEngine.hook('loadRoomFlags', sameAsAboveButFilteredForRoom)
If one of the functions fails to reply according to spec, flags should just be ignored, or a runtime error should occur. Example:
engine requests flags for W12N1, user code responds with W12N2, engine throws an error.
Just like PathFinder.use(), you could choose to use it or not.
Obviously this requires the engine to check the validity, but as you said: "Iterating is an order of magnitude less expensive than parsing" on which I agree wholeheartedly.
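Roughly like this - ScreepsEngine.hook and the flag store are made up, just to illustrate the contract:
// user side: answer the engine's request for one room's flags
ScreepsEngine.hook('loadRoomFlags', function(roomName) {
    return myFlagStore[roomName] || [];
});

// engine side (pseudocode): validate the reply before trusting it
function requestRoomFlags(roomName) {
    var flags = userHooks.loadRoomFlags(roomName);
    flags.forEach(function(flag) {
        if(flag.pos.roomName !== roomName) {
            throw new Error('loadRoomFlags(' + roomName + ') returned a flag in ' + flag.pos.roomName);
        }
    });
    return flags;
}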
We were already discussing different ways of storing them in memory in an efficient, single-string way:
X [ 6 bits ] 0-63
Y [ 6 bits ] 0-63
W/E [ 1 bit ] W or E
[NUM 15 bits] 0 to 32767 (allows for expanding to a world which goes from W0-32767 E0-32767)
N/S [ 1 bit ] N or S
[NUM 15 bits] 0 to 32767 (allows for expanding to a world which goes from N0-32767 S0-32767)
COL1 [ 7 bit] 127 colors!
COL2 [ 7 bit] 127 colors!
Length [NUM 6 bits] 0 to 63 - length of string
NAME [remainder]
This should speed up parsing by a lot. Not needing to do splits saves a shitton of memory. This would even fit in the diplomacy module we got going.
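A rough sketch of the packing (helper names made up; two 32-bit words followed by the name as the plain remainder, so the length field only matters when several flags share one string):
// word 1: x(6) | y(6) | W/E(1) | west-east num(15)          = 28 bits
// word 2: N/S(1) | north-south num(15) | col1(7) | col2(7)  = 30 bits
function packFlag(f) {
    var lo = (f.x & 0x3F)
           | ((f.y & 0x3F) << 6)
           | ((f.we & 0x1) << 12)
           | ((f.weNum & 0x7FFF) << 13);
    var hi = (f.ns & 0x1)
           | ((f.nsNum & 0x7FFF) << 1)
           | ((f.col1 & 0x7F) << 16)
           | ((f.col2 & 0x7F) << 23);
    // two 16-bit char codes per word keeps the result a valid JS string
    // (note: a real format would need to dodge the UTF-16 surrogate range)
    return String.fromCharCode(lo & 0xFFFF, lo >>> 16, hi & 0xFFFF, hi >>> 16) + f.name;
}

function unpackFlag(s) {
    var lo = s.charCodeAt(0) | (s.charCodeAt(1) << 16);
    var hi = s.charCodeAt(2) | (s.charCodeAt(3) << 16);
    return {
        x: lo & 0x3F,
        y: (lo >>> 6) & 0x3F,
        we: (lo >>> 12) & 0x1,
        weNum: (lo >>> 13) & 0x7FFF,
        ns: hi & 0x1,
        nsNum: (hi >>> 1) & 0x7FFF,
        col1: (hi >>> 16) & 0x7F,
        col2: (hi >>> 23) & 0x7F,
        name: s.slice(4)
    };
}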
-
The only reason I use flags is, as you said, because I want to see my operation going: http://i.imgur.com/STJPhGX.png
I could change this now and everything would still work the same.
If we can get a beacon for displaying what happens, that would be amazing too. No name, just a position + icon?
-
We were already discussing different ways of storing them in memory in an efficient, single-string way: This should speed up parsing by a lot.
We tried to do it this way and didn't see any considerable performance benefit in comparison to simple split calls. String operations are very fast in V8; Flag object instantiation is the bottleneck here. But if you manage to make some benchmarks and prove that your method is a lot faster, then it should not be a big issue to switch to a new format.
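A trivial way to check that yourself, for example (FakeFlag is just a stand-in, since the real Flag constructor is not reachable from user code):
var sample = 'flag1~1~2~10~20';

function FakeFlag(name, color, secondaryColor, roomName, x, y) {
    this.name = name; this.color = color; this.secondaryColor = secondaryColor;
    this.roomName = roomName; this.x = x; this.y = y;
}

var t = Game.cpu.getUsed();
for(var i = 0; i < 10000; i++) sample.split('~');
console.log('split x10k:', (Game.cpu.getUsed() - t).toFixed(3));

t = Game.cpu.getUsed();
var objs = [];
for(var i = 0; i < 10000; i++) objs.push(new FakeFlag('flag1', 1, 2, 'W12N1', 10, 20));
console.log('new x10k:', (Game.cpu.getUsed() - t).toFixed(3));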
Is it possible to hook up user-methods which should return flag objects?
If we can get a beacon for displaying what happens, that would be amazing too. No name, just a position + icon?
Yeah, such features would be cool. Probably in the future.
-
Artem, what about having flags not be fully instantiated every tick on the server? You could instantiate them once, cache the resulting objects at the end of each tick, then on the next tick apply the data changes to the existing objects, add the updated flags to Game.flags for the player, and discard the cached list. This would reduce instantiation cost, since the objects would already exist; you'd instead be iterating over the flags to synchronize the data with the properties. Is this feasible at all?
In short, promote on write - here's the basic idea:
// Tick ends
_gameCache.flags[playerName] = Game.flags;
// Tick starts
_.forEach(flagRoomData, f => {
    // Split, etc
    if(_gameCache.flags[playerName][parsedName]) {
        // Update properties directly on the cached object
        _gameCache.flags[playerName][parsedName].color = COLOR_RAINBOW;
        Game.flags[parsedName] = _gameCache.flags[playerName][parsedName];
    } else {
        Game.flags[parsedName] = new Flag(...parsedInfo);
    }
});
// Clear the cache; references to flags that still exist are now in Game.flags
// for the player and will be copied back out on tick end
_gameCache.flags[playerName] = {};
-
The reason most people use flags is to have a pointer to memory which they can quickly click.
They want to display some piece of data by choosing a color, and an easy way to open the memory viewer.
I think if we can emit events for the game to display:
Game.rooms['someRoom'].displayIcon(SomeRoomPosition, COLOR_BLUE, COLOR_RED, "path.to.memory") // Maybe public/private? vision required!
A lot of people would be happy. Of course you can still choose to use the flags, but at a cost.
-
We could also impose a GCL cap on flags.
-
If you make this change I will quit the game. This is bullshit - my entire program is going to have to be rewritten from scratch!!!!
-
For the record I'm mostly using flags to place buildings. I'm not trying to cheat to use less memory, I'm literally using flags to mark locations of things.
-
I know you don't give a shit about my opinion because I have a lifetime license, but I supported this game from the beginning and brought in dozens of players. I started the open source projects.
And now your bullshit is making me rage quit.
GOOD FUCKING JOB WITH COMMUNITY MANAGEMENT GUYS.
-
YOU WON'T EVEN COPY THIS OVER TO PTR FOR US? JESUS CHRIST, HOW DO YOU EXPECT US TO TEST THE GOD DAMN WRENCH YOU'RE THROWING INTO OUR SYSTEMS?
Maybe you should try actually playing this game so you realize how badly you're fucking your customers.
-
You aren't even attempting to use your god damn brains on this one.
How about making us a Game.rooms[].flags so we don't have to parse EVERY DAMN FLAG just for one room's flags?
This is just another half assed barely thought out change that is going to screw everyone over.
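For what it's worth, you can build that index yourself once per tick - a sketch (the first Game.flags access still pays the full parse, but only once):
var flagsByRoom = _.groupBy(Game.flags, function(f) { return f.pos.roomName; });
// flagsByRoom['W12N1'] -> array of just that room's flags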
-
Calm your tits mate, it isn't as bad as it looks. The expected cost will be around 1 cpu for every 100 flags, and it should mostly affect the players who have been using flag objects as their memory store.
If you just use flags to mark locations, you only need to remove them when you're done.
-
>> But if you manage to make some benchmarks and prove that your method is a lot faster, then it should not be a big issue to switch to a new format
I don't want to share that code. I can make things fast, which gives me an edge over other players - that's the whole point of the game, right? I get my edge by being more efficient than others. This is what makes me play this game, but currently that's being taken away from me, bit by bit.
Can't we get any buffs in CPU or anything? We're just expected to "deal with this increase in cpu, you got 2 weeks". It took about 4 weeks to reduce my CPU by 20. Now 50% of that goes back to flags; I'd like to influence how they're made.
-
This game has more breaking changes pushed per month than my actual job that I get paid for, and I get more notice of those changes and more opportunity to fix my code to comply with the changes. Two weeks? Are you serious?
I'm not going to disagree that flags should cost something to parse. Storing things in flags instead of memory is obviously just offloading data from something I pay for to something I don't, and the solution to that is clearly for me to have to pay for both.
That said, it's essentially impossible to avoid triggering flag parsing somewhere in your code, because all Room.look methods (even, it would appear, if I call LOOK_TERRAIN) will trigger this. So the only option is to use flags sparingly.
I don't have as extensive use of flags as some others but I have about 3000 flags right now, because there was absolutely no hint that this wasn't "intended usage", and no hint that they were going to be hit so hard with the nerf bat. I have to fix my code, or it will essentially stop working entirely in 14 days.
Did it ever occur to you that hitting the players with nerf after nerf is extremely demoralizing? Again, at my actual job I get massive notice before changes like these, huge amounts of opportunity to bring code up to standard, and these types of changes are few and far between. And I get paid to deal with that. It’s the opposite situation with this game. I pay you.
A suggestion: If we are going to pay the CPU cost for this, something that was previously taken on by the server, perhaps an increase to our CPU is in order? Something like 2 CPU per GCL increase. Right now this change is entirely a nerf. Built anything in your code that uses flags? Congratulations, your code is now less performant! I have to do a non-trivial amount of work just to get back to where I am now. That does not feel good at all. At least if we got a CPU buff along with this, there would be some carrot with the stick: here’s the CPU you were using, if you optimize your code you can actually get more CPU than before the change - and perhaps this would also mitigate the damage and allow a bit more time to change code.
I am supposed to enjoy playing your game. I am not supposed to wake up to posts about a game that tell me I have to spend hours of work to continue using the code I’ve invested, conservatively, hundreds of hours in, and I have a deadline to do this. What if I left on vacation? I’d come back to a dead empire, because my code couldn’t even get past initialization.
I used to unequivocally think that I would love the chance to buy a lifetime sub, say in the next indiegogo campaign. This is the first change that has made me waver on that thought. I am seriously reconsidering spending so much time on this game right now.