Auth Tokens
-
I too am concerned about the rate limit of 10 per hour on uploading code. It's way too restrictive.
Thinking back to when I started, I remember several days of furious coding in which I committed far more than 10 times per hour, after ditching the in-game editor. I believe it's the way most people learn a new language: change one line, commit, see if it works, rinse and repeat. It will also interfere with "printf" debugging.
I understand the devs might want to prevent excessive use (such as an external bot continuously changing code), but that should be addressed differently (increase the CPU cost of uploading, use more elaborate limits, or just ignore it for now and monitor and warn players who do that).
-
The per-day limits would help to mitigate the issue, if it still serves the same purpose.
- 10 per hour -> 240 per day
- 60 per hour -> 1440 per day
To keep players from being locked out of using the game for a whole day because of abnormal use or some bug, perhaps a limited-use reset system would help.
In addition to that, if it were possible to raise some of the limits that are easier to hit, that would probably solve a lot of the issues, at least with my project.
I'd be interested to hear about the purpose of the limits; I had assumed it was so that players cannot bypass the CPU limitations with third-party tools.
-
@vrs Considering that uploading code also resets the global for the user, there's already a pretty steep CPU penalty for uploading.
-
I'd like to make a quick clarification here since it already causes a lot of confusion:
Regular requests made by the browser and Steam client don't involve Auth Tokens and thus are NOT rate limited at all. They will work as before, without any limits, including code uploads. The documentation article has been updated to indicate that.
I'll answer other comments and suggestions on Monday. Rate limit values will most probably be changed; they are not final in any sense.
-
I am unable to log onto the public server from Chrome. When I try to log in through Steam I get the following error:
TypeError: Cannot read property '_id' of undefined at tokenAuthPromise.then (/opt/backend/src/app/auth.js:349:41)
Must be fixed now, please confirm.
-
Any chance we can get an endpoint to query a token's access? Being able to determine what a token can access would be helpful in cases where, for example, a user selects an option to pull from segment 5 but has only granted access to segment 10.
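A hypothetical shape for such an introspection endpoint, just to make the request concrete (the endpoint, response fields, and values below are invented for illustration; nothing like this exists in the real API yet):

```javascript
// Hypothetical token-introspection response. The endpoint and field names
// are invented for illustration and do NOT exist in the real Screeps API.
// e.g. GET /api/user/auth-token-info might return something like:
const tokenInfo = {
  fullAccess: false,
  endpoints: ['GET /api/user/memory-segment'],
  memorySegments: [10], // segments this token has been granted
};

// With that in hand, a tool can check locally before making a request
// the server would reject anyway.
function canReadSegment(info, segment) {
  return info.fullAccess || (info.memorySegments || []).includes(segment);
}
```

In the scenario above, the tool would see that `canReadSegment(tokenInfo, 5)` is false and could prompt the user to grant segment 5 instead of failing with an opaque server error.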
-
@artch said in Auth Tokens:
Regular requests made by browser and Steam client don't involve Auth Tokens and thus are NOT rate limited at all. They will work as before, without any limits, including code uploads. The documentation article is updated to indicate that. I'll answer to other comments and suggestions on Monday. Rate limits values will be most probably changed, they are not final in any sense.
Will this replace basic access authentication as well come February when the auth tokens replace the current system?
-
I have another request that I think will be super helpful. Right now the options are Full Access or the selection of various options. I think a Read Only option would be extremely useful, and since all of the write operations are POST requests it should be easy to define read only as just the GET requests. This would allow third-party developers to build really informative applications. Pretty much all of the League stuff can be handled with a read-only token (with the exception of populating the public segments, but that's just one of roughly three systems the League site uses). The Screeps Dashboard used by Quorum is also read only. The backup tool could also be set up with a read-only key.
-
As someone who enjoys making tooling for the screeps ecosystem, I'm pretty excited about this new feature, but wanted to come in to express my concerns about the rate limits. Since you've already said that the values will likely be changed, I'll just ask one question: Why do the rate limits exist? Here are my thoughts on potential answers to this:
The rate limits are intended to reduce demand on Screeps infrastructure.
In this case the limits should very likely be set so high that only problematic scripts would ever trigger them. For example, requesting a memory segment (100 KB) from each shard (3) once per tick (~0.3 Hz) would round out to about 100 KB/s of bandwidth. If supporting that is tenable, the limit should be 0.3 Hz (or 1080 / hour).
For an even starker example, the code upload limit should likely be closer to 720 / hour or more, given that the "baseline" is a user editing code in the online editor, who might save every 5 seconds during active development, and we know the infrastructure can support this.
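The bandwidth arithmetic above is easy to sanity-check as a back-of-the-envelope calculation (assuming, as in the example, a full 100 KB segment, 3 shards, and requests at ~0.3 Hz per shard):

```javascript
// Back-of-the-envelope check of the memory-segment bandwidth claim.
// Assumptions: 100 KB per segment, 3 shards, one request per shard per tick
// at ~0.3 Hz (i.e. a tick roughly every 3.3 seconds).
const segmentKB = 100;
const shards = 3;
const requestHz = 0.3;

const perShardPerHour = Math.round(requestHz * 3600); // 1080 requests/hour
const totalKBps = segmentKB * shards * requestHz;     // 90 KB/s, roughly 100 KB/s

console.log(perShardPerHour, totalKBps);
```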
The rate limits are intended to increase the challenge of the game.
This seems less likely to me, but if this is the case then browser and steam clients should be rate limited as well. If you don't rate limit that authentication mechanism, then the external tooling will just find ways to use it so it can bypass the rate limits. For example, instead of the tool saying "go here to get an API token", it would say "go here to log in, then run this user script to produce a cookie you can use to log in". It's also worth pointing out that the API method of accessing memory/market/map is emulatable using the console API.
Regardless of the motivation for rate limiting, I'd like to request that a few specific ones be increased to specific values:
- POST /api/user/code should have a rate limit of at least 12 / minute = 720 / hour. This lets you update code every 5 seconds, which I bet is the pace an active coder on the site would be updating at during active development.
- GET /api/user/memory-segment should have a rate limit of at least 1080 / hour. This will allow a script to collect per-tick stats from each shard in real time.
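Whatever values the limits end up at, third-party tools can also defend themselves client-side so they never trip the server limit. A minimal token-bucket sketch (the limit values here are placeholders, not official numbers):

```javascript
// Minimal client-side token bucket: allow at most `limit` requests per
// `windowMs` milliseconds, refilling continuously. The values used below
// are placeholders, not the official Screeps limits.
class RateLimiter {
  constructor(limit, windowMs, now = Date.now()) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.tokens = limit;       // start with a full bucket
    this.lastRefill = now;
  }
  tryAcquire(now = Date.now()) {
    // Refill proportionally to the time elapsed since the last call.
    const elapsed = now - this.lastRefill;
    this.tokens = Math.min(
      this.limit,
      this.tokens + (elapsed / this.windowMs) * this.limit
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;  // OK to send the request now
    }
    return false;   // caller should wait and retry later
  }
}

// Example: 720 code uploads per hour == one every 5 seconds on average.
const codeUploads = new RateLimiter(720, 60 * 60 * 1000);
```

A tool would call `tryAcquire()` before each API request and queue or delay the request when it returns false, instead of letting the server respond with a rate-limit error.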
-
100kb/s would be 0.8 MBit/s per user (or 8.2 GB / day) just for stats... how do you think that would be tenable if you want to support that for every active user?
For an even starker example, the code upload limit should likely be closer to 720 / hour or more, given that the "baseline" is a user editing code in the online editor, who might save every 5 seconds during active development, and we know the infrastructure can support this.
I would really like to see someone coding for an hour with an average save frequency of 5 seconds. That's like saving and uploading code every second tick.
If you don't rate limit that authentication mechanism, then the external tooling will just find ways to use it so it can bypass the rate limits.
That would only work if your script solves captchas. And if you do that actively to circumvent limits set by the game I would expect your account to get banned.
-
100kb/s would be 0.8 MBit/s per user (or 8.2 GB / day) just for stats... how do you think that would be tenable if you want to support that for every active user?
That's definitely not the case; it's a worst-case scenario that isn't likely. Assuming one segment per tick per shard, and three-second ticks with a player spread across three shards, the segment would have to be completely full of completely random data to hit that figure. If the data isn't random, the compression used by the API would drop the number significantly.
Even without compression the segments are not likely to be completely full: saving that much data (and thus paying for the JSON.stringify call) would use up a lot of CPU, so people have an incentive to only store what they are using. I'm a fairly high-GCL player who collects a lot of stats, and my segments tend to average around 50 KB for stats, which turns into 7 KB when compressed (which I just tested using real statistics segments).
-
Alright, now after reading some of the comments here, I'd like to make another clarification.
Tokens' purpose is to regulate automated use of API endpoints. Automated means human-less here. Such use may involve automated stats gathering or automated actions during long (more than an hour) sessions. This explains the low limits on some endpoints, since they are not supposed to be automated in general.
However, if you use tokens in some third-party client or other software that involves human presence, then rate limiting shouldn't apply at all, as in the official client. For that purpose we should probably develop a method to reset all token timers at any time in the official client. It would look like a "Reset" button in the "Auth Tokens" section with reCAPTCHA attached to it. If you (not your automated software) have hit some rate limit and it blocks you (not your automated software), then you can simply press that button and continue. We could even develop a UI-less page containing that reCAPTCHA that your client can embed in an <iframe> to handle this scenario easily.
Now to specific questions.
-
@jbyoshi No, private servers won't include this system.
-
The rate limit on reading segments will really hurt the Screeps stats programs, which currently store stats once per tick. This will be even worse for people who are on multiple shards. Even at three-second ticks, users are only going to be able to get less than a third of their statistics with this system. Combine it with the rate limiting on reading memory (only once per minute) and statistics programs are effectively dead.
Not dead, but needing a refactor. You have to collect per-tick stats in a memory segment and flush it once per minute to third-party software. Collecting something every tick is the load profile that we'd like to eliminate with this new system.
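That refactor can be sketched as a small helper: append each tick's stats to a rolling buffer kept in a memory segment, which the external tool then drains once per minute. The segment number, stat fields, and `collectStats` helper below are made up for illustration; `RawMemory.segments` and `Game.time` are the real Screeps API, but this sketch is written as a pure function so it can run outside the game too.

```javascript
// Append one tick's stats to a rolling JSON buffer destined for a raw
// memory segment, evicting the oldest ticks if the serialized buffer
// would exceed the 100 KB segment size cap.
const SEGMENT_LIMIT = 100 * 1024; // Screeps raw segment size limit

function appendTickStats(segmentJson, tick, stats) {
  const buffer = segmentJson ? JSON.parse(segmentJson) : {};
  buffer[tick] = stats;
  let out = JSON.stringify(buffer);
  const ticks = Object.keys(buffer).map(Number).sort((a, b) => a - b);
  while (out.length > SEGMENT_LIMIT && ticks.length > 1) {
    delete buffer[ticks.shift()]; // drop the oldest tick first
    out = JSON.stringify(buffer);
  }
  return out;
}

// In-game usage sketch (segment 30 and collectStats are hypothetical):
// run every tick, then the external tool GETs the segment once per minute.
// RawMemory.segments[30] = appendTickStats(RawMemory.segments[30], Game.time, collectStats());
```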
I think the rate limiting on uploading code should be 240 per day, rather than 10 per hour. This would result in the same effective rate limit but would allow people to handle debugging a lot easier. I imagine there will be a lot of salt if people upload a bug but can't work around it due to the upload limit.
It's an option, but we have to consider the other side: with the new "Reset" button per-day limits would be easily circumvented by clicking it once a day, rather than once an hour (which is impossible for most human beings).
A new endpoint that allowed us to pull multiple segments at once would alleviate a lot the pain for the stats programs. With this we could grab all the statistic segments in one go, making it so each stat read only cost 1 memory read, 1 segment read, and 1 console call regardless of how many ticks are being processed.
Makes sense, we'll consider.
It would be nice if we could request an exemption, or at least higher limits, for some third-party tools. Specifically, I would like to request a higher limit for the League of Automated Nations website and account (which is only used for completely public information). Otherwise it's going to take a pretty massive rewrite (which I will not have time for in January due to work and travel) to get it to fit within the limits.
We might disable CAPTCHA for the specific user, but we have to define a roadmap for when this rewrite is going to be done; we can't allow this exception to stay for good.
-
After looking at these rate limits, I'm not sure that's viable without spreading the requests over several IPs to counteract the rate limiting, which would be a headache to manage.
All rate limits are user-based, not IP-based.
Another impact: currently most users request stats every 15 seconds. These limits effectively reduce that to once per minute when pulling from Memory, making stats useless for monitoring anything other than long averages.
Reducing the pulling interval is our goal here, as explained above. Please consider aggregating the per-tick stats and pulling them at once.
-
Are there any plans to add additional endpoints in to the token system? Specifically I think it would be useful to add the "my orders" and "wallet" endpoints to the system so that people can still collect stats about them but not have to give out a full access token.
Yes, it's possible.
-
Any chance we can get an endpoint to query a tokens access? Being able to determine what a token can access would be helpful in cases for example, where a user selects an option to pull from segment 5, but has only granted access to segment 10.
Sure, makes sense.
-
@tedivm Including all GET endpoints might mean a bit more than normal third-party software needs to know. It would allow reading, for example, user email, subscription details, and other sensitive data. Is it really different from giving out a full access token?
-
The rate limits are intended to reduce demand on Screeps infrastructure.
This.
In this case the limits should very likely be set so high that only problematic scripts would ever trigger them. For example, requesting a memory segment (100 KB) from each shard (3) once per tick (~0.3 Hz) would round out to about 100 KB/s of bandwidth. If supporting that is tenable, the limit should be 0.3 Hz (or 1080 / hour).
We also should consider backend CPU overhead (e.g. for gzip compression) and internal LAN overhead for such operations.
-
Another side project here is how we should rate limit the websockets. This is not implemented here yet, but we need to come up with some solution eventually. One option which is currently being debated is to limit the connection rate itself:
- When you connect to a websocket using a token for the first time, you have 1 hour timeout. After the timer is expired, the websocket connection drops.
- After that, all websocket sessions will drop after 15 seconds, with a 60-second reconnect timeout.
- You can use the "Reset" button in your account settings to restore the 1-hour timeout again.
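The scheme being debated amounts to a small state machine a client could model explicitly. A sketch of the session-lifetime logic (the numbers are the ones under debate above, explicitly not final):

```javascript
// Model of the proposed websocket limiting scheme, as described above:
// the first session after a token is issued (or after a "Reset") lives
// 1 hour; every later session lives 15 seconds and must wait 60 seconds
// before reconnecting. All values are under debate, not final.
function sessionPlan(state) {
  if (state === 'fresh' || state === 'reset') {
    return { sessionMs: 60 * 60 * 1000, reconnectDelayMs: 0 };
  }
  // Any subsequent reconnect: short-lived session with a cooldown first.
  return { sessionMs: 15 * 1000, reconnectDelayMs: 60 * 1000 };
}
```

A client library could use this to schedule reconnects up front and surface a "press Reset to restore your session" prompt when it detects the short-session regime.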