Auth Tokens
-
Another issue that's come up is that the rate limits are making development difficult. Would it be possible to have the rate limits lifted or removed for PTR?
-
For the moment I've reverted my code pushing to username/password auth; I've hit the rate limit three times in a row this morning trying to work on cross-shard code.
-
I have another minor request for the tokens: add an option to attach a short comment or label to each token in the UI, which would make it easy to tell which tokens are used where. For example: 'local dev', 'screepsplus', 'stats', etc.
On that note, the ability to manually enter paths for the token would be nice too. It would allow a bit more flexibility than the current full-access or limited selections.
-
Thanks for the reply!
Do you have anything you can share about the impact these proposed limits would have on current usage patterns? Perhaps it would be good to turn on these rate limits in "warning mode" for a few weeks to gather feedback on what reasonable limits feel like?
I wish the websockets endpoint specifically would be rate-limited on bandwidth rather than on connection duration. My external console uses effectively no bandwidth but stays connected for hours; even a 0.25 KB/s bandwidth limit would be completely acceptable to me.
As for the UI-less reCAPTCHA page to clear the rate limits: I actually think this is pretty fine. I'm imagining the deploy script would catch an error and print out a link for the user to click, then just keep refreshing in the background until the user does it.
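Roughly what I'm picturing (the 429 status and the clearLimitsUrl field are guesses at the shape, and deploy() is whatever upload function the script already has):

```js
// Catch the rate-limit error, print the CAPTCHA link for the user, then
// keep retrying in the background until the limit clears.
async function deployWithPrompt(deploy) {
    for (;;) {
        const res = await deploy();
        if (res.status !== 429) return res;
        console.log(`Rate limited - open ${res.clearLimitsUrl} to continue`);
        await new Promise(resolve => setTimeout(resolve, 5000)); // retry every 5s
    }
}
```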
Sounds like stats are a problem for you guys, so the rate limits on that make sense. I do wish it were more usage-based; e.g. these endpoints could have a CPU cost associated with them that drains directly from your bucket. This creates an incentive for users to optimize their stats collection, and you can adjust the CPU cost of the endpoint as required.
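To sketch the bucket-drain idea (all paths and costs here are invented):

```js
// Server-side sketch: each endpoint carries a CPU price that is debited
// straight from the player's in-game bucket before the request is served.
const ENDPOINT_CPU_COST = {
    '/api/user/memory': 2,
    '/api/game/room-objects': 5,
};

function chargeRequest(user, path) {
    const cost = ENDPOINT_CPU_COST[path] || 1;
    if (user.bucket < cost) return false; // bucket too low: reject the request
    user.bucket -= cost;                  // drain directly from the bucket
    return true;
}
```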
-
The Screeps3D project was planning on an early release to showcase the work so far. Based on this discussion, it sounds like there are still some issues to be considered for 3rd-party clients (overall data use including websockets, resetting rate limits). In light of that, it seems best not to do a release while it is uncertain what kind of issues it might cause for the public server.
About the rate limits, I'd humbly request some option other than the manual reset. It would not be a very good user experience to be scrolling around the map and occasionally have to do a CAPTCHA. I'm not a web dev, so I looked up the invisible reCAPTCHA, and I'm not sure it will be possible in a non-web environment like Unity3D. Of course I understand that the dev team has limited resources to accommodate the needs of a 3rd-party client, so I'm not expecting it. It might be best to put the project on hold until there is something available.
-
Recording console data, which the proposed rate limits would make impossible, as each websocket could only be used for 15 seconds before being disconnected for a minute.
We need to come up with a solution that allows legitimate console usage like tracking errors and short messages, but disallows abusing it to send large amounts of data.
To add to that: if you created a new API endpoint for pulling in room object data, I bet a ton of people would use that instead of the websocket, which would allow you to rate-limit it using the rate-limiting system from above.
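For example, a stats tool could poll something like this instead of holding a socket open (the path is invented to illustrate the idea; no such endpoint exists yet):

```js
// Hypothetical room-object polling; the endpoint path is made up.
async function getRoomObjects(room, shard, token) {
    const res = await fetch(
        `https://screeps.com/api/game/room-objects?room=${room}&shard=${shard}`,
        { headers: { 'X-Token': token } });
    return res.json();
}
```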
Makes sense, we'll look into that.
-
@artch A "read-only" token doesn't need to be a "read-everything" token. I don't know if there's a better description that's widely used for a token that's only authorized for read access to non-sensitive data.
I mean, the GET api/user/me endpoint contains some sensitive data, for example. Allowing all GET endpoints would include this one. Otherwise we have to develop some criteria other than "all GET".
-
Another issue that's come up is that the rate limits are making development difficult. Would it be possible to have the rate limits lifted or removed for PTR?
We may leave only the global 120 req/min limit and drop all per-endpoint limits on the PTR.
-
For the moment I've reverted my code pushing to username/password auth; I've hit the rate limit three times in a row this morning trying to work on cross-shard code.
Do you think the "Reset" button would help you with that? You can even set up your push script to open that URL automatically on the rate-limiting response; it will give you another 10 pushes in the current hour window (with an automatic reset to yet another 10 requests at minute 00 of the next hour).
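For instance, a push script could do something like this (just a sketch; the resetUrl field is illustrative, not the actual response shape):

```js
// Push code with a token; on a rate-limited response, open the reset
// link in the user's browser and retry once (Node 18+).
const { exec } = require('child_process');

async function pushCode(modules, token, retried = false) {
    const res = await fetch('https://screeps.com/api/user/code', {
        method: 'POST',
        headers: { 'X-Token': token, 'Content-Type': 'application/json' },
        body: JSON.stringify({ branch: 'default', modules }),
    });
    if (res.status === 429 && !retried) {
        const { resetUrl } = await res.json(); // illustrative field name
        exec(`xdg-open "${resetUrl}"`); // 'open' on macOS, 'start' on Windows
        await new Promise(resolve => setTimeout(resolve, 30000)); // give the user time
        return pushCode(modules, token, true);
    }
    return res;
}
```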
-
I have another minor request for the tokens: add an option to attach a short comment or label to each token in the UI, which would make it easy to tell which tokens are used where. For example: 'local dev', 'screepsplus', 'stats', etc.
Nice idea!
On that note, the ability to manually enter paths for the token would be nice too. It would allow a bit more flexibility than the current full-access or limited selections.
It's technically difficult to implement.
-
I wish the websockets endpoint specifically would be rate-limited on bandwidth rather than on connection duration. My external console uses effectively no bandwidth but stays connected for hours; even a 0.25 KB/s bandwidth limit would be completely acceptable to me.
How would you personally like to see it implemented? Truncating responses? Throttling/skipping them?
-
@bonzaiferroni Do you have any thoughts on what would be the best solution for your project, considering our needs with this new system?
-
Do you have any thoughts on what would be the best solution for your project, considering our needs with this new system?
Unfortunately I don't have any brilliant ideas. Clients are going to be in a whole different class of data use compared to an automated tool like a stats-checker. While a stats-checker will use a little bit of data constantly, a client will potentially use quite a bit more data, but only when the player is active. That is why I thought the per-day limits might mitigate the problem, but it is only a partial solution and it isn't suitable for the reasons you've stated above. Another issue is that hitting these limits with the client might lock a user out of other tools, which would be unacceptable to most players. The heart of the problem is that automated tools can be designed to stay within reasonable limits, but a client's data use will be intrinsically unpredictable and sporadic.
I can't think of any solution short of allowing clients to bypass the limits, as you've done with the official client. It might be that the best use for 3rd-party clients is with private servers. Since the Screeps3D project is being developed under the MIT license I suppose it would be possible for the dev-team to release their own version that has been modified to access the public server, but I realize that is probably unrealistic.
-
@bonzaiferroni I think the only option for clients is to handle Google Invisible reCAPTCHA somehow. This is how the official client will work, and the same principle should be applied to any other client. You can even continue to use the /api/auth/signin endpoint with the normal token exchange it generates, but you have to be prepared for the server to ask you to confirm a CAPTCHA from time to time (once every few hours). I believe there must be some tool to embed a Web View in a Unity application these days.
-
That might be very workable; I'll definitely look into what it would take. I wonder if it would be possible to get a new subforum for asking questions related to 3rd-party tools; sometimes a little direction from the devs can go a long way.
-
@artch Throttling would be ideal. A token bucket controlling the maximum amount of data that can be sent is probably easiest to implement. From my limited memory of the socket endpoints, I think dropping messages that would overflow the bucket is reasonable (I don't think any client will get confused if messages are dropped, since they are all named ws events). You could even send a console message saying that you are being rate-limited.
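Something along these lines, with illustrative numbers (256 B/s matches the 0.25 KB/s figure mentioned earlier in the thread):

```js
// Sketch of a per-socket byte bucket: tokens refill continuously, and a
// message that would overflow the remaining budget is dropped rather
// than queued.
class ByteBucket {
    constructor(ratePerSec = 256, capacity = 8192) {
        this.rate = ratePerSec;   // bytes refilled per second
        this.capacity = capacity; // burst allowance in bytes
        this.tokens = capacity;
        this.last = Date.now();
    }
    trySend(message) {
        const now = Date.now();
        this.tokens = Math.min(this.capacity,
            this.tokens + ((now - this.last) / 1000) * this.rate);
        this.last = now;
        const size = Buffer.byteLength(message);
        if (size > this.tokens) return false; // drop the message
        this.tokens -= size;
        return true;
    }
}
```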
-
@artch The link reset would help a lot there. In my case, it usually ends up being a burst of 2-5 uploads within a couple of minutes, then sometimes 5-10 minutes until the next push. Those bursts can easily eat up the 10/hour. Resetting via a link seems like a good way to handle that. Will the reset only work once per hour period, or is it repeatable?
-
@artch On the topic of websockets, what's the current issue there? Is it console value sizes? For stats, I know there are a few of us who spit out stats on the console in order to get per-second ticks without heavy HTTP polling. I have ~3 KB of console output on the bigger ticks, plus ~14 KB of stats each tick; one of the very last things I do in my loop is serialize the stats into a single line and console.log that.
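For reference, the in-game side of that looks roughly like this (the STATS prefix and the fields are just my own conventions):

```js
// In-game sketch: gather stats during the tick, then emit them as one
// JSON line at the very end of the loop so an external collector can
// pick them out of the console stream.
module.exports.loop = function () {
    // ... run creeps, towers, market code, etc. ...

    const stats = {
        tick: Game.time,
        cpu: Game.cpu.getUsed(),
        bucket: Game.cpu.bucket,
        gcl: Game.gcl.level,
    };
    console.log('STATS;' + JSON.stringify(stats)); // single serialized line
};
```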
As a side note, one idea that crossed my mind after tokens were added is a logging service: users would register a token, and I would connect to the socket and store the last day or so of console.log data. That data could then be downloaded as a text file, or possibly indexed and searched online. This wouldn't be possible with time-limited sockets, but would be with a bandwidth-limited socket. (This assumes subscribing only to /console for each user.)
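Roughly like this; the endpoint and message format here follow community documentation of the socket API, so treat the specifics as assumptions:

```js
// Sketch of the logging service: subscribe to one user's console stream
// and keep a rolling 24-hour buffer of raw messages.
const WebSocket = require('ws');

function tailConsole(token, userId, buffer) {
    const ws = new WebSocket('wss://screeps.com/socket/websocket');
    ws.on('open', () => {
        ws.send(`auth ${token}`);
        ws.send(`subscribe user:${userId}/console`);
    });
    ws.on('message', raw => {
        buffer.push({ at: Date.now(), line: raw.toString() });
        const cutoff = Date.now() - 24 * 60 * 60 * 1000;
        while (buffer.length && buffer[0].at < cutoff) buffer.shift(); // keep ~1 day
    });
}
```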
For now I've got a couple of projects/ideas on hold, waiting to see how the rate-limiting plays out.
-
We need to come up with a solution that allows legitimate console usage like tracking errors and short messages, but disallows abusing it to send large amounts of data.
What about putting a limit on how much can be pushed through the console? For example, limiting the string length for console.log, or a max total console.log volume per tick (rough sketch below). This would have the added benefit of working for all cases of console overuse, rather than just limiting stats. For example, if you thought Quorum was pushing too much data through the console, even though it is a bunch of short messages with no "stats", this would prompt some limiting.

The other thing to consider is that there are really three main stats programs (screeps-stats, screeps-grafana, and screepsplus) that are used by the community at large, and they are essentially managed by three people (myself, @ags131, and @Dissi). I'm pretty sure the three of us will commit to not pushing stats over the console if we're specifically told not to, and will refuse to merge any PRs which enable it. I know there are other individuals who have their own setups separate from that, but it may be a matter of simply sending your "high console users" a message to let them know they're putting too much strain on the system.
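To illustrate the kind of per-tick cap I mean (purely a sketch; the number and names are invented):

```js
// Server-side sketch: budget console output per tick, emit one notice
// when the budget is exceeded, and silently drop the rest. A fresh
// filter would be created at the start of each tick.
const MAX_CONSOLE_BYTES_PER_TICK = 4096; // invented number

function makeConsoleFilter() {
    let used = 0;
    let warned = false;
    return function filter(line) {
        used += Buffer.byteLength(line);
        if (used <= MAX_CONSOLE_BYTES_PER_TICK) return line;
        if (!warned) {
            warned = true;
            return '[console output truncated for this tick]';
        }
        return null; // drop the rest of this tick's output
    };
}
```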
We may leave only the global 120 req/min limit and drop all per-endpoint limits on the PTR.
That would be fantastic.
-
@artch said in Auth Tokens:
@bonzaiferroni I think the only option for clients is to handle Google Invisible reCAPTCHA somehow. This is how the official client will work, and the same principle should be applied to any other client. You can even continue to use the /api/auth/signin endpoint with the normal token exchange it generates, but you have to be prepared for the server to ask you to confirm a CAPTCHA from time to time (once every few hours). I believe there must be some tool to embed a Web View in a Unity application these days.

This is a bit more complicated, but I have an idea that might work for clients such as this:
1. The client authenticates to /api/auth/client.
2. /api/auth/client returns two values: an authentication token and a URL.
3. The client then exposes the URL to the user.
4. The user clicks the URL, which brings them to a screeps page (presumably in a browser, not necessarily a webview) that includes the reCAPTCHA.
5. Upon clicking the "enable" button, the token given in step 2 is activated for six hours.

In the background the client can check /api/auth/isactive and pause all interaction until the token is enabled (rough sketch below).

The benefit to this method is it can be used by a number of applications that don't have a web view, including command-line ones, but still requires full "human" authentication that can't be easily bypassed by a bot.
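A rough sketch of the client side; both endpoints here are hypothetical, as proposed above:

```js
// Client-side sketch of the proposed flow. /api/auth/client and
// /api/auth/isactive are hypothetical endpoints from the suggestion
// above; they are not part of the current API.
async function authorize() {
    const res = await fetch('https://screeps.com/api/auth/client', { method: 'POST' });
    const { token, url } = await res.json();
    console.log(`Open ${url} and complete the reCAPTCHA to activate this session`);

    for (;;) { // pause all interaction until the token is enabled
        const check = await fetch(`https://screeps.com/api/auth/isactive?token=${token}`);
        const { active } = await check.json();
        if (active) return token;
        await new Promise(resolve => setTimeout(resolve, 5000)); // poll every 5s
    }
}
```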