Auth Tokens


  • Dev Team

    @bonzaiferroni I think the only option for clients is to handle Google Invisible reCAPTCHA somehow. This is how the official client will work, and the same principle should be applied to any other client. You can even continue to use the /api/auth/signin endpoint with the normal token exchange it generates, but you have to be prepared for the server to ask you to confirm a CAPTCHA from time to time (once every few hours). I believe there must be some tool to embed a Web View in a Unity application these days.



  • @artch

    That might be very workable; I'll definitely look into what it would take. I wonder if it would be possible to get a new subforum for asking questions related to 3rd-party tools; sometimes a little direction from the devs can go a long way.



  • @artch Throttling would be ideal. A token bucket controlling the maximum amount of data that can be sent is probably easiest to implement. From my limited memory of the socket endpoints, I think dropping messages that would overflow the bucket is reasonable (I don't think any client will get confused if messages are dropped, since they are all named ws events). You could even send a console message noting that the user is being rate limited.
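
    For illustration, a minimal sketch of that kind of bucket (the capacity, refill rate, and byte counting are made-up numbers, not anything the server actually does):

        // Token bucket measured in bytes; drop messages that would overflow it.
        class TokenBucket {
          constructor(capacityBytes, refillBytesPerSec) {
            this.capacity = capacityBytes;
            this.tokens = capacityBytes;
            this.refillRate = refillBytesPerSec;
            this.lastRefill = Date.now();
          }
          tryConsume(bytes) {
            const now = Date.now();
            // Refill in proportion to elapsed time, capped at the bucket capacity.
            this.tokens = Math.min(
              this.capacity,
              this.tokens + ((now - this.lastRefill) / 1000) * this.refillRate
            );
            this.lastRefill = now;
            if (bytes > this.tokens) return false; // over budget: drop the message
            this.tokens -= bytes;
            return true;
          }
        }

        // e.g. a 64kb burst, refilling at 8kb/s per socket
        const bucket = new TokenBucket(64 * 1024, 8 * 1024);
        // if (!bucket.tryConsume(Buffer.byteLength(msg))) { /* drop, maybe warn once */ }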


  • Culture

    @artch The link reset would help a lot there. In my case, it usually ends up being a burst of 2-5 uploads within a couple of minutes, then sometimes 5-10 minutes until the next push. Those bursts can easily eat up the 10/hour. Resetting via a link seems like a good way to handle that. Will the reset only work once per hour period, or is it repeatable?


  • Culture

    @artch On the topic of WebSockets, what's the current issue there? Is it console value sizes? For stats, I know there are a few of us who output stats over the console in order to get per-second ticks without heavy HTTP polling. I have ~3kb of console output on the bigger ticks, plus ~14kb of stats each tick; one of the very last things I do in my loop is serialize the stats into a single line and console.log it.
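
    (For anyone curious, the single-line approach is just something like the sketch below; the STATS prefix and the fields are only examples.)

        // At the very end of the main loop: one console line carrying all stats.
        const stats = {
          tick: Game.time,
          cpu: Game.cpu.getUsed(),
          bucket: Game.cpu.bucket,
          gclProgress: Game.gcl.progress,
        };
        console.log('STATS;' + JSON.stringify(stats));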

    As a side note, one idea that crossed my mind after tokens were added is a logging service: allow users to register a token, and I would connect to the socket and store the last day or so of console.log data. That data could then be downloaded as a text file or possibly indexed and searched online. This wouldn't be possible with time-limited sockets, but it would with a bandwidth-limited socket. (This assumes only subscribing to /console for each user.)

    For now I've got a couple projects/ideas on hold waiting to see how the rate-limiting plays out.


  • Culture

    We need to come up with a solution that allows legit console usage like tracking errors and short messages, but disallows abusing it to send large amounts of data.

    What about putting a limit on how much can be pushed through the console? For example, limiting the string length for console.log, or a maximum total console.log volume per tick. This would have the added benefit of working for all cases of console overuse, rather than just limiting stats. For example, if you thought Quorum was pushing too much data through the console, even though it is a bunch of short messages with no "stats", this would prompt some limiting.
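
    To make the idea concrete, here is a rough user-code sketch of what such a budget could look like; the actual enforcement would presumably live server-side, and the numbers here are made up:

        // Wrap console.log with a per-line length cap and a per-tick byte budget.
        const MAX_LINE = 1000;          // assumed max string length per console.log
        const MAX_PER_TICK = 10 * 1024; // assumed total console volume per tick

        let usedThisTick = 0;
        let lastTick = -1;
        const rawLog = console.log;

        console.log = function (...args) {
          if (Game.time !== lastTick) {
            lastTick = Game.time;
            usedThisTick = 0;
          }
          let line = args.join(' ');
          if (line.length > MAX_LINE) line = line.slice(0, MAX_LINE) + '...';
          if (usedThisTick + line.length > MAX_PER_TICK) return; // over budget: drop
          usedThisTick += line.length;
          rawLog(line);
        };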

    The other thing to consider is that there are really three main stats programs (screeps-stats, screeps-grafana, and screepsplus) that are used by the community at large, and they are essentially managed by three people (myself, @ags131, @Dissi). I'm pretty sure the three of us will commit to not pushing stats over the console if we're specifically told not to, and will refuse to merge any PRs that enable it. I know there are other individuals who have their own setups separate from that, but it may be a matter of simply sending your "high console users" a message to let them know they're putting too much strain on the system.

    We may leave only the global 120 req/min limit and drop all per-endpoint limits on the PTR.

    That would be fantastic.


  • Culture

    @artch said in Auth Tokens:

    @bonzaiferroni I think the only option for clients is to handle Google Invisible reCAPTCHA somehow. This is how the official client will work, and the same principle should be applied to any other client. You can even continue to use the /api/auth/signin endpoint with the normal token exchange it generates, but you have to be prepared for the server to ask you to confirm a CAPTCHA from time to time (once every few hours). I believe there must be some tool to embed a Web View in a Unity application these days.

    This is a bit more complicated, but I have an idea that might work for clients such as this.

    1. The client authenticates to /api/auth/client.
    2. /api/auth/client returns two values: an authentication token and a URL.
    3. The client then exposes the URL to the user.
    4. The user clicks the URL, which brings them to a Screeps page (presumably in a browser, not necessarily a webview) that includes the reCAPTCHA.
    5. Upon clicking the "enable" button, the token given in step 2 is activated for six hours.

    In the background, the client can check /api/auth/isactive and pause all interaction until the token is enabled.

    The benefit of this method is that it can be used by a number of applications that don't have a web view, including command-line ones, while still requiring full "human" authentication that can't be easily bypassed by a bot.
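
    A rough client-side sketch of that flow (both endpoints here are just the hypothetical ones proposed above, and the response shapes are guesses):

        // Steps 1-3: get a token plus an activation URL, show the URL to the user.
        async function waitForActivation(baseUrl) {
          const res = await fetch(baseUrl + '/api/auth/client', { method: 'POST' });
          const { token, url } = await res.json();
          console.log('Open this link to activate the client:', url);

          // Background check: pause all interaction until the token is enabled.
          for (;;) {
            const check = await fetch(baseUrl + '/api/auth/isactive?token=' + token);
            const { active } = await check.json();
            if (active) return token; // activated for six hours
            await new Promise(resolve => setTimeout(resolve, 5000));
          }
        }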


  • YP

    Sounds like OAuth2 with a CAPTCHA on the login page :)


  • Culture

    @w4rl0ck I actually almost mentioned that it was a bastardized OAuth version, but since it's missing a ton of the OAuth requirements and is a much simpler overall system, I let it pass 😄


  • Culture

    What about putting a limit on how much can be pushed through the console? For example, limiting the string length for console.log, or a maximum total console.log volume per tick. This would have the added benefit of working for all cases of console overuse, rather than just limiting stats. For example, if you thought Quorum was pushing too much data through the console, even though it is a bunch of short messages with no "stats", this would prompt some limiting.

    I agree here; it makes sense for there to be a per-tick console volume limit. Users can also easily adapt to that by queuing and prioritizing messages to stay within the limit.
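
    Something along these lines, for example (the cap value and queue shape are just placeholders):

        // Queue console messages and flush them in priority order, deferring the
        // rest to later ticks so output stays under an assumed per-tick cap.
        const CAP = 8 * 1024;
        const queue = []; // persists between ticks as long as the global survives

        function log(priority, text) {
          queue.push({ priority, text });
        }

        function flushConsole() { // call once at the end of the main loop
          queue.sort((a, b) => a.priority - b.priority); // lower = more important
          let used = 0;
          while (queue.length && used + queue[0].text.length <= CAP) {
            const msg = queue.shift();
            used += msg.text.length;
            console.log(msg.text);
          }
          // whatever is left waits for the next tick
        }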

    The other thing to consider is that there are really three main stats programs (screeps-stats, screeps-grafana, and screepsplus) that are used by the community at large, and they are essentially managed by three people (myself, @ags131, @Dissi). I'm pretty sure the three of us will commit to not pushing stats over the console if we're specifically told not to, and will refuse to merge any PRs that enable it. I know there are other individuals who have their own setups separate from that, but it may be a matter of simply sending your "high console users" a message to let them know they're putting too much strain on the system.

    Agreed here too. If I'm asked not to do something (such as using the console for stats), I'll update the ScreepsPlus stats agent to remove that capability and encourage users to update and switch to another method.

    My primary motivation for using the console for stats output is live per-tick stats; polling the API endpoints every 3 seconds (or whatever the tick rate is on that shard) is, IMO, way too many HTTP requests to be making constantly, while the websocket stream is a steady trickle without the overhead of individual HTTP requests. I also dump a copy of every console message and error into rotated text files for easy searching; it makes debugging infrequent errors easier.
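
    For reference, the socket side of that looks roughly like the sketch below; the endpoint path and the auth/subscribe commands are from memory and may not be exact (node-screeps-api wraps all of this), and log rotation is left out for brevity:

        // Stream console output over the websocket instead of polling HTTP.
        const WebSocket = require('ws');
        const fs = require('fs');

        const ws = new WebSocket('wss://screeps.com/socket/websocket');
        ws.on('open', () => {
          ws.send('auth ' + process.env.SCREEPS_TOKEN);
          ws.send('subscribe user:' + process.env.SCREEPS_USER_ID + '/console');
        });
        ws.on('message', data => {
          // Append every console frame to a dated text file for later searching.
          const file = 'console-' + new Date().toISOString().slice(0, 10) + '.log';
          fs.appendFileSync(file, data + '\n');
        });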


  • Dev Team

    @tedivm

    This is a bit more complicated, but I have an idea that might work for clients such as this.

    1. The client authenticates to /api/auth/client.
    2. /api/auth/client returns two values: an authentication token and a URL.
    3. The client then exposes the URL to the user.
    4. The user clicks the URL, which brings them to a Screeps page (presumably in a browser, not necessarily a webview) that includes the reCAPTCHA.
    5. Upon clicking the "enable" button, the token given in step 2 is activated for six hours.

    What's the difference here from the usual /api/auth/signin flow (with reCAPTCHA enabled starting from February)? It looks like the same principle to me.


  • Culture

    @artch The main difference is that in my example no one has to embed a reCAPTCHA directly in their program (which, after looking at it, I'm not sure is even possible for many applications). I just think it'll be easier to implement.


  • Dev Team

    @tedivm You can open the URL directly instead of embedding it in an iframe or a web view; that's the decision of the client developer and has nothing to do with how the server works.


  • Culture

    @artch said in Auth Tokens:

    @tedivm You can open the URL directly instead of embedding it in an iframe or a web view; that's the decision of the client developer and has nothing to do with how the server works.

    Then I'm not sure I understand how that would work. Where does the URL for the iframe come from, and how does it lock itself into a specific token?


  • Dev Team

    @tedivm The URL is always the same. It's in the account settings, but we can make a variant without UI using a special flag, like https://screeps.com/a/#!/account/auth-tokens/reset?noui=1. It doesn't lock a specific token, since rate limiting is user-based, not token-based, so it simply resets the user counter (both for persistent auth tokens and for regular one-time tokens). A third-party client might use any token mechanism and give this URL to the user every time the server responds that it's required to proceed.
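
    So from the client side the handling could be as simple as the sketch below; the 429 status code and the X-Token header are assumptions here, the reset URL is the one above:

        // Push code, and surface the reset link when the user counter is exhausted.
        async function postCode(modules, token) {
          const res = await fetch('https://screeps.com/api/user/code', {
            method: 'POST',
            headers: { 'X-Token': token, 'Content-Type': 'application/json' },
            body: JSON.stringify({ branch: 'default', modules }),
          });
          if (res.status === 429) {
            console.log('Rate limited. Open this link to reset your counter:');
            console.log('https://screeps.com/a/#!/account/auth-tokens/reset?noui=1');
            return false;
          }
          return res.ok;
        }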


  • Overlords

    I've had a confusing discussion on Slack around the rate limit on POST /api/user/code.

    I updated rollup-plugin-screeps to require a token when pushing to screeps.com, which causes its deploys to be rate-limited to 10 per hour.

    I've been told that grunt-screeps doesn't get rate limited, but when looking at the code I thought it was because it is still using username/password authentication (https://github.com/screeps/grunt-screeps/blob/master/tasks/screeps.js#L61). Is it not going to need updating?

    Unlike node-screeps-api, which uses /api/auth/signin to get a token and then uses that token to push code, grunt-screeps appears to use an auth header to send email:password. Is that header going to stay and not be rate limited?
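
    The two request styles being compared look roughly like this; treat the exact header names as an assumption:

        // grunt-screeps style: HTTP Basic auth with email:password.
        // node-screeps-api / rollup-plugin-screeps style: an auth token header.
        function authHeaders(opts) {
          if (opts.token) {
            return { 'X-Token': opts.token };
          }
          const creds = Buffer.from(opts.email + ':' + opts.password).toString('base64');
          return { Authorization: 'Basic ' + creds };
        }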


  • Culture

    @arcath node-screeps-api also optionally uses a token directly, skipping the /api/auth/signin step. gulp-screeps has already been updated to use a token if supplied.


  • Culture

    Can you raise the rate limit on GET /api/user/memory? Right now it is set at 60 per hour (once per minute). If possible (and I understand if this isn't), could that be made a "per shard" limit? If not, can it be raised slightly, say to 120 per hour?


  • Overlords

    @ags131 said in Auth Tokens:

    @arcath node-screeps-api also optionally uses a token directly, skipping the /api/auth/signin step. gulp-screeps has already been updated to use a token if supplied.

    I'm using a token if one is supplied, and it requires you to use a token for screeps.com.

    It was mentioned by @dissi-mark on Slack that grunt-screeps didn't have a rate limit. From looking at the grunt-screeps source, it appears that POST /api/user/code takes an auth header that isn't rate limited? I'd like to know if this is the case so I can change rollup-plugin-screeps back to requiring a username/password to use that header.


  • Dev Team

    Update

    • There is now a way to turn off rate limiting for a specific token for 2 hours; see the documentation.

    • Limit values have been updated. Some limits are per-day now.

    • Two new endpoints added to the token access scope:

      • GET /api/user/money-history
      • GET /api/market/my-orders

    • Added an option to add a description to a token.

    • New endpoint /api/auth/query-token?token=XXX, which shows the access scope and the unlimited-period timer for a token.

    • Endpoint /api/user/memory-segment now accepts the segment query param as a comma-separated list.
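
    For example, roughly (the X-Token header is assumed; XXX and the segment ids are placeholders):

        async function demo(token) {
          const headers = { 'X-Token': token };

          // Access scope and unlimited-period timer for a token.
          const info = await fetch(
            'https://screeps.com/api/auth/query-token?token=' + token
          ).then(r => r.json());

          // Several memory segments in one request via a comma-separated list.
          const segments = await fetch(
            'https://screeps.com/api/user/memory-segment?segment=0,1,2',
            { headers }
          ).then(r => r.json());

          console.log(info, segments);
        }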

    👍