Thanks a lot, @artch!
I simplified the code a bit more and filed bug 1446634.
I would report it to Mozilla, but I can't create an isolated test case for it - since the JS code is minified I cannot simply copy the drawing code, and I couldn't find the result in the DOM either (it's probably drawn on a `<canvas>` or something). And I can't simply link to the page, because one would need a Screeps account to view it.
Any chance you could create a simple, public page that draws a badge the same way (with the black border), @artch? I would use the LOAN badge service, but the error does not appear there.
Since I upgraded to Firefox 59 yesterday, the player badges in the world view are rendered incorrectly:
As you can see, parts of the badge pattern "bleed" outside the circle.
This does not happen on other pages such as the profile page or account management. It also does not happen in Chrome.
For me, not having chat archives in Slack is a killer issue. It's really annoying that I can't see what I wrote to someone a few months ago.
Yes, there is the archive bot, but you can't use it for private channels or direct messages.
So I vote for Discord.
Note on invite-only channels: In the Discord model, a group (e.g. an alliance) that previously used a Slack invite-only channel would probably simply create their own server. It's free and - unlike separate Slack instances - uses the same user accounts, so you can't mix up people.
After I respawned on my private server, my old spawns were changed to the owner `Screeps`. When my code claimed one of these rooms again and then tried to `destroy()` the old spawn, I got `ERR_NOT_OWNER` - however, when I clicked "Destroy this structure" in the UI, the spawn was destroyed. This looks like a bug to me.
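For reference, here is a minimal sketch of roughly what my code does (the room name is a placeholder, and I'm assuming the old spawn shows up via `FIND_HOSTILE_STRUCTURES` because it is owned by `Screeps`):

```javascript
// Minimal reproduction sketch - 'E1N1' is a placeholder for the re-claimed room.
const room = Game.rooms['E1N1'];
if (room && room.controller && room.controller.my) {
    // The old spawn is owned by "Screeps", so it shows up as hostile.
    const oldSpawns = room.find(FIND_HOSTILE_STRUCTURES, {
        filter: s => s.structureType === STRUCTURE_SPAWN
    });
    for (const spawn of oldSpawns) {
        // Expected: OK (0), since I own the room's controller.
        // Actual: ERR_NOT_OWNER (-1).
        console.log(spawn.id, spawn.destroy());
    }
}
```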
Your rampart #593883b283f4af662b8c7b1b in room E5N83 is under attack!
Your rampart #593943a9ec6b20792b881d30 in room E5N83 is under attack!
Your rampart #5938840921e7fe834c004fee in room E5N83 is under attack!

Note that when I uncheck the "Notify me when attacked" option for the rampart in question, I no longer get the notifications.
> What if the shards were vertical from each other, and the portals between them went vertically?
This is a truly great idea! It would make shards (probably called "levels" then) feel more like an intrinsic concept of the game world and less like a workaround for a database performance bottleneck. (I would still prefer solving that problem by switching databases, however.)
> This looks pretty much close to how Redis Cluster works. And as far as I understand, Cassandra doesn't have secondary indexes support as well. Does it provide any benefits over Redis then?
Cassandra does have support for secondary indexes, but using them has a drawback: because secondary indexes are local, queries always have to involve all nodes (see https://pantheon.io/blog/cassandra-scale-problem-secondary-indexes for background), whereas queries against regular tables (and even materialized views) are directed to only a subset of the cluster nodes. As an alternative, I usually recommend that my developers and customers use materialized views and/or specialized "lookup tables" which redundantly store the data with primary keys optimized for the respective queries. This approach yields the best performance for load profiles where data is read more often than it is written (which I guess may be the case for you).
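To illustrate, here is a rough sketch of that lookup-table pattern using the DataStax Node.js driver; the keyspace, table and column names are made up for the example, not Screeps' actual schema:

```javascript
const cassandra = require('cassandra-driver');
const client = new cassandra.Client({
    contactPoints: ['127.0.0.1'],
    localDataCenter: 'datacenter1',
    keyspace: 'screeps'
});

// Base table, keyed for lookups by object id:
//   CREATE TABLE game_objects (id text PRIMARY KEY, room text, data text);
// Lookup table, keyed for the "all objects in a room" query:
//   CREATE TABLE game_objects_by_room (room text, id text, data text,
//                                      PRIMARY KEY (room, id));

async function saveObject(id, room, data) {
    // Write redundantly to both tables; a logged batch keeps them in sync.
    await client.batch([
        { query: 'INSERT INTO game_objects (id, room, data) VALUES (?, ?, ?)',
          params: [id, room, data] },
        { query: 'INSERT INTO game_objects_by_room (room, id, data) VALUES (?, ?, ?)',
          params: [room, id, data] }
    ], { prepare: true });
}

async function objectsInRoom(room) {
    // Hits only the replicas owning this partition, not the whole cluster.
    const result = await client.execute(
        'SELECT id, data FROM game_objects_by_room WHERE room = ?',
        [room], { prepare: true });
    return result.rows;
}
```

The price is the extra write per lookup table, which is why the pattern pays off mainly for read-heavy workloads.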
I'm not too familiar with Redis Cluster. But from what I read (http://bigdataconsultants.blogspot.de/2013/12/difference-between-cassandra-and-redis.html and https://www.quora.com/Which-is-better-Redis-cluster-or-Cassandra), Redis uses a master-slave architecture, whereas Cassandra nodes are all equal. I find the latter approach superior, as it allows spreading not only reads but also writes.
> It looks like Cassandra is a better fit than MongoDB for big data set cases, not for higher read/write throughput.
Well, but read/write throughput in Cassandra scales linearly if you add more machines. So you don't have the problem of "overhead due to replication" killing the performance benefit of scaling horizontally.
Without going too much into detail: Cassandra achieves this because the partition key allows calculating which node(s) are responsible for a given query, and the driver will only ask those nodes. As a simple example (ignoring data duplication across nodes for fault tolerance), if I have 5 nodes, each of them contains one fifth of the data, so each handles only one fifth of the queries. Thus the load is spread evenly, and adding more nodes improves throughput.
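A deliberately over-simplified sketch of that idea (real Cassandra hashes the partition key with Murmur3 into token ranges and replicates partitions, and drivers call this "token-aware routing", but the principle is the same - the key alone determines the node):

```javascript
const crypto = require('crypto');

const nodes = ['node-1', 'node-2', 'node-3', 'node-4', 'node-5'];

function nodeFor(partitionKey) {
    // Hash the key and map it onto one of the nodes.
    const hash = crypto.createHash('md5').update(partitionKey).digest();
    return nodes[hash.readUInt32BE(0) % nodes.length];
}

// Each key consistently maps to one node, so with 5 nodes each one
// serves roughly one fifth of the queries:
console.log(nodeFor('E5N83')); // always the same node for this key
console.log(nodeFor('W12S7')); // likely a different node
```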