Wed, 26 Sep 2018 17:46:48 -0700 wireprotov2: advertise redirect targets in capabilities
Gregory Szorc <gregory.szorc@gmail.com> [Wed, 26 Sep 2018 17:46:48 -0700] rev 40023
wireprotov2: advertise redirect targets in capabilities

This is pretty straightforward. Supporting redirect targets will require an
extension, so we've added a function that extensions can wrap in order to
define redirect targets.

To test this, we teach our simple cache test extension to read redirect
targets from a file. It's a bit hacky, but it gets the job done.

Differential Revision: https://phab.mercurial-scm.org/D4775
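A minimal sketch of how an extension might wrap that hook to advertise an
extra redirect target. The function name (getadvertisedredirecttargets) and
the field names in the returned entries are assumptions based on this
series rather than a verified API, and the CDN target is purely
hypothetical:

    # redirects.py - hypothetical extension; names are illustrative.
    from mercurial import extensions, wireprotov2server

    def _redirecttargets(orig, repo, proto):
        # Start from whatever targets core or other extensions advertise.
        targets = list(orig(repo, proto))
        # Advertise a hypothetical CDN mirror as an additional target.
        targets.append({
            b'name': b'cdn',
            b'protocol': b'https',
            b'uris': [b'https://cdn.example.com/'],
        })
        return targets

    def extsetup(ui):
        extensions.wrapfunction(wireprotov2server,
                                'getadvertisedredirecttargets',
                                _redirecttargets)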
Wed, 26 Sep 2018 18:02:06 -0700 wireprotov2: define semantics for content redirects
Gregory Szorc <gregory.szorc@gmail.com> [Wed, 26 Sep 2018 18:02:06 -0700] rev 40022
wireprotov2: define semantics for content redirects

When I implemented the clonebundles feature and deployed it on
hg.mozilla.org using Amazon S3 as a content server, server-side CPU and
bandwidth usage dropped off a cliff and a ton of server scaling headaches
went away pretty much the instant clients with support for clonebundles
were rolled out to Firefox CI.

An obvious takeaway from that experience was that offloading server load
to scalable file servers - potentially backed by a CDN - is a really good
idea. Another takeaway was that Mercurial's wire protocol wasn't in a good
position to support data offload generally. In wire protocol version 1,
there isn't a mechanism in the protocol to say "grab the data from over
here instead." For HTTP, we could teach the client to follow HTTP
redirects. Or we could invent a media type that encoded redirects inline.
But for SSH, we were pretty much out of luck because that protocol wasn't
very flexible.

Wire protocol version 2 offers the opportunity to do something better.

The recent generic server-side content caching layer in the wire protocol
version 2 server demonstrated that it is possible to have drop-in caching
of responses to command requests. This by itself adds tons of value and
already makes the built-in server much more scalable. But I don't want to
stop there.

The existing server-side caching implementation has a big weakness: it
requires the server to send data to the client. This means that the
Mercurial server is potentially sending gigabytes of data to thousands of
clients. This is problematic because compared to scaling static file
servers, scaling dynamic servers is *hard*.

A solution to this is to "offload" serving of content to something that
isn't the Mercurial server. By offloading content serving, you turn the
Mercurial server from a centralized monolithic service to a distributed
mostly-indexing service. Assuming high rates of content offload, this
should drastically reduce the total work performed by the Mercurial
server, both in terms of CPU and data transfer. This will make Mercurial
servers vastly easier to scale.

This commit defines the semantics for "content redirects" in wire protocol
version 2. Essentially:

* Servers advertise the set of locations a response could be served from.
* When making requests, clients advertise the set of locations they are
  willing to fetch content from.
* Servers can then replace the inline response with one that says "get
  the response from over here instead."

This feature - when fully implemented - will allow extending the
server-side caching layer to facilitate such things as integrating your
server-side cache with a scalable blob store (such as S3 or a CDN) and
offloading most data transfer to that external service.

This feature could also be leveraged for load balancing. e.g. requests
could come into a central server and then get redirected to an available
mirror depending on server availability or locality. There's tons of
potential :)

Differential Revision: https://phab.mercurial-scm.org/D4774
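To make that three-step exchange concrete, here is a rough sketch of the
data shapes involved, written as Python dictionaries. The field names
(b'redirect', b'targets', b'hashes', b'location', and so on) follow the
spirit of this series but are illustrative, not the normative wire format:

    # 1) Server capabilities advertise where responses may be served from.
    capabilities = {
        b'redirect': {
            b'targets': [
                {
                    b'name': b'cdn',
                    b'protocol': b'https',
                    b'uris': [b'https://cdn.example.com/'],
                },
            ],
            b'hashes': [b'sha256', b'sha1'],
        },
    }

    # 2) A command request names the targets the client is willing to
    #    fetch content from.
    request = {
        b'name': b'filedata',
        b'args': {},  # command arguments elided
        b'redirect': {
            b'targets': [b'cdn'],
            b'hashes': [b'sha256'],
        },
    }

    # 3) Instead of sending the data inline, the server may answer with a
    #    redirect telling the client where to fetch the real response and
    #    how to verify it.
    response = {
        b'status': b'redirect',
        b'location': b'https://cdn.example.com/objects/<cachekey>',
        b'mediatype': b'application/mercurial-cbor',
        b'fullhashes': [(b'sha256', b'<digest of the full response>')],
    }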
Wed, 26 Sep 2018 17:16:56 -0700 wireprotov2: support response caching
Gregory Szorc <gregory.szorc@gmail.com> [Wed, 26 Sep 2018 17:16:56 -0700] rev 40021
wireprotov2: support response caching

One of the things I've learned from managing VCS servers over the years is
that they are hard to scale. It is well known that some companies have
very beefy (read: very expensive) servers to power their VCS needs. It is
also known that specialized servers for various VCS exist in order to
facilitate scaling servers. (Mercurial is in this boat.)

One of the aspects that make a VCS server hard to scale is the high CPU
load incurred by constant client clone/pull operations. To alleviate the
scaling pain associated with data retrieval operations, I want to
integrate caching into the Mercurial wire protocol server as robustly as
possible such that servers can aggressively cache responses and defer as
much server load as possible.

This commit represents the initial implementation of a general caching
layer in wire protocol version 2. We define a new interface and behavior
for a wire protocol cacher in repository.py. (This is probably where a
reviewer should look first to understand what is going on.) The bulk of
the added code is in wireprotov2server.py, where we define how a command
can opt in to being cached and integrate caching into command dispatching.

From a very high level:

* A command can declare itself as cacheable by providing a callable that
  can be used to derive a cache key.
* At dispatch time, if a command is cacheable, we attempt to construct a
  cacher and use it for serving the request and/or caching the request.
* The dispatch layer handles the bulk of the business logic for caching,
  making cachers mostly "dumb content stores."
* The mechanism for invalidating cached entries (one of the harder parts
  about caching in general) is by varying the cache key when state
  changes. As such, cachers don't need to be concerned with cache
  invalidation.

Initially, we've hooked up support for caching "manifestdata" and
"filedata" commands. These are the simplest to cache, as they should be
immutable over time. Caching of commands related to changeset data is a
bit harder (because cache validation is impacted by changes to bookmarks,
phases, etc). This will be implemented later. (Strictly speaking,
censoring a file should invalidate caches. I've added an inline TODO to
track this edge case.)

To prove it works, this commit implements a test-only extension providing
in-memory caching backed by an lrucachedict. A new test showing this
extension behaving properly is added. FWIW, the cacher is ~50 lines of
code, demonstrating the relative ease with which a cache can be added to a
server.

While the test cacher is not suitable for production workloads, just for
kicks I performed a clone of just the changeset and manifest data for the
mozilla-unified repository. With a fully warmed cache (of just the
manifest data since changeset data is not cached), server-side CPU usage
dropped from ~73s to ~28s. That's pretty significant and demonstrates the
potential that response caching has on server scalability!

Differential Revision: https://phab.mercurial-scm.org/D4773
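For flavor, here is a rough, standalone sketch of the kind of "dumb
content store" the test cacher is. The context-manager shape and the
method names (setcachekey, lookup, onobject, onfinished) mirror the
interface described above as I understand it, but should be read as
illustrative rather than the exact interface defined in repository.py:

    import hashlib

    class memorycacher(object):
        """Illustrative in-memory cacher, not the real test extension."""

        # Shared across requests so entries survive between commands.
        _cache = {}

        def __init__(self):
            self.key = None
            self.buffered = []

        def __enter__(self):
            return self

        def __exit__(self, exctype, excvalue, exctb):
            # On error, throw away anything buffered for this request.
            if exctype:
                self.buffered = None

        def setcachekey(self, key):
            # The dispatch layer derives ``key`` from the command name,
            # arguments, and repository state; varying the key is how
            # invalidation happens, so the cacher stays dumb.
            self.key = hashlib.sha1(key).hexdigest()
            return True

        def lookup(self):
            # Return cached objects for this key, or None on a miss.
            objs = self._cache.get(self.key)
            return {'objs': objs} if objs is not None else None

        def onobject(self, obj):
            # Called for each emitted object: buffer it, pass it through.
            self.buffered.append(obj)
            yield obj

        def onfinished(self):
            # Request finished cleanly: persist what we buffered. Return
            # any extra objects to emit at the end (none here).
            if self.buffered is not None:
                self._cache[self.key] = self.buffered
            return []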
Wed, 26 Sep 2018 17:16:27 -0700 wireprotov2: define type to represent pre-encoded object
Gregory Szorc <gregory.szorc@gmail.com> [Wed, 26 Sep 2018 17:16:27 -0700] rev 40020
wireprotov2: define type to represent pre-encoded object

An upcoming commit will introduce a caching layer to command serving. This
will require the ability to cache pre-encoded data. This commit introduces
a type to represent pre-encoded data and teaches the output layer to not
CBOR encode an instance of that type.

Differential Revision: https://phab.mercurial-scm.org/D4772
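A tiny sketch of the idea: wrap already-encoded bytes in a marker type so
the output layer can pass them through untouched. The class and attribute
names here are illustrative, not necessarily what the commit uses:

    import attr

    @attr.s(slots=True)
    class encodedresponse(object):
        """Data that is already CBOR encoded and must not be re-encoded."""
        data = attr.ib(type=bytes)

    def emit(obj, encode):
        # The output layer passes pre-encoded payloads (e.g. data replayed
        # from a cache) straight through and encodes everything else.
        if isinstance(obj, encodedresponse):
            return obj.data
        return encode(obj)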
Wed, 26 Sep 2018 15:53:49 -0700 wireprotov2: change name and behavior of readframe()
Gregory Szorc <gregory.szorc@gmail.com> [Wed, 26 Sep 2018 15:53:49 -0700] rev 40019
wireprotov2: change name and behavior of readframe()

In the near future, we will want to support performing I/O from other
sources. Let's rename readframe() to readdata() and tweak its logic to
support future growth.

Differential Revision: https://phab.mercurial-scm.org/D4771
Wed, 26 Sep 2018 16:07:59 -0700 url: move _wraphttpresponse() from httppeer
Gregory Szorc <gregory.szorc@gmail.com> [Wed, 26 Sep 2018 16:07:59 -0700] rev 40018
url: move _wraphttpresponse() from httppeer

This is a generally useful function. Having it in the url module will make
it more accessible outside of the HTTP peers.

Differential Revision: https://phab.mercurial-scm.org/D4770
Wed, 26 Sep 2018 14:54:15 -0700 debugcommands: print all CBOR objects
Gregory Szorc <gregory.szorc@gmail.com> [Wed, 26 Sep 2018 14:54:15 -0700] rev 40017
debugcommands: print all CBOR objects

application/mercurial-cbor may contain multiple objects. Let's print all
of them.

Differential Revision: https://phab.mercurial-scm.org/D4769
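The fix amounts to decoding every CBOR value in the payload rather than
just the first. A rough sketch of the pattern, assuming a decodeall-style
helper in Mercurial's cborutil (the exact helper used by the debug command
may differ):

    from mercurial.utils import cborutil

    def printcborobjects(body):
        # application/mercurial-cbor payloads may concatenate several CBOR
        # values; decode and print each of them, not only the first one.
        for obj in cborutil.decodeall(body):
            print('cbor> %r' % (obj,))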
Wed, 03 Oct 2018 22:48:19 +0900 help: document "export" template keywords
Yuya Nishihara <yuya@tcha.org> [Wed, 03 Oct 2018 22:48:19 +0900] rev 40016
help: document "export" template keywords
Wed, 03 Oct 2018 22:43:57 +0900 help: document "config" template keywords
Yuya Nishihara <yuya@tcha.org> [Wed, 03 Oct 2018 22:43:57 +0900] rev 40015
help: document "config" template keywords
Wed, 03 Oct 2018 22:34:18 +0900 help: document "cat" template keywords
Yuya Nishihara <yuya@tcha.org> [Wed, 03 Oct 2018 22:34:18 +0900] rev 40014
help: document "cat" template keywords