Tue, 11 Oct 2016 20:39:47 -0300 i18n-pt_BR: synchronized with 149433e68974 stable
Wagner Bruna <wbruna@softwareexpress.com.br> [Tue, 11 Oct 2016 20:39:47 -0300] rev 30213
i18n-pt_BR: synchronized with 149433e68974
Sun, 16 Oct 2016 13:35:23 -0700 changegroup: increase write buffer size to 128k
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 16 Oct 2016 13:35:23 -0700] rev 30212
changegroup: increase write buffer size to 128k

By default, Python defers to the operating system for choosing the default buffer size on opened files. On my Linux machine, the default is 4k, which is really small for 2016. This patch bumps the write buffer size when writing changegroups/bundles to 128k. This matches the 128k read buffer we already use on revlogs.

It's worth noting that this only applies when writing to an explicit file (such as during `hg bundle`). Buffers when writing to bundle files via the repo vfs or to a temporary file are not impacted.

When producing a none-v2 bundle file of the mozilla-unified repository, this change caused the number of write() system calls to drop from 952,449 to 29,788. After this change, the most frequent system calls are fstat(), read(), lseek(), and open(). There were 2,523,672 system calls after this patch, so a net decrease of ~950k is significant.

This change shows no performance change on my system. But I have a high-end system with a fast SSD. It is quite possible this change will have a significant impact on network file systems, where extra network round trips due to excessive I/O system calls could introduce significant latency.
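
[Editor's note] A minimal sketch of the mechanism involved, not Mercurial's actual code (the file name and chunk size here are made up): Python's built-in open() accepts an explicit buffer size, so many small writes are batched into far fewer write() system calls.

    CHUNK = b'\x00' * 1024  # stand-in for a small changegroup chunk

    # 131072 bytes = 128k: writes below this size accumulate in a userspace
    # buffer and reach the OS in large batches instead of one write()
    # syscall per .write() call.
    with open('bundle.hg', 'wb', 131072) as fh:
        for _ in range(1000):
            fh.write(CHUNK)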
Fri, 14 Oct 2016 01:31:11 +0200 changegroup: skip deltas when the underlying revlog does not use them
Pierre-Yves David <pierre-yves.david@ens-lyon.org> [Fri, 14 Oct 2016 01:31:11 +0200] rev 30211
changegroup: skip deltas when the underlying revlog does not use them

Revlogs can now be configured to store full snapshots only. This is used for the changelog. However, the changegroup packing code was still recomputing deltas to be sent over the wire. We now just reuse the full snapshot directly in this case, skipping delta computation. This provides us with a large speedup (-30%):

# perfchangegroupchangelog on mercurial
! wall 2.010326 comb 2.020000 user 2.000000 sys 0.020000 (best of 5)
! wall 1.382039 comb 1.380000 user 1.370000 sys 0.010000 (best of 8)

# perfchangegroupchangelog on pypy
! wall 5.792589 comb 5.780000 user 5.780000 sys 0.000000 (best of 3)
! wall 3.911158 comb 3.920000 user 3.900000 sys 0.020000 (best of 3)

# perfchangegroupchangelog on mozilla-central
! wall 20.683727 comb 20.680000 user 20.630000 sys 0.050000 (best of 3)
! wall 14.190204 comb 14.190000 user 14.150000 sys 0.040000 (best of 3)

Many tests had to be updated because of the change in bundle content. All these updates have been verified. Because deltifying the changelog was not very valuable, the resulting bundles have a similar size (often a bit smaller):

# full bundle of mozilla-central
with delta:    1142740533 bytes
without delta: 1142173300 bytes

So this is a win across the board.
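
[Editor's note] A hedged sketch of the short-circuit described above (illustrative names, not the real changegroup code): when the revlog stores full snapshots only, emit the stored fulltext instead of computing a delta.

    def chunkpayload(rlog, rev, prev):
        """Return the payload to send over the wire for `rev`.

        `rlog` is assumed to expose `storedeltachains` (see the next
        changeset), a `revision(rev)` accessor and a `revdiff(a, b)`
        delta helper.
        """
        if not rlog.storedeltachains:
            # Full-snapshot revlogs (the changelog here): reuse the stored
            # fulltext directly, skipping delta computation entirely.
            return rlog.revision(rev)
        # Otherwise compute a delta against the previous revision as before.
        return rlog.revdiff(prev, rev)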
Fri, 14 Oct 2016 02:25:08 +0200 revlog: make 'storedeltachains' a "public" attribute
Pierre-Yves David <pierre-yves.david@ens-lyon.org> [Fri, 14 Oct 2016 02:25:08 +0200] rev 30210
revlog: make 'storedeltachains' a "public" attribute

The next changeset will have the changegroup packer read this attribute, so we make it "public" beforehand.
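
[Editor's note] This is just the usual Python naming convention at work; an illustration with the class reduced to the relevant line (not the real revlog):

    class revlog(object):
        def __init__(self):
            # Before: self._storedeltachains -- the leading underscore marks
            # the attribute as private to the class by convention.
            # After: no underscore, so external code such as the changegroup
            # packer may legitimately read it.
            self.storedeltachains = True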
Mon, 17 Oct 2016 22:51:22 -0700 manifest: don't store None in fulltextcache
Martin von Zweigbergk <martinvonz@google.com> [Mon, 17 Oct 2016 22:51:22 -0700] rev 30209
manifest: don't store None in fulltextcache

When we read a value from fulltextcache, we expect it to be an array, so we should not store None in it. Found while working on narrowhg.
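
[Editor's note] A minimal sketch of the invariant (hypothetical helper, not the manifest code itself): readers of the cache treat the value as an array, so a missing result must never be cached as None.

    fulltextcache = {}  # node -> bytearray, per the expectation above

    def setcached(node, text):
        if text is None:
            # Storing None would crash a later reader that slices or
            # mutates the cached value as an array; skip caching instead.
            return
        fulltextcache[node] = bytearray(text)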
Tue, 18 Oct 2016 02:09:08 +0200 copies: improve assertions during copy recombination
Gábor Stefanik <gabor.stefanik@nng.com> [Tue, 18 Oct 2016 02:09:08 +0200] rev 30208
copies: improve assertions during copy recombination

- Make sure there is nothing to recombine in non-graftlike scenarios
- More pythonic assert syntax (see the sketch below)
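
[Editor's note] On the second point, a sketch of the assert idiom in question (the condition and message here are invented): parenthesizing an assert together with its message builds a non-empty tuple, which is always truthy, so the check can silently never fire.

    ok = True  # flip to False to see the idiomatic form actually fail

    # Broken: asserts a 2-tuple, which is truthy, so this never fails:
    #   assert (ok, "nothing to recombine outside graftlike merges")

    # Pythonic: condition first, message as the second operand:
    assert ok, "nothing to recombine outside graftlike merges"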
Mon, 17 Oct 2016 16:12:12 -0700 treemanifest: fix bad argument order to treemanifestctx
Martin von Zweigbergk <martinvonz@google.com> [Mon, 17 Oct 2016 16:12:12 -0700] rev 30207
treemanifest: fix bad argument order to treemanifestctx

Found by running tests with _treeinmem (both of them) modified to be True.
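
[Editor's note] On the failure mode, with hypothetical signatures (not the real class): swapped positional arguments are accepted without error and only break later, which is why keyword arguments are the defensive way to call such constructors.

    class treemanifestctx(object):
        def __init__(self, repo, dir, node):
            self._repo, self._dir, self._node = repo, dir, node

    # treemanifestctx(repo, node, dir)   # swapped: accepted, breaks later
    # Keyword arguments make the order irrelevant:
    #   treemanifestctx(repo=repo, dir=dir, node=node)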
Sun, 16 Oct 2016 11:10:21 -0700 wireproto: compress data from a generator
Gregory Szorc <gregory.szorc@gmail.com> [Sun, 16 Oct 2016 11:10:21 -0700] rev 30206
wireproto: compress data from a generator

Currently, the "getbundle" wire protocol command obtains a generator of data, converts it to a util.chunkbuffer, then converts it back to a generator via the protocol's groupchunks() implementation. For the SSH protocol, groupchunks() simply reads 4kb chunks then write()s the data to a file descriptor. For the HTTP protocol, groupchunks() reads 32kb chunks, feeds those into a zlib compressor, emits compressed data as it is available, and that is sent to the WSGI layer, where it is likely turned into HTTP chunked transfer chunks as is or further buffered and turned into a larger chunk.

For both the SSH and HTTP protocols, there is inefficiency from using util.chunkbuffer. For SSH, emitting consistent 4kb chunks sounds nice. However, the file descriptor it is writing to is almost certainly buffered. That means that a Python .write() probably doesn't translate into exactly what is written to the I/O layer. For HTTP, we're going through an intermediate layer to zlib compress data. So all util.chunkbuffer is doing is ensuring that the chunks we feed into the zlib compressor are of uniform size. This means more CPU time in Python buffering and emitting chunks in util.chunkbuffer but fewer function calls to zlib.

This patch introduces and implements a new wire protocol abstract method: compresschunks(). It is like groupchunks() except it operates on a generator instead of something with a .read(). The SSH implementation simply proxies chunks. The HTTP implementation uses zlib compression. To avoid duplicate code, the HTTP groupchunks() has been reimplemented in terms of compresschunks().

To prove this all works, the "getbundle" wire protocol command has been switched to compresschunks(). This removes the util.chunkbuffer from that command. Now, data essentially streams straight from the changegroup emitter to the wire, possibly through a zlib compressor. Generators all the way, baby.

There were slim to no performance changes on the server as measured with the mozilla-central repository. This is likely because CPU time is dominated by reading revlogs, producing the changegroup, and zlib compressing the output stream. Still, this brings us a little closer to our ideal of using generators everywhere.
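
[Editor's note] A sketch of the compresschunks() idea under the assumptions above (illustrative names, not the actual wireproto interface): the HTTP variant streams every incoming chunk through one zlib compressor, and the SSH variant just passes chunks through.

    import zlib

    def compresschunks_http(chunks):
        """Yield zlib-compressed data for an iterable of byte chunks."""
        z = zlib.compressobj()
        for chunk in chunks:
            data = z.compress(chunk)
            if data:  # the compressor may buffer input before emitting
                yield data
        yield z.flush()

    def compresschunks_ssh(chunks):
        """SSH needs no compression layer: proxy the generator as-is."""
        for chunk in chunks:
            yield chunk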
Mon, 17 Oct 2016 19:48:36 +0200 revset: optimize for destination() being "inefficient"
Mads Kiilerich <madski@unity3d.com> [Mon, 17 Oct 2016 19:48:36 +0200] rev 30205
revset: optimize for destination() being "inefficient"

destination() will scan through the whole subset and read extras for each revision to get its source.
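
[Editor's note] A hedged sketch of the cost being worked around (not the real revset code, though changectx.extra() is a real accessor): evaluating destination() touches every revision in the subset.

    def destinations(repo, subset):
        """Yield revs in `subset` recorded as grafted/rebased from somewhere."""
        for rev in subset:
            extra = repo[rev].extra()  # one changelog read per revision
            if 'source' in extra:      # graft/rebase source, when recorded
                yield rev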
Tue, 11 Oct 2016 04:39:47 +0200 copies: make _checkcopies handle copy sequences spanning the TCA (issue4028)
Gábor Stefanik <gabor.stefanik@nng.com> [Tue, 11 Oct 2016 04:39:47 +0200] rev 30204
copies: make _checkcopies handle copy sequences spanning the TCA (issue4028)

When working in a rotated DAG (for a graftlike merge), there can be files that are renamed both between the base and the topological CA, and between the TCA and the endpoint farther from the base. Such renames span the TCA (and thus need both passes of _checkcopies to be fully detected), but may not necessarily be divergent.

Make _checkcopies return "incomplete copies" and "incomplete divergences" in this case, and let mergecopies recombine them once data from both passes of _checkcopies is available.

With this patch, all known cases involving renames and grafts pass. (Developed together with Pierre-Yves David)
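
[Editor's note] A very rough sketch of the recombination step (all names hypothetical, not the mergecopies internals): a copy traced from the endpoint down to the TCA by one pass is completed by a copy traced from the TCA to the base by the other pass, chaining through the file's name at the TCA.

    def recombine(incompletecopies, incompletediverge):
        """Chain renames that span the topological common ancestor.

        incompletecopies: {endpoint name: name at the TCA} from one pass
        incompletediverge: {name at the TCA: base name} from the other
        Returns {endpoint name: base name} for the fully resolved copies.
        """
        copies = {}
        for dst, via in incompletecopies.items():
            if via in incompletediverge:
                copies[dst] = incompletediverge[via]
        return copies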