Sat, 19 Nov 2016 15:41:37 -0800 conflicts: make spacing consistent in conflict markers
Kostia Balytskyi <ikostia@fb.com> [Sat, 19 Nov 2016 15:41:37 -0800] rev 30460
conflicts: make spacing consistent in conflict markers The way the default marker template was defined before this patch, the spacing before the dash in conflict markers depended on whether the changeset is the tip or not. This is the relevant part of the template: '{ifeq(tags, "tip", "", "{tags} ")}'. If the revision is the tip revision with no other tags, this resolves to an empty string, but for revisions which are not tip and have no other tags, it resolves to a single space. This causes weirdnesses like the ones visible in the affected tests. It is not a big deal, but the double spacing may be visually less pleasant. Please note that the test changes where commit hashes change are the result of marking files as resolved without removing markers.
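A minimal, self-contained Python sketch (illustrative only, not the actual patch) of why the old template fragment yields a double space before the dash for untagged non-tip revisions:

    def old_tags_fragment(tags):
        # mimics '{ifeq(tags, "tip", "", "{tags} ")}': empty string for tip,
        # otherwise the tags followed by a trailing space
        return "" if tags == "tip" else tags + " "

    def marker(node, tags, author, desc):
        # simplified stand-in for the default conflict marker template
        return "%s %s- %s: %s" % (node, old_tags_fragment(tags), author, desc)

    print(marker("1234abcd", "tip", "alice", "tip revision"))
    # 1234abcd - alice: tip revision        (single space before '-')
    print(marker("5678ef01", "", "bob", "untagged revision"))
    # 5678ef01  - bob: untagged revision    (double space before '-')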
Thu, 10 Nov 2016 09:21:41 -0800 rebase: move bookmark update to before rebase clearing
Durham Goode <durham@fb.com> [Thu, 10 Nov 2016 09:21:41 -0800] rev 30459
rebase: move bookmark update to before rebase clearing Bookmark fixing should probably happen before the rebase starts to clean up, so let's move it before clearrebased. This will also help a future patch where we want to add more clearing logic to the existing clear section.
Fri, 28 Oct 2016 17:44:28 +0200 setup: include a dummy $PATH in the custom environment used by build.py
Gábor Stefanik <gabor.stefanik@nng.com> [Fri, 28 Oct 2016 17:44:28 +0200] rev 30458
setup: include a dummy $PATH in the custom environment used by build.py This is required for building with pypiwin32, the pip-installable replacement for pywin32.
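A hedged sketch of the idea (hypothetical, not the actual setup.py hunk): when the build helper is spawned with a hand-built environment, a PATH entry is included, even if it is only a passthrough/dummy value, because building with pypiwin32 requires PATH to be present:

    import os
    import subprocess

    env = {'PATH': os.environ.get('PATH', '')}  # dummy/passthrough PATH entry
    # hypothetical invocation of the build helper with the custom environment
    subprocess.check_call(['python', 'build.py'], env=env)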
Fri, 11 Nov 2016 07:01:27 -0800 shelve: move unshelve-finishing logic to a separate function
Kostia Balytskyi <ikostia@fb.com> [Fri, 11 Nov 2016 07:01:27 -0800] rev 30457
shelve: move unshelve-finishing logic to a separate function

Finishing unshelve now involves two steps:
- stripping the changelog
- aborting the transaction

Obs-based shelve will not require these things, so isolating this logic into a separate function, where the normal/obs-shelve branching is going to be implemented, seems like a nice idea. Behavior-wise, this change moves 'unshelvecleanup' from between changelog stripping and transaction abortion to after them. I don't think this has any negative effects.
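A rough sketch of the kind of helper this extracts (hypothetical names and signatures; not the actual mercurial/shelve.py code):

    def _finishunshelve(repo, oldtiprev, tr):
        # strip the changelog back to the tip that existed before unshelve
        # restored the shelved commit (assumes revlog.strip(minlink, tr))
        repo.unfiltered().changelog.strip(oldtiprev, tr)
        # the transaction only holds throwaway state, so abort it; the
        # obs-based variant would branch here instead of stripping
        tr.abort()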
Thu, 10 Nov 2016 11:02:39 -0800 shelve: move file-forgetting logic to a separate function
Kostia Balytskyi <ikostia@fb.com> [Thu, 10 Nov 2016 11:02:39 -0800] rev 30456
shelve: move file-forgetting logic to a separate function This is just a readability improvement.
Thu, 10 Nov 2016 10:57:10 -0800 shelve: move rebasing logic to a separate function
Kostia Balytskyi <ikostia@fb.com> [Thu, 10 Nov 2016 10:57:10 -0800] rev 30455
shelve: move rebasing logic to a separate function

Rebasing the restored shelved commit onto the right destination is done differently in traditional and obs-based unshelve:
- for traditional unshelve, we just rebase it
- for obs-based unshelve, we need to check whether a successor of the restored commit already exists in the destination (this might happen when unshelving twice onto the same destination)

This is why this piece of logic should live in its own function: to avoid excessive complexity in the main function.
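A hypothetical sketch of the branch point this function isolates (names are illustrative and the strategy callbacks are passed in to keep the sketch self-contained; this is not the actual mercurial/shelve.py code):

    def _rebaserestoredcommit(repo, shelvectx, destctx, obsshelve=False,
                              findsuccessor=None, dorebase=None):
        # traditional unshelve: always rebase the restored commit onto the
        # destination; obs-based unshelve: a successor of shelvectx may
        # already exist on the destination (e.g. after unshelving twice onto
        # the same parent), in which case it is reused instead of rebasing
        if obsshelve and findsuccessor is not None:
            successor = findsuccessor(repo, shelvectx, destctx)
            if successor is not None:
                return successor
        return dorebase(repo, shelvectx, destctx)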
Thu, 10 Nov 2016 10:51:06 -0800 shelve: move commit restoration logic to a separate function
Kostia Balytskyi <ikostia@fb.com> [Thu, 10 Nov 2016 10:51:06 -0800] rev 30454
shelve: move commit restoration logic to a separate function
Sun, 13 Nov 2016 03:35:52 -0800 shelve: move temporary commit creation to a separate function
Kostia Balytskyi <ikostia@fb.com> [Sun, 13 Nov 2016 03:35:52 -0800] rev 30453
shelve: move temporary commit creation to a separate function Committing working copy changes before rebasing a shelved commit on top of them is an independent piece of behavior that fits into its own function. As with the previous series, this patch and a couple of the following ones are part of the unshelve refactoring.
Thu, 17 Nov 2016 20:30:00 -0800 commands: print chunk type in debugrevlog
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 17 Nov 2016 20:30:00 -0800] rev 30452
commands: print chunk type in debugrevlog Each data entry ("chunk") in a revlog has a type based on the first byte of the data. This type indicates how to interpret the data. This seems like a useful thing to be able to query through a debug command. So let's add that to `hg debugrevlog`. This does make `hg debugrevlog` slightly slower, as it has to read more than just the index. However, even on the mozilla-unified manifest (which is ~200MB spread over ~350K revisions), this takes <400ms.
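Because the chunk type is literally the first byte of the stored data, the tally is cheap once the chunks have been read. A self-contained illustration (not the debugrevlog code itself): zlib-compressed chunks start with 'x' (0x78), chunks stored uncompressed are written with a leading 'u', and an empty entry is counted here as '\0':

    import zlib
    from collections import Counter

    compressed = zlib.compress(b'some revision data' * 10)
    print(compressed[:1])              # b'x' -> zlib-compressed chunk

    chunks = [compressed, b'u' + b'raw revision data', b'']
    types = Counter(c[:1] if c else b'\0' for c in chunks)
    print(dict(types))                 # {b'x': 1, b'u': 1, b'\x00': 1}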
Thu, 17 Nov 2016 20:17:51 -0800 perf: add command for measuring revlog chunk operations
Gregory Szorc <gregory.szorc@gmail.com> [Thu, 17 Nov 2016 20:17:51 -0800] rev 30451
perf: add command for measuring revlog chunk operations

Upcoming commits will teach revlogs to leverage the new compression engine API so that new compression formats can more easily be leveraged in revlogs. We want to be sure this refactoring doesn't regress performance. So this commit introduces "perfrevlogchunks" to explicitly test performance of reading, decompressing, and recompressing revlog chunks.

Here is output when run on the mozilla-unified repo:

$ hg perfrevlogchunks -c
! read
! wall 0.346603 comb 0.350000 user 0.340000 sys 0.010000 (best of 28)
! read w/ reused fd
! wall 0.337707 comb 0.340000 user 0.320000 sys 0.020000 (best of 30)
! read batch
! wall 0.013206 comb 0.020000 user 0.000000 sys 0.020000 (best of 221)
! read batch w/ reused fd
! wall 0.013259 comb 0.030000 user 0.010000 sys 0.020000 (best of 222)
! chunk
! wall 1.909939 comb 1.910000 user 1.900000 sys 0.010000 (best of 6)
! chunk batch
! wall 1.750677 comb 1.760000 user 1.740000 sys 0.020000 (best of 6)
! compress
! wall 5.668004 comb 5.670000 user 5.670000 sys 0.000000 (best of 3)

$ hg perfrevlogchunks -m
! read
! wall 0.365834 comb 0.370000 user 0.350000 sys 0.020000 (best of 26)
! read w/ reused fd
! wall 0.350160 comb 0.350000 user 0.320000 sys 0.030000 (best of 28)
! read batch
! wall 0.024777 comb 0.020000 user 0.000000 sys 0.020000 (best of 119)
! read batch w/ reused fd
! wall 0.024895 comb 0.030000 user 0.000000 sys 0.030000 (best of 118)
! chunk
! wall 2.514061 comb 2.520000 user 2.480000 sys 0.040000 (best of 4)
! chunk batch
! wall 2.380788 comb 2.380000 user 2.360000 sys 0.020000 (best of 5)
! compress
! wall 9.815297 comb 9.820000 user 9.820000 sys 0.000000 (best of 3)

We already see some interesting data, such as how much slower non-batched chunk reading is and that zlib compression appears to be >2x slower than decompression.

I didn't have the data when I wrote this commit message, but I ran this on Mozilla's NFS-based Mercurial server and the time for reading with a reused file descriptor was faster. So I think it is worth testing both with and without file descriptor reuse so we can make informed decisions about recycling file descriptors.
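The compress-vs-decompress asymmetry the message observes can be sanity-checked with plain zlib; a quick self-contained sketch (not the perfrevlogchunks implementation, and the ratio will vary with the input data):

    import timeit
    import zlib

    data = b'the quick brown fox jumps over the lazy dog ' * 5000
    blob = zlib.compress(data)

    t_comp = timeit.timeit(lambda: zlib.compress(data), number=200)
    t_decomp = timeit.timeit(lambda: zlib.decompress(blob), number=200)
    print('compress:   %.3fs' % t_comp)
    print('decompress: %.3fs' % t_decomp)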