Mon, 11 Mar 2019 02:35:18 +0100 updatecaches: also warm hgtagsfnodescache
Pierre-Yves David <pierre-yves.david@octobus.net> [Mon, 11 Mar 2019 02:35:18 +0100] rev 42238
updatecaches: also warm hgtagsfnodescache Now that a full update of this cache runs in a reasonable amount of time, we can warm it as well during a full cache update.
Mon, 11 Mar 2019 01:10:20 +0100 hgtagsfnodescache: inherit fnode from parent when possible
Pierre-Yves David <pierre-yves.david@octobus.net> [Mon, 11 Mar 2019 01:10:20 +0100] rev 42237
hgtagsfnodescache: inherit fnode from parent when possible If a changeset does not update the content of `.hgtags`, it will use the same file-node (for `.hgtags`) as its parents. In this case we can directly reuse the parent's file-node. We use this property when updating the `hgtagsfnodescache`, taking a faster path if we already have a cached value for the parents of the node we are looking at. Doing so provides a large performance boost when looking at a lot of fnodes, especially on repositories with a very large manifest. Timing for `tagsmod.fnoderevs(ui, repo, repo.changelog.revs())`:
  mercurial (41907 revisions, 1923 files):          before: 6.9 seconds     after: 2.7 seconds (-54%)
  pypy (96266 revisions, 5198 files):               before: 80 seconds      after: 20 seconds (-75%)
  mozilla-central (463411 revisions, 272080 files): before: 7166.4 seconds  after: 47.8 seconds (-99%, x150 speedup)
On a copy of mozilla-try with about 35K heads and 1.7M changesets, this moves the computation from many hours to a couple of minutes, making it practical to fully warm this cache before computing tags (from a cold cache). There seem to be other low-hanging performance fruits, such as avoiding the use of changectx or switching to a more revision-centric logic. However, the new code is fast enough for my needs right now.
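The fast path described above can be summarized with a short sketch. This is a minimal, hypothetical helper working on a plain dict rather than the real hgtagsfnodescache object (which also persists its entries to disk):

    from mercurial.node import nullid

    def hgtags_fnode(repo, node, cache):
        # ``cache`` is a plain {changeset node -> .hgtags file-node} dict,
        # standing in for the real hgtagsfnodescache.
        ctx = repo[node]
        if b'.hgtags' not in ctx.files():
            # .hgtags was not touched: the file-node is the same as the
            # parents', so reuse a parent's cached value when available.
            for p in ctx.parents():
                if p.node() in cache:
                    cache[node] = cache[p.node()]
                    return cache[node]
        # slow path: read the file-node from the manifest
        cache[node] = ctx.manifest().get(b'.hgtags', nullid)
        return cache[node]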
Mon, 11 Mar 2019 01:09:38 +0100 hgtagsfnodescache: handle nullid lookup
Pierre-Yves David <pierre-yves.david@octobus.net> [Mon, 11 Mar 2019 01:09:38 +0100] rev 42236
hgtagsfnodescache: handle nullid lookup The null revision is empty, so its `.hgtags` file-node is `nullid` as far as the `hgtagsfnodescache` is concerned. Dealing with `nullid` will help with the next changeset. Before this change, feeding `nullid` to `hgtagsfnodescache.getfnode` would return a wrong result (the fnode for tip).
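The guard amounts to something like the following wrapper around the lookup; `getfnode` is named in the message above, the wrapper itself is illustrative:

    from mercurial.node import nullid

    def getfnode_with_null_guard(cache, node):
        # the null revision has no .hgtags, so its file-node is nullid;
        # answer directly instead of falling through to a lookup that
        # used to resolve to tip's fnode
        if node == nullid:
            return nullid
        return cache.getfnode(node)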
Fri, 26 Apr 2019 17:39:07 +0200 help: register the 'gpg' command category and give it a description
Sietse Brouwer <sbbrouwer@gmail.com> [Fri, 26 Apr 2019 17:39:07 +0200] rev 42235
help: register the 'gpg' command category and give it a description help.py expects extensions to register their command category in the CATEGORY_ORDER and CATEGORY_NAMES variables. Once gendoc.py orders commands by category (in the next patch), it will assume this registration and raise an exception on encountering any unregistered categories. Luckily, gpg is the only bundled extension with an unregistered custom category, so let's fix it. Differential Revision: https://phab.mercurial-scm.org/D6324
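A registration along these lines is what such a patch adds to an extension; the category id and display name below are assumptions, not the actual values, while CATEGORY_ORDER and CATEGORY_NAMES are the help.py variables named above:

    from mercurial import help

    # hypothetical category id and label for illustration only
    CATEGORY_GPG = b'gpg'

    if CATEGORY_GPG not in help.CATEGORY_ORDER:
        help.CATEGORY_ORDER.append(CATEGORY_GPG)
    help.CATEGORY_NAMES[CATEGORY_GPG] = b'GPG signing'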
Thu, 25 Apr 2019 15:30:40 -0700 histedit: Speed up scrolling in patch view mode
feyu@google.com [Thu, 25 Apr 2019 15:30:40 -0700] rev 42234
histedit: Speed up scrolling in patch view mode Store patchcontents in the mode state, avoiding the expensive call to ui for computing the patch contents. Before this change, histedit's patch view mode could be very unresponsive in large repos.
Thu, 02 May 2019 16:43:34 -0700 histedit: Show file names in multiple line format
Yu Feng <rainwoodman@gmail.com> [Thu, 02 May 2019 16:43:34 -0700] rev 42233
histedit: Show file names in multiple line format
Fri, 03 May 2019 20:06:03 +0900 parser: fix crash by parsing "()" in keyword argument position stable
Yuya Nishihara <yuya@tcha.org> [Fri, 03 May 2019 20:06:03 +0900] rev 42232
parser: fix crash by parsing "()" in keyword argument position A tree node can be either None or a tuple because x=("group", None) is reduced to x[1].
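A toy illustration of the reduction described above (not the parser source); it shows why callers must check for None before indexing the node:

    # An empty group "()" parses to ("group", None), and the parser
    # simplifies ("group", x) to x, leaving a bare None where callers
    # expect a tuple.
    def simplify(tree):
        if tree is not None and tree[0] == "group":
            return tree[1]          # ("group", None) -> None
        return tree

    node = simplify(("group", None))
    if node is not None:            # guard before treating it as a tuple
        kind, rest = node[0], node[1:]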
Sat, 06 Apr 2019 17:46:19 +0200 repoview: introduce a `experimental.extra-filter-revs` config
Pierre-Yves David <pierre-yves.david@octobus.net> [Sat, 06 Apr 2019 17:46:19 +0200] rev 42231
repoview: introduce a `experimental.extra-filter-revs` config The option defines revisions to additionally filter out of all repository "views". The end goal is to provide an easy way to serve multiple subsets of the same repository using multiple "shares". The simplest use case of this feature is to have one view serving the public changesets and another view also serving the drafts. This is currently achievable using the new `server.view` option introduced recently by Joerg Sonnenberger. However, more advanced use cases need more advanced definitions. For example, some need a view dedicated to certain release branches, or a view that hides security fixes yet to be released. Joerg Sonnenberger and I discussed this topic at the recent mini-sprint and both of us have seen real-life use cases for this. (This series got written during the same mini-sprint.) The feature is fully functional and uses a similar cache-fallback mechanism to ensure decent performance. However, there is remaining room to ensure each share's caches and hooks collaborate with each other. This will come at a later time, once users start to actually test this feature on real use cases.
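A minimal sketch, assuming the names below, of how such an option could be resolved into a set of extra-hidden revisions; this is not the actual repoview implementation, which also derives a stable filter name from the configured revs:

    from mercurial import scmutil

    def extra_filtered_revs(repo):
        # read the revset strings configured under
        # experimental.extra-filter-revs and resolve them to the set of
        # revisions to hide from every view of this repository
        specs = repo.ui.configlist(b'experimental', b'extra-filter-revs')
        if not specs:
            return frozenset()
        return frozenset(scmutil.revrange(repo, specs))

For instance, configuring the option to `not public()` in one share should leave that share serving only the public changesets.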
Wed, 17 Apr 2019 23:10:29 -0700 copies: filter out copies from non-existent source later in _chain()
Martin von Zweigbergk <martinvonz@google.com> [Wed, 17 Apr 2019 23:10:29 -0700] rev 42230
copies: filter out copies from non-existent source later in _chain() _changesetforwardcopies() repeatedly calls _chain(). That is very expensive because _chain() does lookups in the manifest. I hope to split up the function in two parts: 1) simple chaining, not considering end points, and 2) filtering out files that don't exist in the end points (and ping-pong copies/renames). This patch gets us closer to that by moving the check for a non-existent source later in the function. Now there are no more checks for "src" and "dst" in the first loop; all the filtering of invalid copies is done in the second loop. The code also looks much more consistent now. No measurable impact on `hg debugpathcopies 4.0 4.8`. That shouldn't be surprising since the only case where we now do more checks is chained copies/renames, which are quite rare in practice. Differential Revision: https://phab.mercurial-scm.org/D6277
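The two-phase structure described above can be illustrated with a small sketch; the names are hypothetical and it does not mirror the real _chain() signature:

    def chain_copies(first, second, srcfiles):
        # ``first`` and ``second`` map destination -> source for two
        # consecutive steps; ``srcfiles`` is the set of files present in
        # the starting endpoint.
        # phase 1: plain chaining, no validity checks yet
        chained = dict(first)
        for dst, src in second.items():
            chained[dst] = first.get(src, src)
        # phase 2: filter out invalid copies in one place
        result = {}
        for dst, src in chained.items():
            if src == dst:
                continue           # ping-pong rename back onto itself
            if src not in srcfiles:
                continue           # copy source missing from the endpoint
            result[dst] = src
        return result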
Thu, 18 Apr 2019 00:12:56 -0700 copies: clarify mutually exclusive cases in _chain() with a s/if/elif/
Martin von Zweigbergk <martinvonz@google.com> [Thu, 18 Apr 2019 00:12:56 -0700] rev 42229
copies: clarify mutually exclusive cases in _chain() with a s/if/elif/ If the 'b' dict has a rename from 'x' to 'y', it shouldn't be possible for 'x' to be both (a key) in 'a' and in 'src'. That would mean that 'x' is a file in the source commit and also a rename destination in the intermediate commit. But we currently don't allow renaming files onto existing files, so that shouldn't happen. So let's clarify that by using an "elif" instead of an "if". And if we did allow renaming files onto existing files, we should prefer to use the rename destination in the intermediate commit as source anyway. Differential Revision: https://phab.mercurial-scm.org/D6276
Thu, 18 Apr 2019 00:05:05 -0700 copies: delete a redundant cleanup step in _chain()
Martin von Zweigbergk <martinvonz@google.com> [Thu, 18 Apr 2019 00:05:05 -0700] rev 42228
copies: delete a redundant cleanup step in _chain() The check is redundant since d5edb5d3a337 (copies: filter out copies when target is not in destination manifest, 2019-02-14). To test that hypothesis, I made this change in the commit right after that one, but all tests still passed. I think the check was necessary before then; we just didn't have tests for it. Differential Revision: https://phab.mercurial-scm.org/D6275
Wed, 17 Apr 2019 23:10:14 -0700 copies: document cases in _chain()
Martin von Zweigbergk <martinvonz@google.com> [Wed, 17 Apr 2019 23:10:14 -0700] rev 42227
copies: document cases in _chain() Differential Revision: https://phab.mercurial-scm.org/D6274
Wed, 17 Apr 2019 14:44:18 -0700 copies: ignore heuristics copytracing when using changeset-centric algos
Martin von Zweigbergk <martinvonz@google.com> [Wed, 17 Apr 2019 14:44:18 -0700] rev 42226
copies: ignore heuristics copytracing when using changeset-centric algos Differential Revision: https://phab.mercurial-scm.org/D6269
Wed, 17 Apr 2019 14:42:23 -0700 copies: move check for experimental.copytrace==<falsy> earlier
Martin von Zweigbergk <martinvonz@google.com> [Wed, 17 Apr 2019 14:42:23 -0700] rev 42225
copies: move check for experimental.copytrace==<falsy> earlier I'm going to ignore experimental.copytrace when changeset-centric algorithms are required. This little refactoring makes that easier to add. Differential Revision: https://phab.mercurial-scm.org/D6268
Wed, 17 Apr 2019 14:11:54 -0700 copies: replace .items() by .values() where appropriate
Martin von Zweigbergk <martinvonz@google.com> [Wed, 17 Apr 2019 14:11:54 -0700] rev 42224
copies: replace .items() by .values() where appropriate As pointed out by Pierre-Yves. Differential Revision: https://phab.mercurial-scm.org/D6266
Fri, 12 Apr 2019 10:44:37 -0700 copies: inline _computenonoverlap() in mergecopies()
Martin von Zweigbergk <martinvonz@google.com> [Fri, 12 Apr 2019 10:44:37 -0700] rev 42223
copies: inline _computenonoverlap() in mergecopies() We now call pathcopies() from the base to each of the commits, and that calls _computeforwardmissing(), which does file prefetching (in the remotefilelog override). So the call to _computenonoverlap() is now pointless (the sets of files from _computenonoverlap() are subsets of the sets of files from _computeforwardmissing()). This somehow also fixes a broken remotefilelog test. Differential Revision: https://phab.mercurial-scm.org/D6256
Thu, 11 Apr 2019 23:22:54 -0700 copies: calculate mergecopies() based on pathcopies()
Martin von Zweigbergk <martinvonz@google.com> [Thu, 11 Apr 2019 23:22:54 -0700] rev 42222
copies: calculate mergecopies() based on pathcopies() When copies are stored in changesets, we need a changeset-centric version of mergecopies() just like we have a changeset-centric version of pathcopies(). I think the natural way of thinking about mergecopies() is in terms of pathcopies() from the base to each of the commits. So if we can rewrite mergecopies() based on two such pathcopies() calls, we'll get the changeset-centric version for free. That's what this patch does. A nice bonus is that it ends up being a lot simpler. mergecopies() has accumulated a lot of technical debt over time. One good example is the code for dealing with grafts (the "partial/incomplete/dirty" stuff). Since pathcopies() already deals with backwards renames and ping-pong renames, we get that for free. I've run tests with hard-coded debug logging for "fullcopy" and while I haven't looked at every difference it produces, all the ones I have looked at seemed reasonable to me. I'm a little surprised that no more tests fail when run with '--extra-config-opt experimental.copies.read-from=compatibility' compared to before this patch. This patch also fixes the broken cases in test-annotate.t and test-fastannotate.t. It also enables the part of test-copies.t that was previously disabled exactly because mergecopies() needed a changeset-centric version. One drawback of the rewritten code is that we may now make remotefilelog prefetch more files. We used to prefetch files that were unique to either side of the merge compared to the other. We now prefetch files that are unique to either side of the merge compared to the base. This means that if you added the same file to each side, we would not prefetch it before, but we would now. Such cases are probably quite rare, but one likely scenario where they happen is when moving from a commit to its successor (or the other way around). The user will probably already have the files in the cache in such cases, so it's probably not a big deal. Some timings for calculating mergecopies between two revisions (revisions shown on each line, all using the common ancestor as base):
In the hg repo:
  4.8 4.9: 0.21s -> 0.21s
  4.0 4.8: 0.35s -> 0.63s
In an old copy of the mozilla-unified repo:
  FIREFOX_BETA_60_BASE^ FIREFOX_BETA_60_BASE: 0.82s -> 0.82s
  FIREFOX_NIGHTLY_59_END FIREFOX_BETA_60_BASE: 2.5s -> 2.6s
  FIREFOX_BETA_59_END FIREFOX_BETA_60_BASE: 3.9s -> 4.1s
  FIREFOX_AURORA_50_BASE FIREFOX_BETA_60_BASE: 31s -> 33s
So it's measurably slower in most cases. The most significant difference is in the hg repo between revisions 4.0 and 4.8. In that case it seems to come from the fact that pathcopies() uses fctx.isintroducedafter() (in _tracefile), while the old mergecopies() used fctx.linkrev() (in _checkcopies()). That results in a single call to filectx._adjustlinkrev(), which is responsible for the entire difference in time (in my repo). So we pay a performance penalty but we get more correct code (see change in test-mv-cp-st-diff.t). Deleting the "== f.filenode()" in _tracefile() recovers the lost performance in the hg repo. There were a few other optimizations in _checkcopies() that I could not measure any impact from. One was from the "seen" set. Another was from a "continue" when the file was not in the destination manifest (corresponding to "am" in _tracefile). Also note that merge copies are not calculated when updating with a clean working copy, which is probably the most common case. I therefore think the much simpler code is worth the slowdown. Differential Revision: https://phab.mercurial-scm.org/D6255
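In outline, the rewrite computes copies from the merge base to each side and derives the merge copy information from those two maps. A minimal, hypothetical sketch of just that first step (not the real mergecopies(), which also derives divergence and rename/delete information):

    from mercurial import copies as copiesmod

    def copies_per_side(c1, c2, base):
        # c1, c2 and base are changectx objects; pathcopies() already
        # handles backwards and ping-pong renames along each path
        copies1 = copiesmod.pathcopies(base, c1)
        copies2 = copiesmod.pathcopies(base, c2)
        return copies1, copies2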
Mon, 29 Apr 2019 14:38:54 -0700 tests: add test where copy source is deleted and added back
Martin von Zweigbergk <martinvonz@google.com> [Mon, 29 Apr 2019 14:38:54 -0700] rev 42221
tests: add test where copy source is deleted and added back This shows another difference between pathcopies() and mergecopies(): mergecopies() considers files that have been deleted and then added back as different files, but pathcopies() does not. Differential Revision: https://phab.mercurial-scm.org/D6330
Wed, 01 May 2019 14:30:25 -0400 merge with stable
Augie Fackler <augie@google.com> [Wed, 01 May 2019 14:30:25 -0400] rev 42220
merge with stable
Wed, 01 May 2019 14:27:19 -0400 Added signature for changeset 07e479ef7c96 stable
Augie Fackler <raf@durin42.com> [Wed, 01 May 2019 14:27:19 -0400] rev 42219
Added signature for changeset 07e479ef7c96
Wed, 01 May 2019 14:27:17 -0400 Added tag 5.0 for changeset 07e479ef7c96 stable
Augie Fackler <raf@durin42.com> [Wed, 01 May 2019 14:27:17 -0400] rev 42218
Added tag 5.0 for changeset 07e479ef7c96
Mon, 29 Apr 2019 23:00:42 -0400 obsolete: drop the legacy `_enabled` variable
Matt Harbison <matt_harbison@yahoo.com> [Mon, 29 Apr 2019 23:00:42 -0400] rev 42217
obsolete: drop the legacy `_enabled` variable Evolve 8.5.0 stopped setting this, and it would have been easier to figure out why TortoiseHg stopped allowing amends if it had crashed on the missing variable.
Sat, 27 Apr 2019 14:43:43 +0300 discovery: only calculate closed branches if required
Pulkit Goyal <pulkit@yandex-team.ru> [Sat, 27 Apr 2019 14:43:43 +0300] rev 42216
discovery: only calculate closed branches if required The number of new closed branches is only required for the error message, so let's only calculate it if we need to print an error about new branches. Differential Revision: https://phab.mercurial-scm.org/D6314
Thu, 25 Apr 2019 19:17:02 +0200 hghave: deal with "rc" release stable 5.0
Pierre-Yves David <pierre-yves.david@octobus.net> [Thu, 25 Apr 2019 19:17:02 +0200] rev 42215
hghave: deal with "rc" release Without this change, 5.0rc0 is not recognised as 5.0
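An illustrative way (not hghave's actual code) to make a release candidate satisfy the corresponding version requirement is to strip the "rc" suffix before comparing:

    def normalized_hg_version(v):
        # "5.0rc0" -> (5, 0); "5.0.1" -> (5, 0, 1)
        base = v.partition('rc')[0]
        return tuple(int(p) for p in base.split('.') if p.isdigit())

    assert normalized_hg_version("5.0rc0") >= normalized_hg_version("5.0")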
Sat, 27 Apr 2019 02:13:43 +0300 branchcache: store the maximum tip in a variable inside for loop
Pulkit Goyal <pulkit@yandex-team.ru> [Sat, 27 Apr 2019 02:13:43 +0300] rev 42214
branchcache: store the maximum tip in a variable inside for loop Instead of assigning self.tiprev multiple times inside the for loop, and calling cl.node() on it each time, let's store the value in a temporary variable and assign it at the end of the loop. Differential Revision: https://phab.mercurial-scm.org/D6311
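Illustrative sketch of the pattern (hypothetical names, not the actual branchcache code): track the maximum revision in a local variable and resolve its node only once, after the loop:

    def find_tip(cl, candidate_revs, current_tiprev):
        max_rev = current_tiprev
        for rev in candidate_revs:
            if rev > max_rev:
                max_rev = rev          # no cl.node() call inside the loop
        return max_rev, cl.node(max_rev)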
Sat, 27 Apr 2019 23:30:19 -0700 tests: demonstrate that rename is followed to wrong parent from merge
Martin von Zweigbergk <martinvonz@google.com> [Sat, 27 Apr 2019 23:30:19 -0700] rev 42213
tests: demonstrate that rename is followed to wrong parent from merge This test case shows another way that copies are handled differently between `hg st` (pathcopies()) and `hg co -m` (mergecopies). The reason is that pathcopies() calls _tracefiles(), which checks that the file nodeid of an ancestor matches the file nodeid in the base commit. mergecopies() should probably be doing the same. Differential Revision: https://phab.mercurial-scm.org/D6323
Sat, 27 Apr 2019 23:14:49 -0700 test: demonstrate failure to follow rename with shadowed linkrev
Martin von Zweigbergk <martinvonz@google.com> [Sat, 27 Apr 2019 23:14:49 -0700] rev 42212
test: demonstrate failure to follow rename with shadowed linkrev This shows a difference in handling of copies between `hg st` (pathcopies()) and `hg co -m`. The issue here is that mergecopies() uses the unadjusted linkrev() for determining when to stop walking ancestors. Differential Revision: https://phab.mercurial-scm.org/D6322
Sat, 27 Apr 2019 22:57:15 -0700 tests: slightly modify a linkrev test to prepare for expanding it
Martin von Zweigbergk <martinvonz@google.com> [Sat, 27 Apr 2019 22:57:15 -0700] rev 42211
tests: slightly modify a linkrev test to prepare for expanding it The test case checks that the copy tracing code doesn't get confused by linkrevs when walking a file's ancestors. This patch changes the test slightly so a second commit is grafted, thus producing a second "bad" linkrev. I'll use this in the next patch to demonstrate a bug. Differential Revision: https://phab.mercurial-scm.org/D6321
Sat, 27 Apr 2019 22:55:54 -0700 copies: process files in deterministic order for stable tests
Martin von Zweigbergk <martinvonz@google.com> [Sat, 27 Apr 2019 22:55:54 -0700] rev 42210
copies: process files in deterministic order for stable tests I also fixed a typo while at it. Differential Revision: https://phab.mercurial-scm.org/D6320
Wed, 17 Apr 2019 15:06:41 +0300 narrow: send specs as bundle2 data instead of param (issue5952) (issue6019) stable
Pulkit Goyal <pulkit@yandex-team.ru> [Wed, 17 Apr 2019 15:06:41 +0300] rev 42209
narrow: send specs as bundle2 data instead of param (issue5952) (issue6019) Before this patch, when ACL is involved, narrowspecs are sent as a bundle2 parameter of the narrow:spec bundle2 part. The limitation of bundle2 part parameters is that they cannot carry data larger than 255 bytes. Includes and excludes in narrow are not limited in size and can grow beyond 255 bytes. This patch introduces a new mandatory bundle2 part and sends the narrowspecs as its data. The new bundle2 part is introduced to keep things cleaner and easier to distinguish with respect to backward compatibility. The part is mandatory because without the server's narrowspec, the local ACL narrow repo won't work. This patch makes clients compatible with servers running older versions. However, I left a comment that we should drop the other bundle2 part soon, as that one is broken and people should not rely on it. I named the new bundle2 part 'Narrow:responsespec' because:
1) the capital 'N' makes it mandatory
2) 'Narrow:spec' cannot be used because bundle2 enforces that no two different parts may resolve to the same name when lowercased
3) 'responsespec' makes clear that these are specs sent as a response by the server
While I was here, I renamed the `narrowhgacl` section to `narrowacl` as suggested by idlsoft@ and martinvonz@. Differential Revision: https://phab.mercurial-scm.org/D6310
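A hedged sketch of the approach: put the patterns in the part's data rather than its parameters. The helper name and payload encoding below are assumptions; only the part name 'Narrow:responsespec' comes from the message above.

    def add_narrowspec_part(bundler, includepats, excludepats):
        # encode the (unbounded) include/exclude patterns as the part's
        # payload; part parameters are capped at 255 bytes, data is not
        data = b'\n'.join(sorted(includepats)) + b'\0' + \
               b'\n'.join(sorted(excludepats))
        # the capitalized part name marks the part as mandatory for the
        # receiver
        bundler.newpart(b'Narrow:responsespec', data=data)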