Tue, 06 Jun 2023 04:56:54 +0200 perf: add a perf::stream-consume
Pierre-Yves David <pierre-yves.david@octobus.net> [Tue, 06 Jun 2023 04:56:54 +0200] rev 50678
perf: add a perf::stream-consume We know how long it takes to generate; let's check how long it takes to apply now.
Tue, 06 Jun 2023 04:09:05 +0200 perf: add a perf::stream-generate command
Pierre-Yves David <pierre-yves.david@octobus.net> [Tue, 06 Jun 2023 04:09:05 +0200] rev 50677
perf: add a perf::stream-generate command This records the time we take to generate a bundle.
Mon, 12 Jun 2023 18:04:09 +0200 perf: add a new "context" argument to timer
Pierre-Yves David <pierre-yves.david@octobus.net> [Mon, 12 Jun 2023 18:04:09 +0200] rev 50676
perf: add a new "context" argument to timer This allows simple setup/teardown outside of the timed section, especially for objects that need a context manager, like temporary files.
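As a rough sketch of the idea (the helper name and signature below are illustrative, not the actual contrib/perf.py API): the timer accepts an optional context manager that is entered before the clock starts and exited after it stops, so setup and teardown stay out of the measured window.

    import contextlib
    import tempfile
    import time

    def timed_call(func, context=None):
        # Hypothetical helper: `context` is any context manager whose
        # setup/teardown must not be counted in the measured time.
        if context is None:
            context = contextlib.nullcontext()
        with context:
            start = time.perf_counter()
            result = func()
            elapsed = time.perf_counter() - start
        return result, elapsed

    # The temporary file is cleaned up outside the timed section.
    _, seconds = timed_call(lambda: sum(range(10**6)),
                            context=tempfile.TemporaryFile())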
Tue, 06 Jun 2023 01:48:10 +0200 perf: add support for stream-v3 during benchmark
Pierre-Yves David <pierre-yves.david@octobus.net> [Tue, 06 Jun 2023 01:48:10 +0200] rev 50675
perf: add support for stream-v3 during benchmark This is getting important as the v3 protocol will diverge from the v2 protocol.
Tue, 06 Jun 2023 01:43:48 +0200 perf: add a function to find a stream version generator
Pierre-Yves David <pierre-yves.david@octobus.net> [Tue, 06 Jun 2023 01:43:48 +0200] rev 50674
perf: add a function to find a stream version generator The logic is clearer and can be reused for other commands in the future.
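A minimal sketch of such a lookup (the function and generator names are hypothetical placeholders, not the real contrib/perf.py helpers): the requested stream version is mapped to the matching generator so every benchmark command resolves it the same way.

    def _generate_stream_v2(repo):
        # Placeholder for the assumed v2 stream bundle generator.
        raise NotImplementedError

    def _generate_stream_v3(repo):
        # Placeholder for the assumed v3 stream bundle generator.
        raise NotImplementedError

    def _find_stream_generator(version):
        # Map a requested stream-clone version to its generator function.
        generators = {"v2": _generate_stream_v2, "v3": _generate_stream_v3}
        try:
            return generators[version]
        except KeyError:
            raise ValueError("unknown stream-clone version: %r" % version)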
Thu, 18 May 2023 19:23:59 +0100 treemanifest: make `updatecaches` update the nodemaps for all directories
Arseniy Alekseyev <aalekseyev@janestreet.com> [Thu, 18 May 2023 19:23:59 +0100] rev 50673
treemanifest: make `updatecaches` update the nodemaps for all directories Without this, if the cache for a nested directory is in a bad state, it's very hard to repair it.
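As a hedged illustration of the behaviour (the class and method names are made up for the example, not the real treemanifest API): the cache update now walks every nested directory rather than only the root, so a corrupted nodemap in a subdirectory gets rebuilt as well.

    # Hypothetical sketch: refresh the nodemap cache of the root manifest
    # and of every nested directory, depth first.
    class DirManifest:
        def __init__(self, path, children=()):
            self.path = path
            self.children = list(children)

        def update_nodemap_cache(self):
            # Stand-in for the real per-revlog nodemap rebuild.
            print("rebuilding nodemap for %s" % (self.path or "<root>"))

    def update_all_nodemaps(root):
        root.update_nodemap_cache()
        for child in root.children:
            update_all_nodemaps(child)

    update_all_nodemaps(
        DirManifest("", [DirManifest("lib/", [DirManifest("lib/ui/")])]))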
Wed, 31 May 2023 10:37:55 +0100 stream-clone: avoid opening a revlog in case we do not need it
Arseniy Alekseyev <aalekseyev@janestreet.com> [Wed, 31 May 2023 10:37:55 +0100] rev 50672
stream-clone: avoid opening a revlog in case we do not need it

Opening a revlog has a cost, especially if it is inline, as we have to scan the file and construct an index. To prevent the associated slowdown, we just do a minimal scan to check that an inline file is still inline, and simply stream the file without creating a revlog when we can (a rough sketch follows the numbers below). This provides a big boost compared to the previous changeset, even if the full generation is still penalized by the initial gathering of information.

All benchmarks are run on linux with Python 3.10.7.

# benchmark.name = hg.exchange.stream.generate
# benchmark.variants.version = v2

### Compared to the previous changesets

We get a large win all across the board!

# mercurial-2018-08-01-zstd-sparse-revlog
before: 0.250694 seconds
after:  0.105986 seconds (-57.72%)
# pypy-2018-08-01-zstd-sparse-revlog
before: 3.885657 seconds
after:  1.709748 seconds (-56.00%)
# netbeans-2018-08-01-zstd-sparse-revlog
before: 16.679371 seconds
after:   7.687469 seconds (-53.91%)
# mozilla-central-2018-08-01-zstd-sparse-revlog
before: 38.575482 seconds
after:  17.520316 seconds (-54.58%)
# mozilla-try-2019-02-18-zstd-sparse-revlog
before: 81.160994 seconds
after:  37.073753 seconds (-54.32%)

### Compared to 6.4.3

We are still significantly slower than 6.4.3; the extra time is usually about twice the extra time we observe in the locked section, which is quite interesting. The exception is mozilla-central, which is now much faster. That discrepancy is not really explained yet.

# mercurial-2018-08-01-zstd-sparse-revlog
6.4.3: 0.072560 seconds
after: 0.105986 seconds (+46.07%) (-0.03 seconds)
# pypy-2018-08-01-zstd-sparse-revlog
6.4.3: 1.211193 seconds
after: 1.709748 seconds (+41.16%) (-0.45 seconds)
# netbeans-2018-08-01-zstd-sparse-revlog
6.4.3: 4.932843 seconds
after: 7.687469 seconds (+55.84%) (-2.75 seconds)
# mozilla-central-2018-08-01-zstd-sparse-revlog
6.4.3: 34.012226 seconds
after: 17.520316 seconds (-48.49%) (-16.49 seconds)
# mozilla-try-2019-02-18-zstd-sparse-revlog
6.4.3: 23.850555 seconds
after: 37.073753 seconds (+55.44%) (+13.22 seconds)
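A hedged sketch of the shortcut described above (the constant and helper names are illustrative, not the actual mercurial.revlog code): only the revlog header is read to confirm the file is still inline, and if it is, the raw bytes are streamed directly instead of going through a revlog object.

    import struct

    INLINE_FLAG = 1 << 16  # assumed bit for "inline data" in the revlog header

    def is_still_inline(path):
        # Minimal scan: read only the first four bytes (flags + version)
        # instead of constructing a full revlog index for the file.
        with open(path, "rb") as fp:
            header = fp.read(4)
        if len(header) < 4:
            return False
        version_and_flags = struct.unpack(">I", header)[0]
        return bool(version_and_flags & INLINE_FLAG)

    def stream_raw_if_possible(path):
        # Stream the file as-is when it is still inline; otherwise signal
        # the caller to fall back to the regular revlog-based code path.
        if is_still_inline(path):
            with open(path, "rb") as fp:
                return fp.read()
        return None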
Tue, 30 May 2023 17:43:59 +0100 store: stop relying on a `revlog_type` property
Pierre-Yves David <pierre-yves.david@octobus.net> [Tue, 30 May 2023 17:43:59 +0100] rev 50671
store: stop relying on a `revlog_type` property

We want to know if a file is related to a revlog, but the rest is already dealt with differently, so we simplify things further. As a bonus, this cleanup provides a small but noticeable speedup (a rough sketch of the simplification follows the numbers below).

The numbers below use `hg perf::stream-locked-section` to measure the time spent in the locked section of the streaming clone. Numbers are gathered on various repositories and compare several steps: 1) the effect of this patch, 2) the effect of the cleanup series, 3) the current state compared to before the large refactoring.

All benchmarks are run on linux with Python 3.10.7.

### Effect of this patch

# benchmark.name = perf-stream-locked-section

# mercurial-2018-08-01-zstd-sparse-revlog
before: 0.030246 seconds
after:  0.029274 seconds (-3.21%)
# pypy-2018-08-01-zstd-sparse-revlog
before: 0.545012 seconds
after:  0.520872 seconds (-4.43%)
# netbeans-2018-08-01-zstd-sparse-revlog
before: 2.719939 seconds
after:  2.626791 seconds (-3.42%)
# mozilla-central-2018-08-01-zstd-sparse-revlog
before: 6.304179 seconds
after:  6.096700 seconds (-3.29%)
# mozilla-try-2019-02-18-zstd-sparse-revlog
before: 14.142687 seconds
after:  13.640779 seconds (-3.55%)

### Effect of this series

A small but sizeable speedup.

# mercurial-2018-08-01-zstd-sparse-revlog
before: 0.031122 seconds
after:  0.029274 seconds (-5.94%)
# pypy-2018-08-01-zstd-sparse-revlog
before: 0.589970 seconds
after:  0.520872 seconds (-11.71%)
# netbeans-2018-08-01-zstd-sparse-revlog
before: 2.980300 seconds
after:  2.626791 seconds (-11.86%)
# mozilla-central-2018-08-01-zstd-sparse-revlog
before: 6.863204 seconds
after:  6.096700 seconds (-11.17%)
# mozilla-try-2019-02-18-zstd-sparse-revlog
before: 14.921393 seconds
after:  13.640779 seconds (-8.58%)

### Current state compared to the pre-refactoring state

The refactoring introduced multiple string manipulations and dictionary creations that seem to induce a significant slowdown.

# mercurial-2018-08-01-zstd-sparse-revlog
6.4.3: 0.019459 seconds
after: 0.029274 seconds (+50.44%)
# pypy-2018-08-01-zstd-sparse-revlog
6.4.3: 0.290715 seconds
after: 0.520872 seconds (+79.17%)
# netbeans-2018-08-01-zstd-sparse-revlog
6.4.3: 1.403447 seconds
after: 2.626791 seconds (+87.17%)
# mozilla-central-2018-08-01-zstd-sparse-revlog
6.4.3: 3.163549 seconds
after: 6.096700 seconds (+92.72%)
# mozilla-try-2019-02-18-zstd-sparse-revlog
6.4.3: 6.702184 seconds
after: 13.640779 seconds (+103.53%)
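To illustrate the kind of simplification involved (the class and attribute names are hypothetical, not the real mercurial.store code): a fine-grained `revlog_type` value can be replaced by a plain boolean once the only remaining question is whether a file belongs to a revlog.

    from dataclasses import dataclass
    from typing import Optional

    # Hypothetical before: the entry carried a revlog type enum value.
    @dataclass
    class OldStoreEntry:
        unencoded_path: str
        revlog_type: Optional[int]  # assumed enum, None for plain files

    # Hypothetical after: only record whether the file is revlog-related;
    # everything type-specific is already handled elsewhere.
    @dataclass
    class NewStoreEntry:
        unencoded_path: str
        is_revlog: bool = False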
Tue, 30 May 2023 16:38:13 +0100 store: directly pass the filesize in the `details` of revlog
Pierre-Yves David <pierre-yves.david@octobus.net> [Tue, 30 May 2023 16:38:13 +0100] rev 50670
store: directly pass the filesize in the `details` of revlog

The dictionary only ever contains one entry (or none), so we can directly pass that information (or None). Moving to simpler argument passing results in a noticeable speedup (because Python); a rough sketch follows the numbers below.

The numbers below use `hg perf::stream-locked-section` to measure the time spent in the locked section of the streaming clone. Numbers are gathered on various repositories.

### mercurial-2018-08-01-zstd-sparse-revlog
before: 0.031247 seconds
after:  0.030246 seconds (-3.20%)

### mozilla-central-2018-08-01-zstd-sparse-revlog
before: 6.718968 seconds
after:  6.304179 seconds (-6.17%)

### mozilla-try-2019-02-18-zstd-sparse-revlog
before: 14.631343 seconds
after:  14.142687 seconds (-3.34%)

### netbeans-2018-08-01-zstd-sparse-revlog
before: 2.895584 seconds
after:  2.719939 seconds (-6.07%)

### pypy-2018-08-01-zstd-sparse-revlog
before: 0.561843 seconds
after:  0.543034 seconds (-3.35%)
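A small sketch of the argument-passing change (the function names below are illustrative): instead of building a one-entry dictionary per file just to carry its size, the size (or None) is passed directly, which avoids a dictionary allocation and a lookup per file in a hot loop.

    # Hypothetical before: a throwaway dict per file just to carry the size.
    def make_store_file_before(unencoded_path, details):
        file_size = details.get("file_size")  # the dict has 0 or 1 entries
        return (unencoded_path, file_size)

    # Hypothetical after: the size (or None) is passed directly.
    def make_store_file_after(unencoded_path, file_size=None):
        return (unencoded_path, file_size)

    # Over the thousands of files of a large store, skipping the per-file
    # dict creation adds up to the measured few-percent speedup.
    entries = [make_store_file_after("data/foo.i", 1024),
               make_store_file_after("data/foo.d")]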
Tue, 30 May 2023 16:35:10 +0100 store: explicitly pass file_size when creating StoreFile
Pierre-Yves David <pierre-yves.david@octobus.net> [Tue, 30 May 2023 16:35:10 +0100] rev 50669
store: explicitly pass file_size when creating StoreFile A small cleanup before a larger cleanup in the next patch.