hidden: use _domainancestors to compute revs revealed by dynamic blocker
The complexity of computing the revealed changesets is now 'O(revealed)'.
This massively speeds up the computation on large repositories, moving it into
the millisecond range.
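The key idea is to walk only the ancestors of the dynamic blockers that are
still inside the hidden set, so the work done stays proportional to the number
of revisions actually revealed. Below is a minimal sketch of that idea; the
function name, signature and helpers are illustrative assumptions, not
Mercurial's actual '_domainancestors' implementation:

def revealedrevs(parentsfn, hidden, blockers):
    """Return the hidden revisions that must be shown again.

    parentsfn(rev) -> parent revisions of 'rev'
    hidden        -> set of currently hidden revisions
    blockers      -> revisions pinned by dynamic blockers (working copy
                     parents, bookmarks, ...) that may point into 'hidden'

    Illustrative sketch only; each revealed revision is visited once, so the
    cost is O(revealed).
    """
    revealed = set()
    stack = [r for r in blockers if r in hidden]
    while stack:
        rev = stack.pop()
        if rev in revealed:
            continue
        revealed.add(rev)
        for p in parentsfn(rev):
            # only keep walking while we stay inside the hidden set:
            # already-visible ancestors do not need to be revealed
            if p in hidden:
                stack.append(p)
    return revealed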
Below are timings from two Mozilla repositories with different contents:
1) mozilla repository with:
* 400667 changesets
* 35 hidden changesets (first one is rev 268334)
* 288 visible drafts
* obsolete working copy (dynamic blockers)
Before:
! visible
! wall 0.030247 comb 0.030000 user 0.030000 sys 0.000000 (best of 100)
After:
! visible
! wall 0.000585 comb 0.000000 user 0.000000 sys 0.000000 (best of 4221)
The timings above include the computation of the obsolete changesets:
! obsolete
! wall 0.000396 comb 0.000000 user 0.000000 sys 0.000000 (best of 6816)
So the adjusted times (raw time minus the obsolete computation) are about 30ms
before versus 0.2ms after, a 150x speedup.
2) mozilla repository with:
* 405645 changesets
* 4312 hidden changesets (first one is rev 326004)
* 264 visible drafts
* obsolete working copy (dynamic blockers)
Before:
! visible
! wall 0.168658 comb 0.170000 user 0.170000 sys 0.000000 (best of 48)
After:
! visible
! wall 0.008612 comb 0.010000 user 0.010000 sys 0.000000 (best of 325)
The timings above include the computation of the obsolete changesets:
! obsolete
! wall 0.006408 comb 0.010000 user 0.010000 sys 0.000000 (best of 404)
So the adjusted times (raw time minus the obsolete computation) are about
160ms before versus 2ms after, a 75x speedup.
#require serve
Initialize repository
The status call below is there to check for issue5130.
$ hg init server
$ cd server
$ touch foo
$ hg -q commit -A -m initial
>>> for i in range(1024):
...     with open(str(i), 'wb') as fh:
...         fh.write(str(i))
$ hg -q commit -A -m 'add a lot of files'
$ hg st
$ hg serve -p $HGPORT -d --pid-file=hg.pid
$ cat hg.pid >> $DAEMON_PIDS
$ cd ..
Basic clone
$ hg clone --uncompressed -U http://localhost:$HGPORT clone1
streaming all changes
1027 files to transfer, 96.3 KB of data
transferred 96.3 KB in * seconds (*/sec) (glob)
searching for changes
no changes found
Clone with background file closing enabled
$ hg --debug --config worker.backgroundclose=true --config worker.backgroundcloseminfilecount=1 clone --uncompressed -U http://localhost:$HGPORT clone-background | grep -v adding
using http://localhost:$HGPORT/
sending capabilities command
sending branchmap command
streaming all changes
sending stream_out command
1027 files to transfer, 96.3 KB of data
starting 4 threads for background file closing
transferred 96.3 KB in * seconds (*/sec) (glob)
query 1; heads
sending batch command
searching for changes
all remote heads known locally
no changes found
sending getbundle command
bundle2-input-bundle: with-transaction
bundle2-input-part: "listkeys" (params: 1 mandatory) supported
bundle2-input-part: total payload size 58
bundle2-input-part: "listkeys" (params: 1 mandatory) supported
bundle2-input-bundle: 1 parts total
checking for updated bookmarks
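The "starting 4 threads for background file closing" line comes from the
worker.backgroundclose feature exercised above: when many small files are
written during a stream clone, closing them is handed off to a small pool of
worker threads so the thread writing the data does not block on close().
Setting worker.backgroundcloseminfilecount=1 lowers the threshold so the
background path is used even for this small repository. A minimal,
illustrative sketch of the pattern (class and method names are assumptions,
not Mercurial's actual implementation):

import queue
import threading

class backgroundcloser(object):
    """Close file handles on worker threads instead of the writing thread.

    Illustrative sketch only.
    """

    def __init__(self, numthreads=4):
        self._queue = queue.Queue()
        self._threads = []
        for _ in range(numthreads):
            t = threading.Thread(target=self._run)
            t.daemon = True
            t.start()
            self._threads.append(t)

    def _run(self):
        while True:
            fh = self._queue.get()
            if fh is None:  # sentinel: stop this worker
                return
            fh.close()

    def close(self, fh):
        # hand the open handle to a worker instead of closing it inline
        self._queue.put(fh)

    def shutdown(self):
        for _ in self._threads:
            self._queue.put(None)
        for t in self._threads:
            t.join()

A real implementation also has to report close() errors back to the caller;
the sketch ignores that for brevity.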
Stream clone while repo is changing:
$ mkdir changing
$ cd changing
extension for delaying the server process so we can reliably modify the repo
while cloning
$ cat > delayer.py <<EOF
> import time
> from mercurial import extensions, vfs
> def __call__(orig, self, path, *args, **kwargs):
>     if path == 'data/f1.i':
>         time.sleep(2)
>     return orig(self, path, *args, **kwargs)
> extensions.wrapfunction(vfs.vfs, '__call__', __call__)
> EOF
prepare a repo with a small and a big file to cover both code paths in emitrevlogdata
$ hg init repo
$ touch repo/f1
$ $TESTDIR/seq.py 50000 > repo/f2
$ hg -R repo ci -Aqm "0"
$ hg -R repo serve -p $HGPORT1 -d --pid-file=hg.pid --config extensions.delayer=delayer.py
$ cat hg.pid >> $DAEMON_PIDS
clone while modifying the repo between stat'ing the files with the write lock
held and actually serving the file content
$ hg clone -q --uncompressed -U http://localhost:$HGPORT1 clone &
$ sleep 1
$ echo >> repo/f1
$ echo >> repo/f2
$ hg -R repo ci -m "1"
$ wait
$ hg -R clone id
000000000000