Wed, 20 Feb 2013 11:31:38 -0800 cmdutil: use a small initial window with --limit
Bryan O'Sullivan <bryano@fb.com> [Wed, 20 Feb 2013 11:31:38 -0800] rev 18710
cmdutil: use a small initial window with --limit In a large repo, running a command like "log -l1 -p" was expensive because it would always traverse 8 commits, as 8 was the initial window size. We now choose the lesser of 8 or the limit, speeding up the "log -l1 -p" case by a factor of 5.
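A minimal sketch of the windowing idea described above (names and defaults here are illustrative, not the actual cmdutil code): the first window is capped at the requested limit, and windows only grow after that, so a command like "log -l1 -p" never pays for more commits than it will show.

    # Illustrative sketch only; assumes a simple (start, size) window generator.
    def increasing_windows(start, end, limit=None, windowsize=8, sizelimit=512):
        if limit is not None:
            windowsize = min(windowsize, limit)   # cap the initial window at --limit
        while start < end:
            yield start, min(windowsize, end - start)
            start += windowsize
            if windowsize < sizelimit:
                windowsize *= 2                   # grow later windows as before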
Wed, 20 Feb 2013 11:31:34 -0800 worker: handle worker failures more aggressively
Bryan O'Sullivan <bryano@fb.com> [Wed, 20 Feb 2013 11:31:34 -0800] rev 18709
worker: handle worker failures more aggressively We now wait for worker processes in a separate thread, so that we can spot failures in a timely way, without waiting for the progress pipe to drain. If a worker fails, we recover the pre-parallel-update behaviour of failing early by killing its peers before propagating the failure.
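The pattern described above can be sketched roughly as follows (the helper and its arguments are hypothetical, not Mercurial's actual worker.py): a background thread waits on the child pids, and as soon as one exits with a non-zero status the surviving peers are killed so the failure propagates early instead of after the pipe drains.

    # Rough sketch of "wait in a thread, kill peers on failure".
    import os, signal, threading

    def watch_workers(pids, on_failure):
        def waiter():
            remaining = set(pids)
            while remaining:
                pid, status = os.wait()           # reap any finished child
                if pid not in remaining:
                    continue
                remaining.discard(pid)
                if status != 0:
                    for peer in remaining:        # fail early: stop the others
                        try:
                            os.kill(peer, signal.SIGTERM)
                        except OSError:
                            pass
                    on_failure(status)
                    return
        t = threading.Thread(target=waiter)
        t.daemon = True
        t.start()
        return t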
Wed, 20 Feb 2013 11:31:31 -0800 worker: fix a race in SIGINT handling
Bryan O'Sullivan <bryano@fb.com> [Wed, 20 Feb 2013 11:31:31 -0800] rev 18708
worker: fix a race in SIGINT handling This is almost impossible to trigger due to the tiny time window involved.
Wed, 20 Feb 2013 11:31:27 -0800 worker: on error, exit similarly to the first failing worker
Bryan O'Sullivan <bryano@fb.com> [Wed, 20 Feb 2013 11:31:27 -0800] rev 18707
worker: on error, exit similarly to the first failing worker Previously, if a worker failed, we exited with status 1. We now exit with the correct exit code (killing ourselves if necessary).
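A minimal sketch of what "exit similarly to the first failing worker" can look like in Python (the helper name is made up; the real logic lives in worker.py): a wait status either carries a plain exit code, which is passed straight through, or a terminating signal, which is re-raised on the current process after restoring the default handler.

    # Sketch: propagate a child's exit status, killing ourselves if necessary.
    import os, signal

    def exit_like(status):
        if os.WIFEXITED(status):
            os._exit(os.WEXITSTATUS(status))      # exit with the same code
        if os.WIFSIGNALED(status):
            sig = os.WTERMSIG(status)
            signal.signal(sig, signal.SIG_DFL)    # restore default disposition
            os.kill(os.getpid(), sig)             # die from the same signal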
Tue, 19 Feb 2013 13:35:39 -0600 merge with stable
Kevin Bullock <kbullock@ringworld.org> [Tue, 19 Feb 2013 13:35:39 -0600] rev 18706
merge with stable
Tue, 19 Feb 2013 13:35:25 -0600 merge with main
Kevin Bullock <kbullock@ringworld.org> [Tue, 19 Feb 2013 13:35:25 -0600] rev 18705
merge with main
Sat, 09 Feb 2013 21:07:42 +0000 largefiles: don't cache largefiles for pulled heads by default
Na'Tosha Bard <natosha@unity3d.com> [Sat, 09 Feb 2013 21:07:42 +0000] rev 18704
largefiles: don't cache largefiles for pulled heads by default After discussion, we've agreed that largefiles for newly pulled heads should not be cached by default. The use case for this is using largefiles repos with multiple remote servers (and therefore multiple remote largefiles caches), where users will be pulling from non-default locations on a regular basis. We think this use case will be significantly less common than the use case where all largefiles are stored on the same central server, so the default should be no caching. The old behavior can be obtained by passing the --cache-largefiles flag to pull.
Mon, 18 Feb 2013 13:21:27 -0600 merge with stable
Matt Mackall <mpm@selenic.com> [Mon, 18 Feb 2013 13:21:27 -0600] rev 18703
merge with stable
Mon, 18 Feb 2013 13:20:59 -0600 merge with crew
Matt Mackall <mpm@selenic.com> [Mon, 18 Feb 2013 13:20:59 -0600] rev 18702
merge with crew
Mon, 18 Feb 2013 00:04:28 +0900 bundle: treat branches newly created in the local repository correctly (issue3828) stable
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Mon, 18 Feb 2013 00:04:28 +0900] rev 18701
bundle: treat branches newly created in the local repository correctly (issue3828) Before this patch, "hg bundle --branch foo other" failed to create a bundle file if the specified "foo" branch had been newly created in the local repository. "hg bundle" used "hg.addbranchrevs(repo, other, ...)" to look up branch names, even though other outgoing-like implementations use "hg.addbranchrevs(repo, repo, ...)". In the former invocation, the "other" repository recognizes such branches as unknown, so execution aborts. This patch uses "hg.addbranchrevs(repo, repo, ...)" in "hg bundle" so that revisions on such branches are bundled correctly.
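A toy model of the lookup difference described above (not Mercurial code; the dict-based "repositories" are stand-ins for real repo objects): branch names are resolved against whichever repository is passed as the lookup source, so a branch that exists only locally is reported as unknown when resolved against the remote.

    # Toy illustration of why resolving --branch against "other" failed.
    def lookup_branches(lookup_repo, branches):
        known = lookup_repo["branches"]
        for b in branches:
            if b not in known:
                raise LookupError("unknown branch %r" % b)
        return [known[b] for b in branches]

    local = {"branches": {"default": 0, "foo": 3}}   # "foo" created locally
    remote = {"branches": {"default": 0}}            # remote has no "foo" yet

    lookup_branches(local, ["foo"])      # works: resolved against the local repo
    # lookup_branches(remote, ["foo"])   # would raise: "foo" unknown remotely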