merge: default into stable for release candidate

author      Augie Fackler <augie@google.com>
date        Wed, 17 Apr 2019 13:41:18 -0400
branch      stable
tag         5.0rc0
changeset   42146:4a8d9ed86475
parent      41984:d1c33b2442a7 (current diff)
parent      42143:29569f2db929 (diff)
child       42147:807a6ca6d096
contrib/discovery-helper.sh
contrib/python-zstandard/zstd_cffi.py
contrib/win32/mercurial.iss
contrib/win32/win32-build.txt
contrib/wix/COPYING.rtf
contrib/wix/README.txt
contrib/wix/contrib.wxs
contrib/wix/defines.wxi
contrib/wix/dist.wxs
contrib/wix/doc.wxs
contrib/wix/guids.wxi
contrib/wix/help.wxs
contrib/wix/hg.cmd
contrib/wix/i18n.wxs
contrib/wix/locale.wxs
contrib/wix/mercurial.wxs
contrib/wix/templates.wxs
tests/test-demandimport.py.out
--- a/Makefile	Tue Mar 19 09:23:35 2019 -0400
+++ b/Makefile	Wed Apr 17 13:41:18 2019 -0400
@@ -5,7 +5,7 @@
 # % make PREFIX=/opt/ install
 
 export PREFIX=/usr/local
-PYTHON=python
+PYTHON?=python
 $(eval HGROOT := $(shell pwd))
 HGPYTHONS ?= $(HGROOT)/build/pythons
 PURE=
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/automation/README.rst	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,127 @@
+====================
+Mercurial Automation
+====================
+
+This directory contains code and utilities for building and testing Mercurial
+on remote machines.
+
+The ``automation.py`` Script
+============================
+
+``automation.py`` is an executable Python script (requires Python 3.7+,
+as the automation code uses ``subprocess.run(..., capture_output=True)``)
+that serves as a driver for common automation tasks.
+
+When executed, the script will *bootstrap* a virtualenv in
+``<source-root>/build/venv-automation`` and then re-execute itself using
+that virtualenv, so there is no need for the caller to have a virtualenv
+explicitly activated. This virtualenv is populated with the dependencies
+defined by the ``requirements.txt`` file.
+
+To see what you can do with this script, simply run it::
+
+   $ ./automation.py
+
+Local State
+===========
+
+By default, local state required to interact with remote servers is stored
+in the ``~/.hgautomation`` directory.
+
+We attempt to limit persistent state to this directory. Even when
+performing tasks that may have side-effects, we try to limit those
+side-effects so they don't impact the local system. For example, when we
+SSH into a remote machine, we create a temporary directory for the SSH
+config so the user's known hosts file isn't updated.
+
+AWS Integration
+===============
+
+Various automation tasks integrate with AWS to provide access to
+resources such as EC2 instances for generic compute.
+
+This obviously requires an AWS account and credentials to work.
+
+We use the ``boto3`` library for interacting with AWS APIs. We do not employ
+any special functionality for telling ``boto3`` where to find AWS credentials.
+See
+https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html
+for how ``boto3`` resolves credentials. Once you have configured your
+environment such that ``boto3`` can find credentials, interaction with AWS
+should *just work*.
+
+.. hint::
+
+   Typically you have a ``~/.aws/credentials`` file containing AWS
+   credentials. If you manage multiple credentials, you can override which
+   *profile* to use at run-time by setting the ``AWS_PROFILE`` environment
+   variable.
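+
+As a quick sanity check (a minimal sketch, not part of the automation
+code), you can confirm that ``boto3`` resolves credentials before running
+any tasks; the profile name below is a placeholder::
+
+   import boto3
+
+   # Uses the standard boto3 credential chain (environment variables,
+   # ~/.aws/credentials, instance metadata, ...).
+   session = boto3.session.Session(profile_name='myprofile')
+   print(session.client('sts').get_caller_identity()['Account'])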
+
+Resource Management
+-------------------
+
+Depending on the task being performed, various AWS services will be accessed.
+This of course requires AWS credentials with permissions to access these
+services.
+
+The following AWS services can be accessed by automation tasks:
+
+* EC2
+* IAM
+* Simple Systems Manager (SSM)
+
+Various resources will also be created as part of performing these tasks,
+which likewise requires the appropriate permissions.
+
+The following AWS resources can be created by automation tasks:
+
+* EC2 key pairs
+* EC2 security groups
+* EC2 instances
+* IAM roles and instance profiles
+* SSM command invocations
+
+When possible, we prefix resource names with ``hg-`` so they can easily
+be identified as belonging to Mercurial.
+
+.. important::
+
+   We currently assume that AWS accounts utilized by *us* are single
+   tenancy. Having multiple discrete users of ``automation.py`` (including
+   shared credentials across machines) target the same AWS account can
+   result in them interfering with each other and things breaking.
+
+Cost of Operation
+-----------------
+
+``automation.py`` tries to be frugal with regard to its utilization of remote
+resources. Persistent remote resources are minimized in order to keep costs
+in check. For example, EC2 instances are often ephemeral and only live as long
+as the operation being performed.
+
+Under normal operation, recurring costs are limited to:
+
+* Storage costs for AMI / EBS snapshots. This should be just a few pennies
+  per month.
+
+When running EC2 instances, you'll be billed accordingly. By default, we
+use *small* instances, like ``t3.medium``. This instance type costs ~$0.07 per
+hour.
+
+.. note::
+
+   When running Windows EC2 instances, AWS bills at the full hourly cost, even
+   if the instance doesn't run for a full hour (per-second billing doesn't
+   apply to Windows AMIs).
+
+Managing Remote Resources
+-------------------------
+
+Occasionally there may be an error purging a temporary resource, or you
+may wish to forcefully purge remote state. Commands can be invoked to
+manually purge remote resources.
+
+To terminate all EC2 instances that we manage::
+
+   $ automation.py terminate-ec2-instances
+
+To purge all EC2 resources that we manage::
+
+   $ automation.py purge-ec2-resources
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/automation/automation.py	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,70 @@
+#!/usr/bin/env python3
+#
+# automation.py - Perform tasks on remote machines
+#
+# Copyright 2019 Gregory Szorc <gregory.szorc@gmail.com>
+#
+# This software may be used and distributed according to the terms of the
+# GNU General Public License version 2 or any later version.
+
+import os
+import pathlib
+import subprocess
+import sys
+import venv
+
+
+HERE = pathlib.Path(os.path.abspath(__file__)).parent
+REQUIREMENTS_TXT = HERE / 'requirements.txt'
+SOURCE_DIR = HERE.parent.parent
+VENV = SOURCE_DIR / 'build' / 'venv-automation'
+
+
+def bootstrap():
+    venv_created = not VENV.exists()
+
+    VENV.parent.mkdir(exist_ok=True)
+
+    venv.create(VENV, with_pip=True)
+
+    if os.name == 'nt':
+        venv_bin = VENV / 'Scripts'
+        pip = venv_bin / 'pip.exe'
+        python = venv_bin / 'python.exe'
+    else:
+        venv_bin = VENV / 'bin'
+        pip = venv_bin / 'pip'
+        python = venv_bin / 'python'
+
+    args = [str(pip), 'install', '-r', str(REQUIREMENTS_TXT),
+            '--disable-pip-version-check']
+
+    if not venv_created:
+        args.append('-q')
+
+    subprocess.run(args, check=True)
+
+    os.environ['HGAUTOMATION_BOOTSTRAPPED'] = '1'
+    os.environ['PATH'] = '%s%s%s' % (
+        venv_bin, os.pathsep, os.environ['PATH'])
+
+    subprocess.run([str(python), __file__] + sys.argv[1:], check=True)
+
+
+def run():
+    import hgautomation.cli as cli
+
+    # Deferred import: hgautomation's dependencies (boto3, pypsrp, etc.) are
+    # only importable once bootstrap() has populated the virtualenv that this
+    # re-executed process is running under.
+    cli.main()
+
+
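+# Two-phase execution: the first invocation (HGAUTOMATION_BOOTSTRAPPED unset)
+# creates or updates the virtualenv and re-executes this script under the
+# virtualenv's Python; the re-executed process then takes the run() branch.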
+if __name__ == '__main__':
+    try:
+        if 'HGAUTOMATION_BOOTSTRAPPED' not in os.environ:
+            bootstrap()
+        else:
+            run()
+    except subprocess.CalledProcessError as e:
+        sys.exit(e.returncode)
+    except KeyboardInterrupt:
+        sys.exit(1)
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/automation/hgautomation/__init__.py	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,59 @@
+# __init__.py - High-level automation interfaces
+#
+# Copyright 2019 Gregory Szorc <gregory.szorc@gmail.com>
+#
+# This software may be used and distributed according to the terms of the
+# GNU General Public License version 2 or any later version.
+
+# no-check-code because Python 3 native.
+
+import pathlib
+import secrets
+
+from .aws import (
+    AWSConnection,
+)
+
+
+class HGAutomation:
+    """High-level interface for Mercurial automation.
+
+    Holds global state, provides access to other primitives, etc.
+    """
+
+    def __init__(self, state_path: pathlib.Path):
+        self.state_path = state_path
+
+        state_path.mkdir(exist_ok=True)
+
+    def default_password(self):
+        """Obtain the default password to use for remote machines.
+
+        A new password will be generated if one is not stored.
+        """
+        p = self.state_path / 'default-password'
+
+        try:
+            with p.open('r', encoding='ascii') as fh:
+                data = fh.read().strip()
+
+                if data:
+                    return data
+
+        except FileNotFoundError:
+            pass
+
+        password = secrets.token_urlsafe(24)
+
+        with p.open('w', encoding='ascii') as fh:
+            fh.write(password)
+            fh.write('\n')
+
+        p.chmod(0o0600)
+
+        return password
+
+    def aws_connection(self, region: str):
+        """Obtain an AWSConnection instance bound to a specific region."""
+
+        return AWSConnection(self, region)
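+
+
+# Minimal usage sketch (assumes AWS credentials are configured; the state
+# path mirrors the CLI's default):
+#
+#   hga = HGAutomation(pathlib.Path('~/.hgautomation').expanduser())
+#   password = hga.default_password()
+#   conn = hga.aws_connection('us-west-1')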
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/automation/hgautomation/aws.py	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,879 @@
+# aws.py - Automation code for Amazon Web Services
+#
+# Copyright 2019 Gregory Szorc <gregory.szorc@gmail.com>
+#
+# This software may be used and distributed according to the terms of the
+# GNU General Public License version 2 or any later version.
+
+# no-check-code because Python 3 native.
+
+import contextlib
+import copy
+import hashlib
+import json
+import os
+import pathlib
+import subprocess
+import time
+
+import boto3
+import botocore.exceptions
+
+from .winrm import (
+    run_powershell,
+    wait_for_winrm,
+)
+
+
+SOURCE_ROOT = pathlib.Path(os.path.abspath(__file__)).parent.parent.parent.parent
+
+INSTALL_WINDOWS_DEPENDENCIES = (SOURCE_ROOT / 'contrib' /
+                                'install-windows-dependencies.ps1')
+
+
+KEY_PAIRS = {
+    'automation',
+}
+
+
+SECURITY_GROUPS = {
+    'windows-dev-1': {
+        'description': 'Mercurial Windows instances that perform build automation',
+        'ingress': [
+            {
+                'FromPort': 22,
+                'ToPort': 22,
+                'IpProtocol': 'tcp',
+                'IpRanges': [
+                    {
+                        'CidrIp': '0.0.0.0/0',
+                        'Description': 'SSH from entire Internet',
+                    },
+                ],
+            },
+            {
+                'FromPort': 3389,
+                'ToPort': 3389,
+                'IpProtocol': 'tcp',
+                'IpRanges': [
+                    {
+                        'CidrIp': '0.0.0.0/0',
+                        'Description': 'RDP from entire Internet',
+                    },
+                ],
+
+            },
+            {
+                'FromPort': 5985,
+                'ToPort': 5986,
+                'IpProtocol': 'tcp',
+                'IpRanges': [
+                    {
+                        'CidrIp': '0.0.0.0/0',
+                        'Description': 'PowerShell Remoting (Windows Remote Management)',
+                    },
+                ],
+            }
+        ],
+    },
+}
+
+
+IAM_ROLES = {
+    'ephemeral-ec2-role-1': {
+        'description': 'Mercurial temporary EC2 instances',
+        'policy_arns': [
+            'arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM',
+        ],
+    },
+}
+
+
+ASSUME_ROLE_POLICY_DOCUMENT = '''
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Effect": "Allow",
+      "Principal": {
+        "Service": "ec2.amazonaws.com"
+      },
+      "Action": "sts:AssumeRole"
+    }
+  ]
+}
+'''.strip()
+
+
+IAM_INSTANCE_PROFILES = {
+    'ephemeral-ec2-1': {
+        'roles': [
+            'ephemeral-ec2-role-1',
+        ],
+    }
+}
+
+
+# User Data for Windows EC2 instance. Mainly used to set the password
+# and configure WinRM.
+# Inspired by the User Data script used by Packer
+# (from https://www.packer.io/intro/getting-started/build-image.html).
+WINDOWS_USER_DATA = r'''
+<powershell>
+
+# TODO enable this once we figure out what is failing.
+#$ErrorActionPreference = "stop"
+
+# Set administrator password
+net user Administrator "%s"
+wmic useraccount where "name='Administrator'" set PasswordExpires=FALSE
+
+# First, make sure WinRM can't be connected to
+netsh advfirewall firewall set rule name="Windows Remote Management (HTTP-In)" new enable=yes action=block
+
+# Delete any existing WinRM listeners
+winrm delete winrm/config/listener?Address=*+Transport=HTTP  2>$Null
+winrm delete winrm/config/listener?Address=*+Transport=HTTPS 2>$Null
+
+# Create a new WinRM listener and configure
+winrm create winrm/config/listener?Address=*+Transport=HTTP
+winrm set winrm/config/winrs '@{MaxMemoryPerShellMB="0"}'
+winrm set winrm/config '@{MaxTimeoutms="7200000"}'
+winrm set winrm/config/service '@{AllowUnencrypted="true"}'
+winrm set winrm/config/service '@{MaxConcurrentOperationsPerUser="12000"}'
+winrm set winrm/config/service/auth '@{Basic="true"}'
+winrm set winrm/config/client/auth '@{Basic="true"}'
+
+# Configure UAC to allow privilege elevation in remote shells
+$Key = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System'
+$Setting = 'LocalAccountTokenFilterPolicy'
+Set-ItemProperty -Path $Key -Name $Setting -Value 1 -Force
+
+# Configure and restart the WinRM Service; Enable the required firewall exception
+Stop-Service -Name WinRM
+Set-Service -Name WinRM -StartupType Automatic
+netsh advfirewall firewall set rule name="Windows Remote Management (HTTP-In)" new action=allow localip=any remoteip=any
+Start-Service -Name WinRM
+
+# Disable firewall on private network interfaces so prompts don't appear.
+Set-NetFirewallProfile -Name private -Enabled false
+</powershell>
+'''.lstrip()
+
+
+WINDOWS_BOOTSTRAP_POWERSHELL = '''
+Write-Output "installing PowerShell dependencies"
+Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force
+Set-PSRepository -Name PSGallery -InstallationPolicy Trusted
+Install-Module -Name OpenSSHUtils -RequiredVersion 0.0.2.0
+
+Write-Output "installing OpenSSH server"
+Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
+# Various tools will attempt to use older versions of .NET. So we enable
+# the feature that provides them so it doesn't have to be auto-enabled
+# later.
+Write-Output "enabling .NET Framework feature"
+Install-WindowsFeature -Name Net-Framework-Core
+'''
+
+
+class AWSConnection:
+    """Manages the state of a connection with AWS."""
+
+    def __init__(self, automation, region: str):
+        self.automation = automation
+        self.local_state_path = automation.state_path
+
+        self.prefix = 'hg-'
+
+        self.session = boto3.session.Session(region_name=region)
+        self.ec2client = self.session.client('ec2')
+        self.ec2resource = self.session.resource('ec2')
+        self.iamclient = self.session.client('iam')
+        self.iamresource = self.session.resource('iam')
+
+        ensure_key_pairs(automation.state_path, self.ec2resource)
+
+        self.security_groups = ensure_security_groups(self.ec2resource)
+        ensure_iam_state(self.iamresource)
+
+    def key_pair_path_private(self, name):
+        """Path to a key pair private key file."""
+        return self.local_state_path / 'keys' / ('keypair-%s' % name)
+
+    def key_pair_path_public(self, name):
+        return self.local_state_path / 'keys' / ('keypair-%s.pub' % name)
+
+
+def rsa_key_fingerprint(p: pathlib.Path):
+    """Compute the fingerprint of an RSA private key."""
+
+    # TODO use rsa package.
+    res = subprocess.run(
+        ['openssl', 'pkcs8', '-in', str(p), '-nocrypt', '-topk8',
+         '-outform', 'DER'],
+        capture_output=True,
+        check=True)
+
+    sha1 = hashlib.sha1(res.stdout).hexdigest()
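+    # Render as colon-separated hex byte pairs (e.g. 'ab:cd:ef:...'), the
+    # same format EC2 reports for key pairs it generated.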
+    return ':'.join(a + b for a, b in zip(sha1[::2], sha1[1::2]))
+
+
+def ensure_key_pairs(state_path: pathlib.Path, ec2resource, prefix='hg-'):
+    remote_existing = {}
+
+    for kpi in ec2resource.key_pairs.all():
+        if kpi.name.startswith(prefix):
+            remote_existing[kpi.name[len(prefix):]] = kpi.key_fingerprint
+
+    # Validate that we have these keys locally.
+    key_path = state_path / 'keys'
+    key_path.mkdir(exist_ok=True, mode=0o700)
+
+    def remove_remote(name):
+        print('deleting key pair %s' % name)
+        key = ec2resource.KeyPair(name)
+        key.delete()
+
+    def remove_local(name):
+        pub_full = key_path / ('keypair-%s.pub' % name)
+        priv_full = key_path / ('keypair-%s' % name)
+
+        print('removing %s' % pub_full)
+        pub_full.unlink()
+        print('removing %s' % priv_full)
+        priv_full.unlink()
+
+    local_existing = {}
+
+    for f in sorted(os.listdir(key_path)):
+        if not f.startswith('keypair-') or not f.endswith('.pub'):
+            continue
+
+        name = f[len('keypair-'):-len('.pub')]
+
+        pub_full = key_path / f
+        priv_full = key_path / ('keypair-%s' % name)
+
+        with open(pub_full, 'r', encoding='ascii') as fh:
+            data = fh.read()
+
+        if not data.startswith('ssh-rsa '):
+            print('unexpected format for key pair file: %s; removing' %
+                  pub_full)
+            pub_full.unlink()
+            priv_full.unlink()
+            continue
+
+        local_existing[name] = rsa_key_fingerprint(priv_full)
+
+    for name in sorted(set(remote_existing) | set(local_existing)):
+        if name not in local_existing:
+            actual = '%s%s' % (prefix, name)
+            print('remote key %s does not exist locally' % name)
+            remove_remote(actual)
+            del remote_existing[name]
+
+        elif name not in remote_existing:
+            print('local key %s does not exist remotely' % name)
+            remove_local(name)
+            del local_existing[name]
+
+        elif remote_existing[name] != local_existing[name]:
+            print('key fingerprint mismatch for %s; '
+                  'removing from local and remote' % name)
+            remove_local(name)
+            remove_remote('%s%s' % (prefix, name))
+            del local_existing[name]
+            del remote_existing[name]
+
+    missing = KEY_PAIRS - set(remote_existing)
+
+    for name in sorted(missing):
+        actual = '%s%s' % (prefix, name)
+        print('creating key pair %s' % actual)
+
+        priv_full = key_path / ('keypair-%s' % name)
+        pub_full = key_path / ('keypair-%s.pub' % name)
+
+        kp = ec2resource.create_key_pair(KeyName=actual)
+
+        with priv_full.open('w', encoding='ascii') as fh:
+            fh.write(kp.key_material)
+            fh.write('\n')
+
+        priv_full.chmod(0o0600)
+
+        # SSH public key can be extracted via `ssh-keygen`.
+        with pub_full.open('w', encoding='ascii') as fh:
+            subprocess.run(
+                ['ssh-keygen', '-y', '-f', str(priv_full)],
+                stdout=fh,
+                check=True)
+
+        pub_full.chmod(0o0600)
+
+
+def delete_instance_profile(profile):
+    for role in profile.roles:
+        print('removing role %s from instance profile %s' % (role.name,
+                                                             profile.name))
+        profile.remove_role(RoleName=role.name)
+
+    print('deleting instance profile %s' % profile.name)
+    profile.delete()
+
+
+def ensure_iam_state(iamresource, prefix='hg-'):
+    """Ensure IAM state is in sync with our canonical definition."""
+
+    remote_profiles = {}
+
+    for profile in iamresource.instance_profiles.all():
+        if profile.name.startswith(prefix):
+            remote_profiles[profile.name[len(prefix):]] = profile
+
+    for name in sorted(set(remote_profiles) - set(IAM_INSTANCE_PROFILES)):
+        delete_instance_profile(remote_profiles[name])
+        del remote_profiles[name]
+
+    remote_roles = {}
+
+    for role in iamresource.roles.all():
+        if role.name.startswith(prefix):
+            remote_roles[role.name[len(prefix):]] = role
+
+    for name in sorted(set(remote_roles) - set(IAM_ROLES)):
+        role = remote_roles[name]
+
+        print('removing role %s' % role.name)
+        role.delete()
+        del remote_roles[name]
+
+    # We've purged remote state that doesn't belong. Create missing
+    # instance profiles and roles.
+    for name in sorted(set(IAM_INSTANCE_PROFILES) - set(remote_profiles)):
+        actual = '%s%s' % (prefix, name)
+        print('creating IAM instance profile %s' % actual)
+
+        profile = iamresource.create_instance_profile(
+            InstanceProfileName=actual)
+        remote_profiles[name] = profile
+
+    for name in sorted(set(IAM_ROLES) - set(remote_roles)):
+        entry = IAM_ROLES[name]
+
+        actual = '%s%s' % (prefix, name)
+        print('creating IAM role %s' % actual)
+
+        role = iamresource.create_role(
+            RoleName=actual,
+            Description=entry['description'],
+            AssumeRolePolicyDocument=ASSUME_ROLE_POLICY_DOCUMENT,
+        )
+
+        remote_roles[name] = role
+
+        for arn in entry['policy_arns']:
+            print('attaching policy %s to %s' % (arn, role.name))
+            role.attach_policy(PolicyArn=arn)
+
+    # Now reconcile state of profiles.
+    for name, meta in sorted(IAM_INSTANCE_PROFILES.items()):
+        profile = remote_profiles[name]
+        wanted = {'%s%s' % (prefix, role) for role in meta['roles']}
+        have = {role.name for role in profile.roles}
+
+        for role in sorted(have - wanted):
+            print('removing role %s from %s' % (role, profile.name))
+            profile.remove_role(RoleName=role)
+
+        for role in sorted(wanted - have):
+            print('adding role %s to %s' % (role, profile.name))
+            profile.add_role(RoleName=role)
+
+
+def find_windows_server_2019_image(ec2resource):
+    """Find the Amazon published Windows Server 2019 base image."""
+
+    images = ec2resource.images.filter(
+        Filters=[
+            {
+                'Name': 'owner-alias',
+                'Values': ['amazon'],
+            },
+            {
+                'Name': 'state',
+                'Values': ['available'],
+            },
+            {
+                'Name': 'image-type',
+                'Values': ['machine'],
+            },
+            {
+                'Name': 'name',
+                'Values': ['Windows_Server-2019-English-Full-Base-2019.02.13'],
+            },
+        ])
+
+    for image in images:
+        return image
+
+    raise Exception('unable to find Windows Server 2019 image')
+
+
+def ensure_security_groups(ec2resource, prefix='hg-'):
+    """Ensure all necessary Mercurial security groups are present.
+
+    All security groups are prefixed with ``hg-`` by default. Any security
+    groups that have this prefix but aren't in our list are deleted.
+    """
+    existing = {}
+
+    for group in ec2resource.security_groups.all():
+        if group.group_name.startswith(prefix):
+            existing[group.group_name[len(prefix):]] = group
+
+    purge = set(existing) - set(SECURITY_GROUPS)
+
+    for name in sorted(purge):
+        group = existing[name]
+        print('removing legacy security group: %s' % group.group_name)
+        group.delete()
+
+    security_groups = {}
+
+    for name, group in sorted(SECURITY_GROUPS.items()):
+        if name in existing:
+            security_groups[name] = existing[name]
+            continue
+
+        actual = '%s%s' % (prefix, name)
+        print('adding security group %s' % actual)
+
+        group_res = ec2resource.create_security_group(
+            Description=group['description'],
+            GroupName=actual,
+        )
+
+        group_res.authorize_ingress(
+            IpPermissions=group['ingress'],
+        )
+
+        security_groups[name] = group_res
+
+    return security_groups
+
+
+def terminate_ec2_instances(ec2resource, prefix='hg-'):
+    """Terminate all EC2 instances managed by us."""
+    waiting = []
+
+    for instance in ec2resource.instances.all():
+        if instance.state['Name'] == 'terminated':
+            continue
+
+        for tag in instance.tags or []:
+            if tag['Key'] == 'Name' and tag['Value'].startswith(prefix):
+                print('terminating %s' % instance.id)
+                instance.terminate()
+                waiting.append(instance)
+
+    for instance in waiting:
+        instance.wait_until_terminated()
+
+
+def remove_resources(c, prefix='hg-'):
+    """Purge all of our resources in this EC2 region."""
+    ec2resource = c.ec2resource
+    iamresource = c.iamresource
+
+    terminate_ec2_instances(ec2resource, prefix=prefix)
+
+    for image in ec2resource.images.all():
+        if image.name.startswith(prefix):
+            remove_ami(ec2resource, image)
+
+    for group in ec2resource.security_groups.all():
+        if group.group_name.startswith(prefix):
+            print('removing security group %s' % group.group_name)
+            group.delete()
+
+    for profile in iamresource.instance_profiles.all():
+        if profile.name.startswith(prefix):
+            delete_instance_profile(profile)
+
+    for role in iamresource.roles.all():
+        if role.name.startswith(prefix):
+            print('removing role %s' % role.name)
+            role.delete()
+
+
+def wait_for_ip_addresses(instances):
+    """Wait for the public IP addresses of an iterable of instances."""
+    for instance in instances:
+        while True:
+            if not instance.public_ip_address:
+                time.sleep(2)
+                instance.reload()
+                continue
+
+            print('public IP address for %s: %s' % (
+                instance.id, instance.public_ip_address))
+            break
+
+
+def remove_ami(ec2resource, image):
+    """Remove an AMI and its underlying snapshots."""
+    snapshots = []
+
+    for device in image.block_device_mappings:
+        if 'Ebs' in device:
+            snapshots.append(ec2resource.Snapshot(device['Ebs']['SnapshotId']))
+
+    print('deregistering %s' % image.id)
+    image.deregister()
+
+    for snapshot in snapshots:
+        print('deleting snapshot %s' % snapshot.id)
+        snapshot.delete()
+
+
+def wait_for_ssm(ssmclient, instances):
+    """Wait for the SSM agent to come online on an iterable of instances."""
+    while True:
+        res = ssmclient.describe_instance_information(
+            Filters=[
+                {
+                    'Key': 'InstanceIds',
+                    'Values': [i.id for i in instances],
+                },
+            ],
+        )
+
+        available = len(res['InstanceInformationList'])
+        wanted = len(instances)
+
+        print('%d/%d instances available in SSM' % (available, wanted))
+
+        if available == wanted:
+            return
+
+        time.sleep(2)
+
+
+def run_ssm_command(ssmclient, instances, document_name, parameters):
+    """Run an SSM command on EC2 instances and wait for it to complete."""
+
+    res = ssmclient.send_command(
+        InstanceIds=[i.id for i in instances],
+        DocumentName=document_name,
+        Parameters=parameters,
+        CloudWatchOutputConfig={
+            'CloudWatchOutputEnabled': True,
+        },
+    )
+
+    command_id = res['Command']['CommandId']
+
+    for instance in instances:
+        while True:
+            try:
+                res = ssmclient.get_command_invocation(
+                    CommandId=command_id,
+                    InstanceId=instance.id,
+                )
+            except botocore.exceptions.ClientError as e:
+                if e.response['Error']['Code'] == 'InvocationDoesNotExist':
+                    print('could not find SSM command invocation; waiting')
+                    time.sleep(1)
+                    continue
+                else:
+                    raise
+
+            if res['Status'] == 'Success':
+                break
+            elif res['Status'] in ('Pending', 'InProgress', 'Delayed'):
+                time.sleep(2)
+            else:
+                raise Exception('command failed on %s: %s' % (
+                    instance.id, res['Status']))
+
+
+@contextlib.contextmanager
+def temporary_ec2_instances(ec2resource, config):
+    """Create temporary EC2 instances.
+
+    This is a proxy to ``ec2resource.create_instances(**config)`` that takes
+    care of managing the lifecycle of the instances.
+
+    When the context manager exits, the instances are terminated.
+
+    The context manager evaluates to the list of data structures
+    describing each created instance. The instances may not be available
+    for work immediately: it is up to the caller to wait for the instance
+    to start responding.
+    """
+
+    ids = None
+
+    try:
+        res = ec2resource.create_instances(**config)
+
+        ids = [i.id for i in res]
+        print('started instances: %s' % ' '.join(ids))
+
+        yield res
+    finally:
+        if ids:
+            print('terminating instances: %s' % ' '.join(ids))
+            for instance in res:
+                instance.terminate()
+            print('terminated %d instances' % len(ids))
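+
+# Minimal usage sketch (the config dict follows the EC2 ``create_instances``
+# API; values here are placeholders):
+#
+#   with temporary_ec2_instances(ec2resource, {
+#       'ImageId': 'ami-00000000', 'InstanceType': 't3.medium',
+#       'MinCount': 1, 'MaxCount': 1,
+#   }) as instances:
+#       wait_for_ip_addresses(instances)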
+
+
+@contextlib.contextmanager
+def create_temp_windows_ec2_instances(c: AWSConnection, config):
+    """Create temporary Windows EC2 instances.
+
+    This is a higher-level wrapper around ``temporary_ec2_instances()`` that
+    configures the Windows instance for Windows Remote Management. The emitted
+    instances will have a ``winrm_client`` attribute containing a
+    ``pypsrp.client.Client`` instance bound to the instance.
+    """
+    if 'IamInstanceProfile' in config:
+        raise ValueError('IamInstanceProfile cannot be provided in config')
+    if 'UserData' in config:
+        raise ValueError('UserData cannot be provided in config')
+
+    password = c.automation.default_password()
+
+    config = copy.deepcopy(config)
+    config['IamInstanceProfile'] = {
+        'Name': 'hg-ephemeral-ec2-1',
+    }
+    config.setdefault('TagSpecifications', []).append({
+        'ResourceType': 'instance',
+        'Tags': [{'Key': 'Name', 'Value': 'hg-temp-windows'}],
+    })
+    config['UserData'] = WINDOWS_USER_DATA % password
+
+    with temporary_ec2_instances(c.ec2resource, config) as instances:
+        wait_for_ip_addresses(instances)
+
+        print('waiting for Windows Remote Management service...')
+
+        for instance in instances:
+            client = wait_for_winrm(instance.public_ip_address, 'Administrator', password)
+            print('established WinRM connection to %s' % instance.id)
+            instance.winrm_client = client
+
+        yield instances
+
+
+def ensure_windows_dev_ami(c: AWSConnection, prefix='hg-'):
+    """Ensure Windows Development AMI is available and up-to-date.
+
+    If necessary, a modern AMI will be built by starting a temporary EC2
+    instance and bootstrapping it.
+
+    Obsolete AMIs will be deleted so there is only a single AMI having the
+    desired name.
+
+    Returns an ``ec2.Image`` of either an existing AMI or a newly-built
+    one.
+    """
+    ec2client = c.ec2client
+    ec2resource = c.ec2resource
+    ssmclient = c.session.client('ssm')
+
+    name = '%s%s' % (prefix, 'windows-dev')
+
+    config = {
+        'BlockDeviceMappings': [
+            {
+                'DeviceName': '/dev/sda1',
+                'Ebs': {
+                    'DeleteOnTermination': True,
+                    'VolumeSize': 32,
+                    'VolumeType': 'gp2',
+                },
+            }
+        ],
+        'ImageId': find_windows_server_2019_image(ec2resource).id,
+        'InstanceInitiatedShutdownBehavior': 'stop',
+        'InstanceType': 't3.medium',
+        'KeyName': '%sautomation' % prefix,
+        'MaxCount': 1,
+        'MinCount': 1,
+        'SecurityGroupIds': [c.security_groups['windows-dev-1'].id],
+    }
+
+    commands = [
+        # Need to start the service so sshd_config is generated.
+        'Start-Service sshd',
+        'Write-Output "modifying sshd_config"',
+        r'$content = Get-Content C:\ProgramData\ssh\sshd_config',
+        '$content = $content -replace "Match Group administrators","" -replace "AuthorizedKeysFile __PROGRAMDATA__/ssh/administrators_authorized_keys",""',
+        r'$content | Set-Content C:\ProgramData\ssh\sshd_config',
+        'Import-Module OpenSSHUtils',
+        r'Repair-SshdConfigPermission C:\ProgramData\ssh\sshd_config -Confirm:$false',
+        'Restart-Service sshd',
+        'Write-Output "installing OpenSSH client"',
+        'Add-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0',
+        'Set-Service -Name sshd -StartupType "Automatic"',
+        'Write-Output "OpenSSH server running"',
+    ]
+
+    with INSTALL_WINDOWS_DEPENDENCIES.open('r', encoding='utf-8') as fh:
+        commands.extend(l.rstrip() for l in fh)
+
+    # Disable Windows Defender when bootstrapping because it just slows
+    # things down.
+    commands.insert(0, 'Set-MpPreference -DisableRealtimeMonitoring $true')
+    commands.append('Set-MpPreference -DisableRealtimeMonitoring $false')
+
+    # Compute a deterministic fingerprint to determine whether image needs
+    # to be regenerated.
+    fingerprint = {
+        'instance_config': config,
+        'user_data': WINDOWS_USER_DATA,
+        'initial_bootstrap': WINDOWS_BOOTSTRAP_POWERSHELL,
+        'bootstrap_commands': commands,
+    }
+
+    fingerprint = json.dumps(fingerprint, sort_keys=True)
+    fingerprint = hashlib.sha256(fingerprint.encode('utf-8')).hexdigest()
+
+    # Find existing AMIs with this name and delete the ones that are invalid.
+    # Store a reference to a good image so it can be returned once the
+    # image state is reconciled.
+    images = ec2resource.images.filter(
+        Filters=[{'Name': 'name', 'Values': [name]}])
+
+    existing_image = None
+
+    for image in images:
+        if image.tags is None:
+            print('image %s for %s lacks required tags; removing' % (
+                image.id, image.name))
+            remove_ami(ec2resource, image)
+        else:
+            tags = {t['Key']: t['Value'] for t in image.tags}
+
+            if tags.get('HGIMAGEFINGERPRINT') == fingerprint:
+                existing_image = image
+            else:
+                print('image %s for %s has wrong fingerprint; removing' % (
+                      image.id, image.name))
+                remove_ami(ec2resource, image)
+
+    if existing_image:
+        return existing_image
+
+    print('no suitable Windows development image found; creating one...')
+
+    with create_temp_windows_ec2_instances(c, config) as instances:
+        assert len(instances) == 1
+        instance = instances[0]
+
+        wait_for_ssm(ssmclient, [instance])
+
+        # On first boot, install various Windows updates.
+        # We would ideally use PowerShell Remoting for this. However, there are
+        # trust issues that make it difficult to invoke Windows Update
+        # remotely. So we use SSM, which has a mechanism for running Windows
+        # Update.
+        print('installing Windows features...')
+        run_ssm_command(
+            ssmclient,
+            [instance],
+            'AWS-RunPowerShellScript',
+            {
+                'commands': WINDOWS_BOOTSTRAP_POWERSHELL.split('\n'),
+            },
+        )
+
+        # Reboot so all updates are fully applied.
+        print('rebooting instance %s' % instance.id)
+        ec2client.reboot_instances(InstanceIds=[instance.id])
+
+        time.sleep(15)
+
+        print('waiting for Windows Remote Management to come back...')
+        client = wait_for_winrm(instance.public_ip_address, 'Administrator',
+                                c.automation.default_password())
+        print('established WinRM connection to %s' % instance.id)
+        instance.winrm_client = client
+
+        print('bootstrapping instance...')
+        run_powershell(instance.winrm_client, '\n'.join(commands))
+
+        print('bootstrap completed; stopping %s to create image' % instance.id)
+        instance.stop()
+
+        ec2client.get_waiter('instance_stopped').wait(
+            InstanceIds=[instance.id],
+            WaiterConfig={
+                'Delay': 5,
+            })
+        print('%s is stopped' % instance.id)
+
+        image = instance.create_image(
+            Name=name,
+            Description='Mercurial Windows development environment',
+        )
+
+        image.create_tags(Tags=[
+            {
+                'Key': 'HGIMAGEFINGERPRINT',
+                'Value': fingerprint,
+            },
+        ])
+
+        print('waiting for image %s' % image.id)
+
+        ec2client.get_waiter('image_available').wait(
+            ImageIds=[image.id],
+        )
+
+        print('image %s available as %s' % (image.id, image.name))
+
+        return image
+
+
+@contextlib.contextmanager
+def temporary_windows_dev_instances(c: AWSConnection, image, instance_type,
+                                    prefix='hg-', disable_antivirus=False):
+    """Create a temporary Windows development EC2 instance.
+
+    Context manager resolves to the list of ``EC2.Instance`` that were created.
+    """
+    config = {
+        'BlockDeviceMappings': [
+            {
+                'DeviceName': '/dev/sda1',
+                'Ebs': {
+                    'DeleteOnTermination': True,
+                    'VolumeSize': 32,
+                    'VolumeType': 'gp2',
+                },
+            }
+        ],
+        'ImageId': image.id,
+        'InstanceInitiatedShutdownBehavior': 'stop',
+        'InstanceType': instance_type,
+        'KeyName': '%sautomation' % prefix,
+        'MaxCount': 1,
+        'MinCount': 1,
+        'SecurityGroupIds': [c.security_groups['windows-dev-1'].id],
+    }
+
+    with create_temp_windows_ec2_instances(c, config) as instances:
+        if disable_antivirus:
+            for instance in instances:
+                run_powershell(
+                    instance.winrm_client,
+                    'Set-MpPreference -DisableRealtimeMonitoring $true')
+
+        yield instances
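+
+
+# Minimal usage sketch tying the pieces together (assumes an AWSConnection
+# ``c`` built via HGAutomation.aws_connection()):
+#
+#   image = ensure_windows_dev_ami(c)
+#   with temporary_windows_dev_instances(c, image, 't3.medium') as insts:
+#       run_powershell(insts[0].winrm_client, 'Write-Output "hello"')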
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/automation/hgautomation/cli.py	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,273 @@
+# cli.py - Command line interface for automation
+#
+# Copyright 2019 Gregory Szorc <gregory.szorc@gmail.com>
+#
+# This software may be used and distributed according to the terms of the
+# GNU General Public License version 2 or any later version.
+
+# no-check-code because Python 3 native.
+
+import argparse
+import os
+import pathlib
+
+from . import (
+    aws,
+    HGAutomation,
+    windows,
+)
+
+
+SOURCE_ROOT = pathlib.Path(os.path.abspath(__file__)).parent.parent.parent.parent
+DIST_PATH = SOURCE_ROOT / 'dist'
+
+
+def bootstrap_windows_dev(hga: HGAutomation, aws_region):
+    c = hga.aws_connection(aws_region)
+    image = aws.ensure_windows_dev_ami(c)
+    print('Windows development AMI available as %s' % image.id)
+
+
+def build_inno(hga: HGAutomation, aws_region, arch, revision, version):
+    c = hga.aws_connection(aws_region)
+    image = aws.ensure_windows_dev_ami(c)
+    DIST_PATH.mkdir(exist_ok=True)
+
+    with aws.temporary_windows_dev_instances(c, image, 't3.medium') as insts:
+        instance = insts[0]
+
+        windows.synchronize_hg(SOURCE_ROOT, revision, instance)
+
+        for a in arch:
+            windows.build_inno_installer(instance.winrm_client, a,
+                                         DIST_PATH,
+                                         version=version)
+
+
+def build_wix(hga: HGAutomation, aws_region, arch, revision, version):
+    c = hga.aws_connection(aws_region)
+    image = aws.ensure_windows_dev_ami(c)
+    DIST_PATH.mkdir(exist_ok=True)
+
+    with aws.temporary_windows_dev_instances(c, image, 't3.medium') as insts:
+        instance = insts[0]
+
+        windows.synchronize_hg(SOURCE_ROOT, revision, instance)
+
+        for a in arch:
+            windows.build_wix_installer(instance.winrm_client, a,
+                                        DIST_PATH, version=version)
+
+
+def build_windows_wheel(hga: HGAutomation, aws_region, arch, revision):
+    c = hga.aws_connection(aws_region)
+    image = aws.ensure_windows_dev_ami(c)
+    DIST_PATH.mkdir(exist_ok=True)
+
+    with aws.temporary_windows_dev_instances(c, image, 't3.medium') as insts:
+        instance = insts[0]
+
+        windows.synchronize_hg(SOURCE_ROOT, revision, instance)
+
+        for a in arch:
+            windows.build_wheel(instance.winrm_client, a, DIST_PATH)
+
+
+def build_all_windows_packages(hga: HGAutomation, aws_region, revision):
+    c = hga.aws_connection(aws_region)
+    image = aws.ensure_windows_dev_ami(c)
+    DIST_PATH.mkdir(exist_ok=True)
+
+    with aws.temporary_windows_dev_instances(c, image, 't3.medium') as insts:
+        instance = insts[0]
+
+        winrm_client = instance.winrm_client
+
+        windows.synchronize_hg(SOURCE_ROOT, revision, instance)
+
+        for arch in ('x86', 'x64'):
+            windows.purge_hg(winrm_client)
+            windows.build_wheel(winrm_client, arch, DIST_PATH)
+            windows.purge_hg(winrm_client)
+            windows.build_inno_installer(winrm_client, arch, DIST_PATH)
+            windows.purge_hg(winrm_client)
+            windows.build_wix_installer(winrm_client, arch, DIST_PATH)
+
+
+def terminate_ec2_instances(hga: HGAutomation, aws_region):
+    c = hga.aws_connection(aws_region)
+    aws.terminate_ec2_instances(c.ec2resource)
+
+
+def purge_ec2_resources(hga: HGAutomation, aws_region):
+    c = hga.aws_connection(aws_region)
+    aws.remove_resources(c)
+
+
+def run_tests_windows(hga: HGAutomation, aws_region, instance_type,
+                      python_version, arch, test_flags):
+    c = hga.aws_connection(aws_region)
+    image = aws.ensure_windows_dev_ami(c)
+
+    with aws.temporary_windows_dev_instances(c, image, instance_type,
+                                             disable_antivirus=True) as insts:
+        instance = insts[0]
+
+        windows.synchronize_hg(SOURCE_ROOT, '.', instance)
+        windows.run_tests(instance.winrm_client, python_version, arch,
+                          test_flags)
+
+
+def get_parser():
+    parser = argparse.ArgumentParser()
+
+    parser.add_argument(
+        '--state-path',
+        default='~/.hgautomation',
+        help='Path for local state files',
+    )
+    parser.add_argument(
+        '--aws-region',
+        help='AWS region to use',
+        default='us-west-1',
+    )
+
+    subparsers = parser.add_subparsers()
+
+    sp = subparsers.add_parser(
+        'bootstrap-windows-dev',
+        help='Bootstrap the Windows development environment',
+    )
+    sp.set_defaults(func=bootstrap_windows_dev)
+
+    sp = subparsers.add_parser(
+        'build-all-windows-packages',
+        help='Build all Windows packages',
+    )
+    sp.add_argument(
+        '--revision',
+        help='Mercurial revision to build',
+        default='.',
+    )
+    sp.set_defaults(func=build_all_windows_packages)
+
+    sp = subparsers.add_parser(
+        'build-inno',
+        help='Build Inno Setup installer(s)',
+    )
+    sp.add_argument(
+        '--arch',
+        help='Architecture to build for',
+        choices={'x86', 'x64'},
+        nargs='*',
+        default=['x64'],
+    )
+    sp.add_argument(
+        '--revision',
+        help='Mercurial revision to build',
+        default='.',
+    )
+    sp.add_argument(
+        '--version',
+        help='Mercurial version string to use in installer',
+    )
+    sp.set_defaults(func=build_inno)
+
+    sp = subparsers.add_parser(
+        'build-windows-wheel',
+        help='Build Windows wheel(s)',
+    )
+    sp.add_argument(
+        '--arch',
+        help='Architecture to build for',
+        choices={'x86', 'x64'},
+        nargs='*',
+        default=['x64'],
+    )
+    sp.add_argument(
+        '--revision',
+        help='Mercurial revision to build',
+        default='.',
+    )
+    sp.set_defaults(func=build_windows_wheel)
+
+    sp = subparsers.add_parser(
+        'build-wix',
+        help='Build WiX installer(s)'
+    )
+    sp.add_argument(
+        '--arch',
+        help='Architecture to build for',
+        choices={'x86', 'x64'},
+        nargs='*',
+        default=['x64'],
+    )
+    sp.add_argument(
+        '--revision',
+        help='Mercurial revision to build',
+        default='.',
+    )
+    sp.add_argument(
+        '--version',
+        help='Mercurial version string to use in installer',
+    )
+    sp.set_defaults(func=build_wix)
+
+    sp = subparsers.add_parser(
+        'terminate-ec2-instances',
+        help='Terminate all active EC2 instances managed by us',
+    )
+    sp.set_defaults(func=terminate_ec2_instances)
+
+    sp = subparsers.add_parser(
+        'purge-ec2-resources',
+        help='Purge all EC2 resources managed by us',
+    )
+    sp.set_defaults(func=purge_ec2_resources)
+
+    sp = subparsers.add_parser(
+        'run-tests-windows',
+        help='Run tests on Windows',
+    )
+    sp.add_argument(
+        '--instance-type',
+        help='EC2 instance type to use',
+        default='t3.medium',
+    )
+    sp.add_argument(
+        '--python-version',
+        help='Python version to use',
+        choices={'2.7', '3.5', '3.6', '3.7', '3.8'},
+        default='2.7',
+    )
+    sp.add_argument(
+        '--arch',
+        help='Architecture to test',
+        choices={'x86', 'x64'},
+        default='x64',
+    )
+    sp.add_argument(
+        '--test-flags',
+        help='Extra command line flags to pass to run-tests.py',
+    )
+    sp.set_defaults(func=run_tests_windows)
+
+    return parser
+
+
+def main():
+    parser = get_parser()
+    args = parser.parse_args()
+
+    local_state_path = pathlib.Path(os.path.expanduser(args.state_path))
+    automation = HGAutomation(local_state_path)
+
+    if not hasattr(args, 'func'):
+        parser.print_help()
+        return
+
+    kwargs = dict(vars(args))
+    del kwargs['func']
+    del kwargs['state_path']
+
+    args.func(automation, **kwargs)
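+
+
+# Example invocations (through the bootstrapping ``automation.py`` wrapper):
+#
+#   $ ./automation.py build-inno --arch x64 --revision .
+#   $ ./automation.py run-tests-windows --python-version 3.7 --arch x64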
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/automation/hgautomation/windows.py	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,287 @@
+# windows.py - Automation specific to Windows
+#
+# Copyright 2019 Gregory Szorc <gregory.szorc@gmail.com>
+#
+# This software may be used and distributed according to the terms of the
+# GNU General Public License version 2 or any later version.
+
+# no-check-code because Python 3 native.
+
+import os
+import pathlib
+import re
+import subprocess
+import tempfile
+
+from .winrm import (
+    run_powershell,
+)
+
+
+# PowerShell commands to activate a Visual Studio 2008 environment.
+# This is essentially a port of vcvarsall.bat to PowerShell.
+ACTIVATE_VC9_AMD64 = r'''
+Write-Output "activating Visual Studio 2008 environment for AMD64"
+$root = "$env:LOCALAPPDATA\Programs\Common\Microsoft\Visual C++ for Python\9.0"
+$Env:VCINSTALLDIR = "${root}\VC\"
+$Env:WindowsSdkDir = "${root}\WinSDK\"
+$Env:PATH = "${root}\VC\Bin\amd64;${root}\WinSDK\Bin\x64;${root}\WinSDK\Bin;$Env:PATH"
+$Env:INCLUDE = "${root}\VC\Include;${root}\WinSDK\Include;$Env:INCLUDE"
+$Env:LIB = "${root}\VC\Lib\amd64;${root}\WinSDK\Lib\x64;$Env:LIB"
+$Env:LIBPATH = "${root}\VC\Lib\amd64;${root}\WinSDK\Lib\x64;$Env:LIBPATH"
+'''.lstrip()
+
+ACTIVATE_VC9_X86 = r'''
+Write-Output "activating Visual Studio 2008 environment for x86"
+$root = "$env:LOCALAPPDATA\Programs\Common\Microsoft\Visual C++ for Python\9.0"
+$Env:VCINSTALLDIR = "${root}\VC\"
+$Env:WindowsSdkDir = "${root}\WinSDK\"
+$Env:PATH = "${root}\VC\Bin;${root}\WinSDK\Bin;$Env:PATH"
+$Env:INCLUDE = "${root}\VC\Include;${root}\WinSDK\Include;$Env:INCLUDE"
+$Env:LIB = "${root}\VC\Lib;${root}\WinSDK\Lib;$Env:LIB"
+$Env:LIBPATH = "${root}\VC\Lib;${root}\WinSDK\Lib;$Env:LIBPATH"
+'''.lstrip()
+
+HG_PURGE = r'''
+$Env:PATH = "C:\hgdev\venv-bootstrap\Scripts;$Env:PATH"
+Set-Location C:\hgdev\src
+hg.exe --config extensions.purge= purge --all
+if ($LASTEXITCODE -ne 0) {
+    throw "process exited non-0: $LASTEXITCODE"
+}
+Write-Output "purged Mercurial repo"
+'''
+
+HG_UPDATE_CLEAN = r'''
+$Env:PATH = "C:\hgdev\venv-bootstrap\Scripts;$Env:PATH"
+Set-Location C:\hgdev\src
+hg.exe --config extensions.purge= purge --all
+if ($LASTEXITCODE -ne 0) {{
+    throw "process exited non-0: $LASTEXITCODE"
+}}
+hg.exe update -C {revision}
+if ($LASTEXITCODE -ne 0) {{
+    throw "process exited non-0: $LASTEXITCODE"
+}}
+hg.exe log -r .
+Write-Output "updated Mercurial working directory to {revision}"
+'''.lstrip()
+
+BUILD_INNO = r'''
+Set-Location C:\hgdev\src
+$python = "C:\hgdev\python27-{arch}\python.exe"
+C:\hgdev\python37-x64\python.exe contrib\packaging\inno\build.py --python $python {extra_args}
+if ($LASTEXITCODE -ne 0) {{
+    throw "process exited non-0: $LASTEXITCODE"
+}}
+'''.lstrip()
+
+BUILD_WHEEL = r'''
+Set-Location C:\hgdev\src
+C:\hgdev\python27-{arch}\Scripts\pip.exe wheel --wheel-dir dist .
+if ($LASTEXITCODE -ne 0) {{
+    throw "process exited non-0: $LASTEXITCODE"
+}}
+'''
+
+BUILD_WIX = r'''
+Set-Location C:\hgdev\src
+$python = "C:\hgdev\python27-{arch}\python.exe"
+C:\hgdev\python37-x64\python.exe contrib\packaging\wix\build.py --python $python {extra_args}
+if ($LASTEXITCODE -ne 0) {{
+    throw "process exited non-0: $LASTEXITCODE"
+}}
+'''
+
+RUN_TESTS = r'''
+C:\hgdev\MinGW\msys\1.0\bin\sh.exe --login -c "cd /c/hgdev/src/tests && /c/hgdev/{python_path}/python.exe run-tests.py {test_flags}"
+if ($LASTEXITCODE -ne 0) {{
+    throw "process exited non-0: $LASTEXITCODE"
+}}
+'''
+
+
+def get_vc_prefix(arch):
+    if arch == 'x86':
+        return ACTIVATE_VC9_X86
+    elif arch == 'x64':
+        return ACTIVATE_VC9_AMD64
+    else:
+        raise ValueError('illegal arch: %s; must be x86 or x64' % arch)
+
+
+def fix_authorized_keys_permissions(winrm_client, path):
+    commands = [
+        '$ErrorActionPreference = "Stop"',
+        'Repair-AuthorizedKeyPermission -FilePath %s -Confirm:$false' % path,
+        r'icacls %s /remove:g "NT Service\sshd"' % path,
+    ]
+
+    run_powershell(winrm_client, '\n'.join(commands))
+
+
+def synchronize_hg(hg_repo: pathlib.Path, revision: str, ec2_instance):
+    """Synchronize local Mercurial repo to remote EC2 instance."""
+
+    winrm_client = ec2_instance.winrm_client
+
+    with tempfile.TemporaryDirectory() as temp_dir:
+        temp_dir = pathlib.Path(temp_dir)
+
+        ssh_dir = temp_dir / '.ssh'
+        ssh_dir.mkdir()
+        ssh_dir.chmod(0o0700)
+
+        # Generate SSH key to use for communication.
+        subprocess.run([
+            'ssh-keygen', '-t', 'rsa', '-b', '4096', '-N', '',
+            '-f', str(ssh_dir / 'id_rsa')],
+            check=True, capture_output=True)
+
+        # Add it to ~/.ssh/authorized_keys on remote.
+        # This assumes the file doesn't already exist.
+        authorized_keys = r'c:\Users\Administrator\.ssh\authorized_keys'
+        winrm_client.execute_cmd(r'mkdir c:\Users\Administrator\.ssh')
+        winrm_client.copy(str(ssh_dir / 'id_rsa.pub'), authorized_keys)
+        fix_authorized_keys_permissions(winrm_client, authorized_keys)
+
+        public_ip = ec2_instance.public_ip_address
+
+        ssh_config = temp_dir / '.ssh' / 'config'
+
+        with open(ssh_config, 'w', encoding='utf-8') as fh:
+            fh.write('Host %s\n' % public_ip)
+            fh.write('  User Administrator\n')
+            fh.write('  StrictHostKeyChecking no\n')
+            fh.write('  UserKnownHostsFile %s\n' % (ssh_dir / 'known_hosts'))
+            fh.write('  IdentityFile %s\n' % (ssh_dir / 'id_rsa'))
+
+        env = dict(os.environ)
+        env['HGPLAIN'] = '1'
+        env['HGENCODING'] = 'utf-8'
+
+        hg_bin = hg_repo / 'hg'
+
+        res = subprocess.run(
+            ['python2.7', str(hg_bin), 'log', '-r', revision, '-T', '{node}'],
+            cwd=str(hg_repo), env=env, check=True, capture_output=True)
+
+        full_revision = res.stdout.decode('ascii')
+
+        args = [
+            'python2.7', str(hg_bin),
+            '--config', 'ui.ssh=ssh -F %s' % ssh_config,
+            '--config', 'ui.remotecmd=c:/hgdev/venv-bootstrap/Scripts/hg.exe',
+            'push', '-r', full_revision, 'ssh://%s/c:/hgdev/src' % public_ip,
+        ]
+
+        subprocess.run(args, cwd=str(hg_repo), env=env, check=True)
+
+        run_powershell(winrm_client,
+                       HG_UPDATE_CLEAN.format(revision=full_revision))
+
+        # TODO detect dirty local working directory and synchronize accordingly.
+
+
+def purge_hg(winrm_client):
+    """Purge the Mercurial source repository on an EC2 instance."""
+    run_powershell(winrm_client, HG_PURGE)
+
+
+def find_latest_dist(winrm_client, pattern):
+    """Find path to newest file in dist/ directory matching a pattern."""
+
+    res = winrm_client.execute_ps(
+        r'$v = Get-ChildItem -Path C:\hgdev\src\dist -Filter "%s" '
+        '| Sort-Object LastWriteTime -Descending '
+        '| Select-Object -First 1\n'
+        '$v.name' % pattern
+    )
+    return res[0]
+
+
+def copy_latest_dist(winrm_client, pattern, dest_path):
+    """Copy latest file matching pattern in dist/ directory.
+
+    Given a WinRM client and a file pattern, find the latest file on the remote
+    matching that pattern and copy it to the ``dest_path`` directory on the
+    local machine.
+    """
+    latest = find_latest_dist(winrm_client, pattern)
+    source = r'C:\hgdev\src\dist\%s' % latest
+    dest = dest_path / latest
+    print('copying %s to %s' % (source, dest))
+    winrm_client.fetch(source, str(dest))
+
+
+def build_inno_installer(winrm_client, arch: str, dest_path: pathlib.Path,
+                         version=None):
+    """Build the Inno Setup installer on a remote machine.
+
+    Using a WinRM client, remote commands are executed to build
+    a Mercurial Inno Setup installer.
+    """
+    print('building Inno Setup installer for %s' % arch)
+
+    extra_args = []
+    if version:
+        extra_args.extend(['--version', version])
+
+    ps = get_vc_prefix(arch) + BUILD_INNO.format(arch=arch,
+                                                 extra_args=' '.join(extra_args))
+    run_powershell(winrm_client, ps)
+    copy_latest_dist(winrm_client, '*.exe', dest_path)
+
+
+def build_wheel(winrm_client, arch: str, dest_path: pathlib.Path):
+    """Build Python wheels on a remote machine.
+
+    Using a WinRM client, remote commands are executed to build a Python wheel
+    for Mercurial.
+    """
+    print('Building Windows wheel for %s' % arch)
+    ps = get_vc_prefix(arch) + BUILD_WHEEL.format(arch=arch)
+    run_powershell(winrm_client, ps)
+    copy_latest_dist(winrm_client, '*.whl', dest_path)
+
+
+def build_wix_installer(winrm_client, arch: str, dest_path: pathlib.Path,
+                        version=None):
+    """Build the WiX installer on a remote machine.
+
+    Using a WinRM client, remote commands are executed to build a WiX installer.
+    """
+    print('Building WiX installer for %s' % arch)
+    extra_args = []
+    if version:
+        extra_args.extend(['--version', version])
+
+    ps = get_vc_prefix(arch) + BUILD_WIX.format(arch=arch,
+                                                extra_args=' '.join(extra_args))
+    run_powershell(winrm_client, ps)
+    copy_latest_dist(winrm_client, '*.msi', dest_path)
+
+
+def run_tests(winrm_client, python_version, arch, test_flags=''):
+    """Run tests on a remote Windows machine.
+
+    ``python_version`` is a ``X.Y`` string like ``2.7`` or ``3.7``.
+    ``arch`` is ``x86`` or ``x64``.
+    ``test_flags`` is a str representing extra arguments to pass to
+    ``run-tests.py``.
+    """
+    if not re.match(r'\d\.\d', python_version):
+        raise ValueError(r'python_version must be \d.\d; got %s' %
+                         python_version)
+
+    if arch not in ('x86', 'x64'):
+        raise ValueError('arch must be x86 or x64; got %s' % arch)
+
+    python_path = 'python%s-%s' % (python_version.replace('.', ''), arch)
+
+    ps = RUN_TESTS.format(
+        python_path=python_path,
+        test_flags=test_flags or '',
+    )
+
+    run_powershell(winrm_client, ps)
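+
+
+# Example (``test_flags`` is passed through to run-tests.py verbatim):
+#
+#   run_tests(winrm_client, '3.7', 'x64', test_flags='--jobs 4')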
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/automation/hgautomation/winrm.py	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,82 @@
+# winrm.py - Interact with Windows Remote Management (WinRM)
+#
+# Copyright 2019 Gregory Szorc <gregory.szorc@gmail.com>
+#
+# This software may be used and distributed according to the terms of the
+# GNU General Public License version 2 or any later version.
+
+# no-check-code because Python 3 native.
+
+import logging
+import pprint
+import time
+
+from pypsrp.client import (
+    Client,
+)
+from pypsrp.powershell import (
+    PowerShell,
+    PSInvocationState,
+    RunspacePool,
+)
+import requests.exceptions
+
+
+logger = logging.getLogger(__name__)
+
+
+def wait_for_winrm(host, username, password, timeout=120, ssl=False):
+    """Wait for the Windows Remoting (WinRM) service to become available.
+
+    Returns a ``pypsrp.client.Client`` instance.
+    """
+
+    end_time = time.time() + timeout
+
+    while True:
+        try:
+            client = Client(host, username=username, password=password,
+                            ssl=ssl, connection_timeout=5)
+            client.execute_cmd('echo "hello world"')
+            return client
+        except requests.exceptions.ConnectionError:
+            if time.time() >= end_time:
+                raise
+
+            time.sleep(1)
+
+
+def format_object(o):
+    if isinstance(o, str):
+        return o
+
+    try:
+        o = str(o)
+    except TypeError:
+        o = pprint.pformat(o.extended_properties)
+
+    return o
+
+
+def run_powershell(client, script):
+    with RunspacePool(client.wsman) as pool:
+        ps = PowerShell(pool)
+        ps.add_script(script)
+
+        ps.begin_invoke()
+
+        while ps.state == PSInvocationState.RUNNING:
+            ps.poll_invoke()
+            for o in ps.output:
+                print(format_object(o))
+
+            ps.output[:] = []
+
+        ps.end_invoke()
+
+        for o in ps.output:
+            print(format_object(o))
+
+        if ps.state == PSInvocationState.FAILED:
+            raise Exception('PowerShell execution failed: %s' %
+                            ' '.join(map(format_object, ps.streams.error)))
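A minimal end-to-end sketch of this module, with a placeholder host and credentials::

   client = wait_for_winrm('198.51.100.10', 'Administrator', 'hunter2')
   run_powershell(client, r'Get-ChildItem C:\hgdev\src\dist')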
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/automation/requirements.txt	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,119 @@
+#
+# This file is autogenerated by pip-compile
+# To update, run:
+#
+#    pip-compile -U --generate-hashes --output-file contrib/automation/requirements.txt contrib/automation/requirements.txt.in
+#
+asn1crypto==0.24.0 \
+    --hash=sha256:2f1adbb7546ed199e3c90ef23ec95c5cf3585bac7d11fb7eb562a3fe89c64e87 \
+    --hash=sha256:9d5c20441baf0cb60a4ac34cc447c6c189024b6b4c6cd7877034f4965c464e49 \
+    # via cryptography
+boto3==1.9.111 \
+    --hash=sha256:06414c75d1f62af7d04fd652b38d1e4fd3cfd6b35bad978466af88e2aaecd00d \
+    --hash=sha256:f3b77dff382374773d02411fa47ee408f4f503aeebd837fd9dc9ed8635bc5e8e
+botocore==1.12.111 \
+    --hash=sha256:6af473c52d5e3e7ff82de5334e9fee96b2d5ec2df5d78bc00cd9937e2573a7a8 \
+    --hash=sha256:9f5123c7be704b17aeacae99b5842ab17bda1f799dd29134de8c70e0a50a45d7 \
+    # via boto3, s3transfer
+certifi==2019.3.9 \
+    --hash=sha256:59b7658e26ca9c7339e00f8f4636cdfe59d34fa37b9b04f6f9e9926b3cece1a5 \
+    --hash=sha256:b26104d6835d1f5e49452a26eb2ff87fe7090b89dfcaee5ea2212697e1e1d7ae \
+    # via requests
+cffi==1.12.2 \
+    --hash=sha256:00b97afa72c233495560a0793cdc86c2571721b4271c0667addc83c417f3d90f \
+    --hash=sha256:0ba1b0c90f2124459f6966a10c03794082a2f3985cd699d7d63c4a8dae113e11 \
+    --hash=sha256:0bffb69da295a4fc3349f2ec7cbe16b8ba057b0a593a92cbe8396e535244ee9d \
+    --hash=sha256:21469a2b1082088d11ccd79dd84157ba42d940064abbfa59cf5f024c19cf4891 \
+    --hash=sha256:2e4812f7fa984bf1ab253a40f1f4391b604f7fc424a3e21f7de542a7f8f7aedf \
+    --hash=sha256:2eac2cdd07b9049dd4e68449b90d3ef1adc7c759463af5beb53a84f1db62e36c \
+    --hash=sha256:2f9089979d7456c74d21303c7851f158833d48fb265876923edcb2d0194104ed \
+    --hash=sha256:3dd13feff00bddb0bd2d650cdb7338f815c1789a91a6f68fdc00e5c5ed40329b \
+    --hash=sha256:4065c32b52f4b142f417af6f33a5024edc1336aa845b9d5a8d86071f6fcaac5a \
+    --hash=sha256:51a4ba1256e9003a3acf508e3b4f4661bebd015b8180cc31849da222426ef585 \
+    --hash=sha256:59888faac06403767c0cf8cfb3f4a777b2939b1fbd9f729299b5384f097f05ea \
+    --hash=sha256:59c87886640574d8b14910840327f5cd15954e26ed0bbd4e7cef95fa5aef218f \
+    --hash=sha256:610fc7d6db6c56a244c2701575f6851461753c60f73f2de89c79bbf1cc807f33 \
+    --hash=sha256:70aeadeecb281ea901bf4230c6222af0248c41044d6f57401a614ea59d96d145 \
+    --hash=sha256:71e1296d5e66c59cd2c0f2d72dc476d42afe02aeddc833d8e05630a0551dad7a \
+    --hash=sha256:8fc7a49b440ea752cfdf1d51a586fd08d395ff7a5d555dc69e84b1939f7ddee3 \
+    --hash=sha256:9b5c2afd2d6e3771d516045a6cfa11a8da9a60e3d128746a7fe9ab36dfe7221f \
+    --hash=sha256:9c759051ebcb244d9d55ee791259ddd158188d15adee3c152502d3b69005e6bd \
+    --hash=sha256:b4d1011fec5ec12aa7cc10c05a2f2f12dfa0adfe958e56ae38dc140614035804 \
+    --hash=sha256:b4f1d6332339ecc61275bebd1f7b674098a66fea11a00c84d1c58851e618dc0d \
+    --hash=sha256:c030cda3dc8e62b814831faa4eb93dd9a46498af8cd1d5c178c2de856972fd92 \
+    --hash=sha256:c2e1f2012e56d61390c0e668c20c4fb0ae667c44d6f6a2eeea5d7148dcd3df9f \
+    --hash=sha256:c37c77d6562074452120fc6c02ad86ec928f5710fbc435a181d69334b4de1d84 \
+    --hash=sha256:c8149780c60f8fd02752d0429246088c6c04e234b895c4a42e1ea9b4de8d27fb \
+    --hash=sha256:cbeeef1dc3c4299bd746b774f019de9e4672f7cc666c777cd5b409f0b746dac7 \
+    --hash=sha256:e113878a446c6228669144ae8a56e268c91b7f1fafae927adc4879d9849e0ea7 \
+    --hash=sha256:e21162bf941b85c0cda08224dade5def9360f53b09f9f259adb85fc7dd0e7b35 \
+    --hash=sha256:fb6934ef4744becbda3143d30c6604718871495a5e36c408431bf33d9c146889 \
+    # via cryptography
+chardet==3.0.4 \
+    --hash=sha256:84ab92ed1c4d4f16916e05906b6b75a6c0fb5db821cc65e70cbd64a3e2a5eaae \
+    --hash=sha256:fc323ffcaeaed0e0a02bf4d117757b98aed530d9ed4531e3e15460124c106691 \
+    # via requests
+cryptography==2.6.1 \
+    --hash=sha256:066f815f1fe46020877c5983a7e747ae140f517f1b09030ec098503575265ce1 \
+    --hash=sha256:210210d9df0afba9e000636e97810117dc55b7157c903a55716bb73e3ae07705 \
+    --hash=sha256:26c821cbeb683facb966045e2064303029d572a87ee69ca5a1bf54bf55f93ca6 \
+    --hash=sha256:2afb83308dc5c5255149ff7d3fb9964f7c9ee3d59b603ec18ccf5b0a8852e2b1 \
+    --hash=sha256:2db34e5c45988f36f7a08a7ab2b69638994a8923853dec2d4af121f689c66dc8 \
+    --hash=sha256:409c4653e0f719fa78febcb71ac417076ae5e20160aec7270c91d009837b9151 \
+    --hash=sha256:45a4f4cf4f4e6a55c8128f8b76b4c057027b27d4c67e3fe157fa02f27e37830d \
+    --hash=sha256:48eab46ef38faf1031e58dfcc9c3e71756a1108f4c9c966150b605d4a1a7f659 \
+    --hash=sha256:6b9e0ae298ab20d371fc26e2129fd683cfc0cfde4d157c6341722de645146537 \
+    --hash=sha256:6c4778afe50f413707f604828c1ad1ff81fadf6c110cb669579dea7e2e98a75e \
+    --hash=sha256:8c33fb99025d353c9520141f8bc989c2134a1f76bac6369cea060812f5b5c2bb \
+    --hash=sha256:9873a1760a274b620a135054b756f9f218fa61ca030e42df31b409f0fb738b6c \
+    --hash=sha256:9b069768c627f3f5623b1cbd3248c5e7e92aec62f4c98827059eed7053138cc9 \
+    --hash=sha256:9e4ce27a507e4886efbd3c32d120db5089b906979a4debf1d5939ec01b9dd6c5 \
+    --hash=sha256:acb424eaca214cb08735f1a744eceb97d014de6530c1ea23beb86d9c6f13c2ad \
+    --hash=sha256:c8181c7d77388fe26ab8418bb088b1a1ef5fde058c6926790c8a0a3d94075a4a \
+    --hash=sha256:d4afbb0840f489b60f5a580a41a1b9c3622e08ecb5eec8614d4fb4cd914c4460 \
+    --hash=sha256:d9ed28030797c00f4bc43c86bf819266c76a5ea61d006cd4078a93ebf7da6bfd \
+    --hash=sha256:e603aa7bb52e4e8ed4119a58a03b60323918467ef209e6ff9db3ac382e5cf2c6 \
+    # via pypsrp
+docutils==0.14 \
+    --hash=sha256:02aec4bd92ab067f6ff27a38a38a41173bf01bed8f89157768c1573f53e474a6 \
+    --hash=sha256:51e64ef2ebfb29cae1faa133b3710143496eca21c530f3f71424d77687764274 \
+    --hash=sha256:7a4bd47eaf6596e1295ecb11361139febe29b084a87bf005bf899f9a42edc3c6 \
+    # via botocore
+idna==2.8 \
+    --hash=sha256:c357b3f628cf53ae2c4c05627ecc484553142ca23264e593d327bcde5e9c3407 \
+    --hash=sha256:ea8b7f6188e6fa117537c3df7da9fc686d485087abf6ac197f9c46432f7e4a3c \
+    # via requests
+jmespath==0.9.4 \
+    --hash=sha256:3720a4b1bd659dd2eecad0666459b9788813e032b83e7ba58578e48254e0a0e6 \
+    --hash=sha256:bde2aef6f44302dfb30320115b17d030798de8c4110e28d5cf6cf91a7a31074c \
+    # via boto3, botocore
+ntlm-auth==1.2.0 \
+    --hash=sha256:7bc02a3fbdfee7275d3dc20fce8028ed8eb6d32364637f28be9e9ae9160c6d5c \
+    --hash=sha256:9b13eaf88f16a831637d75236a93d60c0049536715aafbf8190ba58a590b023e \
+    # via pypsrp
+pycparser==2.19 \
+    --hash=sha256:a988718abfad80b6b157acce7bf130a30876d27603738ac39f140993246b25b3 \
+    # via cffi
+pypsrp==0.3.1 \
+    --hash=sha256:309853380fe086090a03cc6662a778ee69b1cae355ae4a932859034fd76e9d0b \
+    --hash=sha256:90f946254f547dc3493cea8493c819ab87e152a755797c93aa2668678ba8ae85
+python-dateutil==2.8.0 \
+    --hash=sha256:7e6584c74aeed623791615e26efd690f29817a27c73085b78e4bad02493df2fb \
+    --hash=sha256:c89805f6f4d64db21ed966fda138f8a5ed7a4fdbc1a8ee329ce1b74e3c74da9e \
+    # via botocore
+requests==2.21.0 \
+    --hash=sha256:502a824f31acdacb3a35b6690b5fbf0bc41d63a24a45c4004352b0242707598e \
+    --hash=sha256:7bf2a778576d825600030a110f3c0e3e8edc51dfaafe1c146e39a2027784957b \
+    # via pypsrp
+s3transfer==0.2.0 \
+    --hash=sha256:7b9ad3213bff7d357f888e0fab5101b56fa1a0548ee77d121c3a3dbfbef4cb2e \
+    --hash=sha256:f23d5cb7d862b104401d9021fc82e5fa0e0cf57b7660a1331425aab0c691d021 \
+    # via boto3
+six==1.12.0 \
+    --hash=sha256:3350809f0555b11f552448330d0b52d5f24c91a322ea4a15ef22629740f3761c \
+    --hash=sha256:d16a0141ec1a18405cd4ce8b4613101da75da0e9a7aec5bdd4fa804d0e0eba73 \
+    # via cryptography, pypsrp, python-dateutil
+urllib3==1.24.1 \
+    --hash=sha256:61bf29cada3fc2fbefad4fdf059ea4bd1b4a86d2b6d15e1c7c0b582b9752fe39 \
+    --hash=sha256:de9529817c93f27c8ccbfead6985011db27bd0ddfcdb2d86f3f663385c6a9c22 \
+    # via botocore, requests
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/automation/requirements.txt.in	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,2 @@
+boto3
+pypsrp
--- a/contrib/base-revsets.txt	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/base-revsets.txt	Wed Apr 17 13:41:18 2019 -0400
@@ -47,3 +47,6 @@
 # The one below is used by rebase
 (children(ancestor(tip~5, tip)) and ::(tip~5))::
 heads(commonancestors(last(head(), 2)))
+heads(-10000:-1)
+roots(-10000:-1)
+only(max(head()), min(head()))
--- a/contrib/bdiff-torture.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/bdiff-torture.py	Wed Apr 17 13:41:18 2019 -0400
@@ -25,7 +25,7 @@
 
         try:
             test1(a, b)
-        except Exception as inst:
+        except Exception:
             reductions += 1
             tries = 0
             a = a2
--- a/contrib/check-code.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/check-code.py	Wed Apr 17 13:41:18 2019 -0400
@@ -40,6 +40,8 @@
 except ImportError:
     re2 = None
 
+import testparseutil
+
 def compilere(pat, multiline=False):
     if multiline:
         pat = '(?m)' + pat
@@ -231,8 +233,10 @@
     (r"( +)(#([^!][^\n]*\S)?)", repcomment),
 ]
 
-pypats = [
+# common patterns to check *.py
+commonpypats = [
   [
+    (r'\\$', 'Use () to wrap long lines in Python, not \\'),
     (r'^\s*def\s*\w+\s*\(.*,\s*\(',
      "tuple parameter unpacking not available in Python 3+"),
     (r'lambda\s*\(.*,.*\)',
@@ -261,7 +265,6 @@
         # a pass at the same indent level, which is bogus
         r'(?P=indent)pass[ \t\n#]'
       ), 'omit superfluous pass'),
-    (r'.{81}', "line too long"),
     (r'[^\n]\Z', "no trailing newline"),
     (r'(\S[ \t]+|^[ \t]+)\n', "trailing whitespace"),
 #    (r'^\s+[^_ \n][^_. \n]+_[^_\n]+\s*=',
@@ -299,7 +302,6 @@
      "wrong whitespace around ="),
     (r'\([^()]*( =[^=]|[^<>!=]= )',
      "no whitespace around = for named parameters"),
-    (r'raise Exception', "don't raise generic exceptions"),
     (r'raise [^,(]+, (\([^\)]+\)|[^,\(\)]+)$',
      "don't use old-style two-argument raise, use Exception(message)"),
     (r' is\s+(not\s+)?["\'0-9-]', "object comparison with literal"),
@@ -315,21 +317,12 @@
      "use opener.read() instead"),
     (r'opener\([^)]*\).write\(',
      "use opener.write() instead"),
-    (r'[\s\(](open|file)\([^)]*\)\.read\(',
-     "use util.readfile() instead"),
-    (r'[\s\(](open|file)\([^)]*\)\.write\(',
-     "use util.writefile() instead"),
-    (r'^[\s\(]*(open(er)?|file)\([^)]*\)(?!\.close\(\))',
-     "always assign an opened file to a variable, and close it afterwards"),
-    (r'[\s\(](open|file)\([^)]*\)\.(?!close\(\))',
-     "always assign an opened file to a variable, and close it afterwards"),
     (r'(?i)descend[e]nt', "the proper spelling is descendAnt"),
     (r'\.debug\(\_', "don't mark debug messages for translation"),
     (r'\.strip\(\)\.split\(\)', "no need to strip before splitting"),
     (r'^\s*except\s*:', "naked except clause", r'#.*re-raises'),
     (r'^\s*except\s([^\(,]+|\([^\)]+\))\s*,',
      'legacy exception syntax; use "as" instead of ","'),
-    (r':\n(    )*( ){1,3}[^ ]', "must indent 4 spaces"),
     (r'release\(.*wlock, .*lock\)', "wrong lock release order"),
     (r'\bdef\s+__bool__\b', "__bool__ should be __nonzero__ in Python 2"),
     (r'os\.path\.join\(.*, *(""|\'\')\)',
@@ -339,7 +332,6 @@
     (r'def.*[( ]\w+=\{\}', "don't use mutable default arguments"),
     (r'\butil\.Abort\b', "directly use error.Abort"),
     (r'^@(\w*\.)?cachefunc', "module-level @cachefunc is risky, please avoid"),
-    (r'^import atexit', "don't use atexit, use ui.atexit"),
     (r'^import Queue', "don't use Queue, use pycompat.queue.Queue + "
                        "pycompat.queue.Empty"),
     (r'^import cStringIO', "don't use cStringIO.StringIO, use util.stringio"),
@@ -358,6 +350,34 @@
      "don't convert rev to node before passing to revision(nodeorrev)"),
     (r'platform\.system\(\)', "don't use platform.system(), use pycompat"),
 
+  ],
+  # warnings
+  [
+  ]
+]
+
+# patterns to check normal *.py files
+pypats = [
+  [
+    # Ideally, these would live in "commonpypats" for consistency of
+    # coding rules across the Mercurial source tree. On the other
+    # hand, they are not strictly required for Python code fragments
+    # embedded in test scripts, and fixing the test scripts to satisfy
+    # these patterns would require many changes for little benefit.
+    (r'.{81}', "line too long"),
+    (r'raise Exception', "don't raise generic exceptions"),
+    (r'[\s\(](open|file)\([^)]*\)\.read\(',
+     "use util.readfile() instead"),
+    (r'[\s\(](open|file)\([^)]*\)\.write\(',
+     "use util.writefile() instead"),
+    (r'^[\s\(]*(open(er)?|file)\([^)]*\)(?!\.close\(\))',
+     "always assign an opened file to a variable, and close it afterwards"),
+    (r'[\s\(](open|file)\([^)]*\)\.(?!close\(\))',
+     "always assign an opened file to a variable, and close it afterwards"),
+    (r':\n(    )*( ){1,3}[^ ]', "must indent 4 spaces"),
+    (r'^import atexit', "don't use atexit, use ui.atexit"),
+
     # rules depending on implementation of repquote()
     (r' x+[xpqo%APM][\'"]\n\s+[\'"]x',
      'string join across lines with no space'),
@@ -376,21 +396,35 @@
            # because _preparepats forcibly adds "\n" into [^...],
            # even though this regexp wants match it against "\n")''',
      "missing _() in ui message (use () to hide false-positives)"),
-  ],
+  ] + commonpypats[0],
   # warnings
   [
     # rules depending on implementation of repquote()
     (r'(^| )pp +xxxxqq[ \n][^\n]', "add two newlines after '.. note::'"),
-  ]
+  ] + commonpypats[1]
 ]
 
-pyfilters = [
+# patterns to check Python code embedded in test scripts
+embeddedpypats = [
+  [
+  ] + commonpypats[0],
+  # warnings
+  [
+  ] + commonpypats[1]
+]
+
+# common filters to convert *.py
+commonpyfilters = [
     (r"""(?msx)(?P<comment>\#.*?$)|
          ((?P<quote>('''|\"\"\"|(?<!')'(?!')|(?<!")"(?!")))
           (?P<text>(([^\\]|\\.)*?))
           (?P=quote))""", reppython),
 ]
 
+# filters to convert normal *.py files
+pyfilters = [
+] + commonpyfilters
+
 # non-filter patterns
 pynfpats = [
     [
@@ -403,6 +437,10 @@
     [],
 ]
 
+# filters to convert Python code embedded in test scripts
+embeddedpyfilters = [
+] + commonpyfilters
+
 # extension non-filter patterns
 pyextnfpats = [
     [(r'^"""\n?[A-Z]', "don't capitalize docstring title")],
@@ -414,7 +452,7 @@
 
 txtpats = [
   [
-    ('\s$', 'trailing whitespace'),
+    (r'\s$', 'trailing whitespace'),
     ('.. note::[ \n][^\n]', 'add two newlines after note::')
   ],
   []
@@ -537,9 +575,17 @@
      allfilesfilters, allfilespats),
 ]
 
+# (desc,
+#  func to pick up embedded code fragments,
+#  list of patterns to convert target files,
+#  list of patterns to detect errors/warnings)
+embeddedchecks = [
+    ('embedded python',
+     testparseutil.pyembedded, embeddedpyfilters, embeddedpypats)
+]
+
 def _preparepats():
-    for c in checks:
-        failandwarn = c[-1]
+    def preparefailandwarn(failandwarn):
         for pats in failandwarn:
             for i, pseq in enumerate(pats):
                 # fix-up regexes for multi-line searches
@@ -553,10 +599,19 @@
                 p = re.sub(r'(?<!\\)\[\^', r'[^\\n', p)
 
                 pats[i] = (re.compile(p, re.MULTILINE),) + pseq[1:]
-        filters = c[3]
+
+    def preparefilters(filters):
         for i, flt in enumerate(filters):
             filters[i] = re.compile(flt[0]), flt[1]
 
+    for cs in (checks, embeddedchecks):
+        for c in cs:
+            failandwarn = c[-1]
+            preparefailandwarn(failandwarn)
+
+            filters = c[-2]
+            preparefilters(filters)
+
 class norepeatlogger(object):
     def __init__(self):
         self._lastseen = None
@@ -604,13 +659,12 @@
 
     return True if no error is found, False otherwise.
     """
-    blamecache = None
     result = True
 
     try:
         with opentext(f) as fp:
             try:
-                pre = post = fp.read()
+                pre = fp.read()
             except UnicodeDecodeError as e:
                 print("%s while reading %s" % (e, f))
                 return result
@@ -618,11 +672,12 @@
         print("Skipping %s, %s" % (f, str(e).split(':', 1)[0]))
         return result
 
+    # context information shared within a single checkfile() invocation
+    context = {'blamecache': None}
+
     for name, match, magic, filters, pats in checks:
-        post = pre # discard filtering result of previous check
         if debug:
             print(name, f)
-        fc = 0
         if not (re.match(match, f) or (magic and re.search(magic, pre))):
             if debug:
                 print("Skipping %s for %s it doesn't match %s" % (
@@ -637,6 +692,74 @@
             # tests/test-check-code.t
             print("Skipping %s it has no-che?k-code (glob)" % f)
             return "Skip" # skip checking this file
+
+        fc = _checkfiledata(name, f, pre, filters, pats, context,
+                            logfunc, maxerr, warnings, blame, debug, lineno)
+        if fc:
+            result = False
+
+    if f.endswith('.t') and "no-" "check-code" not in pre:
+        if debug:
+            print("Checking embedded code in %s" % (f))
+
+        prelines = pre.splitlines()
+        embeddederrors = []
+        for name, embedded, filters, pats in embeddedchecks:
+            # "reset curmax at each repetition" treats maxerr as "max
+            # nubmer of errors in an actual file per entry of
+            # (embedded)checks"
+            curmaxerr = maxerr
+
+            for found in embedded(f, prelines, embeddederrors):
+                filename, starts, ends, code = found
+                fc = _checkfiledata(name, f, code, filters, pats, context,
+                                    logfunc, curmaxerr, warnings, blame, debug,
+                                    lineno, offset=starts - 1)
+                if fc:
+                    result = False
+                    if curmaxerr:
+                        if fc >= curmaxerr:
+                            break
+                        curmaxerr -= fc
+
+    return result
+
+def _checkfiledata(name, f, filedata, filters, pats, context,
+                   logfunc, maxerr, warnings, blame, debug, lineno,
+                   offset=None):
+    """Execute actual error check for file data
+
+    :name: of the checking category
+    :f: filepath
+    :filedata: content of a file
+    :filters: to be applied before checking
+    :pats: to detect errors
+    :context: a dict of information shared within a single checkfile()
+              invocation. Valid keys: 'blamecache'.
+    :logfunc: function used to report error
+              logfunc(filename, linenumber, linecontent, errormessage)
+    :maxerr: number of errors to display before aborting, or False to
+             report all errors
+    :warnings: whether warning level checks should be applied
+    :blame: whether blame information should be displayed at error reporting
+    :debug: whether debug information should be displayed
+    :lineno: whether lineno should be displayed at error reporting
+    :offset: line number offset of 'filedata' in 'f' for checking
+             an embedded code fragment, or None (offset=0 is different
+             from offset=None)
+
+    returns number of detected errors.
+    """
+    blamecache = context['blamecache']
+    if offset is None:
+        lineoffset = 0
+    else:
+        lineoffset = offset
+
+    fc = 0
+    pre = post = filedata
+
+    if True: # TODO: get rid of this redundant 'if' block
         for p, r in filters:
             post = re.sub(p, r, post)
         nerrs = len(pats[0]) # nerr elements are errors
@@ -679,20 +802,30 @@
                 if ignore and re.search(ignore, l, re.MULTILINE):
                     if debug:
                         print("Skipping %s for %s:%s (ignore pattern)" % (
-                            name, f, n))
+                            name, f, (n + lineoffset)))
                     continue
                 bd = ""
                 if blame:
                     bd = 'working directory'
-                    if not blamecache:
+                    if blamecache is None:
                         blamecache = getblame(f)
-                    if n < len(blamecache):
-                        bl, bu, br = blamecache[n]
-                        if bl == l:
+                        context['blamecache'] = blamecache
+                    if (n + lineoffset) < len(blamecache):
+                        bl, bu, br = blamecache[(n + lineoffset)]
+                        if offset is None and bl == l:
                             bd = '%s@%s' % (bu, br)
+                        elif offset is not None and bl.endswith(l):
+                            # "offset is not None" means "checking
+                            # embedded code fragment". In this case,
+                            # "l" does not have information about the
+                            # beginning of an *original* line in the
+                            # file (e.g. '  > ').
+                            # Therefore, use "str.endswith()" and mark
+                            # the result with "maybe", since this
+                            # examination is a little loose.
+                            bd = '%s@%s, maybe' % (bu, br)
 
-                errors.append((f, lineno and n + 1, l, msg, bd))
-                result = False
+                errors.append((f, lineno and (n + lineoffset + 1), l, msg, bd))
 
         errors.sort()
         for e in errors:
@@ -702,7 +835,7 @@
                 print(" (too many errors, giving up)")
                 break
 
-    return result
+    return fc
 
 def main():
     parser = optparse.OptionParser("%prog [options] [files | -]")
--- a/contrib/check-commit	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/check-commit	Wed Apr 17 13:41:18 2019 -0400
@@ -47,7 +47,7 @@
      "adds a function with foo_bar naming"),
 ]
 
-word = re.compile('\S')
+word = re.compile(r'\S')
 def nonempty(first, second):
     if word.search(first):
         return first
--- a/contrib/check-config.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/check-config.py	Wed Apr 17 13:41:18 2019 -0400
@@ -25,7 +25,7 @@
         (?:default=)?(?P<default>\S+?))?
     \)''', re.VERBOSE | re.MULTILINE)
 
-configwithre = re.compile(b'''
+configwithre = re.compile(br'''
     ui\.config(?P<ctype>with)\(
         # First argument is callback function. This doesn't parse robustly
         # if it is e.g. a function call.
@@ -61,10 +61,10 @@
             linenum += 1
 
             # check topic-like bits
-            m = re.match(b'\s*``(\S+)``', l)
+            m = re.match(br'\s*``(\S+)``', l)
             if m:
                 prevname = m.group(1)
-            if re.match(b'^\s*-+$', l):
+            if re.match(br'^\s*-+$', l):
                 sect = prevname
                 prevname = b''
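As a quick illustration of why the ``br''`` prefixes matter (not part of the patch): without ``r``, Python itself interprets the backslashes before ``re`` ever sees the pattern, and unknown escapes such as ``\s`` are deprecated as of Python 3.6::

   import re
   pat = re.compile(br'^\s*-+$')   # backslashes reach re intact
   assert pat.match(b'  ----')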
 
--- a/contrib/check-py3-compat.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/check-py3-compat.py	Wed Apr 17 13:41:18 2019 -0400
@@ -14,6 +14,7 @@
 import os
 import sys
 import traceback
+import warnings
 
 def check_compat_py2(f):
     """Check Python 3 compatibility for a file with Python 2"""
@@ -45,7 +46,7 @@
         content = fh.read()
 
     try:
-        ast.parse(content)
+        ast.parse(content, filename=f)
     except SyntaxError as e:
         print('%s: invalid syntax: %s' % (f, e))
         return
@@ -91,6 +92,11 @@
         fn = check_compat_py3
 
     for f in sys.argv[1:]:
-        fn(f)
+        with warnings.catch_warnings(record=True) as warns:
+            fn(f)
+
+        for w in warns:
+            print(warnings.formatwarning(w.message, w.category,
+                                         w.filename, w.lineno).rstrip())
 
     sys.exit(0)
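The recording idiom used above, reduced to a self-contained sketch; the ``simplefilter`` call is added here so the demo always captures something, whereas the patch relies on the default filters::

   import warnings

   with warnings.catch_warnings(record=True) as warns:
       warnings.simplefilter('always')
       warnings.warn('example', DeprecationWarning)

   for w in warns:
       print(warnings.formatwarning(w.message, w.category,
                                    w.filename, w.lineno).rstrip())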
--- a/contrib/chg/hgclient.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/chg/hgclient.c	Wed Apr 17 13:41:18 2019 -0400
@@ -84,8 +84,9 @@
 
 static void enlargecontext(context_t *ctx, size_t newsize)
 {
-	if (newsize <= ctx->maxdatasize)
+	if (newsize <= ctx->maxdatasize) {
 		return;
+	}
 
 	newsize = defaultdatasize *
 	          ((newsize + defaultdatasize - 1) / defaultdatasize);
@@ -117,22 +118,25 @@
 
 	uint32_t datasize_n;
 	rsize = recv(hgc->sockfd, &datasize_n, sizeof(datasize_n), 0);
-	if (rsize != sizeof(datasize_n))
+	if (rsize != sizeof(datasize_n)) {
 		abortmsg("failed to read data size");
+	}
 
 	/* datasize denotes the maximum size to write if input request */
 	hgc->ctx.datasize = ntohl(datasize_n);
 	enlargecontext(&hgc->ctx, hgc->ctx.datasize);
 
-	if (isupper(hgc->ctx.ch) && hgc->ctx.ch != 'S')
+	if (isupper(hgc->ctx.ch) && hgc->ctx.ch != 'S') {
 		return; /* assumes input request */
+	}
 
 	size_t cursize = 0;
 	while (cursize < hgc->ctx.datasize) {
 		rsize = recv(hgc->sockfd, hgc->ctx.data + cursize,
 		             hgc->ctx.datasize - cursize, 0);
-		if (rsize < 1)
+		if (rsize < 1) {
 			abortmsg("failed to read data block");
+		}
 		cursize += rsize;
 	}
 }
@@ -143,8 +147,9 @@
 	const char *const endp = p + datasize;
 	while (p < endp) {
 		ssize_t r = send(sockfd, p, endp - p, 0);
-		if (r < 0)
+		if (r < 0) {
 			abortmsgerrno("cannot communicate");
+		}
 		p += r;
 	}
 }
@@ -186,8 +191,9 @@
 		ctx->datasize += n;
 	}
 
-	if (ctx->datasize > 0)
+	if (ctx->datasize > 0) {
 		--ctx->datasize; /* strip last '\0' */
+	}
 }
 
 /* Extract '\0'-separated list of args to new buffer, terminated by NULL */
@@ -205,8 +211,9 @@
 		args[nargs] = s;
 		nargs++;
 		s = memchr(s, '\0', e - s);
-		if (!s)
+		if (!s) {
 			break;
+		}
 		s++;
 	}
 	args[nargs] = NULL;
@@ -225,8 +232,9 @@
 static void handlereadlinerequest(hgclient_t *hgc)
 {
 	context_t *ctx = &hgc->ctx;
-	if (!fgets(ctx->data, ctx->datasize, stdin))
+	if (!fgets(ctx->data, ctx->datasize, stdin)) {
 		ctx->data[0] = '\0';
+	}
 	ctx->datasize = strlen(ctx->data);
 	writeblock(hgc);
 }
@@ -239,8 +247,9 @@
 	ctx->data[ctx->datasize] = '\0'; /* terminate last string */
 
 	const char **args = unpackcmdargsnul(ctx);
-	if (!args[0] || !args[1] || !args[2])
+	if (!args[0] || !args[1] || !args[2]) {
 		abortmsg("missing type or command or cwd in system request");
+	}
 	if (strcmp(args[0], "system") == 0) {
 		debugmsg("run '%s' at '%s'", args[1], args[2]);
 		int32_t r = runshellcmd(args[1], args + 3, args[2]);
@@ -252,8 +261,9 @@
 		writeblock(hgc);
 	} else if (strcmp(args[0], "pager") == 0) {
 		setuppager(args[1], args + 3);
-		if (hgc->capflags & CAP_ATTACHIO)
+		if (hgc->capflags & CAP_ATTACHIO) {
 			attachio(hgc);
+		}
 		/* unblock the server */
 		static const char emptycmd[] = "\n";
 		sendall(hgc->sockfd, emptycmd, sizeof(emptycmd) - 1);
@@ -296,9 +306,10 @@
 			handlesystemrequest(hgc);
 			break;
 		default:
-			if (isupper(ctx->ch))
+			if (isupper(ctx->ch)) {
 				abortmsg("cannot handle response (ch = %c)",
 				         ctx->ch);
+			}
 		}
 	}
 }
@@ -308,8 +319,9 @@
 	unsigned int flags = 0;
 	while (s < e) {
 		const char *t = strchr(s, ' ');
-		if (!t || t > e)
+		if (!t || t > e) {
 			t = e;
+		}
 		const cappair_t *cap;
 		for (cap = captable; cap->flag; ++cap) {
 			size_t n = t - s;
@@ -346,11 +358,13 @@
 	const char *const dataend = ctx->data + ctx->datasize;
 	while (s < dataend) {
 		const char *t = strchr(s, ':');
-		if (!t || t[1] != ' ')
+		if (!t || t[1] != ' ') {
 			break;
+		}
 		const char *u = strchr(t + 2, '\n');
-		if (!u)
+		if (!u) {
 			u = dataend;
+		}
 		if (strncmp(s, "capabilities:", t - s + 1) == 0) {
 			hgc->capflags = parsecapabilities(t + 2, u);
 		} else if (strncmp(s, "pgid:", t - s + 1) == 0) {
@@ -367,8 +381,9 @@
 {
 	int r = snprintf(hgc->ctx.data, hgc->ctx.maxdatasize, "chg[worker/%d]",
 	                 (int)getpid());
-	if (r < 0 || (size_t)r >= hgc->ctx.maxdatasize)
+	if (r < 0 || (size_t)r >= hgc->ctx.maxdatasize) {
 		abortmsg("insufficient buffer to write procname (r = %d)", r);
+	}
 	hgc->ctx.datasize = (size_t)r;
 	writeblockrequest(hgc, "setprocname");
 }
@@ -380,8 +395,9 @@
 	sendall(hgc->sockfd, chcmd, sizeof(chcmd) - 1);
 	readchannel(hgc);
 	context_t *ctx = &hgc->ctx;
-	if (ctx->ch != 'I')
+	if (ctx->ch != 'I') {
 		abortmsg("unexpected response for attachio (ch = %c)", ctx->ch);
+	}
 
 	static const int fds[3] = {STDIN_FILENO, STDOUT_FILENO, STDERR_FILENO};
 	struct msghdr msgh;
@@ -399,23 +415,27 @@
 	memcpy(CMSG_DATA(cmsg), fds, sizeof(fds));
 	msgh.msg_controllen = cmsg->cmsg_len;
 	ssize_t r = sendmsg(hgc->sockfd, &msgh, 0);
-	if (r < 0)
+	if (r < 0) {
 		abortmsgerrno("sendmsg failed");
+	}
 
 	handleresponse(hgc);
 	int32_t n;
-	if (ctx->datasize != sizeof(n))
+	if (ctx->datasize != sizeof(n)) {
 		abortmsg("unexpected size of attachio result");
+	}
 	memcpy(&n, ctx->data, sizeof(n));
 	n = ntohl(n);
-	if (n != sizeof(fds) / sizeof(fds[0]))
+	if (n != sizeof(fds) / sizeof(fds[0])) {
 		abortmsg("failed to send fds (n = %d)", n);
+	}
 }
 
 static void chdirtocwd(hgclient_t *hgc)
 {
-	if (!getcwd(hgc->ctx.data, hgc->ctx.maxdatasize))
+	if (!getcwd(hgc->ctx.data, hgc->ctx.maxdatasize)) {
 		abortmsgerrno("failed to getcwd");
+	}
 	hgc->ctx.datasize = strlen(hgc->ctx.data);
 	writeblockrequest(hgc, "chdir");
 }
@@ -440,8 +460,9 @@
 hgclient_t *hgc_open(const char *sockname)
 {
 	int fd = socket(AF_UNIX, SOCK_STREAM, 0);
-	if (fd < 0)
+	if (fd < 0) {
 		abortmsgerrno("cannot create socket");
+	}
 
 	/* don't keep fd on fork(), so that it can be closed when the parent
 	 * process get terminated. */
@@ -456,34 +477,39 @@
 	{
 		const char *split = strrchr(sockname, '/');
 		if (split && split != sockname) {
-			if (split[1] == '\0')
+			if (split[1] == '\0') {
 				abortmsg("sockname cannot end with a slash");
+			}
 			size_t len = split - sockname;
 			char sockdir[len + 1];
 			memcpy(sockdir, sockname, len);
 			sockdir[len] = '\0';
 
 			bakfd = open(".", O_DIRECTORY);
-			if (bakfd == -1)
+			if (bakfd == -1) {
 				abortmsgerrno("cannot open cwd");
+			}
 
 			int r = chdir(sockdir);
-			if (r != 0)
+			if (r != 0) {
 				abortmsgerrno("cannot chdir %s", sockdir);
+			}
 
 			basename = split + 1;
 		}
 	}
-	if (strlen(basename) >= sizeof(addr.sun_path))
+	if (strlen(basename) >= sizeof(addr.sun_path)) {
 		abortmsg("sockname is too long: %s", basename);
+	}
 	strncpy(addr.sun_path, basename, sizeof(addr.sun_path));
 	addr.sun_path[sizeof(addr.sun_path) - 1] = '\0';
 
 	/* real connect */
 	int r = connect(fd, (struct sockaddr *)&addr, sizeof(addr));
 	if (r < 0) {
-		if (errno != ENOENT && errno != ECONNREFUSED)
+		if (errno != ENOENT && errno != ECONNREFUSED) {
 			abortmsgerrno("cannot connect to %s", sockname);
+		}
 	}
 	if (bakfd != -1) {
 		fchdirx(bakfd);
@@ -501,16 +527,21 @@
 	initcontext(&hgc->ctx);
 
 	readhello(hgc);
-	if (!(hgc->capflags & CAP_RUNCOMMAND))
+	if (!(hgc->capflags & CAP_RUNCOMMAND)) {
 		abortmsg("insufficient capability: runcommand");
-	if (hgc->capflags & CAP_SETPROCNAME)
+	}
+	if (hgc->capflags & CAP_SETPROCNAME) {
 		updateprocname(hgc);
-	if (hgc->capflags & CAP_ATTACHIO)
+	}
+	if (hgc->capflags & CAP_ATTACHIO) {
 		attachio(hgc);
-	if (hgc->capflags & CAP_CHDIR)
+	}
+	if (hgc->capflags & CAP_CHDIR) {
 		chdirtocwd(hgc);
-	if (hgc->capflags & CAP_SETUMASK2)
+	}
+	if (hgc->capflags & CAP_SETUMASK2) {
 		forwardumask(hgc);
+	}
 
 	return hgc;
 }
@@ -555,16 +586,18 @@
                           size_t argsize)
 {
 	assert(hgc);
-	if (!(hgc->capflags & CAP_VALIDATE))
+	if (!(hgc->capflags & CAP_VALIDATE)) {
 		return NULL;
+	}
 
 	packcmdargs(&hgc->ctx, args, argsize);
 	writeblockrequest(hgc, "validate");
 	handleresponse(hgc);
 
 	/* the server returns '\0' if it can handle our request */
-	if (hgc->ctx.datasize <= 1)
+	if (hgc->ctx.datasize <= 1) {
 		return NULL;
+	}
 
 	/* make sure the buffer is '\0' terminated */
 	enlargecontext(&hgc->ctx, hgc->ctx.datasize + 1);
@@ -599,8 +632,9 @@
 void hgc_attachio(hgclient_t *hgc)
 {
 	assert(hgc);
-	if (!(hgc->capflags & CAP_ATTACHIO))
+	if (!(hgc->capflags & CAP_ATTACHIO)) {
 		return;
+	}
 	attachio(hgc);
 }
 
@@ -613,8 +647,9 @@
 void hgc_setenv(hgclient_t *hgc, const char *const envp[])
 {
 	assert(hgc && envp);
-	if (!(hgc->capflags & CAP_SETENV))
+	if (!(hgc->capflags & CAP_SETENV)) {
 		return;
+	}
 	packcmdargs(&hgc->ctx, envp, /*argsize*/ -1);
 	writeblockrequest(hgc, "setenv");
 }
--- a/contrib/chg/procutil.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/chg/procutil.c	Wed Apr 17 13:41:18 2019 -0400
@@ -25,8 +25,9 @@
 static void forwardsignal(int sig)
 {
 	assert(peerpid > 0);
-	if (kill(peerpid, sig) < 0)
+	if (kill(peerpid, sig) < 0) {
 		abortmsgerrno("cannot kill %d", peerpid);
+	}
 	debugmsg("forward signal %d", sig);
 }
 
@@ -34,8 +35,9 @@
 {
 	/* prefer kill(-pgid, sig), fallback to pid if pgid is invalid */
 	pid_t killpid = peerpgid > 1 ? -peerpgid : peerpid;
-	if (kill(killpid, sig) < 0)
+	if (kill(killpid, sig) < 0) {
 		abortmsgerrno("cannot kill %d", killpid);
+	}
 	debugmsg("forward signal %d to %d", sig, killpid);
 }
 
@@ -43,28 +45,36 @@
 {
 	sigset_t unblockset, oldset;
 	struct sigaction sa, oldsa;
-	if (sigemptyset(&unblockset) < 0)
+	if (sigemptyset(&unblockset) < 0) {
 		goto error;
-	if (sigaddset(&unblockset, sig) < 0)
+	}
+	if (sigaddset(&unblockset, sig) < 0) {
 		goto error;
+	}
 	memset(&sa, 0, sizeof(sa));
 	sa.sa_handler = SIG_DFL;
 	sa.sa_flags = SA_RESTART;
-	if (sigemptyset(&sa.sa_mask) < 0)
+	if (sigemptyset(&sa.sa_mask) < 0) {
 		goto error;
+	}
 
 	forwardsignal(sig);
-	if (raise(sig) < 0) /* resend to self */
+	if (raise(sig) < 0) { /* resend to self */
 		goto error;
-	if (sigaction(sig, &sa, &oldsa) < 0)
+	}
+	if (sigaction(sig, &sa, &oldsa) < 0) {
 		goto error;
-	if (sigprocmask(SIG_UNBLOCK, &unblockset, &oldset) < 0)
+	}
+	if (sigprocmask(SIG_UNBLOCK, &unblockset, &oldset) < 0) {
 		goto error;
+	}
 	/* resent signal will be handled before sigprocmask() returns */
-	if (sigprocmask(SIG_SETMASK, &oldset, NULL) < 0)
+	if (sigprocmask(SIG_SETMASK, &oldset, NULL) < 0) {
 		goto error;
-	if (sigaction(sig, &oldsa, NULL) < 0)
+	}
+	if (sigaction(sig, &oldsa, NULL) < 0) {
 		goto error;
+	}
 	return;
 
 error:
@@ -73,19 +83,22 @@
 
 static void handlechildsignal(int sig UNUSED_)
 {
-	if (peerpid == 0 || pagerpid == 0)
+	if (peerpid == 0 || pagerpid == 0) {
 		return;
+	}
 	/* if pager exits, notify the server with SIGPIPE immediately.
 	 * otherwise the server won't get SIGPIPE if it does not write
 	 * anything. (issue5278) */
-	if (waitpid(pagerpid, NULL, WNOHANG) == pagerpid)
+	if (waitpid(pagerpid, NULL, WNOHANG) == pagerpid) {
 		kill(peerpid, SIGPIPE);
+	}
 }
 
 void setupsignalhandler(pid_t pid, pid_t pgid)
 {
-	if (pid <= 0)
+	if (pid <= 0) {
 		return;
+	}
 	peerpid = pid;
 	peerpgid = (pgid <= 1 ? 0 : pgid);
 
@@ -98,42 +111,52 @@
 	 * - SIGINT: usually generated by the terminal */
 	sa.sa_handler = forwardsignaltogroup;
 	sa.sa_flags = SA_RESTART;
-	if (sigemptyset(&sa.sa_mask) < 0)
+	if (sigemptyset(&sa.sa_mask) < 0) {
+		goto error;
+	}
+	if (sigaction(SIGHUP, &sa, NULL) < 0) {
 		goto error;
-	if (sigaction(SIGHUP, &sa, NULL) < 0)
+	}
+	if (sigaction(SIGINT, &sa, NULL) < 0) {
 		goto error;
-	if (sigaction(SIGINT, &sa, NULL) < 0)
-		goto error;
+	}
 
 	/* terminate frontend by double SIGTERM in case of server freeze */
 	sa.sa_handler = forwardsignal;
 	sa.sa_flags |= SA_RESETHAND;
-	if (sigaction(SIGTERM, &sa, NULL) < 0)
+	if (sigaction(SIGTERM, &sa, NULL) < 0) {
 		goto error;
+	}
 
 	/* notify the worker about window resize events */
 	sa.sa_flags = SA_RESTART;
-	if (sigaction(SIGWINCH, &sa, NULL) < 0)
+	if (sigaction(SIGWINCH, &sa, NULL) < 0) {
 		goto error;
+	}
 	/* forward user-defined signals */
-	if (sigaction(SIGUSR1, &sa, NULL) < 0)
+	if (sigaction(SIGUSR1, &sa, NULL) < 0) {
 		goto error;
-	if (sigaction(SIGUSR2, &sa, NULL) < 0)
+	}
+	if (sigaction(SIGUSR2, &sa, NULL) < 0) {
 		goto error;
+	}
 	/* propagate job control requests to worker */
 	sa.sa_handler = forwardsignal;
 	sa.sa_flags = SA_RESTART;
-	if (sigaction(SIGCONT, &sa, NULL) < 0)
+	if (sigaction(SIGCONT, &sa, NULL) < 0) {
 		goto error;
+	}
 	sa.sa_handler = handlestopsignal;
 	sa.sa_flags = SA_RESTART;
-	if (sigaction(SIGTSTP, &sa, NULL) < 0)
+	if (sigaction(SIGTSTP, &sa, NULL) < 0) {
 		goto error;
+	}
 	/* get notified when pager exits */
 	sa.sa_handler = handlechildsignal;
 	sa.sa_flags = SA_RESTART;
-	if (sigaction(SIGCHLD, &sa, NULL) < 0)
+	if (sigaction(SIGCHLD, &sa, NULL) < 0) {
 		goto error;
+	}
 
 	return;
 
@@ -147,26 +170,34 @@
 	memset(&sa, 0, sizeof(sa));
 	sa.sa_handler = SIG_DFL;
 	sa.sa_flags = SA_RESTART;
-	if (sigemptyset(&sa.sa_mask) < 0)
+	if (sigemptyset(&sa.sa_mask) < 0) {
 		goto error;
+	}
 
-	if (sigaction(SIGHUP, &sa, NULL) < 0)
+	if (sigaction(SIGHUP, &sa, NULL) < 0) {
 		goto error;
-	if (sigaction(SIGTERM, &sa, NULL) < 0)
+	}
+	if (sigaction(SIGTERM, &sa, NULL) < 0) {
 		goto error;
-	if (sigaction(SIGWINCH, &sa, NULL) < 0)
+	}
+	if (sigaction(SIGWINCH, &sa, NULL) < 0) {
 		goto error;
-	if (sigaction(SIGCONT, &sa, NULL) < 0)
+	}
+	if (sigaction(SIGCONT, &sa, NULL) < 0) {
 		goto error;
-	if (sigaction(SIGTSTP, &sa, NULL) < 0)
+	}
+	if (sigaction(SIGTSTP, &sa, NULL) < 0) {
 		goto error;
-	if (sigaction(SIGCHLD, &sa, NULL) < 0)
+	}
+	if (sigaction(SIGCHLD, &sa, NULL) < 0) {
 		goto error;
+	}
 
 	/* ignore Ctrl+C while shutting down to make pager exits cleanly */
 	sa.sa_handler = SIG_IGN;
-	if (sigaction(SIGINT, &sa, NULL) < 0)
+	if (sigaction(SIGINT, &sa, NULL) < 0) {
 		goto error;
+	}
 
 	peerpid = 0;
 	return;
@@ -180,22 +211,27 @@
 pid_t setuppager(const char *pagercmd, const char *envp[])
 {
 	assert(pagerpid == 0);
-	if (!pagercmd)
+	if (!pagercmd) {
 		return 0;
+	}
 
 	int pipefds[2];
-	if (pipe(pipefds) < 0)
+	if (pipe(pipefds) < 0) {
 		return 0;
+	}
 	pid_t pid = fork();
-	if (pid < 0)
+	if (pid < 0) {
 		goto error;
+	}
 	if (pid > 0) {
 		close(pipefds[0]);
-		if (dup2(pipefds[1], fileno(stdout)) < 0)
+		if (dup2(pipefds[1], fileno(stdout)) < 0) {
 			goto error;
+		}
 		if (isatty(fileno(stderr))) {
-			if (dup2(pipefds[1], fileno(stderr)) < 0)
+			if (dup2(pipefds[1], fileno(stderr)) < 0) {
 				goto error;
+			}
 		}
 		close(pipefds[1]);
 		pagerpid = pid;
@@ -222,16 +258,18 @@
 
 void waitpager(void)
 {
-	if (pagerpid == 0)
+	if (pagerpid == 0) {
 		return;
+	}
 
 	/* close output streams to notify the pager its input ends */
 	fclose(stdout);
 	fclose(stderr);
 	while (1) {
 		pid_t ret = waitpid(pagerpid, NULL, 0);
-		if (ret == -1 && errno == EINTR)
+		if (ret == -1 && errno == EINTR) {
 			continue;
+		}
 		break;
 	}
 }
--- a/contrib/chg/util.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/chg/util.c	Wed Apr 17 13:41:18 2019 -0400
@@ -25,8 +25,9 @@
 
 static inline void fsetcolor(FILE *fp, const char *code)
 {
-	if (!colorenabled)
+	if (!colorenabled) {
 		return;
+	}
 	fprintf(fp, "\033[%sm", code);
 }
 
@@ -35,8 +36,9 @@
 	fsetcolor(stderr, "1;31");
 	fputs("chg: abort: ", stderr);
 	vfprintf(stderr, fmt, args);
-	if (no != 0)
+	if (no != 0) {
 		fprintf(stderr, " (errno = %d, %s)", no, strerror(no));
+	}
 	fsetcolor(stderr, "");
 	fputc('\n', stderr);
 	exit(255);
@@ -82,8 +84,9 @@
 
 void debugmsg(const char *fmt, ...)
 {
-	if (!debugmsgenabled)
+	if (!debugmsgenabled) {
 		return;
+	}
 
 	va_list args;
 	va_start(args, fmt);
@@ -98,32 +101,37 @@
 void fchdirx(int dirfd)
 {
 	int r = fchdir(dirfd);
-	if (r == -1)
+	if (r == -1) {
 		abortmsgerrno("failed to fchdir");
+	}
 }
 
 void fsetcloexec(int fd)
 {
 	int flags = fcntl(fd, F_GETFD);
-	if (flags < 0)
+	if (flags < 0) {
 		abortmsgerrno("cannot get flags of fd %d", fd);
-	if (fcntl(fd, F_SETFD, flags | FD_CLOEXEC) < 0)
+	}
+	if (fcntl(fd, F_SETFD, flags | FD_CLOEXEC) < 0) {
 		abortmsgerrno("cannot set flags of fd %d", fd);
+	}
 }
 
 void *mallocx(size_t size)
 {
 	void *result = malloc(size);
-	if (!result)
+	if (!result) {
 		abortmsg("failed to malloc");
+	}
 	return result;
 }
 
 void *reallocx(void *ptr, size_t size)
 {
 	void *result = realloc(ptr, size);
-	if (!result)
+	if (!result) {
 		abortmsg("failed to realloc");
+	}
 	return result;
 }
 
@@ -144,30 +152,37 @@
 	memset(&newsa, 0, sizeof(newsa));
 	newsa.sa_handler = SIG_IGN;
 	newsa.sa_flags = 0;
-	if (sigemptyset(&newsa.sa_mask) < 0)
+	if (sigemptyset(&newsa.sa_mask) < 0) {
 		goto done;
-	if (sigaction(SIGINT, &newsa, &oldsaint) < 0)
+	}
+	if (sigaction(SIGINT, &newsa, &oldsaint) < 0) {
 		goto done;
+	}
 	doneflags |= F_SIGINT;
-	if (sigaction(SIGQUIT, &newsa, &oldsaquit) < 0)
+	if (sigaction(SIGQUIT, &newsa, &oldsaquit) < 0) {
 		goto done;
+	}
 	doneflags |= F_SIGQUIT;
 
-	if (sigaddset(&newsa.sa_mask, SIGCHLD) < 0)
+	if (sigaddset(&newsa.sa_mask, SIGCHLD) < 0) {
 		goto done;
-	if (sigprocmask(SIG_BLOCK, &newsa.sa_mask, &oldmask) < 0)
+	}
+	if (sigprocmask(SIG_BLOCK, &newsa.sa_mask, &oldmask) < 0) {
 		goto done;
+	}
 	doneflags |= F_SIGMASK;
 
 	pid_t pid = fork();
-	if (pid < 0)
+	if (pid < 0) {
 		goto done;
+	}
 	if (pid == 0) {
 		sigaction(SIGINT, &oldsaint, NULL);
 		sigaction(SIGQUIT, &oldsaquit, NULL);
 		sigprocmask(SIG_SETMASK, &oldmask, NULL);
-		if (cwd && chdir(cwd) < 0)
+		if (cwd && chdir(cwd) < 0) {
 			_exit(127);
+		}
 		const char *argv[] = {"sh", "-c", cmd, NULL};
 		if (envp) {
 			execve("/bin/sh", (char **)argv, (char **)envp);
@@ -176,25 +191,32 @@
 		}
 		_exit(127);
 	} else {
-		if (waitpid(pid, &status, 0) < 0)
+		if (waitpid(pid, &status, 0) < 0) {
 			goto done;
+		}
 		doneflags |= F_WAITPID;
 	}
 
 done:
-	if (doneflags & F_SIGINT)
+	if (doneflags & F_SIGINT) {
 		sigaction(SIGINT, &oldsaint, NULL);
-	if (doneflags & F_SIGQUIT)
+	}
+	if (doneflags & F_SIGQUIT) {
 		sigaction(SIGQUIT, &oldsaquit, NULL);
-	if (doneflags & F_SIGMASK)
+	}
+	if (doneflags & F_SIGMASK) {
 		sigprocmask(SIG_SETMASK, &oldmask, NULL);
+	}
 
 	/* no way to report other errors, use 127 (= shell termination) */
-	if (!(doneflags & F_WAITPID))
+	if (!(doneflags & F_WAITPID)) {
 		return 127;
-	if (WIFEXITED(status))
+	}
+	if (WIFEXITED(status)) {
 		return WEXITSTATUS(status);
-	if (WIFSIGNALED(status))
+	}
+	if (WIFSIGNALED(status)) {
 		return -WTERMSIG(status);
+	}
 	return 127;
 }
--- a/contrib/clang-format-ignorelist	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/clang-format-ignorelist	Wed Apr 17 13:41:18 2019 -0400
@@ -62,6 +62,11 @@
 contrib/python-zstandard/zstd/compress/zstd_opt.c
 contrib/python-zstandard/zstd/compress/zstd_opt.h
 contrib/python-zstandard/zstd/decompress/huf_decompress.c
+contrib/python-zstandard/zstd/decompress/zstd_ddict.c
+contrib/python-zstandard/zstd/decompress/zstd_ddict.h
+contrib/python-zstandard/zstd/decompress/zstd_decompress_block.c
+contrib/python-zstandard/zstd/decompress/zstd_decompress_block.h
+contrib/python-zstandard/zstd/decompress/zstd_decompress_internal.h
 contrib/python-zstandard/zstd/decompress/zstd_decompress.c
 contrib/python-zstandard/zstd/deprecated/zbuff_common.c
 contrib/python-zstandard/zstd/deprecated/zbuff_compress.c
--- a/contrib/debugshell.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/debugshell.py	Wed Apr 17 13:41:18 2019 -0400
@@ -7,6 +7,7 @@
 import sys
 from mercurial import (
     demandimport,
+    pycompat,
     registrar,
 )
 
@@ -32,28 +33,30 @@
 
     IPython.embed()
 
-@command('debugshell|dbsh', [])
+@command(b'debugshell|dbsh', [])
 def debugshell(ui, repo, **opts):
-    bannermsg = "loaded repo : %s\n" \
-                "using source: %s" % (repo.root,
-                                      mercurial.__path__[0])
+    bannermsg = ("loaded repo : %s\n"
+                 "using source: %s" % (pycompat.sysstr(repo.root),
+                                       mercurial.__path__[0]))
 
     pdbmap = {
         'pdb'  : 'code',
         'ipdb' : 'IPython'
     }
 
-    debugger = ui.config("ui", "debugger")
+    debugger = ui.config(b"ui", b"debugger")
     if not debugger:
         debugger = 'pdb'
+    else:
+        debugger = pycompat.sysstr(debugger)
 
     # if IPython doesn't exist, fallback to code.interact
     try:
         with demandimport.deactivated():
             __import__(pdbmap[debugger])
     except ImportError:
-        ui.warn(("%s debugger specified but %s module was not found\n")
+        ui.warn((b"%s debugger specified but %s module was not found\n")
                 % (debugger, pdbmap[debugger]))
-        debugger = 'pdb'
+        debugger = b'pdb'
 
     getattr(sys.modules[__name__], debugger)(ui, repo, bannermsg, **opts)
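On Python 3 the config layer returns bytes, which is why values pass through ``pycompat.sysstr()`` before reaching str-only APIs. Roughly, with an illustrative value::

   from mercurial import pycompat
   debugger = b'ipdb'                 # as returned by ui.config()
   print('using %s' % pycompat.sysstr(debugger))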
--- a/contrib/discovery-helper.sh	Tue Mar 19 09:23:35 2019 -0400
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,64 +0,0 @@
-#!/bin/bash
-#
-# produces two repositories with different common and missing subsets
-#
-#   $ discovery-helper.sh REPO NBHEADS DEPT
-#
-# The Goal is to produce two repositories with some common part and some
-# exclusive part on each side. Provide a source repository REPO, it will
-# produce two repositories REPO-left and REPO-right.
-#
-# Each repository will be missing some revisions exclusive to NBHEADS of the
-# repo topological heads. These heads and revisions exclusive to them (up to
-# DEPTH depth) are stripped.
-#
-# The "left" repository will use the NBHEADS first heads (sorted by
-# description). The "right" use the last NBHEADS one.
-#
-# To find out how many topological heads a repo has, use:
-#
-#   $ hg heads -t -T '{rev}\n' | wc -l
-#
-# Example:
-#
-#  The `pypy-2018-09-01` repository has 192 heads. To produce two repositories
-#  with 92 common heads and ~50 exclusive heads on each side.
-#
-#    $ ./discovery-helper.sh pypy-2018-08-01 50 10
-
-set -euo pipefail
-
-if [ $# -lt 3 ]; then
-     echo "usage: `basename $0` REPO NBHEADS DEPTH"
-     exit 64
-fi
-
-repo="$1"
-shift
-
-nbheads="$1"
-shift
-
-depth="$1"
-shift
-
-leftrepo="${repo}-left"
-rightrepo="${repo}-right"
-
-left="first(sort(heads(all()), 'desc'), $nbheads)"
-right="last(sort(heads(all()), 'desc'), $nbheads)"
-
-leftsubset="ancestors($left, $depth) and only($left, heads(all() - $left))"
-rightsubset="ancestors($right, $depth) and only($right, heads(all() - $right))"
-
-echo '### building left repository:' $left-repo
-echo '# cloning'
-hg clone --noupdate "${repo}" "${leftrepo}"
-echo '# stripping' '"'${leftsubset}'"'
-hg -R "${leftrepo}" --config extensions.strip= strip --rev "$leftsubset" --no-backup
-
-echo '### building right repository:' $right-repo
-echo '# cloning'
-hg clone --noupdate "${repo}" "${rightrepo}"
-echo '# stripping:' '"'${rightsubset}'"'
-hg -R "${rightrepo}" --config extensions.strip= strip --rev "$rightsubset" --no-backup
--- a/contrib/fuzz/manifest.cc	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/fuzz/manifest.cc	Wed Apr 17 13:41:18 2019 -0400
@@ -20,11 +20,19 @@
   lm = lazymanifest(mdata)
   # iterate the whole thing, which causes the code to fully parse
   # every line in the manifest
-  list(lm.iterentries())
+  for e, _, _ in lm.iterentries():
+      # also exercise __getitem__ et al
+      lm[e]
+      e in lm
+      (e + 'nope') in lm
   lm[b'xyzzy'] = (b'\0' * 20, 'x')
   # do an insert, text should change
   assert lm.text() != mdata, "insert should change text and didn't: %r %r" % (lm.text(), mdata)
+  cloned = lm.filtercopy(lambda x: x != 'xyzzy')
+  assert cloned.text() == mdata, 'cloned text should equal mdata'
+  cloned.diff(lm)
   del lm[b'xyzzy']
+  cloned.diff(lm)
   # should be back to the same
   assert lm.text() == mdata, "delete should have restored text but didn't: %r %r" % (lm.text(), mdata)
 except Exception as e:
@@ -39,6 +47,11 @@
 
 int LLVMFuzzerTestOneInput(const uint8_t *Data, size_t Size)
 {
+	// Don't allow fuzzer inputs larger than 100k, since we'll just bog
+	// down and not accomplish much.
+	if (Size > 100000) {
+		return 0;
+	}
 	PyObject *mtext =
 	    PyBytes_FromStringAndSize((const char *)Data, (Py_ssize_t)Size);
 	PyObject *locals = PyDict_New();
--- a/contrib/fuzz/revlog.cc	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/fuzz/revlog.cc	Wed Apr 17 13:41:18 2019 -0400
@@ -19,6 +19,11 @@
 for inline in (True, False):
     try:
         index, cache = parse_index2(data, inline)
+        index.slicechunktodensity(list(range(len(index))), 0.5, 262144)
+        for rev in range(len(index)):
+            node = index[rev][7]
+            partial = index.shortest(node)
+            index.partialmatch(node[:partial])
     except Exception as e:
         pass
         # uncomment this print if you're editing this Python code
@@ -31,6 +36,11 @@
 
 int LLVMFuzzerTestOneInput(const uint8_t *Data, size_t Size)
 {
+	// Don't allow fuzzer inputs larger than 60k, since we'll just bog
+	// down and not accomplish much.
+	if (Size > 60000) {
+		return 0;
+	}
 	PyObject *text =
 	    PyBytes_FromStringAndSize((const char *)Data, (Py_ssize_t)Size);
 	PyObject *locals = PyDict_New();
--- a/contrib/hg-test-mode.el	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/hg-test-mode.el	Wed Apr 17 13:41:18 2019 -0400
@@ -53,4 +53,45 @@
   (setq mode-name "hg-test")
   (run-hooks 'hg-test-mode-hook))
 
+(with-eval-after-load "compile"
+  ;; Link to Python sources in tracebacks in .t failures.
+  (add-to-list 'compilation-error-regexp-alist-alist
+               '(hg-test-output-python-tb
+                 "^\\+ +File ['\"]\\([^'\"]+\\)['\"], line \\([0-9]+\\)," 1 2))
+  (add-to-list 'compilation-error-regexp-alist 'hg-test-output-python-tb)
+  ;; Link to source files in test-check-code.t violations.
+  (add-to-list 'compilation-error-regexp-alist-alist
+               '(hg-test-check-code-output
+                 "\\+  \\([^:\n]+\\):\\([0-9]+\\):$" 1 2))
+  (add-to-list 'compilation-error-regexp-alist 'hg-test-check-code-output))
+
+(defun hg-test-mode--test-one-error-line-regexp (test)
+  (erase-buffer)
+  (setq compilation-locs (make-hash-table))
+  (insert (car test))
+  (compilation-parse-errors (point-min) (point-max))
+  (let ((msg (get-text-property 1 'compilation-message)))
+    (should msg)
+    (let ((loc (compilation--message->loc msg))
+          (line (nth 1 test))
+          (file (nth 2 test)))
+      (should (equal (compilation--loc->line loc) line))
+      (should (equal (caar (compilation--loc->file-struct loc)) file)))
+    msg))
+
+(require 'ert)
+(ert-deftest hg-test-mode--compilation-mode-support ()
+  "Test hg-specific compilation-mode regular expressions"
+  (require 'compile)
+  (with-temp-buffer
+    (font-lock-mode -1)
+    (mapc 'hg-test-mode--test-one-error-line-regexp
+          '(
+            ("+  contrib/debugshell.py:37:" 37 "contrib/debugshell.py")
+            ("+    File \"/tmp/hg/mercurial/commands.py\", line 3115, in help_"
+             3115 "/tmp/hg/mercurial/commands.py")
+            ("+    File \"mercurial/dispatch.py\", line 225, in dispatch"
+             225 "mercurial/dispatch.py")))))
+
+
 (provide 'hg-test-mode)
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/install-windows-dependencies.ps1	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,200 @@
+# install-windows-dependencies.ps1 - Install Windows dependencies for building Mercurial
+#
+# Copyright 2019 Gregory Szorc <gregory.szorc@gmail.com>
+#
+# This software may be used and distributed according to the terms of the
+# GNU General Public License version 2 or any later version.
+
+# This script can be used to bootstrap a Mercurial build environment on
+# Windows.
+#
+# The script makes a lot of assumptions about how things should work.
+# For example, the install location of Python is hardcoded to c:\hgdev\*.
+#
+# The script should be executed from a PowerShell with elevated privileges
+# if you don't want to see a UAC prompt for various installers.
+#
+# The script is tested on Windows 10 and Windows Server 2019 (in EC2).
+
+$VS_BUILD_TOOLS_URL = "https://download.visualstudio.microsoft.com/download/pr/a1603c02-8a66-4b83-b821-811e3610a7c4/aa2db8bb39e0cbd23e9940d8951e0bc3/vs_buildtools.exe"
+$VS_BUILD_TOOLS_SHA256 = "911E292B8E6E5F46CBC17003BDCD2D27A70E616E8D5E6E69D5D489A605CAA139"
+
+$VC9_PYTHON_URL = "https://download.microsoft.com/download/7/9/6/796EF2E4-801B-4FC4-AB28-B59FBF6D907B/VCForPython27.msi"
+$VC9_PYTHON_SHA256 = "070474db76a2e625513a5835df4595df9324d820f9cc97eab2a596dcbc2f5cbf"
+
+$PYTHON27_x64_URL = "https://www.python.org/ftp/python/2.7.16/python-2.7.16.amd64.msi"
+$PYTHON27_x64_SHA256 = "7c0f45993019152d46041a7db4b947b919558fdb7a8f67bcd0535bc98d42b603"
+$PYTHON27_X86_URL = "https://www.python.org/ftp/python/2.7.16/python-2.7.16.msi"
+$PYTHON27_X86_SHA256 = "d57dc3e1ba490aee856c28b4915d09e3f49442461e46e481bc6b2d18207831d7"
+
+$PYTHON35_x86_URL = "https://www.python.org/ftp/python/3.5.4/python-3.5.4.exe"
+$PYTHON35_x86_SHA256 = "F27C2D67FD9688E4970F3BFF799BB9D722A0D6C2C13B04848E1F7D620B524B0E"
+$PYTHON35_x64_URL = "https://www.python.org/ftp/python/3.5.4/python-3.5.4-amd64.exe"
+$PYTHON35_x64_SHA256 = "9B7741CC32357573A77D2EE64987717E527628C38FD7EAF3E2AACA853D45A1EE"
+
+$PYTHON36_x86_URL = "https://www.python.org/ftp/python/3.6.8/python-3.6.8.exe"
+$PYTHON36_x86_SHA256 = "89871D432BC06E4630D7B64CB1A8451E53C80E68DE29029976B12AAD7DBFA5A0"
+$PYTHON36_x64_URL = "https://www.python.org/ftp/python/3.6.8/python-3.6.8-amd64.exe"
+$PYTHON36_x64_SHA256 = "96088A58B7C43BC83B84E6B67F15E8706C614023DD64F9A5A14E81FF824ADADC"
+
+$PYTHON37_x86_URL = "https://www.python.org/ftp/python/3.7.2/python-3.7.2.exe"
+$PYTHON37_x86_SHA256 = "8BACE330FB409E428B04EEEE083DD9CA7F6C754366D07E23B3853891D8F8C3D0"
+$PYTHON37_x64_URL = "https://www.python.org/ftp/python/3.7.2/python-3.7.2-amd64.exe"
+$PYTHON37_x64_SHA256 = "0FE2A696F5A3E481FED795EF6896ED99157BCEF273EF3C4A96F2905CBDB3AA13"
+
+$PYTHON38_x86_URL = "https://www.python.org/ftp/python/3.8.0/python-3.8.0a2.exe"
+$PYTHON38_x86_SHA256 = "013A7DDD317679FE51223DE627688CFCB2F0F1128FD25A987F846AEB476D3FEF"
+$PYTHON38_x64_URL = "https://www.python.org/ftp/python/3.8.0/python-3.8.0a2-amd64.exe"
+$PYTHON38_X64_SHA256 = "560BC6D1A76BCD6D544AC650709F3892956890753CDCF9CE67E3D7302D76FB41"
+
+# PIP 19.0.3.
+$PIP_URL = "https://github.com/pypa/get-pip/raw/fee32c376da1ff6496a798986d7939cd51e1644f/get-pip.py"
+$PIP_SHA256 = "efe99298f3fbb1f56201ce6b81d2658067d2f7d7dfc2d412e0d3cacc9a397c61"
+
+$VIRTUALENV_URL = "https://files.pythonhosted.org/packages/37/db/89d6b043b22052109da35416abc3c397655e4bd3cff031446ba02b9654fa/virtualenv-16.4.3.tar.gz"
+$VIRTUALENV_SHA256 = "984d7e607b0a5d1329425dd8845bd971b957424b5ba664729fab51ab8c11bc39"
+
+$INNO_SETUP_URL = "http://files.jrsoftware.org/is/5/innosetup-5.6.1-unicode.exe"
+$INNO_SETUP_SHA256 = "27D49E9BC769E9D1B214C153011978DB90DC01C2ACD1DDCD9ED7B3FE3B96B538"
+
+$MINGW_BIN_URL = "https://osdn.net/frs/redir.php?m=constant&f=mingw%2F68260%2Fmingw-get-0.6.3-mingw32-pre-20170905-1-bin.zip"
+$MINGW_BIN_SHA256 = "2AB8EFD7C7D1FC8EAF8B2FA4DA4EEF8F3E47768284C021599BC7435839A046DF"
+
+$MERCURIAL_WHEEL_FILENAME = "mercurial-4.9-cp27-cp27m-win_amd64.whl"
+$MERCURIAL_WHEEL_URL = "https://files.pythonhosted.org/packages/fe/e8/b872d53dfbbf986bdc46af0b30f580b227fb59bddd2587152a55e205b0cc/$MERCURIAL_WHEEL_FILENAME"
+$MERCURIAL_WHEEL_SHA256 = "218cc2e7c3f1d535007febbb03351663897edf27df0e57d6842e3b686492b429"
+
+# Writing progress slows down downloads substantially. So disable it.
+$progressPreference = 'silentlyContinue'
+
+function Secure-Download($url, $path, $sha256) {
+    if (Test-Path -Path $path) {
+        Get-FileHash -Path $path -Algorithm SHA256 -OutVariable hash
+
+        if ($hash.Hash -eq $sha256) {
+            Write-Output "SHA256 of $path verified as $sha256"
+            return
+        }
+
+        Write-Output "hash mismatch on $path; downloading again"
+    }
+
+    Write-Output "downloading $url to $path"
+    Invoke-WebRequest -Uri $url -OutFile $path
+    Get-FileHash -Path $path -Algorithm SHA256 -OutVariable hash
+
+    if ($hash.Hash -ne $sha256) {
+        Remove-Item -Path $path
+        throw "hash mismatch when downloading $url; got $($hash.Hash), expected $sha256"
+    }
+}
+
+function Invoke-Process($path, $arguments) {
+    $p = Start-Process -FilePath $path -ArgumentList $arguments -Wait -PassThru -WindowStyle Hidden
+
+    if ($p.ExitCode -ne 0) {
+        throw "process exited non-0: $($p.ExitCode)"
+    }
+}
+
+function Install-Python3($name, $installer, $dest, $pip) {
+    Write-Output "installing $name"
+
+    # We hit this when running the script as part of Simple Systems Manager in
+    # EC2. The Python 3 installer doesn't seem to like per-user installs
+    # when running as the SYSTEM user. So enable global installs if executed in
+    # this mode.
+    if ($env:USERPROFILE -eq "C:\Windows\system32\config\systemprofile") {
+        Write-Output "running with SYSTEM account; installing for all users"
+        $allusers = "1"
+    }
+    else {
+        $allusers = "0"
+    }
+
+    Invoke-Process $installer "/quiet TargetDir=${dest} InstallAllUsers=${allusers} AssociateFiles=0 CompileAll=0 PrependPath=0 Include_doc=0 Include_launcher=0 InstallLauncherAllUsers=0 Include_pip=0 Include_test=0"
+    Invoke-Process ${dest}\python.exe $pip
+}
+
+function Install-Dependencies($prefix) {
+    if (!(Test-Path -Path $prefix\assets)) {
+        New-Item -Path $prefix\assets -ItemType Directory
+    }
+
+    $pip = "${prefix}\assets\get-pip.py"
+
+    Secure-Download $VC9_PYTHON_URL ${prefix}\assets\VCForPython27.msi $VC9_PYTHON_SHA256
+    Secure-Download $PYTHON27_x86_URL ${prefix}\assets\python27-x86.msi $PYTHON27_x86_SHA256
+    Secure-Download $PYTHON27_x64_URL ${prefix}\assets\python27-x64.msi $PYTHON27_x64_SHA256
+    Secure-Download $PYTHON35_x86_URL ${prefix}\assets\python35-x86.exe $PYTHON35_x86_SHA256
+    Secure-Download $PYTHON35_x64_URL ${prefix}\assets\python35-x64.exe $PYTHON35_x64_SHA256
+    Secure-Download $PYTHON36_x86_URL ${prefix}\assets\python36-x86.exe $PYTHON36_x86_SHA256
+    Secure-Download $PYTHON36_x64_URL ${prefix}\assets\python36-x64.exe $PYTHON36_x64_SHA256
+    Secure-Download $PYTHON37_x86_URL ${prefix}\assets\python37-x86.exe $PYTHON37_x86_SHA256
+    Secure-Download $PYTHON37_x64_URL ${prefix}\assets\python37-x64.exe $PYTHON37_x64_SHA256
+    Secure-Download $PYTHON38_x86_URL ${prefix}\assets\python38-x86.exe $PYTHON38_x86_SHA256
+    Secure-Download $PYTHON38_x64_URL ${prefix}\assets\python38-x64.exe $PYTHON38_x64_SHA256
+    Secure-Download $PIP_URL ${pip} $PIP_SHA256
+    Secure-Download $VIRTUALENV_URL ${prefix}\assets\virtualenv.tar.gz $VIRTUALENV_SHA256
+    Secure-Download $VS_BUILD_TOOLS_URL ${prefix}\assets\vs_buildtools.exe $VS_BUILD_TOOLS_SHA256
+    Secure-Download $INNO_SETUP_URL ${prefix}\assets\InnoSetup.exe $INNO_SETUP_SHA256
+    Secure-Download $MINGW_BIN_URL ${prefix}\assets\mingw-get-bin.zip $MINGW_BIN_SHA256
+    Secure-Download $MERCURIAL_WHEEL_URL ${prefix}\assets\${MERCURIAL_WHEEL_FILENAME} $MERCURIAL_WHEEL_SHA256
+
+    Write-Output "installing Python 2.7 32-bit"
+    Invoke-Process msiexec.exe "/i ${prefix}\assets\python27-x86.msi /l* ${prefix}\assets\python27-x86.log /q TARGETDIR=${prefix}\python27-x86 ALLUSERS="
+    Invoke-Process ${prefix}\python27-x86\python.exe ${prefix}\assets\get-pip.py
+    Invoke-Process ${prefix}\python27-x86\Scripts\pip.exe "install ${prefix}\assets\virtualenv.tar.gz"
+
+    Write-Output "installing Python 2.7 64-bit"
+    Invoke-Process msiexec.exe "/i ${prefix}\assets\python27-x64.msi /l* ${prefix}\assets\python27-x64.log /q TARGETDIR=${prefix}\python27-x64 ALLUSERS="
+    Invoke-Process ${prefix}\python27-x64\python.exe ${prefix}\assets\get-pip.py
+    Invoke-Process ${prefix}\python27-x64\Scripts\pip.exe "install ${prefix}\assets\virtualenv.tar.gz"
+
+    Install-Python3 "Python 3.5 32-bit" ${prefix}\assets\python35-x86.exe ${prefix}\python35-x86 ${pip}
+    Install-Python3 "Python 3.5 64-bit" ${prefix}\assets\python35-x64.exe ${prefix}\python35-x64 ${pip}
+    Install-Python3 "Python 3.6 32-bit" ${prefix}\assets\python36-x86.exe ${prefix}\python36-x86 ${pip}
+    Install-Python3 "Python 3.6 64-bit" ${prefix}\assets\python36-x64.exe ${prefix}\python36-x64 ${pip}
+    Install-Python3 "Python 3.7 32-bit" ${prefix}\assets\python37-x86.exe ${prefix}\python37-x86 ${pip}
+    Install-Python3 "Python 3.7 64-bit" ${prefix}\assets\python37-x64.exe ${prefix}\python37-x64 ${pip}
+    Install-Python3 "Python 3.8 32-bit" ${prefix}\assets\python38-x86.exe ${prefix}\python38-x86 ${pip}
+    Install-Python3 "Python 3.8 64-bit" ${prefix}\assets\python38-x64.exe ${prefix}\python38-x64 ${pip}
+
+    Write-Output "installing Visual Studio 2017 Build Tools and SDKs"
+    Invoke-Process ${prefix}\assets\vs_buildtools.exe "--quiet --wait --norestart --nocache --channelUri https://aka.ms/vs/15/release/channel --add Microsoft.VisualStudio.Workload.MSBuildTools --add Microsoft.VisualStudio.Component.Windows10SDK.17763 --add Microsoft.VisualStudio.Workload.VCTools --add Microsoft.VisualStudio.Component.Windows10SDK --add Microsoft.VisualStudio.Component.VC.140"
+
+    Write-Output "installing Visual C++ 9.0 for Python 2.7"
+    Invoke-Process msiexec.exe "/i ${prefix}\assets\VCForPython27.msi /l* ${prefix}\assets\VCForPython27.log /q"
+
+    Write-Output "installing Inno Setup"
+    Invoke-Process ${prefix}\assets\InnoSetup.exe "/SP- /VERYSILENT /SUPPRESSMSGBOXES"
+
+    Write-Output "extracting MinGW base archive"
+    Expand-Archive -Path ${prefix}\assets\mingw-get-bin.zip -DestinationPath "${prefix}\MinGW" -Force
+
+    Write-Output "updating MinGW package catalogs"
+    Invoke-Process ${prefix}\MinGW\bin\mingw-get.exe "update"
+
+    Write-Output "installing MinGW packages"
+    Invoke-Process ${prefix}\MinGW\bin\mingw-get.exe "install msys-base msys-coreutils msys-diffutils msys-unzip"
+
+    # Construct a virtualenv useful for bootstrapping. It conveniently contains a
+    # Mercurial install.
+    Write-Output "creating bootstrap virtualenv with Mercurial"
+    Invoke-Process "$prefix\python27-x64\Scripts\virtualenv.exe" "${prefix}\venv-bootstrap"
+    Invoke-Process "${prefix}\venv-bootstrap\Scripts\pip.exe" "install ${prefix}\assets\${MERCURIAL_WHEEL_FILENAME}"
+}
+
+function Clone-Mercurial-Repo($prefix, $repo_url, $dest) {
+    Write-Output "cloning $repo_url to $dest"
+    # TODO Figure out why CA verification isn't working in EC2 and remove
+    # --insecure.
+    Invoke-Process "${prefix}\venv-bootstrap\Scripts\hg.exe" "clone --insecure $repo_url $dest"
+
+    # Mark repo as non-publishing by default for convenience.
+    Add-Content -Path "$dest\.hg\hgrc" -Value "`n[phases]`npublish = false"
+}
+
+$prefix = "c:\hgdev"
+Install-Dependencies $prefix
+Clone-Mercurial-Repo $prefix "https://www.mercurial-scm.org/repo/hg" $prefix\src
--- a/contrib/packaging/hg-docker	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/packaging/hg-docker	Wed Apr 17 13:41:18 2019 -0400
@@ -76,7 +76,7 @@
     p.communicate(input=dockerfile)
     if p.returncode:
         raise subprocess.CalledProcessError(
-                p.returncode, 'failed to build docker image: %s %s' \
+                p.returncode, 'failed to build docker image: %s %s'
                 % (p.stdout, p.stderr))
 
 def command_build(args):
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/packaging/hgpackaging/downloads.py	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,175 @@
+# downloads.py - Code for downloading dependencies.
+#
+# Copyright 2019 Gregory Szorc <gregory.szorc@gmail.com>
+#
+# This software may be used and distributed according to the terms of the
+# GNU General Public License version 2 or any later version.
+
+# no-check-code because Python 3 native.
+
+import gzip
+import hashlib
+import pathlib
+import urllib.request
+
+
+DOWNLOADS = {
+    'gettext': {
+        'url': 'https://versaweb.dl.sourceforge.net/project/gnuwin32/gettext/0.14.4/gettext-0.14.4-bin.zip',
+        'size': 1606131,
+        'sha256': '60b9ef26bc5cceef036f0424e542106cf158352b2677f43a01affd6d82a1d641',
+        'version': '0.14.4',
+    },
+    'gettext-dep': {
+        'url': 'https://versaweb.dl.sourceforge.net/project/gnuwin32/gettext/0.14.4/gettext-0.14.4-dep.zip',
+        'size': 715086,
+        'sha256': '411f94974492fd2ecf52590cb05b1023530aec67e64154a88b1e4ebcd9c28588',
+    },
+    'py2exe': {
+        'url': 'https://versaweb.dl.sourceforge.net/project/py2exe/py2exe/0.6.9/py2exe-0.6.9.zip',
+        'size': 149687,
+        'sha256': '6bd383312e7d33eef2e43a5f236f9445e4f3e0f6b16333c6f183ed445c44ddbd',
+        'version': '0.6.9',
+    },
+    # The VC9 CRT merge modules aren't readily available on most systems because
+    # they are only installed as part of a full Visual Studio 2008 install.
+    # While we could potentially extract them from a Visual Studio 2008
+    # installer, it is easier to just fetch them from a known URL.
+    'vc9-crt-x86-msm': {
+        'url': 'https://github.com/indygreg/vc90-merge-modules/raw/9232f8f0b2135df619bf7946eaa176b4ac35ccff/Microsoft_VC90_CRT_x86.msm',
+        'size': 615424,
+        'sha256': '837e887ef31b332feb58156f429389de345cb94504228bb9a523c25a9dd3d75e',
+    },
+    'vc9-crt-x86-msm-policy': {
+        'url': 'https://github.com/indygreg/vc90-merge-modules/raw/9232f8f0b2135df619bf7946eaa176b4ac35ccff/policy_9_0_Microsoft_VC90_CRT_x86.msm',
+        'size': 71168,
+        'sha256': '3fbcf92e3801a0757f36c5e8d304e134a68d5cafd197a6df7734ae3e8825c940',
+    },
+    'vc9-crt-x64-msm': {
+        'url': 'https://github.com/indygreg/vc90-merge-modules/raw/9232f8f0b2135df619bf7946eaa176b4ac35ccff/Microsoft_VC90_CRT_x86_x64.msm',
+        'size': 662528,
+        'sha256': '50d9639b5ad4844a2285269c7551bf5157ec636e32396ddcc6f7ec5bce487a7c',
+    },
+    'vc9-crt-x64-msm-policy': {
+        'url': 'https://github.com/indygreg/vc90-merge-modules/raw/9232f8f0b2135df619bf7946eaa176b4ac35ccff/policy_9_0_Microsoft_VC90_CRT_x86_x64.msm',
+        'size': 71168,
+        'sha256': '0550ea1929b21239134ad3a678c944ba0f05f11087117b6cf0833e7110686486',
+    },
+    'virtualenv': {
+        'url': 'https://files.pythonhosted.org/packages/37/db/89d6b043b22052109da35416abc3c397655e4bd3cff031446ba02b9654fa/virtualenv-16.4.3.tar.gz',
+        'size': 3713208,
+        'sha256': '984d7e607b0a5d1329425dd8845bd971b957424b5ba664729fab51ab8c11bc39',
+        'version': '16.4.3',
+    },
+    'wix': {
+        'url': 'https://github.com/wixtoolset/wix3/releases/download/wix3111rtm/wix311-binaries.zip',
+        'size': 34358269,
+        'sha256': '37f0a533b0978a454efb5dc3bd3598becf9660aaf4287e55bf68ca6b527d051d',
+        'version': '3.11.1',
+    },
+}
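+
+# Usage sketch: entries above are fetched with ``download_entry()``
+# (defined below), which verifies both size and sha256, e.g.:
+#
+#   wix_pkg, wix_entry = download_entry('wix', pathlib.Path('build'))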
+
+
+def hash_path(p: pathlib.Path):
+    h = hashlib.sha256()
+
+    with p.open('rb') as fh:
+        while True:
+            chunk = fh.read(65536)
+            if not chunk:
+                break
+
+            h.update(chunk)
+
+    return h.hexdigest()
+
+
+class IntegrityError(Exception):
+    """Represents an integrity error when downloading a URL."""
+
+
+def secure_download_stream(url, size, sha256):
+    """Securely download a URL to a stream of chunks.
+
+    If the integrity of the download fails, an IntegrityError is
+    raised.
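+
+    A minimal usage sketch (this is how ``download_to_path`` below
+    consumes it)::
+
+        with open('out.bin', 'wb') as fh:
+            for chunk in secure_download_stream(url, size, sha256):
+                fh.write(chunk)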
+    """
+    h = hashlib.sha256()
+    length = 0
+
+    with urllib.request.urlopen(url) as fh:
+        if not url.endswith('.gz') and fh.info().get('Content-Encoding') == 'gzip':
+            fh = gzip.GzipFile(fileobj=fh)
+
+        while True:
+            chunk = fh.read(65536)
+            if not chunk:
+                break
+
+            h.update(chunk)
+            length += len(chunk)
+
+            yield chunk
+
+    digest = h.hexdigest()
+
+    if length != size:
+        raise IntegrityError('size mismatch on %s: wanted %d; got %d' % (
+            url, size, length))
+
+    if digest != sha256:
+        raise IntegrityError('sha256 mismatch on %s: wanted %s; got %s' % (
+            url, sha256, digest))
+
+
+def download_to_path(url: str, path: pathlib.Path, size: int, sha256: str):
+    """Download a URL to a filesystem path, possibly with verification."""
+
+    # We download to a temporary file and rename at the end so there's
+    # no chance of the final file being partially written or containing
+    # bad data.
+    print('downloading %s to %s' % (url, path))
+
+    if path.exists():
+        good = True
+
+        if path.stat().st_size != size:
+            print('existing file size is wrong; removing')
+            good = False
+
+        if good:
+            if hash_path(path) != sha256:
+                print('existing file hash is wrong; removing')
+                good = False
+
+        if good:
+            print('%s exists and passes integrity checks' % path)
+            return
+
+        path.unlink()
+
+    tmp = path.with_name('%s.tmp' % path.name)
+
+    try:
+        with tmp.open('wb') as fh:
+            for chunk in secure_download_stream(url, size, sha256):
+                fh.write(chunk)
+    except IntegrityError:
+        tmp.unlink()
+        raise
+
+    tmp.rename(path)
+    print('successfully downloaded %s' % url)
+
+
+def download_entry(name: str, dest_path: pathlib.Path, local_name=None):
+    entry = DOWNLOADS[name]
+
+    url = entry['url']
+
+    local_name = local_name or url[url.rindex('/') + 1:]
+
+    local_path = dest_path / local_name
+    download_to_path(url, local_path, entry['size'], entry['sha256'])
+
+    return local_path, entry
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/packaging/hgpackaging/inno.py	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,78 @@
+# inno.py - Inno Setup functionality.
+#
+# Copyright 2019 Gregory Szorc <gregory.szorc@gmail.com>
+#
+# This software may be used and distributed according to the terms of the
+# GNU General Public License version 2 or any later version.
+
+# no-check-code because Python 3 native.
+
+import os
+import pathlib
+import shutil
+import subprocess
+
+from .py2exe import (
+    build_py2exe,
+)
+from .util import (
+    find_vc_runtime_files,
+)
+
+
+EXTRA_PACKAGES = {
+    'dulwich',
+    'keyring',
+    'pygments',
+    'win32ctypes',
+}
+
+
+def build(source_dir: pathlib.Path, build_dir: pathlib.Path,
+          python_exe: pathlib.Path, iscc_exe: pathlib.Path,
+          version=None):
+    """Build the Inno installer.
+
+    Build files will be placed in ``build_dir``.
+
+    py2exe's setup.py doesn't use setuptools. It doesn't have modern logic
+    for finding the Python 2.7 toolchain. So, we require the environment
+    to already be configured with an active toolchain.
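+
+    Illustrative call (mirroring the ``inno/build.py`` driver; paths
+    are examples only)::
+
+        build(source_dir, source_dir / 'build',
+              pathlib.Path(r'c:\python27\python.exe'),
+              pathlib.Path(r'c:\Program Files (x86)\Inno Setup 5\ISCC.exe'))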
+    """
+    if not iscc_exe.exists():
+        raise Exception('%s does not exist' % iscc_exe)
+
+    vc_x64 = r'\x64' in os.environ.get('LIB', '')
+
+    requirements_txt = (source_dir / 'contrib' / 'packaging' /
+                        'inno' / 'requirements.txt')
+
+    build_py2exe(source_dir, build_dir, python_exe, 'inno',
+                 requirements_txt, extra_packages=EXTRA_PACKAGES)
+
+    # hg.exe depends on VC9 runtime DLLs. Copy those into place.
+    for f in find_vc_runtime_files(vc_x64):
+        if f.name.endswith('.manifest'):
+            basename = 'Microsoft.VC90.CRT.manifest'
+        else:
+            basename = f.name
+
+        dest_path = source_dir / 'dist' / basename
+
+        print('copying %s to %s' % (f, dest_path))
+        shutil.copyfile(f, dest_path)
+
+    print('creating installer')
+
+    args = [str(iscc_exe)]
+
+    if vc_x64:
+        args.append('/dARCH=x64')
+
+    if version:
+        args.append('/dVERSION=%s' % version)
+
+    args.append('/Odist')
+    args.append('contrib/packaging/inno/mercurial.iss')
+
+    subprocess.run(args, cwd=str(source_dir), check=True)
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/packaging/hgpackaging/py2exe.py	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,150 @@
+# py2exe.py - Functionality for performing py2exe builds.
+#
+# Copyright 2019 Gregory Szorc <gregory.szorc@gmail.com>
+#
+# This software may be used and distributed according to the terms of the
+# GNU General Public License version 2 or any later version.
+
+# no-check-code because Python 3 native.
+
+import os
+import pathlib
+import subprocess
+
+from .downloads import (
+    download_entry,
+)
+from .util import (
+    extract_tar_to_directory,
+    extract_zip_to_directory,
+    python_exe_info,
+)
+
+
+def build_py2exe(source_dir: pathlib.Path, build_dir: pathlib.Path,
+                 python_exe: pathlib.Path, build_name: str,
+                 venv_requirements_txt: pathlib.Path,
+                 extra_packages=None, extra_excludes=None,
+                 extra_dll_excludes=None,
+                 extra_packages_script=None):
+    """Build Mercurial with py2exe.
+
+    Build files will be placed in ``build_dir``.
+
+    py2exe's setup.py doesn't use setuptools. It doesn't have modern logic
+    for finding the Python 2.7 toolchain. So, we require the environment
+    to already be configured with an active toolchain.
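+
+    Illustrative call (argument values are examples only)::
+
+        build_py2exe(source_dir, source_dir / 'build',
+                     pathlib.Path(r'c:\python27\python.exe'), 'inno',
+                     requirements_txt)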
+    """
+    if 'VCINSTALLDIR' not in os.environ:
+        raise Exception('not running from a Visual C++ build environment; '
+                        'execute the "Visual C++ <version> Command Prompt" '
+                        'application shortcut or a vcvarsall.bat file')
+
+    # Identify x86/x64 and validate the environment matches the Python
+    # architecture.
+    vc_x64 = r'\x64' in os.environ['LIB']
+
+    py_info = python_exe_info(python_exe)
+
+    if vc_x64:
+        if py_info['arch'] != '64bit':
+            raise Exception('architecture mismatch: Visual C++ environment '
+                            'is configured for 64-bit but Python is 32-bit')
+    else:
+        if py_info['arch'] != '32bit':
+            raise Exception('architecture mismatch: Visual C++ environment '
+                            'is configured for 32-bit but Python is 64-bit')
+
+    if py_info['py3']:
+        raise Exception('Only Python 2 is currently supported')
+
+    build_dir.mkdir(exist_ok=True)
+
+    gettext_pkg, gettext_entry = download_entry('gettext', build_dir)
+    gettext_dep_pkg = download_entry('gettext-dep', build_dir)[0]
+    virtualenv_pkg, virtualenv_entry = download_entry('virtualenv', build_dir)
+    py2exe_pkg, py2exe_entry = download_entry('py2exe', build_dir)
+
+    venv_path = build_dir / ('venv-%s-%s' % (build_name,
+                                             'x64' if vc_x64 else 'x86'))
+
+    gettext_root = build_dir / (
+        'gettext-win-%s' % gettext_entry['version'])
+
+    if not gettext_root.exists():
+        extract_zip_to_directory(gettext_pkg, gettext_root)
+        extract_zip_to_directory(gettext_dep_pkg, gettext_root)
+
+    # This assumes Python 2. We don't need virtualenv on Python 3.
+    virtualenv_src_path = build_dir / (
+        'virtualenv-%s' % virtualenv_entry['version'])
+    virtualenv_py = virtualenv_src_path / 'virtualenv.py'
+
+    if not virtualenv_src_path.exists():
+        extract_tar_to_directory(virtualenv_pkg, build_dir)
+
+    py2exe_source_path = build_dir / ('py2exe-%s' % py2exe_entry['version'])
+
+    if not py2exe_source_path.exists():
+        extract_zip_to_directory(py2exe_pkg, build_dir)
+
+    if not venv_path.exists():
+        print('creating virtualenv with dependencies')
+        subprocess.run(
+            [str(python_exe), str(virtualenv_py), str(venv_path)],
+            check=True)
+
+    venv_python = venv_path / 'Scripts' / 'python.exe'
+    venv_pip = venv_path / 'Scripts' / 'pip.exe'
+
+    subprocess.run([str(venv_pip), 'install', '-r', str(venv_requirements_txt)],
+                   check=True)
+
+    # Force distutils to use VC++ settings from environment, which was
+    # validated above.
+    env = dict(os.environ)
+    env['DISTUTILS_USE_SDK'] = '1'
+    env['MSSdk'] = '1'
+
+    if extra_packages_script:
+        more_packages = set(subprocess.check_output(
+            extra_packages_script,
+            cwd=str(build_dir)).split(b'\0')[-1].strip().decode('utf-8').splitlines())
+        if more_packages:
+            if not extra_packages:
+                extra_packages = more_packages
+            else:
+                extra_packages |= more_packages
+
+    if extra_packages:
+        env['HG_PY2EXE_EXTRA_PACKAGES'] = ' '.join(sorted(extra_packages))
+        hgext3rd_extras = sorted(
+            e for e in extra_packages if e.startswith('hgext3rd.'))
+        if hgext3rd_extras:
+            env['HG_PY2EXE_EXTRA_INSTALL_PACKAGES'] = ' '.join(hgext3rd_extras)
+    if extra_excludes:
+        env['HG_PY2EXE_EXTRA_EXCLUDES'] = ' '.join(sorted(extra_excludes))
+    if extra_dll_excludes:
+        env['HG_PY2EXE_EXTRA_DLL_EXCLUDES'] = ' '.join(
+            sorted(extra_dll_excludes))
+
+    py2exe_py_path = venv_path / 'Lib' / 'site-packages' / 'py2exe'
+    if not py2exe_py_path.exists():
+        print('building py2exe')
+        subprocess.run([str(venv_python), 'setup.py', 'install'],
+                       cwd=str(py2exe_source_path),
+                       env=env,
+                       check=True)
+
+    # Register location of msgfmt and other binaries.
+    env['PATH'] = '%s%s%s' % (
+        env['PATH'], os.pathsep, str(gettext_root / 'bin'))
+
+    print('building Mercurial')
+    subprocess.run(
+        [str(venv_python), 'setup.py',
+         'py2exe',
+         'build_doc', '--html'],
+        cwd=str(source_dir),
+        env=env,
+        check=True)
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/packaging/hgpackaging/util.py	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,155 @@
+# util.py - Common packaging utility code.
+#
+# Copyright 2019 Gregory Szorc <gregory.szorc@gmail.com>
+#
+# This software may be used and distributed according to the terms of the
+# GNU General Public License version 2 or any later version.
+
+# no-check-code because Python 3 native.
+
+import distutils.version
+import getpass
+import os
+import pathlib
+import subprocess
+import tarfile
+import zipfile
+
+
+def extract_tar_to_directory(source: pathlib.Path, dest: pathlib.Path):
+    with tarfile.open(source, 'r') as tf:
+        tf.extractall(dest)
+
+
+def extract_zip_to_directory(source: pathlib.Path, dest: pathlib.Path):
+    with zipfile.ZipFile(source, 'r') as zf:
+        zf.extractall(dest)
+
+
+def find_vc_runtime_files(x64=False):
+    """Finds Visual C++ Runtime DLLs to include in distribution."""
+    winsxs = pathlib.Path(os.environ['SYSTEMROOT']) / 'WinSxS'
+
+    prefix = 'amd64' if x64 else 'x86'
+
+    candidates = sorted(p for p in os.listdir(winsxs)
+                  if p.lower().startswith('%s_microsoft.vc90.crt_' % prefix))
+
+    for p in candidates:
+        print('found candidate VC runtime: %s' % p)
+
+    if not candidates:
+        raise Exception('could not find Visual C++ 9.0 CRT in WinSxS')
+
+    # Take the newest version.
+    version = candidates[-1]
+
+    d = winsxs / version
+
+    return [
+        d / 'msvcm90.dll',
+        d / 'msvcp90.dll',
+        d / 'msvcr90.dll',
+        winsxs / 'Manifests' / ('%s.manifest' % version),
+    ]
+
+
+def windows_10_sdk_info():
+    """Resolves information about the Windows 10 SDK."""
+
+    base = pathlib.Path(os.environ['ProgramFiles(x86)']) / 'Windows Kits' / '10'
+
+    if not base.is_dir():
+        raise Exception('unable to find Windows 10 SDK at %s' % base)
+
+    # Find the latest version.
+    bin_base = base / 'bin'
+
+    versions = [v for v in os.listdir(bin_base) if v.startswith('10.')]
+    version = sorted(versions, reverse=True)[0]
+
+    bin_version = bin_base / version
+
+    return {
+        'root': base,
+        'version': version,
+        'bin_root': bin_version,
+        'bin_x86': bin_version / 'x86',
+        'bin_x64': bin_version / 'x64'
+    }
+
+
+def find_signtool():
+    """Find signtool.exe from the Windows SDK."""
+    sdk = windows_10_sdk_info()
+
+    for key in ('bin_x64', 'bin_x86'):
+        p = sdk[key] / 'signtool.exe'
+
+        if p.exists():
+            return p
+
+    raise Exception('could not find signtool.exe in Windows 10 SDK')
+
+
+def sign_with_signtool(file_path, description, subject_name=None,
+                       cert_path=None, cert_password=None,
+                       timestamp_url=None):
+    """Digitally sign a file with signtool.exe.
+
+    ``file_path`` is file to sign.
+    ``description`` is text that goes in the signature.
+
+    The signing certificate can be specified by ``cert_path`` or
+    ``subject_name``. These correspond to the ``/f`` and ``/n`` arguments
+    to signtool.exe, respectively.
+
+    The certificate password can be specified via ``cert_password``. If
+    not provided, you will be prompted for the password.
+
+    ``timestamp_url`` is the URL of an RFC 3161 timestamp server (``/tr``
+    argument to signtool.exe).
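+
+    Illustrative call (subject name and timestamp URL are examples
+    only)::
+
+        sign_with_signtool(dist_dir / 'hg.exe', 'Mercurial 5.0',
+                           subject_name='Example Corp',
+                           timestamp_url='http://timestamp.example.com')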
+    """
+    if cert_path and subject_name:
+        raise ValueError('cannot specify both cert_path and subject_name')
+
+    while cert_path and not cert_password:
+        cert_password = getpass.getpass('password for %s: ' % cert_path)
+
+    args = [
+        str(find_signtool()), 'sign',
+        '/v',
+        '/fd', 'sha256',
+        '/d', description,
+    ]
+
+    if cert_path:
+        args.extend(['/f', str(cert_path), '/p', cert_password])
+    elif subject_name:
+        args.extend(['/n', subject_name])
+
+    if timestamp_url:
+        args.extend(['/tr', timestamp_url, '/td', 'sha256'])
+
+    args.append(str(file_path))
+
+    print('signing %s' % file_path)
+    subprocess.run(args, check=True)
+
+
+PRINT_PYTHON_INFO = '''
+import platform; print("%s:%s" % (platform.architecture()[0], platform.python_version()))
+'''.strip()
+
+
+def python_exe_info(python_exe: pathlib.Path):
+    """Obtain information about a Python executable."""
+
+    res = subprocess.check_output([str(python_exe), '-c', PRINT_PYTHON_INFO])
+
+    arch, version = res.decode('utf-8').split(':')
+
+    version = distutils.version.LooseVersion(version)
+
+    return {
+        'arch': arch,
+        'version': version,
+        'py3': version >= distutils.version.LooseVersion('3'),
+    }
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/packaging/hgpackaging/wix.py	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,327 @@
+# wix.py - WiX installer functionality
+#
+# Copyright 2019 Gregory Szorc <gregory.szorc@gmail.com>
+#
+# This software may be used and distributed according to the terms of the
+# GNU General Public License version 2 or any later version.
+
+# no-check-code because Python 3 native.
+
+import os
+import pathlib
+import re
+import subprocess
+import tempfile
+import typing
+import xml.dom.minidom
+
+from .downloads import (
+    download_entry,
+)
+from .py2exe import (
+    build_py2exe,
+)
+from .util import (
+    extract_zip_to_directory,
+    sign_with_signtool,
+)
+
+
+SUPPORT_WXS = [
+    ('contrib.wxs', r'contrib'),
+    ('dist.wxs', r'dist'),
+    ('doc.wxs', r'doc'),
+    ('help.wxs', r'mercurial\help'),
+    ('i18n.wxs', r'i18n'),
+    ('locale.wxs', r'mercurial\locale'),
+    ('templates.wxs', r'mercurial\templates'),
+]
+
+
+EXTRA_PACKAGES = {
+    'distutils',
+    'pygments',
+}
+
+
+def find_version(source_dir: pathlib.Path):
+    version_py = source_dir / 'mercurial' / '__version__.py'
+
+    with version_py.open('r', encoding='utf-8') as fh:
+        source = fh.read().strip()
+
+    m = re.search('version = b"(.*)"', source)
+    return m.group(1)
+
+
+def normalize_version(version):
+    """Normalize Mercurial version string so WiX accepts it.
+
+    Version strings have to be numeric X.Y.Z.
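+
+    Examples (as handled by the logic below)::
+
+        normalize_version('4.9')             # -> '4.9.0'
+        normalize_version('4.9rc0')          # -> '4.9.0'
+        normalize_version('4.9.1+2-abcdef')  # -> '4.9.2'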
+    """
+
+    if '+' in version:
+        version, extra = version.split('+', 1)
+    else:
+        extra = None
+
+    # 4.9rc0
+    if version[:-1].endswith('rc'):
+        version = version[:-3]
+
+    versions = [int(v) for v in version.split('.')]
+    while len(versions) < 3:
+        versions.append(0)
+
+    major, minor, build = versions[:3]
+
+    if extra:
+        # <commit count>-<hash>+<date>
+        build = int(extra.split('-')[0])
+
+    return '.'.join('%d' % x for x in (major, minor, build))
+
+
+def ensure_vc90_merge_modules(build_dir):
+    x86 = (
+        download_entry('vc9-crt-x86-msm', build_dir,
+                       local_name='microsoft.vcxx.crt.x86_msm.msm')[0],
+        download_entry('vc9-crt-x86-msm-policy', build_dir,
+                       local_name='policy.x.xx.microsoft.vcxx.crt.x86_msm.msm')[0]
+    )
+
+    x64 = (
+        download_entry('vc9-crt-x64-msm', build_dir,
+                       local_name='microsoft.vcxx.crt.x64_msm.msm')[0],
+        download_entry('vc9-crt-x64-msm-policy', build_dir,
+                       local_name='policy.x.xx.microsoft.vcxx.crt.x64_msm.msm')[0]
+    )
+    return {
+        'x86': x86,
+        'x64': x64,
+    }
+
+
+def run_candle(wix, cwd, wxs, source_dir, defines=None):
+    args = [
+        str(wix / 'candle.exe'),
+        '-nologo',
+        str(wxs),
+        '-dSourceDir=%s' % source_dir,
+    ]
+
+    if defines:
+        args.extend('-d%s=%s' % define for define in sorted(defines.items()))
+
+    subprocess.run(args, cwd=str(cwd), check=True)
+
+
+def make_post_build_signing_fn(name, subject_name=None, cert_path=None,
+                               cert_password=None, timestamp_url=None):
+    """Create a callable that will use signtool to sign hg.exe."""
+
+    def post_build_sign(source_dir, build_dir, dist_dir, version):
+        description = '%s %s' % (name, version)
+
+        sign_with_signtool(dist_dir / 'hg.exe', description,
+                           subject_name=subject_name, cert_path=cert_path,
+                           cert_password=cert_password,
+                           timestamp_url=timestamp_url)
+
+    return post_build_sign
+
+
+LIBRARIES_XML = '''
+<?xml version="1.0" encoding="utf-8"?>
+<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
+
+  <?include {wix_dir}/guids.wxi ?>
+  <?include {wix_dir}/defines.wxi ?>
+
+  <Fragment>
+    <DirectoryRef Id="INSTALLDIR" FileSource="$(var.SourceDir)">
+      <Directory Id="libdir" Name="lib" FileSource="$(var.SourceDir)/lib">
+        <Component Id="libOutput" Guid="$(var.lib.guid)" Win64='$(var.IsX64)'>
+        </Component>
+      </Directory>
+    </DirectoryRef>
+  </Fragment>
+</Wix>
+'''.lstrip()
+
+
+def make_libraries_xml(wix_dir: pathlib.Path, dist_dir: pathlib.Path):
+    """Make XML data for library components WXS."""
+    # We can't use ElementTree because it doesn't handle the
+    # <?include ?> directives.
+    doc = xml.dom.minidom.parseString(
+        LIBRARIES_XML.format(wix_dir=str(wix_dir)))
+
+    component = doc.getElementsByTagName('Component')[0]
+
+    f = doc.createElement('File')
+    f.setAttribute('Name', 'library.zip')
+    f.setAttribute('KeyPath', 'yes')
+    component.appendChild(f)
+
+    lib_dir = dist_dir / 'lib'
+
+    for p in sorted(lib_dir.iterdir()):
+        if not p.name.endswith(('.dll', '.pyd')):
+            continue
+
+        f = doc.createElement('File')
+        f.setAttribute('Name', p.name)
+        component.appendChild(f)
+
+    return doc.toprettyxml()
+
+
+def build_installer(source_dir: pathlib.Path, python_exe: pathlib.Path,
+                    msi_name='mercurial', version=None, post_build_fn=None,
+                    extra_packages_script=None,
+                    extra_wxs: typing.Optional[typing.Dict[str, str]] = None,
+                    extra_features: typing.Optional[typing.List[str]] = None):
+    """Build a WiX MSI installer.
+
+    ``source_dir`` is the path to the Mercurial source tree to use.
+    The target architecture, ``x86`` or ``x64``, is detected from the
+    active Visual C++ environment rather than passed in.
+    ``python_exe`` is the path to the Python executable to use/bundle.
+    ``version`` is the Mercurial version string. If not defined,
+    ``mercurial/__version__.py`` will be consulted.
+    ``post_build_fn`` is a callable that will be called after building
+    Mercurial but before invoking WiX. It can be used to e.g. facilitate
+    signing. It is passed the paths to the Mercurial source, build, and
+    dist directories and the resolved Mercurial version.
+    ``extra_packages_script`` is a command to be run to inject extra packages
+    into the py2exe binary. It should stage packages into the virtualenv and
+    print a null byte followed by a newline-separated list of packages that
+    should be included in the exe.
+    ``extra_wxs`` is a dict of {wxs_name: working_dir_for_wxs_build}.
+    ``extra_features`` is a list of additional named Features to include in
+    the build. These must match Feature names in one of the wxs scripts.
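+
+    Minimal invocation sketch (paths are illustrative)::
+
+        build_installer(pathlib.Path(r'c:\src\hg'),
+                        python_exe=pathlib.Path(r'c:\python27\python.exe'))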
+    """
+    arch = 'x64' if r'\x64' in os.environ.get('LIB', '') else 'x86'
+
+    hg_build_dir = source_dir / 'build'
+    dist_dir = source_dir / 'dist'
+    wix_dir = source_dir / 'contrib' / 'packaging' / 'wix'
+
+    requirements_txt = wix_dir / 'requirements.txt'
+
+    build_py2exe(source_dir, hg_build_dir,
+                 python_exe, 'wix', requirements_txt,
+                 extra_packages=EXTRA_PACKAGES,
+                 extra_packages_script=extra_packages_script)
+
+    version = version or normalize_version(find_version(source_dir))
+    print('using version string: %s' % version)
+
+    if post_build_fn:
+        post_build_fn(source_dir, hg_build_dir, dist_dir, version)
+
+    build_dir = hg_build_dir / ('wix-%s' % arch)
+
+    build_dir.mkdir(exist_ok=True)
+
+    wix_pkg, wix_entry = download_entry('wix', hg_build_dir)
+    wix_path = hg_build_dir / ('wix-%s' % wix_entry['version'])
+
+    if not wix_path.exists():
+        extract_zip_to_directory(wix_pkg, wix_path)
+
+    ensure_vc90_merge_modules(hg_build_dir)
+
+    source_build_rel = pathlib.Path(os.path.relpath(source_dir, build_dir))
+
+    defines = {'Platform': arch}
+
+    for wxs, rel_path in SUPPORT_WXS:
+        wxs = wix_dir / wxs
+        wxs_source_dir = source_dir / rel_path
+        run_candle(wix_path, build_dir, wxs, wxs_source_dir, defines=defines)
+
+    for source, rel_path in sorted((extra_wxs or {}).items()):
+        run_candle(wix_path, build_dir, source, rel_path, defines=defines)
+
+    # candle.exe doesn't like when we have an open handle on the file.
+    # So use TemporaryDirectory() instead of NamedTemporaryFile().
+    with tempfile.TemporaryDirectory() as td:
+        td = pathlib.Path(td)
+
+        tf = td / 'library.wxs'
+        with tf.open('w') as fh:
+            fh.write(make_libraries_xml(wix_dir, dist_dir))
+
+        run_candle(wix_path, build_dir, tf, dist_dir, defines=defines)
+
+    source = wix_dir / 'mercurial.wxs'
+    defines['Version'] = version
+    defines['Comments'] = 'Installs Mercurial version %s' % version
+    defines['VCRedistSrcDir'] = str(hg_build_dir)
+    if extra_features:
+        assert all(';' not in f for f in extra_features)
+        defines['MercurialExtraFeatures'] = ';'.join(extra_features)
+
+    run_candle(wix_path, build_dir, source, source_build_rel, defines=defines)
+
+    msi_path = source_dir / 'dist' / (
+        '%s-%s-%s.msi' % (msi_name, version, arch))
+
+    args = [
+        str(wix_path / 'light.exe'),
+        '-nologo',
+        '-ext', 'WixUIExtension',
+        '-sw1076',
+        '-spdb',
+        '-o', str(msi_path),
+    ]
+
+    for source, rel_path in SUPPORT_WXS:
+        assert source.endswith('.wxs')
+        args.append(str(build_dir / ('%s.wixobj' % source[:-4])))
+
+    for source, rel_path in sorted((extra_wxs or {}).items()):
+        assert source.endswith('.wxs')
+        source = os.path.basename(source)
+        args.append(str(build_dir / ('%s.wixobj' % source[:-4])))
+
+    args.extend([
+        str(build_dir / 'library.wixobj'),
+        str(build_dir / 'mercurial.wixobj'),
+    ])
+
+    subprocess.run(args, cwd=str(source_dir), check=True)
+
+    print('%s created' % msi_path)
+
+    return {
+        'msi_path': msi_path,
+    }
+
+
+def build_signed_installer(source_dir: pathlib.Path, python_exe: pathlib.Path,
+                           name: str, version=None, subject_name=None,
+                           cert_path=None, cert_password=None,
+                           timestamp_url=None, extra_packages_script=None,
+                           extra_wxs=None, extra_features=None):
+    """Build an installer with signed executables."""
+
+    post_build_fn = make_post_build_signing_fn(
+        name,
+        subject_name=subject_name,
+        cert_path=cert_path,
+        cert_password=cert_password,
+        timestamp_url=timestamp_url)
+
+    info = build_installer(source_dir, python_exe=python_exe,
+                           msi_name=name.lower(), version=version,
+                           post_build_fn=post_build_fn,
+                           extra_packages_script=extra_packages_script,
+                           extra_wxs=extra_wxs, extra_features=extra_features)
+
+    description = '%s %s' % (name, version)
+
+    sign_with_signtool(info['msi_path'], description,
+                       subject_name=subject_name, cert_path=cert_path,
+                       cert_password=cert_password, timestamp_url=timestamp_url)
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/packaging/inno/build.py	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,51 @@
+#!/usr/bin/env python3
+# build.py - Inno installer build script.
+#
+# Copyright 2019 Gregory Szorc <gregory.szorc@gmail.com>
+#
+# This software may be used and distributed according to the terms of the
+# GNU General Public License version 2 or any later version.
+
+# This script automates the building of the Inno Setup installer for Mercurial.
+
+# no-check-code because Python 3 native.
+
+import argparse
+import os
+import pathlib
+import sys
+
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser()
+
+    parser.add_argument('--python',
+                        required=True,
+                        help='path to python.exe to use')
+    parser.add_argument('--iscc',
+                        help='path to iscc.exe to use')
+    parser.add_argument('--version',
+                        help='Mercurial version string to use '
+                             '(detected from __version__.py if not defined)')
+
+    args = parser.parse_args()
+
+    if not os.path.isabs(args.python):
+        raise Exception('--python arg must be an absolute path')
+
+    if args.iscc:
+        iscc = pathlib.Path(args.iscc)
+    else:
+        iscc = (pathlib.Path(os.environ['ProgramFiles(x86)']) / 'Inno Setup 5' /
+            'ISCC.exe')
+
+    here = pathlib.Path(os.path.abspath(os.path.dirname(__file__)))
+    source_dir = here.parent.parent.parent
+    build_dir = source_dir / 'build'
+
+    sys.path.insert(0, str(source_dir / 'contrib' / 'packaging'))
+
+    from hgpackaging.inno import build
+
+    build(source_dir, build_dir, pathlib.Path(args.python), iscc,
+          version=args.version)
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/packaging/inno/mercurial.iss	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,124 @@
+; Script generated by the Inno Setup Script Wizard.
+; SEE THE DOCUMENTATION FOR DETAILS ON CREATING INNO SETUP SCRIPT FILES!
+
+#ifndef VERSION
+#define FileHandle
+#define FileLine
+#define VERSION = "unknown"
+#if FileHandle = FileOpen(SourcePath + "\..\..\..\mercurial\__version__.py")
+  #expr FileLine = FileRead(FileHandle)
+  #expr FileLine = FileRead(FileHandle)
+  #define VERSION = Copy(FileLine, Pos('"', FileLine)+1, Len(FileLine)-Pos('"', FileLine)-1)
+#endif
+#if FileHandle
+  #expr FileClose(FileHandle)
+#endif
+#pragma message "Detected Version: " + VERSION
+#endif
+
+#ifndef ARCH
+#define ARCH = "x86"
+#endif
+
+[Setup]
+AppCopyright=Copyright 2005-2019 Matt Mackall and others
+AppName=Mercurial
+AppVersion={#VERSION}
+#if ARCH == "x64"
+AppVerName=Mercurial {#VERSION} (64-bit)
+OutputBaseFilename=Mercurial-{#VERSION}-x64
+ArchitecturesAllowed=x64
+ArchitecturesInstallIn64BitMode=x64
+#else
+AppVerName=Mercurial {#VERSION}
+OutputBaseFilename=Mercurial-{#VERSION}
+#endif
+InfoAfterFile=contrib/win32/postinstall.txt
+LicenseFile=COPYING
+ShowLanguageDialog=yes
+AppPublisher=Matt Mackall and others
+AppPublisherURL=https://mercurial-scm.org/
+AppSupportURL=https://mercurial-scm.org/
+AppUpdatesURL=https://mercurial-scm.org/
+AppID={{4B95A5F1-EF59-4B08-BED8-C891C46121B3}
+AppContact=mercurial@mercurial-scm.org
+DefaultDirName={pf}\Mercurial
+SourceDir=..\..\..
+VersionInfoDescription=Mercurial distributed SCM (version {#VERSION})
+VersionInfoCopyright=Copyright 2005-2019 Matt Mackall and others
+VersionInfoCompany=Matt Mackall and others
+InternalCompressLevel=max
+SolidCompression=true
+SetupIconFile=contrib\win32\mercurial.ico
+AllowNoIcons=true
+DefaultGroupName=Mercurial
+PrivilegesRequired=none
+ChangesEnvironment=true
+
+[Files]
+Source: contrib\mercurial.el; DestDir: {app}/Contrib
+Source: contrib\vim\*.*; DestDir: {app}/Contrib/Vim
+Source: contrib\zsh_completion; DestDir: {app}/Contrib
+Source: contrib\bash_completion; DestDir: {app}/Contrib
+Source: contrib\tcsh_completion; DestDir: {app}/Contrib
+Source: contrib\tcsh_completion_build.sh; DestDir: {app}/Contrib
+Source: contrib\hgk; DestDir: {app}/Contrib; DestName: hgk.tcl
+Source: contrib\xml.rnc; DestDir: {app}/Contrib
+Source: contrib\mq.el; DestDir: {app}/Contrib
+Source: contrib\hgweb.fcgi; DestDir: {app}/Contrib
+Source: contrib\hgweb.wsgi; DestDir: {app}/Contrib
+Source: contrib\win32\ReadMe.html; DestDir: {app}; Flags: isreadme
+Source: contrib\win32\postinstall.txt; DestDir: {app}; DestName: ReleaseNotes.txt
+Source: dist\hg.exe; DestDir: {app}; AfterInstall: Touch('{app}\hg.exe.local')
+Source: dist\lib\*.dll; Destdir: {app}\lib
+Source: dist\lib\*.pyd; Destdir: {app}\lib
+Source: dist\python*.dll; Destdir: {app}; Flags: skipifsourcedoesntexist
+Source: dist\msvc*.dll; DestDir: {app}; Flags: skipifsourcedoesntexist
+Source: dist\Microsoft.VC*.CRT.manifest; DestDir: {app}; Flags: skipifsourcedoesntexist
+Source: dist\lib\library.zip; DestDir: {app}\lib
+Source: doc\*.html; DestDir: {app}\Docs
+Source: doc\style.css; DestDir: {app}\Docs
+Source: mercurial\help\*.txt; DestDir: {app}\help
+Source: mercurial\help\internals\*.txt; DestDir: {app}\help\internals
+Source: mercurial\default.d\*.rc; DestDir: {app}\default.d
+Source: mercurial\locale\*.*; DestDir: {app}\locale; Flags: recursesubdirs createallsubdirs skipifsourcedoesntexist
+Source: mercurial\templates\*.*; DestDir: {app}\Templates; Flags: recursesubdirs createallsubdirs
+Source: CONTRIBUTORS; DestDir: {app}; DestName: Contributors.txt
+Source: COPYING; DestDir: {app}; DestName: Copying.txt
+
+[INI]
+Filename: {app}\Mercurial.url; Section: InternetShortcut; Key: URL; String: https://mercurial-scm.org/
+Filename: {app}\default.d\editor.rc; Section: ui; Key: editor; String: notepad
+
+[UninstallDelete]
+Type: files; Name: {app}\Mercurial.url
+Type: filesandordirs; Name: {app}\default.d
+Type: files; Name: "{app}\hg.exe.local"
+
+[Icons]
+Name: {group}\Uninstall Mercurial; Filename: {uninstallexe}
+Name: {group}\Mercurial Command Reference; Filename: {app}\Docs\hg.1.html
+Name: {group}\Mercurial Configuration Files; Filename: {app}\Docs\hgrc.5.html
+Name: {group}\Mercurial Ignore Files; Filename: {app}\Docs\hgignore.5.html
+Name: {group}\Mercurial Web Site; Filename: {app}\Mercurial.url
+
+[Tasks]
+Name: modifypath; Description: Add the installation path to the search path; Flags: unchecked
+
+[Code]
+procedure Touch(fn: String);
+begin
+  SaveStringToFile(ExpandConstant(fn), '', False);
+end;
+
+const
+    ModPathName = 'modifypath';
+    ModPathType = 'user';
+
+function ModPathDir(): TArrayOfString;
+begin
+    setArrayLength(Result, 1);
+    Result[0] := ExpandConstant('{app}');
+end;
+#include "modpath.iss"
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/packaging/inno/modpath.iss	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,219 @@
+// ----------------------------------------------------------------------------
+//
+// Inno Setup Ver:	5.4.2
+// Script Version:	1.4.2
+// Author:			Jared Breland <jbreland@legroom.net>
+// Homepage:		http://www.legroom.net/software
+// License:			GNU Lesser General Public License (LGPL), version 3
+//						http://www.gnu.org/licenses/lgpl.html
+//
+// Script Function:
+//	Allow modification of environmental path directly from Inno Setup installers
+//
+// Instructions:
+//	Copy modpath.iss to the same directory as your setup script
+//
+//	Add this statement to your [Setup] section
+//		ChangesEnvironment=true
+//
+//	Add this statement to your [Tasks] section
+//	You can change the Description or Flags
+//	You can change the Name, but it must match the ModPathName setting below
+//		Name: modifypath; Description: &Add application directory to your environmental path; Flags: unchecked
+//
+//	Add the following to the end of your [Code] section
+//	ModPathName defines the name of the task defined above
+//	ModPathType defines whether the 'user' or 'system' path will be modified;
+//		this will default to user if anything other than system is set
+//	setArrayLength must specify the total number of dirs to be added
+//	Result[0] contains first directory, Result[1] contains second, etc.
+//		const
+//			ModPathName = 'modifypath';
+//			ModPathType = 'user';
+//
+//		function ModPathDir(): TArrayOfString;
+//		begin
+//			setArrayLength(Result, 1);
+//			Result[0] := ExpandConstant('{app}');
+//		end;
+//		#include "modpath.iss"
+// ----------------------------------------------------------------------------
+
+procedure ModPath();
+var
+	oldpath:	String;
+	newpath:	String;
+	updatepath:	Boolean;
+	pathArr:	TArrayOfString;
+	aExecFile:	String;
+	aExecArr:	TArrayOfString;
+	i, d:		Integer;
+	pathdir:	TArrayOfString;
+	regroot:	Integer;
+	regpath:	String;
+
+begin
+	// Get constants from main script and adjust behavior accordingly
+	// ModPathType MUST be 'system' or 'user'; force 'user' if invalid
+	if ModPathType = 'system' then begin
+		regroot := HKEY_LOCAL_MACHINE;
+		regpath := 'SYSTEM\CurrentControlSet\Control\Session Manager\Environment';
+	end else begin
+		regroot := HKEY_CURRENT_USER;
+		regpath := 'Environment';
+	end;
+
+	// Get array of new directories and act on each individually
+	pathdir := ModPathDir();
+	for d := 0 to GetArrayLength(pathdir)-1 do begin
+		updatepath := true;
+
+		// Modify WinNT path
+		if UsingWinNT() = true then begin
+
+			// Get current path, split into an array
+			RegQueryStringValue(regroot, regpath, 'Path', oldpath);
+			oldpath := oldpath + ';';
+			i := 0;
+
+			while (Pos(';', oldpath) > 0) do begin
+				SetArrayLength(pathArr, i+1);
+				pathArr[i] := Copy(oldpath, 0, Pos(';', oldpath)-1);
+				oldpath := Copy(oldpath, Pos(';', oldpath)+1, Length(oldpath));
+				i := i + 1;
+
+				// Check if current directory matches app dir
+				if pathdir[d] = pathArr[i-1] then begin
+					// if uninstalling, remove dir from path
+					if IsUninstaller() = true then begin
+						continue;
+					// if installing, flag that dir already exists in path
+					end else begin
+						updatepath := false;
+					end;
+				end;
+
+				// Add current directory to new path
+				if i = 1 then begin
+					newpath := pathArr[i-1];
+				end else begin
+					newpath := newpath + ';' + pathArr[i-1];
+				end;
+			end;
+
+			// Append app dir to path if not already included
+			if (IsUninstaller() = false) AND (updatepath = true) then
+				newpath := newpath + ';' + pathdir[d];
+
+			// Write new path
+			RegWriteStringValue(regroot, regpath, 'Path', newpath);
+
+		// Modify Win9x path
+		end else begin
+
+			// Convert to shortened dirname
+			pathdir[d] := GetShortName(pathdir[d]);
+
+			// If autoexec.bat exists, check if app dir already exists in path
+			aExecFile := 'C:\AUTOEXEC.BAT';
+			if FileExists(aExecFile) then begin
+				LoadStringsFromFile(aExecFile, aExecArr);
+				for i := 0 to GetArrayLength(aExecArr)-1 do begin
+					if IsUninstaller() = false then begin
+						// If app dir already exists while installing, skip add
+						if (Pos(pathdir[d], aExecArr[i]) > 0) then begin
+							updatepath := false;
+							break;
+						end;
+					end else begin
+						// If app dir exists and = what we originally set, then delete at uninstall
+						if aExecArr[i] = 'SET PATH=%PATH%;' + pathdir[d] then
+							aExecArr[i] := '';
+					end;
+				end;
+			end;
+
+			// If app dir not found, or autoexec.bat didn't exist, then (create and) append to current path
+			if (IsUninstaller() = false) AND (updatepath = true) then begin
+				SaveStringToFile(aExecFile, #13#10 + 'SET PATH=%PATH%;' + pathdir[d], True);
+
+			// If uninstalling, write the full autoexec out
+			end else begin
+				SaveStringsToFile(aExecFile, aExecArr, False);
+			end;
+		end;
+	end;
+end;
+
+// Split a string into an array using passed delimiter
+procedure MPExplode(var Dest: TArrayOfString; Text: String; Separator: String);
+var
+	i: Integer;
+begin
+	i := 0;
+	repeat
+		SetArrayLength(Dest, i+1);
+		if Pos(Separator,Text) > 0 then	begin
+			Dest[i] := Copy(Text, 1, Pos(Separator, Text)-1);
+			Text := Copy(Text, Pos(Separator,Text) + Length(Separator), Length(Text));
+			i := i + 1;
+		end else begin
+			 Dest[i] := Text;
+			 Text := '';
+		end;
+	until Length(Text)=0;
+end;
+
+
+procedure CurStepChanged(CurStep: TSetupStep);
+var
+	taskname:	String;
+begin
+	taskname := ModPathName;
+	if CurStep = ssPostInstall then
+		if IsTaskSelected(taskname) then
+			ModPath();
+end;
+
+procedure CurUninstallStepChanged(CurUninstallStep: TUninstallStep);
+var
+	aSelectedTasks:	TArrayOfString;
+	i:				Integer;
+	taskname:		String;
+	regpath:		String;
+	regstring:		String;
+	appid:			String;
+begin
+	// only run during actual uninstall
+	if CurUninstallStep = usUninstall then begin
+		// get list of selected tasks saved in registry at install time
+		appid := '{#emit SetupSetting("AppId")}';
+		if appid = '' then appid := '{#emit SetupSetting("AppName")}';
+		regpath := ExpandConstant('Software\Microsoft\Windows\CurrentVersion\Uninstall\'+appid+'_is1');
+		RegQueryStringValue(HKLM, regpath, 'Inno Setup: Selected Tasks', regstring);
+		if regstring = '' then RegQueryStringValue(HKCU, regpath, 'Inno Setup: Selected Tasks', regstring);
+
+		// check each task; if matches modpath taskname, trigger patch removal
+		if regstring <> '' then begin
+			taskname := ModPathName;
+			MPExplode(aSelectedTasks, regstring, ',');
+			if GetArrayLength(aSelectedTasks) > 0 then begin
+				for i := 0 to GetArrayLength(aSelectedTasks)-1 do begin
+					if comparetext(aSelectedTasks[i], taskname) = 0 then
+						ModPath();
+				end;
+			end;
+		end;
+	end;
+end;
+
+function NeedRestart(): Boolean;
+var
+	taskname:	String;
+begin
+	taskname := ModPathName;
+	if IsTaskSelected(taskname) and not UsingWinNT() then begin
+		Result := True;
+	end else begin
+		Result := False;
+	end;
+end;
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/packaging/inno/readme.rst	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,61 @@
+Requirements
+============
+
+Building the Inno installer requires a Windows machine.
+
+The following system dependencies must be installed:
+
+* Python 2.7 (download from https://www.python.org/downloads/)
+* Microsoft Visual C++ Compiler for Python 2.7
+  (https://www.microsoft.com/en-us/download/details.aspx?id=44266)
+* Inno Setup (http://jrsoftware.org/isdl.php) version 5.4 or newer.
+  Be sure to install the Inno Setup Preprocessor, an optional
+  component of the Inno Setup installer that this build requires.
+* Python 3.5+ (to run the ``build.py`` script)
+
+Building
+========
+
+The ``build.py`` script automates the process of producing an
+Inno installer. It manages fetching and configuring the
+non-system dependencies (such as py2exe, gettext, and various
+Python packages).
+
+The script requires an activated ``Visual C++ 2008`` command prompt.
+A shortcut to such a prompt was installed with ``Microsoft Visual C++
+Compiler for Python 2.7``. From your Start Menu, look for
+``Microsoft Visual C++ Compiler Package for Python 2.7`` then launch
+either ``Visual C++ 2008 32-bit Command Prompt`` or
+``Visual C++ 2008 64-bit Command Prompt``.
+
+From the prompt, change to the Mercurial source directory, e.g.
+``cd c:\src\hg``.
+
+Next, invoke ``build.py`` to produce an Inno installer. You will
+need to supply the path to the Python interpreter to use::
+
+   $ python3.exe contrib\packaging\inno\build.py \
+       --python c:\python27\python.exe
+
+.. note::
+
+   The script validates that the Visual C++ environment is
+   active and that the architecture of the specified Python
+   interpreter matches the Visual C++ environment and errors
+   if not.
+
+If everything runs as intended, dependencies will be fetched and
+configured into the ``build`` sub-directory, Mercurial will be built,
+and an installer placed in the ``dist`` sub-directory. The final
+line of output should print the name of the generated installer.
+
+Additional options may be configured. Run ``build.py --help`` to
+see a list of program flags.
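+
+For example, a hypothetical invocation that overrides the Inno Setup
+location and the version string (both flags are defined by
+``build.py``) could look like::
+
+   $ python3.exe contrib\packaging\inno\build.py \
+       --python c:\python27\python.exe \
+       --iscc "c:\Program Files (x86)\Inno Setup 5\ISCC.exe" \
+       --version 5.0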
+
+MinGW
+=====
+
+It is theoretically possible to generate an installer that uses
+MinGW. This isn't well tested and ``build.py`` may not properly
+support it. See old versions of this file in version control for
+potentially useful hints as to how to achieve this.
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/packaging/inno/requirements.txt	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,38 @@
+#
+# This file is autogenerated by pip-compile
+# To update, run:
+#
+#    pip-compile --generate-hashes contrib/packaging/inno/requirements.txt.in -o contrib/packaging/inno/requirements.txt
+#
+certifi==2018.11.29 \
+    --hash=sha256:47f9c83ef4c0c621eaef743f133f09fa8a74a9b75f037e8624f83bd1b6626cb7 \
+    --hash=sha256:993f830721089fef441cdfeb4b2c8c9df86f0c63239f06bd025a76a7daddb033 \
+    # via dulwich
+configparser==3.7.3 \
+    --hash=sha256:27594cf4fc279f321974061ac69164aaebd2749af962ac8686b20503ac0bcf2d \
+    --hash=sha256:9d51fe0a382f05b6b117c5e601fc219fede4a8c71703324af3f7d883aef476a3 \
+    # via entrypoints
+docutils==0.14 \
+    --hash=sha256:02aec4bd92ab067f6ff27a38a38a41173bf01bed8f89157768c1573f53e474a6 \
+    --hash=sha256:51e64ef2ebfb29cae1faa133b3710143496eca21c530f3f71424d77687764274 \
+    --hash=sha256:7a4bd47eaf6596e1295ecb11361139febe29b084a87bf005bf899f9a42edc3c6
+dulwich==0.19.11 \
+    --hash=sha256:afbe070f6899357e33f63f3f3696e601731fef66c64a489dea1bc9f539f4a725
+entrypoints==0.3 \
+    --hash=sha256:589f874b313739ad35be6e0cd7efde2a4e9b6fea91edcc34e58ecbb8dbe56d19 \
+    --hash=sha256:c70dd71abe5a8c85e55e12c19bd91ccfeec11a6e99044204511f9ed547d48451 \
+    # via keyring
+keyring==18.0.0 \
+    --hash=sha256:12833d2b05d2055e0e25931184af9cd6a738f320a2264853cabbd8a3a0f0b65d \
+    --hash=sha256:ca33f5ccc542b9ffaa196ee9a33488069e5e7eac77d5b81969f8a3ce74d0230c
+pygments==2.3.1 \
+    --hash=sha256:5ffada19f6203563680669ee7f53b64dabbeb100eb51b61996085e99c03b284a \
+    --hash=sha256:e8218dd399a61674745138520d0d4cf2621d7e032439341bc3f647bff125818d
+pywin32-ctypes==0.2.0 \
+    --hash=sha256:24ffc3b341d457d48e8922352130cf2644024a4ff09762a2261fd34c36ee5942 \
+    --hash=sha256:9dc2d991b3479cc2df15930958b674a48a227d5361d413827a4cfd0b5876fc98 \
+    # via keyring
+urllib3==1.24.1 \
+    --hash=sha256:61bf29cada3fc2fbefad4fdf059ea4bd1b4a86d2b6d15e1c7c0b582b9752fe39 \
+    --hash=sha256:de9529817c93f27c8ccbfead6985011db27bd0ddfcdb2d86f3f663385c6a9c22 \
+    # via dulwich
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/packaging/inno/requirements.txt.in	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,4 @@
+docutils
+dulwich
+keyring
+pygments
Binary file contrib/packaging/wix/COPYING.rtf has changed
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/packaging/wix/build.py	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,84 @@
+#!/usr/bin/env python3
+# Copyright 2019 Gregory Szorc <gregory.szorc@gmail.com>
+#
+# This software may be used and distributed according to the terms of the
+# GNU General Public License version 2 or any later version.
+
+# no-check-code because Python 3 native.
+
+"""Code to build Mercurial WiX installer."""
+
+import argparse
+import os
+import pathlib
+import sys
+
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser()
+
+    parser.add_argument('--name',
+                        help='Application name',
+                        default='Mercurial')
+    parser.add_argument('--python',
+                        help='Path to Python executable to use',
+                        required=True)
+    parser.add_argument('--sign-sn',
+                        help='Subject name (or fragment thereof) of certificate '
+                             'to use for signing')
+    parser.add_argument('--sign-cert',
+                        help='Path to certificate to use for signing')
+    parser.add_argument('--sign-password',
+                        help='Password for signing certificate')
+    parser.add_argument('--sign-timestamp-url',
+                        help='URL of timestamp server to use for signing')
+    parser.add_argument('--version',
+                        help='Version string to use')
+    parser.add_argument('--extra-packages-script',
+                        help=('Script to execute to include extra packages in '
+                              'py2exe binary.'))
+    parser.add_argument('--extra-wxs',
+                        help='CSV of path_to_wxs_file=working_dir_for_wxs_file')
+    parser.add_argument('--extra-features',
+                        help=('CSV of extra feature names to include '
+                              'in the installer from the extra wxs files'))
+
+    args = parser.parse_args()
+
+    here = pathlib.Path(os.path.abspath(os.path.dirname(__file__)))
+    source_dir = here.parent.parent.parent
+
+    sys.path.insert(0, str(source_dir / 'contrib' / 'packaging'))
+
+    from hgpackaging.wix import (
+        build_installer,
+        build_signed_installer,
+    )
+
+    fn = build_installer
+    kwargs = {
+        'source_dir': source_dir,
+        'python_exe': pathlib.Path(args.python),
+        'version': args.version,
+    }
+
+    if not os.path.isabs(args.python):
+        raise Exception('--python arg must be an absolute path')
+
+    if args.extra_packages_script:
+        kwargs['extra_packages_script'] = args.extra_packages_script
+    if args.extra_wxs:
+        kwargs['extra_wxs'] = dict(
+            thing.split("=") for thing in args.extra_wxs.split(','))
+    if args.extra_features:
+        kwargs['extra_features'] = args.extra_features.split(',')
+
+    if args.sign_sn or args.sign_cert:
+        fn = build_signed_installer
+        kwargs['name'] = args.name
+        kwargs['subject_name'] = args.sign_sn
+        kwargs['cert_path'] = args.sign_cert
+        kwargs['cert_password'] = args.sign_password
+        kwargs['timestamp_url'] = args.sign_timestamp_url
+
+    fn(**kwargs)
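+
+    # Hypothetical example invocation (all paths below are placeholders):
+    #
+    #   python3 contrib\packaging\wix\build.py \
+    #       --python c:\python27\python.exe \
+    #       --extra-wxs c:\work\extra.wxs=c:\work \
+    #       --extra-features extraFeature
+    #
+    # --extra-wxs is split above into a {wxs_path: working_dir} dict and
+    # --extra-features into a list of feature names; both are forwarded
+    # unchanged to hgpackaging.wix.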
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/packaging/wix/contrib.wxs	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,43 @@
+<?xml version="1.0" encoding="utf-8"?>
+<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
+
+  <?include guids.wxi ?>
+  <?include defines.wxi ?>
+
+  <Fragment>
+    <ComponentGroup Id="contribFolder">
+      <ComponentRef Id="contrib" />
+      <ComponentRef Id="contrib.vim" />
+    </ComponentGroup>
+  </Fragment>
+
+  <Fragment>
+    <DirectoryRef Id="INSTALLDIR">
+      <Directory Id="contribdir" Name="contrib" FileSource="$(var.SourceDir)">
+        <Component Id="contrib" Guid="$(var.contrib.guid)" Win64='$(var.IsX64)'>
+          <File Name="bash_completion" KeyPath="yes" />
+          <File Name="hgk" />
+          <File Name="hgweb.fcgi" />
+          <File Name="hgweb.wsgi" />
+          <File Name="logo-droplets.svg" />
+          <File Name="mercurial.el" />
+          <File Name="tcsh_completion" />
+          <File Name="tcsh_completion_build.sh" />
+          <File Name="xml.rnc" />
+          <File Name="zsh_completion" />
+        </Component>
+        <Directory Id="vimdir" Name="vim">
+          <Component Id="contrib.vim" Guid="$(var.contrib.vim.guid)" Win64='$(var.IsX64)'>
+            <File Name="hg-menu.vim" KeyPath="yes" />
+            <File Name="HGAnnotate.vim" />
+            <File Name="hgcommand.vim" />
+            <File Name="patchreview.txt" />
+            <File Name="patchreview.vim" />
+            <File Name="hgtest.vim" />
+          </Component>
+        </Directory>
+      </Directory>
+    </DirectoryRef>
+  </Fragment>
+
+</Wix>
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/packaging/wix/defines.wxi	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,9 @@
+<Include>
+
+  <?if $(var.Platform) = "x64" ?>
+    <?define IsX64 = yes ?>
+  <?else?>
+    <?define IsX64 = no ?>
+  <?endif?>
+
+</Include>
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/packaging/wix/dist.wxs	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,15 @@
+<?xml version="1.0" encoding="utf-8"?>
+<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
+
+  <?include guids.wxi ?>
+  <?include defines.wxi ?>
+
+  <Fragment>
+    <DirectoryRef Id="INSTALLDIR" FileSource="$(var.SourceDir)">
+      <Component Id="distOutput" Guid="$(var.dist.guid)" Win64='$(var.IsX64)'>
+        <File Name="python27.dll" KeyPath="yes" />
+      </Component>
+    </DirectoryRef>
+  </Fragment>
+
+</Wix>
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/packaging/wix/doc.wxs	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,50 @@
+<?xml version="1.0" encoding="utf-8"?>
+<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
+
+  <?include guids.wxi ?>
+  <?include defines.wxi ?>
+
+  <Fragment>
+    <ComponentGroup Id="docFolder">
+      <ComponentRef Id="doc.hg.1.html" />
+      <ComponentRef Id="doc.hgignore.5.html" />
+      <ComponentRef Id="doc.hgrc.5.html" />
+      <ComponentRef Id="doc.style.css" />
+    </ComponentGroup>
+  </Fragment>
+
+  <Fragment>
+    <DirectoryRef Id="INSTALLDIR">
+      <Directory Id="docdir" Name="doc" FileSource="$(var.SourceDir)">
+        <Component Id="doc.hg.1.html" Guid="$(var.doc.hg.1.html.guid)" Win64='$(var.IsX64)'>
+          <File Name="hg.1.html" KeyPath="yes">
+            <Shortcut Id="hg1StartMenu" Directory="ProgramMenuDir"
+                      Name="Mercurial Command Reference"
+                      Icon="hgIcon.ico" IconIndex="0" Advertise="yes"
+            />
+          </File>
+        </Component>
+        <Component Id="doc.hgignore.5.html" Guid="$(var.doc.hgignore.5.html.guid)" Win64='$(var.IsX64)'>
+          <File Name="hgignore.5.html" KeyPath="yes">
+            <Shortcut Id="hgignore5StartMenu" Directory="ProgramMenuDir"
+                      Name="Mercurial Ignore Files"
+                      Icon="hgIcon.ico" IconIndex="0" Advertise="yes"
+            />
+          </File>
+        </Component>
+        <Component Id="doc.hgrc.5.html" Guid="$(var.doc.hgrc.5.html)" Win64='$(var.IsX64)'>
+          <File Name="hgrc.5.html" KeyPath="yes">
+            <Shortcut Id="hgrc5StartMenu" Directory="ProgramMenuDir"
+                      Name="Mercurial Configuration Files"
+                      Icon="hgIcon.ico" IconIndex="0" Advertise="yes"
+            />
+          </File>
+        </Component>
+        <Component Id="doc.style.css" Guid="$(var.doc.style.css)" Win64='$(var.IsX64)'>
+          <File Name="style.css" KeyPath="yes" />
+        </Component>
+      </Directory>
+    </DirectoryRef>
+  </Fragment>
+
+</Wix>
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/packaging/wix/guids.wxi	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,52 @@
+<Include>
+  <!-- These are component GUIDs used for Mercurial installers.
+       YOU MUST CHANGE ALL GUIDs below when copying this file
+       and replace 'Mercurial' in this notice with the name of
+       your project. Component GUIDs have global namespace!      -->
+
+  <!-- contrib.wxs -->
+  <?define contrib.guid = {4E11FFC2-E2F7-482A-8460-9394B5489F02} ?>
+  <?define contrib.vim.guid = {BB04903A-652D-4C4F-9590-2BD07A2304F2} ?>
+
+  <!-- dist.wxs -->
+  <?define dist.guid = {CE405FE6-CD1E-4873-9C9A-7683AE5A3D90} ?>
+  <?define lib.guid = {877633b5-0b7e-4b46-8f1c-224a61733297} ?>
+
+  <!-- doc.wxs -->
+  <?define doc.hg.1.html.guid = {AAAA3FDA-EDC5-4220-B59D-D342722358A2} ?>
+  <?define doc.hgignore.5.html.guid = {AA9118C4-F3A0-4429-A5F4-5A1906B2D67F} ?>
+  <?define doc.hgrc.5.html = {E0CEA1EB-FA01-408c-844B-EE5965165BAE} ?>
+  <?define doc.style.css = {172F8262-98E0-4711-BD39-4DAE0D77EF05} ?>
+
+  <!-- help.wxs -->
+  <?define help.root.guid = {9FA957DB-6DFE-44f2-AD03-293B2791CF17} ?>
+  <?define help.internals.guid = {2DD7669D-0DB8-4C39-9806-78E6475E7ACC} ?>
+
+  <!-- i18n.wxs -->
+  <?define i18nFolder.guid = {1BF8026D-CF7C-4174-AEE6-D6B7BF119248} ?>
+
+  <!-- templates.wxs -->
+  <?define templates.root.guid = {437FD55C-7756-4EA0-87E5-FDBE75DC8595} ?>
+  <?define templates.atom.guid = {D30E14A5-8AF0-4268-8B00-00BEE9E09E39} ?>
+  <?define templates.coal.guid = {B63CCAAB-4EAF-43b4-901E-4BD13F5B78FC} ?>
+  <?define templates.gitweb.guid = {827334AF-1EFD-421B-962C-5660A068F612} ?>
+  <?define templates.json.guid = {F535BE7A-EC34-46E0-B9BE-013F3DBAFB19} ?>
+  <?define templates.monoblue.guid = {8060A1E4-BD4C-453E-92CB-9536DC44A9E3} ?>
+  <?define templates.paper.guid = {61AB1DE9-645F-46ED-8AF8-0CF02267FFBB} ?>
+  <?define templates.raw.guid = {834DF8D7-9784-43A6-851D-A96CE1B3575B} ?>
+  <?define templates.rss.guid = {9338FA09-E128-4B1C-B723-1142DBD09E14} ?>
+  <?define templates.spartan.guid = {80222625-FA8F-44b1-86CE-1781EF375D09} ?>
+  <?define templates.static.guid = {6B3D7C24-98DA-4B67-9F18-35F77357B0B4} ?>
+
+  <!-- mercurial.wxs -->
+  <?define ProductUpgradeCode = {A1CC6134-E945-4399-BE36-EB0017FDF7CF} ?>
+
+  <?define ComponentMainExecutableGUID = {D102B8FA-059B-4ACC-9FA3-8C78C3B58EEF} ?>
+
+  <?define ReadMe.guid = {56A8E372-991D-4DCA-B91D-93D775974CF5} ?>
+  <?define COPYING.guid = {B7801DBA-1C49-4BF4-91AD-33C65F5C7895} ?>
+  <?define mercurial.rc.guid = {1D5FAEEE-7E6E-43B1-9F7F-802714316B15} ?>
+  <?define mergetools.rc.guid = {E8A1DC29-FF40-4B5F-BD12-80B9F7BF0CCD} ?>
+  <?define ProgramMenuDir.guid = {D5A63320-1238-489B-B68B-CF053E9577CA} ?>
+
+</Include>
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/packaging/wix/help.wxs	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,64 @@
+<?xml version="1.0" encoding="utf-8"?>
+<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
+
+  <?include guids.wxi ?>
+  <?include defines.wxi ?>
+
+  <Fragment>
+    <ComponentGroup Id='helpFolder'>
+      <ComponentRef Id='help.root' />
+      <ComponentRef Id='help.internals' />
+    </ComponentGroup>
+  </Fragment>
+
+  <Fragment>
+    <DirectoryRef Id="INSTALLDIR">
+      <Directory Id="helpdir" Name="help" FileSource="$(var.SourceDir)">
+        <Component Id="help.root" Guid="$(var.help.root.guid)" Win64='$(var.IsX64)'>
+          <File Name="bundlespec.txt" />
+          <File Name="color.txt" />
+          <File Name="config.txt" KeyPath="yes" />
+          <File Name="dates.txt" />
+          <File Name="deprecated.txt" />
+          <File Name="diffs.txt" />
+          <File Name="environment.txt" />
+          <File Name="extensions.txt" />
+          <File Name="filesets.txt" />
+          <File Name="flags.txt" />
+          <File Name="glossary.txt" />
+          <File Name="hgignore.txt" />
+          <File Name="hgweb.txt" />
+          <File Name="merge-tools.txt" />
+          <File Name="pager.txt" />
+          <File Name="patterns.txt" />
+          <File Name="phases.txt" />
+          <File Name="revisions.txt" />
+          <File Name="scripting.txt" />
+          <File Name="subrepos.txt" />
+          <File Name="templates.txt" />
+          <File Name="urls.txt" />
+        </Component>
+
+        <Directory Id="help.internaldir" Name="internals">
+          <Component Id="help.internals" Guid="$(var.help.internals.guid)" Win64='$(var.IsX64)'>
+            <File Id="internals.bundle2.txt"      Name="bundle2.txt" />
+            <File Id="internals.bundles.txt"      Name="bundles.txt" KeyPath="yes" />
+            <File Id="internals.cbor.txt"         Name="cbor.txt" />
+            <File Id="internals.censor.txt"       Name="censor.txt" />
+            <File Id="internals.changegroups.txt" Name="changegroups.txt" />
+            <File Id="internals.config.txt"       Name="config.txt" />
+            <File Id="internals.extensions.txt"   Name="extensions.txt" />
+            <File Id="internals.linelog.txt"      Name="linelog.txt" />
+            <File Id="internals.requirements.txt" Name="requirements.txt" />
+            <File Id="internals.revlogs.txt"      Name="revlogs.txt" />
+            <File Id="internals.wireprotocol.txt" Name="wireprotocol.txt" />
+            <File Id="internals.wireprotocolrpc.txt" Name="wireprotocolrpc.txt" />
+            <File Id="internals.wireprotocolv2.txt" Name="wireprotocolv2.txt" />
+          </Component>
+        </Directory>
+
+      </Directory>
+    </DirectoryRef>
+  </Fragment>
+
+</Wix>
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/packaging/wix/i18n.wxs	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,26 @@
+<?xml version="1.0" encoding="utf-8"?>
+<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
+
+  <?include guids.wxi ?>
+  <?include defines.wxi ?>
+
+  <?define hg_po_langs =
+    da;de;el;fr;it;ja;pt_BR;ro;ru;sv;zh_CN;zh_TW
+  ?>
+
+  <Fragment>
+    <DirectoryRef Id="INSTALLDIR">
+      <Directory Id="i18ndir" Name="i18n" FileSource="$(var.SourceDir)">
+        <Component Id="i18nFolder" Guid="$(var.i18nFolder.guid)" Win64='$(var.IsX64)'>
+          <File Name="hggettext" KeyPath="yes" />
+          <?foreach LANG in $(var.hg_po_langs) ?>
+            <File Id="hg.$(var.LANG).po"
+                  Name="$(var.LANG).po"
+            />
+          <?endforeach?>
+        </Component>
+      </Directory>
+    </DirectoryRef>
+  </Fragment>
+
+</Wix>
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/packaging/wix/locale.wxs	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,34 @@
+<?xml version="1.0" encoding="utf-8"?>
+<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
+
+  <?include defines.wxi ?>
+
+  <?define hglocales =
+    da;de;el;fr;it;ja;pt_BR;ro;ru;sv;zh_CN;zh_TW
+  ?>
+
+  <Fragment>
+    <ComponentGroup Id="localeFolder">
+      <?foreach LOC in $(var.hglocales) ?>
+        <ComponentRef Id="hg.locale.$(var.LOC)"/>
+      <?endforeach?>
+    </ComponentGroup>
+  </Fragment>
+
+  <Fragment>
+    <DirectoryRef Id="INSTALLDIR">
+      <Directory Id="localedir" Name="locale" FileSource="$(var.SourceDir)">
+        <?foreach LOC in $(var.hglocales) ?>
+          <Directory Id="hg.locale.$(var.LOC)" Name="$(var.LOC)">
+            <Directory Id="hg.locale.$(var.LOC).LC_MESSAGES" Name="LC_MESSAGES">
+              <Component Id="hg.locale.$(var.LOC)" Guid="*" Win64='$(var.IsX64)'>
+                <File Id="hg.mo.$(var.LOC)" Name="hg.mo" KeyPath="yes" />
+              </Component>
+            </Directory>
+          </Directory>
+        <?endforeach?>
+      </Directory>
+    </DirectoryRef>
+  </Fragment>
+
+</Wix>
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/packaging/wix/mercurial.wxs	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,167 @@
+<?xml version='1.0' encoding='windows-1252'?>
+<Wix xmlns='http://schemas.microsoft.com/wix/2006/wi'>
+
+  <!-- Copyright 2010 Steve Borho <steve@borho.org>
+
+  This software may be used and distributed according to the terms of the
+  GNU General Public License version 2 or any later version. -->
+
+  <?include guids.wxi ?>
+  <?include defines.wxi ?>
+
+  <?if $(var.Platform) = "x64" ?>
+    <?define PFolder = ProgramFiles64Folder ?>
+  <?else?>
+    <?define PFolder = ProgramFilesFolder ?>
+  <?endif?>
+
+  <Product Id='*'
+    Name='Mercurial $(var.Version) ($(var.Platform))'
+    UpgradeCode='$(var.ProductUpgradeCode)'
+    Language='1033' Codepage='1252' Version='$(var.Version)'
+    Manufacturer='Matt Mackall and others'>
+
+    <Package Id='*'
+      Keywords='Installer'
+      Description="Mercurial distributed SCM (version $(var.Version))"
+      Comments='$(var.Comments)'
+      Platform='$(var.Platform)'
+      Manufacturer='Matt Mackall and others'
+      InstallerVersion='300' Languages='1033' Compressed='yes' SummaryCodepage='1252' />
+
+    <Media Id='1' Cabinet='mercurial.cab' EmbedCab='yes' DiskPrompt='CD-ROM #1'
+           CompressionLevel='high' />
+    <Property Id='DiskPrompt' Value="Mercurial $(var.Version) Installation [1]" />
+
+    <Condition Message='Mercurial MSI installers require Windows XP or higher'>
+        VersionNT >= 501
+    </Condition>
+
+    <Property Id="INSTALLDIR">
+      <ComponentSearch Id='SearchForMainExecutableComponent'
+                       Guid='$(var.ComponentMainExecutableGUID)' />
+    </Property>
+
+    <!--Property Id='ARPCOMMENTS'>any comments</Property-->
+    <Property Id='ARPCONTACT'>mercurial@mercurial-scm.org</Property>
+    <Property Id='ARPHELPLINK'>https://mercurial-scm.org/wiki/</Property>
+    <Property Id='ARPURLINFOABOUT'>https://mercurial-scm.org/about/</Property>
+    <Property Id='ARPURLUPDATEINFO'>https://mercurial-scm.org/downloads/</Property>
+    <Property Id='ARPHELPTELEPHONE'>https://mercurial-scm.org/wiki/Support</Property>
+    <Property Id='ARPPRODUCTICON'>hgIcon.ico</Property>
+
+    <Property Id='INSTALLEDMERCURIALPRODUCTS' Secure='yes'></Property>
+    <Property Id='REINSTALLMODE'>amus</Property>
+
+    <!--Auto-accept the license page-->
+    <Property Id='LicenseAccepted'>1</Property>
+
+    <Directory Id='TARGETDIR' Name='SourceDir'>
+      <Directory Id='$(var.PFolder)' Name='PFiles'>
+        <Directory Id='INSTALLDIR' Name='Mercurial'>
+          <Component Id='MainExecutable' Guid='$(var.ComponentMainExecutableGUID)' Win64='$(var.IsX64)'>
+            <File Id='hgEXE' Name='hg.exe' Source='dist\hg.exe' KeyPath='yes' />
+            <Environment Id="Environment" Name="PATH" Part="last" System="yes"
+                         Permanent="no" Value="[INSTALLDIR]" Action="set" />
+          </Component>
+          <Component Id='ReadMe' Guid='$(var.ReadMe.guid)' Win64='$(var.IsX64)'>
+              <File Id='ReadMe' Name='ReadMe.html' Source='contrib\win32\ReadMe.html'
+                    KeyPath='yes'/>
+          </Component>
+          <Component Id='COPYING' Guid='$(var.COPYING.guid)' Win64='$(var.IsX64)'>
+            <File Id='COPYING' Name='COPYING.rtf' Source='contrib\packaging\wix\COPYING.rtf'
+                  KeyPath='yes'/>
+          </Component>
+
+          <Directory Id='HGRCD' Name='hgrc.d'>
+            <Component Id='mercurial.rc' Guid='$(var.mercurial.rc.guid)' Win64='$(var.IsX64)'>
+              <File Id='mercurial.rc' Name='Mercurial.rc' Source='contrib\win32\mercurial.ini'
+                    ReadOnly='yes' KeyPath='yes'/>
+            </Component>
+            <Component Id='mergetools.rc' Guid='$(var.mergetools.rc.guid)' Win64='$(var.IsX64)'>
+              <File Id='mergetools.rc' Name='MergeTools.rc' Source='mercurial\default.d\mergetools.rc'
+                    ReadOnly='yes' KeyPath='yes'/>
+            </Component>
+          </Directory>
+
+        </Directory>
+      </Directory>
+
+      <Directory Id="ProgramMenuFolder" Name="Programs">
+        <Directory Id="ProgramMenuDir" Name="Mercurial $(var.Version)">
+          <Component Id="ProgramMenuDir" Guid="$(var.ProgramMenuDir.guid)" Win64='$(var.IsX64)'>
+            <RemoveFolder Id='ProgramMenuDir' On='uninstall' />
+            <RegistryValue Root='HKCU' Key='Software\Mercurial\InstallDir' Type='string'
+                           Value='[INSTALLDIR]' KeyPath='yes' />
+            <Shortcut Id='UrlShortcut' Directory='ProgramMenuDir' Name='Mercurial Web Site'
+                      Target='[ARPHELPLINK]' Icon="hgIcon.ico" IconIndex='0' />
+          </Component>
+        </Directory>
+      </Directory>
+
+      <?if $(var.Platform) = "x86" ?>
+        <Merge Id='VCRuntime' DiskId='1' Language='1033'
+              SourceFile='$(var.VCRedistSrcDir)\microsoft.vcxx.crt.x86_msm.msm' />
+        <Merge Id='VCRuntimePolicy' DiskId='1' Language='1033'
+              SourceFile='$(var.VCRedistSrcDir)\policy.x.xx.microsoft.vcxx.crt.x86_msm.msm' />
+      <?else?>
+        <Merge Id='VCRuntime' DiskId='1' Language='1033'
+              SourceFile='$(var.VCRedistSrcDir)\microsoft.vcxx.crt.x64_msm.msm' />
+        <Merge Id='VCRuntimePolicy' DiskId='1' Language='1033'
+              SourceFile='$(var.VCRedistSrcDir)\policy.x.xx.microsoft.vcxx.crt.x64_msm.msm' />
+      <?endif?>
+    </Directory>
+
+    <Feature Id='Complete' Title='Mercurial' Description='The complete package'
+        Display='expand' Level='1' ConfigurableDirectory='INSTALLDIR' >
+      <Feature Id='MainProgram' Title='Program' Description='Mercurial command line app'
+             Level='1' Absent='disallow' >
+        <ComponentRef Id='MainExecutable' />
+        <ComponentRef Id='distOutput' />
+        <ComponentRef Id='libOutput' />
+        <ComponentRef Id='ProgramMenuDir' />
+        <ComponentRef Id='ReadMe' />
+        <ComponentRef Id='COPYING' />
+        <ComponentRef Id='mercurial.rc' />
+        <ComponentRef Id='mergetools.rc' />
+        <ComponentGroupRef Id='helpFolder' />
+        <ComponentGroupRef Id='templatesFolder' />
+        <MergeRef Id='VCRuntime' />
+        <MergeRef Id='VCRuntimePolicy' />
+      </Feature>
+      <?ifdef MercurialExtraFeatures?>
+        <?foreach EXTRAFEAT in $(var.MercurialExtraFeatures)?>
+          <FeatureRef Id="$(var.EXTRAFEAT)" />
+        <?endforeach?>
+      <?endif?>
+      <Feature Id='Locales' Title='Translations' Description='Translations' Level='1'>
+        <ComponentGroupRef Id='localeFolder' />
+        <ComponentRef Id='i18nFolder' />
+      </Feature>
+      <Feature Id='Documentation' Title='Documentation' Description='HTML man pages' Level='1'>
+        <ComponentGroupRef Id='docFolder' />
+      </Feature>
+      <Feature Id='Misc' Title='Miscellaneous' Description='Contributed scripts' Level='1'>
+        <ComponentGroupRef Id='contribFolder' />
+      </Feature>
+    </Feature>
+
+    <UIRef Id="WixUI_FeatureTree" />
+    <UIRef Id="WixUI_ErrorProgressText" />
+
+    <WixVariable Id="WixUILicenseRtf" Value="contrib\packaging\wix\COPYING.rtf" />
+
+    <Icon Id="hgIcon.ico" SourceFile="contrib/win32/mercurial.ico" />
+
+    <Upgrade Id='$(var.ProductUpgradeCode)'>
+      <UpgradeVersion
+        IncludeMinimum='yes' Minimum='0.0.0' IncludeMaximum='no' OnlyDetect='no'
+        Property='INSTALLEDMERCURIALPRODUCTS' />
+    </Upgrade>
+
+    <InstallExecuteSequence>
+      <RemoveExistingProducts After='InstallInitialize'/>
+    </InstallExecuteSequence>
+
+  </Product>
+</Wix>
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/packaging/wix/readme.rst	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,71 @@
+WiX Installer
+=============
+
+The files in this directory are used to produce an MSI installer using
+the WiX Toolset (http://wixtoolset.org/).
+
+The MSI installers require elevated (admin) privileges due to the
+installation of MSVC CRT libraries into the Windows system store. See
+the Inno Setup installers in the ``inno`` sibling directory for installers
+that do not have this requirement.
+
+Requirements
+============
+
+Building the WiX installers requires a Windows machine. The following
+dependencies must be installed:
+
+* Python 2.7 (download from https://www.python.org/downloads/)
+* Microsoft Visual C++ Compiler for Python 2.7
+  (https://www.microsoft.com/en-us/download/details.aspx?id=44266)
+* Python 3.5+ (to run the ``build.py`` script)
+
+Building
+========
+
+The ``build.py`` script automates the process of producing an MSI
+installer. It manages fetching and configuring non-system dependencies
+(such as py2exe, gettext, and various Python packages).
+
+The script requires an activated ``Visual C++ 2008`` command prompt.
+A shortcut to such a prompt was installed with ``Microsoft Visual
+C++ Compiler for Python 2.7``. From your Start Menu, look for
+``Microsoft Visual C++ Compiler Package for Python 2.7`` then
+launch either ``Visual C++ 2008 32-bit Command Prompt`` or
+``Visual C++ 2008 64-bit Command Prompt``.
+
+From the prompt, change to the Mercurial source directory, e.g.
+``cd c:\src\hg``.
+
+Next, invoke ``build.py`` to produce an MSI installer. You will need
+to supply the path to the Python interpreter to use::
+
+   $ python3 contrib\packaging\wix\build.py \
+      --python c:\python27\python.exe
+
+.. note::
+
+   The script validates that the Visual C++ environment is active and
+   that the architecture of the specified Python interpreter matches the
+   Visual C++ environment. An error is raised otherwise.
+
+If everything runs as intended, dependencies will be fetched and
+configured into the ``build`` sub-directory, Mercurial will be built,
+and an installer placed in the ``dist`` sub-directory. The final line
+of output should print the name of the generated installer.
+
+Additional options may be configured. Run ``build.py --help`` to see
+a list of program flags.
+
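+For example, a signed installer can be produced by adding the
+``--sign-*`` flags (the certificate subject and timestamp URL below are
+placeholders)::
+
+   $ python3 contrib\packaging\wix\build.py \
+      --python c:\python27\python.exe \
+      --sign-sn "Example Publisher" \
+      --sign-timestamp-url http://timestamp.example.com
+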
+Relationship to TortoiseHG
+==========================
+
+TortoiseHG uses the WiX files in this directory.
+
+The code for building TortoiseHG installers lives at
+https://bitbucket.org/tortoisehg/thg-winbuild and is maintained by
+Steve Borho (steve@borho.org).
+
+When changing behavior of the WiX installer, be sure to notify
+the TortoiseHG Project of the changes so they have ample time to
+provide feedback and react to those changes.
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/packaging/wix/requirements.txt	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,13 @@
+#
+# This file is autogenerated by pip-compile
+# To update, run:
+#
+#    pip-compile --generate-hashes contrib/packaging/wix/requirements.txt.in -o contrib/packaging/wix/requirements.txt
+#
+docutils==0.14 \
+    --hash=sha256:02aec4bd92ab067f6ff27a38a38a41173bf01bed8f89157768c1573f53e474a6 \
+    --hash=sha256:51e64ef2ebfb29cae1faa133b3710143496eca21c530f3f71424d77687764274 \
+    --hash=sha256:7a4bd47eaf6596e1295ecb11361139febe29b084a87bf005bf899f9a42edc3c6
+pygments==2.3.1 \
+    --hash=sha256:5ffada19f6203563680669ee7f53b64dabbeb100eb51b61996085e99c03b284a \
+    --hash=sha256:e8218dd399a61674745138520d0d4cf2621d7e032439341bc3f647bff125818d
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/packaging/wix/requirements.txt.in	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,2 @@
+docutils
+pygments
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/packaging/wix/templates.wxs	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,251 @@
+<?xml version="1.0" encoding="utf-8"?>
+<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
+
+  <?include guids.wxi ?>
+  <?include defines.wxi ?>
+
+  <Fragment>
+    <ComponentGroup Id="templatesFolder">
+
+      <ComponentRef Id="templates.root" />
+
+      <ComponentRef Id="templates.atom" />
+      <ComponentRef Id="templates.coal" />
+      <ComponentRef Id="templates.gitweb" />
+      <ComponentRef Id="templates.json" />
+      <ComponentRef Id="templates.monoblue" />
+      <ComponentRef Id="templates.paper" />
+      <ComponentRef Id="templates.raw" />
+      <ComponentRef Id="templates.rss" />
+      <ComponentRef Id="templates.spartan" />
+      <ComponentRef Id="templates.static" />
+
+    </ComponentGroup>
+  </Fragment>
+
+  <Fragment>
+    <DirectoryRef Id="INSTALLDIR">
+
+      <Directory Id="templatesdir" Name="templates" FileSource="$(var.SourceDir)">
+
+        <Component Id="templates.root" Guid="$(var.templates.root.guid)" Win64='$(var.IsX64)'>
+          <File Name="map-cmdline.changelog" KeyPath="yes" />
+          <File Name="map-cmdline.compact" />
+          <File Name="map-cmdline.default" />
+          <File Name="map-cmdline.show" />
+          <File Name="map-cmdline.bisect" />
+          <File Name="map-cmdline.xml" />
+          <File Name="map-cmdline.status" />
+          <File Name="map-cmdline.phases" />
+        </Component>
+
+        <Directory Id="templates.jsondir" Name="json">
+          <Component Id="templates.json" Guid="$(var.templates.json.guid)" Win64='$(var.IsX64)'>
+            <File Id="json.changelist.tmpl" Name="changelist.tmpl" KeyPath="yes" />
+            <File Id="json.graph.tmpl"      Name="graph.tmpl" />
+            <File Id="json.map"             Name="map" />
+          </Component>
+        </Directory>
+
+        <Directory Id="templates.atomdir" Name="atom">
+          <Component Id="templates.atom" Guid="$(var.templates.atom.guid)" Win64='$(var.IsX64)'>
+            <File Id="atom.changelog.tmpl"      Name="changelog.tmpl" KeyPath="yes" />
+            <File Id="atom.changelogentry.tmpl" Name="changelogentry.tmpl" />
+            <File Id="atom.error.tmpl"          Name="error.tmpl" />
+            <File Id="atom.filelog.tmpl"        Name="filelog.tmpl" />
+            <File Id="atom.header.tmpl"         Name="header.tmpl" />
+            <File Id="atom.map"                 Name="map" />
+            <File Id="atom.tagentry.tmpl"       Name="tagentry.tmpl" />
+            <File Id="atom.tags.tmpl"           Name="tags.tmpl" />
+            <File Id="atom.branchentry.tmpl"    Name="branchentry.tmpl" />
+            <File Id="atom.branches.tmpl"       Name="branches.tmpl" />
+            <File Id="atom.bookmarks.tmpl"      Name="bookmarks.tmpl" />
+            <File Id="atom.bookmarkentry.tmpl"  Name="bookmarkentry.tmpl" />
+          </Component>
+        </Directory>
+
+        <Directory Id="templates.coaldir" Name="coal">
+          <Component Id="templates.coal" Guid="$(var.templates.coal.guid)" Win64='$(var.IsX64)'>
+            <File Id="coal.header.tmpl" Name="header.tmpl" KeyPath="yes" />
+            <File Id="coal.map"         Name="map" />
+          </Component>
+        </Directory>
+
+        <Directory Id="templates.gitwebdir" Name="gitweb">
+          <Component Id="templates.gitweb" Guid="$(var.templates.gitweb.guid)" Win64='$(var.IsX64)'>
+            <File Id="gitweb.branches.tmpl"       Name="branches.tmpl" KeyPath="yes" />
+            <File Id="gitweb.bookmarks.tmpl"      Name="bookmarks.tmpl" />
+            <File Id="gitweb.changelog.tmpl"      Name="changelog.tmpl" />
+            <File Id="gitweb.changelogentry.tmpl" Name="changelogentry.tmpl" />
+            <File Id="gitweb.changeset.tmpl"      Name="changeset.tmpl" />
+            <File Id="gitweb.error.tmpl"          Name="error.tmpl" />
+            <File Id="gitweb.fileannotate.tmpl"   Name="fileannotate.tmpl" />
+            <File Id="gitweb.filecomparison.tmpl" Name="filecomparison.tmpl" />
+            <File Id="gitweb.filediff.tmpl"       Name="filediff.tmpl" />
+            <File Id="gitweb.filelog.tmpl"        Name="filelog.tmpl" />
+            <File Id="gitweb.filerevision.tmpl"   Name="filerevision.tmpl" />
+            <File Id="gitweb.footer.tmpl"         Name="footer.tmpl" />
+            <File Id="gitweb.graph.tmpl"          Name="graph.tmpl" />
+            <File Id="gitweb.graphentry.tmpl"     Name="graphentry.tmpl" />
+            <File Id="gitweb.header.tmpl"         Name="header.tmpl" />
+            <File Id="gitweb.index.tmpl"          Name="index.tmpl" />
+            <File Id="gitweb.manifest.tmpl"       Name="manifest.tmpl" />
+            <File Id="gitweb.map"                 Name="map" />
+            <File Id="gitweb.notfound.tmpl"       Name="notfound.tmpl" />
+            <File Id="gitweb.search.tmpl"         Name="search.tmpl" />
+            <File Id="gitweb.shortlog.tmpl"       Name="shortlog.tmpl" />
+            <File Id="gitweb.summary.tmpl"        Name="summary.tmpl" />
+            <File Id="gitweb.tags.tmpl"           Name="tags.tmpl" />
+            <File Id="gitweb.help.tmpl"           Name="help.tmpl" />
+            <File Id="gitweb.helptopics.tmpl"     Name="helptopics.tmpl" />
+          </Component>
+        </Directory>
+
+        <Directory Id="templates.monobluedir" Name="monoblue">
+          <Component Id="templates.monoblue" Guid="$(var.templates.monoblue.guid)" Win64='$(var.IsX64)'>
+            <File Id="monoblue.branches.tmpl"       Name="branches.tmpl" KeyPath="yes" />
+            <File Id="monoblue.bookmarks.tmpl"      Name="bookmarks.tmpl" />
+            <File Id="monoblue.changelog.tmpl"      Name="changelog.tmpl" />
+            <File Id="monoblue.changelogentry.tmpl" Name="changelogentry.tmpl" />
+            <File Id="monoblue.changeset.tmpl"      Name="changeset.tmpl" />
+            <File Id="monoblue.error.tmpl"          Name="error.tmpl" />
+            <File Id="monoblue.fileannotate.tmpl"   Name="fileannotate.tmpl" />
+            <File Id="monoblue.filecomparison.tmpl" Name="filecomparison.tmpl" />
+            <File Id="monoblue.filediff.tmpl"       Name="filediff.tmpl" />
+            <File Id="monoblue.filelog.tmpl"        Name="filelog.tmpl" />
+            <File Id="monoblue.filerevision.tmpl"   Name="filerevision.tmpl" />
+            <File Id="monoblue.footer.tmpl"         Name="footer.tmpl" />
+            <File Id="monoblue.graph.tmpl"          Name="graph.tmpl" />
+            <File Id="monoblue.graphentry.tmpl"     Name="graphentry.tmpl" />
+            <File Id="monoblue.header.tmpl"         Name="header.tmpl" />
+            <File Id="monoblue.index.tmpl"          Name="index.tmpl" />
+            <File Id="monoblue.manifest.tmpl"       Name="manifest.tmpl" />
+            <File Id="monoblue.map"                 Name="map" />
+            <File Id="monoblue.notfound.tmpl"       Name="notfound.tmpl" />
+            <File Id="monoblue.search.tmpl"         Name="search.tmpl" />
+            <File Id="monoblue.shortlog.tmpl"       Name="shortlog.tmpl" />
+            <File Id="monoblue.summary.tmpl"        Name="summary.tmpl" />
+            <File Id="monoblue.tags.tmpl"           Name="tags.tmpl" />
+            <File Id="monoblue.help.tmpl"           Name="help.tmpl" />
+            <File Id="monoblue.helptopics.tmpl"     Name="helptopics.tmpl" />
+          </Component>
+        </Directory>
+
+        <Directory Id="templates.paperdir" Name="paper">
+          <Component Id="templates.paper" Guid="$(var.templates.paper.guid)" Win64='$(var.IsX64)'>
+            <File Id="paper.branches.tmpl"      Name="branches.tmpl" KeyPath="yes" />
+            <File Id="paper.bookmarks.tmpl"     Name="bookmarks.tmpl" />
+            <File Id="paper.changeset.tmpl"     Name="changeset.tmpl" />
+            <File Id="paper.diffstat.tmpl"      Name="diffstat.tmpl" />
+            <File Id="paper.error.tmpl"         Name="error.tmpl" />
+            <File Id="paper.fileannotate.tmpl"  Name="fileannotate.tmpl" />
+            <File Id="paper.filecomparison.tmpl" Name="filecomparison.tmpl" />
+            <File Id="paper.filediff.tmpl"      Name="filediff.tmpl" />
+            <File Id="paper.filelog.tmpl"       Name="filelog.tmpl" />
+            <File Id="paper.filelogentry.tmpl"  Name="filelogentry.tmpl" />
+            <File Id="paper.filerevision.tmpl"  Name="filerevision.tmpl" />
+            <File Id="paper.footer.tmpl"        Name="footer.tmpl" />
+            <File Id="paper.graph.tmpl"         Name="graph.tmpl" />
+            <File Id="paper.graphentry.tmpl"    Name="graphentry.tmpl" />
+            <File Id="paper.header.tmpl"        Name="header.tmpl" />
+            <File Id="paper.index.tmpl"         Name="index.tmpl" />
+            <File Id="paper.manifest.tmpl"      Name="manifest.tmpl" />
+            <File Id="paper.map"                Name="map" />
+            <File Id="paper.notfound.tmpl"      Name="notfound.tmpl" />
+            <File Id="paper.search.tmpl"        Name="search.tmpl" />
+            <File Id="paper.shortlog.tmpl"      Name="shortlog.tmpl" />
+            <File Id="paper.shortlogentry.tmpl" Name="shortlogentry.tmpl" />
+            <File Id="paper.tags.tmpl"          Name="tags.tmpl" />
+            <File Id="paper.help.tmpl"          Name="help.tmpl" />
+            <File Id="paper.helptopics.tmpl"    Name="helptopics.tmpl" />
+          </Component>
+        </Directory>
+
+        <Directory Id="templates.rawdir" Name="raw">
+          <Component Id="templates.raw" Guid="$(var.templates.raw.guid)" Win64='$(var.IsX64)'>
+            <File Id="raw.changeset.tmpl"    Name="changeset.tmpl" KeyPath="yes" />
+            <File Id="raw.error.tmpl"        Name="error.tmpl" />
+            <File Id="raw.fileannotate.tmpl" Name="fileannotate.tmpl" />
+            <File Id="raw.filediff.tmpl"     Name="filediff.tmpl" />
+            <File Id="raw.graph.tmpl"        Name="graph.tmpl" />
+            <File Id="raw.graphedge.tmpl"    Name="graphedge.tmpl" />
+            <File Id="raw.graphnode.tmpl"    Name="graphnode.tmpl" />
+            <File Id="raw.index.tmpl"        Name="index.tmpl" />
+            <File Id="raw.manifest.tmpl"     Name="manifest.tmpl" />
+            <File Id="raw.map"               Name="map" />
+            <File Id="raw.notfound.tmpl"     Name="notfound.tmpl" />
+            <File Id="raw.search.tmpl"       Name="search.tmpl" />
+            <File Id="raw.logentry.tmpl"     Name="logentry.tmpl" />
+            <File Id="raw.changelog.tmpl"    Name="changelog.tmpl" />
+          </Component>
+        </Directory>
+
+        <Directory Id="templates.rssdir" Name="rss">
+          <Component Id="templates.rss" Guid="$(var.templates.rss.guid)" Win64='$(var.IsX64)'>
+            <File Id="rss.changelog.tmpl"      Name="changelog.tmpl" KeyPath="yes" />
+            <File Id="rss.changelogentry.tmpl" Name="changelogentry.tmpl" />
+            <File Id="rss.error.tmpl"          Name="error.tmpl" />
+            <File Id="rss.filelog.tmpl"        Name="filelog.tmpl" />
+            <File Id="rss.filelogentry.tmpl"   Name="filelogentry.tmpl" />
+            <File Id="rss.header.tmpl"         Name="header.tmpl" />
+            <File Id="rss.map"                 Name="map" />
+            <File Id="rss.tagentry.tmpl"       Name="tagentry.tmpl" />
+            <File Id="rss.tags.tmpl"           Name="tags.tmpl" />
+            <File Id="rss.bookmarks.tmpl"      Name="bookmarks.tmpl" />
+            <File Id="rss.bookmarkentry.tmpl"  Name="bookmarkentry.tmpl" />
+            <File Id="rss.branchentry.tmpl"    Name="branchentry.tmpl" />
+            <File Id="rss.branches.tmpl"       Name="branches.tmpl" />
+          </Component>
+        </Directory>
+
+        <Directory Id="templates.spartandir" Name="spartan">
+          <Component Id="templates.spartan" Guid="$(var.templates.spartan.guid)" Win64='$(var.IsX64)'>
+            <File Id="spartan.branches.tmpl"       Name="branches.tmpl" KeyPath="yes" />
+            <File Id="spartan.changelog.tmpl"      Name="changelog.tmpl" />
+            <File Id="spartan.changelogentry.tmpl" Name="changelogentry.tmpl" />
+            <File Id="spartan.changeset.tmpl"      Name="changeset.tmpl" />
+            <File Id="spartan.error.tmpl"          Name="error.tmpl" />
+            <File Id="spartan.fileannotate.tmpl"   Name="fileannotate.tmpl" />
+            <File Id="spartan.filediff.tmpl"       Name="filediff.tmpl" />
+            <File Id="spartan.filelog.tmpl"        Name="filelog.tmpl" />
+            <File Id="spartan.filelogentry.tmpl"   Name="filelogentry.tmpl" />
+            <File Id="spartan.filerevision.tmpl"   Name="filerevision.tmpl" />
+            <File Id="spartan.footer.tmpl"         Name="footer.tmpl" />
+            <File Id="spartan.graph.tmpl"          Name="graph.tmpl" />
+            <File Id="spartan.graphentry.tmpl"     Name="graphentry.tmpl" />
+            <File Id="spartan.header.tmpl"         Name="header.tmpl" />
+            <File Id="spartan.index.tmpl"          Name="index.tmpl" />
+            <File Id="spartan.manifest.tmpl"       Name="manifest.tmpl" />
+            <File Id="spartan.map"                 Name="map" />
+            <File Id="spartan.notfound.tmpl"       Name="notfound.tmpl" />
+            <File Id="spartan.search.tmpl"         Name="search.tmpl" />
+            <File Id="spartan.shortlog.tmpl"       Name="shortlog.tmpl" />
+            <File Id="spartan.shortlogentry.tmpl"  Name="shortlogentry.tmpl" />
+            <File Id="spartan.tags.tmpl"           Name="tags.tmpl" />
+          </Component>
+        </Directory>
+
+        <Directory Id="templates.staticdir" Name="static">
+          <Component Id="templates.static" Guid="$(var.templates.static.guid)" Win64='$(var.IsX64)'>
+            <File Id="static.background.png"     Name="background.png" KeyPath="yes" />
+            <File Id="static.coal.file.png"      Name="coal-file.png" />
+            <File Id="static.coal.folder.png"    Name="coal-folder.png" />
+            <File Id="static.followlines.js"     Name="followlines.js" />
+            <File Id="static.mercurial.js"       Name="mercurial.js" />
+            <File Id="static.hgicon.png"         Name="hgicon.png" />
+            <File Id="static.hglogo.png"         Name="hglogo.png" />
+            <File Id="static.style.coal.css"     Name="style-extra-coal.css" />
+            <File Id="static.style.gitweb.css"   Name="style-gitweb.css" />
+            <File Id="static.style.monoblue.css" Name="style-monoblue.css" />
+            <File Id="static.style.paper.css"    Name="style-paper.css" />
+            <File Id="static.style.css"          Name="style.css" />
+            <File Id="static.feed.icon"          Name="feed-icon-14x14.png" />
+          </Component>
+        </Directory>
+
+      </Directory>
+
+    </DirectoryRef>
+  </Fragment>
+
+ </Wix>
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/perf-utils/discovery-helper.sh	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,107 @@
+#!/bin/bash
+#
+# produces two repositories with different common and missing subsets
+#
+#   $ discovery-helper.sh REPO NBHEADS DEPTH
+#
+# The goal is to produce two repositories with some common part and some
+# exclusive part on each side. Given a source repository REPO, the script
+# will produce two repositories, REPO-left and REPO-right.
+#
+# Each repository will be missing some revisions exclusive to NBHEADS of the
+# repository's topological heads. These heads, and the revisions exclusive to
+# them (up to DEPTH), are stripped.
+#
+# The "left" repository will use the first NBHEADS heads (sorted by
+# description). The "right" repository uses the last NBHEADS.
+#
+# To find out how many topological heads a repo has, use:
+#
+#   $ hg heads -t -T '{rev}\n' | wc -l
+#
+# Example:
+#
+#  The `pypy-2018-08-01` repository has 192 heads. To produce two repositories
+#  with 92 common heads and ~50 exclusive heads on each side, run:
+#
+#    $ ./discovery-helper.sh pypy-2018-08-01 50 10
+
+set -euo pipefail
+
+printusage () {
+    echo "usage: `basename $0` REPO NBHEADS DEPTH [left|right]" >&2
+}
+
+if [ $# -lt 3 ]; then
+    printusage
+    exit 64
+fi
+
+repo="$1"
+shift
+
+nbheads="$1"
+shift
+
+depth="$1"
+shift
+
+doleft=1
+doright=1
+if [ $# -gt 1 ]; then
+    printusage
+    exit 64
+elif [ $# -eq 1 ]; then
+    if [ "$1" == "left" ]; then
+        doleft=1
+        doright=0
+    elif [ "$1" == "right" ]; then
+        doleft=0
+        doright=1
+    else
+        printusage
+        exit 64
+    fi
+fi
+
+leftrepo="${repo}-${nbheads}h-${depth}d-left"
+rightrepo="${repo}-${nbheads}h-${depth}d-right"
+
+left="first(sort(heads(all()), 'desc'), $nbheads)"
+right="last(sort(heads(all()), 'desc'), $nbheads)"
+
+leftsubset="ancestors($left, $depth) and only($left, heads(all() - $left))"
+rightsubset="ancestors($right, $depth) and only($right, heads(all() - $right))"
+
+echo '### creating left/right repositories with missing changesets:'
+if [ $doleft -eq 1 ]; then
+    echo '# left  revset:' '"'${leftsubset}'"'
+fi
+if [ $doright -eq 1 ]; then
+    echo '# right revset:' '"'${rightsubset}'"'
+fi
+
+buildone() {
+    side="$1"
+    dest="$2"
+    revset="$3"
+    echo "### building $side repository: $dest"
+    if [ -e "$dest" ]; then
+        echo "destination repo already exists: $dest" >&2
+        exit 1
+    fi
+    echo '# cloning'
+    if ! cp --recursive --reflink=always "${repo}" "${dest}"; then
+        hg clone --noupdate "${repo}" "${dest}"
+    fi
+    echo '# stripping' '"'${revset}'"'
+    hg -R "${dest}" --config extensions.strip= strip --rev "$revset" --no-backup
+}
+
+if [ $doleft -eq 1 ]; then
+    buildone left "$leftrepo" "$leftsubset"
+fi
+
+if [ $doright -eq 1 ]; then
+    buildone right "$rightrepo" "$rightsubset"
+fi
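+
+# Sanity check (assuming the pypy example above): count the topological
+# heads of each produced repository with the same command as earlier.
+#
+#   $ hg -R pypy-2018-08-01-50h-10d-left heads -t -T '{rev}\n' | wc -l
+#   $ hg -R pypy-2018-08-01-50h-10d-right heads -t -T '{rev}\n' | wc -l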
--- a/contrib/perf.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/perf.py	Wed Apr 17 13:41:18 2019 -0400
@@ -1,5 +1,34 @@
 # perf.py - performance test routines
-'''helper extension to measure performance'''
+'''helper extension to measure performance
+
+Configurations
+==============
+
+``perf``
+--------
+
+``all-timing``
+    When set, additional statistics will be reported for each benchmark: best,
+    worst, median, and average. If not set, only the best timing is reported
+    (default: off).
+
+``presleep``
+    number of seconds to wait before each group of runs (default: 1)
+
+``run-limits``
+    Control the number of runs each benchmark will perform. The option value
+    should be a list of `<time>-<numberofrun>` pairs. After each run, the
+    conditions are considered in order with the following logic:
+
+        If the benchmark has been running for <time> seconds and we have
+        performed <numberofrun> iterations, stop the benchmark.
+
+    The default value is: `3.0-100, 10.0-3`
+
+``stub``
+    When set, benchmarks will only be run once, useful for testing
+    (default: off)
+'''
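+
+# Example: the options documented above belong in the [perf] section of a
+# regular hgrc; the values here are purely illustrative.
+#
+#   [perf]
+#   all-timing = yes
+#   presleep = 0
+#   run-limits = 5.0-50, 20.0-5
+#   stub = no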
 
 # "historical portability" policy of perf.py:
 #
@@ -65,6 +94,10 @@
 except ImportError:
     pass
 try:
+    from mercurial.utils import repoviewutil # since 5.0
+except ImportError:
+    repoviewutil = None
+try:
     from mercurial import scmutil # since 1.9 (or 8b252e826c68)
 except ImportError:
     pass
@@ -207,6 +240,9 @@
     configitem(b'perf', b'all-timing',
         default=mercurial.configitems.dynamicdefault,
     )
+    configitem(b'perf', b'run-limits',
+        default=mercurial.configitems.dynamicdefault,
+    )
 except (ImportError, AttributeError):
     pass
 
@@ -279,7 +315,34 @@
 
     # experimental config: perf.all-timing
     displayall = ui.configbool(b"perf", b"all-timing", False)
-    return functools.partial(_timer, fm, displayall=displayall), fm
+
+    # experimental config: perf.run-limits
+    limitspec = ui.configlist(b"perf", b"run-limits", [])
+    limits = []
+    for item in limitspec:
+        parts = item.split(b'-', 1)
+        if len(parts) < 2:
+            ui.warn((b'malformed run limit entry, missing "-": %s\n'
+                     % item))
+            continue
+        try:
+            time_limit = float(pycompat.sysstr(parts[0]))
+        except ValueError as e:
+            ui.warn((b'malformed run limit entry, %s: %s\n'
+                     % (pycompat.bytestr(e), item)))
+            continue
+        try:
+            run_limit = int(pycompat.sysstr(parts[1]))
+        except ValueError as e:
+            ui.warn((b'malformed run limit entry, %s: %s\n'
+                     % (pycompat.bytestr(e), item)))
+            continue
+        limits.append((time_limit, run_limit))
+    if not limits:
+        limits = DEFAULTLIMITS
+
+    t = functools.partial(_timer, fm, displayall=displayall, limits=limits)
+    return t, fm
 
 def stub_timer(fm, func, setup=None, title=None):
     if setup is not None:
@@ -297,12 +360,21 @@
     a, b = ostart, ostop
     r.append((cstop - cstart, b[0] - a[0], b[1]-a[1]))
 
-def _timer(fm, func, setup=None, title=None, displayall=False):
+
+# list of stop condition (elapsed time, minimal run count)
+DEFAULTLIMITS = (
+    (3.0, 100),
+    (10.0, 3),
+)
+
+def _timer(fm, func, setup=None, title=None, displayall=False,
+           limits=DEFAULTLIMITS):
     gc.collect()
     results = []
     begin = util.timer()
     count = 0
-    while True:
+    keepgoing = True
+    while keepgoing:
         if setup is not None:
             setup()
         with timeone() as item:
@@ -310,10 +382,12 @@
         count += 1
         results.append(item[0])
         cstop = util.timer()
-        if cstop - begin > 3 and count >= 100:
-            break
-        if cstop - begin > 10 and count >= 3:
-            break
+        # Look for a stop condition.
+        elapsed = cstop - begin
+        for t, mincount in limits:
+            if elapsed >= t and count >= mincount:
+                keepgoing = False
+                break
 
     formatone(fm, results, title=title, result=r,
               displayall=displayall)
@@ -401,7 +475,8 @@
     # subsettable is defined in:
     # - branchmap since 2.9 (or 175c6fd8cacc)
     # - repoview since 2.5 (or 59a9f18d4587)
-    for mod in (branchmap, repoview):
+    # - repoviewutil since 5.0
+    for mod in (branchmap, repoview, repoviewutil):
         subsettable = getattr(mod, 'subsettable', None)
         if subsettable:
             return subsettable
@@ -519,7 +594,11 @@
         repo.ui.quiet = True
         matcher = scmutil.match(repo[None])
         opts[b'dry_run'] = True
-        timer(lambda: scmutil.addremove(repo, matcher, b"", opts))
+        if b'uipathfn' in getargspec(scmutil.addremove).args:
+            uipathfn = scmutil.getuipathfn(repo)
+            timer(lambda: scmutil.addremove(repo, matcher, b"", uipathfn, opts))
+        else:
+            timer(lambda: scmutil.addremove(repo, matcher, b"", opts))
     finally:
         repo.ui.quiet = oldquiet
         fm.end()
@@ -535,13 +614,15 @@
 
 @command(b'perfheads', formatteropts)
 def perfheads(ui, repo, **opts):
+    """benchmark the computation of a changelog heads"""
     opts = _byteskwargs(opts)
     timer, fm = gettimer(ui, opts)
     cl = repo.changelog
+    def s():
+        clearcaches(cl)
     def d():
         len(cl.headrevs())
-        clearcaches(cl)
-    timer(d)
+    timer(d, setup=s)
     fm.end()
 
 @command(b'perftags', formatteropts+
@@ -911,9 +992,7 @@
         raise error.Abort((b'default repository not configured!'),
                           hint=(b"see 'hg help config.paths'"))
     dest = path.pushloc or path.loc
-    branches = (path.branch, opts.get(b'branch') or [])
     ui.status((b'analysing phase of %s\n') % util.hidepassword(dest))
-    revs, checkout = hg.addbranchrevs(repo, repo, branches, opts.get(b'rev'))
     other = hg.peer(repo, opts, dest)
 
     # easier to perform discovery through the operation
@@ -1014,18 +1093,44 @@
     fm.end()
 
 @command(b'perfindex', [
-            (b'', b'rev', b'', b'revision to be looked up (default tip)'),
+            (b'', b'rev', [], b'revision to be looked up (default tip)'),
+            (b'', b'no-lookup', None, b'do not perform a revision lookup after creation'),
          ] + formatteropts)
 def perfindex(ui, repo, **opts):
+    """benchmark index creation time followed by a lookup
+
+    The default is to look up `tip`. Depending on the index implementation,
+    the revision looked up can matter. For example, an implementation
+    scanning the index will have a faster lookup time for `--rev tip` than for
+    `--rev 0`. The number of revisions looked up and their order can also
+    matter.
+
+    Examples of useful sets to test:
+    * tip
+    * 0
+    * -10:
+    * :10
+    * -10: + :10
+    * :10: + -10:
+    * -10000:
+    * -10000: + 0
+
+    It is not currently possible to check for lookup of a missing node. For
+    deeper lookup benchmarking, check out the `perfnodemap` command."""
     import mercurial.revlog
     opts = _byteskwargs(opts)
     timer, fm = gettimer(ui, opts)
     mercurial.revlog._prereadsize = 2**24 # disable lazy parser in old hg
-    if opts[b'rev'] is None:
-        n = repo[b"tip"].node()
+    if opts[b'no_lookup']:
+        if opts[b'rev']:
+            raise error.Abort(b'--no-lookup and --rev are mutually exclusive')
+        nodes = []
+    elif not opts[b'rev']:
+        nodes = [repo[b"tip"].node()]
     else:
-        rev = scmutil.revsingle(repo, opts[b'rev'])
-        n = repo[rev].node()
+        revs = scmutil.revrange(repo, opts[b'rev'])
+        cl = repo.changelog
+        nodes = [cl.node(r) for r in revs]
 
     unfi = repo.unfiltered()
     # find the filecache func directly
@@ -1036,7 +1141,67 @@
         clearchangelog(unfi)
     def d():
         cl = makecl(unfi)
-        cl.rev(n)
+        for n in nodes:
+            cl.rev(n)
+    timer(d, setup=setup)
+    fm.end()
+
+@command(b'perfnodemap', [
+          (b'', b'rev', [], b'revision to be looked up (default tip)'),
+          (b'', b'clear-caches', True, b'clear revlog cache between calls'),
+    ] + formatteropts)
+def perfnodemap(ui, repo, **opts):
+    """benchmark the time necessary to look up revision from a cold nodemap
+
+    Depending on the implementation, the amount and order of revision we look
+    up can varies. Example of useful set to test:
+    * tip
+    * 0
+    * -10:
+    * :10
+    * -10: + :10
+    * :10: + -10:
+    * -10000:
+    * -10000: + 0
+
+    The command currently focuses on valid binary lookups. Benchmarking
+    hex lookup, prefix lookup, and missing lookup would also be valuable.
+    """
+    import mercurial.revlog
+    opts = _byteskwargs(opts)
+    timer, fm = gettimer(ui, opts)
+    mercurial.revlog._prereadsize = 2**24 # disable lazy parser in old hg
+
+    unfi = repo.unfiltered()
+    clearcaches = opts[b'clear_caches']
+    # find the filecache func directly
+    # This avoid polluting the benchmark with the filecache logic
+    makecl = unfi.__class__.changelog.func
+    if not opts[b'rev']:
+        raise error.Abort(b'use --rev to specify revisions to look up')
+    revs = scmutil.revrange(repo, opts[b'rev'])
+    cl = repo.changelog
+    nodes = [cl.node(r) for r in revs]
+
+    # use a list to pass reference to a nodemap from one closure to the next
+    nodeget = [None]
+    def setnodeget():
+        # probably not necessary, but for good measure
+        clearchangelog(unfi)
+        nodeget[0] = makecl(unfi).nodemap.get
+
+    def d():
+        get = nodeget[0]
+        for n in nodes:
+            get(n)
+
+    setup = None
+    if clearcaches:
+        def setup():
+            setnodeget()
+    else:
+        setnodeget()
+        d() # prewarm the data structure
     timer(d, setup=setup)
     fm.end()
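+
+# Example invocations for the two benchmarks above, assuming contrib/perf.py
+# is enabled as an extension (revsets taken from the docstrings; --rev may be
+# repeated):
+#
+#   $ hg perfindex --rev tip --rev 0
+#   $ hg perfnodemap --rev '-10:' --rev ':10'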
 
@@ -1056,6 +1221,13 @@
 
 @command(b'perfparents', formatteropts)
 def perfparents(ui, repo, **opts):
+    """benchmark the time necessary to fetch one changeset's parents.
+
+    The fetch is done using the `node identifier`, traversing all object layers
+    from the repository object. The first N revisions will be used for this
+    benchmark. N is controlled by the ``perf.parentscount`` config option
+    (default: 1000).
+    """
     opts = _byteskwargs(opts)
     timer, fm = gettimer(ui, opts)
     # control the number of commits perfparents iterates over
@@ -2290,13 +2462,18 @@
             view = repo
         else:
             view = repo.filtered(filtername)
+        if util.safehasattr(view._branchcaches, '_per_filter'):
+            filtered = view._branchcaches._per_filter
+        else:
+            # older versions
+            filtered = view._branchcaches
         def d():
             if clear_revbranch:
                 repo.revbranchcache()._clear()
             if full:
                 view._branchcaches.clear()
             else:
-                view._branchcaches.pop(filtername, None)
+                filtered.pop(filtername, None)
             view.branchmap()
         return d
     # add filter in smaller subset to bigger subset
@@ -2323,10 +2500,15 @@
         # add unfiltered
         allfilters.append(None)
 
-    branchcacheread = safeattrsetter(branchmap, b'read')
+    if util.safehasattr(branchmap.branchcache, 'fromfile'):
+        branchcacheread = safeattrsetter(branchmap.branchcache, b'fromfile')
+        branchcacheread.set(classmethod(lambda *args: None))
+    else:
+        # older versions
+        branchcacheread = safeattrsetter(branchmap, b'read')
+        branchcacheread.set(lambda *args: None)
     branchcachewrite = safeattrsetter(branchmap.branchcache, b'write')
-    branchcacheread.set(lambda repo: None)
-    branchcachewrite.set(lambda bc, repo: None)
+    branchcachewrite.set(lambda *args: None)
     try:
         for name in allfilters:
             printname = name
@@ -2470,9 +2652,15 @@
 
     repo.branchmap() # make sure we have a relevant, up to date branchmap
 
+    try:
+        fromfile = branchmap.branchcache.fromfile
+    except AttributeError:
+        # older versions
+        fromfile = branchmap.read
+
     currentfilter = filter
     # try once without timer, the filter may not be cached
-    while branchmap.read(repo) is None:
+    while fromfile(repo) is None:
         currentfilter = subsettable.get(currentfilter)
         if currentfilter is None:
             raise error.Abort(b'No branchmap cached for %s repo'
@@ -2483,7 +2671,7 @@
         if clearrevlogs:
             clearchangelog(repo)
     def bench():
-        branchmap.read(repo)
+        fromfile(repo)
     timer(bench, setup=setup)
     fm.end()
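+
+# The version-compatibility pattern used above, in isolation (a sketch;
+# `branchmap` is mercurial.branchmap):
+#
+#   try:
+#       fromfile = branchmap.branchcache.fromfile  # newer API
+#   except AttributeError:
+#       fromfile = branchmap.read                  # older versions
+#
+# Probing attributes at run time keeps one perf.py working across many
+# Mercurial releases, per the "historical portability" policy noted at the
+# top of the file.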
 
--- a/contrib/python-zstandard/MANIFEST.in	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/MANIFEST.in	Wed Apr 17 13:41:18 2019 -0400
@@ -5,6 +5,5 @@
 include make_cffi.py
 include setup_zstd.py
 include zstd.c
-include zstd_cffi.py
 include LICENSE
 include NEWS.rst
--- a/contrib/python-zstandard/NEWS.rst	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/NEWS.rst	Wed Apr 17 13:41:18 2019 -0400
@@ -8,8 +8,18 @@
 Actions Blocking Release
 ------------------------
 
-* compression and decompression APIs that support ``io.rawIOBase`` interface
+* compression and decompression APIs that support ``io.RawIOBase`` interface
   (#13).
+* ``stream_writer()`` APIs should support ``io.RawIOBase`` interface.
+* Properly handle non-blocking I/O and partial writes for objects implementing
+  ``io.RawIOBase``.
+* Make ``write_return_read=True`` the default for objects implementing
+  ``io.RawIOBase``.
+* Audit for consistent and proper behavior of ``flush()`` and ``close()`` for
+  all objects implementing ``io.RawIOBase``. Is calling ``close()`` on the
+  wrapped stream acceptable? Should ``__exit__`` always call ``close()``?
+  Should ``close()`` imply ``flush()``?
+* Consider making reads across frames configurable behavior.
 * Refactor module names so C and CFFI extensions live under ``zstandard``
   package.
 * Overall API design review.
@@ -43,6 +53,11 @@
 * Consider a ``chunker()`` API for decompression.
 * Consider stats for ``chunker()`` API, including finding the last consumed
   offset of input data.
+* Consider exposing ``ZSTD_cParam_getBounds()`` and
+  ``ZSTD_dParam_getBounds()`` APIs.
+* Consider controls over resetting compression contexts (session only, parameters,
+  or session and parameters).
+* Actually use the CFFI backend in fuzzing tests.
 
 Other Actions Not Blocking Release
 ---------------------------------------
@@ -51,6 +66,207 @@
 * API for ensuring max memory ceiling isn't exceeded.
 * Move off nose for testing.
 
+0.11.0 (released 2019-02-24)
+============================
+
+Backwards Compatibility Notes
+-----------------------------
+
+* ``ZstdDecompressor.read()`` now allows reading sizes of ``-1`` or ``0``
+  and defaults to ``-1``, per the documented behavior of
+  ``io.RawIOBase.read()``. Previously, we required an argument that was
+  a positive value.
+* The ``readline()``, ``readlines()``, ``__iter__``, and ``__next__`` methods
+  of ``ZstdDecompressionReader()`` now raise ``io.UnsupportedOperation``
+  instead of ``NotImplementedError``.
+* ``ZstdDecompressor.stream_reader()`` now accepts a ``read_across_frames``
+  argument. The default value will likely be changed in a future release
+  and consumers are advised to pass the argument to avoid an unwanted
+  change of behavior in the future.
+* ``setup.py`` now always disables the CFFI backend if the installed
+  CFFI package does not meet the minimum version requirements. Before, it was
+  possible for the CFFI backend to be generated and a run-time error to
+  occur.
+* In the CFFI backend, ``CompressionReader`` and ``DecompressionReader``
+  were renamed to ``ZstdCompressionReader`` and ``ZstdDecompressionReader``,
+  respectively, so naming is identical to the C extension. This should have
+  no meaningful end-user impact, as instances aren't meant to be
+  constructed directly.
+* ``ZstdDecompressor.stream_writer()`` now accepts a ``write_return_read``
+  argument to control whether ``write()`` returns the number of bytes
+  read from the source / written to the decompressor. It defaults to off,
+  which preserves the existing behavior of returning the number of bytes
+  emitted from the decompressor. The default will change in a future release
+  so behavior aligns with the specified behavior of ``io.RawIOBase``.
+* ``ZstdDecompressionWriter.__exit__`` now calls ``self.close()``. This
+  will result in that stream plus the underlying stream being closed as
+  well. If this behavior is not desirable, do not use instances as
+  context managers.
+* ``ZstdCompressor.stream_writer()`` now accepts a ``write_return_read``
+  argument to control whether ``write()`` returns the number of bytes read
+  from the source / written to the compressor. It defaults to off, which
+  preserves the existing behavior of returning the number of bytes emitted
+  from the compressor. The default will change in a future release so
+  behavior aligns with the specified behavior of ``io.RawIOBase``.
+* ``ZstdCompressionWriter.__exit__`` now calls ``self.close()``. This will
+  result in that stream plus any underlying stream being closed as well. If
+  this behavior is not desirable, do not use instances as context managers.
+* ``ZstdDecompressionWriter`` no longer requires being used as a context
+  manager (#57).
+* ``ZstdCompressionWriter`` no longer requires being used as a context
+  manager (#57).
+* The ``overlap_size_log`` attribute on ``CompressionParameters`` instances
+  has been deprecated and will be removed in a future release. The
+  ``overlap_log`` attribute should be used instead.
+* The ``overlap_size_log`` argument to ``CompressionParameters`` has been
+  deprecated and will be removed in a future release. The ``overlap_log``
+  argument should be used instead.
+* The ``ldm_hash_every_log`` attribute on ``CompressionParameters`` instances
+  has been deprecated and will be removed in a future release. The
+  ``ldm_hash_rate_log`` attribute should be used instead.
+* The ``ldm_hash_every_log`` argument to ``CompressionParameters`` has been
+  deprecated and will be removed in a future release. The ``ldm_hash_rate_log``
+  argument should be used instead.
+* The ``compression_strategy`` argument to ``CompressionParameters`` has been
+  deprecated and will be removed in a future release. The ``strategy``
+  argument should be used instead.
+* The ``SEARCHLENGTH_MIN`` and ``SEARCHLENGTH_MAX`` constants are deprecated
+  and will be removed in a future release. Use ``MINMATCH_MIN`` and
+  ``MINMATCH_MAX`` instead.
+* The ``zstd_cffi`` module has been renamed to ``zstandard.cffi``. As had
+  been documented in the ``README`` file since the ``0.9.0`` release, the
+  module should not be imported directly at its new location. Instead,
+  ``import zstandard`` to cause an appropriate backend module to be loaded
+  automatically.
+
+Bug Fixes
+---------
+
+* CFFI backend could encounter a failure when sending an empty chunk into
+  ``ZstdDecompressionObj.decompress()``. The issue has been fixed.
+* CFFI backend could encounter an error when calling
+  ``ZstdDecompressionReader.read()`` if there was data remaining in an
+  internal buffer. The issue has been fixed. (#71)
+
+Changes
+-------
+
+* ``ZstdDecompressionObj.decompress()`` now properly handles empty inputs in
+  the CFFI backend.
+* ``ZstdCompressionReader`` now implements ``read1()`` and ``readinto1()``.
+  These are part of the ``io.BufferedIOBase`` interface.
+* ``ZstdCompressionReader`` has gained a ``readinto(b)`` method for reading
+  compressed output into an existing buffer.
+* ``ZstdCompressionReader.read()`` now defaults to ``size=-1`` and accepts
+  read sizes of ``-1`` and ``0``. The new behavior aligns with the documented
+  behavior of ``io.RawIOBase``.
+* ``ZstdCompressionReader`` now implements ``readall()``. Previously, this
+  method raised ``NotImplementedError``.
+* ``ZstdDecompressionReader`` now implements ``read1()`` and ``readinto1()``.
+  These are part of the ``io.BufferedIOBase`` interface.
+* ``ZstdDecompressionReader.read()`` now defaults to ``size=-1`` and accepts
+  read sizes of ``-1`` and ``0``. The new behavior aligns with the documented
+  behavior of ``io.RawIOBase``.
+* ``ZstdDecompressionReader()`` now implements ``readall()``. Previously, this
+  method raised ``NotImplementedError``.
+* The ``readline()``, ``readlines()``, ``__iter__``, and ``__next__`` methods
+  of ``ZstdDecompressionReader()`` now raise ``io.UnsupportedOperation``
+  instead of ``NotImplementedError``. This reflects a decision to never
+  implement text-based I/O on (de)compressors and keep the low-level API
+  operating in the binary domain. (#13)
+* ``README.rst`` now documents how to achieve linewise iteration using
+  an ``io.TextIOWrapper`` with a ``ZstdDecompressionReader``.
+* ``ZstdDecompressionReader`` has gained a ``readinto(b)`` method for
+  reading decompressed output into an existing buffer. This allows chaining
+  to an ``io.TextIOWrapper`` on Python 3 without using an ``io.BufferedReader``.
+* ``ZstdDecompressor.stream_reader()`` now accepts a ``read_across_frames``
+  argument to control behavior when the input data has multiple zstd
+  *frames*. When ``False`` (the default for backwards compatibility), a
+  ``read()`` will stop when the end of a zstd *frame* is encountered. When
+  ``True``, ``read()`` can potentially return data spanning multiple zstd
+  *frames*. The default will likely be changed to ``True`` in a future
+  release.
+* ``setup.py`` now performs CFFI version sniffing and disables the CFFI
+  backend if CFFI is too old. Previously, we only used ``install_requires``
+  to enforce the CFFI version and not all build modes would properly enforce
+  the minimum CFFI version. (#69)
+* CFFI's ``ZstdDecompressionReader.read()`` now properly handles data
+  remaining in any internal buffer. Before, repeated ``read()`` could
+  result in *random* errors. (#71)
+* Upgraded various Python packages in CI environment.
+* Upgrade to hypothesis 4.5.11.
+* In the CFFI backend, ``CompressionReader`` and ``DecompressionReader``
+  were renamed to ``ZstdCompressionReader`` and ``ZstdDecompressionReader``,
+  respectively.
+* ``ZstdDecompressor.stream_writer()`` now accepts a ``write_return_read``
+  argument to control whether ``write()`` returns the number of bytes read
+  from the source. It defaults to ``False`` to preserve backwards
+  compatibility.
+* ``ZstdDecompressor.stream_writer()`` now implements the ``io.RawIOBase``
+  interface and behaves as a proper stream object.
+* ``ZstdCompressor.stream_writer()`` now accepts a ``write_return_read``
+  argument to control whether ``write()`` returns the number of bytes read
+  from the source. It defaults to ``False`` to preserve backwards
+  compatibility.
+* ``ZstdCompressionWriter`` now implements the ``io.RawIOBase`` interface and
+  behaves as a proper stream object. ``close()`` will now close the stream
+  and the underlying stream (if possible). ``__exit__`` will now call
+  ``close()``. Methods like ``writable()`` and ``fileno()`` are implemented.
+* ``ZstdDecompressionWriter`` no longer must be used as a context manager.
+* ``ZstdCompressionWriter`` no longer must be used as a context manager.
+  When not used as a context manager, it is important to call
+  ``flush(FLUSH_FRAME)`` or the compression stream won't be properly
+  terminated and decoders may complain about malformed input.
+* ``ZstdCompressionWriter.flush()`` (what is returned from
+  ``ZstdCompressor.stream_writer()``) now accepts an argument controlling the
+  flush behavior. Its value can be one of the new constants
+  ``FLUSH_BLOCK`` or ``FLUSH_FRAME``.
+* ``ZstdDecompressionObj`` instances now have a ``flush([length=None])`` method.
+  This provides parity with standard library equivalent types. (#65)
+* ``CompressionParameters`` no longer redundantly stores individual compression
+  parameters on each instance. Instead, compression parameters are stored inside
+  the underlying ``ZSTD_CCtx_params`` instance. Attributes for obtaining
+  parameters are now properties rather than instance variables.
+* Exposed the ``STRATEGY_BTULTRA2`` constant.
+* ``CompressionParameters`` instances now expose an ``overlap_log`` attribute.
+  This behaves identically to the ``overlap_size_log`` attribute.
+* ``CompressionParameters()`` now accepts an ``overlap_log`` argument that
+  behaves identically to the ``overlap_size_log`` argument. An error will be
+  raised if both arguments are specified.
+* ``CompressionParameters`` instances now expose an ``ldm_hash_rate_log``
+  attribute. This behaves identically to the ``ldm_hash_every_log`` attribute.
+* ``CompressionParameters()`` now accepts a ``ldm_hash_rate_log`` argument that
+  behaves identically to the ``ldm_hash_every_log`` argument. An error will be
+  raised if both arguments are specified.
+* ``CompressionParameters()`` now accepts a ``strategy`` argument that behaves
+  identically to the ``compression_strategy`` argument. An error will be raised
+  if both arguments are specified.
+* The ``MINMATCH_MIN`` and ``MINMATCH_MAX`` constants were added. They are
+  semantically equivalent to the old ``SEARCHLENGTH_MIN`` and
+  ``SEARCHLENGTH_MAX`` constants.
+* Bundled zstandard library upgraded from 1.3.7 to 1.3.8.
+* ``setup.py`` denotes support for Python 3.7 (Python 3.7 was supported and
+  tested in the 0.10 release).
+* ``zstd_cffi`` module has been renamed to ``zstandard.cffi``.
+* ``ZstdCompressor.stream_writer()`` now reuses a buffer in order to avoid
+  allocating a new buffer for every operation. This should result in faster
+  performance in cases where ``write()`` or ``flush()`` are being called
+  frequently. (#62)
+* Bundled zstandard library upgraded from 1.3.6 to 1.3.7.
+
+0.10.2 (released 2018-11-03)
+============================
+
+Bug Fixes
+---------
+
+* ``zstd_cffi.py`` added to ``setup.py`` (#60).
+
+Changes
+-------
+
+* Change some integer casts to avoid ``ssize_t`` (#61).
+
 0.10.1 (released 2018-10-08)
 ============================
 
--- a/contrib/python-zstandard/README.rst	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/README.rst	Wed Apr 17 13:41:18 2019 -0400
@@ -20,9 +20,9 @@
 Requirements
 ============
 
-This extension is designed to run with Python 2.7, 3.4, 3.5, and 3.6
-on common platforms (Linux, Windows, and OS X). x86 and x86_64 are well-tested
-on Windows. Only x86_64 is well-tested on Linux and macOS.
+This extension is designed to run with Python 2.7, 3.4, 3.5, 3.6, and 3.7
+on common platforms (Linux, Windows, and OS X). On PyPy (both PyPy2 and
+PyPy3) we support version 6.0.0 and above. x86 and x86_64 are well-tested
+on Windows. Only x86_64 is well-tested on Linux and macOS.
 
 Installing
 ==========
@@ -215,7 +215,7 @@
 
                # Do something with compressed chunk.
 
-When the context manager exists or ``close()`` is called, the stream is closed,
+When the context manager exits or ``close()`` is called, the stream is closed,
 underlying resources are released, and future operations against the compression
 stream will fail.
 
@@ -251,8 +251,54 @@
 Streaming Input API
 ^^^^^^^^^^^^^^^^^^^
 
-``stream_writer(fh)`` (which behaves as a context manager) allows you to *stream*
-data into a compressor.::
+``stream_writer(fh)`` allows you to *stream* data into a compressor.
+
+Returned instances implement the ``io.RawIOBase`` interface. Only methods
+that involve writing will do useful things.
+
+The argument to ``stream_writer()`` must have a ``write(data)`` method. As
+compressed data is available, ``write()`` will be called with the compressed
+data as its argument. Many common Python types implement ``write()``, including
+open file handles and ``io.BytesIO``.
+
+The ``write(data)`` method is used to feed data into the compressor.
+
+The ``flush([flush_mode=FLUSH_BLOCK])`` method can be called to evict whatever
+data remains within the compressor's internal state into the output object. This
+may result in 0 or more ``write()`` calls to the output object. This method
+accepts an optional ``flush_mode`` argument to control the flushing behavior.
+Its value can be any of the ``FLUSH_*`` constants.
+
+Both ``write()`` and ``flush()`` return the number of bytes written to the
+object's ``write()``. In many cases, small inputs do not accumulate enough
+data to cause a write and ``write()`` will return ``0``.
+
+Calling ``close()`` will mark the stream as closed and subsequent I/O
+operations will raise ``ValueError`` (per the documented behavior of
+``io.RawIOBase``). ``close()`` will also call ``close()`` on the underlying
+stream if such a method exists.
+
+Typical usage is as follows::
+
+   cctx = zstd.ZstdCompressor(level=10)
+   compressor = cctx.stream_writer(fh)
+
+   compressor.write(b'chunk 0\n')
+   compressor.write(b'chunk 1\n')
+   compressor.flush()
+   # Receiver will be able to decode ``chunk 0\nchunk 1\n`` at this point.
+   # Receiver is also expecting more data in the zstd *frame*.
+
+   compressor.write(b'chunk 2\n')
+   compressor.flush(zstd.FLUSH_FRAME)
+   # Receiver will be able to decode ``chunk 0\nchunk 1\nchunk 2\n``.
+   # Receiver is expecting no more data, as the zstd frame is closed.
+   # Any future calls to ``write()`` at this point will construct a new
+   # zstd frame.
+
+Instances can be used as context managers. Exiting the context manager is
+the equivalent of calling ``close()``, which is equivalent to calling
+``flush(zstd.FLUSH_FRAME)``::
 
    cctx = zstd.ZstdCompressor(level=10)
    with cctx.stream_writer(fh) as compressor:
@@ -260,22 +306,12 @@
        compressor.write(b'chunk 1')
        ...
 
-The argument to ``stream_writer()`` must have a ``write(data)`` method. As
-compressed data is available, ``write()`` will be called with the compressed
-data as its argument. Many common Python types implement ``write()``, including
-open file handles and ``io.BytesIO``.
+.. important::
 
-``stream_writer()`` returns an object representing a streaming compressor
-instance. It **must** be used as a context manager. That object's
-``write(data)`` method is used to feed data into the compressor.
-
-A ``flush()`` method can be called to evict whatever data remains within the
-compressor's internal state into the output object. This may result in 0 or
-more ``write()`` calls to the output object.
-
-Both ``write()`` and ``flush()`` return the number of bytes written to the
-object's ``write()``. In many cases, small inputs do not accumulate enough
-data to cause a write and ``write()`` will return ``0``.
+   If ``flush(FLUSH_FRAME)`` is not called, emitted data doesn't constitute
+   a full zstd *frame* and consumers of this data may complain about malformed
+   input. It is recommended to use instances as a context manager to ensure
+   *frames* are properly finished.
 
 If the size of the data being fed to this streaming compressor is known,
 you can declare it before compression begins::
@@ -310,6 +346,14 @@
         ...
         total_written = compressor.tell()
 
+``stream_writer()`` accepts a ``write_return_read`` boolean argument to control
+the return value of ``write()``. When ``False`` (the default), ``write()``
+returns the number of bytes that were written to the underlying object. When
+``True``, ``write()`` returns the number of bytes read from the input that
+were subsequently written to the compressor. ``True`` is the *proper* behavior
+for ``write()`` as specified by the ``io.RawIOBase`` interface and will become
+the default value in a future release.
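+
+As a minimal sketch (``fh`` is assumed to be an open binary file object)::
+
+   cctx = zstd.ZstdCompressor()
+   with cctx.stream_writer(fh, write_return_read=True) as compressor:
+       # consumed is the number of input bytes accepted by the
+       # compressor, not the number of compressed bytes written to fh.
+       consumed = compressor.write(b'data to compress')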
+
 Streaming Output API
 ^^^^^^^^^^^^^^^^^^^^
 
@@ -654,27 +698,63 @@
 ``tell()`` returns the number of decompressed bytes read so far.
 
 Not all I/O methods are implemented. Notably missing is support for
-``readline()``, ``readlines()``, and linewise iteration support. Support for
-these is planned for a future release.
+``readline()``, ``readlines()``, and linewise iteration support. This is
+because streams operate on binary data, not text data. If you want to
+convert decompressed output to text, you can chain an ``io.TextIOWrapper``
+to the stream::
+
+   with open(path, 'rb') as fh:
+       dctx = zstd.ZstdDecompressor()
+       stream_reader = dctx.stream_reader(fh)
+       text_stream = io.TextIOWrapper(stream_reader, encoding='utf-8')
+
+       for line in text_stream:
+           ...
+
+The ``read_across_frames`` argument to ``stream_reader()`` controls the
+behavior of read operations when the end of a zstd *frame* is encountered.
+When ``False`` (the default), a read will complete when the end of a
+zstd *frame* is encountered. When ``True``, a read can potentially
+return data spanning multiple zstd *frames*.
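+
+As a sketch (``fh`` is assumed to contain multiple concatenated zstd
+frames)::
+
+   dctx = zstd.ZstdDecompressor()
+   reader = dctx.stream_reader(fh, read_across_frames=True)
+   # With read_across_frames=True, a single read() may return data
+   # spanning frame boundaries instead of stopping at the end of the
+   # first frame.
+   data = reader.read(8192)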
 
 Streaming Input API
 ^^^^^^^^^^^^^^^^^^^
 
-``stream_writer(fh)`` can be used to incrementally send compressed data to a
-decompressor.::
+``stream_writer(fh)`` allows you to *stream* data into a decompressor.
+
+Returned instances implement the ``io.RawIOBase`` interface. Only methods
+that involve writing will do useful things.
+
+The argument to ``stream_writer()`` is typically an object that also implements
+``io.RawIOBase``. But any object with a ``write(data)`` method will work. Many
+common Python types conform to this interface, including open file handles
+and ``io.BytesIO``.
+
+Behavior is similar to ``ZstdCompressor.stream_writer()``: compressed data
+is sent to the decompressor by calling ``write(data)`` and decompressed
+output is written to the underlying stream by calling its ``write(data)``
+method::
 
     dctx = zstd.ZstdDecompressor()
-    with dctx.stream_writer(fh) as decompressor:
-        decompressor.write(compressed_data)
+    decompressor = dctx.stream_writer(fh)
 
-This behaves similarly to ``zstd.ZstdCompressor``: compressed data is written to
-the decompressor by calling ``write(data)`` and decompressed output is written
-to the output object by calling its ``write(data)`` method.
+    decompressor.write(compressed_data)
+    ...
+
 
 Calls to ``write()`` will return the number of bytes written to the output
 object. Not all inputs will result in bytes being written, so return values
 of ``0`` are possible.
 
+Like the ``stream_writer()`` compressor, instances can be used as context
+managers. However, the context manager adds no special behavior and offers
+little benefit over using the instance directly.
+
+Calling ``close()`` will mark the stream as closed and subsequent I/O operations
+will raise ``ValueError`` (per the documented behavior of ``io.RawIOBase``).
+``close()`` will also call ``close()`` on the underlying stream if such a
+method exists.
+
 The size of chunks being ``write()`` to the destination can be specified::
 
     dctx = zstd.ZstdDecompressor()
@@ -687,6 +767,13 @@
     with dctx.stream_writer(fh) as decompressor:
         byte_size = decompressor.memory_size()
 
+``stream_writer()`` accepts a ``write_return_read`` boolean argument to control
+the return value of ``write()``. When ``False`` (the default), ``write()``
+returns the number of bytes that were written to the underlying stream.
+When ``True``, ``write()`` returns the number of bytes read from the input.
+``True`` is the *proper* behavior for ``write()`` as specified by the
+``io.RawIOBase`` interface and will become the default in a future release.
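+
+A corresponding sketch (``fh`` and ``compressed_data`` are assumed)::
+
+    dctx = zstd.ZstdDecompressor()
+    decompressor = dctx.stream_writer(fh, write_return_read=True)
+    # consumed is the number of compressed input bytes accepted, not
+    # the number of decompressed bytes written to fh.
+    consumed = decompressor.write(compressed_data)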
+
 Streaming Output API
 ^^^^^^^^^^^^^^^^^^^^
 
@@ -791,6 +878,10 @@
    memory (re)allocations, this streaming decompression API isn't as
    efficient as other APIs.
 
+For compatibility with the standard library APIs, instances expose a
+``flush([length=None])`` method. This method is a no-op and has no meaningful
+side-effects, making it safe to call at any time.
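+
+For example (a sketch; ``dctx`` is assumed to be a ``ZstdDecompressor`` and
+``compressed_chunk`` to hold compressed bytes)::
+
+    dobj = dctx.decompressobj()
+    data = dobj.decompress(compressed_chunk)
+    dobj.flush()  # safe no-op, present only for stdlib API parity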
+
 Batch Decompression API
 ^^^^^^^^^^^^^^^^^^^^^^^
 
@@ -1147,18 +1238,21 @@
 * search_log
 * min_match
 * target_length
-* compression_strategy
+* strategy
+* compression_strategy (deprecated: same as ``strategy``)
 * write_content_size
 * write_checksum
 * write_dict_id
 * job_size
-* overlap_size_log
+* overlap_log
+* overlap_size_log (deprecated: same as ``overlap_log``)
 * force_max_window
 * enable_ldm
 * ldm_hash_log
 * ldm_min_match
 * ldm_bucket_size_log
-* ldm_hash_every_log
+* ldm_hash_rate_log
+* ldm_hash_every_log (deprecated: same as ``ldm_hash_rate_log``)
 * threads
 
 Some of these are very low-level settings. It may help to consult the official
@@ -1240,6 +1334,13 @@
 MAGIC_NUMBER
     Frame header as an integer
 
+FLUSH_BLOCK
+    Flushing behavior that denotes to flush a zstd block. A decompressor will
+    be able to decode all data fed into the compressor so far.
+FLUSH_FRAME
+    Flushing behavior that denotes to end a zstd frame. Any new data fed
+    to the compressor will start a new frame.
+
 CONTENTSIZE_UNKNOWN
     Value for content size when the content size is unknown.
 CONTENTSIZE_ERROR
@@ -1261,10 +1362,18 @@
     Minimum value for compression parameter
 SEARCHLOG_MAX
     Maximum value for compression parameter
+MINMATCH_MIN
+    Minimum value for compression parameter
+MINMATCH_MAX
+    Maximum value for compression parameter
 SEARCHLENGTH_MIN
     Minimum value for compression parameter
+
+    Deprecated: use ``MINMATCH_MIN``
 SEARCHLENGTH_MAX
     Maximum value for compression parameter
+
+    Deprecated: use ``MINMATCH_MAX``
 TARGETLENGTH_MIN
     Minimum value for compression parameter
 STRATEGY_FAST
@@ -1283,6 +1392,8 @@
     Compression strategy
 STRATEGY_BTULTRA
     Compression strategy
+STRATEGY_BTULTRA2
+    Compression strategy
 
 FORMAT_ZSTD1
     Zstandard frame format
--- a/contrib/python-zstandard/c-ext/compressionchunker.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/c-ext/compressionchunker.c	Wed Apr 17 13:41:18 2019 -0400
@@ -43,7 +43,7 @@
 	/* If we have data left in the input, consume it. */
 	while (chunker->input.pos < chunker->input.size) {
 		Py_BEGIN_ALLOW_THREADS
-		zresult = ZSTD_compress_generic(chunker->compressor->cctx, &chunker->output,
+		zresult = ZSTD_compressStream2(chunker->compressor->cctx, &chunker->output,
 			&chunker->input, ZSTD_e_continue);
 		Py_END_ALLOW_THREADS
 
@@ -104,7 +104,7 @@
 	}
 
 	Py_BEGIN_ALLOW_THREADS
-	zresult = ZSTD_compress_generic(chunker->compressor->cctx, &chunker->output,
+	zresult = ZSTD_compressStream2(chunker->compressor->cctx, &chunker->output,
 		&chunker->input, zFlushMode);
 	Py_END_ALLOW_THREADS
 
--- a/contrib/python-zstandard/c-ext/compressiondict.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/c-ext/compressiondict.c	Wed Apr 17 13:41:18 2019 -0400
@@ -298,13 +298,9 @@
 		cParams = ZSTD_getCParams(level, 0, self->dictSize);
 	}
 	else {
-		cParams.chainLog = compressionParams->chainLog;
-		cParams.hashLog = compressionParams->hashLog;
-		cParams.searchLength = compressionParams->minMatch;
-		cParams.searchLog = compressionParams->searchLog;
-		cParams.strategy = compressionParams->compressionStrategy;
-		cParams.targetLength = compressionParams->targetLength;
-		cParams.windowLog = compressionParams->windowLog;
+		if (to_cparams(compressionParams, &cParams)) {
+			return NULL;
+		}
 	}
 
 	assert(!self->cdict);
--- a/contrib/python-zstandard/c-ext/compressionparams.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/c-ext/compressionparams.c	Wed Apr 17 13:41:18 2019 -0400
@@ -10,7 +10,7 @@
 
 extern PyObject* ZstdError;
 
-int set_parameter(ZSTD_CCtx_params* params, ZSTD_cParameter param, unsigned value) {
+int set_parameter(ZSTD_CCtx_params* params, ZSTD_cParameter param, int value) {
 	size_t zresult = ZSTD_CCtxParam_setParameter(params, param, value);
 	if (ZSTD_isError(zresult)) {
 		PyErr_Format(ZstdError, "unable to set compression context parameter: %s",
@@ -23,28 +23,41 @@
 
 #define TRY_SET_PARAMETER(params, param, value) if (set_parameter(params, param, value)) return -1;
 
+#define TRY_COPY_PARAMETER(source, dest, param) { \
+	int result; \
+	size_t zresult = ZSTD_CCtxParam_getParameter(source, param, &result); \
+	if (ZSTD_isError(zresult)) { \
+		return 1; \
+	} \
+	zresult = ZSTD_CCtxParam_setParameter(dest, param, result); \
+	if (ZSTD_isError(zresult)) { \
+		return 1; \
+	} \
+}
+
 int set_parameters(ZSTD_CCtx_params* params, ZstdCompressionParametersObject* obj) {
-	TRY_SET_PARAMETER(params, ZSTD_p_format, obj->format);
-	TRY_SET_PARAMETER(params, ZSTD_p_compressionLevel, (unsigned)obj->compressionLevel);
-	TRY_SET_PARAMETER(params, ZSTD_p_windowLog, obj->windowLog);
-	TRY_SET_PARAMETER(params, ZSTD_p_hashLog, obj->hashLog);
-	TRY_SET_PARAMETER(params, ZSTD_p_chainLog, obj->chainLog);
-	TRY_SET_PARAMETER(params, ZSTD_p_searchLog, obj->searchLog);
-	TRY_SET_PARAMETER(params, ZSTD_p_minMatch, obj->minMatch);
-	TRY_SET_PARAMETER(params, ZSTD_p_targetLength, obj->targetLength);
-	TRY_SET_PARAMETER(params, ZSTD_p_compressionStrategy, obj->compressionStrategy);
-	TRY_SET_PARAMETER(params, ZSTD_p_contentSizeFlag, obj->contentSizeFlag);
-	TRY_SET_PARAMETER(params, ZSTD_p_checksumFlag, obj->checksumFlag);
-	TRY_SET_PARAMETER(params, ZSTD_p_dictIDFlag, obj->dictIDFlag);
-	TRY_SET_PARAMETER(params, ZSTD_p_nbWorkers, obj->threads);
-	TRY_SET_PARAMETER(params, ZSTD_p_jobSize, obj->jobSize);
-	TRY_SET_PARAMETER(params, ZSTD_p_overlapSizeLog, obj->overlapSizeLog);
-	TRY_SET_PARAMETER(params, ZSTD_p_forceMaxWindow, obj->forceMaxWindow);
-	TRY_SET_PARAMETER(params, ZSTD_p_enableLongDistanceMatching, obj->enableLongDistanceMatching);
-	TRY_SET_PARAMETER(params, ZSTD_p_ldmHashLog, obj->ldmHashLog);
-	TRY_SET_PARAMETER(params, ZSTD_p_ldmMinMatch, obj->ldmMinMatch);
-	TRY_SET_PARAMETER(params, ZSTD_p_ldmBucketSizeLog, obj->ldmBucketSizeLog);
-	TRY_SET_PARAMETER(params, ZSTD_p_ldmHashEveryLog, obj->ldmHashEveryLog);
+	TRY_COPY_PARAMETER(obj->params, params, ZSTD_c_nbWorkers);
+
+	TRY_COPY_PARAMETER(obj->params, params, ZSTD_c_format);
+	TRY_COPY_PARAMETER(obj->params, params, ZSTD_c_compressionLevel);
+	TRY_COPY_PARAMETER(obj->params, params, ZSTD_c_windowLog);
+	TRY_COPY_PARAMETER(obj->params, params, ZSTD_c_hashLog);
+	TRY_COPY_PARAMETER(obj->params, params, ZSTD_c_chainLog);
+	TRY_COPY_PARAMETER(obj->params, params, ZSTD_c_searchLog);
+	TRY_COPY_PARAMETER(obj->params, params, ZSTD_c_minMatch);
+	TRY_COPY_PARAMETER(obj->params, params, ZSTD_c_targetLength);
+	TRY_COPY_PARAMETER(obj->params, params, ZSTD_c_strategy);
+	TRY_COPY_PARAMETER(obj->params, params, ZSTD_c_contentSizeFlag);
+	TRY_COPY_PARAMETER(obj->params, params, ZSTD_c_checksumFlag);
+	TRY_COPY_PARAMETER(obj->params, params, ZSTD_c_dictIDFlag);
+	TRY_COPY_PARAMETER(obj->params, params, ZSTD_c_jobSize);
+	TRY_COPY_PARAMETER(obj->params, params, ZSTD_c_overlapLog);
+	TRY_COPY_PARAMETER(obj->params, params, ZSTD_c_forceMaxWindow);
+	TRY_COPY_PARAMETER(obj->params, params, ZSTD_c_enableLongDistanceMatching);
+	TRY_COPY_PARAMETER(obj->params, params, ZSTD_c_ldmHashLog);
+	TRY_COPY_PARAMETER(obj->params, params, ZSTD_c_ldmMinMatch);
+	TRY_COPY_PARAMETER(obj->params, params, ZSTD_c_ldmBucketSizeLog);
+	TRY_COPY_PARAMETER(obj->params, params, ZSTD_c_ldmHashRateLog);
 
 	return 0;
 }
@@ -64,6 +77,41 @@
 	return set_parameters(params->params, params);
 }
 
+#define TRY_GET_PARAMETER(params, param, value) { \
+    size_t zresult = ZSTD_CCtxParam_getParameter(params, param, value); \
+    if (ZSTD_isError(zresult)) { \
+        PyErr_Format(ZstdError, "unable to retrieve parameter: %s", ZSTD_getErrorName(zresult)); \
+        return 1; \
+    } \
+}
+
+int to_cparams(ZstdCompressionParametersObject* params, ZSTD_compressionParameters* cparams) {
+	int value;
+
+	TRY_GET_PARAMETER(params->params, ZSTD_c_windowLog, &value);
+	cparams->windowLog = value;
+
+	TRY_GET_PARAMETER(params->params, ZSTD_c_chainLog, &value);
+	cparams->chainLog = value;
+
+	TRY_GET_PARAMETER(params->params, ZSTD_c_hashLog, &value);
+	cparams->hashLog = value;
+
+	TRY_GET_PARAMETER(params->params, ZSTD_c_searchLog, &value);
+	cparams->searchLog = value;
+
+	TRY_GET_PARAMETER(params->params, ZSTD_c_minMatch, &value);
+	cparams->minMatch = value;
+
+	TRY_GET_PARAMETER(params->params, ZSTD_c_targetLength, &value);
+	cparams->targetLength = value;
+
+	TRY_GET_PARAMETER(params->params, ZSTD_c_strategy, &value);
+	cparams->strategy = value;
+
+	return 0;
+}
+
 static int ZstdCompressionParameters_init(ZstdCompressionParametersObject* self, PyObject* args, PyObject* kwargs) {
 	static char* kwlist[] = {
 		"format",
@@ -75,50 +123,60 @@
 		"min_match",
 		"target_length",
 		"compression_strategy",
+		"strategy",
 		"write_content_size",
 		"write_checksum",
 		"write_dict_id",
 		"job_size",
+		"overlap_log",
 		"overlap_size_log",
 		"force_max_window",
 		"enable_ldm",
 		"ldm_hash_log",
 		"ldm_min_match",
 		"ldm_bucket_size_log",
+		"ldm_hash_rate_log",
 		"ldm_hash_every_log",
 		"threads",
 		NULL
 	};
 
-	unsigned format = 0;
+	int format = 0;
 	int compressionLevel = 0;
-	unsigned windowLog = 0;
-	unsigned hashLog = 0;
-	unsigned chainLog = 0;
-	unsigned searchLog = 0;
-	unsigned minMatch = 0;
-	unsigned targetLength = 0;
-	unsigned compressionStrategy = 0;
-	unsigned contentSizeFlag = 1;
-	unsigned checksumFlag = 0;
-	unsigned dictIDFlag = 0;
-	unsigned jobSize = 0;
-	unsigned overlapSizeLog = 0;
-	unsigned forceMaxWindow = 0;
-	unsigned enableLDM = 0;
-	unsigned ldmHashLog = 0;
-	unsigned ldmMinMatch = 0;
-	unsigned ldmBucketSizeLog = 0;
-	unsigned ldmHashEveryLog = 0;
+	int windowLog = 0;
+	int hashLog = 0;
+	int chainLog = 0;
+	int searchLog = 0;
+	int minMatch = 0;
+	int targetLength = 0;
+	int compressionStrategy = -1;
+	int strategy = -1;
+	int contentSizeFlag = 1;
+	int checksumFlag = 0;
+	int dictIDFlag = 0;
+	int jobSize = 0;
+	int overlapLog = -1;
+	int overlapSizeLog = -1;
+	int forceMaxWindow = 0;
+	int enableLDM = 0;
+	int ldmHashLog = 0;
+	int ldmMinMatch = 0;
+	int ldmBucketSizeLog = 0;
+	int ldmHashRateLog = -1;
+	int ldmHashEveryLog = -1;
 	int threads = 0;
 
 	if (!PyArg_ParseTupleAndKeywords(args, kwargs,
-		"|IiIIIIIIIIIIIIIIIIIIi:CompressionParameters",
+		"|iiiiiiiiiiiiiiiiiiiiiiii:CompressionParameters",
 		kwlist, &format, &compressionLevel, &windowLog, &hashLog, &chainLog,
-		&searchLog, &minMatch, &targetLength, &compressionStrategy,
-		&contentSizeFlag, &checksumFlag, &dictIDFlag, &jobSize, &overlapSizeLog,
-		&forceMaxWindow, &enableLDM, &ldmHashLog, &ldmMinMatch, &ldmBucketSizeLog,
-		&ldmHashEveryLog, &threads)) {
+		&searchLog, &minMatch, &targetLength, &compressionStrategy, &strategy,
+		&contentSizeFlag, &checksumFlag, &dictIDFlag, &jobSize, &overlapLog,
+		&overlapSizeLog, &forceMaxWindow, &enableLDM, &ldmHashLog, &ldmMinMatch,
+		&ldmBucketSizeLog, &ldmHashRateLog, &ldmHashEveryLog, &threads)) {
+		return -1;
+	}
+
+	if (reset_params(self)) {
 		return -1;
 	}
 
@@ -126,32 +184,70 @@
 		threads = cpu_count();
 	}
 
-	self->format = format;
-	self->compressionLevel = compressionLevel;
-	self->windowLog = windowLog;
-	self->hashLog = hashLog;
-	self->chainLog = chainLog;
-	self->searchLog = searchLog;
-	self->minMatch = minMatch;
-	self->targetLength = targetLength;
-	self->compressionStrategy = compressionStrategy;
-	self->contentSizeFlag = contentSizeFlag;
-	self->checksumFlag = checksumFlag;
-	self->dictIDFlag = dictIDFlag;
-	self->threads = threads;
-	self->jobSize = jobSize;
-	self->overlapSizeLog = overlapSizeLog;
-	self->forceMaxWindow = forceMaxWindow;
-	self->enableLongDistanceMatching = enableLDM;
-	self->ldmHashLog = ldmHashLog;
-	self->ldmMinMatch = ldmMinMatch;
-	self->ldmBucketSizeLog = ldmBucketSizeLog;
-	self->ldmHashEveryLog = ldmHashEveryLog;
+	/* We need to set ZSTD_c_nbWorkers before ZSTD_c_jobSize and ZSTD_c_overlapLog
+	 * because setting ZSTD_c_nbWorkers resets the other parameters. */
+	TRY_SET_PARAMETER(self->params, ZSTD_c_nbWorkers, threads);
+
+	TRY_SET_PARAMETER(self->params, ZSTD_c_format, format);
+	TRY_SET_PARAMETER(self->params, ZSTD_c_compressionLevel, compressionLevel);
+	TRY_SET_PARAMETER(self->params, ZSTD_c_windowLog, windowLog);
+	TRY_SET_PARAMETER(self->params, ZSTD_c_hashLog, hashLog);
+	TRY_SET_PARAMETER(self->params, ZSTD_c_chainLog, chainLog);
+	TRY_SET_PARAMETER(self->params, ZSTD_c_searchLog, searchLog);
+	TRY_SET_PARAMETER(self->params, ZSTD_c_minMatch, minMatch);
+	TRY_SET_PARAMETER(self->params, ZSTD_c_targetLength, targetLength);
 
-	if (reset_params(self)) {
+	if (compressionStrategy != -1 && strategy != -1) {
+		PyErr_SetString(PyExc_ValueError, "cannot specify both compression_strategy and strategy");
+		return -1;
+	}
+
+	if (compressionStrategy != -1) {
+		strategy = compressionStrategy;
+	}
+	else if (strategy == -1) {
+		strategy = 0;
+	}
+
+	TRY_SET_PARAMETER(self->params, ZSTD_c_strategy, strategy);
+	TRY_SET_PARAMETER(self->params, ZSTD_c_contentSizeFlag, contentSizeFlag);
+	TRY_SET_PARAMETER(self->params, ZSTD_c_checksumFlag, checksumFlag);
+	TRY_SET_PARAMETER(self->params, ZSTD_c_dictIDFlag, dictIDFlag);
+	TRY_SET_PARAMETER(self->params, ZSTD_c_jobSize, jobSize);
+
+	if (overlapLog != -1 && overlapSizeLog != -1) {
+		PyErr_SetString(PyExc_ValueError, "cannot specify both overlap_log and overlap_size_log");
 		return -1;
 	}
 
+	if (overlapSizeLog != -1) {
+		overlapLog = overlapSizeLog;
+	}
+	else if (overlapLog == -1) {
+		overlapLog = 0;
+	}
+
+	TRY_SET_PARAMETER(self->params, ZSTD_c_overlapLog, overlapLog);
+	TRY_SET_PARAMETER(self->params, ZSTD_c_forceMaxWindow, forceMaxWindow);
+	TRY_SET_PARAMETER(self->params, ZSTD_c_enableLongDistanceMatching, enableLDM);
+	TRY_SET_PARAMETER(self->params, ZSTD_c_ldmHashLog, ldmHashLog);
+	TRY_SET_PARAMETER(self->params, ZSTD_c_ldmMinMatch, ldmMinMatch);
+	TRY_SET_PARAMETER(self->params, ZSTD_c_ldmBucketSizeLog, ldmBucketSizeLog);
+
+	if (ldmHashRateLog != -1 && ldmHashEveryLog != -1) {
+		PyErr_SetString(PyExc_ValueError, "cannot specify both ldm_hash_rate_log and ldm_hash_every_log");
+		return -1;
+	}
+
+	if (ldmHashEveryLog != -1) {
+		ldmHashRateLog = ldmHashEveryLog;
+	}
+	else if (ldmHashRateLog == -1) {
+		ldmHashRateLog = 0;
+	}
+
+	TRY_SET_PARAMETER(self->params, ZSTD_c_ldmHashRateLog, ldmHashRateLog);
+
 	return 0;
 }
 
@@ -259,7 +355,7 @@
 
 	val = PyDict_GetItemString(kwargs, "min_match");
 	if (!val) {
-		val = PyLong_FromUnsignedLong(params.searchLength);
+		val = PyLong_FromUnsignedLong(params.minMatch);
 		if (!val) {
 			goto cleanup;
 		}
@@ -336,6 +432,41 @@
 	PyObject_Del(self);
 }
 
+#define PARAM_GETTER(name, param) PyObject* ZstdCompressionParameters_get_##name(PyObject* self, void* unused) { \
+    int result; \
+    size_t zresult; \
+    ZstdCompressionParametersObject* p = (ZstdCompressionParametersObject*)(self); \
+    zresult = ZSTD_CCtxParam_getParameter(p->params, param, &result); \
+    if (ZSTD_isError(zresult)) { \
+        PyErr_Format(ZstdError, "unable to get compression parameter: %s", \
+            ZSTD_getErrorName(zresult)); \
+        return NULL; \
+    } \
+    return PyLong_FromLong(result); \
+}
+
+PARAM_GETTER(format, ZSTD_c_format)
+PARAM_GETTER(compression_level, ZSTD_c_compressionLevel)
+PARAM_GETTER(window_log, ZSTD_c_windowLog)
+PARAM_GETTER(hash_log, ZSTD_c_hashLog)
+PARAM_GETTER(chain_log, ZSTD_c_chainLog)
+PARAM_GETTER(search_log, ZSTD_c_searchLog)
+PARAM_GETTER(min_match, ZSTD_c_minMatch)
+PARAM_GETTER(target_length, ZSTD_c_targetLength)
+PARAM_GETTER(compression_strategy, ZSTD_c_strategy)
+PARAM_GETTER(write_content_size, ZSTD_c_contentSizeFlag)
+PARAM_GETTER(write_checksum, ZSTD_c_checksumFlag)
+PARAM_GETTER(write_dict_id, ZSTD_c_dictIDFlag)
+PARAM_GETTER(job_size, ZSTD_c_jobSize)
+PARAM_GETTER(overlap_log, ZSTD_c_overlapLog)
+PARAM_GETTER(force_max_window, ZSTD_c_forceMaxWindow)
+PARAM_GETTER(enable_ldm, ZSTD_c_enableLongDistanceMatching)
+PARAM_GETTER(ldm_hash_log, ZSTD_c_ldmHashLog)
+PARAM_GETTER(ldm_min_match, ZSTD_c_ldmMinMatch)
+PARAM_GETTER(ldm_bucket_size_log, ZSTD_c_ldmBucketSizeLog)
+PARAM_GETTER(ldm_hash_rate_log, ZSTD_c_ldmHashRateLog)
+PARAM_GETTER(threads, ZSTD_c_nbWorkers)
+
 static PyMethodDef ZstdCompressionParameters_methods[] = {
 	{
 		"from_level",
@@ -352,70 +483,34 @@
 	{ NULL, NULL }
 };
 
-static PyMemberDef ZstdCompressionParameters_members[] = {
-	{ "format", T_UINT,
-	  offsetof(ZstdCompressionParametersObject, format), READONLY,
-	  "compression format" },
-	{ "compression_level", T_INT,
-	  offsetof(ZstdCompressionParametersObject, compressionLevel), READONLY,
-	  "compression level" },
-	{ "window_log", T_UINT,
-	  offsetof(ZstdCompressionParametersObject, windowLog), READONLY,
-	  "window log" },
-	{ "hash_log", T_UINT,
-	  offsetof(ZstdCompressionParametersObject, hashLog), READONLY,
-	  "hash log" },
-	{ "chain_log", T_UINT,
-	  offsetof(ZstdCompressionParametersObject, chainLog), READONLY,
-	  "chain log" },
-	{ "search_log", T_UINT,
-	  offsetof(ZstdCompressionParametersObject, searchLog), READONLY,
-	  "search log" },
-	{ "min_match", T_UINT,
-	  offsetof(ZstdCompressionParametersObject, minMatch), READONLY,
-	  "search length" },
-	{ "target_length", T_UINT,
-	  offsetof(ZstdCompressionParametersObject, targetLength), READONLY,
-	  "target length" },
-	{ "compression_strategy", T_UINT,
-	  offsetof(ZstdCompressionParametersObject, compressionStrategy), READONLY,
-	  "compression strategy" },
-	{ "write_content_size", T_UINT,
-	  offsetof(ZstdCompressionParametersObject, contentSizeFlag), READONLY,
-	  "whether to write content size in frames" },
-	{ "write_checksum", T_UINT,
-	  offsetof(ZstdCompressionParametersObject, checksumFlag), READONLY,
-	  "whether to write checksum in frames" },
-	{ "write_dict_id", T_UINT,
-	  offsetof(ZstdCompressionParametersObject, dictIDFlag), READONLY,
-	  "whether to write dictionary ID in frames" },
-	{ "threads", T_UINT,
-	  offsetof(ZstdCompressionParametersObject, threads), READONLY,
-	  "number of threads to use" },
-	{ "job_size", T_UINT,
-	  offsetof(ZstdCompressionParametersObject, jobSize), READONLY,
-	  "size of compression job when using multiple threads" },
-	{ "overlap_size_log", T_UINT,
-	  offsetof(ZstdCompressionParametersObject, overlapSizeLog), READONLY,
-	  "Size of previous input reloaded at the beginning of each job" },
-	{ "force_max_window", T_UINT,
-	  offsetof(ZstdCompressionParametersObject, forceMaxWindow), READONLY,
-	  "force back references to remain smaller than window size" },
-	{ "enable_ldm", T_UINT,
-	  offsetof(ZstdCompressionParametersObject, enableLongDistanceMatching), READONLY,
-	  "whether to enable long distance matching" },
-	{ "ldm_hash_log", T_UINT,
-	  offsetof(ZstdCompressionParametersObject, ldmHashLog), READONLY,
-	  "Size of the table for long distance matching, as a power of 2" },
-	{ "ldm_min_match", T_UINT,
-	  offsetof(ZstdCompressionParametersObject, ldmMinMatch), READONLY,
-	  "minimum size of searched matches for long distance matcher" },
-	{ "ldm_bucket_size_log", T_UINT,
-	  offsetof(ZstdCompressionParametersObject, ldmBucketSizeLog), READONLY,
-	  "log size of each bucket in the LDM hash table for collision resolution" },
-	{ "ldm_hash_every_log", T_UINT,
-	  offsetof(ZstdCompressionParametersObject, ldmHashEveryLog), READONLY,
-	  "frequency of inserting/looking up entries in the LDM hash table" },
+#define GET_SET_ENTRY(name) { #name, ZstdCompressionParameters_get_##name, NULL, NULL, NULL }
+
+static PyGetSetDef ZstdCompressionParameters_getset[] = {
+	GET_SET_ENTRY(format),
+	GET_SET_ENTRY(compression_level),
+	GET_SET_ENTRY(window_log),
+	GET_SET_ENTRY(hash_log),
+	GET_SET_ENTRY(chain_log),
+	GET_SET_ENTRY(search_log),
+	GET_SET_ENTRY(min_match),
+	GET_SET_ENTRY(target_length),
+	GET_SET_ENTRY(compression_strategy),
+	GET_SET_ENTRY(write_content_size),
+	GET_SET_ENTRY(write_checksum),
+	GET_SET_ENTRY(write_dict_id),
+	GET_SET_ENTRY(threads),
+	GET_SET_ENTRY(job_size),
+	GET_SET_ENTRY(overlap_log),
+	/* TODO remove this deprecated attribute */
+	{ "overlap_size_log", ZstdCompressionParameters_get_overlap_log, NULL, NULL, NULL },
+	GET_SET_ENTRY(force_max_window),
+	GET_SET_ENTRY(enable_ldm),
+	GET_SET_ENTRY(ldm_hash_log),
+	GET_SET_ENTRY(ldm_min_match),
+	GET_SET_ENTRY(ldm_bucket_size_log),
+	GET_SET_ENTRY(ldm_hash_rate_log),
+	/* TODO remove this deprecated attribute */
+	{ "ldm_hash_every_log", ZstdCompressionParameters_get_ldm_hash_rate_log, NULL, NULL, NULL },
 	{ NULL }
 };
 
@@ -448,8 +543,8 @@
 	0,                         /* tp_iter */
 	0,                         /* tp_iternext */
 	ZstdCompressionParameters_methods, /* tp_methods */
-	ZstdCompressionParameters_members, /* tp_members */
-	0,                         /* tp_getset */
+	0,                          /* tp_members */
+	ZstdCompressionParameters_getset,  /* tp_getset */
 	0,                         /* tp_base */
 	0,                         /* tp_dict */
 	0,                         /* tp_descr_get */
--- a/contrib/python-zstandard/c-ext/compressionreader.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/c-ext/compressionreader.c	Wed Apr 17 13:41:18 2019 -0400
@@ -128,6 +128,96 @@
 	return PyLong_FromUnsignedLongLong(self->bytesCompressed);
 }
 
+int read_compressor_input(ZstdCompressionReader* self) {
+	if (self->finishedInput) {
+		return 0;
+	}
+
+	if (self->input.pos != self->input.size) {
+		return 0;
+	}
+
+	if (self->reader) {
+		Py_buffer buffer;
+
+		assert(self->readResult == NULL);
+
+		self->readResult = PyObject_CallMethod(self->reader, "read",
+		    "k", self->readSize);
+
+		if (NULL == self->readResult) {
+			return -1;
+		}
+
+		memset(&buffer, 0, sizeof(buffer));
+
+		if (0 != PyObject_GetBuffer(self->readResult, &buffer, PyBUF_CONTIG_RO)) {
+			return -1;
+		}
+
+		/* EOF */
+		if (0 == buffer.len) {
+			self->finishedInput = 1;
+			Py_CLEAR(self->readResult);
+		}
+		else {
+			self->input.src = buffer.buf;
+			self->input.size = buffer.len;
+			self->input.pos = 0;
+		}
+
+		PyBuffer_Release(&buffer);
+	}
+	else {
+		assert(self->buffer.buf);
+
+		self->input.src = self->buffer.buf;
+		self->input.size = self->buffer.len;
+		self->input.pos = 0;
+	}
+
+	return 1;
+}
+
+int compress_input(ZstdCompressionReader* self, ZSTD_outBuffer* output) {
+	size_t oldPos;
+	size_t zresult;
+
+	/* If we have data left over, consume it. */
+	if (self->input.pos < self->input.size) {
+		oldPos = output->pos;
+
+		Py_BEGIN_ALLOW_THREADS
+		zresult = ZSTD_compressStream2(self->compressor->cctx,
+		    output, &self->input, ZSTD_e_continue);
+		Py_END_ALLOW_THREADS
+
+		self->bytesCompressed += output->pos - oldPos;
+
+		/* Input exhausted. Clear out state tracking. */
+		if (self->input.pos == self->input.size) {
+			memset(&self->input, 0, sizeof(self->input));
+			Py_CLEAR(self->readResult);
+
+			if (self->buffer.buf) {
+				self->finishedInput = 1;
+			}
+		}
+
+		if (ZSTD_isError(zresult)) {
+			PyErr_Format(ZstdError, "zstd compress error: %s", ZSTD_getErrorName(zresult));
+			return -1;
+		}
+	}
+
+	if (output->pos && output->pos == output->size) {
+		return 1;
+	}
+	else {
+		return 0;
+	}
+}
+
 static PyObject* reader_read(ZstdCompressionReader* self, PyObject* args, PyObject* kwargs) {
 	static char* kwlist[] = {
 		"size",
@@ -140,25 +230,30 @@
 	Py_ssize_t resultSize;
 	size_t zresult;
 	size_t oldPos;
+	int readResult, compressResult;
 
 	if (self->closed) {
 		PyErr_SetString(PyExc_ValueError, "stream is closed");
 		return NULL;
 	}
 
-	if (self->finishedOutput) {
-		return PyBytes_FromStringAndSize("", 0);
-	}
-
-	if (!PyArg_ParseTupleAndKeywords(args, kwargs, "n", kwlist, &size)) {
+	if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|n", kwlist, &size)) {
 		return NULL;
 	}
 
-	if (size < 1) {
-		PyErr_SetString(PyExc_ValueError, "cannot read negative or size 0 amounts");
+	if (size < -1) {
+		PyErr_SetString(PyExc_ValueError, "cannot read negative amounts less than -1");
 		return NULL;
 	}
 
+	if (size == -1) {
+		return PyObject_CallMethod((PyObject*)self, "readall", NULL);
+	}
+
+	if (self->finishedOutput || size == 0) {
+		return PyBytes_FromStringAndSize("", 0);
+	}
+
 	result = PyBytes_FromStringAndSize(NULL, size);
 	if (NULL == result) {
 		return NULL;
@@ -172,86 +267,34 @@
 
 readinput:
 
-	/* If we have data left over, consume it. */
-	if (self->input.pos < self->input.size) {
-		oldPos = self->output.pos;
-
-		Py_BEGIN_ALLOW_THREADS
-		zresult = ZSTD_compress_generic(self->compressor->cctx,
-			&self->output, &self->input, ZSTD_e_continue);
-
-		Py_END_ALLOW_THREADS
-
-		self->bytesCompressed += self->output.pos - oldPos;
-
-		/* Input exhausted. Clear out state tracking. */
-		if (self->input.pos == self->input.size) {
-			memset(&self->input, 0, sizeof(self->input));
-			Py_CLEAR(self->readResult);
+	compressResult = compress_input(self, &self->output);
 
-			if (self->buffer.buf) {
-				self->finishedInput = 1;
-			}
-		}
-
-		if (ZSTD_isError(zresult)) {
-			PyErr_Format(ZstdError, "zstd compress error: %s", ZSTD_getErrorName(zresult));
-			return NULL;
-		}
-
-		if (self->output.pos) {
-			/* If no more room in output, emit it. */
-			if (self->output.pos == self->output.size) {
-				memset(&self->output, 0, sizeof(self->output));
-				return result;
-			}
-
-			/*
-			 * There is room in the output. We fall through to below, which will either
-			 * get more input for us or will attempt to end the stream.
-			 */
-		}
-
-		/* Fall through to gather more input. */
+	if (-1 == compressResult) {
+		Py_XDECREF(result);
+		return NULL;
+	}
+	else if (0 == compressResult) {
+		/* There is room in the output. We fall through to below, which will
+		 * either get more input for us or will attempt to end the stream.
+		 */
+	}
+	else if (1 == compressResult) {
+		memset(&self->output, 0, sizeof(self->output));
+		return result;
+	}
+	else {
+		assert(0);
 	}
 
-	if (!self->finishedInput) {
-		if (self->reader) {
-			Py_buffer buffer;
-
-			assert(self->readResult == NULL);
-			self->readResult = PyObject_CallMethod(self->reader, "read",
-				"k", self->readSize);
-			if (self->readResult == NULL) {
-				return NULL;
-			}
-
-			memset(&buffer, 0, sizeof(buffer));
-
-			if (0 != PyObject_GetBuffer(self->readResult, &buffer, PyBUF_CONTIG_RO)) {
-				return NULL;
-			}
+	readResult = read_compressor_input(self);
 
-			/* EOF */
-			if (0 == buffer.len) {
-				self->finishedInput = 1;
-				Py_CLEAR(self->readResult);
-			}
-			else {
-				self->input.src = buffer.buf;
-				self->input.size = buffer.len;
-				self->input.pos = 0;
-			}
-
-			PyBuffer_Release(&buffer);
-		}
-		else {
-			assert(self->buffer.buf);
-
-			self->input.src = self->buffer.buf;
-			self->input.size = self->buffer.len;
-			self->input.pos = 0;
-		}
+	if (-1 == readResult) {
+		return NULL;
+	}
+	else if (0 == readResult) { }
+	else if (1 == readResult) { }
+	else {
+		assert(0);
 	}
 
 	if (self->input.size) {
@@ -261,7 +304,7 @@
 	/* Else EOF */
 	oldPos = self->output.pos;
 
-	zresult = ZSTD_compress_generic(self->compressor->cctx, &self->output,
+	zresult = ZSTD_compressStream2(self->compressor->cctx, &self->output,
 		&self->input, ZSTD_e_end);
 
 	self->bytesCompressed += self->output.pos - oldPos;
@@ -269,6 +312,7 @@
 	if (ZSTD_isError(zresult)) {
 		PyErr_Format(ZstdError, "error ending compression stream: %s",
 			ZSTD_getErrorName(zresult));
+		Py_XDECREF(result);
 		return NULL;
 	}
 
@@ -288,9 +332,394 @@
 	return result;
 }
 
+static PyObject* reader_read1(ZstdCompressionReader* self, PyObject* args, PyObject* kwargs) {
+	static char* kwlist[] = {
+		"size",
+		NULL
+	};
+
+	Py_ssize_t size = -1;
+	PyObject* result = NULL;
+	char* resultBuffer;
+	Py_ssize_t resultSize;
+	ZSTD_outBuffer output;
+	int compressResult;
+	size_t oldPos;
+	size_t zresult;
+
+	if (self->closed) {
+		PyErr_SetString(PyExc_ValueError, "stream is closed");
+		return NULL;
+	}
+
+	if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|n:read1", kwlist, &size)) {
+		return NULL;
+	}
+
+	if (size < -1) {
+		PyErr_SetString(PyExc_ValueError, "cannot read negative amounts less than -1");
+		return NULL;
+	}
+
+	if (self->finishedOutput || size == 0) {
+		return PyBytes_FromStringAndSize("", 0);
+	}
+
+	if (size == -1) {
+		size = ZSTD_CStreamOutSize();
+	}
+
+	result = PyBytes_FromStringAndSize(NULL, size);
+	if (NULL == result) {
+		return NULL;
+	}
+
+	PyBytes_AsStringAndSize(result, &resultBuffer, &resultSize);
+
+	output.dst = resultBuffer;
+	output.size = resultSize;
+	output.pos = 0;
+
+	/* read1() is supposed to use at most 1 read() from the underlying stream.
+	   However, we can't satisfy this requirement with compression because
+	   not every input will generate output. We /could/ flush the compressor,
+	   but this may not be desirable. We allow multiple read() from the
+	   underlying stream. But unlike read(), we return as soon as output data
+	   is available.
+	*/
+
+	compressResult = compress_input(self, &output);
+
+	if (-1 == compressResult) {
+		Py_XDECREF(result);
+		return NULL;
+	}
+	else if (0 == compressResult || 1 == compressResult) { }
+	else {
+		assert(0);
+	}
+
+	if (output.pos) {
+		goto finally;
+	}
+
+	while (!self->finishedInput) {
+		int readResult = read_compressor_input(self);
+
+		if (-1 == readResult) {
+			Py_XDECREF(result);
+			return NULL;
+		}
+		else if (0 == readResult || 1 == readResult) { }
+		else {
+			assert(0);
+		}
+
+		compressResult = compress_input(self, &output);
+
+		if (-1 == compressResult) {
+			Py_XDECREF(result);
+			return NULL;
+		}
+		else if (0 == compressResult || 1 == compressResult) { }
+		else {
+			assert(0);
+		}
+
+		if (output.pos) {
+			goto finally;
+		}
+	}
+
+	/* EOF */
+	oldPos = output.pos;
+
+	zresult = ZSTD_compressStream2(self->compressor->cctx, &output, &self->input,
+        ZSTD_e_end);
+
+	self->bytesCompressed += output.pos - oldPos;
+
+	if (ZSTD_isError(zresult)) {
+		PyErr_Format(ZstdError, "error ending compression stream: %s",
+		    ZSTD_getErrorName(zresult));
+		Py_XDECREF(result);
+		return NULL;
+	}
+
+	if (zresult == 0) {
+		self->finishedOutput = 1;
+	}
+
+finally:
+	if (result) {
+		if (safe_pybytes_resize(&result, output.pos)) {
+			Py_XDECREF(result);
+			return NULL;
+		}
+	}
+
+	return result;
+}
+
 static PyObject* reader_readall(PyObject* self) {
-	PyErr_SetNone(PyExc_NotImplementedError);
-	return NULL;
+	PyObject* chunks = NULL;
+	PyObject* empty = NULL;
+	PyObject* result = NULL;
+
+	/* Our strategy is to collect chunks into a list then join all the
+	 * chunks at the end. We could potentially use e.g. an io.BytesIO. But
+	 * this feels simple enough to implement and avoids potentially expensive
+	 * reallocations of large buffers.
+	 */
+	chunks = PyList_New(0);
+	if (NULL == chunks) {
+		return NULL;
+	}
+
+	while (1) {
+		PyObject* chunk = PyObject_CallMethod(self, "read", "i", 1048576);
+		if (NULL == chunk) {
+			Py_DECREF(chunks);
+			return NULL;
+		}
+
+		if (!PyBytes_Size(chunk)) {
+			Py_DECREF(chunk);
+			break;
+		}
+
+		if (PyList_Append(chunks, chunk)) {
+			Py_DECREF(chunk);
+			Py_DECREF(chunks);
+			return NULL;
+		}
+
+		Py_DECREF(chunk);
+	}
+
+	empty = PyBytes_FromStringAndSize("", 0);
+	if (NULL == empty) {
+		Py_DECREF(chunks);
+		return NULL;
+	}
+
+	result = PyObject_CallMethod(empty, "join", "O", chunks);
+
+	Py_DECREF(empty);
+	Py_DECREF(chunks);
+
+	return result;
+}
+
+static PyObject* reader_readinto(ZstdCompressionReader* self, PyObject* args) {
+	Py_buffer dest;
+	ZSTD_outBuffer output;
+	int readResult, compressResult;
+	PyObject* result = NULL;
+	size_t zresult;
+	size_t oldPos;
+
+	if (self->closed) {
+		PyErr_SetString(PyExc_ValueError, "stream is closed");
+		return NULL;
+	}
+
+	if (self->finishedOutput) {
+		return PyLong_FromLong(0);
+	}
+
+	if (!PyArg_ParseTuple(args, "w*:readinto", &dest)) {
+		return NULL;
+	}
+
+	if (!PyBuffer_IsContiguous(&dest, 'C') || dest.ndim > 1) {
+		PyErr_SetString(PyExc_ValueError,
+		    "destination buffer should be contiguous and have at most one dimension");
+		goto finally;
+	}
+
+	output.dst = dest.buf;
+	output.size = dest.len;
+	output.pos = 0;
+
+	compressResult = compress_input(self, &output);
+
+	if (-1 == compressResult) {
+		goto finally;
+	}
+	else if (0 == compressResult) {	}
+	else if (1 == compressResult) {
+		result = PyLong_FromSize_t(output.pos);
+		goto finally;
+	}
+	else {
+		assert(0);
+	}
+
+	while (!self->finishedInput) {
+		readResult = read_compressor_input(self);
+
+		if (-1 == readResult) {
+			goto finally;
+		}
+		else if (0 == readResult || 1 == readResult) {}
+		else {
+			assert(0);
+		}
+
+		compressResult = compress_input(self, &output);
+
+		if (-1 == compressResult) {
+			goto finally;
+		}
+		else if (0 == compressResult) { }
+		else if (1 == compressResult) {
+			result = PyLong_FromSize_t(output.pos);
+			goto finally;
+		}
+		else {
+			assert(0);
+		}
+	}
+
+	/* EOF */
+	oldPos = output.pos;
+
+	zresult = ZSTD_compressStream2(self->compressor->cctx, &output, &self->input,
+	    ZSTD_e_end);
+
+	self->bytesCompressed += output.pos - oldPos;
+
+	if (ZSTD_isError(zresult)) {
+		PyErr_Format(ZstdError, "error ending compression stream: %s",
+		    ZSTD_getErrorName(zresult));
+		goto finally;
+	}
+
+	assert(output.pos);
+
+	if (0 == zresult) {
+		self->finishedOutput = 1;
+	}
+
+	result = PyLong_FromSize_t(output.pos);
+
+finally:
+	PyBuffer_Release(&dest);
+
+	return result;
+}
+
+static PyObject* reader_readinto1(ZstdCompressionReader* self, PyObject* args) {
+	Py_buffer dest;
+	PyObject* result = NULL;
+	ZSTD_outBuffer output;
+	int compressResult;
+	size_t oldPos;
+	size_t zresult;
+
+	if (self->closed) {
+		PyErr_SetString(PyExc_ValueError, "stream is closed");
+		return NULL;
+	}
+
+	if (self->finishedOutput) {
+		return PyLong_FromLong(0);
+	}
+
+	if (!PyArg_ParseTuple(args, "w*:readinto1", &dest)) {
+		return NULL;
+	}
+
+	if (!PyBuffer_IsContiguous(&dest, 'C') || dest.ndim > 1) {
+		PyErr_SetString(PyExc_ValueError,
+		    "destination buffer should be contiguous and have at most one dimension");
+		goto finally;
+	}
+
+	output.dst = dest.buf;
+	output.size = dest.len;
+	output.pos = 0;
+
+	compressResult = compress_input(self, &output);
+
+	if (-1 == compressResult) {
+		goto finally;
+	}
+	else if (0 == compressResult || 1 == compressResult) { }
+	else {
+		assert(0);
+	}
+
+	if (output.pos) {
+		result = PyLong_FromSize_t(output.pos);
+		goto finally;
+	}
+
+	while (!self->finishedInput) {
+		int readResult = read_compressor_input(self);
+
+		if (-1 == readResult) {
+			goto finally;
+		}
+		else if (0 == readResult || 1 == readResult) { }
+		else {
+			assert(0);
+		}
+
+		compressResult = compress_input(self, &output);
+
+		if (-1 == compressResult) {
+			goto finally;
+		}
+		else if (0 == compressResult) { }
+		else if (1 == compressResult) {
+			result = PyLong_FromSize_t(output.pos);
+			goto finally;
+		}
+		else {
+			assert(0);
+		}
+
+		/* If we produced output and we're not done with input, emit
+		 * that output now, as we've hit restrictions of read1().
+		 */
+		if (output.pos && !self->finishedInput) {
+			result = PyLong_FromSize_t(output.pos);
+			goto finally;
+		}
+
+		/* Otherwise we either have no output or we've exhausted the
+		 * input. Either we try to get more input or we fall through
+		 * to EOF below */
+	}
+
+	/* EOF */
+	oldPos = output.pos;
+
+	zresult = ZSTD_compressStream2(self->compressor->cctx, &output, &self->input,
+	    ZSTD_e_end);
+
+	self->bytesCompressed += output.pos - oldPos;
+
+	if (ZSTD_isError(zresult)) {
+		PyErr_Format(ZstdError, "error ending compression stream: %s",
+		    ZSTD_getErrorName(zresult));
+		goto finally;
+	}
+
+	assert(output.pos);
+
+	if (0 == zresult) {
+		self->finishedOutput = 1;
+	}
+
+	result = PyLong_FromSize_t(output.pos);
+
+finally:
+	PyBuffer_Release(&dest);
+
+	return result;
 }
 
 static PyObject* reader_iter(PyObject* self) {
@@ -315,7 +744,10 @@
 	{ "readable", (PyCFunction)reader_readable, METH_NOARGS,
 	PyDoc_STR("Returns True") },
 	{ "read", (PyCFunction)reader_read, METH_VARARGS | METH_KEYWORDS, PyDoc_STR("read compressed data") },
+	{ "read1", (PyCFunction)reader_read1, METH_VARARGS | METH_KEYWORDS, NULL },
-	{ "readall", (PyCFunction)reader_readall, METH_NOARGS, PyDoc_STR("Not implemented") },
+	{ "readall", (PyCFunction)reader_readall, METH_NOARGS, PyDoc_STR("read all compressed data") },
+	{ "readinto", (PyCFunction)reader_readinto, METH_VARARGS, NULL },
+	{ "readinto1", (PyCFunction)reader_readinto1, METH_VARARGS, NULL },
 	{ "readline", (PyCFunction)reader_readline, METH_VARARGS, PyDoc_STR("Not implemented") },
 	{ "readlines", (PyCFunction)reader_readlines, METH_VARARGS, PyDoc_STR("Not implemented") },
 	{ "seekable", (PyCFunction)reader_seekable, METH_NOARGS,
--- a/contrib/python-zstandard/c-ext/compressionwriter.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/c-ext/compressionwriter.c	Wed Apr 17 13:41:18 2019 -0400
@@ -18,24 +18,23 @@
 	Py_XDECREF(self->compressor);
 	Py_XDECREF(self->writer);
 
+	PyMem_Free(self->output.dst);
+	self->output.dst = NULL;
+
 	PyObject_Del(self);
 }
 
 static PyObject* ZstdCompressionWriter_enter(ZstdCompressionWriter* self) {
-	size_t zresult;
+	if (self->closed) {
+		PyErr_SetString(PyExc_ValueError, "stream is closed");
+		return NULL;
+	}
 
 	if (self->entered) {
 		PyErr_SetString(ZstdError, "cannot __enter__ multiple times");
 		return NULL;
 	}
 
-	zresult = ZSTD_CCtx_setPledgedSrcSize(self->compressor->cctx, self->sourceSize);
-	if (ZSTD_isError(zresult)) {
-		PyErr_Format(ZstdError, "error setting source size: %s",
-			ZSTD_getErrorName(zresult));
-		return NULL;
-	}
-
 	self->entered = 1;
 
 	Py_INCREF(self);
@@ -46,10 +45,6 @@
 	PyObject* exc_type;
 	PyObject* exc_value;
 	PyObject* exc_tb;
-	size_t zresult;
-
-	ZSTD_outBuffer output;
-	PyObject* res;
 
 	if (!PyArg_ParseTuple(args, "OOO:__exit__", &exc_type, &exc_value, &exc_tb)) {
 		return NULL;
@@ -58,46 +53,11 @@
 	self->entered = 0;
 
 	if (exc_type == Py_None && exc_value == Py_None && exc_tb == Py_None) {
-		ZSTD_inBuffer inBuffer;
-
-		inBuffer.src = NULL;
-		inBuffer.size = 0;
-		inBuffer.pos = 0;
-
-		output.dst = PyMem_Malloc(self->outSize);
-		if (!output.dst) {
-			return PyErr_NoMemory();
-		}
-		output.size = self->outSize;
-		output.pos = 0;
+		PyObject* result = PyObject_CallMethod((PyObject*)self, "close", NULL);
 
-		while (1) {
-			zresult = ZSTD_compress_generic(self->compressor->cctx, &output, &inBuffer, ZSTD_e_end);
-			if (ZSTD_isError(zresult)) {
-				PyErr_Format(ZstdError, "error ending compression stream: %s",
-					ZSTD_getErrorName(zresult));
-				PyMem_Free(output.dst);
-				return NULL;
-			}
-
-			if (output.pos) {
-#if PY_MAJOR_VERSION >= 3
-				res = PyObject_CallMethod(self->writer, "write", "y#",
-#else
-				res = PyObject_CallMethod(self->writer, "write", "s#",
-#endif
-					output.dst, output.pos);
-				Py_XDECREF(res);
-			}
-
-			if (!zresult) {
-				break;
-			}
-
-			output.pos = 0;
+		if (NULL == result) {
+			return NULL;
 		}
-
-		PyMem_Free(output.dst);
 	}
 
 	Py_RETURN_FALSE;
@@ -117,7 +77,6 @@
 	Py_buffer source;
 	size_t zresult;
 	ZSTD_inBuffer input;
-	ZSTD_outBuffer output;
 	PyObject* res;
 	Py_ssize_t totalWrite = 0;
 
@@ -130,143 +89,240 @@
 		return NULL;
 	}
 
-	if (!self->entered) {
-		PyErr_SetString(ZstdError, "compress must be called from an active context manager");
-		goto finally;
-	}
-
 	if (!PyBuffer_IsContiguous(&source, 'C') || source.ndim > 1) {
 		PyErr_SetString(PyExc_ValueError,
 			"data buffer should be contiguous and have at most one dimension");
 		goto finally;
 	}
 
-	output.dst = PyMem_Malloc(self->outSize);
-	if (!output.dst) {
-		PyErr_NoMemory();
-		goto finally;
+	if (self->closed) {
+		PyErr_SetString(PyExc_ValueError, "stream is closed");
+		return NULL;
 	}
-	output.size = self->outSize;
-	output.pos = 0;
+
+	self->output.pos = 0;
 
 	input.src = source.buf;
 	input.size = source.len;
 	input.pos = 0;
 
-	while ((ssize_t)input.pos < source.len) {
+	while (input.pos < (size_t)source.len) {
 		Py_BEGIN_ALLOW_THREADS
-		zresult = ZSTD_compress_generic(self->compressor->cctx, &output, &input, ZSTD_e_continue);
+		zresult = ZSTD_compressStream2(self->compressor->cctx, &self->output, &input, ZSTD_e_continue);
 		Py_END_ALLOW_THREADS
 
 		if (ZSTD_isError(zresult)) {
-			PyMem_Free(output.dst);
 			PyErr_Format(ZstdError, "zstd compress error: %s", ZSTD_getErrorName(zresult));
 			goto finally;
 		}
 
 		/* Copy data from output buffer to writer. */
-		if (output.pos) {
+		if (self->output.pos) {
 #if PY_MAJOR_VERSION >= 3
 			res = PyObject_CallMethod(self->writer, "write", "y#",
 #else
 			res = PyObject_CallMethod(self->writer, "write", "s#",
 #endif
-				output.dst, output.pos);
+				self->output.dst, self->output.pos);
 			Py_XDECREF(res);
-			totalWrite += output.pos;
-			self->bytesCompressed += output.pos;
+			totalWrite += self->output.pos;
+			self->bytesCompressed += self->output.pos;
 		}
-		output.pos = 0;
+		self->output.pos = 0;
 	}
 
-	PyMem_Free(output.dst);
-
-	result = PyLong_FromSsize_t(totalWrite);
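+	/* When write_return_read is set, report bytes consumed from the
+	 * source (io.RawIOBase semantics) rather than bytes written downstream. */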
+	if (self->writeReturnRead) {
+		result = PyLong_FromSize_t(input.pos);
+	}
+	else {
+		result = PyLong_FromSsize_t(totalWrite);
+	}
 
 finally:
 	PyBuffer_Release(&source);
 	return result;
 }
 
-static PyObject* ZstdCompressionWriter_flush(ZstdCompressionWriter* self, PyObject* args) {
+static PyObject* ZstdCompressionWriter_flush(ZstdCompressionWriter* self, PyObject* args, PyObject* kwargs) {
+	static char* kwlist[] = {
+		"flush_mode",
+		NULL
+	};
+
 	size_t zresult;
-	ZSTD_outBuffer output;
 	ZSTD_inBuffer input;
 	PyObject* res;
 	Py_ssize_t totalWrite = 0;
+	unsigned flush_mode = 0;
+	ZSTD_EndDirective flush;
 
-	if (!self->entered) {
-		PyErr_SetString(ZstdError, "flush must be called from an active context manager");
+	if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|I:flush",
+		kwlist, &flush_mode)) {
 		return NULL;
 	}
 
+	switch (flush_mode) {
+		case 0:
+			flush = ZSTD_e_flush;
+			break;
+		case 1:
+			flush = ZSTD_e_end;
+			break;
+		default:
+			PyErr_Format(PyExc_ValueError, "unknown flush_mode: %u", flush_mode);
+			return NULL;
+	}
+
+	if (self->closed) {
+		PyErr_SetString(PyExc_ValueError, "stream is closed");
+		return NULL;
+	}
+
+	self->output.pos = 0;
+
 	input.src = NULL;
 	input.size = 0;
 	input.pos = 0;
 
-	output.dst = PyMem_Malloc(self->outSize);
-	if (!output.dst) {
-		return PyErr_NoMemory();
-	}
-	output.size = self->outSize;
-	output.pos = 0;
-
 	while (1) {
 		Py_BEGIN_ALLOW_THREADS
-		zresult = ZSTD_compress_generic(self->compressor->cctx, &output, &input, ZSTD_e_flush);
+		zresult = ZSTD_compressStream2(self->compressor->cctx, &self->output, &input, flush);
 		Py_END_ALLOW_THREADS
 
 		if (ZSTD_isError(zresult)) {
-			PyMem_Free(output.dst);
 			PyErr_Format(ZstdError, "zstd compress error: %s", ZSTD_getErrorName(zresult));
 			return NULL;
 		}
 
 		/* Copy data from output buffer to writer. */
-		if (output.pos) {
+		if (self->output.pos) {
 #if PY_MAJOR_VERSION >= 3
 			res = PyObject_CallMethod(self->writer, "write", "y#",
 #else
 			res = PyObject_CallMethod(self->writer, "write", "s#",
 #endif
-				output.dst, output.pos);
+				self->output.dst, self->output.pos);
 			Py_XDECREF(res);
-			totalWrite += output.pos;
-			self->bytesCompressed += output.pos;
+			totalWrite += self->output.pos;
+			self->bytesCompressed += self->output.pos;
 		}
 
-		output.pos = 0;
+		self->output.pos = 0;
 
 		if (!zresult) {
 			break;
 		}
 	}
 
-	PyMem_Free(output.dst);
+	return PyLong_FromSsize_t(totalWrite);
+}
+
+static PyObject* ZstdCompressionWriter_close(ZstdCompressionWriter* self) {
+	PyObject* result;
+
+	if (self->closed) {
+		Py_RETURN_NONE;
+	}
+
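+	/* Finish the zstd frame via flush(FLUSH_FRAME). */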
+	result = PyObject_CallMethod((PyObject*)self, "flush", "I", 1);
+	self->closed = 1;
+
+	if (NULL == result) {
+		return NULL;
+	}
 
-	return PyLong_FromSsize_t(totalWrite);
+	/* Call close on the underlying stream as well. */
+	if (PyObject_HasAttrString(self->writer, "close")) {
+		return PyObject_CallMethod(self->writer, "close", NULL);
+	}
+
+	Py_RETURN_NONE;
+}
+
+static PyObject* ZstdCompressionWriter_fileno(ZstdCompressionWriter* self) {
+	if (PyObject_HasAttrString(self->writer, "fileno")) {
+		return PyObject_CallMethod(self->writer, "fileno", NULL);
+	}
+	else {
+		PyErr_SetString(PyExc_OSError, "fileno not available on underlying writer");
+		return NULL;
+	}
 }
 
 static PyObject* ZstdCompressionWriter_tell(ZstdCompressionWriter* self) {
 	return PyLong_FromUnsignedLongLong(self->bytesCompressed);
 }
 
+static PyObject* ZstdCompressionWriter_writelines(PyObject* self, PyObject* args) {
+	PyErr_SetNone(PyExc_NotImplementedError);
+	return NULL;
+}
+
+static PyObject* ZstdCompressionWriter_false(PyObject* self, PyObject* args) {
+	Py_RETURN_FALSE;
+}
+
+static PyObject* ZstdCompressionWriter_true(PyObject* self, PyObject* args) {
+	Py_RETURN_TRUE;
+}
+
+static PyObject* ZstdCompressionWriter_unsupported(PyObject* self, PyObject* args, PyObject* kwargs) {
+	PyObject* iomod;
+	PyObject* exc;
+
+	iomod = PyImport_ImportModule("io");
+	if (NULL == iomod) {
+		return NULL;
+	}
+
+	exc = PyObject_GetAttrString(iomod, "UnsupportedOperation");
+	if (NULL == exc) {
+		Py_DECREF(iomod);
+		return NULL;
+	}
+
+	PyErr_SetNone(exc);
+	Py_DECREF(exc);
+	Py_DECREF(iomod);
+
+	return NULL;
+}
+
 static PyMethodDef ZstdCompressionWriter_methods[] = {
 	{ "__enter__", (PyCFunction)ZstdCompressionWriter_enter, METH_NOARGS,
 	PyDoc_STR("Enter a compression context.") },
 	{ "__exit__", (PyCFunction)ZstdCompressionWriter_exit, METH_VARARGS,
 	PyDoc_STR("Exit a compression context.") },
+	{ "close", (PyCFunction)ZstdCompressionWriter_close, METH_NOARGS, NULL },
+	{ "fileno", (PyCFunction)ZstdCompressionWriter_fileno, METH_NOARGS, NULL },
+	{ "isatty", (PyCFunction)ZstdCompressionWriter_false, METH_NOARGS, NULL },
+	{ "readable", (PyCFunction)ZstdCompressionWriter_false, METH_NOARGS, NULL },
+	{ "readline", (PyCFunction)ZstdCompressionWriter_unsupported, METH_VARARGS | METH_KEYWORDS, NULL },
+	{ "readlines", (PyCFunction)ZstdCompressionWriter_unsupported, METH_VARARGS | METH_KEYWORDS, NULL },
+	{ "seek", (PyCFunction)ZstdCompressionWriter_unsupported, METH_VARARGS | METH_KEYWORDS, NULL },
+	{ "seekable", (PyCFunction)ZstdCompressionWriter_false, METH_NOARGS, NULL },
+	{ "truncate", (PyCFunction)ZstdCompressionWriter_unsupported, METH_VARARGS | METH_KEYWORDS, NULL },
+	{ "writable", (PyCFunction)ZstdCompressionWriter_true, METH_NOARGS, NULL },
+	{ "writelines", (PyCFunction)ZstdCompressionWriter_writelines, METH_VARARGS, NULL },
+	{ "read", (PyCFunction)ZstdCompressionWriter_unsupported, METH_VARARGS | METH_KEYWORDS, NULL },
+	{ "readall", (PyCFunction)ZstdCompressionWriter_unsupported, METH_VARARGS | METH_KEYWORDS, NULL },
+	{ "readinto", (PyCFunction)ZstdCompressionWriter_unsupported, METH_VARARGS | METH_KEYWORDS, NULL },
 	{ "memory_size", (PyCFunction)ZstdCompressionWriter_memory_size, METH_NOARGS,
 	PyDoc_STR("Obtain the memory size of the underlying compressor") },
 	{ "write", (PyCFunction)ZstdCompressionWriter_write, METH_VARARGS | METH_KEYWORDS,
 	PyDoc_STR("Compress data") },
-	{ "flush", (PyCFunction)ZstdCompressionWriter_flush, METH_NOARGS,
+	{ "flush", (PyCFunction)ZstdCompressionWriter_flush, METH_VARARGS | METH_KEYWORDS,
 	PyDoc_STR("Flush data and finish a zstd frame") },
 	{ "tell", (PyCFunction)ZstdCompressionWriter_tell, METH_NOARGS,
 	PyDoc_STR("Returns current number of bytes compressed") },
 	{ NULL, NULL }
 };
 
+static PyMemberDef ZstdCompressionWriter_members[] = {
+	{ "closed", T_BOOL, offsetof(ZstdCompressionWriter, closed), READONLY, NULL },
+	{ NULL }
+};
+
 PyTypeObject ZstdCompressionWriterType = {
 	PyVarObject_HEAD_INIT(NULL, 0)
 	"zstd.ZstdCompressionWriter",  /* tp_name */
@@ -296,7 +352,7 @@
 	0,                              /* tp_iter */
 	0,                              /* tp_iternext */
 	ZstdCompressionWriter_methods,  /* tp_methods */
-	0,                              /* tp_members */
+	ZstdCompressionWriter_members,  /* tp_members */
 	0,                              /* tp_getset */
 	0,                              /* tp_base */
 	0,                              /* tp_dict */
--- a/contrib/python-zstandard/c-ext/compressobj.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/c-ext/compressobj.c	Wed Apr 17 13:41:18 2019 -0400
@@ -59,9 +59,9 @@
 	input.size = source.len;
 	input.pos = 0;
 
-	while ((ssize_t)input.pos < source.len) {
+	while (input.pos < (size_t)source.len) {
 		Py_BEGIN_ALLOW_THREADS
-			zresult = ZSTD_compress_generic(self->compressor->cctx, &self->output,
+			zresult = ZSTD_compressStream2(self->compressor->cctx, &self->output,
 				&input, ZSTD_e_continue);
 		Py_END_ALLOW_THREADS
 
@@ -154,7 +154,7 @@
 
 	while (1) {
 		Py_BEGIN_ALLOW_THREADS
-		zresult = ZSTD_compress_generic(self->compressor->cctx, &self->output,
+		zresult = ZSTD_compressStream2(self->compressor->cctx, &self->output,
 			&input, zFlushMode);
 		Py_END_ALLOW_THREADS
 
--- a/contrib/python-zstandard/c-ext/compressor.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/c-ext/compressor.c	Wed Apr 17 13:41:18 2019 -0400
@@ -204,27 +204,27 @@
 		}
 	}
 	else {
-		if (set_parameter(self->params, ZSTD_p_compressionLevel, level)) {
+		if (set_parameter(self->params, ZSTD_c_compressionLevel, level)) {
 			return -1;
 		}
 
-		if (set_parameter(self->params, ZSTD_p_contentSizeFlag,
+		if (set_parameter(self->params, ZSTD_c_contentSizeFlag,
 			writeContentSize ? PyObject_IsTrue(writeContentSize) : 1)) {
 			return -1;
 		}
 
-		if (set_parameter(self->params, ZSTD_p_checksumFlag,
+		if (set_parameter(self->params, ZSTD_c_checksumFlag,
 			writeChecksum ? PyObject_IsTrue(writeChecksum) : 0)) {
 			return -1;
 		}
 
-		if (set_parameter(self->params, ZSTD_p_dictIDFlag,
+		if (set_parameter(self->params, ZSTD_c_dictIDFlag,
 			writeDictID ? PyObject_IsTrue(writeDictID) : 1)) {
 			return -1;
 		}
 
 		if (threads) {
-			if (set_parameter(self->params, ZSTD_p_nbWorkers, threads)) {
+			if (set_parameter(self->params, ZSTD_c_nbWorkers, threads)) {
 				return -1;
 			}
 		}
@@ -344,7 +344,7 @@
 		return NULL;
 	}
 
-	ZSTD_CCtx_reset(self->cctx);
+	ZSTD_CCtx_reset(self->cctx, ZSTD_reset_session_only);
 
 	zresult = ZSTD_CCtx_setPledgedSrcSize(self->cctx, sourceSize);
 	if (ZSTD_isError(zresult)) {
@@ -391,7 +391,7 @@
 
 		while (input.pos < input.size) {
 			Py_BEGIN_ALLOW_THREADS
-			zresult = ZSTD_compress_generic(self->cctx, &output, &input, ZSTD_e_continue);
+			zresult = ZSTD_compressStream2(self->cctx, &output, &input, ZSTD_e_continue);
 			Py_END_ALLOW_THREADS
 
 			if (ZSTD_isError(zresult)) {
@@ -421,7 +421,7 @@
 
 	while (1) {
 		Py_BEGIN_ALLOW_THREADS
-		zresult = ZSTD_compress_generic(self->cctx, &output, &input, ZSTD_e_end);
+		zresult = ZSTD_compressStream2(self->cctx, &output, &input, ZSTD_e_end);
 		Py_END_ALLOW_THREADS
 
 		if (ZSTD_isError(zresult)) {
@@ -517,7 +517,7 @@
 		goto except;
 	}
 
-	ZSTD_CCtx_reset(self->cctx);
+	ZSTD_CCtx_reset(self->cctx, ZSTD_reset_session_only);
 
 	zresult = ZSTD_CCtx_setPledgedSrcSize(self->cctx, sourceSize);
 	if (ZSTD_isError(zresult)) {
@@ -577,7 +577,7 @@
 		goto finally;
 	}
 
-	ZSTD_CCtx_reset(self->cctx);
+	ZSTD_CCtx_reset(self->cctx, ZSTD_reset_session_only);
 
 	destSize = ZSTD_compressBound(source.len);
 	output = PyBytes_FromStringAndSize(NULL, destSize);
@@ -605,7 +605,7 @@
 	/* By avoiding ZSTD_compress(), we don't necessarily write out content
 		size. This means the argument to ZstdCompressor to control frame
 		parameters is honored. */
-	zresult = ZSTD_compress_generic(self->cctx, &outBuffer, &inBuffer, ZSTD_e_end);
+	zresult = ZSTD_compressStream2(self->cctx, &outBuffer, &inBuffer, ZSTD_e_end);
 	Py_END_ALLOW_THREADS
 
 	if (ZSTD_isError(zresult)) {
@@ -651,7 +651,7 @@
 		return NULL;
 	}
 
-	ZSTD_CCtx_reset(self->cctx);
+	ZSTD_CCtx_reset(self->cctx, ZSTD_reset_session_only);
 
 	zresult = ZSTD_CCtx_setPledgedSrcSize(self->cctx, inSize);
 	if (ZSTD_isError(zresult)) {
@@ -740,7 +740,7 @@
 		goto except;
 	}
 
-	ZSTD_CCtx_reset(self->cctx);
+	ZSTD_CCtx_reset(self->cctx, ZSTD_reset_session_only);
 
 	zresult = ZSTD_CCtx_setPledgedSrcSize(self->cctx, sourceSize);
 	if (ZSTD_isError(zresult)) {
@@ -794,16 +794,19 @@
 		"writer",
 		"size",
 		"write_size",
+		"write_return_read",
 		NULL
 	};
 
 	PyObject* writer;
 	ZstdCompressionWriter* result;
+	size_t zresult;
 	unsigned long long sourceSize = ZSTD_CONTENTSIZE_UNKNOWN;
 	size_t outSize = ZSTD_CStreamOutSize();
+	PyObject* writeReturnRead = NULL;
 
-	if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|Kk:stream_writer", kwlist,
-		&writer, &sourceSize, &outSize)) {
+	if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|KkO:stream_writer", kwlist,
+		&writer, &sourceSize, &outSize, &writeReturnRead)) {
 		return NULL;
 	}
 
@@ -812,22 +815,38 @@
 		return NULL;
 	}
 
-	ZSTD_CCtx_reset(self->cctx);
+	ZSTD_CCtx_reset(self->cctx, ZSTD_reset_session_only);
+
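+	/* Pledge the source size up front; __enter__ no longer sets it. */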
+	zresult = ZSTD_CCtx_setPledgedSrcSize(self->cctx, sourceSize);
+	if (ZSTD_isError(zresult)) {
+		PyErr_Format(ZstdError, "error setting source size: %s",
+			ZSTD_getErrorName(zresult));
+		return NULL;
+	}
 
 	result = (ZstdCompressionWriter*)PyObject_CallObject((PyObject*)&ZstdCompressionWriterType, NULL);
 	if (!result) {
 		return NULL;
 	}
 
+	result->output.dst = PyMem_Malloc(outSize);
+	if (!result->output.dst) {
+		Py_DECREF(result);
+		return (ZstdCompressionWriter*)PyErr_NoMemory();
+	}
+
+	result->output.pos = 0;
+	result->output.size = outSize;
+
 	result->compressor = self;
 	Py_INCREF(result->compressor);
 
 	result->writer = writer;
 	Py_INCREF(result->writer);
 
-	result->sourceSize = sourceSize;
 	result->outSize = outSize;
 	result->bytesCompressed = 0;
+	result->writeReturnRead = writeReturnRead ? PyObject_IsTrue(writeReturnRead) : 0;
 
 	return result;
 }
@@ -853,7 +872,7 @@
 		return NULL;
 	}
 
-	ZSTD_CCtx_reset(self->cctx);
+	ZSTD_CCtx_reset(self->cctx, ZSTD_reset_session_only);
 
 	zresult = ZSTD_CCtx_setPledgedSrcSize(self->cctx, sourceSize);
 	if (ZSTD_isError(zresult)) {
@@ -1115,7 +1134,7 @@
 			break;
 		}
 
-		zresult = ZSTD_compress_generic(state->cctx, &opOutBuffer, &opInBuffer, ZSTD_e_end);
+		zresult = ZSTD_compressStream2(state->cctx, &opOutBuffer, &opInBuffer, ZSTD_e_end);
 		if (ZSTD_isError(zresult)) {
 			state->error = WorkerError_zstd;
 			state->zresult = zresult;
--- a/contrib/python-zstandard/c-ext/compressoriterator.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/c-ext/compressoriterator.c	Wed Apr 17 13:41:18 2019 -0400
@@ -57,7 +57,7 @@
 	/* If we have data left in the input, consume it. */
 	if (self->input.pos < self->input.size) {
 		Py_BEGIN_ALLOW_THREADS
-		zresult = ZSTD_compress_generic(self->compressor->cctx, &self->output,
+		zresult = ZSTD_compressStream2(self->compressor->cctx, &self->output,
 			&self->input, ZSTD_e_continue);
 		Py_END_ALLOW_THREADS
 
@@ -127,7 +127,7 @@
 		self->input.size = 0;
 		self->input.pos = 0;
 
-		zresult = ZSTD_compress_generic(self->compressor->cctx, &self->output,
+		zresult = ZSTD_compressStream2(self->compressor->cctx, &self->output,
 			&self->input, ZSTD_e_end);
 		if (ZSTD_isError(zresult)) {
 			PyErr_Format(ZstdError, "error ending compression stream: %s",
@@ -152,7 +152,7 @@
 	self->input.pos = 0;
 
 	Py_BEGIN_ALLOW_THREADS
-	zresult = ZSTD_compress_generic(self->compressor->cctx, &self->output,
+	zresult = ZSTD_compressStream2(self->compressor->cctx, &self->output,
 		&self->input, ZSTD_e_continue);
 	Py_END_ALLOW_THREADS
 
--- a/contrib/python-zstandard/c-ext/constants.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/c-ext/constants.c	Wed Apr 17 13:41:18 2019 -0400
@@ -32,6 +32,9 @@
 	ZstdError = PyErr_NewException("zstd.ZstdError", NULL, NULL);
 	PyModule_AddObject(mod, "ZstdError", ZstdError);
 
+	PyModule_AddIntConstant(mod, "FLUSH_BLOCK", 0);
+	PyModule_AddIntConstant(mod, "FLUSH_FRAME", 1);
+
 	PyModule_AddIntConstant(mod, "COMPRESSOBJ_FLUSH_FINISH", compressorobj_flush_finish);
 	PyModule_AddIntConstant(mod, "COMPRESSOBJ_FLUSH_BLOCK", compressorobj_flush_block);
 
@@ -77,8 +80,11 @@
 	PyModule_AddIntConstant(mod, "HASHLOG3_MAX", ZSTD_HASHLOG3_MAX);
 	PyModule_AddIntConstant(mod, "SEARCHLOG_MIN", ZSTD_SEARCHLOG_MIN);
 	PyModule_AddIntConstant(mod, "SEARCHLOG_MAX", ZSTD_SEARCHLOG_MAX);
-	PyModule_AddIntConstant(mod, "SEARCHLENGTH_MIN", ZSTD_SEARCHLENGTH_MIN);
-	PyModule_AddIntConstant(mod, "SEARCHLENGTH_MAX", ZSTD_SEARCHLENGTH_MAX);
+	PyModule_AddIntConstant(mod, "MINMATCH_MIN", ZSTD_MINMATCH_MIN);
+	PyModule_AddIntConstant(mod, "MINMATCH_MAX", ZSTD_MINMATCH_MAX);
+	/* TODO SEARCHLENGTH_* is deprecated. */
+	PyModule_AddIntConstant(mod, "SEARCHLENGTH_MIN", ZSTD_MINMATCH_MIN);
+	PyModule_AddIntConstant(mod, "SEARCHLENGTH_MAX", ZSTD_MINMATCH_MAX);
 	PyModule_AddIntConstant(mod, "TARGETLENGTH_MIN", ZSTD_TARGETLENGTH_MIN);
 	PyModule_AddIntConstant(mod, "TARGETLENGTH_MAX", ZSTD_TARGETLENGTH_MAX);
 	PyModule_AddIntConstant(mod, "LDM_MINMATCH_MIN", ZSTD_LDM_MINMATCH_MIN);
@@ -93,6 +99,7 @@
 	PyModule_AddIntConstant(mod, "STRATEGY_BTLAZY2", ZSTD_btlazy2);
 	PyModule_AddIntConstant(mod, "STRATEGY_BTOPT", ZSTD_btopt);
 	PyModule_AddIntConstant(mod, "STRATEGY_BTULTRA", ZSTD_btultra);
+	PyModule_AddIntConstant(mod, "STRATEGY_BTULTRA2", ZSTD_btultra2);
 
 	PyModule_AddIntConstant(mod, "DICT_TYPE_AUTO", ZSTD_dct_auto);
 	PyModule_AddIntConstant(mod, "DICT_TYPE_RAWCONTENT", ZSTD_dct_rawContent);
--- a/contrib/python-zstandard/c-ext/decompressionreader.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/c-ext/decompressionreader.c	Wed Apr 17 13:41:18 2019 -0400
@@ -102,6 +102,114 @@
 	Py_RETURN_FALSE;
 }
 
+/**
+ * Read available input.
+ *
+ * Returns 0 if no data was added to input.
+ * Returns 1 if new input data is available.
+ * Returns -1 on error and sets a Python exception as a side-effect.
+ */
+int read_decompressor_input(ZstdDecompressionReader* self) {
+	if (self->finishedInput) {
+		return 0;
+	}
+
+	if (self->input.pos != self->input.size) {
+		return 0;
+	}
+
+	if (self->reader) {
+		Py_buffer buffer;
+
+		assert(self->readResult == NULL);
+		self->readResult = PyObject_CallMethod(self->reader, "read",
+			"k", self->readSize);
+		if (NULL == self->readResult) {
+			return -1;
+		}
+
+		memset(&buffer, 0, sizeof(buffer));
+
+		if (0 != PyObject_GetBuffer(self->readResult, &buffer, PyBUF_CONTIG_RO)) {
+			return -1;
+		}
+
+		/* EOF */
+		if (0 == buffer.len) {
+			self->finishedInput = 1;
+			Py_CLEAR(self->readResult);
+		}
+		else {
+			self->input.src = buffer.buf;
+			self->input.size = buffer.len;
+			self->input.pos = 0;
+		}
+
+		PyBuffer_Release(&buffer);
+	}
+	else {
+		assert(self->buffer.buf);
+		/*
+		 * We should only get here once, since we always exhaust the
+		 * input buffer before reading again.
+		 */
+		assert(self->input.src == NULL);
+
+		self->input.src = self->buffer.buf;
+		self->input.size = self->buffer.len;
+		self->input.pos = 0;
+	}
+
+	return 1;
+}
+
+/**
+ * Decompresses available input into an output buffer.
+ *
+ * Returns 0 if we need more input.
+ * Returns 1 if output buffer should be emitted.
+ * Returns -1 on error and sets a Python exception.
+ */
+int decompress_input(ZstdDecompressionReader* self, ZSTD_outBuffer* output) {
+	size_t zresult;
+
+	if (self->input.pos >= self->input.size) {
+		return 0;
+	}
+
+	Py_BEGIN_ALLOW_THREADS
+	zresult = ZSTD_decompressStream(self->decompressor->dctx, output, &self->input);
+	Py_END_ALLOW_THREADS
+
+	/* Input exhausted. Clear our state tracking. */
+	if (self->input.pos == self->input.size) {
+		memset(&self->input, 0, sizeof(self->input));
+		Py_CLEAR(self->readResult);
+
+		if (self->buffer.buf) {
+			self->finishedInput = 1;
+		}
+	}
+
+	if (ZSTD_isError(zresult)) {
+		PyErr_Format(ZstdError, "zstd decompress error: %s", ZSTD_getErrorName(zresult));
+		return -1;
+	}
+
+	/* We fulfilled the full read request. Signal to emit. */
+	if (output->pos && output->pos == output->size) {
+		return 1;
+	}
+	/* We're at the end of a frame and we aren't allowed to return data
+	   spanning frames. */
+	else if (output->pos && zresult == 0 && !self->readAcrossFrames) {
+		return 1;
+	}
+
+	/* There is more room in the output. Signal to collect more data. */
+	return 0;
+}
+
 static PyObject* reader_read(ZstdDecompressionReader* self, PyObject* args, PyObject* kwargs) {
 	static char* kwlist[] = {
 		"size",
@@ -113,26 +221,30 @@
 	char* resultBuffer;
 	Py_ssize_t resultSize;
 	ZSTD_outBuffer output;
-	size_t zresult;
+	int decompressResult, readResult;
 
 	if (self->closed) {
 		PyErr_SetString(PyExc_ValueError, "stream is closed");
 		return NULL;
 	}
 
-	if (self->finishedOutput) {
-		return PyBytes_FromStringAndSize("", 0);
-	}
-
-	if (!PyArg_ParseTupleAndKeywords(args, kwargs, "n", kwlist, &size)) {
+	if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|n", kwlist, &size)) {
 		return NULL;
 	}
 
-	if (size < 1) {
-		PyErr_SetString(PyExc_ValueError, "cannot read negative or size 0 amounts");
+	if (size < -1) {
+		PyErr_SetString(PyExc_ValueError, "cannot read negative amounts less than -1");
 		return NULL;
 	}
 
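+	/* Per io semantics, read(-1) means read all data until EOF. */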
+	if (size == -1) {
+		return PyObject_CallMethod((PyObject*)self, "readall", NULL);
+	}
+
+	if (self->finishedOutput || size == 0) {
+		return PyBytes_FromStringAndSize("", 0);
+	}
+
 	result = PyBytes_FromStringAndSize(NULL, size);
 	if (NULL == result) {
 		return NULL;
@@ -146,85 +258,38 @@
 
 readinput:
 
-	/* Consume input data left over from last time. */
-	if (self->input.pos < self->input.size) {
-		Py_BEGIN_ALLOW_THREADS
-		zresult = ZSTD_decompress_generic(self->decompressor->dctx,
-			&output, &self->input);
-		Py_END_ALLOW_THREADS
+	decompressResult = decompress_input(self, &output);
 
-		/* Input exhausted. Clear our state tracking. */
-		if (self->input.pos == self->input.size) {
-			memset(&self->input, 0, sizeof(self->input));
-			Py_CLEAR(self->readResult);
+	if (-1 == decompressResult) {
+		Py_XDECREF(result);
+		return NULL;
+	}
+	else if (0 == decompressResult) { }
+	else if (1 == decompressResult) {
+		self->bytesDecompressed += output.pos;
 
-			if (self->buffer.buf) {
-				self->finishedInput = 1;
+		if (output.pos != output.size) {
+			if (safe_pybytes_resize(&result, output.pos)) {
+				Py_XDECREF(result);
+				return NULL;
 			}
 		}
-
-		if (ZSTD_isError(zresult)) {
-			PyErr_Format(ZstdError, "zstd decompress error: %s", ZSTD_getErrorName(zresult));
-			return NULL;
-		}
-		else if (0 == zresult) {
-			self->finishedOutput = 1;
-		}
-
-		/* We fulfilled the full read request. Emit it. */
-		if (output.pos && output.pos == output.size) {
-			self->bytesDecompressed += output.size;
-			return result;
-		}
-
-		/*
-		 * There is more room in the output. Fall through to try to collect
-		 * more data so we can try to fill the output.
-		 */
+		return result;
+	}
+	else {
+		assert(0);
 	}
 
-	if (!self->finishedInput) {
-		if (self->reader) {
-			Py_buffer buffer;
-
-			assert(self->readResult == NULL);
-			self->readResult = PyObject_CallMethod(self->reader, "read",
-				"k", self->readSize);
-			if (NULL == self->readResult) {
-				return NULL;
-			}
-
-			memset(&buffer, 0, sizeof(buffer));
-
-			if (0 != PyObject_GetBuffer(self->readResult, &buffer, PyBUF_CONTIG_RO)) {
-				return NULL;
-			}
+	readResult = read_decompressor_input(self);
 
-			/* EOF */
-			if (0 == buffer.len) {
-				self->finishedInput = 1;
-				Py_CLEAR(self->readResult);
-			}
-			else {
-				self->input.src = buffer.buf;
-				self->input.size = buffer.len;
-				self->input.pos = 0;
-			}
-
-			PyBuffer_Release(&buffer);
-		}
-		else {
-			assert(self->buffer.buf);
-			/*
-			 * We should only get here once since above block will exhaust
-			 * source buffer until finishedInput is set.
-			 */
-			assert(self->input.src == NULL);
-
-			self->input.src = self->buffer.buf;
-			self->input.size = self->buffer.len;
-			self->input.pos = 0;
-		}
+	if (-1 == readResult) {
+		Py_XDECREF(result);
+		return NULL;
+	}
+	else if (0 == readResult) {}
+	else if (1 == readResult) {}
+	else {
+		assert(0);
 	}
 
 	if (self->input.size) {
@@ -242,18 +307,288 @@
 	return result;
 }
 
+static PyObject* reader_read1(ZstdDecompressionReader* self, PyObject* args, PyObject* kwargs) {
+	static char* kwlist[] = {
+		"size",
+		NULL
+	};
+
+	Py_ssize_t size = -1;
+	PyObject* result = NULL;
+	char* resultBuffer;
+	Py_ssize_t resultSize;
+	ZSTD_outBuffer output;
+
+	if (self->closed) {
+		PyErr_SetString(PyExc_ValueError, "stream is closed");
+		return NULL;
+	}
+
+	if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|n", kwlist, &size)) {
+		return NULL;
+	}
+
+	if (size < -1) {
+		PyErr_SetString(PyExc_ValueError, "cannot read negative amounts less than -1");
+		return NULL;
+	}
+
+	if (self->finishedOutput || size == 0) {
+		return PyBytes_FromStringAndSize("", 0);
+	}
+
+	if (size == -1) {
+		size = ZSTD_DStreamOutSize();
+	}
+
+	result = PyBytes_FromStringAndSize(NULL, size);
+	if (NULL == result) {
+		return NULL;
+	}
+
+	PyBytes_AsStringAndSize(result, &resultBuffer, &resultSize);
+
+	output.dst = resultBuffer;
+	output.size = resultSize;
+	output.pos = 0;
+
+	/* read1() is supposed to use at most 1 read() from the underlying stream.
+	 * However, we can't satisfy this requirement with decompression due to the
+	 * nature of how decompression works. Our strategy is to read + decompress
+	 * until we get any output, at which point we return. This satisfies the
+	 * intent of the read1() API to limit read operations.
+	 */
+	while (!self->finishedInput) {
+		int readResult, decompressResult;
+
+		readResult = read_decompressor_input(self);
+		if (-1 == readResult) {
+			Py_XDECREF(result);
+			return NULL;
+		}
+		else if (0 == readResult || 1 == readResult) { }
+		else {
+			assert(0);
+		}
+
+		decompressResult = decompress_input(self, &output);
+
+		if (-1 == decompressResult) {
+			Py_XDECREF(result);
+			return NULL;
+		}
+		else if (0 == decompressResult || 1 == decompressResult) { }
+		else {
+			assert(0);
+		}
+
+		if (output.pos) {
+			break;
+		}
+	}
+
+	self->bytesDecompressed += output.pos;
+	if (safe_pybytes_resize(&result, output.pos)) {
+		Py_XDECREF(result);
+		return NULL;
+	}
+
+	return result;
+}
+
+static PyObject* reader_readinto(ZstdDecompressionReader* self, PyObject* args) {
+	Py_buffer dest;
+	ZSTD_outBuffer output;
+	int decompressResult, readResult;
+	PyObject* result = NULL;
+
+	if (self->closed) {
+		PyErr_SetString(PyExc_ValueError, "stream is closed");
+		return NULL;
+	}
+
+	if (self->finishedOutput) {
+		return PyLong_FromLong(0);
+	}
+
+	if (!PyArg_ParseTuple(args, "w*:readinto", &dest)) {
+		return NULL;
+	}
+
+	if (!PyBuffer_IsContiguous(&dest, 'C') || dest.ndim > 1) {
+		PyErr_SetString(PyExc_ValueError,
+			"destination buffer should be contiguous and have at most one dimension");
+		goto finally;
+	}
+
+	output.dst = dest.buf;
+	output.size = dest.len;
+	output.pos = 0;
+
+readinput:
+
+	decompressResult = decompress_input(self, &output);
+
+	if (-1 == decompressResult) {
+		goto finally;
+	}
+	else if (0 == decompressResult) { }
+	else if (1 == decompressResult) {
+		self->bytesDecompressed += output.pos;
+		result = PyLong_FromSize_t(output.pos);
+		goto finally;
+	}
+	else {
+		assert(0);
+	}
+
+	readResult = read_decompressor_input(self);
+
+	if (-1 == readResult) {
+		goto finally;
+	}
+	else if (0 == readResult) {}
+	else if (1 == readResult) {}
+	else {
+		assert(0);
+	}
+
+	if (self->input.size) {
+		goto readinput;
+	}
+
+	/* EOF */
+	self->bytesDecompressed += output.pos;
+	result = PyLong_FromSize_t(output.pos);
+
+finally:
+	PyBuffer_Release(&dest);
+
+	return result;
+}
+
+static PyObject* reader_readinto1(ZstdDecompressionReader* self, PyObject* args) {
+	Py_buffer dest;
+	ZSTD_outBuffer output;
+	PyObject* result = NULL;
+
+	if (self->closed) {
+		PyErr_SetString(PyExc_ValueError, "stream is closed");
+		return NULL;
+	}
+
+	if (self->finishedOutput) {
+		return PyLong_FromLong(0);
+	}
+
+	if (!PyArg_ParseTuple(args, "w*:readinto1", &dest)) {
+		return NULL;
+	}
+
+	if (!PyBuffer_IsContiguous(&dest, 'C') || dest.ndim > 1) {
+		PyErr_SetString(PyExc_ValueError,
+			"destination buffer should be contiguous and have at most one dimension");
+		goto finally;
+	}
+
+	output.dst = dest.buf;
+	output.size = dest.len;
+	output.pos = 0;
+
+	while (!self->finishedInput && !self->finishedOutput) {
+		int decompressResult, readResult;
+
+		readResult = read_decompressor_input(self);
+
+		if (-1 == readResult) {
+			goto finally;
+		}
+		else if (0 == readResult || 1 == readResult) {}
+		else {
+			assert(0);
+		}
+
+		decompressResult = decompress_input(self, &output);
+
+		if (-1 == decompressResult) {
+			goto finally;
+		}
+		else if (0 == decompressResult || 1 == decompressResult) {}
+		else {
+			assert(0);
+		}
+
+		if (output.pos) {
+			break;
+		}
+	}
+
+	self->bytesDecompressed += output.pos;
+	result = PyLong_FromSize_t(output.pos);
+
+finally:
+	PyBuffer_Release(&dest);
+
+	return result;
+}
+
 static PyObject* reader_readall(PyObject* self) {
-	PyErr_SetNone(PyExc_NotImplementedError);
-	return NULL;
+	PyObject* chunks = NULL;
+	PyObject* empty = NULL;
+	PyObject* result = NULL;
+
+	/* Our strategy is to collect chunks into a list then join all the
+	 * chunks at the end. We could potentially use e.g. an io.BytesIO. But
+	 * this feels simple enough to implement and avoids potentially expensive
+	 * reallocations of large buffers.
+	 */
+	chunks = PyList_New(0);
+	if (NULL == chunks) {
+		return NULL;
+	}
+
+	while (1) {
+		PyObject* chunk = PyObject_CallMethod(self, "read", "i", 1048576);
+		if (NULL == chunk) {
+			Py_DECREF(chunks);
+			return NULL;
+		}
+
+		if (!PyBytes_Size(chunk)) {
+			Py_DECREF(chunk);
+			break;
+		}
+
+		if (PyList_Append(chunks, chunk)) {
+			Py_DECREF(chunk);
+			Py_DECREF(chunks);
+			return NULL;
+		}
+
+		Py_DECREF(chunk);
+	}
+
+	empty = PyBytes_FromStringAndSize("", 0);
+	if (NULL == empty) {
+		Py_DECREF(chunks);
+		return NULL;
+	}
+
+	result = PyObject_CallMethod(empty, "join", "O", chunks);
+
+	Py_DECREF(empty);
+	Py_DECREF(chunks);
+
+	return result;
 }
 
 static PyObject* reader_readline(PyObject* self) {
-	PyErr_SetNone(PyExc_NotImplementedError);
+	set_unsupported_operation();
 	return NULL;
 }
 
 static PyObject* reader_readlines(PyObject* self) {
-	PyErr_SetNone(PyExc_NotImplementedError);
+	set_unsupported_operation();
 	return NULL;
 }
 
@@ -345,12 +680,12 @@
 }
 
 static PyObject* reader_iter(PyObject* self) {
-	PyErr_SetNone(PyExc_NotImplementedError);
+	set_unsupported_operation();
 	return NULL;
 }
 
 static PyObject* reader_iternext(PyObject* self) {
-	PyErr_SetNone(PyExc_NotImplementedError);
+	set_unsupported_operation();
 	return NULL;
 }
 
@@ -367,6 +702,10 @@
 	PyDoc_STR("Returns True") },
 	{ "read", (PyCFunction)reader_read, METH_VARARGS | METH_KEYWORDS,
-	PyDoc_STR("read compressed data") },
+	PyDoc_STR("read decompressed data") },
+	{ "read1", (PyCFunction)reader_read1, METH_VARARGS | METH_KEYWORDS,
+	PyDoc_STR("read decompressed data") },
+	{ "readinto", (PyCFunction)reader_readinto, METH_VARARGS, NULL },
+	{ "readinto1", (PyCFunction)reader_readinto1, METH_VARARGS, NULL },
-	{ "readall", (PyCFunction)reader_readall, METH_NOARGS, PyDoc_STR("Not implemented") },
+	{ "readall", (PyCFunction)reader_readall, METH_NOARGS, PyDoc_STR("read all decompressed data") },
 	{ "readline", (PyCFunction)reader_readline, METH_NOARGS, PyDoc_STR("Not implemented") },
 	{ "readlines", (PyCFunction)reader_readlines, METH_NOARGS, PyDoc_STR("Not implemented") },
--- a/contrib/python-zstandard/c-ext/decompressionwriter.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/c-ext/decompressionwriter.c	Wed Apr 17 13:41:18 2019 -0400
@@ -22,12 +22,13 @@
 }
 
 static PyObject* ZstdDecompressionWriter_enter(ZstdDecompressionWriter* self) {
-	if (self->entered) {
-		PyErr_SetString(ZstdError, "cannot __enter__ multiple times");
+	if (self->closed) {
+		PyErr_SetString(PyExc_ValueError, "stream is closed");
 		return NULL;
 	}
 
-	if (ensure_dctx(self->decompressor, 1)) {
+	if (self->entered) {
+		PyErr_SetString(ZstdError, "cannot __enter__ multiple times");
 		return NULL;
 	}
 
@@ -40,6 +41,10 @@
 static PyObject* ZstdDecompressionWriter_exit(ZstdDecompressionWriter* self, PyObject* args) {
 	self->entered = 0;
 
+	if (NULL == PyObject_CallMethod((PyObject*)self, "close", NULL)) {
+		return NULL;
+	}
+
 	Py_RETURN_FALSE;
 }
 
@@ -76,9 +81,9 @@
 		goto finally;
 	}
 
-	if (!self->entered) {
-		PyErr_SetString(ZstdError, "write must be called from an active context manager");
-		goto finally;
+	if (self->closed) {
+		PyErr_SetString(PyExc_ValueError, "stream is closed");
+		return NULL;
 	}
 
 	output.dst = PyMem_Malloc(self->outSize);
@@ -93,9 +98,9 @@
 	input.size = source.len;
 	input.pos = 0;
 
-	while ((ssize_t)input.pos < source.len) {
+	while (input.pos < (size_t)source.len) {
 		Py_BEGIN_ALLOW_THREADS
-		zresult = ZSTD_decompress_generic(self->decompressor->dctx, &output, &input);
+		zresult = ZSTD_decompressStream(self->decompressor->dctx, &output, &input);
 		Py_END_ALLOW_THREADS
 
 		if (ZSTD_isError(zresult)) {
@@ -120,13 +125,94 @@
 
 	PyMem_Free(output.dst);
 
-	result = PyLong_FromSsize_t(totalWrite);
+	if (self->writeReturnRead) {
+		result = PyLong_FromSize_t(input.pos);
+	}
+	else {
+		result = PyLong_FromSsize_t(totalWrite);
+	}
 
 finally:
 	PyBuffer_Release(&source);
 	return result;
 }
 
+static PyObject* ZstdDecompressionWriter_close(ZstdDecompressionWriter* self) {
+	PyObject* result;
+
+	if (self->closed) {
+		Py_RETURN_NONE;
+	}
+
+	result = PyObject_CallMethod((PyObject*)self, "flush", NULL);
+	self->closed = 1;
+
+	if (NULL == result) {
+		return NULL;
+	}
+
+	/* Call close on the underlying stream as well. */
+	if (PyObject_HasAttrString(self->writer, "close")) {
+		return PyObject_CallMethod(self->writer, "close", NULL);
+	}
+
+	Py_RETURN_NONE;
+}
+
+static PyObject* ZstdDecompressionWriter_fileno(ZstdDecompressionWriter* self) {
+	if (PyObject_HasAttrString(self->writer, "fileno")) {
+		return PyObject_CallMethod(self->writer, "fileno", NULL);
+	}
+	else {
+		PyErr_SetString(PyExc_OSError, "fileno not available on underlying writer");
+		return NULL;
+	}
+}
+
+static PyObject* ZstdDecompressionWriter_flush(ZstdDecompressionWriter* self) {
+	if (self->closed) {
+		PyErr_SetString(PyExc_ValueError, "stream is closed");
+		return NULL;
+	}
+
+	if (PyObject_HasAttrString(self->writer, "flush")) {
+		return PyObject_CallMethod(self->writer, "flush", NULL);
+	}
+	else {
+		Py_RETURN_NONE;
+	}
+}
+
+static PyObject* ZstdDecompressionWriter_false(PyObject* self, PyObject* args) {
+	Py_RETURN_FALSE;
+}
+
+static PyObject* ZstdDecompressionWriter_true(PyObject* self, PyObject* args) {
+	Py_RETURN_TRUE;
+}
+
+static PyObject* ZstdDecompressionWriter_unsupported(PyObject* self, PyObject* args, PyObject* kwargs) {
+	PyObject* iomod;
+	PyObject* exc;
+
+	iomod = PyImport_ImportModule("io");
+	if (NULL == iomod) {
+		return NULL;
+	}
+
+	exc = PyObject_GetAttrString(iomod, "UnsupportedOperation");
+	if (NULL == exc) {
+		Py_DECREF(iomod);
+		return NULL;
+	}
+
+	PyErr_SetNone(exc);
+	Py_DECREF(exc);
+	Py_DECREF(iomod);
+
+	return NULL;
+}
+
 static PyMethodDef ZstdDecompressionWriter_methods[] = {
 	{ "__enter__", (PyCFunction)ZstdDecompressionWriter_enter, METH_NOARGS,
 	PyDoc_STR("Enter a decompression context.") },
@@ -134,11 +220,32 @@
 	PyDoc_STR("Exit a decompression context.") },
 	{ "memory_size", (PyCFunction)ZstdDecompressionWriter_memory_size, METH_NOARGS,
 	PyDoc_STR("Obtain the memory size in bytes of the underlying decompressor.") },
+	{ "close", (PyCFunction)ZstdDecompressionWriter_close, METH_NOARGS, NULL },
+	{ "fileno", (PyCFunction)ZstdDecompressionWriter_fileno, METH_NOARGS, NULL },
+	{ "flush", (PyCFunction)ZstdDecompressionWriter_flush, METH_NOARGS, NULL },
+	{ "isatty", (PyCFunction)ZstdDecompressionWriter_false, METH_NOARGS, NULL },
+	{ "readable", (PyCFunction)ZstdDecompressionWriter_false, METH_NOARGS, NULL },
+	{ "readline", (PyCFunction)ZstdDecompressionWriter_unsupported, METH_VARARGS | METH_KEYWORDS, NULL },
+	{ "readlines", (PyCFunction)ZstdDecompressionWriter_unsupported, METH_VARARGS | METH_KEYWORDS, NULL },
+	{ "seek", (PyCFunction)ZstdDecompressionWriter_unsupported, METH_VARARGS | METH_KEYWORDS, NULL },
+	{ "seekable", ZstdDecompressionWriter_false, METH_NOARGS, NULL },
+	{ "tell", (PyCFunction)ZstdDecompressionWriter_unsupported, METH_VARARGS | METH_KEYWORDS, NULL },
+	{ "truncate", (PyCFunction)ZstdDecompressionWriter_unsupported, METH_VARARGS | METH_KEYWORDS, NULL },
+	{ "writable", (PyCFunction)ZstdDecompressionWriter_true, METH_NOARGS, NULL },
+	{ "writelines", (PyCFunction)ZstdDecompressionWriter_unsupported, METH_VARARGS | METH_KEYWORDS, NULL },
+	{ "read", (PyCFunction)ZstdDecompressionWriter_unsupported, METH_VARARGS | METH_KEYWORDS, NULL },
+	{ "readall", (PyCFunction)ZstdDecompressionWriter_unsupported, METH_VARARGS | METH_KEYWORDS, NULL },
+	{ "readinto", (PyCFunction)ZstdDecompressionWriter_unsupported, METH_VARARGS | METH_KEYWORDS, NULL },
 	{ "write", (PyCFunction)ZstdDecompressionWriter_write, METH_VARARGS | METH_KEYWORDS,
-	PyDoc_STR("Compress data") },
+	PyDoc_STR("Decompress data") },
 	{ NULL, NULL }
 };
 
+static PyMemberDef ZstdDecompressionWriter_members[] = {
+	{ "closed", T_BOOL, offsetof(ZstdDecompressionWriter, closed), READONLY, NULL },
+	{ NULL }
+};
+
 PyTypeObject ZstdDecompressionWriterType = {
 	PyVarObject_HEAD_INIT(NULL, 0)
 	"zstd.ZstdDecompressionWriter", /* tp_name */
@@ -168,7 +275,7 @@
 	0,                              /* tp_iter */
 	0,                              /* tp_iternext */
 	ZstdDecompressionWriter_methods,/* tp_methods */
-	0,                              /* tp_members */
+	ZstdDecompressionWriter_members,/* tp_members */
 	0,                              /* tp_getset */
 	0,                              /* tp_base */
 	0,                              /* tp_dict */
--- a/contrib/python-zstandard/c-ext/decompressobj.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/c-ext/decompressobj.c	Wed Apr 17 13:41:18 2019 -0400
@@ -75,7 +75,7 @@
 
 	while (1) {
 		Py_BEGIN_ALLOW_THREADS
-		zresult = ZSTD_decompress_generic(self->decompressor->dctx, &output, &input);
+		zresult = ZSTD_decompressStream(self->decompressor->dctx, &output, &input);
 		Py_END_ALLOW_THREADS
 
 		if (ZSTD_isError(zresult)) {
@@ -130,9 +130,26 @@
 	return result;
 }
 
+static PyObject* DecompressionObj_flush(ZstdDecompressionObj* self, PyObject* args, PyObject* kwargs) {
+	static char* kwlist[] = {
+		"length",
+		NULL
+	};
+
+	PyObject* length = NULL;
+
+	if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|O:flush", kwlist, &length)) {
+		return NULL;
+	}
+
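+	/* No-op: the decompressor buffers no output that needs flushing. The
+	 * optional ``length`` argument mirrors zlib's decompressobj.flush(). */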
+	Py_RETURN_NONE;
+}
+
 static PyMethodDef DecompressionObj_methods[] = {
 	{ "decompress", (PyCFunction)DecompressionObj_decompress,
 	  METH_VARARGS | METH_KEYWORDS, PyDoc_STR("decompress data") },
+	{ "flush", (PyCFunction)DecompressionObj_flush,
+	  METH_VARARGS | METH_KEYWORDS, PyDoc_STR("no-op") },
 	{ NULL, NULL }
 };
 
--- a/contrib/python-zstandard/c-ext/decompressor.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/c-ext/decompressor.c	Wed Apr 17 13:41:18 2019 -0400
@@ -17,7 +17,7 @@
 int ensure_dctx(ZstdDecompressor* decompressor, int loadDict) {
 	size_t zresult;
 
-	ZSTD_DCtx_reset(decompressor->dctx);
+	ZSTD_DCtx_reset(decompressor->dctx, ZSTD_reset_session_only);
 
 	if (decompressor->maxWindowSize) {
 		zresult = ZSTD_DCtx_setMaxWindowSize(decompressor->dctx, decompressor->maxWindowSize);
@@ -229,7 +229,7 @@
 
 		while (input.pos < input.size) {
 			Py_BEGIN_ALLOW_THREADS
-			zresult = ZSTD_decompress_generic(self->dctx, &output, &input);
+			zresult = ZSTD_decompressStream(self->dctx, &output, &input);
 			Py_END_ALLOW_THREADS
 
 			if (ZSTD_isError(zresult)) {
@@ -379,7 +379,7 @@
 	inBuffer.pos = 0;
 
 	Py_BEGIN_ALLOW_THREADS
-	zresult = ZSTD_decompress_generic(self->dctx, &outBuffer, &inBuffer);
+	zresult = ZSTD_decompressStream(self->dctx, &outBuffer, &inBuffer);
 	Py_END_ALLOW_THREADS
 
 	if (ZSTD_isError(zresult)) {
@@ -550,28 +550,35 @@
 }
 
 PyDoc_STRVAR(Decompressor_stream_reader__doc__,
-"stream_reader(source, [read_size=default])\n"
+"stream_reader(source, [read_size=default, [read_across_frames=False]])\n"
 "\n"
 "Obtain an object that behaves like an I/O stream that can be used for\n"
 "reading decompressed output from an object.\n"
 "\n"
 "The source object can be any object with a ``read(size)`` method or that\n"
 "conforms to the buffer protocol.\n"
+"\n"
+"``read_across_frames`` controls the behavior of ``read()`` when the end\n"
+"of a zstd frame is reached. When ``True``, ``read()`` can potentially\n"
+"return data belonging to multiple zstd frames. When ``False``, ``read()``\n"
+"will return when the end of a frame is reached.\n"
 );
 
 static ZstdDecompressionReader* Decompressor_stream_reader(ZstdDecompressor* self, PyObject* args, PyObject* kwargs) {
 	static char* kwlist[] = {
 		"source",
 		"read_size",
+		"read_across_frames",
 		NULL
 	};
 
 	PyObject* source;
 	size_t readSize = ZSTD_DStreamInSize();
+	PyObject* readAcrossFrames = NULL;
 	ZstdDecompressionReader* result;
 
-	if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|k:stream_reader", kwlist,
-		&source, &readSize)) {
+	if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|kO:stream_reader", kwlist,
+		&source, &readSize, &readAcrossFrames)) {
 		return NULL;
 	}
 
@@ -604,6 +611,7 @@
 
 	result->decompressor = self;
 	Py_INCREF(self);
+	result->readAcrossFrames = readAcrossFrames ? PyObject_IsTrue(readAcrossFrames) : 0;
 
 	return result;
 }
@@ -625,15 +633,17 @@
 	static char* kwlist[] = {
 		"writer",
 		"write_size",
+		"write_return_read",
 		NULL
 	};
 
 	PyObject* writer;
 	size_t outSize = ZSTD_DStreamOutSize();
+	PyObject* writeReturnRead = NULL;
 	ZstdDecompressionWriter* result;
 
-	if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|k:stream_writer", kwlist,
-		&writer, &outSize)) {
+	if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|kO:stream_writer", kwlist,
+		&writer, &outSize, &writeReturnRead)) {
 		return NULL;
 	}
 
@@ -642,6 +652,10 @@
 		return NULL;
 	}
 
+	if (ensure_dctx(self, 1)) {
+		return NULL;
+	}
+
 	result = (ZstdDecompressionWriter*)PyObject_CallObject((PyObject*)&ZstdDecompressionWriterType, NULL);
 	if (!result) {
 		return NULL;
@@ -654,6 +668,7 @@
 	Py_INCREF(result->writer);
 
 	result->outSize = outSize;
+	result->writeReturnRead = writeReturnRead ? PyObject_IsTrue(writeReturnRead) : 0;
 
 	return result;
 }
@@ -756,7 +771,7 @@
 	inBuffer.pos = 0;
 
 	Py_BEGIN_ALLOW_THREADS
-	zresult = ZSTD_decompress_generic(self->dctx, &outBuffer, &inBuffer);
+	zresult = ZSTD_decompressStream(self->dctx, &outBuffer, &inBuffer);
 	Py_END_ALLOW_THREADS
 	if (ZSTD_isError(zresult)) {
 		PyErr_Format(ZstdError, "could not decompress chunk 0: %s", ZSTD_getErrorName(zresult));
@@ -852,7 +867,7 @@
 			outBuffer.pos = 0;
 
 			Py_BEGIN_ALLOW_THREADS
-			zresult = ZSTD_decompress_generic(self->dctx, &outBuffer, &inBuffer);
+			zresult = ZSTD_decompressStream(self->dctx, &outBuffer, &inBuffer);
 			Py_END_ALLOW_THREADS
 			if (ZSTD_isError(zresult)) {
 				PyErr_Format(ZstdError, "could not decompress chunk %zd: %s",
@@ -892,7 +907,7 @@
 			outBuffer.pos = 0;
 
 			Py_BEGIN_ALLOW_THREADS
-			zresult = ZSTD_decompress_generic(self->dctx, &outBuffer, &inBuffer);
+			zresult = ZSTD_decompressStream(self->dctx, &outBuffer, &inBuffer);
 			Py_END_ALLOW_THREADS
 			if (ZSTD_isError(zresult)) {
 				PyErr_Format(ZstdError, "could not decompress chunk %zd: %s",
@@ -1176,7 +1191,7 @@
 		inBuffer.size = sourceSize;
 		inBuffer.pos = 0;
 
-		zresult = ZSTD_decompress_generic(state->dctx, &outBuffer, &inBuffer);
+		zresult = ZSTD_decompressStream(state->dctx, &outBuffer, &inBuffer);
 		if (ZSTD_isError(zresult)) {
 			state->error = WorkerError_zstd;
 			state->zresult = zresult;
--- a/contrib/python-zstandard/c-ext/decompressoriterator.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/c-ext/decompressoriterator.c	Wed Apr 17 13:41:18 2019 -0400
@@ -57,7 +57,7 @@
 	self->output.pos = 0;
 
 	Py_BEGIN_ALLOW_THREADS
-	zresult = ZSTD_decompress_generic(self->decompressor->dctx, &self->output, &self->input);
+	zresult = ZSTD_decompressStream(self->decompressor->dctx, &self->output, &self->input);
 	Py_END_ALLOW_THREADS
 
 	/* We're done with the pointer. Nullify to prevent anyone from getting a
--- a/contrib/python-zstandard/c-ext/python-zstandard.h	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/c-ext/python-zstandard.h	Wed Apr 17 13:41:18 2019 -0400
@@ -16,7 +16,7 @@
 #include <zdict.h>
 
 /* Remember to change the string in zstandard/__init__ as well */
-#define PYTHON_ZSTANDARD_VERSION "0.10.1"
+#define PYTHON_ZSTANDARD_VERSION "0.11.0"
 
 typedef enum {
 	compressorobj_flush_finish,
@@ -31,27 +31,6 @@
 typedef struct {
 	PyObject_HEAD
 	ZSTD_CCtx_params* params;
-	unsigned format;
-	int compressionLevel;
-	unsigned windowLog;
-	unsigned hashLog;
-	unsigned chainLog;
-	unsigned searchLog;
-	unsigned minMatch;
-	unsigned targetLength;
-	unsigned compressionStrategy;
-	unsigned contentSizeFlag;
-	unsigned checksumFlag;
-	unsigned dictIDFlag;
-	unsigned threads;
-	unsigned jobSize;
-	unsigned overlapSizeLog;
-	unsigned forceMaxWindow;
-	unsigned enableLongDistanceMatching;
-	unsigned ldmHashLog;
-	unsigned ldmMinMatch;
-	unsigned ldmBucketSizeLog;
-	unsigned ldmHashEveryLog;
 } ZstdCompressionParametersObject;
 
 extern PyTypeObject ZstdCompressionParametersType;
@@ -129,9 +108,11 @@
 
 	ZstdCompressor* compressor;
 	PyObject* writer;
-	unsigned long long sourceSize;
+	ZSTD_outBuffer output;
 	size_t outSize;
 	int entered;
+	int closed;
+	int writeReturnRead;
 	unsigned long long bytesCompressed;
 } ZstdCompressionWriter;
 
@@ -235,6 +216,8 @@
 	PyObject* reader;
 	/* Size for read() operations on reader. */
 	size_t readSize;
+	/* Whether a read() can return data spanning multiple zstd frames. */
+	int readAcrossFrames;
 	/* Buffer to read from (if reading from a buffer). */
 	Py_buffer buffer;
 
@@ -267,6 +250,8 @@
 	PyObject* writer;
 	size_t outSize;
 	int entered;
+	int closed;
+	int writeReturnRead;
 } ZstdDecompressionWriter;
 
 extern PyTypeObject ZstdDecompressionWriterType;
@@ -360,8 +345,9 @@
 
 extern PyTypeObject ZstdBufferWithSegmentsCollectionType;
 
-int set_parameter(ZSTD_CCtx_params* params, ZSTD_cParameter param, unsigned value);
+int set_parameter(ZSTD_CCtx_params* params, ZSTD_cParameter param, int value);
 int set_parameters(ZSTD_CCtx_params* params, ZstdCompressionParametersObject* obj);
+int to_cparams(ZstdCompressionParametersObject* params, ZSTD_compressionParameters* cparams);
 FrameParametersObject* get_frame_parameters(PyObject* self, PyObject* args, PyObject* kwargs);
 int ensure_ddict(ZstdCompressionDict* dict);
 int ensure_dctx(ZstdDecompressor* decompressor, int loadDict);
--- a/contrib/python-zstandard/make_cffi.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/make_cffi.py	Wed Apr 17 13:41:18 2019 -0400
@@ -36,7 +36,9 @@
     'compress/zstd_opt.c',
     'compress/zstdmt_compress.c',
     'decompress/huf_decompress.c',
+    'decompress/zstd_ddict.c',
     'decompress/zstd_decompress.c',
+    'decompress/zstd_decompress_block.c',
     'dictBuilder/cover.c',
     'dictBuilder/fastcover.c',
     'dictBuilder/divsufsort.c',
--- a/contrib/python-zstandard/setup.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/setup.py	Wed Apr 17 13:41:18 2019 -0400
@@ -5,12 +5,32 @@
 # This software may be modified and distributed under the terms
 # of the BSD license. See the LICENSE file for details.
 
+from __future__ import print_function
+
+from distutils.version import LooseVersion
 import os
 import sys
 from setuptools import setup
 
+# Need change in 1.10 for ffi.from_buffer() to handle all buffer types
+# (like memoryview).
+# Need feature in 1.11 for ffi.gc() to declare size of objects so we avoid
+# garbage collection pitfalls.
+MINIMUM_CFFI_VERSION = '1.11'
+
 try:
     import cffi
+
+    # PyPy (and possibly other distros) have CFFI distributed as part of
+    # them. The install_requires for CFFI below won't work. We need to sniff
+    # out the CFFI version here and reject CFFI if it is too old.
+    cffi_version = LooseVersion(cffi.__version__)
+    if cffi_version < LooseVersion(MINIMUM_CFFI_VERSION):
+        print('CFFI %s or newer required (%s found); '
+              'not building CFFI backend' % (MINIMUM_CFFI_VERSION, cffi_version),
+              file=sys.stderr)
+        cffi = None
+
 except ImportError:
     cffi = None
 
@@ -49,12 +69,7 @@
 if cffi:
     import make_cffi
     extensions.append(make_cffi.ffi.distutils_extension())
-
-    # Need change in 1.10 for ffi.from_buffer() to handle all buffer types
-    # (like memoryview).
-    # Need feature in 1.11 for ffi.gc() to declare size of objects so we avoid
-    # garbage collection pitfalls.
-    install_requires.append('cffi>=1.11')
+    install_requires.append('cffi>=%s' % MINIMUM_CFFI_VERSION)
 
 version = None
 
@@ -88,6 +103,7 @@
         'Programming Language :: Python :: 3.4',
         'Programming Language :: Python :: 3.5',
         'Programming Language :: Python :: 3.6',
+        'Programming Language :: Python :: 3.7',
     ],
     keywords='zstandard zstd compression',
     packages=['zstandard'],
--- a/contrib/python-zstandard/setup_zstd.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/setup_zstd.py	Wed Apr 17 13:41:18 2019 -0400
@@ -30,7 +30,9 @@
     'compress/zstd_opt.c',
     'compress/zstdmt_compress.c',
     'decompress/huf_decompress.c',
+    'decompress/zstd_ddict.c',
     'decompress/zstd_decompress.c',
+    'decompress/zstd_decompress_block.c',
     'dictBuilder/cover.c',
     'dictBuilder/divsufsort.c',
     'dictBuilder/fastcover.c',
--- a/contrib/python-zstandard/tests/common.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/tests/common.py	Wed Apr 17 13:41:18 2019 -0400
@@ -79,12 +79,37 @@
     return cls
 
 
-class OpCountingBytesIO(io.BytesIO):
+class NonClosingBytesIO(io.BytesIO):
+    """BytesIO that saves the underlying buffer on close().
+
+    This allows us to access written data after close().
+    """
     def __init__(self, *args, **kwargs):
+        super(NonClosingBytesIO, self).__init__(*args, **kwargs)
+        self._saved_buffer = None
+
+    def close(self):
+        self._saved_buffer = self.getvalue()
+        return super(NonClosingBytesIO, self).close()
+
+    def getvalue(self):
+        if self.closed:
+            return self._saved_buffer
+        else:
+            return super(NonClosingBytesIO, self).getvalue()
+
+
+class OpCountingBytesIO(NonClosingBytesIO):
+    def __init__(self, *args, **kwargs):
+        self._flush_count = 0
         self._read_count = 0
         self._write_count = 0
         return super(OpCountingBytesIO, self).__init__(*args, **kwargs)
 
+    def flush(self):
+        self._flush_count += 1
+        return super(OpCountingBytesIO, self).flush()
+
     def read(self, *args):
         self._read_count += 1
         return super(OpCountingBytesIO, self).read(*args)
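
``NonClosingBytesIO`` exists because the compression and decompression stream
wrappers now close the inner stream when they themselves are closed, and
``io.BytesIO.getvalue()`` raises ``ValueError`` once the object is closed. A
minimal usage sketch (``NonClosingBytesIO`` as defined above)::

    import zstandard as zstd

    buffer = NonClosingBytesIO()
    cctx = zstd.ZstdCompressor()
    with cctx.stream_writer(buffer) as compressor:
        compressor.write(b'data to compress')

    # The context manager exit closed both streams, but the saved copy
    # of the buffer is still accessible.
    frame = buffer.getvalue()
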
@@ -117,6 +142,13 @@
             except OSError:
                 pass
 
+    # Also add some actual random data.
+    _source_files.append(os.urandom(100))
+    _source_files.append(os.urandom(1000))
+    _source_files.append(os.urandom(10000))
+    _source_files.append(os.urandom(100000))
+    _source_files.append(os.urandom(1000000))
+
     return _source_files
 
 
@@ -140,12 +172,14 @@
 
 
 if hypothesis:
-    default_settings = hypothesis.settings()
+    default_settings = hypothesis.settings(deadline=10000)
     hypothesis.settings.register_profile('default', default_settings)
 
-    ci_settings = hypothesis.settings(max_examples=2500,
-                                      max_iterations=2500)
+    ci_settings = hypothesis.settings(deadline=20000, max_examples=1000)
     hypothesis.settings.register_profile('ci', ci_settings)
 
+    expensive_settings = hypothesis.settings(deadline=None, max_examples=10000)
+    hypothesis.settings.register_profile('expensive', expensive_settings)
+
     hypothesis.settings.load_profile(
         os.environ.get('HYPOTHESIS_PROFILE', 'default'))
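
With these profiles registered, the per-example time budget scales from 10
seconds by default up to unlimited under the ``expensive`` profile, and the
active profile is selected via the ``HYPOTHESIS_PROFILE`` environment
variable with no code changes. A minimal sketch of the same mechanism::

    import os

    import hypothesis

    hypothesis.settings.register_profile(
        'expensive', hypothesis.settings(deadline=None, max_examples=10000))

    # Hypothesis pre-registers a 'default' profile, so this is safe when
    # the environment variable is unset.
    hypothesis.settings.load_profile(
        os.environ.get('HYPOTHESIS_PROFILE', 'default'))
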
--- a/contrib/python-zstandard/tests/test_buffer_util.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/tests/test_buffer_util.py	Wed Apr 17 13:41:18 2019 -0400
@@ -8,6 +8,9 @@
 
 class TestBufferWithSegments(unittest.TestCase):
     def test_arguments(self):
+        if not hasattr(zstd, 'BufferWithSegments'):
+            self.skipTest('BufferWithSegments not available')
+
         with self.assertRaises(TypeError):
             zstd.BufferWithSegments()
 
@@ -19,10 +22,16 @@
             zstd.BufferWithSegments(b'foo', b'\x00\x00')
 
     def test_invalid_offset(self):
+        if not hasattr(zstd, 'BufferWithSegments'):
+            self.skipTest('BufferWithSegments not available')
+
         with self.assertRaisesRegexp(ValueError, 'offset within segments array references memory'):
             zstd.BufferWithSegments(b'foo', ss.pack(0, 4))
 
     def test_invalid_getitem(self):
+        if not hasattr(zstd, 'BufferWithSegments'):
+            self.skipTest('BufferWithSegments not available')
+
         b = zstd.BufferWithSegments(b'foo', ss.pack(0, 3))
 
         with self.assertRaisesRegexp(IndexError, 'offset must be non-negative'):
@@ -35,6 +44,9 @@
             test = b[2]
 
     def test_single(self):
+        if not hasattr(zstd, 'BufferWithSegments'):
+            self.skipTest('BufferWithSegments not available')
+
         b = zstd.BufferWithSegments(b'foo', ss.pack(0, 3))
         self.assertEqual(len(b), 1)
         self.assertEqual(b.size, 3)
@@ -45,6 +57,9 @@
         self.assertEqual(b[0].tobytes(), b'foo')
 
     def test_multiple(self):
+        if not hasattr(zstd, 'BufferWithSegments'):
+            self.skipTest('BufferWithSegments not available')
+
         b = zstd.BufferWithSegments(b'foofooxfooxy', b''.join([ss.pack(0, 3),
                                                                ss.pack(3, 4),
                                                                ss.pack(7, 5)]))
@@ -59,10 +74,16 @@
 
 class TestBufferWithSegmentsCollection(unittest.TestCase):
     def test_empty_constructor(self):
+        if not hasattr(zstd, 'BufferWithSegmentsCollection'):
+            self.skipTest('BufferWithSegmentsCollection not available')
+
         with self.assertRaisesRegexp(ValueError, 'must pass at least 1 argument'):
             zstd.BufferWithSegmentsCollection()
 
     def test_argument_validation(self):
+        if not hasattr(zstd, 'BufferWithSegmentsCollection'):
+            self.skipTest('BufferWithSegmentsCollection not available')
+
         with self.assertRaisesRegexp(TypeError, 'arguments must be BufferWithSegments'):
             zstd.BufferWithSegmentsCollection(None)
 
@@ -74,6 +95,9 @@
             zstd.BufferWithSegmentsCollection(zstd.BufferWithSegments(b'', b''))
 
     def test_length(self):
+        if not hasattr(zstd, 'BufferWithSegmentsCollection'):
+            self.skipTest('BufferWithSegmentsCollection not available')
+
         b1 = zstd.BufferWithSegments(b'foo', ss.pack(0, 3))
         b2 = zstd.BufferWithSegments(b'barbaz', b''.join([ss.pack(0, 3),
                                                           ss.pack(3, 3)]))
@@ -91,6 +115,9 @@
         self.assertEqual(c.size(), 9)
 
     def test_getitem(self):
+        if not hasattr(zstd, 'BufferWithSegmentsCollection'):
+            self.skipTest('BufferWithSegmentsCollection not available')
+
         b1 = zstd.BufferWithSegments(b'foo', ss.pack(0, 3))
         b2 = zstd.BufferWithSegments(b'barbaz', b''.join([ss.pack(0, 3),
                                                           ss.pack(3, 3)]))
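
The new guards account for the CFFI backend, which does not implement the
``BufferWithSegments`` types; probing with ``hasattr`` lets the same test
module run against both backends. A minimal sketch of the pattern (the
``ss`` segment format is assumed to match the helper used by these tests)::

    import struct
    import unittest

    import zstandard as zstd

    # 64-bit (offset, length) pairs describing each segment.
    ss = struct.Struct('=QQ')

    class TestOptionalAPI(unittest.TestCase):
        def test_buffer_types(self):
            # Only the C backend provides the buffer types; skip rather
            # than fail when they are absent (e.g. under CFFI).
            if not hasattr(zstd, 'BufferWithSegments'):
                self.skipTest('BufferWithSegments not available')

            b = zstd.BufferWithSegments(b'foo', ss.pack(0, 3))
            self.assertEqual(len(b), 1)
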
--- a/contrib/python-zstandard/tests/test_compressor.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/tests/test_compressor.py	Wed Apr 17 13:41:18 2019 -0400
@@ -1,14 +1,17 @@
 import hashlib
 import io
+import os
 import struct
 import sys
 import tarfile
+import tempfile
 import unittest
 
 import zstandard as zstd
 
 from .common import (
     make_cffi,
+    NonClosingBytesIO,
     OpCountingBytesIO,
 )
 
@@ -272,7 +275,7 @@
 
         params = zstd.get_frame_parameters(result)
         self.assertEqual(params.content_size, zstd.CONTENTSIZE_UNKNOWN)
-        self.assertEqual(params.window_size, 1048576)
+        self.assertEqual(params.window_size, 2097152)
         self.assertEqual(params.dict_id, 0)
         self.assertFalse(params.has_checksum)
 
@@ -321,7 +324,7 @@
         cobj.compress(b'foo')
         cobj.flush()
 
-        with self.assertRaisesRegexp(zstd.ZstdError, 'cannot call compress\(\) after compressor'):
+        with self.assertRaisesRegexp(zstd.ZstdError, r'cannot call compress\(\) after compressor'):
             cobj.compress(b'foo')
 
         with self.assertRaisesRegexp(zstd.ZstdError, 'compressor object already finished'):
@@ -453,7 +456,7 @@
 
         params = zstd.get_frame_parameters(dest.getvalue())
         self.assertEqual(params.content_size, zstd.CONTENTSIZE_UNKNOWN)
-        self.assertEqual(params.window_size, 1048576)
+        self.assertEqual(params.window_size, 2097152)
         self.assertEqual(params.dict_id, 0)
         self.assertFalse(params.has_checksum)
 
@@ -605,10 +608,6 @@
             with self.assertRaises(io.UnsupportedOperation):
                 reader.readlines()
 
-            # This could probably be implemented someday.
-            with self.assertRaises(NotImplementedError):
-                reader.readall()
-
             with self.assertRaises(io.UnsupportedOperation):
                 iter(reader)
 
@@ -644,15 +643,16 @@
             with self.assertRaisesRegexp(ValueError, 'stream is closed'):
                 reader.read(10)
 
-    def test_read_bad_size(self):
+    def test_read_sizes(self):
         cctx = zstd.ZstdCompressor()
+        foo = cctx.compress(b'foo')
 
         with cctx.stream_reader(b'foo') as reader:
-            with self.assertRaisesRegexp(ValueError, 'cannot read negative or size 0 amounts'):
-                reader.read(-1)
+            with self.assertRaisesRegexp(ValueError, 'cannot read negative amounts less than -1'):
+                reader.read(-2)
 
-            with self.assertRaisesRegexp(ValueError, 'cannot read negative or size 0 amounts'):
-                reader.read(0)
+            self.assertEqual(reader.read(0), b'')
+            self.assertEqual(reader.read(), foo)
 
     def test_read_buffer(self):
         cctx = zstd.ZstdCompressor()
@@ -746,11 +746,202 @@
         with cctx.stream_reader(source, size=42):
             pass
 
+    def test_readall(self):
+        cctx = zstd.ZstdCompressor()
+        frame = cctx.compress(b'foo' * 1024)
+
+        reader = cctx.stream_reader(b'foo' * 1024)
+        self.assertEqual(reader.readall(), frame)
+
+    def test_readinto(self):
+        cctx = zstd.ZstdCompressor()
+        foo = cctx.compress(b'foo')
+
+        reader = cctx.stream_reader(b'foo')
+        with self.assertRaises(Exception):
+            reader.readinto(b'foobar')
+
+        # readinto() with sufficiently large destination.
+        b = bytearray(1024)
+        reader = cctx.stream_reader(b'foo')
+        self.assertEqual(reader.readinto(b), len(foo))
+        self.assertEqual(b[0:len(foo)], foo)
+        self.assertEqual(reader.readinto(b), 0)
+        self.assertEqual(b[0:len(foo)], foo)
+
+        # readinto() with small reads.
+        b = bytearray(1024)
+        reader = cctx.stream_reader(b'foo', read_size=1)
+        self.assertEqual(reader.readinto(b), len(foo))
+        self.assertEqual(b[0:len(foo)], foo)
+
+        # Too small destination buffer.
+        b = bytearray(2)
+        reader = cctx.stream_reader(b'foo')
+        self.assertEqual(reader.readinto(b), 2)
+        self.assertEqual(b[:], foo[0:2])
+        self.assertEqual(reader.readinto(b), 2)
+        self.assertEqual(b[:], foo[2:4])
+        self.assertEqual(reader.readinto(b), 2)
+        self.assertEqual(b[:], foo[4:6])
+
+    def test_readinto1(self):
+        cctx = zstd.ZstdCompressor()
+        foo = b''.join(cctx.read_to_iter(io.BytesIO(b'foo')))
+
+        reader = cctx.stream_reader(b'foo')
+        with self.assertRaises(Exception):
+            reader.readinto1(b'foobar')
+
+        b = bytearray(1024)
+        source = OpCountingBytesIO(b'foo')
+        reader = cctx.stream_reader(source)
+        self.assertEqual(reader.readinto1(b), len(foo))
+        self.assertEqual(b[0:len(foo)], foo)
+        self.assertEqual(source._read_count, 2)
+
+        # readinto1() with small reads.
+        b = bytearray(1024)
+        source = OpCountingBytesIO(b'foo')
+        reader = cctx.stream_reader(source, read_size=1)
+        self.assertEqual(reader.readinto1(b), len(foo))
+        self.assertEqual(b[0:len(foo)], foo)
+        self.assertEqual(source._read_count, 4)
+
+    def test_read1(self):
+        cctx = zstd.ZstdCompressor()
+        foo = b''.join(cctx.read_to_iter(io.BytesIO(b'foo')))
+
+        b = OpCountingBytesIO(b'foo')
+        reader = cctx.stream_reader(b)
+
+        self.assertEqual(reader.read1(), foo)
+        self.assertEqual(b._read_count, 2)
+
+        b = OpCountingBytesIO(b'foo')
+        reader = cctx.stream_reader(b)
+
+        self.assertEqual(reader.read1(0), b'')
+        self.assertEqual(reader.read1(2), foo[0:2])
+        self.assertEqual(b._read_count, 2)
+        self.assertEqual(reader.read1(2), foo[2:4])
+        self.assertEqual(reader.read1(1024), foo[4:])
+
 
 @make_cffi
 class TestCompressor_stream_writer(unittest.TestCase):
+    def test_io_api(self):
+        buffer = io.BytesIO()
+        cctx = zstd.ZstdCompressor()
+        writer = cctx.stream_writer(buffer)
+
+        self.assertFalse(writer.isatty())
+        self.assertFalse(writer.readable())
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.readline()
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.readline(42)
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.readline(size=42)
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.readlines()
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.readlines(42)
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.readlines(hint=42)
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.seek(0)
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.seek(10, os.SEEK_SET)
+
+        self.assertFalse(writer.seekable())
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.truncate()
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.truncate(42)
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.truncate(size=42)
+
+        self.assertTrue(writer.writable())
+
+        with self.assertRaises(NotImplementedError):
+            writer.writelines([])
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.read()
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.read(42)
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.read(size=42)
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.readall()
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.readinto(None)
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.fileno()
+
+        self.assertFalse(writer.closed)
+
+    def test_fileno_file(self):
+        with tempfile.TemporaryFile('wb') as tf:
+            cctx = zstd.ZstdCompressor()
+            writer = cctx.stream_writer(tf)
+
+            self.assertEqual(writer.fileno(), tf.fileno())
+
+    def test_close(self):
+        buffer = NonClosingBytesIO()
+        cctx = zstd.ZstdCompressor(level=1)
+        writer = cctx.stream_writer(buffer)
+
+        writer.write(b'foo' * 1024)
+        self.assertFalse(writer.closed)
+        self.assertFalse(buffer.closed)
+        writer.close()
+        self.assertTrue(writer.closed)
+        self.assertTrue(buffer.closed)
+
+        with self.assertRaisesRegexp(ValueError, 'stream is closed'):
+            writer.write(b'foo')
+
+        with self.assertRaisesRegexp(ValueError, 'stream is closed'):
+            writer.flush()
+
+        with self.assertRaisesRegexp(ValueError, 'stream is closed'):
+            with writer:
+                pass
+
+        self.assertEqual(buffer.getvalue(),
+                         b'\x28\xb5\x2f\xfd\x00\x48\x55\x00\x00\x18\x66\x6f'
+                         b'\x6f\x01\x00\xfa\xd3\x77\x43')
+
+        # Context manager exit should close stream.
+        buffer = io.BytesIO()
+        writer = cctx.stream_writer(buffer)
+
+        with writer:
+            writer.write(b'foo')
+
+        self.assertTrue(writer.closed)
+
     def test_empty(self):
-        buffer = io.BytesIO()
+        buffer = NonClosingBytesIO()
         cctx = zstd.ZstdCompressor(level=1, write_content_size=False)
         with cctx.stream_writer(buffer) as compressor:
             compressor.write(b'')
@@ -764,6 +955,25 @@
         self.assertEqual(params.dict_id, 0)
         self.assertFalse(params.has_checksum)
 
+        # Test without context manager.
+        buffer = io.BytesIO()
+        compressor = cctx.stream_writer(buffer)
+        self.assertEqual(compressor.write(b''), 0)
+        self.assertEqual(buffer.getvalue(), b'')
+        self.assertEqual(compressor.flush(zstd.FLUSH_FRAME), 9)
+        result = buffer.getvalue()
+        self.assertEqual(result, b'\x28\xb5\x2f\xfd\x00\x48\x01\x00\x00')
+
+        params = zstd.get_frame_parameters(result)
+        self.assertEqual(params.content_size, zstd.CONTENTSIZE_UNKNOWN)
+        self.assertEqual(params.window_size, 524288)
+        self.assertEqual(params.dict_id, 0)
+        self.assertFalse(params.has_checksum)
+
+        # Test write_return_read=True
+        compressor = cctx.stream_writer(buffer, write_return_read=True)
+        self.assertEqual(compressor.write(b''), 0)
+
     def test_input_types(self):
         expected = b'\x28\xb5\x2f\xfd\x00\x48\x19\x00\x00\x66\x6f\x6f'
         cctx = zstd.ZstdCompressor(level=1)
@@ -778,14 +988,17 @@
         ]
 
         for source in sources:
-            buffer = io.BytesIO()
+            buffer = NonClosingBytesIO()
             with cctx.stream_writer(buffer) as compressor:
                 compressor.write(source)
 
             self.assertEqual(buffer.getvalue(), expected)
 
+            compressor = cctx.stream_writer(buffer, write_return_read=True)
+            self.assertEqual(compressor.write(source), len(source))
+
     def test_multiple_compress(self):
-        buffer = io.BytesIO()
+        buffer = NonClosingBytesIO()
         cctx = zstd.ZstdCompressor(level=5)
         with cctx.stream_writer(buffer) as compressor:
             self.assertEqual(compressor.write(b'foo'), 0)
@@ -794,9 +1007,27 @@
 
         result = buffer.getvalue()
         self.assertEqual(result,
-                         b'\x28\xb5\x2f\xfd\x00\x50\x75\x00\x00\x38\x66\x6f'
+                         b'\x28\xb5\x2f\xfd\x00\x58\x75\x00\x00\x38\x66\x6f'
                          b'\x6f\x62\x61\x72\x78\x01\x00\xfc\xdf\x03\x23')
 
+        # Test without context manager.
+        buffer = io.BytesIO()
+        compressor = cctx.stream_writer(buffer)
+        self.assertEqual(compressor.write(b'foo'), 0)
+        self.assertEqual(compressor.write(b'bar'), 0)
+        self.assertEqual(compressor.write(b'x' * 8192), 0)
+        self.assertEqual(compressor.flush(zstd.FLUSH_FRAME), 23)
+        result = buffer.getvalue()
+        self.assertEqual(result,
+                         b'\x28\xb5\x2f\xfd\x00\x58\x75\x00\x00\x38\x66\x6f'
+                         b'\x6f\x62\x61\x72\x78\x01\x00\xfc\xdf\x03\x23')
+
+        # Test with write_return_read=True.
+        compressor = cctx.stream_writer(buffer, write_return_read=True)
+        self.assertEqual(compressor.write(b'foo'), 3)
+        self.assertEqual(compressor.write(b'barbiz'), 6)
+        self.assertEqual(compressor.write(b'x' * 8192), 8192)
+
     def test_dictionary(self):
         samples = []
         for i in range(128):
@@ -807,9 +1038,9 @@
         d = zstd.train_dictionary(8192, samples)
 
         h = hashlib.sha1(d.as_bytes()).hexdigest()
-        self.assertEqual(h, '2b3b6428da5bf2c9cc9d4bb58ba0bc5990dd0e79')
+        self.assertEqual(h, '88ca0d38332aff379d4ced166a51c280a7679aad')
 
-        buffer = io.BytesIO()
+        buffer = NonClosingBytesIO()
         cctx = zstd.ZstdCompressor(level=9, dict_data=d)
         with cctx.stream_writer(buffer) as compressor:
             self.assertEqual(compressor.write(b'foo'), 0)
@@ -825,7 +1056,7 @@
         self.assertFalse(params.has_checksum)
 
         h = hashlib.sha1(compressed).hexdigest()
-        self.assertEqual(h, '23f88344263678478f5f82298e0a5d1833125786')
+        self.assertEqual(h, '8703b4316f274d26697ea5dd480f29c08e85d940')
 
         source = b'foo' + b'bar' + (b'foo' * 16384)
 
@@ -842,9 +1073,9 @@
             min_match=5,
             search_log=4,
             target_length=10,
-            compression_strategy=zstd.STRATEGY_FAST)
+            strategy=zstd.STRATEGY_FAST)
 
-        buffer = io.BytesIO()
+        buffer = NonClosingBytesIO()
         cctx = zstd.ZstdCompressor(compression_params=params)
         with cctx.stream_writer(buffer) as compressor:
             self.assertEqual(compressor.write(b'foo'), 0)
@@ -863,12 +1094,12 @@
         self.assertEqual(h, '2a8111d72eb5004cdcecbdac37da9f26720d30ef')
 
     def test_write_checksum(self):
-        no_checksum = io.BytesIO()
+        no_checksum = NonClosingBytesIO()
         cctx = zstd.ZstdCompressor(level=1)
         with cctx.stream_writer(no_checksum) as compressor:
             self.assertEqual(compressor.write(b'foobar'), 0)
 
-        with_checksum = io.BytesIO()
+        with_checksum = NonClosingBytesIO()
         cctx = zstd.ZstdCompressor(level=1, write_checksum=True)
         with cctx.stream_writer(with_checksum) as compressor:
             self.assertEqual(compressor.write(b'foobar'), 0)
@@ -886,12 +1117,12 @@
                          len(no_checksum.getvalue()) + 4)
 
     def test_write_content_size(self):
-        no_size = io.BytesIO()
+        no_size = NonClosingBytesIO()
         cctx = zstd.ZstdCompressor(level=1, write_content_size=False)
         with cctx.stream_writer(no_size) as compressor:
             self.assertEqual(compressor.write(b'foobar' * 256), 0)
 
-        with_size = io.BytesIO()
+        with_size = NonClosingBytesIO()
         cctx = zstd.ZstdCompressor(level=1)
         with cctx.stream_writer(with_size) as compressor:
             self.assertEqual(compressor.write(b'foobar' * 256), 0)
@@ -902,7 +1133,7 @@
                          len(no_size.getvalue()))
 
         # Declaring size will write the header.
-        with_size = io.BytesIO()
+        with_size = NonClosingBytesIO()
         with cctx.stream_writer(with_size, size=len(b'foobar' * 256)) as compressor:
             self.assertEqual(compressor.write(b'foobar' * 256), 0)
 
@@ -927,7 +1158,7 @@
 
         d = zstd.train_dictionary(1024, samples)
 
-        with_dict_id = io.BytesIO()
+        with_dict_id = NonClosingBytesIO()
         cctx = zstd.ZstdCompressor(level=1, dict_data=d)
         with cctx.stream_writer(with_dict_id) as compressor:
             self.assertEqual(compressor.write(b'foobarfoobar'), 0)
@@ -935,7 +1166,7 @@
         self.assertEqual(with_dict_id.getvalue()[4:5], b'\x03')
 
         cctx = zstd.ZstdCompressor(level=1, dict_data=d, write_dict_id=False)
-        no_dict_id = io.BytesIO()
+        no_dict_id = NonClosingBytesIO()
         with cctx.stream_writer(no_dict_id) as compressor:
             self.assertEqual(compressor.write(b'foobarfoobar'), 0)
 
@@ -1009,8 +1240,32 @@
         header = trailing[0:3]
         self.assertEqual(header, b'\x01\x00\x00')
 
+    def test_flush_frame(self):
+        cctx = zstd.ZstdCompressor(level=3)
+        dest = OpCountingBytesIO()
+
+        with cctx.stream_writer(dest) as compressor:
+            self.assertEqual(compressor.write(b'foobar' * 8192), 0)
+            self.assertEqual(compressor.flush(zstd.FLUSH_FRAME), 23)
+            compressor.write(b'biz' * 16384)
+
+        self.assertEqual(dest.getvalue(),
+                         # Frame 1.
+                         b'\x28\xb5\x2f\xfd\x00\x58\x75\x00\x00\x30\x66\x6f\x6f'
+                         b'\x62\x61\x72\x01\x00\xf7\xbf\xe8\xa5\x08'
+                         # Frame 2.
+                         b'\x28\xb5\x2f\xfd\x00\x58\x5d\x00\x00\x18\x62\x69\x7a'
+                         b'\x01\x00\xfa\x3f\x75\x37\x04')
+
+    def test_bad_flush_mode(self):
+        cctx = zstd.ZstdCompressor()
+        dest = io.BytesIO()
+        with cctx.stream_writer(dest) as compressor:
+            with self.assertRaisesRegexp(ValueError, 'unknown flush_mode: 42'):
+                compressor.flush(flush_mode=42)
+
     def test_multithreaded(self):
-        dest = io.BytesIO()
+        dest = NonClosingBytesIO()
         cctx = zstd.ZstdCompressor(threads=2)
         with cctx.stream_writer(dest) as compressor:
             compressor.write(b'a' * 1048576)
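
``flush()`` now accepts a mode: ``zstd.FLUSH_BLOCK`` (the default) emits a
zstd block, while ``zstd.FLUSH_FRAME`` ends the current frame, so one writer
can emit several independent frames; the return value is the number of bytes
written to the destination. A minimal sketch::

    import io

    import zstandard as zstd

    dest = io.BytesIO()
    cctx = zstd.ZstdCompressor()
    writer = cctx.stream_writer(dest)

    writer.write(b'first')
    writer.flush(zstd.FLUSH_FRAME)   # frame 1 is now complete in dest
    writer.write(b'second')
    writer.flush(zstd.FLUSH_FRAME)   # frame 2 follows it

    # dest now contains two concatenated zstd frames.
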
@@ -1043,22 +1298,21 @@
             pass
 
     def test_tarfile_compat(self):
-        raise unittest.SkipTest('not yet fully working')
-
-        dest = io.BytesIO()
+        dest = NonClosingBytesIO()
         cctx = zstd.ZstdCompressor()
         with cctx.stream_writer(dest) as compressor:
-            with tarfile.open('tf', mode='w', fileobj=compressor) as tf:
+            with tarfile.open('tf', mode='w|', fileobj=compressor) as tf:
                 tf.add(__file__, 'test_compressor.py')
 
-        dest.seek(0)
+        dest = io.BytesIO(dest.getvalue())
 
         dctx = zstd.ZstdDecompressor()
         with dctx.stream_reader(dest) as reader:
-            with tarfile.open(mode='r:', fileobj=reader) as tf:
+            with tarfile.open(mode='r|', fileobj=reader) as tf:
                 for member in tf:
                     self.assertEqual(member.name, 'test_compressor.py')
 
+
 @make_cffi
 class TestCompressor_read_to_iter(unittest.TestCase):
     def test_type_validation(self):
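
The tarfile fix works because ``stream_writer`` and ``stream_reader`` objects
are not seekable: ``tarfile``'s non-streaming modes (``'w'``, ``'r:'``) rely
on ``seek()``/``tell()`` on the file object, while the streaming modes
(``'w|'``, ``'r|'``) only ever read and write sequentially. A minimal
round-trip sketch (the archive path and member name are illustrative)::

    import tarfile

    import zstandard as zstd

    with open('archive.tar.zst', 'wb') as fh:
        cctx = zstd.ZstdCompressor()
        with cctx.stream_writer(fh) as compressor:
            # 'w|' streams the tar data instead of seeking back to
            # rewrite headers.
            with tarfile.open(mode='w|', fileobj=compressor) as tf:
                tf.add('test_compressor.py')

    with open('archive.tar.zst', 'rb') as fh:
        dctx = zstd.ZstdDecompressor()
        with dctx.stream_reader(fh) as reader:
            with tarfile.open(mode='r|', fileobj=reader) as tf:
                for member in tf:
                    print(member.name)
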
@@ -1192,7 +1446,7 @@
 
         it = chunker.finish()
 
-        self.assertEqual(next(it), b'\x28\xb5\x2f\xfd\x00\x50\x01\x00\x00')
+        self.assertEqual(next(it), b'\x28\xb5\x2f\xfd\x00\x58\x01\x00\x00')
 
         with self.assertRaises(StopIteration):
             next(it)
@@ -1214,7 +1468,7 @@
         it = chunker.finish()
 
         self.assertEqual(next(it),
-                         b'\x28\xb5\x2f\xfd\x00\x50\x7d\x00\x00\x48\x66\x6f'
+                         b'\x28\xb5\x2f\xfd\x00\x58\x7d\x00\x00\x48\x66\x6f'
                          b'\x6f\x62\x61\x72\x62\x61\x7a\x01\x00\xe4\xe4\x8e')
 
         with self.assertRaises(StopIteration):
@@ -1258,7 +1512,7 @@
 
         self.assertEqual(
             b''.join(chunks),
-            b'\x28\xb5\x2f\xfd\x00\x50\x55\x00\x00\x18\x66\x6f\x6f\x01\x00'
+            b'\x28\xb5\x2f\xfd\x00\x58\x55\x00\x00\x18\x66\x6f\x6f\x01\x00'
             b'\xfa\xd3\x77\x43')
 
         dctx = zstd.ZstdDecompressor()
@@ -1283,7 +1537,7 @@
 
             self.assertEqual(list(chunker.compress(source)), [])
             self.assertEqual(list(chunker.finish()), [
-                b'\x28\xb5\x2f\xfd\x00\x50\x19\x00\x00\x66\x6f\x6f'
+                b'\x28\xb5\x2f\xfd\x00\x58\x19\x00\x00\x66\x6f\x6f'
             ])
 
     def test_flush(self):
@@ -1296,7 +1550,7 @@
         chunks1 = list(chunker.flush())
 
         self.assertEqual(chunks1, [
-            b'\x28\xb5\x2f\xfd\x00\x50\x8c\x00\x00\x30\x66\x6f\x6f\x62\x61\x72'
+            b'\x28\xb5\x2f\xfd\x00\x58\x8c\x00\x00\x30\x66\x6f\x6f\x62\x61\x72'
             b'\x02\x00\xfa\x03\xfe\xd0\x9f\xbe\x1b\x02'
         ])
 
@@ -1326,7 +1580,7 @@
 
         with self.assertRaisesRegexp(
                 zstd.ZstdError,
-                'cannot call compress\(\) after compression finished'):
+                r'cannot call compress\(\) after compression finished'):
             list(chunker.compress(b'foo'))
 
     def test_flush_after_finish(self):
@@ -1338,7 +1592,7 @@
 
         with self.assertRaisesRegexp(
                 zstd.ZstdError,
-                'cannot call flush\(\) after compression finished'):
+                r'cannot call flush\(\) after compression finished'):
             list(chunker.flush())
 
     def test_finish_after_finish(self):
@@ -1350,7 +1604,7 @@
 
         with self.assertRaisesRegexp(
                 zstd.ZstdError,
-                'cannot call finish\(\) after compression finished'):
+                r'cannot call finish\(\) after compression finished'):
             list(chunker.finish())
 
 
@@ -1358,6 +1612,9 @@
     def test_invalid_inputs(self):
         cctx = zstd.ZstdCompressor()
 
+        if not hasattr(cctx, 'multi_compress_to_buffer'):
+            self.skipTest('multi_compress_to_buffer not available')
+
         with self.assertRaises(TypeError):
             cctx.multi_compress_to_buffer(True)
 
@@ -1370,6 +1627,9 @@
     def test_empty_input(self):
         cctx = zstd.ZstdCompressor()
 
+        if not hasattr(cctx, 'multi_compress_to_buffer'):
+            self.skipTest('multi_compress_to_buffer not available')
+
         with self.assertRaisesRegexp(ValueError, 'no source elements found'):
             cctx.multi_compress_to_buffer([])
 
@@ -1379,6 +1639,9 @@
     def test_list_input(self):
         cctx = zstd.ZstdCompressor(write_checksum=True)
 
+        if not hasattr(cctx, 'multi_compress_to_buffer'):
+            self.skipTest('multi_compress_to_buffer not available')
+
         original = [b'foo' * 12, b'bar' * 6]
         frames = [cctx.compress(c) for c in original]
         b = cctx.multi_compress_to_buffer(original)
@@ -1394,6 +1657,9 @@
     def test_buffer_with_segments_input(self):
         cctx = zstd.ZstdCompressor(write_checksum=True)
 
+        if not hasattr(cctx, 'multi_compress_to_buffer'):
+            self.skipTest('multi_compress_to_buffer not available')
+
         original = [b'foo' * 4, b'bar' * 6]
         frames = [cctx.compress(c) for c in original]
 
@@ -1412,6 +1678,9 @@
     def test_buffer_with_segments_collection_input(self):
         cctx = zstd.ZstdCompressor(write_checksum=True)
 
+        if not hasattr(cctx, 'multi_compress_to_buffer'):
+            self.skipTest('multi_compress_to_buffer not available')
+
         original = [
             b'foo1',
             b'foo2' * 2,
@@ -1449,6 +1718,9 @@
 
         cctx = zstd.ZstdCompressor(write_checksum=True)
 
+        if not hasattr(cctx, 'multi_compress_to_buffer'):
+            self.skipTest('multi_compress_to_buffer not available')
+
         frames = []
         frames.extend(b'x' * 64 for i in range(256))
         frames.extend(b'y' * 64 for i in range(256))
--- a/contrib/python-zstandard/tests/test_compressor_fuzzing.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/tests/test_compressor_fuzzing.py	Wed Apr 17 13:41:18 2019 -0400
@@ -12,6 +12,7 @@
 
 from . common import (
     make_cffi,
+    NonClosingBytesIO,
     random_input_data,
 )
 
@@ -19,6 +20,62 @@
 @unittest.skipUnless('ZSTD_SLOW_TESTS' in os.environ, 'ZSTD_SLOW_TESTS not set')
 @make_cffi
 class TestCompressor_stream_reader_fuzzing(unittest.TestCase):
+    @hypothesis.settings(
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
+    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
+                      level=strategies.integers(min_value=1, max_value=5),
+                      source_read_size=strategies.integers(1, 16384),
+                      read_size=strategies.integers(-1, zstd.COMPRESSION_RECOMMENDED_OUTPUT_SIZE))
+    def test_stream_source_read(self, original, level, source_read_size,
+                                read_size):
+        if read_size == 0:
+            read_size = -1
+
+        refctx = zstd.ZstdCompressor(level=level)
+        ref_frame = refctx.compress(original)
+
+        cctx = zstd.ZstdCompressor(level=level)
+        with cctx.stream_reader(io.BytesIO(original), size=len(original),
+                                read_size=source_read_size) as reader:
+            chunks = []
+            while True:
+                chunk = reader.read(read_size)
+                if not chunk:
+                    break
+
+                chunks.append(chunk)
+
+        self.assertEqual(b''.join(chunks), ref_frame)
+
+    @hypothesis.settings(
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
+    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
+                      level=strategies.integers(min_value=1, max_value=5),
+                      source_read_size=strategies.integers(1, 16384),
+                      read_size=strategies.integers(-1, zstd.COMPRESSION_RECOMMENDED_OUTPUT_SIZE))
+    def test_buffer_source_read(self, original, level, source_read_size,
+                                read_size):
+        if read_size == 0:
+            read_size = -1
+
+        refctx = zstd.ZstdCompressor(level=level)
+        ref_frame = refctx.compress(original)
+
+        cctx = zstd.ZstdCompressor(level=level)
+        with cctx.stream_reader(original, size=len(original),
+                                read_size=source_read_size) as reader:
+            chunks = []
+            while True:
+                chunk = reader.read(read_size)
+                if not chunk:
+                    break
+
+                chunks.append(chunk)
+
+        self.assertEqual(b''.join(chunks), ref_frame)
+
+    @hypothesis.settings(
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
     @hypothesis.given(original=strategies.sampled_from(random_input_data()),
                       level=strategies.integers(min_value=1, max_value=5),
                       source_read_size=strategies.integers(1, 16384),
@@ -33,15 +90,17 @@
                                 read_size=source_read_size) as reader:
             chunks = []
             while True:
-                read_size = read_sizes.draw(strategies.integers(1, 16384))
+                read_size = read_sizes.draw(strategies.integers(-1, 16384))
                 chunk = reader.read(read_size)
+                if not chunk and read_size:
+                    break
 
-                if not chunk:
-                    break
                 chunks.append(chunk)
 
         self.assertEqual(b''.join(chunks), ref_frame)
 
+    @hypothesis.settings(
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
     @hypothesis.given(original=strategies.sampled_from(random_input_data()),
                       level=strategies.integers(min_value=1, max_value=5),
                       source_read_size=strategies.integers(1, 16384),
@@ -57,14 +116,335 @@
                                 read_size=source_read_size) as reader:
             chunks = []
             while True:
+                read_size = read_sizes.draw(strategies.integers(-1, 16384))
+                chunk = reader.read(read_size)
+                if not chunk and read_size:
+                    break
+
+                chunks.append(chunk)
+
+        self.assertEqual(b''.join(chunks), ref_frame)
+
+    @hypothesis.settings(
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
+    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
+                      level=strategies.integers(min_value=1, max_value=5),
+                      source_read_size=strategies.integers(1, 16384),
+                      read_size=strategies.integers(1, zstd.COMPRESSION_RECOMMENDED_OUTPUT_SIZE))
+    def test_stream_source_readinto(self, original, level,
+                                    source_read_size, read_size):
+        refctx = zstd.ZstdCompressor(level=level)
+        ref_frame = refctx.compress(original)
+
+        cctx = zstd.ZstdCompressor(level=level)
+        with cctx.stream_reader(io.BytesIO(original), size=len(original),
+                                read_size=source_read_size) as reader:
+            chunks = []
+            while True:
+                b = bytearray(read_size)
+                count = reader.readinto(b)
+
+                if not count:
+                    break
+
+                chunks.append(bytes(b[0:count]))
+
+        self.assertEqual(b''.join(chunks), ref_frame)
+
+    @hypothesis.settings(
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
+    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
+                      level=strategies.integers(min_value=1, max_value=5),
+                      source_read_size=strategies.integers(1, 16384),
+                      read_size=strategies.integers(1, zstd.COMPRESSION_RECOMMENDED_OUTPUT_SIZE))
+    def test_buffer_source_readinto(self, original, level,
+                                    source_read_size, read_size):
+
+        refctx = zstd.ZstdCompressor(level=level)
+        ref_frame = refctx.compress(original)
+
+        cctx = zstd.ZstdCompressor(level=level)
+        with cctx.stream_reader(original, size=len(original),
+                                read_size=source_read_size) as reader:
+            chunks = []
+            while True:
+                b = bytearray(read_size)
+                count = reader.readinto(b)
+
+                if not count:
+                    break
+
+                chunks.append(bytes(b[0:count]))
+
+        self.assertEqual(b''.join(chunks), ref_frame)
+
+    @hypothesis.settings(
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
+    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
+                      level=strategies.integers(min_value=1, max_value=5),
+                      source_read_size=strategies.integers(1, 16384),
+                      read_sizes=strategies.data())
+    def test_stream_source_readinto_variance(self, original, level,
+                                             source_read_size, read_sizes):
+        refctx = zstd.ZstdCompressor(level=level)
+        ref_frame = refctx.compress(original)
+
+        cctx = zstd.ZstdCompressor(level=level)
+        with cctx.stream_reader(io.BytesIO(original), size=len(original),
+                                read_size=source_read_size) as reader:
+            chunks = []
+            while True:
                 read_size = read_sizes.draw(strategies.integers(1, 16384))
-                chunk = reader.read(read_size)
+                b = bytearray(read_size)
+                count = reader.readinto(b)
+
+                if not count:
+                    break
+
+                chunks.append(bytes(b[0:count]))
+
+        self.assertEqual(b''.join(chunks), ref_frame)
+
+    @hypothesis.settings(
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
+    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
+                      level=strategies.integers(min_value=1, max_value=5),
+                      source_read_size=strategies.integers(1, 16384),
+                      read_sizes=strategies.data())
+    def test_buffer_source_readinto_variance(self, original, level,
+                                             source_read_size, read_sizes):
+
+        refctx = zstd.ZstdCompressor(level=level)
+        ref_frame = refctx.compress(original)
+
+        cctx = zstd.ZstdCompressor(level=level)
+        with cctx.stream_reader(original, size=len(original),
+                                read_size=source_read_size) as reader:
+            chunks = []
+            while True:
+                read_size = read_sizes.draw(strategies.integers(1, 16384))
+                b = bytearray(read_size)
+                count = reader.readinto(b)
+
+                if not count:
+                    break
+
+                chunks.append(bytes(b[0:count]))
+
+        self.assertEqual(b''.join(chunks), ref_frame)
+
+    @hypothesis.settings(
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
+    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
+                      level=strategies.integers(min_value=1, max_value=5),
+                      source_read_size=strategies.integers(1, 16384),
+                      read_size=strategies.integers(-1, zstd.COMPRESSION_RECOMMENDED_OUTPUT_SIZE))
+    def test_stream_source_read1(self, original, level, source_read_size,
+                                 read_size):
+        if read_size == 0:
+            read_size = -1
+
+        refctx = zstd.ZstdCompressor(level=level)
+        ref_frame = refctx.compress(original)
+
+        cctx = zstd.ZstdCompressor(level=level)
+        with cctx.stream_reader(io.BytesIO(original), size=len(original),
+                                read_size=source_read_size) as reader:
+            chunks = []
+            while True:
+                chunk = reader.read1(read_size)
                 if not chunk:
                     break
+
                 chunks.append(chunk)
 
         self.assertEqual(b''.join(chunks), ref_frame)
 
+    @hypothesis.settings(
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
+    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
+                      level=strategies.integers(min_value=1, max_value=5),
+                      source_read_size=strategies.integers(1, 16384),
+                      read_size=strategies.integers(-1, zstd.COMPRESSION_RECOMMENDED_OUTPUT_SIZE))
+    def test_buffer_source_read1(self, original, level, source_read_size,
+                                 read_size):
+        if read_size == 0:
+            read_size = -1
+
+        refctx = zstd.ZstdCompressor(level=level)
+        ref_frame = refctx.compress(original)
+
+        cctx = zstd.ZstdCompressor(level=level)
+        with cctx.stream_reader(original, size=len(original),
+                                read_size=source_read_size) as reader:
+            chunks = []
+            while True:
+                chunk = reader.read1(read_size)
+                if not chunk:
+                    break
+
+                chunks.append(chunk)
+
+        self.assertEqual(b''.join(chunks), ref_frame)
+
+    @hypothesis.settings(
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
+    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
+                      level=strategies.integers(min_value=1, max_value=5),
+                      source_read_size=strategies.integers(1, 16384),
+                      read_sizes=strategies.data())
+    def test_stream_source_read1_variance(self, original, level, source_read_size,
+                                          read_sizes):
+        refctx = zstd.ZstdCompressor(level=level)
+        ref_frame = refctx.compress(original)
+
+        cctx = zstd.ZstdCompressor(level=level)
+        with cctx.stream_reader(io.BytesIO(original), size=len(original),
+                                read_size=source_read_size) as reader:
+            chunks = []
+            while True:
+                read_size = read_sizes.draw(strategies.integers(-1, 16384))
+                chunk = reader.read1(read_size)
+                if not chunk and read_size:
+                    break
+
+                chunks.append(chunk)
+
+        self.assertEqual(b''.join(chunks), ref_frame)
+
+    @hypothesis.settings(
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
+    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
+                      level=strategies.integers(min_value=1, max_value=5),
+                      source_read_size=strategies.integers(1, 16384),
+                      read_sizes=strategies.data())
+    def test_buffer_source_read1_variance(self, original, level, source_read_size,
+                                          read_sizes):
+
+        refctx = zstd.ZstdCompressor(level=level)
+        ref_frame = refctx.compress(original)
+
+        cctx = zstd.ZstdCompressor(level=level)
+        with cctx.stream_reader(original, size=len(original),
+                                read_size=source_read_size) as reader:
+            chunks = []
+            while True:
+                read_size = read_sizes.draw(strategies.integers(-1, 16384))
+                chunk = reader.read1(read_size)
+                if not chunk and read_size:
+                    break
+
+                chunks.append(chunk)
+
+        self.assertEqual(b''.join(chunks), ref_frame)
+
+    @hypothesis.settings(
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
+    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
+                      level=strategies.integers(min_value=1, max_value=5),
+                      source_read_size=strategies.integers(1, 16384),
+                      read_size=strategies.integers(1, zstd.COMPRESSION_RECOMMENDED_OUTPUT_SIZE))
+    def test_stream_source_readinto1(self, original, level, source_read_size,
+                                     read_size):
+        refctx = zstd.ZstdCompressor(level=level)
+        ref_frame = refctx.compress(original)
+
+        cctx = zstd.ZstdCompressor(level=level)
+        with cctx.stream_reader(io.BytesIO(original), size=len(original),
+                                read_size=source_read_size) as reader:
+            chunks = []
+            while True:
+                b = bytearray(read_size)
+                count = reader.readinto1(b)
+
+                if not count:
+                    break
+
+                chunks.append(bytes(b[0:count]))
+
+        self.assertEqual(b''.join(chunks), ref_frame)
+
+    @hypothesis.settings(
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
+    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
+                      level=strategies.integers(min_value=1, max_value=5),
+                      source_read_size=strategies.integers(1, 16384),
+                      read_size=strategies.integers(1, zstd.COMPRESSION_RECOMMENDED_OUTPUT_SIZE))
+    def test_buffer_source_readinto1(self, original, level, source_read_size,
+                                     read_size):
+        refctx = zstd.ZstdCompressor(level=level)
+        ref_frame = refctx.compress(original)
+
+        cctx = zstd.ZstdCompressor(level=level)
+        with cctx.stream_reader(original, size=len(original),
+                                read_size=source_read_size) as reader:
+            chunks = []
+            while True:
+                b = bytearray(read_size)
+                count = reader.readinto1(b)
+
+                if not count:
+                    break
+
+                chunks.append(bytes(b[0:count]))
+
+        self.assertEqual(b''.join(chunks), ref_frame)
+
+    @hypothesis.settings(
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
+    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
+                      level=strategies.integers(min_value=1, max_value=5),
+                      source_read_size=strategies.integers(1, 16384),
+                      read_sizes=strategies.data())
+    def test_stream_source_readinto1_variance(self, original, level, source_read_size,
+                                              read_sizes):
+        refctx = zstd.ZstdCompressor(level=level)
+        ref_frame = refctx.compress(original)
+
+        cctx = zstd.ZstdCompressor(level=level)
+        with cctx.stream_reader(io.BytesIO(original), size=len(original),
+                                read_size=source_read_size) as reader:
+            chunks = []
+            while True:
+                read_size = read_sizes.draw(strategies.integers(1, 16384))
+                b = bytearray(read_size)
+                count = reader.readinto1(b)
+
+                if not count:
+                    break
+
+                chunks.append(bytes(b[0:count]))
+
+        self.assertEqual(b''.join(chunks), ref_frame)
+
+    @hypothesis.settings(
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
+    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
+                      level=strategies.integers(min_value=1, max_value=5),
+                      source_read_size=strategies.integers(1, 16384),
+                      read_sizes=strategies.data())
+    def test_buffer_source_readinto1_variance(self, original, level, source_read_size,
+                                              read_sizes):
+
+        refctx = zstd.ZstdCompressor(level=level)
+        ref_frame = refctx.compress(original)
+
+        cctx = zstd.ZstdCompressor(level=level)
+        with cctx.stream_reader(original, size=len(original),
+                                read_size=source_read_size) as reader:
+            chunks = []
+            while True:
+                read_size = read_sizes.draw(strategies.integers(1, 16384))
+                b = bytearray(read_size)
+                count = reader.readinto1(b)
+
+                if not count:
+                    break
+
+                chunks.append(bytes(b[0:count]))
+
+        self.assertEqual(b''.join(chunks), ref_frame)
+
 
 @unittest.skipUnless('ZSTD_SLOW_TESTS' in os.environ, 'ZSTD_SLOW_TESTS not set')
 @make_cffi
@@ -77,7 +465,7 @@
         ref_frame = refctx.compress(original)
 
         cctx = zstd.ZstdCompressor(level=level)
-        b = io.BytesIO()
+        b = NonClosingBytesIO()
         with cctx.stream_writer(b, size=len(original), write_size=write_size) as compressor:
             compressor.write(original)
 
@@ -219,6 +607,9 @@
                                    write_checksum=True,
                                    **kwargs)
 
+        if not hasattr(cctx, 'multi_compress_to_buffer'):
+            self.skipTest('multi_compress_to_buffer not available')
+
         result = cctx.multi_compress_to_buffer(original, threads=-1)
 
         self.assertEqual(len(result), len(original))
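
The ``*_variance`` tests above draw a fresh size on every loop iteration via
the ``strategies.data()`` strategy, and now that sizes of ``-1`` and ``0``
are legal they must distinguish "asked for nothing" from end-of-stream. A
minimal self-contained sketch of that loop shape::

    import io

    import hypothesis
    from hypothesis import strategies

    import zstandard as zstd

    @hypothesis.given(data=strategies.data())
    def test_variable_read_sizes(data):
        original = b'x' * 131072
        ref_frame = zstd.ZstdCompressor().compress(original)

        cctx = zstd.ZstdCompressor()
        with cctx.stream_reader(io.BytesIO(original),
                                size=len(original)) as reader:
            chunks = []
            while True:
                read_size = data.draw(strategies.integers(-1, 16384))
                chunk = reader.read(read_size)
                # read(0) returns b'' without meaning EOF, so only stop
                # when a real request came back empty.
                if not chunk and read_size:
                    break
                chunks.append(chunk)

        assert b''.join(chunks) == ref_frame
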
--- a/contrib/python-zstandard/tests/test_data_structures.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/tests/test_data_structures.py	Wed Apr 17 13:41:18 2019 -0400
@@ -15,17 +15,17 @@
                                        chain_log=zstd.CHAINLOG_MIN,
                                        hash_log=zstd.HASHLOG_MIN,
                                        search_log=zstd.SEARCHLOG_MIN,
-                                       min_match=zstd.SEARCHLENGTH_MIN + 1,
+                                       min_match=zstd.MINMATCH_MIN + 1,
                                        target_length=zstd.TARGETLENGTH_MIN,
-                                       compression_strategy=zstd.STRATEGY_FAST)
+                                       strategy=zstd.STRATEGY_FAST)
 
         zstd.ZstdCompressionParameters(window_log=zstd.WINDOWLOG_MAX,
                                        chain_log=zstd.CHAINLOG_MAX,
                                        hash_log=zstd.HASHLOG_MAX,
                                        search_log=zstd.SEARCHLOG_MAX,
-                                       min_match=zstd.SEARCHLENGTH_MAX - 1,
+                                       min_match=zstd.MINMATCH_MAX - 1,
                                        target_length=zstd.TARGETLENGTH_MAX,
-                                       compression_strategy=zstd.STRATEGY_BTULTRA)
+                                       strategy=zstd.STRATEGY_BTULTRA2)
 
     def test_from_level(self):
         p = zstd.ZstdCompressionParameters.from_level(1)
@@ -43,7 +43,7 @@
                                            search_log=4,
                                            min_match=5,
                                            target_length=8,
-                                           compression_strategy=1)
+                                           strategy=1)
         self.assertEqual(p.window_log, 10)
         self.assertEqual(p.chain_log, 6)
         self.assertEqual(p.hash_log, 7)
@@ -59,9 +59,10 @@
         self.assertEqual(p.threads, 4)
 
         p = zstd.ZstdCompressionParameters(threads=2, job_size=1048576,
-                                       overlap_size_log=6)
+                                           overlap_log=6)
         self.assertEqual(p.threads, 2)
         self.assertEqual(p.job_size, 1048576)
+        self.assertEqual(p.overlap_log, 6)
         self.assertEqual(p.overlap_size_log, 6)
 
         p = zstd.ZstdCompressionParameters(compression_level=-1)
@@ -85,8 +86,9 @@
         p = zstd.ZstdCompressionParameters(ldm_bucket_size_log=7)
         self.assertEqual(p.ldm_bucket_size_log, 7)
 
-        p = zstd.ZstdCompressionParameters(ldm_hash_every_log=8)
+        p = zstd.ZstdCompressionParameters(ldm_hash_rate_log=8)
         self.assertEqual(p.ldm_hash_every_log, 8)
+        self.assertEqual(p.ldm_hash_rate_log, 8)
 
     def test_estimated_compression_context_size(self):
         p = zstd.ZstdCompressionParameters(window_log=20,
@@ -95,12 +97,44 @@
                                            search_log=1,
                                            min_match=5,
                                            target_length=16,
-                                           compression_strategy=zstd.STRATEGY_DFAST)
+                                           strategy=zstd.STRATEGY_DFAST)
 
         # 32-bit has slightly different values from 64-bit.
         self.assertAlmostEqual(p.estimated_compression_context_size(), 1294072,
                                delta=250)
 
+    def test_strategy(self):
+        with self.assertRaisesRegexp(ValueError, 'cannot specify both compression_strategy'):
+            zstd.ZstdCompressionParameters(strategy=0, compression_strategy=0)
+
+        p = zstd.ZstdCompressionParameters(strategy=2)
+        self.assertEqual(p.compression_strategy, 2)
+
+        p = zstd.ZstdCompressionParameters(strategy=3)
+        self.assertEqual(p.compression_strategy, 3)
+
+    def test_ldm_hash_rate_log(self):
+        with self.assertRaisesRegexp(ValueError, 'cannot specify both ldm_hash_rate_log'):
+            zstd.ZstdCompressionParameters(ldm_hash_rate_log=8, ldm_hash_every_log=4)
+
+        p = zstd.ZstdCompressionParameters(ldm_hash_rate_log=8)
+        self.assertEqual(p.ldm_hash_every_log, 8)
+
+        p = zstd.ZstdCompressionParameters(ldm_hash_every_log=16)
+        self.assertEqual(p.ldm_hash_every_log, 16)
+
+    def test_overlap_log(self):
+        with self.assertRaisesRegexp(ValueError, 'cannot specify both overlap_log'):
+            zstd.ZstdCompressionParameters(overlap_log=1, overlap_size_log=9)
+
+        p = zstd.ZstdCompressionParameters(overlap_log=2)
+        self.assertEqual(p.overlap_log, 2)
+        self.assertEqual(p.overlap_size_log, 2)
+
+        p = zstd.ZstdCompressionParameters(overlap_size_log=4)
+        self.assertEqual(p.overlap_log, 4)
+        self.assertEqual(p.overlap_size_log, 4)
+
 
 @make_cffi
 class TestFrameParameters(unittest.TestCase):
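
The renames track zstd 1.3.8's parameter names: ``compression_strategy``
becomes ``strategy``, ``overlap_size_log`` becomes ``overlap_log``,
``ldm_hash_every_log`` becomes ``ldm_hash_rate_log``, and the
``SEARCHLENGTH_*`` constants become ``MINMATCH_*``. The old spellings remain
as aliases, but supplying both the old and new name raises ``ValueError``. A
minimal sketch using the new spellings::

    import zstandard as zstd

    params = zstd.ZstdCompressionParameters(
        window_log=20,
        min_match=zstd.MINMATCH_MIN + 1,
        strategy=zstd.STRATEGY_BTULTRA2,
        overlap_log=6,
        ldm_hash_rate_log=8,
    )

    # Old aliases still read back the same values.
    assert params.compression_strategy == zstd.STRATEGY_BTULTRA2
    assert params.overlap_size_log == 6

    cctx = zstd.ZstdCompressor(compression_params=params)
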
--- a/contrib/python-zstandard/tests/test_data_structures_fuzzing.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/tests/test_data_structures_fuzzing.py	Wed Apr 17 13:41:18 2019 -0400
@@ -24,8 +24,8 @@
                                 max_value=zstd.HASHLOG_MAX)
 s_searchlog = strategies.integers(min_value=zstd.SEARCHLOG_MIN,
                                     max_value=zstd.SEARCHLOG_MAX)
-s_searchlength = strategies.integers(min_value=zstd.SEARCHLENGTH_MIN,
-                                     max_value=zstd.SEARCHLENGTH_MAX)
+s_minmatch = strategies.integers(min_value=zstd.MINMATCH_MIN,
+                                 max_value=zstd.MINMATCH_MAX)
 s_targetlength = strategies.integers(min_value=zstd.TARGETLENGTH_MIN,
                                      max_value=zstd.TARGETLENGTH_MAX)
 s_strategy = strategies.sampled_from((zstd.STRATEGY_FAST,
@@ -35,41 +35,42 @@
                                         zstd.STRATEGY_LAZY2,
                                         zstd.STRATEGY_BTLAZY2,
                                         zstd.STRATEGY_BTOPT,
-                                        zstd.STRATEGY_BTULTRA))
+                                        zstd.STRATEGY_BTULTRA,
+                                        zstd.STRATEGY_BTULTRA2))
 
 
 @make_cffi
 @unittest.skipUnless('ZSTD_SLOW_TESTS' in os.environ, 'ZSTD_SLOW_TESTS not set')
 class TestCompressionParametersHypothesis(unittest.TestCase):
     @hypothesis.given(s_windowlog, s_chainlog, s_hashlog, s_searchlog,
-                        s_searchlength, s_targetlength, s_strategy)
+                        s_minmatch, s_targetlength, s_strategy)
     def test_valid_init(self, windowlog, chainlog, hashlog, searchlog,
-                        searchlength, targetlength, strategy):
+                        minmatch, targetlength, strategy):
         zstd.ZstdCompressionParameters(window_log=windowlog,
                                        chain_log=chainlog,
                                        hash_log=hashlog,
                                        search_log=searchlog,
-                                       min_match=searchlength,
+                                       min_match=minmatch,
                                        target_length=targetlength,
-                                       compression_strategy=strategy)
+                                       strategy=strategy)
 
     @hypothesis.given(s_windowlog, s_chainlog, s_hashlog, s_searchlog,
-                        s_searchlength, s_targetlength, s_strategy)
+                      s_minmatch, s_targetlength, s_strategy)
     def test_estimated_compression_context_size(self, windowlog, chainlog,
                                                 hashlog, searchlog,
-                                                searchlength, targetlength,
+                                                minmatch, targetlength,
                                                 strategy):
-        if searchlength == zstd.SEARCHLENGTH_MIN and strategy in (zstd.STRATEGY_FAST, zstd.STRATEGY_GREEDY):
-            searchlength += 1
-        elif searchlength == zstd.SEARCHLENGTH_MAX and strategy != zstd.STRATEGY_FAST:
-            searchlength -= 1
+        if minmatch == zstd.MINMATCH_MIN and strategy in (zstd.STRATEGY_FAST, zstd.STRATEGY_GREEDY):
+            minmatch += 1
+        elif minmatch == zstd.MINMATCH_MAX and strategy != zstd.STRATEGY_FAST:
+            minmatch -= 1
 
         p = zstd.ZstdCompressionParameters(window_log=windowlog,
                                            chain_log=chainlog,
                                            hash_log=hashlog,
                                            search_log=searchlog,
-                                           min_match=searchlength,
+                                           min_match=minmatch,
                                            target_length=targetlength,
-                                           compression_strategy=strategy)
+                                           strategy=strategy)
         size = p.estimated_compression_context_size()
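
These hypothesis-test hunks track the 0.11 renames: the ``SEARCHLENGTH_*``
constants become ``MINMATCH_*`` and ``ZstdCompressionParameters`` now takes
``strategy=`` (with ``compression_strategy=`` retained as a deprecated alias,
per the constructor in ``cffi.py`` below). A minimal sketch of the new
spelling, with illustrative values::

   import zstandard as zstd

   params = zstd.ZstdCompressionParameters(
       window_log=zstd.WINDOWLOG_MIN,
       min_match=zstd.MINMATCH_MIN + 1,   # constants renamed from SEARCHLENGTH_*
       strategy=zstd.STRATEGY_BTULTRA2,   # strategy new in zstd 1.3.8
   )
   cctx = zstd.ZstdCompressor(compression_params=params)
   frame = cctx.compress(b'data to compress')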
 
--- a/contrib/python-zstandard/tests/test_decompressor.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/tests/test_decompressor.py	Wed Apr 17 13:41:18 2019 -0400
@@ -3,6 +3,7 @@
 import random
 import struct
 import sys
+import tempfile
 import unittest
 
 import zstandard as zstd
@@ -10,6 +11,7 @@
 from .common import (
     generate_samples,
     make_cffi,
+    NonClosingBytesIO,
     OpCountingBytesIO,
 )
 
@@ -219,7 +221,7 @@
         cctx = zstd.ZstdCompressor(write_content_size=False)
         frame = cctx.compress(source)
 
-        dctx = zstd.ZstdDecompressor(max_window_size=1)
+        dctx = zstd.ZstdDecompressor(max_window_size=2**zstd.WINDOWLOG_MIN)
 
         with self.assertRaisesRegexp(
             zstd.ZstdError, 'decompression error: Frame requires too much memory'):
@@ -302,19 +304,16 @@
         dctx = zstd.ZstdDecompressor()
 
         with dctx.stream_reader(b'foo') as reader:
-            with self.assertRaises(NotImplementedError):
+            with self.assertRaises(io.UnsupportedOperation):
                 reader.readline()
 
-            with self.assertRaises(NotImplementedError):
+            with self.assertRaises(io.UnsupportedOperation):
                 reader.readlines()
 
-            with self.assertRaises(NotImplementedError):
-                reader.readall()
-
-            with self.assertRaises(NotImplementedError):
+            with self.assertRaises(io.UnsupportedOperation):
                 iter(reader)
 
-            with self.assertRaises(NotImplementedError):
+            with self.assertRaises(io.UnsupportedOperation):
                 next(reader)
 
             with self.assertRaises(io.UnsupportedOperation):
@@ -347,15 +346,18 @@
             with self.assertRaisesRegexp(ValueError, 'stream is closed'):
                 reader.read(1)
 
-    def test_bad_read_size(self):
+    def test_read_sizes(self):
+        cctx = zstd.ZstdCompressor()
+        foo = cctx.compress(b'foo')
+
         dctx = zstd.ZstdDecompressor()
 
-        with dctx.stream_reader(b'foo') as reader:
-            with self.assertRaisesRegexp(ValueError, 'cannot read negative or size 0 amounts'):
-                reader.read(-1)
+        with dctx.stream_reader(foo) as reader:
+            with self.assertRaisesRegexp(ValueError, 'cannot read negative amounts less than -1'):
+                reader.read(-2)
 
-            with self.assertRaisesRegexp(ValueError, 'cannot read negative or size 0 amounts'):
-                reader.read(0)
+            self.assertEqual(reader.read(0), b'')
+            self.assertEqual(reader.read(), b'foo')
 
     def test_read_buffer(self):
         cctx = zstd.ZstdCompressor()
@@ -524,13 +526,243 @@
         reader = dctx.stream_reader(source)
 
         with reader:
-            with self.assertRaises(TypeError):
-                reader.read()
+            reader.read(0)
 
         with reader:
             with self.assertRaisesRegexp(ValueError, 'stream is closed'):
                 reader.read(100)
 
+    def test_partial_read(self):
+        # Inspired by https://github.com/indygreg/python-zstandard/issues/71.
+        buffer = io.BytesIO()
+        cctx = zstd.ZstdCompressor()
+        writer = cctx.stream_writer(buffer)
+        writer.write(bytearray(os.urandom(1000000)))
+        writer.flush(zstd.FLUSH_FRAME)
+        buffer.seek(0)
+
+        dctx = zstd.ZstdDecompressor()
+        reader = dctx.stream_reader(buffer)
+
+        while True:
+            chunk = reader.read(8192)
+            if not chunk:
+                break
+
+    def test_read_multiple_frames(self):
+        cctx = zstd.ZstdCompressor()
+        source = io.BytesIO()
+        writer = cctx.stream_writer(source)
+        writer.write(b'foo')
+        writer.flush(zstd.FLUSH_FRAME)
+        writer.write(b'bar')
+        writer.flush(zstd.FLUSH_FRAME)
+
+        dctx = zstd.ZstdDecompressor()
+
+        reader = dctx.stream_reader(source.getvalue())
+        self.assertEqual(reader.read(2), b'fo')
+        self.assertEqual(reader.read(2), b'o')
+        self.assertEqual(reader.read(2), b'ba')
+        self.assertEqual(reader.read(2), b'r')
+
+        source.seek(0)
+        reader = dctx.stream_reader(source)
+        self.assertEqual(reader.read(2), b'fo')
+        self.assertEqual(reader.read(2), b'o')
+        self.assertEqual(reader.read(2), b'ba')
+        self.assertEqual(reader.read(2), b'r')
+
+        reader = dctx.stream_reader(source.getvalue())
+        self.assertEqual(reader.read(3), b'foo')
+        self.assertEqual(reader.read(3), b'bar')
+
+        source.seek(0)
+        reader = dctx.stream_reader(source)
+        self.assertEqual(reader.read(3), b'foo')
+        self.assertEqual(reader.read(3), b'bar')
+
+        reader = dctx.stream_reader(source.getvalue())
+        self.assertEqual(reader.read(4), b'foo')
+        self.assertEqual(reader.read(4), b'bar')
+
+        source.seek(0)
+        reader = dctx.stream_reader(source)
+        self.assertEqual(reader.read(4), b'foo')
+        self.assertEqual(reader.read(4), b'bar')
+
+        reader = dctx.stream_reader(source.getvalue())
+        self.assertEqual(reader.read(128), b'foo')
+        self.assertEqual(reader.read(128), b'bar')
+
+        source.seek(0)
+        reader = dctx.stream_reader(source)
+        self.assertEqual(reader.read(128), b'foo')
+        self.assertEqual(reader.read(128), b'bar')
+
+        # Now tests for reads spanning frames.
+        reader = dctx.stream_reader(source.getvalue(), read_across_frames=True)
+        self.assertEqual(reader.read(3), b'foo')
+        self.assertEqual(reader.read(3), b'bar')
+
+        source.seek(0)
+        reader = dctx.stream_reader(source, read_across_frames=True)
+        self.assertEqual(reader.read(3), b'foo')
+        self.assertEqual(reader.read(3), b'bar')
+
+        reader = dctx.stream_reader(source.getvalue(), read_across_frames=True)
+        self.assertEqual(reader.read(6), b'foobar')
+
+        source.seek(0)
+        reader = dctx.stream_reader(source, read_across_frames=True)
+        self.assertEqual(reader.read(6), b'foobar')
+
+        reader = dctx.stream_reader(source.getvalue(), read_across_frames=True)
+        self.assertEqual(reader.read(7), b'foobar')
+
+        source.seek(0)
+        reader = dctx.stream_reader(source, read_across_frames=True)
+        self.assertEqual(reader.read(7), b'foobar')
+
+        reader = dctx.stream_reader(source.getvalue(), read_across_frames=True)
+        self.assertEqual(reader.read(128), b'foobar')
+
+        source.seek(0)
+        reader = dctx.stream_reader(source, read_across_frames=True)
+        self.assertEqual(reader.read(128), b'foobar')
+
+    def test_readinto(self):
+        cctx = zstd.ZstdCompressor()
+        foo = cctx.compress(b'foo')
+
+        dctx = zstd.ZstdDecompressor()
+
+        # Attempting to readinto() a non-writable buffer fails.
+        # The exact exception varies based on the backend.
+        reader = dctx.stream_reader(foo)
+        with self.assertRaises(Exception):
+            reader.readinto(b'foobar')
+
+        # readinto() with sufficiently large destination.
+        b = bytearray(1024)
+        reader = dctx.stream_reader(foo)
+        self.assertEqual(reader.readinto(b), 3)
+        self.assertEqual(b[0:3], b'foo')
+        self.assertEqual(reader.readinto(b), 0)
+        self.assertEqual(b[0:3], b'foo')
+
+        # readinto() with small reads.
+        b = bytearray(1024)
+        reader = dctx.stream_reader(foo, read_size=1)
+        self.assertEqual(reader.readinto(b), 3)
+        self.assertEqual(b[0:3], b'foo')
+
+        # Too small destination buffer.
+        b = bytearray(2)
+        reader = dctx.stream_reader(foo)
+        self.assertEqual(reader.readinto(b), 2)
+        self.assertEqual(b[:], b'fo')
+
+    def test_readinto1(self):
+        cctx = zstd.ZstdCompressor()
+        foo = cctx.compress(b'foo')
+
+        dctx = zstd.ZstdDecompressor()
+
+        reader = dctx.stream_reader(foo)
+        with self.assertRaises(Exception):
+            reader.readinto1(b'foobar')
+
+        # Sufficiently large destination.
+        b = bytearray(1024)
+        reader = dctx.stream_reader(foo)
+        self.assertEqual(reader.readinto1(b), 3)
+        self.assertEqual(b[0:3], b'foo')
+        self.assertEqual(reader.readinto1(b), 0)
+        self.assertEqual(b[0:3], b'foo')
+
+        # readinto1() with small reads.
+        b = bytearray(1024)
+        reader = dctx.stream_reader(foo, read_size=1)
+        self.assertEqual(reader.readinto1(b), 3)
+        self.assertEqual(b[0:3], b'foo')
+
+        # Too small destination buffer.
+        b = bytearray(2)
+        reader = dctx.stream_reader(foo)
+        self.assertEqual(reader.readinto1(b), 2)
+        self.assertEqual(b[:], b'fo')
+
+    def test_readall(self):
+        cctx = zstd.ZstdCompressor()
+        foo = cctx.compress(b'foo')
+
+        dctx = zstd.ZstdDecompressor()
+        reader = dctx.stream_reader(foo)
+
+        self.assertEqual(reader.readall(), b'foo')
+
+    def test_read1(self):
+        cctx = zstd.ZstdCompressor()
+        foo = cctx.compress(b'foo')
+
+        dctx = zstd.ZstdDecompressor()
+
+        b = OpCountingBytesIO(foo)
+        reader = dctx.stream_reader(b)
+
+        self.assertEqual(reader.read1(), b'foo')
+        self.assertEqual(b._read_count, 1)
+
+        b = OpCountingBytesIO(foo)
+        reader = dctx.stream_reader(b)
+
+        self.assertEqual(reader.read1(0), b'')
+        self.assertEqual(reader.read1(2), b'fo')
+        self.assertEqual(b._read_count, 1)
+        self.assertEqual(reader.read1(1), b'o')
+        self.assertEqual(b._read_count, 1)
+        self.assertEqual(reader.read1(1), b'')
+        self.assertEqual(b._read_count, 2)
+
+    def test_read_lines(self):
+        cctx = zstd.ZstdCompressor()
+        source = b'\n'.join(('line %d' % i).encode('ascii') for i in range(1024))
+
+        frame = cctx.compress(source)
+
+        dctx = zstd.ZstdDecompressor()
+        reader = dctx.stream_reader(frame)
+        tr = io.TextIOWrapper(reader, encoding='utf-8')
+
+        lines = []
+        for line in tr:
+            lines.append(line.encode('utf-8'))
+
+        self.assertEqual(len(lines), 1024)
+        self.assertEqual(b''.join(lines), source)
+
+        reader = dctx.stream_reader(frame)
+        tr = io.TextIOWrapper(reader, encoding='utf-8')
+
+        lines = tr.readlines()
+        self.assertEqual(len(lines), 1024)
+        self.assertEqual(''.join(lines).encode('utf-8'), source)
+
+        reader = dctx.stream_reader(frame)
+        tr = io.TextIOWrapper(reader, encoding='utf-8')
+
+        lines = []
+        while True:
+            line = tr.readline()
+            if not line:
+                break
+
+            lines.append(line.encode('utf-8'))
+
+        self.assertEqual(len(lines), 1024)
+        self.assertEqual(b''.join(lines), source)
+
 
 @make_cffi
 class TestDecompressor_decompressobj(unittest.TestCase):
@@ -540,6 +772,9 @@
         dctx = zstd.ZstdDecompressor()
         dobj = dctx.decompressobj()
         self.assertEqual(dobj.decompress(data), b'foobar')
+        self.assertIsNone(dobj.flush())
+        self.assertIsNone(dobj.flush(10))
+        self.assertIsNone(dobj.flush(length=100))
 
     def test_input_types(self):
         compressed = zstd.ZstdCompressor(level=1).compress(b'foo')
@@ -557,7 +792,11 @@
 
         for source in sources:
             dobj = dctx.decompressobj()
+            self.assertIsNone(dobj.flush())
+            self.assertIsNone(dobj.flush(10))
+            self.assertIsNone(dobj.flush(length=100))
             self.assertEqual(dobj.decompress(source), b'foo')
+            self.assertIsNone(dobj.flush())
 
     def test_reuse(self):
         data = zstd.ZstdCompressor(level=1).compress(b'foobar')
@@ -568,6 +807,7 @@
 
         with self.assertRaisesRegexp(zstd.ZstdError, 'cannot use a decompressobj'):
             dobj.decompress(data)
+
+        # flush() still returns None after the failed decompress() above.
+        self.assertIsNone(dobj.flush())
 
     def test_bad_write_size(self):
         dctx = zstd.ZstdDecompressor()
@@ -585,16 +825,141 @@
             dobj = dctx.decompressobj(write_size=i + 1)
             self.assertEqual(dobj.decompress(data), source)
 
+
 def decompress_via_writer(data):
     buffer = io.BytesIO()
     dctx = zstd.ZstdDecompressor()
-    with dctx.stream_writer(buffer) as decompressor:
-        decompressor.write(data)
+    decompressor = dctx.stream_writer(buffer)
+    decompressor.write(data)
+
     return buffer.getvalue()
 
 
 @make_cffi
 class TestDecompressor_stream_writer(unittest.TestCase):
+    def test_io_api(self):
+        buffer = io.BytesIO()
+        dctx = zstd.ZstdDecompressor()
+        writer = dctx.stream_writer(buffer)
+
+        self.assertFalse(writer.closed)
+        self.assertFalse(writer.isatty())
+        self.assertFalse(writer.readable())
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.readline()
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.readline(42)
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.readline(size=42)
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.readlines()
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.readlines(42)
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.readlines(hint=42)
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.seek(0)
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.seek(10, os.SEEK_SET)
+
+        self.assertFalse(writer.seekable())
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.tell()
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.truncate()
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.truncate(42)
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.truncate(size=42)
+
+        self.assertTrue(writer.writable())
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.writelines([])
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.read()
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.read(42)
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.read(size=42)
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.readall()
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.readinto(None)
+
+        with self.assertRaises(io.UnsupportedOperation):
+            writer.fileno()
+
+    def test_fileno_file(self):
+        with tempfile.TemporaryFile('wb') as tf:
+            dctx = zstd.ZstdDecompressor()
+            writer = dctx.stream_writer(tf)
+
+            self.assertEqual(writer.fileno(), tf.fileno())
+
+    def test_close(self):
+        foo = zstd.ZstdCompressor().compress(b'foo')
+
+        buffer = NonClosingBytesIO()
+        dctx = zstd.ZstdDecompressor()
+        writer = dctx.stream_writer(buffer)
+
+        writer.write(foo)
+        self.assertFalse(writer.closed)
+        self.assertFalse(buffer.closed)
+        writer.close()
+        self.assertTrue(writer.closed)
+        self.assertTrue(buffer.closed)
+
+        with self.assertRaisesRegexp(ValueError, 'stream is closed'):
+            writer.write(b'')
+
+        with self.assertRaisesRegexp(ValueError, 'stream is closed'):
+            writer.flush()
+
+        with self.assertRaisesRegexp(ValueError, 'stream is closed'):
+            with writer:
+                pass
+
+        self.assertEqual(buffer.getvalue(), b'foo')
+
+        # Context manager exit should close stream.
+        buffer = NonClosingBytesIO()
+        writer = dctx.stream_writer(buffer)
+
+        with writer:
+            writer.write(foo)
+
+        self.assertTrue(writer.closed)
+        self.assertEqual(buffer.getvalue(), b'foo')
+
+    def test_flush(self):
+        buffer = OpCountingBytesIO()
+        dctx = zstd.ZstdDecompressor()
+        writer = dctx.stream_writer(buffer)
+
+        writer.flush()
+        self.assertEqual(buffer._flush_count, 1)
+        writer.flush()
+        self.assertEqual(buffer._flush_count, 2)
+
     def test_empty_roundtrip(self):
         cctx = zstd.ZstdCompressor()
         empty = cctx.compress(b'')
@@ -616,9 +981,21 @@
         dctx = zstd.ZstdDecompressor()
         for source in sources:
             buffer = io.BytesIO()
+
+            decompressor = dctx.stream_writer(buffer)
+            decompressor.write(source)
+            self.assertEqual(buffer.getvalue(), b'foo')
+
+            buffer = NonClosingBytesIO()
+
             with dctx.stream_writer(buffer) as decompressor:
-                decompressor.write(source)
+                self.assertEqual(decompressor.write(source), 3)
+
+            self.assertEqual(buffer.getvalue(), b'foo')
 
+            buffer = io.BytesIO()
+            writer = dctx.stream_writer(buffer, write_return_read=True)
+            self.assertEqual(writer.write(source), len(source))
             self.assertEqual(buffer.getvalue(), b'foo')
 
     def test_large_roundtrip(self):
@@ -641,7 +1018,7 @@
         cctx = zstd.ZstdCompressor()
         compressed = cctx.compress(orig)
 
-        buffer = io.BytesIO()
+        buffer = NonClosingBytesIO()
         dctx = zstd.ZstdDecompressor()
         with dctx.stream_writer(buffer) as decompressor:
             pos = 0
@@ -651,6 +1028,17 @@
                 pos += 8192
         self.assertEqual(buffer.getvalue(), orig)
 
+        # Again with write_return_read=True
+        buffer = io.BytesIO()
+        writer = dctx.stream_writer(buffer, write_return_read=True)
+        pos = 0
+        while pos < len(compressed):
+            pos2 = pos + 8192
+            chunk = compressed[pos:pos2]
+            self.assertEqual(writer.write(chunk), len(chunk))
+            pos += 8192
+        self.assertEqual(buffer.getvalue(), orig)
+
     def test_dictionary(self):
         samples = []
         for i in range(128):
@@ -661,7 +1049,7 @@
         d = zstd.train_dictionary(8192, samples)
 
         orig = b'foobar' * 16384
-        buffer = io.BytesIO()
+        buffer = NonClosingBytesIO()
         cctx = zstd.ZstdCompressor(dict_data=d)
         with cctx.stream_writer(buffer) as compressor:
             self.assertEqual(compressor.write(orig), 0)
@@ -670,6 +1058,12 @@
         buffer = io.BytesIO()
 
         dctx = zstd.ZstdDecompressor(dict_data=d)
+        decompressor = dctx.stream_writer(buffer)
+        self.assertEqual(decompressor.write(compressed), len(orig))
+        self.assertEqual(buffer.getvalue(), orig)
+
+        buffer = NonClosingBytesIO()
+
         with dctx.stream_writer(buffer) as decompressor:
             self.assertEqual(decompressor.write(compressed), len(orig))
 
@@ -678,6 +1072,11 @@
     def test_memory_size(self):
         dctx = zstd.ZstdDecompressor()
         buffer = io.BytesIO()
+
+        decompressor = dctx.stream_writer(buffer)
+        size = decompressor.memory_size()
+        self.assertGreater(size, 100000)
+
         with dctx.stream_writer(buffer) as decompressor:
             size = decompressor.memory_size()
 
@@ -810,7 +1209,7 @@
     @unittest.skipUnless('ZSTD_SLOW_TESTS' in os.environ, 'ZSTD_SLOW_TESTS not set')
     def test_large_input(self):
         bytes = list(struct.Struct('>B').pack(i) for i in range(256))
-        compressed = io.BytesIO()
+        compressed = NonClosingBytesIO()
         input_size = 0
         cctx = zstd.ZstdCompressor(level=1)
         with cctx.stream_writer(compressed) as compressor:
@@ -823,7 +1222,7 @@
                 if have_compressed and have_raw:
                     break
 
-        compressed.seek(0)
+        compressed = io.BytesIO(compressed.getvalue())
         self.assertGreater(len(compressed.getvalue()),
                            zstd.DECOMPRESSION_RECOMMENDED_INPUT_SIZE)
 
@@ -861,7 +1260,7 @@
 
         source = io.BytesIO()
 
-        compressed = io.BytesIO()
+        compressed = NonClosingBytesIO()
         with cctx.stream_writer(compressed) as compressor:
             for i in range(256):
                 chunk = b'\0' * 1024
@@ -874,7 +1273,7 @@
                                  max_output_size=len(source.getvalue()))
         self.assertEqual(simple, source.getvalue())
 
-        compressed.seek(0)
+        compressed = io.BytesIO(compressed.getvalue())
         streamed = b''.join(dctx.read_to_iter(compressed))
         self.assertEqual(streamed, source.getvalue())
 
@@ -1001,6 +1400,9 @@
     def test_invalid_inputs(self):
         dctx = zstd.ZstdDecompressor()
 
+        if not hasattr(dctx, 'multi_decompress_to_buffer'):
+            self.skipTest('multi_decompress_to_buffer not available')
+
         with self.assertRaises(TypeError):
             dctx.multi_decompress_to_buffer(True)
 
@@ -1020,6 +1422,10 @@
         frames = [cctx.compress(d) for d in original]
 
         dctx = zstd.ZstdDecompressor()
+
+        if not hasattr(dctx, 'multi_decompress_to_buffer'):
+            self.skipTest('multi_decompress_to_buffer not available')
+
         result = dctx.multi_decompress_to_buffer(frames)
 
         self.assertEqual(len(result), len(frames))
@@ -1041,6 +1447,10 @@
         sizes = struct.pack('=' + 'Q' * len(original), *map(len, original))
 
         dctx = zstd.ZstdDecompressor()
+
+        if not hasattr(dctx, 'multi_decompress_to_buffer'):
+            self.skipTest('multi_decompress_to_buffer not available')
+
         result = dctx.multi_decompress_to_buffer(frames, decompressed_sizes=sizes)
 
         self.assertEqual(len(result), len(frames))
@@ -1057,6 +1467,9 @@
 
         dctx = zstd.ZstdDecompressor()
 
+        if not hasattr(dctx, 'multi_decompress_to_buffer'):
+            self.skipTest('multi_decompress_to_buffer not available')
+
         segments = struct.pack('=QQQQ', 0, len(frames[0]), len(frames[0]), len(frames[1]))
         b = zstd.BufferWithSegments(b''.join(frames), segments)
 
@@ -1074,12 +1487,16 @@
         frames = [cctx.compress(d) for d in original]
         sizes = struct.pack('=' + 'Q' * len(original), *map(len, original))
 
+        dctx = zstd.ZstdDecompressor()
+
+        if not hasattr(dctx, 'multi_decompress_to_buffer'):
+            self.skipTest('multi_decompress_to_buffer not available')
+
         segments = struct.pack('=QQQQQQ', 0, len(frames[0]),
                                len(frames[0]), len(frames[1]),
                                len(frames[0]) + len(frames[1]), len(frames[2]))
         b = zstd.BufferWithSegments(b''.join(frames), segments)
 
-        dctx = zstd.ZstdDecompressor()
         result = dctx.multi_decompress_to_buffer(b, decompressed_sizes=sizes)
 
         self.assertEqual(len(result), len(frames))
@@ -1099,10 +1516,14 @@
             b'foo4' * 6,
         ]
 
+        if not hasattr(cctx, 'multi_compress_to_buffer'):
+            self.skipTest('multi_compress_to_buffer not available')
+
         frames = cctx.multi_compress_to_buffer(original)
 
         # Check round trip.
         dctx = zstd.ZstdDecompressor()
+
         decompressed = dctx.multi_decompress_to_buffer(frames, threads=3)
 
         self.assertEqual(len(decompressed), len(original))
@@ -1138,7 +1559,12 @@
         frames = [cctx.compress(s) for s in generate_samples()]
 
         dctx = zstd.ZstdDecompressor(dict_data=d)
+
+        if not hasattr(dctx, 'multi_decompress_to_buffer'):
+            self.skipTest('multi_decompress_to_buffer not available')
+
         result = dctx.multi_decompress_to_buffer(frames)
+
         self.assertEqual([o.tobytes() for o in result], generate_samples())
 
     def test_multiple_threads(self):
@@ -1149,6 +1575,10 @@
         frames.extend(cctx.compress(b'y' * 64) for i in range(256))
 
         dctx = zstd.ZstdDecompressor()
+
+        if not hasattr(dctx, 'multi_decompress_to_buffer'):
+            self.skipTest('multi_decompress_to_buffer not available')
+
         result = dctx.multi_decompress_to_buffer(frames, threads=-1)
 
         self.assertEqual(len(result), len(frames))
@@ -1164,6 +1594,9 @@
 
         dctx = zstd.ZstdDecompressor()
 
+        if not hasattr(dctx, 'multi_decompress_to_buffer'):
+            self.skipTest('multi_decompress_to_buffer not available')
+
         with self.assertRaisesRegexp(zstd.ZstdError,
                                      'error decompressing item 1: ('
                                      'Corrupted block|'
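
The block above also adds ``hasattr`` guards so the
``multi_decompress_to_buffer`` tests skip on backends (such as CFFI) that do
not provide that API. The bulk of the new coverage, though, is the io-stream
surface grown by ``stream_reader``: ``read1()``, ``readinto()``/``readinto1()``,
``readall()``, and line iteration through ``io.TextIOWrapper``. A minimal
sketch assuming only the API exercised by these tests::

   import io
   import zstandard as zstd

   frame = zstd.ZstdCompressor().compress(b'hello\nworld\n')
   dctx = zstd.ZstdDecompressor()

   # readinto() fills a caller-supplied buffer and returns the byte count.
   buf = bytearray(64)
   reader = dctx.stream_reader(frame)
   n = reader.readinto(buf)
   assert bytes(buf[:n]) == b'hello\nworld\n'

   # The reader is a binary stream, so TextIOWrapper provides line access.
   reader = dctx.stream_reader(frame)
   lines = list(io.TextIOWrapper(reader, encoding='utf-8'))
   assert lines == ['hello\n', 'world\n']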
--- a/contrib/python-zstandard/tests/test_decompressor_fuzzing.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/tests/test_decompressor_fuzzing.py	Wed Apr 17 13:41:18 2019 -0400
@@ -12,6 +12,7 @@
 
 from . common import (
     make_cffi,
+    NonClosingBytesIO,
     random_input_data,
 )
 
@@ -23,22 +24,200 @@
         suppress_health_check=[hypothesis.HealthCheck.large_base_example])
     @hypothesis.given(original=strategies.sampled_from(random_input_data()),
                       level=strategies.integers(min_value=1, max_value=5),
-                      source_read_size=strategies.integers(1, 16384),
+                      streaming=strategies.booleans(),
+                      source_read_size=strategies.integers(1, 1048576),
                       read_sizes=strategies.data())
-    def test_stream_source_read_variance(self, original, level, source_read_size,
-                                         read_sizes):
+    def test_stream_source_read_variance(self, original, level, streaming,
+                                         source_read_size, read_sizes):
         cctx = zstd.ZstdCompressor(level=level)
-        frame = cctx.compress(original)
+
+        if streaming:
+            source = io.BytesIO()
+            writer = cctx.stream_writer(source)
+            writer.write(original)
+            writer.flush(zstd.FLUSH_FRAME)
+            source.seek(0)
+        else:
+            frame = cctx.compress(original)
+            source = io.BytesIO(frame)
 
         dctx = zstd.ZstdDecompressor()
-        source = io.BytesIO(frame)
 
         chunks = []
         with dctx.stream_reader(source, read_size=source_read_size) as reader:
             while True:
-                read_size = read_sizes.draw(strategies.integers(1, 16384))
+                read_size = read_sizes.draw(strategies.integers(-1, 131072))
+                chunk = reader.read(read_size)
+                if not chunk and read_size:
+                    break
+
+                chunks.append(chunk)
+
+        self.assertEqual(b''.join(chunks), original)
+
+    # Similar to above except we have a constant read() size.
+    @hypothesis.settings(
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
+    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
+                      level=strategies.integers(min_value=1, max_value=5),
+                      streaming=strategies.booleans(),
+                      source_read_size=strategies.integers(1, 1048576),
+                      read_size=strategies.integers(-1, 131072))
+    def test_stream_source_read_size(self, original, level, streaming,
+                                     source_read_size, read_size):
+        if read_size == 0:
+            read_size = 1
+
+        cctx = zstd.ZstdCompressor(level=level)
+
+        if streaming:
+            source = io.BytesIO()
+            writer = cctx.stream_writer(source)
+            writer.write(original)
+            writer.flush(zstd.FLUSH_FRAME)
+            source.seek(0)
+        else:
+            frame = cctx.compress(original)
+            source = io.BytesIO(frame)
+
+        dctx = zstd.ZstdDecompressor()
+
+        chunks = []
+        reader = dctx.stream_reader(source, read_size=source_read_size)
+        while True:
+            chunk = reader.read(read_size)
+            if not chunk and read_size:
+                break
+
+            chunks.append(chunk)
+
+        self.assertEqual(b''.join(chunks), original)
+
+    @hypothesis.settings(
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
+    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
+                      level=strategies.integers(min_value=1, max_value=5),
+                      streaming=strategies.booleans(),
+                      source_read_size=strategies.integers(1, 1048576),
+                      read_sizes=strategies.data())
+    def test_buffer_source_read_variance(self, original, level, streaming,
+                                         source_read_size, read_sizes):
+        cctx = zstd.ZstdCompressor(level=level)
+
+        if streaming:
+            source = io.BytesIO()
+            writer = cctx.stream_writer(source)
+            writer.write(original)
+            writer.flush(zstd.FLUSH_FRAME)
+            frame = source.getvalue()
+        else:
+            frame = cctx.compress(original)
+
+        dctx = zstd.ZstdDecompressor()
+        chunks = []
+
+        with dctx.stream_reader(frame, read_size=source_read_size) as reader:
+            while True:
+                read_size = read_sizes.draw(strategies.integers(-1, 131072))
                 chunk = reader.read(read_size)
-                if not chunk:
+                if not chunk and read_size:
+                    break
+
+                chunks.append(chunk)
+
+        self.assertEqual(b''.join(chunks), original)
+
+    # Similar to above except we have a constant read() size.
+    @hypothesis.settings(
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
+    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
+                      level=strategies.integers(min_value=1, max_value=5),
+                      streaming=strategies.booleans(),
+                      source_read_size=strategies.integers(1, 1048576),
+                      read_size=strategies.integers(-1, 131072))
+    def test_buffer_source_constant_read_size(self, original, level, streaming,
+                                              source_read_size, read_size):
+        if read_size == 0:
+            read_size = -1
+
+        cctx = zstd.ZstdCompressor(level=level)
+
+        if streaming:
+            source = io.BytesIO()
+            writer = cctx.stream_writer(source)
+            writer.write(original)
+            writer.flush(zstd.FLUSH_FRAME)
+            frame = source.getvalue()
+        else:
+            frame = cctx.compress(original)
+
+        dctx = zstd.ZstdDecompressor()
+        chunks = []
+
+        reader = dctx.stream_reader(frame, read_size=source_read_size)
+        while True:
+            chunk = reader.read(read_size)
+            if not chunk and read_size:
+                break
+
+            chunks.append(chunk)
+
+        self.assertEqual(b''.join(chunks), original)
+
+    @hypothesis.settings(
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
+    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
+                      level=strategies.integers(min_value=1, max_value=5),
+                      streaming=strategies.booleans(),
+                      source_read_size=strategies.integers(1, 1048576))
+    def test_stream_source_readall(self, original, level, streaming,
+                                   source_read_size):
+        cctx = zstd.ZstdCompressor(level=level)
+
+        if streaming:
+            source = io.BytesIO()
+            writer = cctx.stream_writer(source)
+            writer.write(original)
+            writer.flush(zstd.FLUSH_FRAME)
+            source.seek(0)
+        else:
+            frame = cctx.compress(original)
+            source = io.BytesIO(frame)
+
+        dctx = zstd.ZstdDecompressor()
+
+        data = dctx.stream_reader(source, read_size=source_read_size).readall()
+        self.assertEqual(data, original)
+
+    @hypothesis.settings(
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
+    @hypothesis.given(original=strategies.sampled_from(random_input_data()),
+                      level=strategies.integers(min_value=1, max_value=5),
+                      streaming=strategies.booleans(),
+                      source_read_size=strategies.integers(1, 1048576),
+                      read_sizes=strategies.data())
+    def test_stream_source_read1_variance(self, original, level, streaming,
+                                          source_read_size, read_sizes):
+        cctx = zstd.ZstdCompressor(level=level)
+
+        if streaming:
+            source = io.BytesIO()
+            writer = cctx.stream_writer(source)
+            writer.write(original)
+            writer.flush(zstd.FLUSH_FRAME)
+            source.seek(0)
+        else:
+            frame = cctx.compress(original)
+            source = io.BytesIO(frame)
+
+        dctx = zstd.ZstdDecompressor()
+
+        chunks = []
+        with dctx.stream_reader(source, read_size=source_read_size) as reader:
+            while True:
+                read_size = read_sizes.draw(strategies.integers(-1, 131072))
+                chunk = reader.read1(read_size)
+                if not chunk and read_size:
                     break
 
                 chunks.append(chunk)
@@ -49,24 +228,36 @@
         suppress_health_check=[hypothesis.HealthCheck.large_base_example])
     @hypothesis.given(original=strategies.sampled_from(random_input_data()),
                       level=strategies.integers(min_value=1, max_value=5),
-                      source_read_size=strategies.integers(1, 16384),
+                      streaming=strategies.booleans(),
+                      source_read_size=strategies.integers(1, 1048576),
                       read_sizes=strategies.data())
-    def test_buffer_source_read_variance(self, original, level, source_read_size,
-                                         read_sizes):
+    def test_stream_source_readinto1_variance(self, original, level, streaming,
+                                               source_read_size, read_sizes):
         cctx = zstd.ZstdCompressor(level=level)
-        frame = cctx.compress(original)
+
+        if streaming:
+            source = io.BytesIO()
+            writer = cctx.stream_writer(source)
+            writer.write(original)
+            writer.flush(zstd.FLUSH_FRAME)
+            source.seek(0)
+        else:
+            frame = cctx.compress(original)
+            source = io.BytesIO(frame)
 
         dctx = zstd.ZstdDecompressor()
+
         chunks = []
-
-        with dctx.stream_reader(frame, read_size=source_read_size) as reader:
+        with dctx.stream_reader(source, read_size=source_read_size) as reader:
             while True:
-                read_size = read_sizes.draw(strategies.integers(1, 16384))
-                chunk = reader.read(read_size)
-                if not chunk:
+                read_size = read_sizes.draw(strategies.integers(1, 131072))
+                b = bytearray(read_size)
+                count = reader.readinto1(b)
+
+                if not count:
                     break
 
-                chunks.append(chunk)
+                chunks.append(bytes(b[0:count]))
 
         self.assertEqual(b''.join(chunks), original)
 
@@ -75,7 +266,7 @@
     @hypothesis.given(
         original=strategies.sampled_from(random_input_data()),
         level=strategies.integers(min_value=1, max_value=5),
-        source_read_size=strategies.integers(1, 16384),
+        source_read_size=strategies.integers(1, 1048576),
         seek_amounts=strategies.data(),
         read_sizes=strategies.data())
     def test_relative_seeks(self, original, level, source_read_size, seek_amounts,
@@ -99,6 +290,46 @@
 
                 self.assertEqual(original[offset:offset + len(chunk)], chunk)
 
+    @hypothesis.settings(
+        suppress_health_check=[hypothesis.HealthCheck.large_base_example])
+    @hypothesis.given(
+        originals=strategies.data(),
+        frame_count=strategies.integers(min_value=2, max_value=10),
+        level=strategies.integers(min_value=1, max_value=5),
+        source_read_size=strategies.integers(1, 1048576),
+        read_sizes=strategies.data())
+    def test_multiple_frames(self, originals, frame_count, level,
+                             source_read_size, read_sizes):
+
+        cctx = zstd.ZstdCompressor(level=level)
+        source = io.BytesIO()
+        buffer = io.BytesIO()
+        writer = cctx.stream_writer(buffer)
+
+        for i in range(frame_count):
+            data = originals.draw(strategies.sampled_from(random_input_data()))
+            source.write(data)
+            writer.write(data)
+            writer.flush(zstd.FLUSH_FRAME)
+
+        dctx = zstd.ZstdDecompressor()
+        buffer.seek(0)
+        reader = dctx.stream_reader(buffer, read_size=source_read_size,
+                                    read_across_frames=True)
+
+        chunks = []
+
+        while True:
+            read_amount = read_sizes.draw(strategies.integers(-1, 16384))
+            chunk = reader.read(read_amount)
+
+            if not chunk and read_amount:
+                break
+
+            chunks.append(chunk)
+
+        self.assertEqual(source.getvalue(), b''.join(chunks))
+
 
 @unittest.skipUnless('ZSTD_SLOW_TESTS' in os.environ, 'ZSTD_SLOW_TESTS not set')
 @make_cffi
@@ -113,7 +344,7 @@
 
         dctx = zstd.ZstdDecompressor()
         source = io.BytesIO(frame)
-        dest = io.BytesIO()
+        dest = NonClosingBytesIO()
 
         with dctx.stream_writer(dest, write_size=write_size) as decompressor:
             while True:
@@ -234,10 +465,12 @@
                                    write_checksum=True,
                                    **kwargs)
 
+        if not hasattr(cctx, 'multi_compress_to_buffer'):
+            self.skipTest('multi_compress_to_buffer not available')
+
         frames_buffer = cctx.multi_compress_to_buffer(original, threads=-1)
 
         dctx = zstd.ZstdDecompressor(**kwargs)
-
         result = dctx.multi_decompress_to_buffer(frames_buffer)
 
         self.assertEqual(len(result), len(original))
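
The fuzzing rewrite leans on two other 0.11 additions: ``flush(zstd.FLUSH_FRAME)``
on a compression writer to end a frame mid-stream, and
``stream_reader(..., read_across_frames=True)`` to continue past frame
boundaries. The pattern the tests repeat, condensed::

   import io
   import zstandard as zstd

   source = io.BytesIO()
   writer = zstd.ZstdCompressor().stream_writer(source)

   # Emit two complete, independent frames into one stream.
   writer.write(b'foo')
   writer.flush(zstd.FLUSH_FRAME)
   writer.write(b'bar')
   writer.flush(zstd.FLUSH_FRAME)

   source.seek(0)
   reader = zstd.ZstdDecompressor().stream_reader(source,
                                                  read_across_frames=True)
   assert reader.read(6) == b'foobar'  # a single read spans both frames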
--- a/contrib/python-zstandard/tests/test_module_attributes.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/tests/test_module_attributes.py	Wed Apr 17 13:41:18 2019 -0400
@@ -12,9 +12,9 @@
 @make_cffi
 class TestModuleAttributes(unittest.TestCase):
     def test_version(self):
-        self.assertEqual(zstd.ZSTD_VERSION, (1, 3, 6))
+        self.assertEqual(zstd.ZSTD_VERSION, (1, 3, 8))
 
-        self.assertEqual(zstd.__version__, '0.10.1')
+        self.assertEqual(zstd.__version__, '0.11.0')
 
     def test_constants(self):
         self.assertEqual(zstd.MAX_COMPRESSION_LEVEL, 22)
@@ -29,6 +29,8 @@
             'DECOMPRESSION_RECOMMENDED_INPUT_SIZE',
             'DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE',
             'MAGIC_NUMBER',
+            'FLUSH_BLOCK',
+            'FLUSH_FRAME',
             'BLOCKSIZELOG_MAX',
             'BLOCKSIZE_MAX',
             'WINDOWLOG_MIN',
@@ -38,6 +40,8 @@
             'HASHLOG_MIN',
             'HASHLOG_MAX',
             'HASHLOG3_MAX',
+            'MINMATCH_MIN',
+            'MINMATCH_MAX',
             'SEARCHLOG_MIN',
             'SEARCHLOG_MAX',
             'SEARCHLENGTH_MIN',
@@ -55,6 +59,7 @@
             'STRATEGY_BTLAZY2',
             'STRATEGY_BTOPT',
             'STRATEGY_BTULTRA',
+            'STRATEGY_BTULTRA2',
             'DICT_TYPE_AUTO',
             'DICT_TYPE_RAWCONTENT',
             'DICT_TYPE_FULLDICT',
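
``test_module_attributes`` now pins zstd 1.3.8 and python-zstandard 0.11.0 and
lists the renamed constants. The old ``SEARCHLENGTH_*`` names survive as
aliases (see ``cffi.py`` below), so both spellings can be probed::

   import zstandard as zstd

   # MINMATCH_* are the new names; SEARCHLENGTH_* alias the same values.
   assert zstd.MINMATCH_MIN == zstd.SEARCHLENGTH_MIN
   assert zstd.MINMATCH_MAX == zstd.SEARCHLENGTH_MAX
   assert hasattr(zstd, 'STRATEGY_BTULTRA2')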
--- a/contrib/python-zstandard/zstandard/__init__.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstandard/__init__.py	Wed Apr 17 13:41:18 2019 -0400
@@ -35,31 +35,31 @@
         from zstd import *
         backend = 'cext'
     elif platform.python_implementation() in ('PyPy',):
-        from zstd_cffi import *
+        from .cffi import *
         backend = 'cffi'
     else:
         try:
             from zstd import *
             backend = 'cext'
         except ImportError:
-            from zstd_cffi import *
+            from .cffi import *
             backend = 'cffi'
 elif _module_policy == 'cffi_fallback':
     try:
         from zstd import *
         backend = 'cext'
     except ImportError:
-        from zstd_cffi import *
+        from .cffi import *
         backend = 'cffi'
 elif _module_policy == 'cext':
     from zstd import *
     backend = 'cext'
 elif _module_policy == 'cffi':
-    from zstd_cffi import *
+    from .cffi import *
     backend = 'cffi'
 else:
     raise ImportError('unknown module import policy: %s; use default, cffi_fallback, '
                       'cext, or cffi' % _module_policy)
 
 # Keep this in sync with python-zstandard.h.
-__version__ = '0.10.1'
+__version__ = '0.11.0'
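
The package now ships its CFFI backend as ``zstandard.cffi`` instead of a
top-level ``zstd_cffi`` module. Which backend was loaded is exposed as
``zstandard.backend``; the policy is read from an environment variable (named
``PYTHON_ZSTANDARD_IMPORT_POLICY`` in upstream python-zstandard; the
assignment is outside this hunk, so treat the name as an assumption)::

   import os

   # Must be set before zstandard is first imported.
   os.environ['PYTHON_ZSTANDARD_IMPORT_POLICY'] = 'cffi'

   import zstandard as zstd
   print(zstd.backend)  # 'cext' or 'cffi', per the branches above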
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/python-zstandard/zstandard/cffi.py	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,2515 @@
+# Copyright (c) 2016-present, Gregory Szorc
+# All rights reserved.
+#
+# This software may be modified and distributed under the terms
+# of the BSD license. See the LICENSE file for details.
+
+"""Python interface to the Zstandard (zstd) compression library."""
+
+from __future__ import absolute_import, unicode_literals
+
+# This should match what the C extension exports.
+__all__ = [
+    #'BufferSegment',
+    #'BufferSegments',
+    #'BufferWithSegments',
+    #'BufferWithSegmentsCollection',
+    'CompressionParameters',
+    'ZstdCompressionDict',
+    'ZstdCompressionParameters',
+    'ZstdCompressor',
+    'ZstdError',
+    'ZstdDecompressor',
+    'FrameParameters',
+    'estimate_decompression_context_size',
+    'frame_content_size',
+    'frame_header_size',
+    'get_frame_parameters',
+    'train_dictionary',
+
+    # Constants.
+    'FLUSH_BLOCK',
+    'FLUSH_FRAME',
+    'COMPRESSOBJ_FLUSH_FINISH',
+    'COMPRESSOBJ_FLUSH_BLOCK',
+    'ZSTD_VERSION',
+    'FRAME_HEADER',
+    'CONTENTSIZE_UNKNOWN',
+    'CONTENTSIZE_ERROR',
+    'MAX_COMPRESSION_LEVEL',
+    'COMPRESSION_RECOMMENDED_INPUT_SIZE',
+    'COMPRESSION_RECOMMENDED_OUTPUT_SIZE',
+    'DECOMPRESSION_RECOMMENDED_INPUT_SIZE',
+    'DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE',
+    'MAGIC_NUMBER',
+    'BLOCKSIZELOG_MAX',
+    'BLOCKSIZE_MAX',
+    'WINDOWLOG_MIN',
+    'WINDOWLOG_MAX',
+    'CHAINLOG_MIN',
+    'CHAINLOG_MAX',
+    'HASHLOG_MIN',
+    'HASHLOG_MAX',
+    'HASHLOG3_MAX',
+    'MINMATCH_MIN',
+    'MINMATCH_MAX',
+    'SEARCHLOG_MIN',
+    'SEARCHLOG_MAX',
+    'SEARCHLENGTH_MIN',
+    'SEARCHLENGTH_MAX',
+    'TARGETLENGTH_MIN',
+    'TARGETLENGTH_MAX',
+    'LDM_MINMATCH_MIN',
+    'LDM_MINMATCH_MAX',
+    'LDM_BUCKETSIZELOG_MAX',
+    'STRATEGY_FAST',
+    'STRATEGY_DFAST',
+    'STRATEGY_GREEDY',
+    'STRATEGY_LAZY',
+    'STRATEGY_LAZY2',
+    'STRATEGY_BTLAZY2',
+    'STRATEGY_BTOPT',
+    'STRATEGY_BTULTRA',
+    'STRATEGY_BTULTRA2',
+    'DICT_TYPE_AUTO',
+    'DICT_TYPE_RAWCONTENT',
+    'DICT_TYPE_FULLDICT',
+    'FORMAT_ZSTD1',
+    'FORMAT_ZSTD1_MAGICLESS',
+]
+
+import io
+import os
+import sys
+
+from _zstd_cffi import (
+    ffi,
+    lib,
+)
+
+if sys.version_info[0] == 2:
+    bytes_type = str
+    int_type = long
+else:
+    bytes_type = bytes
+    int_type = int
+
+
+COMPRESSION_RECOMMENDED_INPUT_SIZE = lib.ZSTD_CStreamInSize()
+COMPRESSION_RECOMMENDED_OUTPUT_SIZE = lib.ZSTD_CStreamOutSize()
+DECOMPRESSION_RECOMMENDED_INPUT_SIZE = lib.ZSTD_DStreamInSize()
+DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE = lib.ZSTD_DStreamOutSize()
+
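+# Allocator that skips zero-filling new allocations, for buffers whose
+# contents are always fully written before being read.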
+new_nonzero = ffi.new_allocator(should_clear_after_alloc=False)
+
+
+MAX_COMPRESSION_LEVEL = lib.ZSTD_maxCLevel()
+MAGIC_NUMBER = lib.ZSTD_MAGICNUMBER
+FRAME_HEADER = b'\x28\xb5\x2f\xfd'
+CONTENTSIZE_UNKNOWN = lib.ZSTD_CONTENTSIZE_UNKNOWN
+CONTENTSIZE_ERROR = lib.ZSTD_CONTENTSIZE_ERROR
+ZSTD_VERSION = (lib.ZSTD_VERSION_MAJOR, lib.ZSTD_VERSION_MINOR, lib.ZSTD_VERSION_RELEASE)
+
+BLOCKSIZELOG_MAX = lib.ZSTD_BLOCKSIZELOG_MAX
+BLOCKSIZE_MAX = lib.ZSTD_BLOCKSIZE_MAX
+WINDOWLOG_MIN = lib.ZSTD_WINDOWLOG_MIN
+WINDOWLOG_MAX = lib.ZSTD_WINDOWLOG_MAX
+CHAINLOG_MIN = lib.ZSTD_CHAINLOG_MIN
+CHAINLOG_MAX = lib.ZSTD_CHAINLOG_MAX
+HASHLOG_MIN = lib.ZSTD_HASHLOG_MIN
+HASHLOG_MAX = lib.ZSTD_HASHLOG_MAX
+HASHLOG3_MAX = lib.ZSTD_HASHLOG3_MAX
+MINMATCH_MIN = lib.ZSTD_MINMATCH_MIN
+MINMATCH_MAX = lib.ZSTD_MINMATCH_MAX
+SEARCHLOG_MIN = lib.ZSTD_SEARCHLOG_MIN
+SEARCHLOG_MAX = lib.ZSTD_SEARCHLOG_MAX
+SEARCHLENGTH_MIN = lib.ZSTD_MINMATCH_MIN
+SEARCHLENGTH_MAX = lib.ZSTD_MINMATCH_MAX
+TARGETLENGTH_MIN = lib.ZSTD_TARGETLENGTH_MIN
+TARGETLENGTH_MAX = lib.ZSTD_TARGETLENGTH_MAX
+LDM_MINMATCH_MIN = lib.ZSTD_LDM_MINMATCH_MIN
+LDM_MINMATCH_MAX = lib.ZSTD_LDM_MINMATCH_MAX
+LDM_BUCKETSIZELOG_MAX = lib.ZSTD_LDM_BUCKETSIZELOG_MAX
+
+STRATEGY_FAST = lib.ZSTD_fast
+STRATEGY_DFAST = lib.ZSTD_dfast
+STRATEGY_GREEDY = lib.ZSTD_greedy
+STRATEGY_LAZY = lib.ZSTD_lazy
+STRATEGY_LAZY2 = lib.ZSTD_lazy2
+STRATEGY_BTLAZY2 = lib.ZSTD_btlazy2
+STRATEGY_BTOPT = lib.ZSTD_btopt
+STRATEGY_BTULTRA = lib.ZSTD_btultra
+STRATEGY_BTULTRA2 = lib.ZSTD_btultra2
+
+DICT_TYPE_AUTO = lib.ZSTD_dct_auto
+DICT_TYPE_RAWCONTENT = lib.ZSTD_dct_rawContent
+DICT_TYPE_FULLDICT = lib.ZSTD_dct_fullDict
+
+FORMAT_ZSTD1 = lib.ZSTD_f_zstd1
+FORMAT_ZSTD1_MAGICLESS = lib.ZSTD_f_zstd1_magicless
+
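+# Flush modes accepted by flush(): FLUSH_BLOCK completes the current zstd
+# block; FLUSH_FRAME completes the entire frame.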
+FLUSH_BLOCK = 0
+FLUSH_FRAME = 1
+
+COMPRESSOBJ_FLUSH_FINISH = 0
+COMPRESSOBJ_FLUSH_BLOCK = 1
+
+
+def _cpu_count():
+    # os.cpu_count() was introduced in Python 3.4.
+    try:
+        return os.cpu_count() or 0
+    except AttributeError:
+        pass
+
+    # Linux.
+    try:
+        if sys.version_info[0] == 2:
+            return os.sysconf(b'SC_NPROCESSORS_ONLN')
+        else:
+            return os.sysconf(u'SC_NPROCESSORS_ONLN')
+    except (AttributeError, ValueError):
+        pass
+
+    # TODO implement on other platforms.
+    return 0
+
+
+class ZstdError(Exception):
+    pass
+
+
+def _zstd_error(zresult):
+    # ffi.string() returns bytes on both Python 2 and 3. The result is
+    # interpolated into unicode error messages, so decode it to unicode here.
+    return ffi.string(lib.ZSTD_getErrorName(zresult)).decode('utf-8')
+
+def _make_cctx_params(params):
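+    # Translate a ZstdCompressionParameters object into a native
+    # ZSTD_CCtx_params structure by applying each tracked parameter.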
+    res = lib.ZSTD_createCCtxParams()
+    if res == ffi.NULL:
+        raise MemoryError()
+
+    res = ffi.gc(res, lib.ZSTD_freeCCtxParams)
+
+    attrs = [
+        (lib.ZSTD_c_format, params.format),
+        (lib.ZSTD_c_compressionLevel, params.compression_level),
+        (lib.ZSTD_c_windowLog, params.window_log),
+        (lib.ZSTD_c_hashLog, params.hash_log),
+        (lib.ZSTD_c_chainLog, params.chain_log),
+        (lib.ZSTD_c_searchLog, params.search_log),
+        (lib.ZSTD_c_minMatch, params.min_match),
+        (lib.ZSTD_c_targetLength, params.target_length),
+        (lib.ZSTD_c_strategy, params.compression_strategy),
+        (lib.ZSTD_c_contentSizeFlag, params.write_content_size),
+        (lib.ZSTD_c_checksumFlag, params.write_checksum),
+        (lib.ZSTD_c_dictIDFlag, params.write_dict_id),
+        (lib.ZSTD_c_nbWorkers, params.threads),
+        (lib.ZSTD_c_jobSize, params.job_size),
+        (lib.ZSTD_c_overlapLog, params.overlap_log),
+        (lib.ZSTD_c_forceMaxWindow, params.force_max_window),
+        (lib.ZSTD_c_enableLongDistanceMatching, params.enable_ldm),
+        (lib.ZSTD_c_ldmHashLog, params.ldm_hash_log),
+        (lib.ZSTD_c_ldmMinMatch, params.ldm_min_match),
+        (lib.ZSTD_c_ldmBucketSizeLog, params.ldm_bucket_size_log),
+        (lib.ZSTD_c_ldmHashRateLog, params.ldm_hash_rate_log),
+    ]
+
+    for param, value in attrs:
+        _set_compression_parameter(res, param, value)
+
+    return res
+
+class ZstdCompressionParameters(object):
+    @staticmethod
+    def from_level(level, source_size=0, dict_size=0, **kwargs):
+        params = lib.ZSTD_getCParams(level, source_size, dict_size)
+
+        args = {
+            'window_log': 'windowLog',
+            'chain_log': 'chainLog',
+            'hash_log': 'hashLog',
+            'search_log': 'searchLog',
+            'min_match': 'minMatch',
+            'target_length': 'targetLength',
+            'compression_strategy': 'strategy',
+        }
+
+        for arg, attr in args.items():
+            if arg not in kwargs:
+                kwargs[arg] = getattr(params, attr)
+
+        return ZstdCompressionParameters(**kwargs)
+
+    def __init__(self, format=0, compression_level=0, window_log=0, hash_log=0,
+                 chain_log=0, search_log=0, min_match=0, target_length=0,
+                 strategy=-1, compression_strategy=-1,
+                 write_content_size=1, write_checksum=0,
+                 write_dict_id=0, job_size=0, overlap_log=-1,
+                 overlap_size_log=-1, force_max_window=0, enable_ldm=0,
+                 ldm_hash_log=0, ldm_min_match=0, ldm_bucket_size_log=0,
+                 ldm_hash_rate_log=-1, ldm_hash_every_log=-1, threads=0):
+
+        params = lib.ZSTD_createCCtxParams()
+        if params == ffi.NULL:
+            raise MemoryError()
+
+        params = ffi.gc(params, lib.ZSTD_freeCCtxParams)
+
+        self._params = params
+
+        if threads < 0:
+            threads = _cpu_count()
+
+        # We need to set ZSTD_c_nbWorkers before ZSTD_c_jobSize and ZSTD_c_overlapLog
+        # because setting ZSTD_c_nbWorkers resets the other parameters.
+        _set_compression_parameter(params, lib.ZSTD_c_nbWorkers, threads)
+
+        _set_compression_parameter(params, lib.ZSTD_c_format, format)
+        _set_compression_parameter(params, lib.ZSTD_c_compressionLevel, compression_level)
+        _set_compression_parameter(params, lib.ZSTD_c_windowLog, window_log)
+        _set_compression_parameter(params, lib.ZSTD_c_hashLog, hash_log)
+        _set_compression_parameter(params, lib.ZSTD_c_chainLog, chain_log)
+        _set_compression_parameter(params, lib.ZSTD_c_searchLog, search_log)
+        _set_compression_parameter(params, lib.ZSTD_c_minMatch, min_match)
+        _set_compression_parameter(params, lib.ZSTD_c_targetLength, target_length)
+
+        if strategy != -1 and compression_strategy != -1:
+            raise ValueError('cannot specify both compression_strategy and strategy')
+
+        if compression_strategy != -1:
+            strategy = compression_strategy
+        elif strategy == -1:
+            strategy = 0
+
+        _set_compression_parameter(params, lib.ZSTD_c_strategy, strategy)
+        _set_compression_parameter(params, lib.ZSTD_c_contentSizeFlag, write_content_size)
+        _set_compression_parameter(params, lib.ZSTD_c_checksumFlag, write_checksum)
+        _set_compression_parameter(params, lib.ZSTD_c_dictIDFlag, write_dict_id)
+        _set_compression_parameter(params, lib.ZSTD_c_jobSize, job_size)
+
+        if overlap_log != -1 and overlap_size_log != -1:
+            raise ValueError('cannot specify both overlap_log and overlap_size_log')
+
+        if overlap_size_log != -1:
+            overlap_log = overlap_size_log
+        elif overlap_log == -1:
+            overlap_log = 0
+
+        _set_compression_parameter(params, lib.ZSTD_c_overlapLog, overlap_log)
+        _set_compression_parameter(params, lib.ZSTD_c_forceMaxWindow, force_max_window)
+        _set_compression_parameter(params, lib.ZSTD_c_enableLongDistanceMatching, enable_ldm)
+        _set_compression_parameter(params, lib.ZSTD_c_ldmHashLog, ldm_hash_log)
+        _set_compression_parameter(params, lib.ZSTD_c_ldmMinMatch, ldm_min_match)
+        _set_compression_parameter(params, lib.ZSTD_c_ldmBucketSizeLog, ldm_bucket_size_log)
+
+        if ldm_hash_rate_log != -1 and ldm_hash_every_log != -1:
+            raise ValueError('cannot specify both ldm_hash_rate_log and ldm_hash_every_log')
+
+        if ldm_hash_every_log != -1:
+            ldm_hash_rate_log = ldm_hash_every_log
+        elif ldm_hash_rate_log == -1:
+            ldm_hash_rate_log = 0
+
+        _set_compression_parameter(params, lib.ZSTD_c_ldmHashRateLog, ldm_hash_rate_log)
+
+    @property
+    def format(self):
+        return _get_compression_parameter(self._params, lib.ZSTD_c_format)
+
+    @property
+    def compression_level(self):
+        return _get_compression_parameter(self._params, lib.ZSTD_c_compressionLevel)
+
+    @property
+    def window_log(self):
+        return _get_compression_parameter(self._params, lib.ZSTD_c_windowLog)
+
+    @property
+    def hash_log(self):
+        return _get_compression_parameter(self._params, lib.ZSTD_c_hashLog)
+
+    @property
+    def chain_log(self):
+        return _get_compression_parameter(self._params, lib.ZSTD_c_chainLog)
+
+    @property
+    def search_log(self):
+        return _get_compression_parameter(self._params, lib.ZSTD_c_searchLog)
+
+    @property
+    def min_match(self):
+        return _get_compression_parameter(self._params, lib.ZSTD_c_minMatch)
+
+    @property
+    def target_length(self):
+        return _get_compression_parameter(self._params, lib.ZSTD_c_targetLength)
+
+    @property
+    def compression_strategy(self):
+        return _get_compression_parameter(self._params, lib.ZSTD_c_strategy)
+
+    @property
+    def write_content_size(self):
+        return _get_compression_parameter(self._params, lib.ZSTD_c_contentSizeFlag)
+
+    @property
+    def write_checksum(self):
+        return _get_compression_parameter(self._params, lib.ZSTD_c_checksumFlag)
+
+    @property
+    def write_dict_id(self):
+        return _get_compression_parameter(self._params, lib.ZSTD_c_dictIDFlag)
+
+    @property
+    def job_size(self):
+        return _get_compression_parameter(self._params, lib.ZSTD_c_jobSize)
+
+    @property
+    def overlap_log(self):
+        return _get_compression_parameter(self._params, lib.ZSTD_c_overlapLog)
+
+    @property
+    def overlap_size_log(self):
+        return self.overlap_log
+
+    @property
+    def force_max_window(self):
+        return _get_compression_parameter(self._params, lib.ZSTD_c_forceMaxWindow)
+
+    @property
+    def enable_ldm(self):
+        return _get_compression_parameter(self._params, lib.ZSTD_c_enableLongDistanceMatching)
+
+    @property
+    def ldm_hash_log(self):
+        return _get_compression_parameter(self._params, lib.ZSTD_c_ldmHashLog)
+
+    @property
+    def ldm_min_match(self):
+        return _get_compression_parameter(self._params, lib.ZSTD_c_ldmMinMatch)
+
+    @property
+    def ldm_bucket_size_log(self):
+        return _get_compression_parameter(self._params, lib.ZSTD_c_ldmBucketSizeLog)
+
+    @property
+    def ldm_hash_rate_log(self):
+        return _get_compression_parameter(self._params, lib.ZSTD_c_ldmHashRateLog)
+
+    @property
+    def ldm_hash_every_log(self):
+        return self.ldm_hash_rate_log
+
+    @property
+    def threads(self):
+        return _get_compression_parameter(self._params, lib.ZSTD_c_nbWorkers)
+
+    def estimated_compression_context_size(self):
+        return lib.ZSTD_estimateCCtxSize_usingCCtxParams(self._params)
+
+CompressionParameters = ZstdCompressionParameters
+
+def estimate_decompression_context_size():
+    return lib.ZSTD_estimateDCtxSize()
+
+
+def _set_compression_parameter(params, param, value):
+    zresult = lib.ZSTD_CCtxParam_setParameter(params, param, value)
+    if lib.ZSTD_isError(zresult):
+        raise ZstdError('unable to set compression context parameter: %s' %
+                        _zstd_error(zresult))
+
+
+def _get_compression_parameter(params, param):
+    result = ffi.new('int *')
+
+    zresult = lib.ZSTD_CCtxParam_getParameter(params, param, result)
+    if lib.ZSTD_isError(zresult):
+        raise ZstdError('unable to get compression context parameter: %s' %
+                        _zstd_error(zresult))
+
+    return result[0]
+
+
+class ZstdCompressionWriter(object):
+    def __init__(self, compressor, writer, source_size, write_size,
+                 write_return_read):
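+        # With write_return_read, write() reports input bytes consumed
+        # (io.RawIOBase semantics) rather than compressed bytes emitted.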
+        self._compressor = compressor
+        self._writer = writer
+        self._write_size = write_size
+        self._write_return_read = bool(write_return_read)
+        self._entered = False
+        self._closed = False
+        self._bytes_compressed = 0
+
+        self._dst_buffer = ffi.new('char[]', write_size)
+        self._out_buffer = ffi.new('ZSTD_outBuffer *')
+        self._out_buffer.dst = self._dst_buffer
+        self._out_buffer.size = len(self._dst_buffer)
+        self._out_buffer.pos = 0
+
+        zresult = lib.ZSTD_CCtx_setPledgedSrcSize(compressor._cctx,
+                                                  source_size)
+        if lib.ZSTD_isError(zresult):
+            raise ZstdError('error setting source size: %s' %
+                            _zstd_error(zresult))
+
+    def __enter__(self):
+        if self._closed:
+            raise ValueError('stream is closed')
+
+        if self._entered:
+            raise ZstdError('cannot __enter__ multiple times')
+
+        self._entered = True
+        return self
+
+    def __exit__(self, exc_type, exc_value, exc_tb):
+        self._entered = False
+
+        if not exc_type and not exc_value and not exc_tb:
+            self.close()
+
+        self._compressor = None
+
+        return False
+
+    def memory_size(self):
+        return lib.ZSTD_sizeof_CCtx(self._compressor._cctx)
+
+    def fileno(self):
+        f = getattr(self._writer, 'fileno', None)
+        if f:
+            return f()
+        else:
+            raise OSError('fileno not available on underlying writer')
+
+    def close(self):
+        if self._closed:
+            return
+
+        try:
+            self.flush(FLUSH_FRAME)
+        finally:
+            self._closed = True
+
+        # Call close() on underlying stream as well.
+        f = getattr(self._writer, 'close', None)
+        if f:
+            f()
+
+    @property
+    def closed(self):
+        return self._closed
+
+    def isatty(self):
+        return False
+
+    def readable(self):
+        return False
+
+    def readline(self, size=-1):
+        raise io.UnsupportedOperation()
+
+    def readlines(self, hint=-1):
+        raise io.UnsupportedOperation()
+
+    def seek(self, offset, whence=None):
+        raise io.UnsupportedOperation()
+
+    def seekable(self):
+        return False
+
+    def truncate(self, size=None):
+        raise io.UnsupportedOperation()
+
+    def writable(self):
+        return True
+
+    def writelines(self, lines):
+        raise NotImplementedError('writelines() is not yet implemented')
+
+    def read(self, size=-1):
+        raise io.UnsupportedOperation()
+
+    def readall(self):
+        raise io.UnsupportedOperation()
+
+    def readinto(self, b):
+        raise io.UnsupportedOperation()
+
+    def write(self, data):
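+        # Compress ``data``, forwarding any produced output to the wrapped
+        # writer. Returns the number of input bytes consumed when
+        # write_return_read is set (io.RawIOBase semantics), otherwise the
+        # number of compressed bytes written downstream.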
+        if self._closed:
+            raise ValueError('stream is closed')
+
+        total_write = 0
+
+        data_buffer = ffi.from_buffer(data)
+
+        in_buffer = ffi.new('ZSTD_inBuffer *')
+        in_buffer.src = data_buffer
+        in_buffer.size = len(data_buffer)
+        in_buffer.pos = 0
+
+        out_buffer = self._out_buffer
+        out_buffer.pos = 0
+
+        while in_buffer.pos < in_buffer.size:
+            zresult = lib.ZSTD_compressStream2(self._compressor._cctx,
+                                               out_buffer, in_buffer,
+                                               lib.ZSTD_e_continue)
+            if lib.ZSTD_isError(zresult):
+                raise ZstdError('zstd compress error: %s' %
+                                _zstd_error(zresult))
+
+            if out_buffer.pos:
+                self._writer.write(ffi.buffer(out_buffer.dst, out_buffer.pos)[:])
+                total_write += out_buffer.pos
+                self._bytes_compressed += out_buffer.pos
+                out_buffer.pos = 0
+
+        if self._write_return_read:
+            return in_buffer.pos
+        else:
+            return total_write
+
+    def flush(self, flush_mode=FLUSH_BLOCK):
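+        # FLUSH_BLOCK forces out whatever the compressor has buffered so a
+        # decompressor can see everything written so far; FLUSH_FRAME also
+        # ends the current zstd frame.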
+        if flush_mode == FLUSH_BLOCK:
+            flush = lib.ZSTD_e_flush
+        elif flush_mode == FLUSH_FRAME:
+            flush = lib.ZSTD_e_end
+        else:
+            raise ValueError('unknown flush_mode: %r' % flush_mode)
+
+        if self._closed:
+            raise ValueError('stream is closed')
+
+        total_write = 0
+
+        out_buffer = self._out_buffer
+        out_buffer.pos = 0
+
+        in_buffer = ffi.new('ZSTD_inBuffer *')
+        in_buffer.src = ffi.NULL
+        in_buffer.size = 0
+        in_buffer.pos = 0
+
+        while True:
+            zresult = lib.ZSTD_compressStream2(self._compressor._cctx,
+                                               out_buffer, in_buffer,
+                                               flush)
+            if lib.ZSTD_isError(zresult):
+                raise ZstdError('zstd compress error: %s' %
+                                _zstd_error(zresult))
+
+            if out_buffer.pos:
+                self._writer.write(ffi.buffer(out_buffer.dst, out_buffer.pos)[:])
+                total_write += out_buffer.pos
+                self._bytes_compressed += out_buffer.pos
+                out_buffer.pos = 0
+
+            if not zresult:
+                break
+
+        return total_write
+
+    def tell(self):
+        return self._bytes_compressed
+
+
+class ZstdCompressionObj(object):
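+    """zlib compressobj-style interface; created via ZstdCompressor.compressobj()."""
+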
+    def compress(self, data):
+        if self._finished:
+            raise ZstdError('cannot call compress() after compressor finished')
+
+        data_buffer = ffi.from_buffer(data)
+        source = ffi.new('ZSTD_inBuffer *')
+        source.src = data_buffer
+        source.size = len(data_buffer)
+        source.pos = 0
+
+        chunks = []
+
+        while source.pos < source.size:
+            zresult = lib.ZSTD_compressStream2(self._compressor._cctx,
+                                               self._out,
+                                               source,
+                                               lib.ZSTD_e_continue)
+            if lib.ZSTD_isError(zresult):
+                raise ZstdError('zstd compress error: %s' %
+                                _zstd_error(zresult))
+
+            if self._out.pos:
+                chunks.append(ffi.buffer(self._out.dst, self._out.pos)[:])
+                self._out.pos = 0
+
+        return b''.join(chunks)
+
+    def flush(self, flush_mode=COMPRESSOBJ_FLUSH_FINISH):
+        if flush_mode not in (COMPRESSOBJ_FLUSH_FINISH, COMPRESSOBJ_FLUSH_BLOCK):
+            raise ValueError('flush mode not recognized')
+
+        if self._finished:
+            raise ZstdError('compressor object already finished')
+
+        if flush_mode == COMPRESSOBJ_FLUSH_BLOCK:
+            z_flush_mode = lib.ZSTD_e_flush
+        elif flush_mode == COMPRESSOBJ_FLUSH_FINISH:
+            z_flush_mode = lib.ZSTD_e_end
+            self._finished = True
+        else:
+            raise ZstdError('unhandled flush mode')
+
+        assert self._out.pos == 0
+
+        in_buffer = ffi.new('ZSTD_inBuffer *')
+        in_buffer.src = ffi.NULL
+        in_buffer.size = 0
+        in_buffer.pos = 0
+
+        chunks = []
+
+        while True:
+            zresult = lib.ZSTD_compressStream2(self._compressor._cctx,
+                                               self._out,
+                                               in_buffer,
+                                               z_flush_mode)
+            if lib.ZSTD_isError(zresult):
+                raise ZstdError('error ending compression stream: %s' %
+                                _zstd_error(zresult))
+
+            if self._out.pos:
+                chunks.append(ffi.buffer(self._out.dst, self._out.pos)[:])
+                self._out.pos = 0
+
+            if not zresult:
+                break
+
+        return b''.join(chunks)
+
+
+class ZstdCompressionChunker(object):
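+    """Compresses data and emits output in equal-sized chunks (the final
+    chunk may be smaller).
+
+    A rough usage sketch (``cctx`` being a ``ZstdCompressor``); note that
+    ``compress()``, ``flush()`` and ``finish()`` are all generators, and
+    ``handle()`` is a stand-in for whatever consumes the output:
+
+        chunker = cctx.chunker(chunk_size=32768)
+        for chunk in chunker.compress(b'data to compress'):
+            handle(chunk)
+        for chunk in chunker.finish():
+            handle(chunk)
+    """
+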
+    def __init__(self, compressor, chunk_size):
+        self._compressor = compressor
+        self._out = ffi.new('ZSTD_outBuffer *')
+        self._dst_buffer = ffi.new('char[]', chunk_size)
+        self._out.dst = self._dst_buffer
+        self._out.size = chunk_size
+        self._out.pos = 0
+
+        self._in = ffi.new('ZSTD_inBuffer *')
+        self._in.src = ffi.NULL
+        self._in.size = 0
+        self._in.pos = 0
+        self._finished = False
+
+    def compress(self, data):
+        if self._finished:
+            raise ZstdError('cannot call compress() after compression finished')
+
+        if self._in.src != ffi.NULL:
+            raise ZstdError('cannot perform operation before consuming output '
+                            'from previous operation')
+
+        data_buffer = ffi.from_buffer(data)
+
+        if not len(data_buffer):
+            return
+
+        self._in.src = data_buffer
+        self._in.size = len(data_buffer)
+        self._in.pos = 0
+
+        while self._in.pos < self._in.size:
+            zresult = lib.ZSTD_compressStream2(self._compressor._cctx,
+                                               self._out,
+                                               self._in,
+                                               lib.ZSTD_e_continue)
+
+            if self._in.pos == self._in.size:
+                self._in.src = ffi.NULL
+                self._in.size = 0
+                self._in.pos = 0
+
+            if lib.ZSTD_isError(zresult):
+                raise ZstdError('zstd compress error: %s' %
+                                _zstd_error(zresult))
+
+            if self._out.pos == self._out.size:
+                yield ffi.buffer(self._out.dst, self._out.pos)[:]
+                self._out.pos = 0
+
+    def flush(self):
+        if self._finished:
+            raise ZstdError('cannot call flush() after compression finished')
+
+        if self._in.src != ffi.NULL:
+            raise ZstdError('cannot call flush() before consuming output from '
+                            'previous operation')
+
+        while True:
+            zresult = lib.ZSTD_compressStream2(self._compressor._cctx,
+                                               self._out, self._in,
+                                               lib.ZSTD_e_flush)
+            if lib.ZSTD_isError(zresult):
+                raise ZstdError('zstd compress error: %s' % _zstd_error(zresult))
+
+            if self._out.pos:
+                yield ffi.buffer(self._out.dst, self._out.pos)[:]
+                self._out.pos = 0
+
+            if not zresult:
+                return
+
+    def finish(self):
+        if self._finished:
+            raise ZstdError('cannot call finish() after compression finished')
+
+        if self._in.src != ffi.NULL:
+            raise ZstdError('cannot call finish() before consuming output from '
+                            'previous operation')
+
+        while True:
+            zresult = lib.ZSTD_compressStream2(self._compressor._cctx,
+                                               self._out, self._in,
+                                               lib.ZSTD_e_end)
+            if lib.ZSTD_isError(zresult):
+                raise ZstdError('zstd compress error: %s' % _zstd_error(zresult))
+
+            if self._out.pos:
+                yield ffi.buffer(self._out.dst, self._out.pos)[:]
+                self._out.pos = 0
+
+            if not zresult:
+                self._finished = True
+                return
+
+
+class ZstdCompressionReader(object):
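+    """Read-only file-object-like interface; read() returns compressed bytes
+    produced from the wrapped source.
+
+    A rough usage sketch, assuming ``cctx`` is a ``ZstdCompressor`` and
+    ``fh`` is a readable binary file object:
+
+        with cctx.stream_reader(fh) as reader:
+            chunk = reader.read(8192)
+    """
+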
+    def __init__(self, compressor, source, read_size):
+        self._compressor = compressor
+        self._source = source
+        self._read_size = read_size
+        self._entered = False
+        self._closed = False
+        self._bytes_compressed = 0
+        self._finished_input = False
+        self._finished_output = False
+
+        self._in_buffer = ffi.new('ZSTD_inBuffer *')
+        # Holds a ref so backing bytes in self._in_buffer stay alive.
+        self._source_buffer = None
+
+    def __enter__(self):
+        if self._entered:
+            raise ValueError('cannot __enter__ multiple times')
+
+        self._entered = True
+        return self
+
+    def __exit__(self, exc_type, exc_value, exc_tb):
+        self._entered = False
+        self._closed = True
+        self._source = None
+        self._compressor = None
+
+        return False
+
+    def readable(self):
+        return True
+
+    def writable(self):
+        return False
+
+    def seekable(self):
+        return False
+
+    def readline(self):
+        raise io.UnsupportedOperation()
+
+    def readlines(self):
+        raise io.UnsupportedOperation()
+
+    def write(self, data):
+        raise OSError('stream is not writable')
+
+    def writelines(self, ignored):
+        raise OSError('stream is not writable')
+
+    def isatty(self):
+        return False
+
+    def flush(self):
+        return None
+
+    def close(self):
+        self._closed = True
+        return None
+
+    @property
+    def closed(self):
+        return self._closed
+
+    def tell(self):
+        return self._bytes_compressed
+
+    def readall(self):
+        chunks = []
+
+        while True:
+            chunk = self.read(1048576)
+            if not chunk:
+                break
+
+            chunks.append(chunk)
+
+        return b''.join(chunks)
+
+    def __iter__(self):
+        raise io.UnsupportedOperation()
+
+    def __next__(self):
+        raise io.UnsupportedOperation()
+
+    next = __next__
+
+    def _read_input(self):
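+        # Populate self._in_buffer from the source, which is either a stream
+        # with a read() method or an object supporting the buffer protocol
+        # (referenced whole, in one shot).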
+        if self._finished_input:
+            return
+
+        # Don't clobber input that the compressor hasn't consumed yet.
+        if self._in_buffer.pos < self._in_buffer.size:
+            return
+
+        if hasattr(self._source, 'read'):
+            data = self._source.read(self._read_size)
+
+            if not data:
+                self._finished_input = True
+                return
+
+            self._source_buffer = ffi.from_buffer(data)
+            self._in_buffer.src = self._source_buffer
+            self._in_buffer.size = len(self._source_buffer)
+            self._in_buffer.pos = 0
+        else:
+            self._source_buffer = ffi.from_buffer(self._source)
+            self._in_buffer.src = self._source_buffer
+            self._in_buffer.size = len(self._source_buffer)
+            self._in_buffer.pos = 0
+
+    def _compress_into_buffer(self, out_buffer):
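+        # Feed pending input into the compressor. Returns true once
+        # out_buffer has been filled to capacity, i.e. the read request is
+        # satisfied.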
+        if self._in_buffer.pos >= self._in_buffer.size:
+            return
+
+        old_pos = out_buffer.pos
+
+        zresult = lib.ZSTD_compressStream2(self._compressor._cctx,
+                                           out_buffer, self._in_buffer,
+                                           lib.ZSTD_e_continue)
+
+        self._bytes_compressed += out_buffer.pos - old_pos
+
+        if self._in_buffer.pos == self._in_buffer.size:
+            self._in_buffer.src = ffi.NULL
+            self._in_buffer.pos = 0
+            self._in_buffer.size = 0
+            self._source_buffer = None
+
+            if not hasattr(self._source, 'read'):
+                self._finished_input = True
+
+        if lib.ZSTD_isError(zresult):
+            raise ZstdError('zstd compress error: %s' %
+                            _zstd_error(zresult))
+
+        return out_buffer.pos and out_buffer.pos == out_buffer.size
+
+    def read(self, size=-1):
+        if self._closed:
+            raise ValueError('stream is closed')
+
+        if size < -1:
+            raise ValueError('cannot read negative amounts less than -1')
+
+        if size == -1:
+            return self.readall()
+
+        if self._finished_output or size == 0:
+            return b''
+
+        # Need a dedicated ref to the dest buffer, otherwise it gets collected.
+        dst_buffer = ffi.new('char[]', size)
+        out_buffer = ffi.new('ZSTD_outBuffer *')
+        out_buffer.dst = dst_buffer
+        out_buffer.size = size
+        out_buffer.pos = 0
+
+        if self._compress_into_buffer(out_buffer):
+            return ffi.buffer(out_buffer.dst, out_buffer.pos)[:]
+
+        while not self._finished_input:
+            self._read_input()
+
+            if self._compress_into_buffer(out_buffer):
+                return ffi.buffer(out_buffer.dst, out_buffer.pos)[:]
+
+        # EOF
+        old_pos = out_buffer.pos
+
+        zresult = lib.ZSTD_compressStream2(self._compressor._cctx,
+                                           out_buffer, self._in_buffer,
+                                           lib.ZSTD_e_end)
+
+        self._bytes_compressed += out_buffer.pos - old_pos
+
+        if lib.ZSTD_isError(zresult):
+            raise ZstdError('error ending compression stream: %s' %
+                            _zstd_error(zresult))
+
+        if zresult == 0:
+            self._finished_output = True
+
+        return ffi.buffer(out_buffer.dst, out_buffer.pos)[:]
+
+    def read1(self, size=-1):
+        if self._closed:
+            raise ValueError('stream is closed')
+
+        if size < -1:
+            raise ValueError('cannot read negative amounts less than -1')
+
+        if self._finished_output or size == 0:
+            return b''
+
+        # A size of -1 returns an arbitrary number of bytes.
+        if size == -1:
+            size = COMPRESSION_RECOMMENDED_OUTPUT_SIZE
+
+        dst_buffer = ffi.new('char[]', size)
+        out_buffer = ffi.new('ZSTD_outBuffer *')
+        out_buffer.dst = dst_buffer
+        out_buffer.size = size
+        out_buffer.pos = 0
+
+        # read1() dictates that we can perform at most 1 call to the
+        # underlying stream to get input. However, we can't satisfy this
+        # restriction with compression because not all input generates output.
+        # It is possible to perform a block flush in order to ensure output.
+        # But this may not be desirable behavior. So we allow multiple read()
+        # calls to the underlying stream. But unlike read(), we stop once we
+        # have any output.
+
+        self._compress_into_buffer(out_buffer)
+        if out_buffer.pos:
+            return ffi.buffer(out_buffer.dst, out_buffer.pos)[:]
+
+        while not self._finished_input:
+            self._read_input()
+
+            # If we've filled the output buffer, return immediately.
+            if self._compress_into_buffer(out_buffer):
+                return ffi.buffer(out_buffer.dst, out_buffer.pos)[:]
+
+            # If we've populated the output buffer and we're not at EOF,
+            # also return, as we've satisfied the read1() limits.
+            if out_buffer.pos and not self._finished_input:
+                return ffi.buffer(out_buffer.dst, out_buffer.pos)[:]
+
+            # Else if we're at EOF and we have room left in the buffer,
+            # fall through to below and try to add more data to the output.
+
+        # EOF.
+        old_pos = out_buffer.pos
+
+        zresult = lib.ZSTD_compressStream2(self._compressor._cctx,
+                                           out_buffer, self._in_buffer,
+                                           lib.ZSTD_e_end)
+
+        self._bytes_compressed += out_buffer.pos - old_pos
+
+        if lib.ZSTD_isError(zresult):
+            raise ZstdError('error ending compression stream: %s' %
+                            _zstd_error(zresult))
+
+        if zresult == 0:
+            self._finished_output = True
+
+        return ffi.buffer(out_buffer.dst, out_buffer.pos)[:]
+
+    def readinto(self, b):
+        if self._closed:
+            raise ValueError('stream is closed')
+
+        if self._finished_output:
+            return 0
+
+        # TODO use writable=True once we require CFFI >= 1.12.
+        dest_buffer = ffi.from_buffer(b)
+        ffi.memmove(b, b'', 0)
+        out_buffer = ffi.new('ZSTD_outBuffer *')
+        out_buffer.dst = dest_buffer
+        out_buffer.size = len(dest_buffer)
+        out_buffer.pos = 0
+
+        if self._compress_into_buffer(out_buffer):
+            return out_buffer.pos
+
+        while not self._finished_input:
+            self._read_input()
+            if self._compress_into_buffer(out_buffer):
+                return out_buffer.pos
+
+        # EOF.
+        old_pos = out_buffer.pos
+        zresult = lib.ZSTD_compressStream2(self._compressor._cctx,
+                                           out_buffer, self._in_buffer,
+                                           lib.ZSTD_e_end)
+
+        self._bytes_compressed += out_buffer.pos - old_pos
+
+        if lib.ZSTD_isError(zresult):
+            raise ZstdError('error ending compression stream: %s' %
+                            _zstd_error(zresult))
+
+        if zresult == 0:
+            self._finished_output = True
+
+        return out_buffer.pos
+
+    def readinto1(self, b):
+        if self._closed:
+            raise ValueError('stream is closed')
+
+        if self._finished_output:
+            return 0
+
+        # TODO use writable=True once we require CFFI >= 1.12.
+        dest_buffer = ffi.from_buffer(b)
+        ffi.memmove(b, b'', 0)
+
+        out_buffer = ffi.new('ZSTD_outBuffer *')
+        out_buffer.dst = dest_buffer
+        out_buffer.size = len(dest_buffer)
+        out_buffer.pos = 0
+
+        self._compress_into_buffer(out_buffer)
+        if out_buffer.pos:
+            return out_buffer.pos
+
+        while not self._finished_input:
+            self._read_input()
+
+            if self._compress_into_buffer(out_buffer):
+                return out_buffer.pos
+
+            if out_buffer.pos and not self._finished_input:
+                return out_buffer.pos
+
+        # EOF.
+        old_pos = out_buffer.pos
+
+        zresult = lib.ZSTD_compressStream2(self._compressor._cctx,
+                                           out_buffer, self._in_buffer,
+                                           lib.ZSTD_e_end)
+
+        self._bytes_compressed += out_buffer.pos - old_pos
+
+        if lib.ZSTD_isError(zresult):
+            raise ZstdError('error ending compression stream: %s' %
+                            _zstd_error(zresult))
+
+        if zresult == 0:
+            self._finished_output = True
+
+        return out_buffer.pos
+
+
+class ZstdCompressor(object):
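+    """Context for performing zstandard compression.
+
+    A minimal sketch of one-shot usage:
+
+        cctx = ZstdCompressor(level=3)
+        frame = cctx.compress(b'data to compress')
+    """
+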
+    def __init__(self, level=3, dict_data=None, compression_params=None,
+                 write_checksum=None, write_content_size=None,
+                 write_dict_id=None, threads=0):
+        if level > lib.ZSTD_maxCLevel():
+            raise ValueError('level must be less than or equal to %d' %
+                             lib.ZSTD_maxCLevel())
+
+        if threads < 0:
+            threads = _cpu_count()
+
+        if compression_params and write_checksum is not None:
+            raise ValueError('cannot define compression_params and '
+                             'write_checksum')
+
+        if compression_params and write_content_size is not None:
+            raise ValueError('cannot define compression_params and '
+                             'write_content_size')
+
+        if compression_params and write_dict_id is not None:
+            raise ValueError('cannot define compression_params and '
+                             'write_dict_id')
+
+        if compression_params and threads:
+            raise ValueError('cannot define compression_params and threads')
+
+        if compression_params:
+            self._params = _make_cctx_params(compression_params)
+        else:
+            if write_dict_id is None:
+                write_dict_id = True
+
+            params = lib.ZSTD_createCCtxParams()
+            if params == ffi.NULL:
+                raise MemoryError()
+
+            self._params = ffi.gc(params, lib.ZSTD_freeCCtxParams)
+
+            _set_compression_parameter(self._params,
+                                       lib.ZSTD_c_compressionLevel,
+                                       level)
+
+            _set_compression_parameter(
+                self._params,
+                lib.ZSTD_c_contentSizeFlag,
+                write_content_size if write_content_size is not None else 1)
+
+            _set_compression_parameter(self._params,
+                                       lib.ZSTD_c_checksumFlag,
+                                       1 if write_checksum else 0)
+
+            _set_compression_parameter(self._params,
+                                       lib.ZSTD_c_dictIDFlag,
+                                       1 if write_dict_id else 0)
+
+            if threads:
+                _set_compression_parameter(self._params,
+                                           lib.ZSTD_c_nbWorkers,
+                                           threads)
+
+        cctx = lib.ZSTD_createCCtx()
+        if cctx == ffi.NULL:
+            raise MemoryError()
+
+        self._cctx = cctx
+        self._dict_data = dict_data
+
+        # We defer setting up garbage collection until after calling
+        # _setup_cctx() to ensure the memory size estimate is more accurate.
+        try:
+            self._setup_cctx()
+        finally:
+            self._cctx = ffi.gc(cctx, lib.ZSTD_freeCCtx,
+                                size=lib.ZSTD_sizeof_CCtx(cctx))
+
+    def _setup_cctx(self):
+        zresult = lib.ZSTD_CCtx_setParametersUsingCCtxParams(self._cctx,
+                                                             self._params)
+        if lib.ZSTD_isError(zresult):
+            raise ZstdError('could not set compression parameters: %s' %
+                            _zstd_error(zresult))
+
+        dict_data = self._dict_data
+
+        if dict_data:
+            if dict_data._cdict:
+                zresult = lib.ZSTD_CCtx_refCDict(self._cctx, dict_data._cdict)
+            else:
+                zresult = lib.ZSTD_CCtx_loadDictionary_advanced(
+                    self._cctx, dict_data.as_bytes(), len(dict_data),
+                    lib.ZSTD_dlm_byRef, dict_data._dict_type)
+
+            if lib.ZSTD_isError(zresult):
+                raise ZstdError('could not load compression dictionary: %s' %
+                                _zstd_error(zresult))
+
+    def memory_size(self):
+        return lib.ZSTD_sizeof_CCtx(self._cctx)
+
+    def compress(self, data):
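+        # One-shot compression. The output buffer is sized with
+        # ZSTD_compressBound(), so a single ZSTD_compressStream2() call with
+        # ZSTD_e_end is guaranteed enough room to finish the frame.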
+        lib.ZSTD_CCtx_reset(self._cctx, lib.ZSTD_reset_session_only)
+
+        data_buffer = ffi.from_buffer(data)
+
+        dest_size = lib.ZSTD_compressBound(len(data_buffer))
+        out = new_nonzero('char[]', dest_size)
+
+        zresult = lib.ZSTD_CCtx_setPledgedSrcSize(self._cctx, len(data_buffer))
+        if lib.ZSTD_isError(zresult):
+            raise ZstdError('error setting source size: %s' %
+                            _zstd_error(zresult))
+
+        out_buffer = ffi.new('ZSTD_outBuffer *')
+        in_buffer = ffi.new('ZSTD_inBuffer *')
+
+        out_buffer.dst = out
+        out_buffer.size = dest_size
+        out_buffer.pos = 0
+
+        in_buffer.src = data_buffer
+        in_buffer.size = len(data_buffer)
+        in_buffer.pos = 0
+
+        zresult = lib.ZSTD_compressStream2(self._cctx,
+                                           out_buffer,
+                                           in_buffer,
+                                           lib.ZSTD_e_end)
+
+        if lib.ZSTD_isError(zresult):
+            raise ZstdError('cannot compress: %s' %
+                            _zstd_error(zresult))
+        elif zresult:
+            raise ZstdError('unexpected partial frame flush')
+
+        return ffi.buffer(out, out_buffer.pos)[:]
+
+    def compressobj(self, size=-1):
+        lib.ZSTD_CCtx_reset(self._cctx, lib.ZSTD_reset_session_only)
+
+        if size < 0:
+            size = lib.ZSTD_CONTENTSIZE_UNKNOWN
+
+        zresult = lib.ZSTD_CCtx_setPledgedSrcSize(self._cctx, size)
+        if lib.ZSTD_isError(zresult):
+            raise ZstdError('error setting source size: %s' %
+                            _zstd_error(zresult))
+
+        cobj = ZstdCompressionObj()
+        cobj._out = ffi.new('ZSTD_outBuffer *')
+        cobj._dst_buffer = ffi.new('char[]', COMPRESSION_RECOMMENDED_OUTPUT_SIZE)
+        cobj._out.dst = cobj._dst_buffer
+        cobj._out.size = COMPRESSION_RECOMMENDED_OUTPUT_SIZE
+        cobj._out.pos = 0
+        cobj._compressor = self
+        cobj._finished = False
+
+        return cobj
+
+    def chunker(self, size=-1, chunk_size=COMPRESSION_RECOMMENDED_OUTPUT_SIZE):
+        lib.ZSTD_CCtx_reset(self._cctx, lib.ZSTD_reset_session_only)
+
+        if size < 0:
+            size = lib.ZSTD_CONTENTSIZE_UNKNOWN
+
+        zresult = lib.ZSTD_CCtx_setPledgedSrcSize(self._cctx, size)
+        if lib.ZSTD_isError(zresult):
+            raise ZstdError('error setting source size: %s' %
+                            _zstd_error(zresult))
+
+        return ZstdCompressionChunker(self, chunk_size=chunk_size)
+
+    def copy_stream(self, ifh, ofh, size=-1,
+                    read_size=COMPRESSION_RECOMMENDED_INPUT_SIZE,
+                    write_size=COMPRESSION_RECOMMENDED_OUTPUT_SIZE):
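+        # Stream data from ifh to ofh through the compressor, returning a
+        # (bytes_read, bytes_written) tuple once input is exhausted and the
+        # final frame has been flushed.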
+
+        if not hasattr(ifh, 'read'):
+            raise ValueError('first argument must have a read() method')
+        if not hasattr(ofh, 'write'):
+            raise ValueError('second argument must have a write() method')
+
+        lib.ZSTD_CCtx_reset(self._cctx, lib.ZSTD_reset_session_only)
+
+        if size < 0:
+            size = lib.ZSTD_CONTENTSIZE_UNKNOWN
+
+        zresult = lib.ZSTD_CCtx_setPledgedSrcSize(self._cctx, size)
+        if lib.ZSTD_isError(zresult):
+            raise ZstdError('error setting source size: %s' %
+                            _zstd_error(zresult))
+
+        in_buffer = ffi.new('ZSTD_inBuffer *')
+        out_buffer = ffi.new('ZSTD_outBuffer *')
+
+        dst_buffer = ffi.new('char[]', write_size)
+        out_buffer.dst = dst_buffer
+        out_buffer.size = write_size
+        out_buffer.pos = 0
+
+        total_read, total_write = 0, 0
+
+        while True:
+            data = ifh.read(read_size)
+            if not data:
+                break
+
+            data_buffer = ffi.from_buffer(data)
+            total_read += len(data_buffer)
+            in_buffer.src = data_buffer
+            in_buffer.size = len(data_buffer)
+            in_buffer.pos = 0
+
+            while in_buffer.pos < in_buffer.size:
+                zresult = lib.ZSTD_compressStream2(self._cctx,
+                                                   out_buffer,
+                                                   in_buffer,
+                                                   lib.ZSTD_e_continue)
+                if lib.ZSTD_isError(zresult):
+                    raise ZstdError('zstd compress error: %s' %
+                                    _zstd_error(zresult))
+
+                if out_buffer.pos:
+                    ofh.write(ffi.buffer(out_buffer.dst, out_buffer.pos))
+                    total_write += out_buffer.pos
+                    out_buffer.pos = 0
+
+        # We've finished reading. Flush the compressor.
+        while True:
+            zresult = lib.ZSTD_compressStream2(self._cctx,
+                                               out_buffer,
+                                               in_buffer,
+                                               lib.ZSTD_e_end)
+            if lib.ZSTD_isError(zresult):
+                raise ZstdError('error ending compression stream: %s' %
+                                _zstd_error(zresult))
+
+            if out_buffer.pos:
+                ofh.write(ffi.buffer(out_buffer.dst, out_buffer.pos))
+                total_write += out_buffer.pos
+                out_buffer.pos = 0
+
+            if zresult == 0:
+                break
+
+        return total_read, total_write
+
+    def stream_reader(self, source, size=-1,
+                      read_size=COMPRESSION_RECOMMENDED_INPUT_SIZE):
+        lib.ZSTD_CCtx_reset(self._cctx, lib.ZSTD_reset_session_only)
+
+        try:
+            size = len(source)
+        except Exception:
+            pass
+
+        if size < 0:
+            size = lib.ZSTD_CONTENTSIZE_UNKNOWN
+
+        zresult = lib.ZSTD_CCtx_setPledgedSrcSize(self._cctx, size)
+        if lib.ZSTD_isError(zresult):
+            raise ZstdError('error setting source size: %s' %
+                            _zstd_error(zresult))
+
+        return ZstdCompressionReader(self, source, read_size)
+
+    def stream_writer(self, writer, size=-1,
+                      write_size=COMPRESSION_RECOMMENDED_OUTPUT_SIZE,
+                      write_return_read=False):
+
+        if not hasattr(writer, 'write'):
+            raise ValueError('must pass an object with a write() method')
+
+        lib.ZSTD_CCtx_reset(self._cctx, lib.ZSTD_reset_session_only)
+
+        if size < 0:
+            size = lib.ZSTD_CONTENTSIZE_UNKNOWN
+
+        return ZstdCompressionWriter(self, writer, size, write_size,
+                                     write_return_read)
+
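+    # Backwards-compatibility alias for stream_writer().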
+    write_to = stream_writer
+
+    def read_to_iter(self, reader, size=-1,
+                     read_size=COMPRESSION_RECOMMENDED_INPUT_SIZE,
+                     write_size=COMPRESSION_RECOMMENDED_OUTPUT_SIZE):
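+        # Generator that feeds the compressor from ``reader`` (a stream with
+        # a read() method or a buffer-protocol object) and yields compressed
+        # chunks as they become available.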
+        if hasattr(reader, 'read'):
+            have_read = True
+        elif hasattr(reader, '__getitem__'):
+            have_read = False
+            buffer_offset = 0
+            size = len(reader)
+        else:
+            raise ValueError('must pass an object with a read() method or '
+                             'that conforms to the buffer protocol')
+
+        lib.ZSTD_CCtx_reset(self._cctx, lib.ZSTD_reset_session_only)
+
+        if size < 0:
+            size = lib.ZSTD_CONTENTSIZE_UNKNOWN
+
+        zresult = lib.ZSTD_CCtx_setPledgedSrcSize(self._cctx, size)
+        if lib.ZSTD_isError(zresult):
+            raise ZstdError('error setting source size: %s' %
+                            _zstd_error(zresult))
+
+        in_buffer = ffi.new('ZSTD_inBuffer *')
+        out_buffer = ffi.new('ZSTD_outBuffer *')
+
+        in_buffer.src = ffi.NULL
+        in_buffer.size = 0
+        in_buffer.pos = 0
+
+        dst_buffer = ffi.new('char[]', write_size)
+        out_buffer.dst = dst_buffer
+        out_buffer.size = write_size
+        out_buffer.pos = 0
+
+        while True:
+            # We should never have output data sitting around after a previous
+            # iteration.
+            assert out_buffer.pos == 0
+
+            # Collect input data.
+            if have_read:
+                read_result = reader.read(read_size)
+            else:
+                remaining = len(reader) - buffer_offset
+                slice_size = min(remaining, read_size)
+                read_result = reader[buffer_offset:buffer_offset + slice_size]
+                buffer_offset += slice_size
+
+            # No new input data. Break out of the read loop.
+            if not read_result:
+                break
+
+            # Feed all read data into the compressor and emit output until
+            # exhausted.
+            read_buffer = ffi.from_buffer(read_result)
+            in_buffer.src = read_buffer
+            in_buffer.size = len(read_buffer)
+            in_buffer.pos = 0
+
+            while in_buffer.pos < in_buffer.size:
+                zresult = lib.ZSTD_compressStream2(self._cctx, out_buffer, in_buffer,
+                                                   lib.ZSTD_e_continue)
+                if lib.ZSTD_isError(zresult):
+                    raise ZstdError('zstd compress error: %s' %
+                                    _zstd_error(zresult))
+
+                if out_buffer.pos:
+                    data = ffi.buffer(out_buffer.dst, out_buffer.pos)[:]
+                    out_buffer.pos = 0
+                    yield data
+
+            assert out_buffer.pos == 0
+
+            # And repeat the loop to collect more data.
+            continue
+
+        # If we get here, input is exhausted. End the stream and emit what
+        # remains.
+        while True:
+            assert out_buffer.pos == 0
+            zresult = lib.ZSTD_compressStream2(self._cctx,
+                                               out_buffer,
+                                               in_buffer,
+                                               lib.ZSTD_e_end)
+            if lib.ZSTD_isError(zresult):
+                raise ZstdError('error ending compression stream: %s' %
+                                _zstd_error(zresult))
+
+            if out_buffer.pos:
+                data = ffi.buffer(out_buffer.dst, out_buffer.pos)[:]
+                out_buffer.pos = 0
+                yield data
+
+            if zresult == 0:
+                break
+
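+    # Backwards-compatibility alias for read_to_iter().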
+    read_from = read_to_iter
+
+    def frame_progression(self):
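+        # Expose the ZSTD_getFrameProgression() counters: bytes ingested from
+        # the caller, bytes consumed by the compressor, and compressed bytes
+        # produced so far for the current frame.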
+        progression = lib.ZSTD_getFrameProgression(self._cctx)
+
+        return progression.ingested, progression.consumed, progression.produced
+
+
+class FrameParameters(object):
+    def __init__(self, fparams):
+        self.content_size = fparams.frameContentSize
+        self.window_size = fparams.windowSize
+        self.dict_id = fparams.dictID
+        self.has_checksum = bool(fparams.checksumFlag)
+
+
+def frame_content_size(data):
+    data_buffer = ffi.from_buffer(data)
+
+    size = lib.ZSTD_getFrameContentSize(data_buffer, len(data_buffer))
+
+    if size == lib.ZSTD_CONTENTSIZE_ERROR:
+        raise ZstdError('error when determining content size')
+    elif size == lib.ZSTD_CONTENTSIZE_UNKNOWN:
+        return -1
+    else:
+        return size
+
+
+def frame_header_size(data):
+    data_buffer = ffi.from_buffer(data)
+
+    zresult = lib.ZSTD_frameHeaderSize(data_buffer, len(data_buffer))
+    if lib.ZSTD_isError(zresult):
+        raise ZstdError('could not determine frame header size: %s' %
+                        _zstd_error(zresult))
+
+    return zresult
+
+
+def get_frame_parameters(data):
+    params = ffi.new('ZSTD_frameHeader *')
+
+    data_buffer = ffi.from_buffer(data)
+    zresult = lib.ZSTD_getFrameHeader(params, data_buffer, len(data_buffer))
+    if lib.ZSTD_isError(zresult):
+        raise ZstdError('cannot get frame parameters: %s' %
+                        _zstd_error(zresult))
+
+    if zresult:
+        raise ZstdError('not enough data for frame parameters; need %d bytes' %
+                        zresult)
+
+    return FrameParameters(params[0])
+
+
+class ZstdCompressionDict(object):
+    def __init__(self, data, dict_type=DICT_TYPE_AUTO, k=0, d=0):
+        assert isinstance(data, bytes_type)
+        self._data = data
+        self.k = k
+        self.d = d
+
+        if dict_type not in (DICT_TYPE_AUTO, DICT_TYPE_RAWCONTENT,
+                             DICT_TYPE_FULLDICT):
+            raise ValueError('invalid dictionary load mode: %d; must use '
+                             'DICT_TYPE_* constants' % dict_type)
+
+        self._dict_type = dict_type
+        self._cdict = None
+
+    def __len__(self):
+        return len(self._data)
+
+    def dict_id(self):
+        return int_type(lib.ZDICT_getDictID(self._data, len(self._data)))
+
+    def as_bytes(self):
+        return self._data
+
+    def precompute_compress(self, level=0, compression_params=None):
+        if level and compression_params:
+            raise ValueError('must only specify one of level or '
+                             'compression_params')
+
+        if not level and not compression_params:
+            raise ValueError('must specify one of level or compression_params')
+
+        if level:
+            cparams = lib.ZSTD_getCParams(level, 0, len(self._data))
+        else:
+            cparams = ffi.new('ZSTD_compressionParameters *')[0]
+            cparams.chainLog = compression_params.chain_log
+            cparams.hashLog = compression_params.hash_log
+            cparams.minMatch = compression_params.min_match
+            cparams.searchLog = compression_params.search_log
+            cparams.strategy = compression_params.compression_strategy
+            cparams.targetLength = compression_params.target_length
+            cparams.windowLog = compression_params.window_log
+
+        cdict = lib.ZSTD_createCDict_advanced(self._data, len(self._data),
+                                              lib.ZSTD_dlm_byRef,
+                                              self._dict_type,
+                                              cparams,
+                                              lib.ZSTD_defaultCMem)
+        if cdict == ffi.NULL:
+            raise ZstdError('unable to precompute dictionary')
+
+        self._cdict = ffi.gc(cdict, lib.ZSTD_freeCDict,
+                             size=lib.ZSTD_sizeof_CDict(cdict))
+
+    @property
+    def _ddict(self):
+        # Lazily create the decompression dictionary and cache it. The cache
+        # lookup must go through __dict__ explicitly: a property is a data
+        # descriptor, so plain attribute access would never see the cached
+        # instance attribute.
+        ddict = self.__dict__.get('_ddict')
+        if ddict is not None:
+            return ddict
+
+        ddict = lib.ZSTD_createDDict_advanced(self._data, len(self._data),
+                                              lib.ZSTD_dlm_byRef,
+                                              self._dict_type,
+                                              lib.ZSTD_defaultCMem)
+
+        if ddict == ffi.NULL:
+            raise ZstdError('could not create decompression dict')
+
+        ddict = ffi.gc(ddict, lib.ZSTD_freeDDict,
+                       size=lib.ZSTD_sizeof_DDict(ddict))
+        self.__dict__['_ddict'] = ddict
+
+        return ddict
+
+
+def train_dictionary(dict_size, samples, k=0, d=0, notifications=0, dict_id=0,
+                     level=0, steps=0, threads=0):
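+    # Train a dictionary using zstd's cover algorithm. With every tunable
+    # left at 0 this defers entirely to ZDICT_trainFromBuffer(); specifying
+    # steps or threads selects the parameter-optimizing variant instead.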
+    if not isinstance(samples, list):
+        raise TypeError('samples must be a list')
+
+    if threads < 0:
+        threads = _cpu_count()
+
+    total_size = sum(map(len, samples))
+
+    samples_buffer = new_nonzero('char[]', total_size)
+    sample_sizes = new_nonzero('size_t[]', len(samples))
+
+    offset = 0
+    for i, sample in enumerate(samples):
+        if not isinstance(sample, bytes_type):
+            raise ValueError('samples must be bytes')
+
+        l = len(sample)
+        ffi.memmove(samples_buffer + offset, sample, l)
+        offset += l
+        sample_sizes[i] = l
+
+    dict_data = new_nonzero('char[]', dict_size)
+
+    dparams = ffi.new('ZDICT_cover_params_t *')[0]
+    dparams.k = k
+    dparams.d = d
+    dparams.steps = steps
+    dparams.nbThreads = threads
+    dparams.zParams.notificationLevel = notifications
+    dparams.zParams.dictID = dict_id
+    dparams.zParams.compressionLevel = level
+
+    if (not dparams.k and not dparams.d and not dparams.steps
+        and not dparams.nbThreads and not dparams.zParams.notificationLevel
+        and not dparams.zParams.dictID
+        and not dparams.zParams.compressionLevel):
+        zresult = lib.ZDICT_trainFromBuffer(
+            ffi.addressof(dict_data), dict_size,
+            ffi.addressof(samples_buffer),
+            ffi.addressof(sample_sizes, 0), len(samples))
+    elif dparams.steps or dparams.nbThreads:
+        zresult = lib.ZDICT_optimizeTrainFromBuffer_cover(
+            ffi.addressof(dict_data), dict_size,
+            ffi.addressof(samples_buffer),
+            ffi.addressof(sample_sizes, 0), len(samples),
+            ffi.addressof(dparams))
+    else:
+        zresult = lib.ZDICT_trainFromBuffer_cover(
+            ffi.addressof(dict_data), dict_size,
+            ffi.addressof(samples_buffer),
+            ffi.addressof(sample_sizes, 0), len(samples),
+            dparams)
+
+    if lib.ZDICT_isError(zresult):
+        msg = ffi.string(lib.ZDICT_getErrorName(zresult)).decode('utf-8')
+        raise ZstdError('cannot train dict: %s' % msg)
+
+    return ZstdCompressionDict(ffi.buffer(dict_data, zresult)[:],
+                               dict_type=DICT_TYPE_FULLDICT,
+                               k=dparams.k, d=dparams.d)
+
+
+class ZstdDecompressionObj(object):
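+    """zlib decompressobj-style interface; created via ZstdDecompressor.decompressobj()."""
+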
+    def __init__(self, decompressor, write_size):
+        self._decompressor = decompressor
+        self._write_size = write_size
+        self._finished = False
+
+    def decompress(self, data):
+        if self._finished:
+            raise ZstdError('cannot use a decompressobj multiple times')
+
+        in_buffer = ffi.new('ZSTD_inBuffer *')
+        out_buffer = ffi.new('ZSTD_outBuffer *')
+
+        data_buffer = ffi.from_buffer(data)
+
+        if len(data_buffer) == 0:
+            return b''
+
+        in_buffer.src = data_buffer
+        in_buffer.size = len(data_buffer)
+        in_buffer.pos = 0
+
+        dst_buffer = ffi.new('char[]', self._write_size)
+        out_buffer.dst = dst_buffer
+        out_buffer.size = len(dst_buffer)
+        out_buffer.pos = 0
+
+        chunks = []
+
+        while True:
+            zresult = lib.ZSTD_decompressStream(self._decompressor._dctx,
+                                                out_buffer, in_buffer)
+            if lib.ZSTD_isError(zresult):
+                raise ZstdError('zstd decompressor error: %s' %
+                                _zstd_error(zresult))
+
+            if zresult == 0:
+                self._finished = True
+                self._decompressor = None
+
+            if out_buffer.pos:
+                chunks.append(ffi.buffer(out_buffer.dst, out_buffer.pos)[:])
+
+            if (zresult == 0 or
+                    (in_buffer.pos == in_buffer.size and out_buffer.pos == 0)):
+                break
+
+            out_buffer.pos = 0
+
+        return b''.join(chunks)
+
+    def flush(self, length=0):
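+        # No-op provided for API parity with zlib's decompressobj.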
+        pass
+
+
+class ZstdDecompressionReader(object):
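+    """Read-only file-object-like interface over a zstd decompression stream.
+
+    A rough usage sketch, assuming ``dctx`` is a ``ZstdDecompressor`` and
+    ``fh`` is a binary file object positioned at a zstd frame:
+
+        with dctx.stream_reader(fh) as reader:
+            chunk = reader.read(8192)
+    """
+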
+    def __init__(self, decompressor, source, read_size, read_across_frames):
+        self._decompressor = decompressor
+        self._source = source
+        self._read_size = read_size
+        self._read_across_frames = bool(read_across_frames)
+        self._entered = False
+        self._closed = False
+        self._bytes_decompressed = 0
+        self._finished_input = False
+        self._finished_output = False
+        self._in_buffer = ffi.new('ZSTD_inBuffer *')
+        # Holds a ref to self._in_buffer.src.
+        self._source_buffer = None
+
+    def __enter__(self):
+        if self._entered:
+            raise ValueError('cannot __enter__ multiple times')
+
+        self._entered = True
+        return self
+
+    def __exit__(self, exc_type, exc_value, exc_tb):
+        self._entered = False
+        self._closed = True
+        self._source = None
+        self._decompressor = None
+
+        return False
+
+    def readable(self):
+        return True
+
+    def writable(self):
+        return False
+
+    def seekable(self):
+        return True
+
+    def readline(self):
+        raise io.UnsupportedOperation()
+
+    def readlines(self):
+        raise io.UnsupportedOperation()
+
+    def write(self, data):
+        raise io.UnsupportedOperation()
+
+    def writelines(self, lines):
+        raise io.UnsupportedOperation()
+
+    def isatty(self):
+        return False
+
+    def flush(self):
+        return None
+
+    def close(self):
+        self._closed = True
+        return None
+
+    @property
+    def closed(self):
+        return self._closed
+
+    def tell(self):
+        return self._bytes_decompressed
+
+    def readall(self):
+        chunks = []
+
+        while True:
+            chunk = self.read(1048576)
+            if not chunk:
+                break
+
+            chunks.append(chunk)
+
+        return b''.join(chunks)
+
+    def __iter__(self):
+        raise io.UnsupportedOperation()
+
+    def __next__(self):
+        raise io.UnsupportedOperation()
+
+    next = __next__
+
+    def _read_input(self):
+        # We have data left over in the input buffer. Use it.
+        if self._in_buffer.pos < self._in_buffer.size:
+            return
+
+        # All input data exhausted. Nothing to do.
+        if self._finished_input:
+            return
+
+        # Else populate the input buffer from our source.
+        if hasattr(self._source, 'read'):
+            data = self._source.read(self._read_size)
+
+            if not data:
+                self._finished_input = True
+                return
+
+            self._source_buffer = ffi.from_buffer(data)
+            self._in_buffer.src = self._source_buffer
+            self._in_buffer.size = len(self._source_buffer)
+            self._in_buffer.pos = 0
+        else:
+            self._source_buffer = ffi.from_buffer(self._source)
+            self._in_buffer.src = self._source_buffer
+            self._in_buffer.size = len(self._source_buffer)
+            self._in_buffer.pos = 0
+
+    def _decompress_into_buffer(self, out_buffer):
+        """Decompress available input into an output buffer.
+
+        Returns True if data in output buffer should be emitted.
+        """
+        zresult = lib.ZSTD_decompressStream(self._decompressor._dctx,
+                                            out_buffer, self._in_buffer)
+
+        if self._in_buffer.pos == self._in_buffer.size:
+            self._in_buffer.src = ffi.NULL
+            self._in_buffer.pos = 0
+            self._in_buffer.size = 0
+            self._source_buffer = None
+
+            if not hasattr(self._source, 'read'):
+                self._finished_input = True
+
+        if lib.ZSTD_isError(zresult):
+            raise ZstdError('zstd decompress error: %s' %
+                            _zstd_error(zresult))
+
+        # Emit data if there is data AND either:
+        # a) output buffer is full (read amount is satisfied)
+        # b) we're at end of a frame and not in frame spanning mode
+        return (out_buffer.pos and
+                (out_buffer.pos == out_buffer.size or
+                 zresult == 0 and not self._read_across_frames))
+
+    def read(self, size=-1):
+        if self._closed:
+            raise ValueError('stream is closed')
+
+        if size < -1:
+            raise ValueError('cannot read negative amounts less than -1')
+
+        if size == -1:
+            # This is recursive. But it gets the job done.
+            return self.readall()
+
+        if self._finished_output or size == 0:
+            return b''
+
+        # We /could/ call into readinto() here. But that introduces more
+        # overhead.
+        dst_buffer = ffi.new('char[]', size)
+        out_buffer = ffi.new('ZSTD_outBuffer *')
+        out_buffer.dst = dst_buffer
+        out_buffer.size = size
+        out_buffer.pos = 0
+
+        self._read_input()
+        if self._decompress_into_buffer(out_buffer):
+            self._bytes_decompressed += out_buffer.pos
+            return ffi.buffer(out_buffer.dst, out_buffer.pos)[:]
+
+        while not self._finished_input:
+            self._read_input()
+            if self._decompress_into_buffer(out_buffer):
+                self._bytes_decompressed += out_buffer.pos
+                return ffi.buffer(out_buffer.dst, out_buffer.pos)[:]
+
+        self._bytes_decompressed += out_buffer.pos
+        return ffi.buffer(out_buffer.dst, out_buffer.pos)[:]
+
+    def readinto(self, b):
+        if self._closed:
+            raise ValueError('stream is closed')
+
+        if self._finished_output:
+            return 0
+
+        # TODO use writable=True once we require CFFI >= 1.12.
+        dest_buffer = ffi.from_buffer(b)
+        ffi.memmove(b, b'', 0)
+        out_buffer = ffi.new('ZSTD_outBuffer *')
+        out_buffer.dst = dest_buffer
+        out_buffer.size = len(dest_buffer)
+        out_buffer.pos = 0
+
+        self._read_input()
+        if self._decompress_into_buffer(out_buffer):
+            self._bytes_decompressed += out_buffer.pos
+            return out_buffer.pos
+
+        while not self._finished_input:
+            self._read_input()
+            if self._decompress_into_buffer(out_buffer):
+                self._bytes_decompressed += out_buffer.pos
+                return out_buffer.pos
+
+        self._bytes_decompressed += out_buffer.pos
+        return out_buffer.pos
+
+    def read1(self, size=-1):
+        if self._closed:
+            raise ValueError('stream is closed')
+
+        if size < -1:
+            raise ValueError('cannot read negative amounts less than -1')
+
+        if self._finished_output or size == 0:
+            return b''
+
+        # A size of -1 returns an arbitrary number of bytes.
+        if size == -1:
+            size = DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE
+
+        dst_buffer = ffi.new('char[]', size)
+        out_buffer = ffi.new('ZSTD_outBuffer *')
+        out_buffer.dst = dst_buffer
+        out_buffer.size = size
+        out_buffer.pos = 0
+
+        # read1() dictates that we can perform at most 1 call to the
+        # underlying stream to get input. However, we can't satisfy this
+        # restriction with decompression because not all input generates
+        # output. So we allow multiple read() calls. But unlike read(), we
+        # stop once we have any output.
+        while not self._finished_input:
+            self._read_input()
+            self._decompress_into_buffer(out_buffer)
+
+            if out_buffer.pos:
+                break
+
+        self._bytes_decompressed += out_buffer.pos
+        return ffi.buffer(out_buffer.dst, out_buffer.pos)[:]
+
+    def readinto1(self, b):
+        if self._closed:
+            raise ValueError('stream is closed')
+
+        if self._finished_output:
+            return 0
+
+        # TODO use writable=True once we require CFFI >= 1.12.
+        dest_buffer = ffi.from_buffer(b)
+        ffi.memmove(b, b'', 0)
+
+        out_buffer = ffi.new('ZSTD_outBuffer *')
+        out_buffer.dst = dest_buffer
+        out_buffer.size = len(dest_buffer)
+        out_buffer.pos = 0
+
+        while not self._finished_input and not self._finished_output:
+            self._read_input()
+            self._decompress_into_buffer(out_buffer)
+
+            if out_buffer.pos:
+                break
+
+        self._bytes_decompressed += out_buffer.pos
+        return out_buffer.pos
+
+    def seek(self, pos, whence=os.SEEK_SET):
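+        # Seeking is emulated by reading forward and discarding decompressed
+        # output, so only forward seeks are supported; SEEK_END is rejected
+        # because the decompressed length isn't known in advance.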
+        if self._closed:
+            raise ValueError('stream is closed')
+
+        read_amount = 0
+
+        if whence == os.SEEK_SET:
+            if pos < 0:
+                raise ValueError('cannot seek to negative position with SEEK_SET')
+
+            if pos < self._bytes_decompressed:
+                raise ValueError('cannot seek zstd decompression stream '
+                                 'backwards')
+
+            read_amount = pos - self._bytes_decompressed
+
+        elif whence == os.SEEK_CUR:
+            if pos < 0:
+                raise ValueError('cannot seek zstd decompression stream '
+                                 'backwards')
+
+            read_amount = pos
+        elif whence == os.SEEK_END:
+            raise ValueError('zstd decompression streams cannot be seeked '
+                             'with SEEK_END')
+
+        while read_amount:
+            result = self.read(min(read_amount,
+                                   DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE))
+
+            if not result:
+                break
+
+            read_amount -= len(result)
+
+        return self._bytes_decompressed
+
+
+class ZstdDecompressionWriter(object):
+    def __init__(self, decompressor, writer, write_size, write_return_read):
+        decompressor._ensure_dctx()
+
+        self._decompressor = decompressor
+        self._writer = writer
+        self._write_size = write_size
+        self._write_return_read = bool(write_return_read)
+        self._entered = False
+        self._closed = False
+
+    def __enter__(self):
+        if self._closed:
+            raise ValueError('stream is closed')
+
+        if self._entered:
+            raise ZstdError('cannot __enter__ multiple times')
+
+        self._entered = True
+
+        return self
+
+    def __exit__(self, exc_type, exc_value, exc_tb):
+        self._entered = False
+        self.close()
+
+    def memory_size(self):
+        return lib.ZSTD_sizeof_DCtx(self._decompressor._dctx)
+
+    def close(self):
+        if self._closed:
+            return
+
+        try:
+            self.flush()
+        finally:
+            self._closed = True
+
+        f = getattr(self._writer, 'close', None)
+        if f:
+            f()
+
+    @property
+    def closed(self):
+        return self._closed
+
+    def fileno(self):
+        f = getattr(self._writer, 'fileno', None)
+        if f:
+            return f()
+        else:
+            raise OSError('fileno not available on underlying writer')
+
+    def flush(self):
+        if self._closed:
+            raise ValueError('stream is closed')
+
+        f = getattr(self._writer, 'flush', None)
+        if f:
+            return f()
+
+    def isatty(self):
+        return False
+
+    def readable(self):
+        return False
+
+    def readline(self, size=-1):
+        raise io.UnsupportedOperation()
+
+    def readlines(self, hint=-1):
+        raise io.UnsupportedOperation()
+
+    def seek(self, offset, whence=None):
+        raise io.UnsupportedOperation()
+
+    def seekable(self):
+        return False
+
+    def tell(self):
+        raise io.UnsupportedOperation()
+
+    def truncate(self, size=None):
+        raise io.UnsupportedOperation()
+
+    def writable(self):
+        return True
+
+    def writelines(self, lines):
+        raise io.UnsupportedOperation()
+
+    def read(self, size=-1):
+        raise io.UnsupportedOperation()
+
+    def readall(self):
+        raise io.UnsupportedOperation()
+
+    def readinto(self, b):
+        raise io.UnsupportedOperation()
+
+    def write(self, data):
+        if self._closed:
+            raise ValueError('stream is closed')
+
+        total_write = 0
+
+        in_buffer = ffi.new('ZSTD_inBuffer *')
+        out_buffer = ffi.new('ZSTD_outBuffer *')
+
+        data_buffer = ffi.from_buffer(data)
+        in_buffer.src = data_buffer
+        in_buffer.size = len(data_buffer)
+        in_buffer.pos = 0
+
+        dst_buffer = ffi.new('char[]', self._write_size)
+        out_buffer.dst = dst_buffer
+        out_buffer.size = len(dst_buffer)
+        out_buffer.pos = 0
+
+        dctx = self._decompressor._dctx
+
+        while in_buffer.pos < in_buffer.size:
+            zresult = lib.ZSTD_decompressStream(dctx, out_buffer, in_buffer)
+            if lib.ZSTD_isError(zresult):
+                raise ZstdError('zstd decompress error: %s' %
+                                _zstd_error(zresult))
+
+            if out_buffer.pos:
+                self._writer.write(ffi.buffer(out_buffer.dst, out_buffer.pos)[:])
+                total_write += out_buffer.pos
+                out_buffer.pos = 0
+
+        if self._write_return_read:
+            return in_buffer.pos
+        else:
+            return total_write
+
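+    # Note on the return value: with the default write_return_read=False,
+    # write() returns the number of decompressed bytes flushed to the inner
+    # writer; with write_return_read=True it returns the number of
+    # *compressed* bytes consumed (len(data), since the loop above runs
+    # until the input buffer is exhausted), matching io.RawIOBase.write().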
+
+class ZstdDecompressor(object):
+    def __init__(self, dict_data=None, max_window_size=0, format=FORMAT_ZSTD1):
+        self._dict_data = dict_data
+        self._max_window_size = max_window_size
+        self._format = format
+
+        dctx = lib.ZSTD_createDCtx()
+        if dctx == ffi.NULL:
+            raise MemoryError()
+
+        self._dctx = dctx
+
+        # Defer setting up garbage collection until full state is loaded so
+        # the memory size is more accurate.
+        try:
+            self._ensure_dctx()
+        finally:
+            self._dctx = ffi.gc(dctx, lib.ZSTD_freeDCtx,
+                                size=lib.ZSTD_sizeof_DCtx(dctx))
+
+    def memory_size(self):
+        return lib.ZSTD_sizeof_DCtx(self._dctx)
+
+    def decompress(self, data, max_output_size=0):
+        self._ensure_dctx()
+
+        data_buffer = ffi.from_buffer(data)
+
+        output_size = lib.ZSTD_getFrameContentSize(data_buffer, len(data_buffer))
+
+        if output_size == lib.ZSTD_CONTENTSIZE_ERROR:
+            raise ZstdError('error determining content size from frame header')
+        elif output_size == 0:
+            return b''
+        elif output_size == lib.ZSTD_CONTENTSIZE_UNKNOWN:
+            if not max_output_size:
+                raise ZstdError('could not determine content size in frame header')
+
+            result_buffer = ffi.new('char[]', max_output_size)
+            result_size = max_output_size
+            output_size = 0
+        else:
+            result_buffer = ffi.new('char[]', output_size)
+            result_size = output_size
+
+        out_buffer = ffi.new('ZSTD_outBuffer *')
+        out_buffer.dst = result_buffer
+        out_buffer.size = result_size
+        out_buffer.pos = 0
+
+        in_buffer = ffi.new('ZSTD_inBuffer *')
+        in_buffer.src = data_buffer
+        in_buffer.size = len(data_buffer)
+        in_buffer.pos = 0
+
+        zresult = lib.ZSTD_decompressStream(self._dctx, out_buffer, in_buffer)
+        if lib.ZSTD_isError(zresult):
+            raise ZstdError('decompression error: %s' %
+                            _zstd_error(zresult))
+        elif zresult:
+            raise ZstdError('decompression error: did not decompress full frame')
+        elif output_size and out_buffer.pos != output_size:
+            raise ZstdError('decompression error: decompressed %d bytes; expected %d' %
+                            (out_buffer.pos, output_size))
+
+        return ffi.buffer(result_buffer, out_buffer.pos)[:]
+
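+    # Illustrative one-shot use (assumes the package imports as
+    # ``zstandard``). Frames that do not record their content size need an
+    # explicit max_output_size, as handled above:
+    #
+    #   import zstandard
+    #   frame = zstandard.ZstdCompressor().compress(b'hi')
+    #   assert zstandard.ZstdDecompressor().decompress(frame) == b'hi'
+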
+    def stream_reader(self, source, read_size=DECOMPRESSION_RECOMMENDED_INPUT_SIZE,
+                      read_across_frames=False):
+        self._ensure_dctx()
+        return ZstdDecompressionReader(self, source, read_size, read_across_frames)
+
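+    # Illustrative streaming read (``dctx`` is a ZstdDecompressor and
+    # ``data.zst`` a hypothetical file); the reader stops at the first
+    # frame boundary unless read_across_frames=True:
+    #
+    #   with dctx.stream_reader(open('data.zst', 'rb')) as reader:
+    #       while True:
+    #           chunk = reader.read(32768)
+    #           if not chunk:
+    #               break
+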
+    def decompressobj(self, write_size=DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE):
+        if write_size < 1:
+            raise ValueError('write_size must be positive')
+
+        self._ensure_dctx()
+        return ZstdDecompressionObj(self, write_size=write_size)
+
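+    # Sketch of the zlib-style incremental API (``compressed_chunks`` is a
+    # hypothetical iterable of bytes):
+    #
+    #   dobj = zstandard.ZstdDecompressor().decompressobj()
+    #   out = b''.join(dobj.decompress(chunk) for chunk in compressed_chunks)
+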
+    def read_to_iter(self, reader, read_size=DECOMPRESSION_RECOMMENDED_INPUT_SIZE,
+                     write_size=DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE,
+                     skip_bytes=0):
+        if skip_bytes >= read_size:
+            raise ValueError('skip_bytes must be smaller than read_size')
+
+        if hasattr(reader, 'read'):
+            have_read = True
+        elif hasattr(reader, '__getitem__'):
+            have_read = False
+            buffer_offset = 0
+            size = len(reader)
+        else:
+            raise ValueError('must pass an object with a read() method or '
+                             'one that conforms to the buffer protocol')
+
+        if skip_bytes:
+            if have_read:
+                reader.read(skip_bytes)
+            else:
+                if skip_bytes > size:
+                    raise ValueError('skip_bytes larger than first input chunk')
+
+                buffer_offset = skip_bytes
+
+        self._ensure_dctx()
+
+        in_buffer = ffi.new('ZSTD_inBuffer *')
+        out_buffer = ffi.new('ZSTD_outBuffer *')
+
+        dst_buffer = ffi.new('char[]', write_size)
+        out_buffer.dst = dst_buffer
+        out_buffer.size = len(dst_buffer)
+        out_buffer.pos = 0
+
+        while True:
+            assert out_buffer.pos == 0
+
+            if have_read:
+                read_result = reader.read(read_size)
+            else:
+                remaining = size - buffer_offset
+                slice_size = min(remaining, read_size)
+                read_result = reader[buffer_offset:buffer_offset + slice_size]
+                buffer_offset += slice_size
+
+            # No new input. Break out of read loop.
+            if not read_result:
+                break
+
+            # Feed all read data into decompressor and emit output until
+            # exhausted.
+            read_buffer = ffi.from_buffer(read_result)
+            in_buffer.src = read_buffer
+            in_buffer.size = len(read_buffer)
+            in_buffer.pos = 0
+
+            while in_buffer.pos < in_buffer.size:
+                assert out_buffer.pos == 0
+
+                zresult = lib.ZSTD_decompressStream(self._dctx, out_buffer, in_buffer)
+                if lib.ZSTD_isError(zresult):
+                    raise ZstdError('zstd decompress error: %s' %
+                                    _zstd_error(zresult))
+
+                if out_buffer.pos:
+                    data = ffi.buffer(out_buffer.dst, out_buffer.pos)[:]
+                    out_buffer.pos = 0
+                    yield data
+
+                if zresult == 0:
+                    return
+
+            # Repeat loop to collect more input data.
+            continue
+
+        # If we get here, input is exhausted.
+
+    read_from = read_to_iter
+
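+    # Illustrative iteration (``dctx``, ``fh`` and ``handle`` are
+    # hypothetical): each pass pulls up to read_size compressed bytes from
+    # fh and yields decompressed output as it becomes available:
+    #
+    #   for chunk in dctx.read_to_iter(fh):
+    #       handle(chunk)
+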
+    def stream_writer(self, writer, write_size=DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE,
+                      write_return_read=False):
+        if not hasattr(writer, 'write'):
+            raise ValueError('must pass an object with a write() method')
+
+        return ZstdDecompressionWriter(self, writer, write_size,
+                                       write_return_read)
+
+    write_to = stream_writer
+
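+    # Illustrative use (assumes ``zstandard`` import; ``frame`` is a
+    # hypothetical zstd frame). Decompressed output lands on the wrapped
+    # writer; read results before the context exits, since close() also
+    # closes the inner writer:
+    #
+    #   import io, zstandard
+    #   out = io.BytesIO()
+    #   with zstandard.ZstdDecompressor().stream_writer(out) as w:
+    #       w.write(frame)
+    #       result = out.getvalue()
+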
+    def copy_stream(self, ifh, ofh,
+                    read_size=DECOMPRESSION_RECOMMENDED_INPUT_SIZE,
+                    write_size=DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE):
+        if not hasattr(ifh, 'read'):
+            raise ValueError('first argument must have a read() method')
+        if not hasattr(ofh, 'write'):
+            raise ValueError('second argument must have a write() method')
+
+        self._ensure_dctx()
+
+        in_buffer = ffi.new('ZSTD_inBuffer *')
+        out_buffer = ffi.new('ZSTD_outBuffer *')
+
+        dst_buffer = ffi.new('char[]', write_size)
+        out_buffer.dst = dst_buffer
+        out_buffer.size = write_size
+        out_buffer.pos = 0
+
+        total_read, total_write = 0, 0
+
+        # Read all available input.
+        while True:
+            data = ifh.read(read_size)
+            if not data:
+                break
+
+            data_buffer = ffi.from_buffer(data)
+            total_read += len(data_buffer)
+            in_buffer.src = data_buffer
+            in_buffer.size = len(data_buffer)
+            in_buffer.pos = 0
+
+            # Flush all read data to output.
+            while in_buffer.pos < in_buffer.size:
+                zresult = lib.ZSTD_decompressStream(self._dctx, out_buffer, in_buffer)
+                if lib.ZSTD_isError(zresult):
+                    raise ZstdError('zstd decompressor error: %s' %
+                                    _zstd_error(zresult))
+
+                if out_buffer.pos:
+                    ofh.write(ffi.buffer(out_buffer.dst, out_buffer.pos))
+                    total_write += out_buffer.pos
+                    out_buffer.pos = 0
+
+            # Continue loop to keep reading.
+
+        return total_read, total_write
+
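+    # Illustrative piping (``ifh``/``ofh`` are hypothetical file objects
+    # opened for reading and writing respectively):
+    #
+    #   read_count, write_count = dctx.copy_stream(ifh, ofh)
+    #   # read_count: compressed bytes consumed; write_count: bytes written
+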
+    def decompress_content_dict_chain(self, frames):
+        if not isinstance(frames, list):
+            raise TypeError('argument must be a list')
+
+        if not frames:
+            raise ValueError('empty input chain')
+
+        # First chunk should not be using a dictionary. We handle it specially.
+        chunk = frames[0]
+        if not isinstance(chunk, bytes_type):
+            raise ValueError('chunk 0 must be bytes')
+
+        # All chunks should be zstd frames and should have content size set.
+        chunk_buffer = ffi.from_buffer(chunk)
+        params = ffi.new('ZSTD_frameHeader *')
+        zresult = lib.ZSTD_getFrameHeader(params, chunk_buffer, len(chunk_buffer))
+        if lib.ZSTD_isError(zresult):
+            raise ValueError('chunk 0 is not a valid zstd frame')
+        elif zresult:
+            raise ValueError('chunk 0 is too small to contain a zstd frame')
+
+        if params.frameContentSize == lib.ZSTD_CONTENTSIZE_UNKNOWN:
+            raise ValueError('chunk 0 missing content size in frame')
+
+        self._ensure_dctx(load_dict=False)
+
+        last_buffer = ffi.new('char[]', params.frameContentSize)
+
+        out_buffer = ffi.new('ZSTD_outBuffer *')
+        out_buffer.dst = last_buffer
+        out_buffer.size = len(last_buffer)
+        out_buffer.pos = 0
+
+        in_buffer = ffi.new('ZSTD_inBuffer *')
+        in_buffer.src = chunk_buffer
+        in_buffer.size = len(chunk_buffer)
+        in_buffer.pos = 0
+
+        zresult = lib.ZSTD_decompressStream(self._dctx, out_buffer, in_buffer)
+        if lib.ZSTD_isError(zresult):
+            raise ZstdError('could not decompress chunk 0: %s' %
+                            _zstd_error(zresult))
+        elif zresult:
+            raise ZstdError('chunk 0 did not decompress full frame')
+
+        # Special case of chain length of 1
+        if len(frames) == 1:
+            return ffi.buffer(last_buffer, len(last_buffer))[:]
+
+        i = 1
+        while i < len(frames):
+            chunk = frames[i]
+            if not isinstance(chunk, bytes_type):
+                raise ValueError('chunk %d must be bytes' % i)
+
+            chunk_buffer = ffi.from_buffer(chunk)
+            zresult = lib.ZSTD_getFrameHeader(params, chunk_buffer, len(chunk_buffer))
+            if lib.ZSTD_isError(zresult):
+                raise ValueError('chunk %d is not a valid zstd frame' % i)
+            elif zresult:
+                raise ValueError('chunk %d is too small to contain a zstd frame' % i)
+
+            if params.frameContentSize == lib.ZSTD_CONTENTSIZE_UNKNOWN:
+                raise ValueError('chunk %d missing content size in frame' % i)
+
+            dest_buffer = ffi.new('char[]', params.frameContentSize)
+
+            out_buffer.dst = dest_buffer
+            out_buffer.size = len(dest_buffer)
+            out_buffer.pos = 0
+
+            in_buffer.src = chunk_buffer
+            in_buffer.size = len(chunk_buffer)
+            in_buffer.pos = 0
+
+            zresult = lib.ZSTD_decompressStream(self._dctx, out_buffer, in_buffer)
+            if lib.ZSTD_isError(zresult):
+                raise ZstdError('could not decompress chunk %d: %s' %
+                                _zstd_error(zresult))
+            elif zresult:
+                raise ZstdError('chunk %d did not decompress full frame' % i)
+
+            last_buffer = dest_buffer
+            i += 1
+
+        return ffi.buffer(last_buffer, len(last_buffer))[:]
+
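+    # Illustrative chain layout (``frame0``..``frame2`` hypothetical):
+    # frame N must have been compressed using the decompressed content of
+    # frame N-1 as its dictionary, and every frame must record its content
+    # size, as enforced above:
+    #
+    #   final = dctx.decompress_content_dict_chain([frame0, frame1, frame2])
+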
+    def _ensure_dctx(self, load_dict=True):
+        lib.ZSTD_DCtx_reset(self._dctx, lib.ZSTD_reset_session_only)
+
+        if self._max_window_size:
+            zresult = lib.ZSTD_DCtx_setMaxWindowSize(self._dctx,
+                                                     self._max_window_size)
+            if lib.ZSTD_isError(zresult):
+                raise ZstdError('unable to set max window size: %s' %
+                                _zstd_error(zresult))
+
+        zresult = lib.ZSTD_DCtx_setFormat(self._dctx, self._format)
+        if lib.ZSTD_isError(zresult):
+            raise ZstdError('unable to set decoding format: %s' %
+                            _zstd_error(zresult))
+
+        if self._dict_data and load_dict:
+            zresult = lib.ZSTD_DCtx_refDDict(self._dctx, self._dict_data._ddict)
+            if lib.ZSTD_isError(zresult):
+                raise ZstdError('unable to reference prepared dictionary: %s' %
+                                _zstd_error(zresult))
--- a/contrib/python-zstandard/zstd.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd.c	Wed Apr 17 13:41:18 2019 -0400
@@ -210,7 +210,7 @@
 	   We detect this mismatch here and refuse to load the module if this
 	   scenario is detected.
 	*/
-	if (ZSTD_VERSION_NUMBER != 10306 || ZSTD_versionNumber() != 10306) {
+	if (ZSTD_VERSION_NUMBER != 10308 || ZSTD_versionNumber() != 10308) {
 		PyErr_SetString(PyExc_ImportError, "zstd C API mismatch; Python bindings not compiled against expected zstd version");
 		return;
 	}
--- a/contrib/python-zstandard/zstd/common/bitstream.h	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/common/bitstream.h	Wed Apr 17 13:41:18 2019 -0400
@@ -339,17 +339,10 @@
 
 MEM_STATIC size_t BIT_getMiddleBits(size_t bitContainer, U32 const start, U32 const nbBits)
 {
-#if defined(__BMI__) && defined(__GNUC__) && __GNUC__*1000+__GNUC_MINOR__ >= 4008  /* experimental */
-#  if defined(__x86_64__)
-    if (sizeof(bitContainer)==8)
-        return _bextr_u64(bitContainer, start, nbBits);
-    else
-#  endif
-        return _bextr_u32(bitContainer, start, nbBits);
-#else
+    U32 const regMask = sizeof(bitContainer)*8 - 1;
+    /* if start > regMask, bitstream is corrupted, and result is undefined */
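+    /* e.g. with a 64-bit bitContainer, regMask is 63: shifting by
+     * start >= 64 would be undefined behaviour in C, whereas the masked
+     * shift (start & regMask) stays within the register width and merely
+     * yields a meaningless value for an already-corrupted stream. */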
     assert(nbBits < BIT_MASK_SIZE);
-    return (bitContainer >> start) & BIT_mask[nbBits];
-#endif
+    return (bitContainer >> (start & regMask)) & BIT_mask[nbBits];
 }
 
 MEM_STATIC size_t BIT_getLowerBits(size_t bitContainer, U32 const nbBits)
@@ -366,9 +359,13 @@
  * @return : value extracted */
 MEM_STATIC size_t BIT_lookBits(const BIT_DStream_t* bitD, U32 nbBits)
 {
-#if defined(__BMI__) && defined(__GNUC__)   /* experimental; fails if bitD->bitsConsumed + nbBits > sizeof(bitD->bitContainer)*8 */
+    /* arbitrate between double-shift and shift+mask */
+#if 1
+    /* if bitD->bitsConsumed + nbBits > sizeof(bitD->bitContainer)*8,
+     * bitstream is likely corrupted, and result is undefined */
     return BIT_getMiddleBits(bitD->bitContainer, (sizeof(bitD->bitContainer)*8) - bitD->bitsConsumed - nbBits, nbBits);
 #else
+    /* this code path is slower on my os-x laptop */
     U32 const regMask = sizeof(bitD->bitContainer)*8 - 1;
     return ((bitD->bitContainer << (bitD->bitsConsumed & regMask)) >> 1) >> ((regMask-nbBits) & regMask);
 #endif
@@ -392,7 +389,7 @@
  *  Read (consume) next n bits from local register and update.
  *  Pay attention to not read more than nbBits contained into local register.
  * @return : extracted value. */
-MEM_STATIC size_t BIT_readBits(BIT_DStream_t* bitD, U32 nbBits)
+MEM_STATIC size_t BIT_readBits(BIT_DStream_t* bitD, unsigned nbBits)
 {
     size_t const value = BIT_lookBits(bitD, nbBits);
     BIT_skipBits(bitD, nbBits);
@@ -401,7 +398,7 @@
 
 /*! BIT_readBitsFast() :
  *  unsafe version; only works only if nbBits >= 1 */
-MEM_STATIC size_t BIT_readBitsFast(BIT_DStream_t* bitD, U32 nbBits)
+MEM_STATIC size_t BIT_readBitsFast(BIT_DStream_t* bitD, unsigned nbBits)
 {
     size_t const value = BIT_lookBitsFast(bitD, nbBits);
     assert(nbBits >= 1);
--- a/contrib/python-zstandard/zstd/common/compiler.h	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/common/compiler.h	Wed Apr 17 13:41:18 2019 -0400
@@ -15,6 +15,8 @@
 *  Compiler specifics
 *********************************************************/
 /* force inlining */
+
+#if !defined(ZSTD_NO_INLINE)
 #if defined (__GNUC__) || defined(__cplusplus) || defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L   /* C99 */
 #  define INLINE_KEYWORD inline
 #else
@@ -29,6 +31,13 @@
 #  define FORCE_INLINE_ATTR
 #endif
 
+#else
+
+#define INLINE_KEYWORD
+#define FORCE_INLINE_ATTR
+
+#endif
+
 /**
  * FORCE_INLINE_TEMPLATE is used to define C "templates", which take constant
  * parameters. They must be inlined for the compiler to eliminate the constant
@@ -89,23 +98,21 @@
 #endif
 
 /* prefetch
- * can be disabled, by declaring NO_PREFETCH macro
- * All prefetch invocations use a single default locality 2,
- * generating instruction prefetcht1,
- * which, according to Intel, means "load data into L2 cache".
- * This is a good enough "middle ground" for the time being,
- * though in theory, it would be better to specialize locality depending on data being prefetched.
- * Tests could not determine any sensible difference based on locality value. */
+ * can be disabled, by declaring NO_PREFETCH build macro */
 #if defined(NO_PREFETCH)
-#  define PREFETCH(ptr)     (void)(ptr)  /* disabled */
+#  define PREFETCH_L1(ptr)  (void)(ptr)  /* disabled */
+#  define PREFETCH_L2(ptr)  (void)(ptr)  /* disabled */
 #else
 #  if defined(_MSC_VER) && (defined(_M_X64) || defined(_M_I86))  /* _mm_prefetch() is not defined outside of x86/x64 */
 #    include <mmintrin.h>   /* https://msdn.microsoft.com/fr-fr/library/84szxsww(v=vs.90).aspx */
-#    define PREFETCH(ptr)   _mm_prefetch((const char*)(ptr), _MM_HINT_T1)
+#    define PREFETCH_L1(ptr)  _mm_prefetch((const char*)(ptr), _MM_HINT_T0)
+#    define PREFETCH_L2(ptr)  _mm_prefetch((const char*)(ptr), _MM_HINT_T1)
 #  elif defined(__GNUC__) && ( (__GNUC__ >= 4) || ( (__GNUC__ == 3) && (__GNUC_MINOR__ >= 1) ) )
-#    define PREFETCH(ptr)   __builtin_prefetch((ptr), 0 /* rw==read */, 2 /* locality */)
+#    define PREFETCH_L1(ptr)  __builtin_prefetch((ptr), 0 /* rw==read */, 3 /* locality */)
+#    define PREFETCH_L2(ptr)  __builtin_prefetch((ptr), 0 /* rw==read */, 2 /* locality */)
 #  else
-#    define PREFETCH(ptr)   (void)(ptr)  /* disabled */
+#    define PREFETCH_L1(ptr) (void)(ptr)  /* disabled */
+#    define PREFETCH_L2(ptr) (void)(ptr)  /* disabled */
 #  endif
 #endif  /* NO_PREFETCH */
 
@@ -116,7 +123,7 @@
     size_t const _size = (size_t)(s);     \
     size_t _pos;                          \
     for (_pos=0; _pos<_size; _pos+=CACHELINE_SIZE) {  \
-        PREFETCH(_ptr + _pos);            \
+        PREFETCH_L2(_ptr + _pos);         \
     }                                     \
 }
 
--- a/contrib/python-zstandard/zstd/common/cpu.h	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/common/cpu.h	Wed Apr 17 13:41:18 2019 -0400
@@ -78,7 +78,7 @@
       __asm__(
           "pushl %%ebx\n\t"
           "cpuid\n\t"
-          "movl %%ebx, %%eax\n\r"
+          "movl %%ebx, %%eax\n\t"
           "popl %%ebx"
           : "=a"(f7b), "=c"(f7c)
           : "a"(7), "c"(0)
--- a/contrib/python-zstandard/zstd/common/debug.h	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/common/debug.h	Wed Apr 17 13:41:18 2019 -0400
@@ -57,9 +57,9 @@
 #endif
 
 
-/* static assert is triggered at compile time, leaving no runtime artefact,
- * but can only work with compile-time constants.
- * This variant can only be used inside a function. */
+/* static assert is triggered at compile time, leaving no runtime artefact.
+ * static assert only works with compile-time constants.
+ * Also, this variant can only be used inside a function. */
 #define DEBUG_STATIC_ASSERT(c) (void)sizeof(char[(c) ? 1 : -1])
 
 
@@ -70,9 +70,19 @@
 #  define DEBUGLEVEL 0
 #endif
 
+
+/* DEBUGFILE can be defined externally,
+ * typically through compiler command line.
+ * note : currently useless.
+ * Value must be stderr or stdout */
+#ifndef DEBUGFILE
+#  define DEBUGFILE stderr
+#endif
+
+
 /* recommended values for DEBUGLEVEL :
- * 0 : no debug, all run-time functions disabled
- * 1 : no display, enables assert() only
+ * 0 : release mode, no debug, all run-time checks disabled
+ * 1 : enables assert() only, no display
  * 2 : reserved, for currently active debug path
  * 3 : events once per object lifetime (CCtx, CDict, etc.)
  * 4 : events once per frame
@@ -81,7 +91,7 @@
  * 7+: events at every position (*very* verbose)
  *
  * It's generally inconvenient to output traces > 5.
- * In which case, it's possible to selectively enable higher verbosity levels
+ * In which case, it's possible to selectively trigger high verbosity levels
  * by modifying g_debuglevel.
  */
 
@@ -95,11 +105,12 @@
 
 #if (DEBUGLEVEL>=2)
 #  include <stdio.h>
-extern int g_debuglevel; /* here, this variable is only declared,
-                           it actually lives in debug.c,
-                           and is shared by the whole process.
-                           It's typically used to enable very verbose levels
-                           on selective conditions (such as position in src) */
+extern int g_debuglevel; /* the variable is only declared,
+                            it actually lives in debug.c,
+                            and is shared by the whole process.
+                            It's not thread-safe.
+                            It's useful when enabling very verbose levels
+                            on selective conditions (such as position in src) */
 
 #  define RAWLOG(l, ...) {                                      \
                 if (l<=g_debuglevel) {                          \
--- a/contrib/python-zstandard/zstd/common/error_private.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/common/error_private.c	Wed Apr 17 13:41:18 2019 -0400
@@ -14,6 +14,10 @@
 
 const char* ERR_getErrorString(ERR_enum code)
 {
+#ifdef ZSTD_STRIP_ERROR_STRINGS
+    (void)code;
+    return "Error strings stripped";
+#else
     static const char* const notErrorCode = "Unspecified error code";
     switch( code )
     {
@@ -39,10 +43,12 @@
     case PREFIX(dictionaryCreation_failed): return "Cannot create Dictionary from provided samples";
     case PREFIX(dstSize_tooSmall): return "Destination buffer is too small";
     case PREFIX(srcSize_wrong): return "Src size is incorrect";
+    case PREFIX(dstBuffer_null): return "Operation on NULL destination buffer";
         /* following error codes are not stable and may be removed or changed in a future version */
     case PREFIX(frameIndex_tooLarge): return "Frame index is too large";
     case PREFIX(seekableIO): return "An I/O error occurred when reading/seeking";
     case PREFIX(maxCode):
     default: return notErrorCode;
     }
+#endif
 }
--- a/contrib/python-zstandard/zstd/common/fse.h	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/common/fse.h	Wed Apr 17 13:41:18 2019 -0400
@@ -512,7 +512,7 @@
     const U32 tableLog = MEM_read16(ptr);
     statePtr->value = (ptrdiff_t)1<<tableLog;
     statePtr->stateTable = u16ptr+2;
-    statePtr->symbolTT = ((const U32*)ct + 1 + (tableLog ? (1<<(tableLog-1)) : 1));
+    statePtr->symbolTT = ct + 1 + (tableLog ? (1<<(tableLog-1)) : 1);
     statePtr->stateLog = tableLog;
 }
 
@@ -531,7 +531,7 @@
     }
 }
 
-MEM_STATIC void FSE_encodeSymbol(BIT_CStream_t* bitC, FSE_CState_t* statePtr, U32 symbol)
+MEM_STATIC void FSE_encodeSymbol(BIT_CStream_t* bitC, FSE_CState_t* statePtr, unsigned symbol)
 {
     FSE_symbolCompressionTransform const symbolTT = ((const FSE_symbolCompressionTransform*)(statePtr->symbolTT))[symbol];
     const U16* const stateTable = (const U16*)(statePtr->stateTable);
--- a/contrib/python-zstandard/zstd/common/huf.h	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/common/huf.h	Wed Apr 17 13:41:18 2019 -0400
@@ -173,15 +173,19 @@
 *  Advanced decompression functions
 ******************************************/
 size_t HUF_decompress4X1 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize);   /**< single-symbol decoder */
+#ifndef HUF_FORCE_DECOMPRESS_X1
 size_t HUF_decompress4X2 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize);   /**< double-symbols decoder */
+#endif
 
 size_t HUF_decompress4X_DCtx (HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize);   /**< decodes RLE and uncompressed */
 size_t HUF_decompress4X_hufOnly(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize); /**< considers RLE and uncompressed as errors */
 size_t HUF_decompress4X_hufOnly_wksp(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize, void* workSpace, size_t wkspSize); /**< considers RLE and uncompressed as errors */
 size_t HUF_decompress4X1_DCtx(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize);   /**< single-symbol decoder */
 size_t HUF_decompress4X1_DCtx_wksp(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize, void* workSpace, size_t wkspSize);   /**< single-symbol decoder */
+#ifndef HUF_FORCE_DECOMPRESS_X1
 size_t HUF_decompress4X2_DCtx(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize);   /**< double-symbols decoder */
 size_t HUF_decompress4X2_DCtx_wksp(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize, void* workSpace, size_t wkspSize);   /**< double-symbols decoder */
+#endif
 
 
 /* ****************************************
@@ -228,7 +232,7 @@
 #define HUF_CTABLE_WORKSPACE_SIZE_U32 (2*HUF_SYMBOLVALUE_MAX +1 +1)
 #define HUF_CTABLE_WORKSPACE_SIZE (HUF_CTABLE_WORKSPACE_SIZE_U32 * sizeof(unsigned))
 size_t HUF_buildCTable_wksp (HUF_CElt* tree,
-                       const U32* count, U32 maxSymbolValue, U32 maxNbBits,
+                       const unsigned* count, U32 maxSymbolValue, U32 maxNbBits,
                              void* workSpace, size_t wkspSize);
 
 /*! HUF_readStats() :
@@ -277,14 +281,22 @@
 #define HUF_DECOMPRESS_WORKSPACE_SIZE (2 << 10)
 #define HUF_DECOMPRESS_WORKSPACE_SIZE_U32 (HUF_DECOMPRESS_WORKSPACE_SIZE / sizeof(U32))
 
+#ifndef HUF_FORCE_DECOMPRESS_X2
 size_t HUF_readDTableX1 (HUF_DTable* DTable, const void* src, size_t srcSize);
 size_t HUF_readDTableX1_wksp (HUF_DTable* DTable, const void* src, size_t srcSize, void* workSpace, size_t wkspSize);
+#endif
+#ifndef HUF_FORCE_DECOMPRESS_X1
 size_t HUF_readDTableX2 (HUF_DTable* DTable, const void* src, size_t srcSize);
 size_t HUF_readDTableX2_wksp (HUF_DTable* DTable, const void* src, size_t srcSize, void* workSpace, size_t wkspSize);
+#endif
 
 size_t HUF_decompress4X_usingDTable(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const HUF_DTable* DTable);
+#ifndef HUF_FORCE_DECOMPRESS_X2
 size_t HUF_decompress4X1_usingDTable(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const HUF_DTable* DTable);
+#endif
+#ifndef HUF_FORCE_DECOMPRESS_X1
 size_t HUF_decompress4X2_usingDTable(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const HUF_DTable* DTable);
+#endif
 
 
 /* ====================== */
@@ -306,24 +318,36 @@
                        HUF_CElt* hufTable, HUF_repeat* repeat, int preferRepeat, int bmi2);
 
 size_t HUF_decompress1X1 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize);   /* single-symbol decoder */
+#ifndef HUF_FORCE_DECOMPRESS_X1
 size_t HUF_decompress1X2 (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize);   /* double-symbol decoder */
+#endif
 
 size_t HUF_decompress1X_DCtx (HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize);
 size_t HUF_decompress1X_DCtx_wksp (HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize, void* workSpace, size_t wkspSize);
+#ifndef HUF_FORCE_DECOMPRESS_X2
 size_t HUF_decompress1X1_DCtx(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize);   /**< single-symbol decoder */
 size_t HUF_decompress1X1_DCtx_wksp(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize, void* workSpace, size_t wkspSize);   /**< single-symbol decoder */
+#endif
+#ifndef HUF_FORCE_DECOMPRESS_X1
 size_t HUF_decompress1X2_DCtx(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize);   /**< double-symbols decoder */
 size_t HUF_decompress1X2_DCtx_wksp(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize, void* workSpace, size_t wkspSize);   /**< double-symbols decoder */
+#endif
 
 size_t HUF_decompress1X_usingDTable(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const HUF_DTable* DTable);   /**< automatic selection of single or double symbol decoder, based on DTable */
+#ifndef HUF_FORCE_DECOMPRESS_X2
 size_t HUF_decompress1X1_usingDTable(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const HUF_DTable* DTable);
+#endif
+#ifndef HUF_FORCE_DECOMPRESS_X1
 size_t HUF_decompress1X2_usingDTable(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const HUF_DTable* DTable);
+#endif
 
 /* BMI2 variants.
  * If the CPU has BMI2 support, pass bmi2=1, otherwise pass bmi2=0.
  */
 size_t HUF_decompress1X_usingDTable_bmi2(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const HUF_DTable* DTable, int bmi2);
+#ifndef HUF_FORCE_DECOMPRESS_X2
 size_t HUF_decompress1X1_DCtx_wksp_bmi2(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize, void* workSpace, size_t wkspSize, int bmi2);
+#endif
 size_t HUF_decompress4X_usingDTable_bmi2(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const HUF_DTable* DTable, int bmi2);
 size_t HUF_decompress4X_hufOnly_wksp_bmi2(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize, void* workSpace, size_t wkspSize, int bmi2);
 
--- a/contrib/python-zstandard/zstd/common/mem.h	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/common/mem.h	Wed Apr 17 13:41:18 2019 -0400
@@ -39,6 +39,10 @@
 #  define MEM_STATIC static  /* this version may generate warnings for unused static functions; disable the relevant warning */
 #endif
 
+#ifndef __has_builtin
+#  define __has_builtin(x) 0  /* compat. with non-clang compilers */
+#endif
+
 /* code only tested on 32 and 64 bits systems */
 #define MEM_STATIC_ASSERT(c)   { enum { MEM_static_assert = 1/(int)(!!(c)) }; }
 MEM_STATIC void MEM_check(void) { MEM_STATIC_ASSERT((sizeof(size_t)==4) || (sizeof(size_t)==8)); }
@@ -198,7 +202,8 @@
 {
 #if defined(_MSC_VER)     /* Visual Studio */
     return _byteswap_ulong(in);
-#elif defined (__GNUC__) && (__GNUC__ * 100 + __GNUC_MINOR__ >= 403)
+#elif (defined (__GNUC__) && (__GNUC__ * 100 + __GNUC_MINOR__ >= 403)) \
+  || (defined(__clang__) && __has_builtin(__builtin_bswap32))
     return __builtin_bswap32(in);
 #else
     return  ((in << 24) & 0xff000000 ) |
@@ -212,7 +217,8 @@
 {
 #if defined(_MSC_VER)     /* Visual Studio */
     return _byteswap_uint64(in);
-#elif defined (__GNUC__) && (__GNUC__ * 100 + __GNUC_MINOR__ >= 403)
+#elif (defined (__GNUC__) && (__GNUC__ * 100 + __GNUC_MINOR__ >= 403)) \
+  || (defined(__clang__) && __has_builtin(__builtin_bswap64))
     return __builtin_bswap64(in);
 #else
     return  ((in << 56) & 0xff00000000000000ULL) |
--- a/contrib/python-zstandard/zstd/common/pool.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/common/pool.c	Wed Apr 17 13:41:18 2019 -0400
@@ -88,8 +88,8 @@
             ctx->numThreadsBusy++;
             ctx->queueEmpty = ctx->queueHead == ctx->queueTail;
             /* Unlock the mutex, signal a pusher, and run the job */
+            ZSTD_pthread_cond_signal(&ctx->queuePushCond);
             ZSTD_pthread_mutex_unlock(&ctx->queueMutex);
-            ZSTD_pthread_cond_signal(&ctx->queuePushCond);
 
             job.function(job.opaque);
 
--- a/contrib/python-zstandard/zstd/common/zstd_common.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/common/zstd_common.c	Wed Apr 17 13:41:18 2019 -0400
@@ -30,8 +30,10 @@
 /*-****************************************
 *  ZSTD Error Management
 ******************************************/
+#undef ZSTD_isError   /* defined within zstd_internal.h */
 /*! ZSTD_isError() :
- *  tells if a return value is an error code */
+ *  tells if a return value is an error code
+ *  symbol is required for external callers */
 unsigned ZSTD_isError(size_t code) { return ERR_isError(code); }
 
 /*! ZSTD_getErrorName() :
--- a/contrib/python-zstandard/zstd/common/zstd_errors.h	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/common/zstd_errors.h	Wed Apr 17 13:41:18 2019 -0400
@@ -72,6 +72,7 @@
   ZSTD_error_workSpace_tooSmall= 66,
   ZSTD_error_dstSize_tooSmall = 70,
   ZSTD_error_srcSize_wrong    = 72,
+  ZSTD_error_dstBuffer_null   = 74,
   /* following error codes are __NOT STABLE__, they can be removed or changed in future versions */
   ZSTD_error_frameIndex_tooLarge = 100,
   ZSTD_error_seekableIO          = 102,
--- a/contrib/python-zstandard/zstd/common/zstd_internal.h	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/common/zstd_internal.h	Wed Apr 17 13:41:18 2019 -0400
@@ -41,6 +41,9 @@
 
 /* ---- static assert (debug) --- */
 #define ZSTD_STATIC_ASSERT(c) DEBUG_STATIC_ASSERT(c)
+#define ZSTD_isError ERR_isError   /* for inlining */
+#define FSE_isError  ERR_isError
+#define HUF_isError  ERR_isError
 
 
 /*-*************************************
@@ -75,7 +78,6 @@
 #define BIT0   1
 
 #define ZSTD_WINDOWLOG_ABSOLUTEMIN 10
-#define ZSTD_WINDOWLOG_DEFAULTMAX 27 /* Default maximum allowed window log */
 static const size_t ZSTD_fcs_fieldSize[4] = { 0, 2, 4, 8 };
 static const size_t ZSTD_did_fieldSize[4] = { 0, 1, 2, 4 };
 
@@ -242,7 +244,7 @@
     blockType_e blockType;
     U32 lastBlock;
     U32 origSize;
-} blockProperties_t;
+} blockProperties_t;   /* declared here for decompress and fullbench */
 
 /*! ZSTD_getcBlockSize() :
  *  Provides the size of compressed block from block header `src` */
@@ -250,6 +252,13 @@
 size_t ZSTD_getcBlockSize(const void* src, size_t srcSize,
                           blockProperties_t* bpPtr);
 
+/*! ZSTD_decodeSeqHeaders() :
+ *  decode sequence header from src */
+/* Used by: decompress, fullbench (does not get its definition from here) */
+size_t ZSTD_decodeSeqHeaders(ZSTD_DCtx* dctx, int* nbSeqPtr,
+                       const void* src, size_t srcSize);
+
+
 #if defined (__cplusplus)
 }
 #endif
--- a/contrib/python-zstandard/zstd/compress/fse_compress.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/compress/fse_compress.c	Wed Apr 17 13:41:18 2019 -0400
@@ -115,7 +115,7 @@
     /* symbol start positions */
     {   U32 u;
         cumul[0] = 0;
-        for (u=1; u<=maxSymbolValue+1; u++) {
+        for (u=1; u <= maxSymbolValue+1; u++) {
             if (normalizedCounter[u-1]==-1) {  /* Low proba symbol */
                 cumul[u] = cumul[u-1] + 1;
                 tableSymbol[highThreshold--] = (FSE_FUNCTION_TYPE)(u-1);
@@ -658,7 +658,7 @@
     BYTE* op = ostart;
     BYTE* const oend = ostart + dstSize;
 
-    U32   count[FSE_MAX_SYMBOL_VALUE+1];
+    unsigned count[FSE_MAX_SYMBOL_VALUE+1];
     S16   norm[FSE_MAX_SYMBOL_VALUE+1];
     FSE_CTable* CTable = (FSE_CTable*)workSpace;
     size_t const CTableSize = FSE_CTABLE_SIZE_U32(tableLog, maxSymbolValue);
@@ -672,7 +672,7 @@
     if (!tableLog) tableLog = FSE_DEFAULT_TABLELOG;
 
     /* Scan input and build symbol stats */
-    {   CHECK_V_F(maxCount, HIST_count_wksp(count, &maxSymbolValue, src, srcSize, (unsigned*)scratchBuffer) );
+    {   CHECK_V_F(maxCount, HIST_count_wksp(count, &maxSymbolValue, src, srcSize, scratchBuffer, scratchBufferSize) );
         if (maxCount == srcSize) return 1;   /* only a single symbol in src : rle */
         if (maxCount == 1) return 0;         /* each symbol present maximum once => not compressible */
         if (maxCount < (srcSize >> 7)) return 0;   /* Heuristic : not compressible enough */
--- a/contrib/python-zstandard/zstd/compress/hist.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/compress/hist.c	Wed Apr 17 13:41:18 2019 -0400
@@ -73,6 +73,7 @@
     return largestCount;
 }
 
+typedef enum { trustInput, checkMaxSymbolValue } HIST_checkInput_e;
 
 /* HIST_count_parallel_wksp() :
  * store histogram into 4 intermediate tables, recombined at the end.
@@ -85,8 +86,8 @@
 static size_t HIST_count_parallel_wksp(
                                 unsigned* count, unsigned* maxSymbolValuePtr,
                                 const void* source, size_t sourceSize,
-                                unsigned checkMax,
-                                unsigned* const workSpace)
+                                HIST_checkInput_e check,
+                                U32* const workSpace)
 {
     const BYTE* ip = (const BYTE*)source;
     const BYTE* const iend = ip+sourceSize;
@@ -137,7 +138,7 @@
     /* finish last symbols */
     while (ip<iend) Counting1[*ip++]++;
 
-    if (checkMax) {   /* verify stats will fit into destination table */
+    if (check) {   /* verify stats will fit into destination table */
         U32 s; for (s=255; s>maxSymbolValue; s--) {
             Counting1[s] += Counting2[s] + Counting3[s] + Counting4[s];
             if (Counting1[s]) return ERROR(maxSymbolValue_tooSmall);
@@ -157,14 +158,18 @@
 
 /* HIST_countFast_wksp() :
  * Same as HIST_countFast(), but using an externally provided scratch buffer.
- * `workSpace` size must be table of >= HIST_WKSP_SIZE_U32 unsigned */
+ * `workSpace` is a writable buffer which must be 4-bytes aligned,
+ * `workSpaceSize` must be >= HIST_WKSP_SIZE
+ */
 size_t HIST_countFast_wksp(unsigned* count, unsigned* maxSymbolValuePtr,
                           const void* source, size_t sourceSize,
-                          unsigned* workSpace)
+                          void* workSpace, size_t workSpaceSize)
 {
     if (sourceSize < 1500) /* heuristic threshold */
         return HIST_count_simple(count, maxSymbolValuePtr, source, sourceSize);
-    return HIST_count_parallel_wksp(count, maxSymbolValuePtr, source, sourceSize, 0, workSpace);
+    if ((size_t)workSpace & 3) return ERROR(GENERIC);  /* must be aligned on 4-bytes boundaries */
+    if (workSpaceSize < HIST_WKSP_SIZE) return ERROR(workSpace_tooSmall);
+    return HIST_count_parallel_wksp(count, maxSymbolValuePtr, source, sourceSize, trustInput, (U32*)workSpace);
 }
 
 /* fast variant (unsafe : won't check if src contains values beyond count[] limit) */
@@ -172,24 +177,27 @@
                      const void* source, size_t sourceSize)
 {
     unsigned tmpCounters[HIST_WKSP_SIZE_U32];
-    return HIST_countFast_wksp(count, maxSymbolValuePtr, source, sourceSize, tmpCounters);
+    return HIST_countFast_wksp(count, maxSymbolValuePtr, source, sourceSize, tmpCounters, sizeof(tmpCounters));
 }
 
 /* HIST_count_wksp() :
  * Same as HIST_count(), but using an externally provided scratch buffer.
  * `workSpace` is a writable buffer which must be 4-bytes aligned,
  * `workSpaceSize` must be >= HIST_WKSP_SIZE */
 size_t HIST_count_wksp(unsigned* count, unsigned* maxSymbolValuePtr,
-                 const void* source, size_t sourceSize, unsigned* workSpace)
+                       const void* source, size_t sourceSize,
+                       void* workSpace, size_t workSpaceSize)
 {
+    if ((size_t)workSpace & 3) return ERROR(GENERIC);  /* must be aligned on 4-bytes boundaries */
+    if (workSpaceSize < HIST_WKSP_SIZE) return ERROR(workSpace_tooSmall);
     if (*maxSymbolValuePtr < 255)
-        return HIST_count_parallel_wksp(count, maxSymbolValuePtr, source, sourceSize, 1, workSpace);
+        return HIST_count_parallel_wksp(count, maxSymbolValuePtr, source, sourceSize, checkMaxSymbolValue, (U32*)workSpace);
     *maxSymbolValuePtr = 255;
-    return HIST_countFast_wksp(count, maxSymbolValuePtr, source, sourceSize, workSpace);
+    return HIST_countFast_wksp(count, maxSymbolValuePtr, source, sourceSize, workSpace, workSpaceSize);
 }
 
 size_t HIST_count(unsigned* count, unsigned* maxSymbolValuePtr,
                  const void* src, size_t srcSize)
 {
     unsigned tmpCounters[HIST_WKSP_SIZE_U32];
-    return HIST_count_wksp(count, maxSymbolValuePtr, src, srcSize, tmpCounters);
+    return HIST_count_wksp(count, maxSymbolValuePtr, src, srcSize, tmpCounters, sizeof(tmpCounters));
 }
--- a/contrib/python-zstandard/zstd/compress/hist.h	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/compress/hist.h	Wed Apr 17 13:41:18 2019 -0400
@@ -41,11 +41,11 @@
 
 /*! HIST_count():
  *  Provides the precise count of each byte within a table 'count'.
- *  'count' is a table of unsigned int, of minimum size (*maxSymbolValuePtr+1).
+ * 'count' is a table of unsigned int, of minimum size (*maxSymbolValuePtr+1).
  *  Updates *maxSymbolValuePtr with actual largest symbol value detected.
- *  @return : count of the most frequent symbol (which isn't identified).
- *            or an error code, which can be tested using HIST_isError().
- *            note : if return == srcSize, there is only one symbol.
+ * @return : count of the most frequent symbol (which isn't identified).
+ *           or an error code, which can be tested using HIST_isError().
+ *           note : if return == srcSize, there is only one symbol.
  */
 size_t HIST_count(unsigned* count, unsigned* maxSymbolValuePtr,
                   const void* src, size_t srcSize);
@@ -56,14 +56,16 @@
 /* --- advanced histogram functions --- */
 
 #define HIST_WKSP_SIZE_U32 1024
+#define HIST_WKSP_SIZE    (HIST_WKSP_SIZE_U32 * sizeof(unsigned))
 /** HIST_count_wksp() :
  *  Same as HIST_count(), but using an externally provided scratch buffer.
  *  Benefit is this function will use very little stack space.
- * `workSpace` must be a table of unsigned of size >= HIST_WKSP_SIZE_U32
+ * `workSpace` is a writable buffer which must be 4-bytes aligned,
+ * `workSpaceSize` must be >= HIST_WKSP_SIZE
  */
 size_t HIST_count_wksp(unsigned* count, unsigned* maxSymbolValuePtr,
                        const void* src, size_t srcSize,
-                       unsigned* workSpace);
+                       void* workSpace, size_t workSpaceSize);
 
 /** HIST_countFast() :
  *  same as HIST_count(), but blindly trusts that all byte values within src are <= *maxSymbolValuePtr.
@@ -74,11 +76,12 @@
 
 /** HIST_countFast_wksp() :
  *  Same as HIST_countFast(), but using an externally provided scratch buffer.
- * `workSpace` must be a table of unsigned of size >= HIST_WKSP_SIZE_U32
+ * `workSpace` is a writable buffer which must be 4-bytes aligned,
+ * `workSpaceSize` must be >= HIST_WKSP_SIZE
  */
 size_t HIST_countFast_wksp(unsigned* count, unsigned* maxSymbolValuePtr,
                            const void* src, size_t srcSize,
-                           unsigned* workSpace);
+                           void* workSpace, size_t workSpaceSize);
 
 /*! HIST_count_simple() :
  *  Same as HIST_countFast(), this function is unsafe,
--- a/contrib/python-zstandard/zstd/compress/huf_compress.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/compress/huf_compress.c	Wed Apr 17 13:41:18 2019 -0400
@@ -88,13 +88,13 @@
     BYTE* op = ostart;
     BYTE* const oend = ostart + dstSize;
 
-    U32 maxSymbolValue = HUF_TABLELOG_MAX;
+    unsigned maxSymbolValue = HUF_TABLELOG_MAX;
     U32 tableLog = MAX_FSE_TABLELOG_FOR_HUFF_HEADER;
 
     FSE_CTable CTable[FSE_CTABLE_SIZE_U32(MAX_FSE_TABLELOG_FOR_HUFF_HEADER, HUF_TABLELOG_MAX)];
     BYTE scratchBuffer[1<<MAX_FSE_TABLELOG_FOR_HUFF_HEADER];
 
-    U32 count[HUF_TABLELOG_MAX+1];
+    unsigned count[HUF_TABLELOG_MAX+1];
     S16 norm[HUF_TABLELOG_MAX+1];
 
     /* init conditions */
@@ -134,7 +134,7 @@
     `CTable` : Huffman tree to save, using huf representation.
     @return : size of saved CTable */
 size_t HUF_writeCTable (void* dst, size_t maxDstSize,
-                        const HUF_CElt* CTable, U32 maxSymbolValue, U32 huffLog)
+                        const HUF_CElt* CTable, unsigned maxSymbolValue, unsigned huffLog)
 {
     BYTE bitsToWeight[HUF_TABLELOG_MAX + 1];   /* precomputed conversion table */
     BYTE huffWeight[HUF_SYMBOLVALUE_MAX];
@@ -169,7 +169,7 @@
 }
 
 
-size_t HUF_readCTable (HUF_CElt* CTable, U32* maxSymbolValuePtr, const void* src, size_t srcSize)
+size_t HUF_readCTable (HUF_CElt* CTable, unsigned* maxSymbolValuePtr, const void* src, size_t srcSize)
 {
     BYTE huffWeight[HUF_SYMBOLVALUE_MAX + 1];   /* init not required, even though some static analyzer may complain */
     U32 rankVal[HUF_TABLELOG_ABSOLUTEMAX + 1];   /* large enough for values from 0 to 16 */
@@ -315,7 +315,7 @@
     U32 current;
 } rankPos;
 
-static void HUF_sort(nodeElt* huffNode, const U32* count, U32 maxSymbolValue)
+static void HUF_sort(nodeElt* huffNode, const unsigned* count, U32 maxSymbolValue)
 {
     rankPos rank[32];
     U32 n;
@@ -347,7 +347,7 @@
  */
 #define STARTNODE (HUF_SYMBOLVALUE_MAX+1)
 typedef nodeElt huffNodeTable[HUF_CTABLE_WORKSPACE_SIZE_U32];
-size_t HUF_buildCTable_wksp (HUF_CElt* tree, const U32* count, U32 maxSymbolValue, U32 maxNbBits, void* workSpace, size_t wkspSize)
+size_t HUF_buildCTable_wksp (HUF_CElt* tree, const unsigned* count, U32 maxSymbolValue, U32 maxNbBits, void* workSpace, size_t wkspSize)
 {
     nodeElt* const huffNode0 = (nodeElt*)workSpace;
     nodeElt* const huffNode = huffNode0+1;
@@ -421,7 +421,7 @@
  * @return : maxNbBits
  *  Note : count is used before tree is written, so they can safely overlap
  */
-size_t HUF_buildCTable (HUF_CElt* tree, const U32* count, U32 maxSymbolValue, U32 maxNbBits)
+size_t HUF_buildCTable (HUF_CElt* tree, const unsigned* count, unsigned maxSymbolValue, unsigned maxNbBits)
 {
     huffNodeTable nodeTable;
     return HUF_buildCTable_wksp(tree, count, maxSymbolValue, maxNbBits, nodeTable, sizeof(nodeTable));
@@ -610,13 +610,14 @@
     return HUF_compress4X_usingCTable_internal(dst, dstSize, src, srcSize, CTable, /* bmi2 */ 0);
 }
 
+typedef enum { HUF_singleStream, HUF_fourStreams } HUF_nbStreams_e;
 
 static size_t HUF_compressCTable_internal(
                 BYTE* const ostart, BYTE* op, BYTE* const oend,
                 const void* src, size_t srcSize,
-                unsigned singleStream, const HUF_CElt* CTable, const int bmi2)
+                HUF_nbStreams_e nbStreams, const HUF_CElt* CTable, const int bmi2)
 {
-    size_t const cSize = singleStream ?
+    size_t const cSize = (nbStreams==HUF_singleStream) ?
                          HUF_compress1X_usingCTable_internal(op, oend - op, src, srcSize, CTable, bmi2) :
                          HUF_compress4X_usingCTable_internal(op, oend - op, src, srcSize, CTable, bmi2);
     if (HUF_isError(cSize)) { return cSize; }
@@ -628,21 +629,21 @@
 }
 
 typedef struct {
-    U32 count[HUF_SYMBOLVALUE_MAX + 1];
+    unsigned count[HUF_SYMBOLVALUE_MAX + 1];
     HUF_CElt CTable[HUF_SYMBOLVALUE_MAX + 1];
     huffNodeTable nodeTable;
 } HUF_compress_tables_t;
 
 /* HUF_compress_internal() :
  * `workSpace` must be a table of at least HUF_WORKSPACE_SIZE_U32 unsigned */
-static size_t HUF_compress_internal (
-                void* dst, size_t dstSize,
-                const void* src, size_t srcSize,
-                unsigned maxSymbolValue, unsigned huffLog,
-                unsigned singleStream,
-                void* workSpace, size_t wkspSize,
-                HUF_CElt* oldHufTable, HUF_repeat* repeat, int preferRepeat,
-                const int bmi2)
+static size_t
+HUF_compress_internal (void* dst, size_t dstSize,
+                 const void* src, size_t srcSize,
+                       unsigned maxSymbolValue, unsigned huffLog,
+                       HUF_nbStreams_e nbStreams,
+                       void* workSpace, size_t wkspSize,
+                       HUF_CElt* oldHufTable, HUF_repeat* repeat, int preferRepeat,
+                 const int bmi2)
 {
     HUF_compress_tables_t* const table = (HUF_compress_tables_t*)workSpace;
     BYTE* const ostart = (BYTE*)dst;
@@ -651,7 +652,7 @@
 
     /* checks & inits */
     if (((size_t)workSpace & 3) != 0) return ERROR(GENERIC);  /* must be aligned on 4-bytes boundaries */
-    if (wkspSize < sizeof(*table)) return ERROR(workSpace_tooSmall);
+    if (wkspSize < HUF_WORKSPACE_SIZE) return ERROR(workSpace_tooSmall);
     if (!srcSize) return 0;  /* Uncompressed */
     if (!dstSize) return 0;  /* cannot fit anything within dst budget */
     if (srcSize > HUF_BLOCKSIZE_MAX) return ERROR(srcSize_wrong);   /* current block size limit */
@@ -664,11 +665,11 @@
     if (preferRepeat && repeat && *repeat == HUF_repeat_valid) {
         return HUF_compressCTable_internal(ostart, op, oend,
                                            src, srcSize,
-                                           singleStream, oldHufTable, bmi2);
+                                           nbStreams, oldHufTable, bmi2);
     }
 
     /* Scan input and build symbol stats */
-    {   CHECK_V_F(largest, HIST_count_wksp (table->count, &maxSymbolValue, (const BYTE*)src, srcSize, table->count) );
+    {   CHECK_V_F(largest, HIST_count_wksp (table->count, &maxSymbolValue, (const BYTE*)src, srcSize, workSpace, wkspSize) );
         if (largest == srcSize) { *ostart = ((const BYTE*)src)[0]; return 1; }   /* single symbol, rle */
         if (largest <= (srcSize >> 7)+4) return 0;   /* heuristic : probably not compressible enough */
     }
@@ -683,14 +684,15 @@
     if (preferRepeat && repeat && *repeat != HUF_repeat_none) {
         return HUF_compressCTable_internal(ostart, op, oend,
                                            src, srcSize,
-                                           singleStream, oldHufTable, bmi2);
+                                           nbStreams, oldHufTable, bmi2);
     }
 
     /* Build Huffman Tree */
     huffLog = HUF_optimalTableLog(huffLog, srcSize, maxSymbolValue);
-    {   CHECK_V_F(maxBits, HUF_buildCTable_wksp(table->CTable, table->count,
-                                                maxSymbolValue, huffLog,
-                                                table->nodeTable, sizeof(table->nodeTable)) );
+    {   size_t const maxBits = HUF_buildCTable_wksp(table->CTable, table->count,
+                                            maxSymbolValue, huffLog,
+                                            table->nodeTable, sizeof(table->nodeTable));
+        CHECK_F(maxBits);
         huffLog = (U32)maxBits;
         /* Zero unused symbols in CTable, so we can check it for validity */
         memset(table->CTable + (maxSymbolValue + 1), 0,
@@ -706,7 +708,7 @@
             if (oldSize <= hSize + newSize || hSize + 12 >= srcSize) {
                 return HUF_compressCTable_internal(ostart, op, oend,
                                                    src, srcSize,
-                                                   singleStream, oldHufTable, bmi2);
+                                                   nbStreams, oldHufTable, bmi2);
         }   }
 
         /* Use the new huffman table */
@@ -718,7 +720,7 @@
     }
     return HUF_compressCTable_internal(ostart, op, oend,
                                        src, srcSize,
-                                       singleStream, table->CTable, bmi2);
+                                       nbStreams, table->CTable, bmi2);
 }
 
 
@@ -728,7 +730,7 @@
                       void* workSpace, size_t wkspSize)
 {
     return HUF_compress_internal(dst, dstSize, src, srcSize,
-                                 maxSymbolValue, huffLog, 1 /*single stream*/,
+                                 maxSymbolValue, huffLog, HUF_singleStream,
                                  workSpace, wkspSize,
                                  NULL, NULL, 0, 0 /*bmi2*/);
 }
@@ -740,7 +742,7 @@
                       HUF_CElt* hufTable, HUF_repeat* repeat, int preferRepeat, int bmi2)
 {
     return HUF_compress_internal(dst, dstSize, src, srcSize,
-                                 maxSymbolValue, huffLog, 1 /*single stream*/,
+                                 maxSymbolValue, huffLog, HUF_singleStream,
                                  workSpace, wkspSize, hufTable,
                                  repeat, preferRepeat, bmi2);
 }
@@ -762,7 +764,7 @@
                       void* workSpace, size_t wkspSize)
 {
     return HUF_compress_internal(dst, dstSize, src, srcSize,
-                                 maxSymbolValue, huffLog, 0 /*4 streams*/,
+                                 maxSymbolValue, huffLog, HUF_fourStreams,
                                  workSpace, wkspSize,
                                  NULL, NULL, 0, 0 /*bmi2*/);
 }
@@ -777,7 +779,7 @@
                       HUF_CElt* hufTable, HUF_repeat* repeat, int preferRepeat, int bmi2)
 {
     return HUF_compress_internal(dst, dstSize, src, srcSize,
-                                 maxSymbolValue, huffLog, 0 /* 4 streams */,
+                                 maxSymbolValue, huffLog, HUF_fourStreams,
                                  workSpace, wkspSize,
                                  hufTable, repeat, preferRepeat, bmi2);
 }
--- a/contrib/python-zstandard/zstd/compress/zstd_compress.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/compress/zstd_compress.c	Wed Apr 17 13:41:18 2019 -0400
@@ -11,6 +11,7 @@
 /*-*************************************
 *  Dependencies
 ***************************************/
+#include <limits.h>         /* INT_MAX */
 #include <string.h>         /* memset */
 #include "cpu.h"
 #include "mem.h"
@@ -61,7 +62,7 @@
     memset(cctx, 0, sizeof(*cctx));
     cctx->customMem = memManager;
     cctx->bmi2 = ZSTD_cpuid_bmi2(ZSTD_cpuid());
-    {   size_t const err = ZSTD_CCtx_resetParameters(cctx);
+    {   size_t const err = ZSTD_CCtx_reset(cctx, ZSTD_reset_parameters);
         assert(!ZSTD_isError(err));
         (void)err;
     }
@@ -128,7 +129,7 @@
 #ifdef ZSTD_MULTITHREAD
     return ZSTDMT_sizeof_CCtx(cctx->mtctx);
 #else
-    (void) cctx;
+    (void)cctx;
     return 0;
 #endif
 }
@@ -226,9 +227,160 @@
     return ret;
 }
 
-#define CLAMPCHECK(val,min,max) {            \
-    if (((val)<(min)) | ((val)>(max))) {     \
-        return ERROR(parameter_outOfBound);  \
+ZSTD_bounds ZSTD_cParam_getBounds(ZSTD_cParameter param)
+{
+    ZSTD_bounds bounds = { 0, 0, 0 };
+
+    switch(param)
+    {
+    case ZSTD_c_compressionLevel:
+        bounds.lowerBound = ZSTD_minCLevel();
+        bounds.upperBound = ZSTD_maxCLevel();
+        return bounds;
+
+    case ZSTD_c_windowLog:
+        bounds.lowerBound = ZSTD_WINDOWLOG_MIN;
+        bounds.upperBound = ZSTD_WINDOWLOG_MAX;
+        return bounds;
+
+    case ZSTD_c_hashLog:
+        bounds.lowerBound = ZSTD_HASHLOG_MIN;
+        bounds.upperBound = ZSTD_HASHLOG_MAX;
+        return bounds;
+
+    case ZSTD_c_chainLog:
+        bounds.lowerBound = ZSTD_CHAINLOG_MIN;
+        bounds.upperBound = ZSTD_CHAINLOG_MAX;
+        return bounds;
+
+    case ZSTD_c_searchLog:
+        bounds.lowerBound = ZSTD_SEARCHLOG_MIN;
+        bounds.upperBound = ZSTD_SEARCHLOG_MAX;
+        return bounds;
+
+    case ZSTD_c_minMatch:
+        bounds.lowerBound = ZSTD_MINMATCH_MIN;
+        bounds.upperBound = ZSTD_MINMATCH_MAX;
+        return bounds;
+
+    case ZSTD_c_targetLength:
+        bounds.lowerBound = ZSTD_TARGETLENGTH_MIN;
+        bounds.upperBound = ZSTD_TARGETLENGTH_MAX;
+        return bounds;
+
+    case ZSTD_c_strategy:
+        bounds.lowerBound = ZSTD_STRATEGY_MIN;
+        bounds.upperBound = ZSTD_STRATEGY_MAX;
+        return bounds;
+
+    case ZSTD_c_contentSizeFlag:
+        bounds.lowerBound = 0;
+        bounds.upperBound = 1;
+        return bounds;
+
+    case ZSTD_c_checksumFlag:
+        bounds.lowerBound = 0;
+        bounds.upperBound = 1;
+        return bounds;
+
+    case ZSTD_c_dictIDFlag:
+        bounds.lowerBound = 0;
+        bounds.upperBound = 1;
+        return bounds;
+
+    case ZSTD_c_nbWorkers:
+        bounds.lowerBound = 0;
+#ifdef ZSTD_MULTITHREAD
+        bounds.upperBound = ZSTDMT_NBWORKERS_MAX;
+#else
+        bounds.upperBound = 0;
+#endif
+        return bounds;
+
+    case ZSTD_c_jobSize:
+        bounds.lowerBound = 0;
+#ifdef ZSTD_MULTITHREAD
+        bounds.upperBound = ZSTDMT_JOBSIZE_MAX;
+#else
+        bounds.upperBound = 0;
+#endif
+        return bounds;
+
+    case ZSTD_c_overlapLog:
+        bounds.lowerBound = ZSTD_OVERLAPLOG_MIN;
+        bounds.upperBound = ZSTD_OVERLAPLOG_MAX;
+        return bounds;
+
+    case ZSTD_c_enableLongDistanceMatching:
+        bounds.lowerBound = 0;
+        bounds.upperBound = 1;
+        return bounds;
+
+    case ZSTD_c_ldmHashLog:
+        bounds.lowerBound = ZSTD_LDM_HASHLOG_MIN;
+        bounds.upperBound = ZSTD_LDM_HASHLOG_MAX;
+        return bounds;
+
+    case ZSTD_c_ldmMinMatch:
+        bounds.lowerBound = ZSTD_LDM_MINMATCH_MIN;
+        bounds.upperBound = ZSTD_LDM_MINMATCH_MAX;
+        return bounds;
+
+    case ZSTD_c_ldmBucketSizeLog:
+        bounds.lowerBound = ZSTD_LDM_BUCKETSIZELOG_MIN;
+        bounds.upperBound = ZSTD_LDM_BUCKETSIZELOG_MAX;
+        return bounds;
+
+    case ZSTD_c_ldmHashRateLog:
+        bounds.lowerBound = ZSTD_LDM_HASHRATELOG_MIN;
+        bounds.upperBound = ZSTD_LDM_HASHRATELOG_MAX;
+        return bounds;
+
+    /* experimental parameters */
+    case ZSTD_c_rsyncable:
+        bounds.lowerBound = 0;
+        bounds.upperBound = 1;
+        return bounds;
+
+    case ZSTD_c_forceMaxWindow :
+        bounds.lowerBound = 0;
+        bounds.upperBound = 1;
+        return bounds;
+
+    case ZSTD_c_format:
+        ZSTD_STATIC_ASSERT(ZSTD_f_zstd1 < ZSTD_f_zstd1_magicless);
+        bounds.lowerBound = ZSTD_f_zstd1;
+    bounds.upperBound = ZSTD_f_zstd1_magicless;   /* note: how do we ensure at compile time that this is the highest-valued enum? */
+        return bounds;
+
+    case ZSTD_c_forceAttachDict:
+        ZSTD_STATIC_ASSERT(ZSTD_dictDefaultAttach < ZSTD_dictForceCopy);
+        bounds.lowerBound = ZSTD_dictDefaultAttach;
+    bounds.upperBound = ZSTD_dictForceCopy;       /* note: how do we ensure at compile time that this is the highest-valued enum? */
+        return bounds;
+
+    default:
+        {   ZSTD_bounds const boundError = { ERROR(parameter_unsupported), 0, 0 };
+            return boundError;
+        }
+    }
+}
+
+/* ZSTD_cParam_withinBounds:
+ * @return 1 if value is within cParam bounds,
+ * 0 otherwise */
+static int ZSTD_cParam_withinBounds(ZSTD_cParameter cParam, int value)
+{
+    ZSTD_bounds const bounds = ZSTD_cParam_getBounds(cParam);
+    if (ZSTD_isError(bounds.error)) return 0;
+    if (value < bounds.lowerBound) return 0;
+    if (value > bounds.upperBound) return 0;
+    return 1;
+}
+
+#define BOUNDCHECK(cParam, val) {                  \
+    if (!ZSTD_cParam_withinBounds(cParam,val)) {   \
+        return ERROR(parameter_outOfBound);        \
 }   }
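
The ZSTD_cParam_getBounds() function added above replaces the scattered
CLAMPCHECK limits with a queryable range per parameter, and the new BOUNDCHECK
macro builds on it. A minimal caller-side sketch, assuming a program built
against this vendored zstd (v1.3.8) with ZSTD_STATIC_LINKING_ONLY defined
(the new advanced API is still staging at this version); print_bounds is a
hypothetical helper, not part of the library::

   #define ZSTD_STATIC_LINKING_ONLY   /* new advanced API is staging in v1.3.8 */
   #include <stdio.h>
   #include "zstd.h"

   static void print_bounds(ZSTD_cParameter p, const char* name)
   {
       ZSTD_bounds const b = ZSTD_cParam_getBounds(p);
       if (ZSTD_isError(b.error)) {   /* unsupported parameter */
           printf("%s: %s\n", name, ZSTD_getErrorName(b.error));
           return;
       }
       printf("%s in [%d, %d]\n", name, b.lowerBound, b.upperBound);
   }

   int main(void)
   {
       print_bounds(ZSTD_c_compressionLevel, "compressionLevel");
       print_bounds(ZSTD_c_windowLog, "windowLog");
       return 0;
   }
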
 
 
@@ -236,38 +388,39 @@
 {
     switch(param)
     {
-    case ZSTD_p_compressionLevel:
-    case ZSTD_p_hashLog:
-    case ZSTD_p_chainLog:
-    case ZSTD_p_searchLog:
-    case ZSTD_p_minMatch:
-    case ZSTD_p_targetLength:
-    case ZSTD_p_compressionStrategy:
+    case ZSTD_c_compressionLevel:
+    case ZSTD_c_hashLog:
+    case ZSTD_c_chainLog:
+    case ZSTD_c_searchLog:
+    case ZSTD_c_minMatch:
+    case ZSTD_c_targetLength:
+    case ZSTD_c_strategy:
         return 1;
 
-    case ZSTD_p_format:
-    case ZSTD_p_windowLog:
-    case ZSTD_p_contentSizeFlag:
-    case ZSTD_p_checksumFlag:
-    case ZSTD_p_dictIDFlag:
-    case ZSTD_p_forceMaxWindow :
-    case ZSTD_p_nbWorkers:
-    case ZSTD_p_jobSize:
-    case ZSTD_p_overlapSizeLog:
-    case ZSTD_p_enableLongDistanceMatching:
-    case ZSTD_p_ldmHashLog:
-    case ZSTD_p_ldmMinMatch:
-    case ZSTD_p_ldmBucketSizeLog:
-    case ZSTD_p_ldmHashEveryLog:
-    case ZSTD_p_forceAttachDict:
+    case ZSTD_c_format:
+    case ZSTD_c_windowLog:
+    case ZSTD_c_contentSizeFlag:
+    case ZSTD_c_checksumFlag:
+    case ZSTD_c_dictIDFlag:
+    case ZSTD_c_forceMaxWindow :
+    case ZSTD_c_nbWorkers:
+    case ZSTD_c_jobSize:
+    case ZSTD_c_overlapLog:
+    case ZSTD_c_rsyncable:
+    case ZSTD_c_enableLongDistanceMatching:
+    case ZSTD_c_ldmHashLog:
+    case ZSTD_c_ldmMinMatch:
+    case ZSTD_c_ldmBucketSizeLog:
+    case ZSTD_c_ldmHashRateLog:
+    case ZSTD_c_forceAttachDict:
     default:
         return 0;
     }
 }
 
-size_t ZSTD_CCtx_setParameter(ZSTD_CCtx* cctx, ZSTD_cParameter param, unsigned value)
+size_t ZSTD_CCtx_setParameter(ZSTD_CCtx* cctx, ZSTD_cParameter param, int value)
 {
-    DEBUGLOG(4, "ZSTD_CCtx_setParameter (%u, %u)", (U32)param, value);
+    DEBUGLOG(4, "ZSTD_CCtx_setParameter (%i, %i)", (int)param, value);
     if (cctx->streamStage != zcss_init) {
         if (ZSTD_isUpdateAuthorized(param)) {
             cctx->cParamsChanged = 1;
@@ -277,51 +430,52 @@
 
     switch(param)
     {
-    case ZSTD_p_format :
+    case ZSTD_c_format :
         return ZSTD_CCtxParam_setParameter(&cctx->requestedParams, param, value);
 
-    case ZSTD_p_compressionLevel:
+    case ZSTD_c_compressionLevel:
         if (cctx->cdict) return ERROR(stage_wrong);
         return ZSTD_CCtxParam_setParameter(&cctx->requestedParams, param, value);
 
-    case ZSTD_p_windowLog:
-    case ZSTD_p_hashLog:
-    case ZSTD_p_chainLog:
-    case ZSTD_p_searchLog:
-    case ZSTD_p_minMatch:
-    case ZSTD_p_targetLength:
-    case ZSTD_p_compressionStrategy:
+    case ZSTD_c_windowLog:
+    case ZSTD_c_hashLog:
+    case ZSTD_c_chainLog:
+    case ZSTD_c_searchLog:
+    case ZSTD_c_minMatch:
+    case ZSTD_c_targetLength:
+    case ZSTD_c_strategy:
         if (cctx->cdict) return ERROR(stage_wrong);
         return ZSTD_CCtxParam_setParameter(&cctx->requestedParams, param, value);
 
-    case ZSTD_p_contentSizeFlag:
-    case ZSTD_p_checksumFlag:
-    case ZSTD_p_dictIDFlag:
+    case ZSTD_c_contentSizeFlag:
+    case ZSTD_c_checksumFlag:
+    case ZSTD_c_dictIDFlag:
         return ZSTD_CCtxParam_setParameter(&cctx->requestedParams, param, value);
 
-    case ZSTD_p_forceMaxWindow :  /* Force back-references to remain < windowSize,
+    case ZSTD_c_forceMaxWindow :  /* Force back-references to remain < windowSize,
                                    * even when referencing into Dictionary content.
                                    * default : 0 when using a CDict, 1 when using a Prefix */
         return ZSTD_CCtxParam_setParameter(&cctx->requestedParams, param, value);
 
-    case ZSTD_p_forceAttachDict:
+    case ZSTD_c_forceAttachDict:
         return ZSTD_CCtxParam_setParameter(&cctx->requestedParams, param, value);
 
-    case ZSTD_p_nbWorkers:
-        if ((value>0) && cctx->staticSize) {
+    case ZSTD_c_nbWorkers:
+        if ((value!=0) && cctx->staticSize) {
             return ERROR(parameter_unsupported);  /* MT not compatible with static alloc */
         }
         return ZSTD_CCtxParam_setParameter(&cctx->requestedParams, param, value);
 
-    case ZSTD_p_jobSize:
-    case ZSTD_p_overlapSizeLog:
+    case ZSTD_c_jobSize:
+    case ZSTD_c_overlapLog:
+    case ZSTD_c_rsyncable:
         return ZSTD_CCtxParam_setParameter(&cctx->requestedParams, param, value);
 
-    case ZSTD_p_enableLongDistanceMatching:
-    case ZSTD_p_ldmHashLog:
-    case ZSTD_p_ldmMinMatch:
-    case ZSTD_p_ldmBucketSizeLog:
-    case ZSTD_p_ldmHashEveryLog:
+    case ZSTD_c_enableLongDistanceMatching:
+    case ZSTD_c_ldmHashLog:
+    case ZSTD_c_ldmMinMatch:
+    case ZSTD_c_ldmBucketSizeLog:
+    case ZSTD_c_ldmHashRateLog:
         if (cctx->cdict) return ERROR(stage_wrong);
         return ZSTD_CCtxParam_setParameter(&cctx->requestedParams, param, value);
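
With the rename from ZSTD_p_* to ZSTD_c_* and the switch from unsigned to int
in this hunk, negative (fast) compression levels can be passed directly
instead of being smuggled through an unsigned cast. A sketch of the new
calling convention, under the same staging-API assumption as above;
configure_cctx is a hypothetical helper::

   static size_t configure_cctx(ZSTD_CCtx* cctx)
   {
       size_t err;
       /* negative levels select the fast presets; no cast needed any more */
       err = ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, -5);
       if (ZSTD_isError(err)) return err;
       err = ZSTD_CCtx_setParameter(cctx, ZSTD_c_checksumFlag, 1);
       if (ZSTD_isError(err)) return err;
       /* out-of-range values now fail uniformly through BOUNDCHECK */
       return ZSTD_CCtx_setParameter(cctx, ZSTD_c_windowLog, 24);
   }
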
 
@@ -329,21 +483,21 @@
     }
 }
 
-size_t ZSTD_CCtxParam_setParameter(
-        ZSTD_CCtx_params* CCtxParams, ZSTD_cParameter param, unsigned value)
+size_t ZSTD_CCtxParam_setParameter(ZSTD_CCtx_params* CCtxParams,
+                                   ZSTD_cParameter param, int value)
 {
-    DEBUGLOG(4, "ZSTD_CCtxParam_setParameter (%u, %u)", (U32)param, value);
+    DEBUGLOG(4, "ZSTD_CCtxParam_setParameter (%i, %i)", (int)param, value);
     switch(param)
     {
-    case ZSTD_p_format :
-        if (value > (unsigned)ZSTD_f_zstd1_magicless)
-            return ERROR(parameter_unsupported);
+    case ZSTD_c_format :
+        BOUNDCHECK(ZSTD_c_format, value);
         CCtxParams->format = (ZSTD_format_e)value;
         return (size_t)CCtxParams->format;
 
-    case ZSTD_p_compressionLevel : {
-        int cLevel = (int)value;  /* cast expected to restore negative sign */
+    case ZSTD_c_compressionLevel : {
+        int cLevel = value;
         if (cLevel > ZSTD_maxCLevel()) cLevel = ZSTD_maxCLevel();
+        if (cLevel < ZSTD_minCLevel()) cLevel = ZSTD_minCLevel();
         if (cLevel) {  /* 0 : does not change current level */
             CCtxParams->compressionLevel = cLevel;
         }
@@ -351,213 +505,229 @@
         return 0;  /* return type (size_t) cannot represent negative values */
     }
 
-    case ZSTD_p_windowLog :
-        if (value>0)   /* 0 => use default */
-            CLAMPCHECK(value, ZSTD_WINDOWLOG_MIN, ZSTD_WINDOWLOG_MAX);
+    case ZSTD_c_windowLog :
+        if (value!=0)   /* 0 => use default */
+            BOUNDCHECK(ZSTD_c_windowLog, value);
         CCtxParams->cParams.windowLog = value;
         return CCtxParams->cParams.windowLog;
 
-    case ZSTD_p_hashLog :
-        if (value>0)   /* 0 => use default */
-            CLAMPCHECK(value, ZSTD_HASHLOG_MIN, ZSTD_HASHLOG_MAX);
+    case ZSTD_c_hashLog :
+        if (value!=0)   /* 0 => use default */
+            BOUNDCHECK(ZSTD_c_hashLog, value);
         CCtxParams->cParams.hashLog = value;
         return CCtxParams->cParams.hashLog;
 
-    case ZSTD_p_chainLog :
-        if (value>0)   /* 0 => use default */
-            CLAMPCHECK(value, ZSTD_CHAINLOG_MIN, ZSTD_CHAINLOG_MAX);
+    case ZSTD_c_chainLog :
+        if (value!=0)   /* 0 => use default */
+            BOUNDCHECK(ZSTD_c_chainLog, value);
         CCtxParams->cParams.chainLog = value;
         return CCtxParams->cParams.chainLog;
 
-    case ZSTD_p_searchLog :
-        if (value>0)   /* 0 => use default */
-            CLAMPCHECK(value, ZSTD_SEARCHLOG_MIN, ZSTD_SEARCHLOG_MAX);
+    case ZSTD_c_searchLog :
+        if (value!=0)   /* 0 => use default */
+            BOUNDCHECK(ZSTD_c_searchLog, value);
         CCtxParams->cParams.searchLog = value;
         return value;
 
-    case ZSTD_p_minMatch :
-        if (value>0)   /* 0 => use default */
-            CLAMPCHECK(value, ZSTD_SEARCHLENGTH_MIN, ZSTD_SEARCHLENGTH_MAX);
-        CCtxParams->cParams.searchLength = value;
-        return CCtxParams->cParams.searchLength;
-
-    case ZSTD_p_targetLength :
-        /* all values are valid. 0 => use default */
+    case ZSTD_c_minMatch :
+        if (value!=0)   /* 0 => use default */
+            BOUNDCHECK(ZSTD_c_minMatch, value);
+        CCtxParams->cParams.minMatch = value;
+        return CCtxParams->cParams.minMatch;
+
+    case ZSTD_c_targetLength :
+        BOUNDCHECK(ZSTD_c_targetLength, value);
         CCtxParams->cParams.targetLength = value;
         return CCtxParams->cParams.targetLength;
 
-    case ZSTD_p_compressionStrategy :
-        if (value>0)   /* 0 => use default */
-            CLAMPCHECK(value, (unsigned)ZSTD_fast, (unsigned)ZSTD_btultra);
+    case ZSTD_c_strategy :
+        if (value!=0)   /* 0 => use default */
+            BOUNDCHECK(ZSTD_c_strategy, value);
         CCtxParams->cParams.strategy = (ZSTD_strategy)value;
         return (size_t)CCtxParams->cParams.strategy;
 
-    case ZSTD_p_contentSizeFlag :
+    case ZSTD_c_contentSizeFlag :
         /* Content size written in frame header _when known_ (default:1) */
-        DEBUGLOG(4, "set content size flag = %u", (value>0));
-        CCtxParams->fParams.contentSizeFlag = value > 0;
+        DEBUGLOG(4, "set content size flag = %u", (value!=0));
+        CCtxParams->fParams.contentSizeFlag = value != 0;
         return CCtxParams->fParams.contentSizeFlag;
 
-    case ZSTD_p_checksumFlag :
+    case ZSTD_c_checksumFlag :
         /* A 32-bits content checksum will be calculated and written at end of frame (default:0) */
-        CCtxParams->fParams.checksumFlag = value > 0;
+        CCtxParams->fParams.checksumFlag = value != 0;
         return CCtxParams->fParams.checksumFlag;
 
-    case ZSTD_p_dictIDFlag : /* When applicable, dictionary's dictID is provided in frame header (default:1) */
-        DEBUGLOG(4, "set dictIDFlag = %u", (value>0));
+    case ZSTD_c_dictIDFlag : /* When applicable, dictionary's dictID is provided in frame header (default:1) */
+        DEBUGLOG(4, "set dictIDFlag = %u", (value!=0));
         CCtxParams->fParams.noDictIDFlag = !value;
         return !CCtxParams->fParams.noDictIDFlag;
 
-    case ZSTD_p_forceMaxWindow :
-        CCtxParams->forceWindow = (value > 0);
+    case ZSTD_c_forceMaxWindow :
+        CCtxParams->forceWindow = (value != 0);
         return CCtxParams->forceWindow;
 
-    case ZSTD_p_forceAttachDict :
-        CCtxParams->attachDictPref = value ?
-                                    (value > 0 ? ZSTD_dictForceAttach : ZSTD_dictForceCopy) :
-                                     ZSTD_dictDefaultAttach;
+    case ZSTD_c_forceAttachDict : {
+        const ZSTD_dictAttachPref_e pref = (ZSTD_dictAttachPref_e)value;
+        BOUNDCHECK(ZSTD_c_forceAttachDict, pref);
+        CCtxParams->attachDictPref = pref;
         return CCtxParams->attachDictPref;
-
-    case ZSTD_p_nbWorkers :
+    }
+
+    case ZSTD_c_nbWorkers :
 #ifndef ZSTD_MULTITHREAD
-        if (value>0) return ERROR(parameter_unsupported);
+        if (value!=0) return ERROR(parameter_unsupported);
         return 0;
 #else
         return ZSTDMT_CCtxParam_setNbWorkers(CCtxParams, value);
 #endif
 
-    case ZSTD_p_jobSize :
+    case ZSTD_c_jobSize :
 #ifndef ZSTD_MULTITHREAD
         return ERROR(parameter_unsupported);
 #else
         return ZSTDMT_CCtxParam_setMTCtxParameter(CCtxParams, ZSTDMT_p_jobSize, value);
 #endif
 
-    case ZSTD_p_overlapSizeLog :
+    case ZSTD_c_overlapLog :
+#ifndef ZSTD_MULTITHREAD
+        return ERROR(parameter_unsupported);
+#else
+        return ZSTDMT_CCtxParam_setMTCtxParameter(CCtxParams, ZSTDMT_p_overlapLog, value);
+#endif
+
+    case ZSTD_c_rsyncable :
 #ifndef ZSTD_MULTITHREAD
         return ERROR(parameter_unsupported);
 #else
-        return ZSTDMT_CCtxParam_setMTCtxParameter(CCtxParams, ZSTDMT_p_overlapSectionLog, value);
+        return ZSTDMT_CCtxParam_setMTCtxParameter(CCtxParams, ZSTDMT_p_rsyncable, value);
 #endif
 
-    case ZSTD_p_enableLongDistanceMatching :
-        CCtxParams->ldmParams.enableLdm = (value>0);
+    case ZSTD_c_enableLongDistanceMatching :
+        CCtxParams->ldmParams.enableLdm = (value!=0);
         return CCtxParams->ldmParams.enableLdm;
 
-    case ZSTD_p_ldmHashLog :
-        if (value>0)   /* 0 ==> auto */
-            CLAMPCHECK(value, ZSTD_HASHLOG_MIN, ZSTD_HASHLOG_MAX);
+    case ZSTD_c_ldmHashLog :
+        if (value!=0)   /* 0 ==> auto */
+            BOUNDCHECK(ZSTD_c_ldmHashLog, value);
         CCtxParams->ldmParams.hashLog = value;
         return CCtxParams->ldmParams.hashLog;
 
-    case ZSTD_p_ldmMinMatch :
-        if (value>0)   /* 0 ==> default */
-            CLAMPCHECK(value, ZSTD_LDM_MINMATCH_MIN, ZSTD_LDM_MINMATCH_MAX);
+    case ZSTD_c_ldmMinMatch :
+        if (value!=0)   /* 0 ==> default */
+            BOUNDCHECK(ZSTD_c_ldmMinMatch, value);
         CCtxParams->ldmParams.minMatchLength = value;
         return CCtxParams->ldmParams.minMatchLength;
 
-    case ZSTD_p_ldmBucketSizeLog :
-        if (value > ZSTD_LDM_BUCKETSIZELOG_MAX)
-            return ERROR(parameter_outOfBound);
+    case ZSTD_c_ldmBucketSizeLog :
+        if (value!=0)   /* 0 ==> default */
+            BOUNDCHECK(ZSTD_c_ldmBucketSizeLog, value);
         CCtxParams->ldmParams.bucketSizeLog = value;
         return CCtxParams->ldmParams.bucketSizeLog;
 
-    case ZSTD_p_ldmHashEveryLog :
+    case ZSTD_c_ldmHashRateLog :
         if (value > ZSTD_WINDOWLOG_MAX - ZSTD_HASHLOG_MIN)
             return ERROR(parameter_outOfBound);
-        CCtxParams->ldmParams.hashEveryLog = value;
-        return CCtxParams->ldmParams.hashEveryLog;
+        CCtxParams->ldmParams.hashRateLog = value;
+        return CCtxParams->ldmParams.hashRateLog;
 
     default: return ERROR(parameter_unsupported);
     }
 }
 
-size_t ZSTD_CCtx_getParameter(ZSTD_CCtx* cctx, ZSTD_cParameter param, unsigned* value)
+size_t ZSTD_CCtx_getParameter(ZSTD_CCtx* cctx, ZSTD_cParameter param, int* value)
 {
     return ZSTD_CCtxParam_getParameter(&cctx->requestedParams, param, value);
 }
 
 size_t ZSTD_CCtxParam_getParameter(
-        ZSTD_CCtx_params* CCtxParams, ZSTD_cParameter param, unsigned* value)
+        ZSTD_CCtx_params* CCtxParams, ZSTD_cParameter param, int* value)
 {
     switch(param)
     {
-    case ZSTD_p_format :
+    case ZSTD_c_format :
         *value = CCtxParams->format;
         break;
-    case ZSTD_p_compressionLevel :
+    case ZSTD_c_compressionLevel :
         *value = CCtxParams->compressionLevel;
         break;
-    case ZSTD_p_windowLog :
+    case ZSTD_c_windowLog :
         *value = CCtxParams->cParams.windowLog;
         break;
-    case ZSTD_p_hashLog :
+    case ZSTD_c_hashLog :
         *value = CCtxParams->cParams.hashLog;
         break;
-    case ZSTD_p_chainLog :
+    case ZSTD_c_chainLog :
         *value = CCtxParams->cParams.chainLog;
         break;
-    case ZSTD_p_searchLog :
+    case ZSTD_c_searchLog :
         *value = CCtxParams->cParams.searchLog;
         break;
-    case ZSTD_p_minMatch :
-        *value = CCtxParams->cParams.searchLength;
+    case ZSTD_c_minMatch :
+        *value = CCtxParams->cParams.minMatch;
         break;
-    case ZSTD_p_targetLength :
+    case ZSTD_c_targetLength :
         *value = CCtxParams->cParams.targetLength;
         break;
-    case ZSTD_p_compressionStrategy :
+    case ZSTD_c_strategy :
         *value = (unsigned)CCtxParams->cParams.strategy;
         break;
-    case ZSTD_p_contentSizeFlag :
+    case ZSTD_c_contentSizeFlag :
         *value = CCtxParams->fParams.contentSizeFlag;
         break;
-    case ZSTD_p_checksumFlag :
+    case ZSTD_c_checksumFlag :
         *value = CCtxParams->fParams.checksumFlag;
         break;
-    case ZSTD_p_dictIDFlag :
+    case ZSTD_c_dictIDFlag :
         *value = !CCtxParams->fParams.noDictIDFlag;
         break;
-    case ZSTD_p_forceMaxWindow :
+    case ZSTD_c_forceMaxWindow :
         *value = CCtxParams->forceWindow;
         break;
-    case ZSTD_p_forceAttachDict :
+    case ZSTD_c_forceAttachDict :
         *value = CCtxParams->attachDictPref;
         break;
-    case ZSTD_p_nbWorkers :
+    case ZSTD_c_nbWorkers :
 #ifndef ZSTD_MULTITHREAD
         assert(CCtxParams->nbWorkers == 0);
 #endif
         *value = CCtxParams->nbWorkers;
         break;
-    case ZSTD_p_jobSize :
+    case ZSTD_c_jobSize :
 #ifndef ZSTD_MULTITHREAD
         return ERROR(parameter_unsupported);
 #else
-        *value = CCtxParams->jobSize;
+        assert(CCtxParams->jobSize <= INT_MAX);
+        *value = (int)CCtxParams->jobSize;
         break;
 #endif
-    case ZSTD_p_overlapSizeLog :
+    case ZSTD_c_overlapLog :
 #ifndef ZSTD_MULTITHREAD
         return ERROR(parameter_unsupported);
 #else
-        *value = CCtxParams->overlapSizeLog;
+        *value = CCtxParams->overlapLog;
         break;
 #endif
-    case ZSTD_p_enableLongDistanceMatching :
+    case ZSTD_c_rsyncable :
+#ifndef ZSTD_MULTITHREAD
+        return ERROR(parameter_unsupported);
+#else
+        *value = CCtxParams->rsyncable;
+        break;
+#endif
+    case ZSTD_c_enableLongDistanceMatching :
         *value = CCtxParams->ldmParams.enableLdm;
         break;
-    case ZSTD_p_ldmHashLog :
+    case ZSTD_c_ldmHashLog :
         *value = CCtxParams->ldmParams.hashLog;
         break;
-    case ZSTD_p_ldmMinMatch :
+    case ZSTD_c_ldmMinMatch :
         *value = CCtxParams->ldmParams.minMatchLength;
         break;
-    case ZSTD_p_ldmBucketSizeLog :
+    case ZSTD_c_ldmBucketSizeLog :
         *value = CCtxParams->ldmParams.bucketSizeLog;
         break;
-    case ZSTD_p_ldmHashEveryLog :
-        *value = CCtxParams->ldmParams.hashEveryLog;
+    case ZSTD_c_ldmHashRateLog :
+        *value = CCtxParams->ldmParams.hashRateLog;
         break;
     default: return ERROR(parameter_unsupported);
     }
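
The getter side mirrors the setter: values now come back through an int*,
and the multithread-only fields (jobSize, overlapLog, rsyncable) fail with
parameter_unsupported on single-threaded builds. A short sketch, reusing the
includes from the first example; report_level is a hypothetical helper::

   static void report_level(ZSTD_CCtx* cctx)
   {
       int level = 0;
       size_t const err = ZSTD_CCtx_getParameter(cctx, ZSTD_c_compressionLevel, &level);
       if (!ZSTD_isError(err))
           printf("requested compression level: %d\n", level);
   }
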
@@ -655,34 +825,35 @@
 
 /*! ZSTD_CCtx_reset() :
  *  Also dumps dictionary */
-void ZSTD_CCtx_reset(ZSTD_CCtx* cctx)
+size_t ZSTD_CCtx_reset(ZSTD_CCtx* cctx, ZSTD_ResetDirective reset)
 {
-    cctx->streamStage = zcss_init;
-    cctx->pledgedSrcSizePlusOne = 0;
+    if ( (reset == ZSTD_reset_session_only)
+      || (reset == ZSTD_reset_session_and_parameters) ) {
+        cctx->streamStage = zcss_init;
+        cctx->pledgedSrcSizePlusOne = 0;
+    }
+    if ( (reset == ZSTD_reset_parameters)
+      || (reset == ZSTD_reset_session_and_parameters) ) {
+        if (cctx->streamStage != zcss_init) return ERROR(stage_wrong);
+        cctx->cdict = NULL;
+        return ZSTD_CCtxParams_reset(&cctx->requestedParams);
+    }
+    return 0;
 }
 
-size_t ZSTD_CCtx_resetParameters(ZSTD_CCtx* cctx)
-{
-    if (cctx->streamStage != zcss_init) return ERROR(stage_wrong);
-    cctx->cdict = NULL;
-    return ZSTD_CCtxParams_reset(&cctx->requestedParams);
-}
 
 /** ZSTD_checkCParams() :
     control CParam values remain within authorized range.
     @return : 0, or an error code if one value is beyond authorized range */
 size_t ZSTD_checkCParams(ZSTD_compressionParameters cParams)
 {
-    CLAMPCHECK(cParams.windowLog, ZSTD_WINDOWLOG_MIN, ZSTD_WINDOWLOG_MAX);
-    CLAMPCHECK(cParams.chainLog, ZSTD_CHAINLOG_MIN, ZSTD_CHAINLOG_MAX);
-    CLAMPCHECK(cParams.hashLog, ZSTD_HASHLOG_MIN, ZSTD_HASHLOG_MAX);
-    CLAMPCHECK(cParams.searchLog, ZSTD_SEARCHLOG_MIN, ZSTD_SEARCHLOG_MAX);
-    CLAMPCHECK(cParams.searchLength, ZSTD_SEARCHLENGTH_MIN, ZSTD_SEARCHLENGTH_MAX);
-    ZSTD_STATIC_ASSERT(ZSTD_TARGETLENGTH_MIN == 0);
-    if (cParams.targetLength > ZSTD_TARGETLENGTH_MAX)
-        return ERROR(parameter_outOfBound);
-    if ((U32)(cParams.strategy) > (U32)ZSTD_btultra)
-        return ERROR(parameter_unsupported);
+    BOUNDCHECK(ZSTD_c_windowLog, cParams.windowLog);
+    BOUNDCHECK(ZSTD_c_chainLog,  cParams.chainLog);
+    BOUNDCHECK(ZSTD_c_hashLog,   cParams.hashLog);
+    BOUNDCHECK(ZSTD_c_searchLog, cParams.searchLog);
+    BOUNDCHECK(ZSTD_c_minMatch,  cParams.minMatch);
+    BOUNDCHECK(ZSTD_c_targetLength,cParams.targetLength);
+    BOUNDCHECK(ZSTD_c_strategy,  cParams.strategy);
     return 0;
 }
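
ZSTD_CCtx_reset() now takes a ZSTD_ResetDirective and absorbs the old
ZSTD_CCtx_resetParameters(), returning a size_t error code instead of void.
A usage sketch under the same v1.3.8 assumption; recycle is hypothetical::

   static void recycle(ZSTD_CCtx* cctx)
   {
       /* keep parameters and dictionary, just start a new frame */
       ZSTD_CCtx_reset(cctx, ZSTD_reset_session_only);

       /* return parameters to defaults and drop the dictionary;
        * only legal between frames, otherwise fails with stage_wrong */
       {   size_t const err = ZSTD_CCtx_reset(cctx, ZSTD_reset_parameters);
           (void)err;   /* check with ZSTD_isError(err) in real code */
       }

       /* or both at once */
       ZSTD_CCtx_reset(cctx, ZSTD_reset_session_and_parameters);
   }
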
 
@@ -692,19 +863,19 @@
 static ZSTD_compressionParameters
 ZSTD_clampCParams(ZSTD_compressionParameters cParams)
 {
-#   define CLAMP(val,min,max) {      \
-        if (val<min) val=min;        \
-        else if (val>max) val=max;   \
+#   define CLAMP_TYPE(cParam, val, type) {                                \
+        ZSTD_bounds const bounds = ZSTD_cParam_getBounds(cParam);         \
+        if ((int)val<bounds.lowerBound) val=(type)bounds.lowerBound;      \
+        else if ((int)val>bounds.upperBound) val=(type)bounds.upperBound; \
     }
-    CLAMP(cParams.windowLog, ZSTD_WINDOWLOG_MIN, ZSTD_WINDOWLOG_MAX);
-    CLAMP(cParams.chainLog, ZSTD_CHAINLOG_MIN, ZSTD_CHAINLOG_MAX);
-    CLAMP(cParams.hashLog, ZSTD_HASHLOG_MIN, ZSTD_HASHLOG_MAX);
-    CLAMP(cParams.searchLog, ZSTD_SEARCHLOG_MIN, ZSTD_SEARCHLOG_MAX);
-    CLAMP(cParams.searchLength, ZSTD_SEARCHLENGTH_MIN, ZSTD_SEARCHLENGTH_MAX);
-    ZSTD_STATIC_ASSERT(ZSTD_TARGETLENGTH_MIN == 0);
-    if (cParams.targetLength > ZSTD_TARGETLENGTH_MAX)
-        cParams.targetLength = ZSTD_TARGETLENGTH_MAX;
-    CLAMP(cParams.strategy, ZSTD_fast, ZSTD_btultra);
+#   define CLAMP(cParam, val) CLAMP_TYPE(cParam, val, int)
+    CLAMP(ZSTD_c_windowLog, cParams.windowLog);
+    CLAMP(ZSTD_c_chainLog,  cParams.chainLog);
+    CLAMP(ZSTD_c_hashLog,   cParams.hashLog);
+    CLAMP(ZSTD_c_searchLog, cParams.searchLog);
+    CLAMP(ZSTD_c_minMatch,  cParams.minMatch);
+    CLAMP(ZSTD_c_targetLength,cParams.targetLength);
+    CLAMP_TYPE(ZSTD_c_strategy,cParams.strategy, ZSTD_strategy);
     return cParams;
 }
 
@@ -774,7 +945,7 @@
     if (CCtxParams->cParams.hashLog) cParams.hashLog = CCtxParams->cParams.hashLog;
     if (CCtxParams->cParams.chainLog) cParams.chainLog = CCtxParams->cParams.chainLog;
     if (CCtxParams->cParams.searchLog) cParams.searchLog = CCtxParams->cParams.searchLog;
-    if (CCtxParams->cParams.searchLength) cParams.searchLength = CCtxParams->cParams.searchLength;
+    if (CCtxParams->cParams.minMatch) cParams.minMatch = CCtxParams->cParams.minMatch;
     if (CCtxParams->cParams.targetLength) cParams.targetLength = CCtxParams->cParams.targetLength;
     if (CCtxParams->cParams.strategy) cParams.strategy = CCtxParams->cParams.strategy;
     assert(!ZSTD_checkCParams(cParams));
@@ -787,13 +958,12 @@
 {
     size_t const chainSize = (cParams->strategy == ZSTD_fast) ? 0 : ((size_t)1 << cParams->chainLog);
     size_t const hSize = ((size_t)1) << cParams->hashLog;
-    U32    const hashLog3 = (forCCtx && cParams->searchLength==3) ? MIN(ZSTD_HASHLOG3_MAX, cParams->windowLog) : 0;
+    U32    const hashLog3 = (forCCtx && cParams->minMatch==3) ? MIN(ZSTD_HASHLOG3_MAX, cParams->windowLog) : 0;
     size_t const h3Size = ((size_t)1) << hashLog3;
     size_t const tableSpace = (chainSize + hSize + h3Size) * sizeof(U32);
     size_t const optPotentialSpace = ((MaxML+1) + (MaxLL+1) + (MaxOff+1) + (1<<Litbits)) * sizeof(U32)
                           + (ZSTD_OPT_NUM+1) * (sizeof(ZSTD_match_t)+sizeof(ZSTD_optimal_t));
-    size_t const optSpace = (forCCtx && ((cParams->strategy == ZSTD_btopt) ||
-                                         (cParams->strategy == ZSTD_btultra)))
+    size_t const optSpace = (forCCtx && (cParams->strategy >= ZSTD_btopt))
                                 ? optPotentialSpace
                                 : 0;
     DEBUGLOG(4, "chainSize: %u - hSize: %u - h3Size: %u",
@@ -808,7 +978,7 @@
     {   ZSTD_compressionParameters const cParams =
                 ZSTD_getCParamsFromCCtxParams(params, 0, 0);
         size_t const blockSize = MIN(ZSTD_BLOCKSIZE_MAX, (size_t)1 << cParams.windowLog);
-        U32    const divider = (cParams.searchLength==3) ? 3 : 4;
+        U32    const divider = (cParams.minMatch==3) ? 3 : 4;
         size_t const maxNbSeq = blockSize / divider;
         size_t const tokenSpace = WILDCOPY_OVERLENGTH + blockSize + 11*maxNbSeq;
         size_t const entropySpace = HUF_WORKSPACE_SIZE;
@@ -843,7 +1013,7 @@
 {
     int level;
     size_t memBudget = 0;
-    for (level=1; level<=compressionLevel; level++) {
+    for (level=MIN(compressionLevel, 1); level<=compressionLevel; level++) {
         size_t const newMB = ZSTD_estimateCCtxSize_internal(level);
         if (newMB > memBudget) memBudget = newMB;
     }
@@ -879,7 +1049,7 @@
 {
     int level;
     size_t memBudget = 0;
-    for (level=1; level<=compressionLevel; level++) {
+    for (level=MIN(compressionLevel, 1); level<=compressionLevel; level++) {
         size_t const newMB = ZSTD_estimateCStreamSize_internal(level);
         if (newMB > memBudget) memBudget = newMB;
     }
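
Starting the two loops above at MIN(compressionLevel, 1) makes the size
estimators meaningful for negative fast levels, which previously skipped the
loop entirely and reported a budget of 0. A sketch, assuming
ZSTD_STATIC_LINKING_ONLY is defined since these estimators are also staging
API at this version::

   static void show_budgets(void)
   {
       /* was 0 before: the old loop over 1..compressionLevel never ran */
       size_t const fastBudget    = ZSTD_estimateCCtxSize(-5);
       /* unchanged: maximum over levels 1..3 */
       size_t const defaultBudget = ZSTD_estimateCCtxSize(3);
       printf("fast: %zu, default: %zu\n", fastBudget, defaultBudget);
   }
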
@@ -933,7 +1103,7 @@
     return (cParams1.hashLog  == cParams2.hashLog)
          & (cParams1.chainLog == cParams2.chainLog)
          & (cParams1.strategy == cParams2.strategy)   /* opt parser space */
-         & ((cParams1.searchLength==3) == (cParams2.searchLength==3));  /* hashlog3 space */
+         & ((cParams1.minMatch==3) == (cParams2.minMatch==3));  /* hashlog3 space */
 }
 
 static void ZSTD_assertEqualCParams(ZSTD_compressionParameters cParams1,
@@ -945,7 +1115,7 @@
     assert(cParams1.chainLog     == cParams2.chainLog);
     assert(cParams1.hashLog      == cParams2.hashLog);
     assert(cParams1.searchLog    == cParams2.searchLog);
-    assert(cParams1.searchLength == cParams2.searchLength);
+    assert(cParams1.minMatch     == cParams2.minMatch);
     assert(cParams1.targetLength == cParams2.targetLength);
     assert(cParams1.strategy     == cParams2.strategy);
 }
@@ -960,7 +1130,7 @@
             ldmParams1.hashLog == ldmParams2.hashLog &&
             ldmParams1.bucketSizeLog == ldmParams2.bucketSizeLog &&
             ldmParams1.minMatchLength == ldmParams2.minMatchLength &&
-            ldmParams1.hashEveryLog == ldmParams2.hashEveryLog);
+            ldmParams1.hashRateLog == ldmParams2.hashRateLog);
 }
 
 typedef enum { ZSTDb_not_buffered, ZSTDb_buffered } ZSTD_buffered_policy_e;
@@ -976,7 +1146,7 @@
 {
     size_t const windowSize2 = MAX(1, (size_t)MIN(((U64)1 << cParams2.windowLog), pledgedSrcSize));
     size_t const blockSize2 = MIN(ZSTD_BLOCKSIZE_MAX, windowSize2);
-    size_t const maxNbSeq2 = blockSize2 / ((cParams2.searchLength == 3) ? 3 : 4);
+    size_t const maxNbSeq2 = blockSize2 / ((cParams2.minMatch == 3) ? 3 : 4);
     size_t const maxNbLit2 = blockSize2;
     size_t const neededBufferSize2 = (buffPol2==ZSTDb_buffered) ? windowSize2 + blockSize2 : 0;
     DEBUGLOG(4, "ZSTD_sufficientBuff: is neededBufferSize2=%u <= bufferSize1=%u",
@@ -1034,8 +1204,8 @@
 {
     ZSTD_window_clear(&ms->window);
 
-    ms->nextToUpdate = ms->window.dictLimit + 1;
-    ms->nextToUpdate3 = ms->window.dictLimit + 1;
+    ms->nextToUpdate = ms->window.dictLimit;
+    ms->nextToUpdate3 = ms->window.dictLimit;
     ms->loadedDictEnd = 0;
     ms->opt.litLengthSum = 0;  /* force reset of btopt stats */
     ms->dictMatchState = NULL;
@@ -1080,7 +1250,7 @@
 {
     size_t const chainSize = (cParams->strategy == ZSTD_fast) ? 0 : ((size_t)1 << cParams->chainLog);
     size_t const hSize = ((size_t)1) << cParams->hashLog;
-    U32    const hashLog3 = (forCCtx && cParams->searchLength==3) ? MIN(ZSTD_HASHLOG3_MAX, cParams->windowLog) : 0;
+    U32    const hashLog3 = (forCCtx && cParams->minMatch==3) ? MIN(ZSTD_HASHLOG3_MAX, cParams->windowLog) : 0;
     size_t const h3Size = ((size_t)1) << hashLog3;
     size_t const tableSpace = (chainSize + hSize + h3Size) * sizeof(U32);
 
@@ -1094,9 +1264,9 @@
     ZSTD_invalidateMatchState(ms);
 
     /* opt parser space */
-    if (forCCtx && ((cParams->strategy == ZSTD_btopt) | (cParams->strategy == ZSTD_btultra))) {
+    if (forCCtx && (cParams->strategy >= ZSTD_btopt)) {
         DEBUGLOG(4, "reserving optimal parser space");
-        ms->opt.litFreq = (U32*)ptr;
+        ms->opt.litFreq = (unsigned*)ptr;
         ms->opt.litLengthFreq = ms->opt.litFreq + (1<<Litbits);
         ms->opt.matchLengthFreq = ms->opt.litLengthFreq + (MaxLL+1);
         ms->opt.offCodeFreq = ms->opt.matchLengthFreq + (MaxML+1);
@@ -1158,13 +1328,13 @@
         /* Adjust long distance matching parameters */
         ZSTD_ldm_adjustParameters(&params.ldmParams, &params.cParams);
         assert(params.ldmParams.hashLog >= params.ldmParams.bucketSizeLog);
-        assert(params.ldmParams.hashEveryLog < 32);
-        zc->ldmState.hashPower = ZSTD_ldm_getHashPower(params.ldmParams.minMatchLength);
+        assert(params.ldmParams.hashRateLog < 32);
+        zc->ldmState.hashPower = ZSTD_rollingHash_primePower(params.ldmParams.minMatchLength);
     }
 
     {   size_t const windowSize = MAX(1, (size_t)MIN(((U64)1 << params.cParams.windowLog), pledgedSrcSize));
         size_t const blockSize = MIN(ZSTD_BLOCKSIZE_MAX, windowSize);
-        U32    const divider = (params.cParams.searchLength==3) ? 3 : 4;
+        U32    const divider = (params.cParams.minMatch==3) ? 3 : 4;
         size_t const maxNbSeq = blockSize / divider;
         size_t const tokenSpace = WILDCOPY_OVERLENGTH + blockSize + 11*maxNbSeq;
         size_t const buffOutSize = (zbuff==ZSTDb_buffered) ? ZSTD_compressBound(blockSize)+1 : 0;
@@ -1227,7 +1397,7 @@
         if (pledgedSrcSize == ZSTD_CONTENTSIZE_UNKNOWN)
             zc->appliedParams.fParams.contentSizeFlag = 0;
         DEBUGLOG(4, "pledged content size : %u ; flag : %u",
-            (U32)pledgedSrcSize, zc->appliedParams.fParams.contentSizeFlag);
+            (unsigned)pledgedSrcSize, zc->appliedParams.fParams.contentSizeFlag);
         zc->blockSize = blockSize;
 
         XXH64_reset(&zc->xxhState, 0);
@@ -1306,16 +1476,17 @@
  * dictionary tables into the working context is faster than using them
  * in-place.
  */
-static const size_t attachDictSizeCutoffs[(unsigned)ZSTD_btultra+1] = {
-    8 KB, /* unused */
-    8 KB, /* ZSTD_fast */
+static const size_t attachDictSizeCutoffs[ZSTD_STRATEGY_MAX+1] = {
+    8 KB,  /* unused */
+    8 KB,  /* ZSTD_fast */
     16 KB, /* ZSTD_dfast */
     32 KB, /* ZSTD_greedy */
     32 KB, /* ZSTD_lazy */
     32 KB, /* ZSTD_lazy2 */
     32 KB, /* ZSTD_btlazy2 */
     32 KB, /* ZSTD_btopt */
-    8 KB /* ZSTD_btultra */
+    8 KB,  /* ZSTD_btultra */
+    8 KB   /* ZSTD_btultra2 */
 };
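
The cutoff table above gains a ZSTD_btultra2 row; regardless of these
heuristics, callers can still force the attach-vs-copy decision through the
experimental ZSTD_c_forceAttachDict parameter. A fragment, assuming a
previously configured cctx::

   /* always reference the dictionary tables in place, even below the cutoff */
   ZSTD_CCtx_setParameter(cctx, ZSTD_c_forceAttachDict, ZSTD_dictForceAttach);

   /* or always copy them into the working context */
   ZSTD_CCtx_setParameter(cctx, ZSTD_c_forceAttachDict, ZSTD_dictForceCopy);
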
 
 static int ZSTD_shouldAttachDict(const ZSTD_CDict* cdict,
@@ -1447,7 +1618,8 @@
                             ZSTD_buffered_policy_e zbuff)
 {
 
-    DEBUGLOG(4, "ZSTD_resetCCtx_usingCDict (pledgedSrcSize=%u)", (U32)pledgedSrcSize);
+    DEBUGLOG(4, "ZSTD_resetCCtx_usingCDict (pledgedSrcSize=%u)",
+                (unsigned)pledgedSrcSize);
 
     if (ZSTD_shouldAttachDict(cdict, params, pledgedSrcSize)) {
         return ZSTD_resetCCtx_byAttachingCDict(
@@ -1670,7 +1842,9 @@
  * note : use same formula for both situations */
 static size_t ZSTD_minGain(size_t srcSize, ZSTD_strategy strat)
 {
-    U32 const minlog = (strat==ZSTD_btultra) ? 7 : 6;
+    U32 const minlog = (strat>=ZSTD_btultra) ? (U32)(strat) - 1 : 6;
+    ZSTD_STATIC_ASSERT(ZSTD_btultra == 8);
+    assert(ZSTD_cParam_withinBounds(ZSTD_c_strategy, strat));
     return (srcSize >> minlog) + 2;
 }
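
The generalized formula keeps the old behaviour for btopt and btultra and
extends it to btultra2. Worked numbers for a 128 KB block, assuming the
v1.3.8 strategy numbering (ZSTD_btultra == 8, ZSTD_btultra2 == 9)::

   /* strategy          minlog   minGain(131072) = (131072 >> minlog) + 2 */
   /* ZSTD_btopt    (7)    6                2050   (unchanged)            */
   /* ZSTD_btultra  (8)    7                1026   (unchanged: 8 - 1 = 7) */
   /* ZSTD_btultra2 (9)    8                 514   (new)                  */

That is, the stronger the strategy, the smaller the saving it will settle for
before falling back to a raw block.
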
 
@@ -1679,7 +1853,8 @@
                                      ZSTD_strategy strategy, int disableLiteralCompression,
                                      void* dst, size_t dstCapacity,
                                const void* src, size_t srcSize,
-                                     U32* workspace, const int bmi2)
+                                     void* workspace, size_t wkspSize,
+                               const int bmi2)
 {
     size_t const minGain = ZSTD_minGain(srcSize, strategy);
     size_t const lhSize = 3 + (srcSize >= 1 KB) + (srcSize >= 16 KB);
@@ -1708,9 +1883,9 @@
         int const preferRepeat = strategy < ZSTD_lazy ? srcSize <= 1024 : 0;
         if (repeat == HUF_repeat_valid && lhSize == 3) singleStream = 1;
         cLitSize = singleStream ? HUF_compress1X_repeat(ostart+lhSize, dstCapacity-lhSize, src, srcSize, 255, 11,
-                                      workspace, HUF_WORKSPACE_SIZE, (HUF_CElt*)nextHuf->CTable, &repeat, preferRepeat, bmi2)
+                                      workspace, wkspSize, (HUF_CElt*)nextHuf->CTable, &repeat, preferRepeat, bmi2)
                                 : HUF_compress4X_repeat(ostart+lhSize, dstCapacity-lhSize, src, srcSize, 255, 11,
-                                      workspace, HUF_WORKSPACE_SIZE, (HUF_CElt*)nextHuf->CTable, &repeat, preferRepeat, bmi2);
+                                      workspace, wkspSize, (HUF_CElt*)nextHuf->CTable, &repeat, preferRepeat, bmi2);
         if (repeat != HUF_repeat_none) {
             /* reused the existing table */
             hType = set_repeat;
@@ -1977,7 +2152,7 @@
         assert(!ZSTD_isError(NCountCost));
         assert(compressedCost < ERROR(maxCode));
         DEBUGLOG(5, "Estimated bit costs: basic=%u\trepeat=%u\tcompressed=%u",
-                    (U32)basicCost, (U32)repeatCost, (U32)compressedCost);
+                    (unsigned)basicCost, (unsigned)repeatCost, (unsigned)compressedCost);
         if (basicCost <= repeatCost && basicCost <= compressedCost) {
             DEBUGLOG(5, "Selected set_basic");
             assert(isDefaultAllowed);
@@ -1999,7 +2174,7 @@
 MEM_STATIC size_t
 ZSTD_buildCTable(void* dst, size_t dstCapacity,
                 FSE_CTable* nextCTable, U32 FSELog, symbolEncodingType_e type,
-                U32* count, U32 max,
+                unsigned* count, U32 max,
                 const BYTE* codeTable, size_t nbSeq,
                 const S16* defaultNorm, U32 defaultNormLog, U32 defaultMax,
                 const FSE_CTable* prevCTable, size_t prevCTableSize,
@@ -2007,11 +2182,13 @@
 {
     BYTE* op = (BYTE*)dst;
     const BYTE* const oend = op + dstCapacity;
+    DEBUGLOG(6, "ZSTD_buildCTable (dstCapacity=%u)", (unsigned)dstCapacity);
 
     switch (type) {
     case set_rle:
+        CHECK_F(FSE_buildCTable_rle(nextCTable, (BYTE)max));
+        if (dstCapacity==0) return ERROR(dstSize_tooSmall);
         *op = codeTable[0];
-        CHECK_F(FSE_buildCTable_rle(nextCTable, (BYTE)max));
         return 1;
     case set_repeat:
         memcpy(nextCTable, prevCTable, prevCTableSize);
@@ -2053,6 +2230,9 @@
     FSE_CState_t  stateLitLength;
 
     CHECK_E(BIT_initCStream(&blockStream, dst, dstCapacity), dstSize_tooSmall); /* not enough space remaining */
+    DEBUGLOG(6, "available space for bitstream : %i  (dstCapacity=%u)",
+                (int)(blockStream.endPtr - blockStream.startPtr),
+                (unsigned)dstCapacity);
 
     /* first symbols */
     FSE_initCState2(&stateMatchLength, CTable_MatchLength, mlCodeTable[nbSeq-1]);
@@ -2085,9 +2265,9 @@
             U32  const ofBits = ofCode;
             U32  const mlBits = ML_bits[mlCode];
             DEBUGLOG(6, "encoding: litlen:%2u - matchlen:%2u - offCode:%7u",
-                        sequences[n].litLength,
-                        sequences[n].matchLength + MINMATCH,
-                        sequences[n].offset);
+                        (unsigned)sequences[n].litLength,
+                        (unsigned)sequences[n].matchLength + MINMATCH,
+                        (unsigned)sequences[n].offset);
                                                                             /* 32b*/  /* 64b*/
                                                                             /* (7)*/  /* (7)*/
             FSE_encodeSymbol(&blockStream, &stateOffsetBits, ofCode);       /* 15 */  /* 15 */
@@ -2112,6 +2292,7 @@
                 BIT_addBits(&blockStream, sequences[n].offset, ofBits);     /* 31 */
             }
             BIT_flushBits(&blockStream);                                    /* (7)*/
+            DEBUGLOG(7, "remaining space : %i", (int)(blockStream.endPtr - blockStream.ptr));
     }   }
 
     DEBUGLOG(6, "ZSTD_encodeSequences: flushing ML state with %u bits", stateMatchLength.stateLog);
@@ -2169,6 +2350,7 @@
             FSE_CTable const* CTable_LitLength, BYTE const* llCodeTable,
             seqDef const* sequences, size_t nbSeq, int longOffsets, int bmi2)
 {
+    DEBUGLOG(5, "ZSTD_encodeSequences: dstCapacity = %u", (unsigned)dstCapacity);
 #if DYNAMIC_BMI2
     if (bmi2) {
         return ZSTD_encodeSequences_bmi2(dst, dstCapacity,
@@ -2186,16 +2368,20 @@
                                         sequences, nbSeq, longOffsets);
 }
 
-MEM_STATIC size_t ZSTD_compressSequences_internal(seqStore_t* seqStorePtr,
-                              ZSTD_entropyCTables_t const* prevEntropy,
-                              ZSTD_entropyCTables_t* nextEntropy,
-                              ZSTD_CCtx_params const* cctxParams,
-                              void* dst, size_t dstCapacity, U32* workspace,
-                              const int bmi2)
+/* ZSTD_compressSequences_internal():
+ * actually compresses both literals and sequences */
+MEM_STATIC size_t
+ZSTD_compressSequences_internal(seqStore_t* seqStorePtr,
+                          const ZSTD_entropyCTables_t* prevEntropy,
+                                ZSTD_entropyCTables_t* nextEntropy,
+                          const ZSTD_CCtx_params* cctxParams,
+                                void* dst, size_t dstCapacity,
+                                void* workspace, size_t wkspSize,
+                          const int bmi2)
 {
     const int longOffsets = cctxParams->cParams.windowLog > STREAM_ACCUMULATOR_MIN;
     ZSTD_strategy const strategy = cctxParams->cParams.strategy;
-    U32 count[MaxSeq+1];
+    unsigned count[MaxSeq+1];
     FSE_CTable* CTable_LitLength = nextEntropy->fse.litlengthCTable;
     FSE_CTable* CTable_OffsetBits = nextEntropy->fse.offcodeCTable;
     FSE_CTable* CTable_MatchLength = nextEntropy->fse.matchlengthCTable;
@@ -2212,6 +2398,7 @@
     BYTE* lastNCount = NULL;
 
     ZSTD_STATIC_ASSERT(HUF_WORKSPACE_SIZE >= (1<<MAX(MLFSELog,LLFSELog)));
+    DEBUGLOG(5, "ZSTD_compressSequences_internal");
 
     /* Compress literals */
     {   const BYTE* const literals = seqStorePtr->litStart;
@@ -2222,7 +2409,8 @@
                                     cctxParams->cParams.strategy, disableLiteralCompression,
                                     op, dstCapacity,
                                     literals, litSize,
-                                    workspace, bmi2);
+                                    workspace, wkspSize,
+                                    bmi2);
         if (ZSTD_isError(cSize))
           return cSize;
         assert(cSize <= dstCapacity);
@@ -2249,51 +2437,63 @@
     /* convert length/distances into codes */
     ZSTD_seqToCodes(seqStorePtr);
     /* build CTable for Literal Lengths */
-    {   U32 max = MaxLL;
-        size_t const mostFrequent = HIST_countFast_wksp(count, &max, llCodeTable, nbSeq, workspace);   /* can't fail */
+    {   unsigned max = MaxLL;
+        size_t const mostFrequent = HIST_countFast_wksp(count, &max, llCodeTable, nbSeq, workspace, wkspSize);   /* can't fail */
         DEBUGLOG(5, "Building LL table");
         nextEntropy->fse.litlength_repeatMode = prevEntropy->fse.litlength_repeatMode;
-        LLtype = ZSTD_selectEncodingType(&nextEntropy->fse.litlength_repeatMode, count, max, mostFrequent, nbSeq, LLFSELog, prevEntropy->fse.litlengthCTable, LL_defaultNorm, LL_defaultNormLog, ZSTD_defaultAllowed, strategy);
+        LLtype = ZSTD_selectEncodingType(&nextEntropy->fse.litlength_repeatMode,
+                                        count, max, mostFrequent, nbSeq,
+                                        LLFSELog, prevEntropy->fse.litlengthCTable,
+                                        LL_defaultNorm, LL_defaultNormLog,
+                                        ZSTD_defaultAllowed, strategy);
         assert(set_basic < set_compressed && set_rle < set_compressed);
         assert(!(LLtype < set_compressed && nextEntropy->fse.litlength_repeatMode != FSE_repeat_none)); /* We don't copy tables */
         {   size_t const countSize = ZSTD_buildCTable(op, oend - op, CTable_LitLength, LLFSELog, (symbolEncodingType_e)LLtype,
                                                     count, max, llCodeTable, nbSeq, LL_defaultNorm, LL_defaultNormLog, MaxLL,
                                                     prevEntropy->fse.litlengthCTable, sizeof(prevEntropy->fse.litlengthCTable),
-                                                    workspace, HUF_WORKSPACE_SIZE);
+                                                    workspace, wkspSize);
             if (ZSTD_isError(countSize)) return countSize;
             if (LLtype == set_compressed)
                 lastNCount = op;
             op += countSize;
     }   }
     /* build CTable for Offsets */
-    {   U32 max = MaxOff;
-        size_t const mostFrequent = HIST_countFast_wksp(count, &max, ofCodeTable, nbSeq, workspace);  /* can't fail */
+    {   unsigned max = MaxOff;
+        size_t const mostFrequent = HIST_countFast_wksp(count, &max, ofCodeTable, nbSeq, workspace, wkspSize);  /* can't fail */
         /* We can only use the basic table if max <= DefaultMaxOff, otherwise the offsets are too large */
         ZSTD_defaultPolicy_e const defaultPolicy = (max <= DefaultMaxOff) ? ZSTD_defaultAllowed : ZSTD_defaultDisallowed;
         DEBUGLOG(5, "Building OF table");
         nextEntropy->fse.offcode_repeatMode = prevEntropy->fse.offcode_repeatMode;
-        Offtype = ZSTD_selectEncodingType(&nextEntropy->fse.offcode_repeatMode, count, max, mostFrequent, nbSeq, OffFSELog, prevEntropy->fse.offcodeCTable, OF_defaultNorm, OF_defaultNormLog, defaultPolicy, strategy);
+        Offtype = ZSTD_selectEncodingType(&nextEntropy->fse.offcode_repeatMode,
+                                        count, max, mostFrequent, nbSeq,
+                                        OffFSELog, prevEntropy->fse.offcodeCTable,
+                                        OF_defaultNorm, OF_defaultNormLog,
+                                        defaultPolicy, strategy);
         assert(!(Offtype < set_compressed && nextEntropy->fse.offcode_repeatMode != FSE_repeat_none)); /* We don't copy tables */
         {   size_t const countSize = ZSTD_buildCTable(op, oend - op, CTable_OffsetBits, OffFSELog, (symbolEncodingType_e)Offtype,
                                                     count, max, ofCodeTable, nbSeq, OF_defaultNorm, OF_defaultNormLog, DefaultMaxOff,
                                                     prevEntropy->fse.offcodeCTable, sizeof(prevEntropy->fse.offcodeCTable),
-                                                    workspace, HUF_WORKSPACE_SIZE);
+                                                    workspace, wkspSize);
             if (ZSTD_isError(countSize)) return countSize;
             if (Offtype == set_compressed)
                 lastNCount = op;
             op += countSize;
     }   }
     /* build CTable for MatchLengths */
-    {   U32 max = MaxML;
-        size_t const mostFrequent = HIST_countFast_wksp(count, &max, mlCodeTable, nbSeq, workspace);   /* can't fail */
-        DEBUGLOG(5, "Building ML table");
+    {   unsigned max = MaxML;
+        size_t const mostFrequent = HIST_countFast_wksp(count, &max, mlCodeTable, nbSeq, workspace, wkspSize);   /* can't fail */
+        DEBUGLOG(5, "Building ML table (remaining space : %i)", (int)(oend-op));
         nextEntropy->fse.matchlength_repeatMode = prevEntropy->fse.matchlength_repeatMode;
-        MLtype = ZSTD_selectEncodingType(&nextEntropy->fse.matchlength_repeatMode, count, max, mostFrequent, nbSeq, MLFSELog, prevEntropy->fse.matchlengthCTable, ML_defaultNorm, ML_defaultNormLog, ZSTD_defaultAllowed, strategy);
+        MLtype = ZSTD_selectEncodingType(&nextEntropy->fse.matchlength_repeatMode,
+                                        count, max, mostFrequent, nbSeq,
+                                        MLFSELog, prevEntropy->fse.matchlengthCTable,
+                                        ML_defaultNorm, ML_defaultNormLog,
+                                        ZSTD_defaultAllowed, strategy);
         assert(!(MLtype < set_compressed && nextEntropy->fse.matchlength_repeatMode != FSE_repeat_none)); /* We don't copy tables */
         {   size_t const countSize = ZSTD_buildCTable(op, oend - op, CTable_MatchLength, MLFSELog, (symbolEncodingType_e)MLtype,
                                                     count, max, mlCodeTable, nbSeq, ML_defaultNorm, ML_defaultNormLog, MaxML,
                                                     prevEntropy->fse.matchlengthCTable, sizeof(prevEntropy->fse.matchlengthCTable),
-                                                    workspace, HUF_WORKSPACE_SIZE);
+                                                    workspace, wkspSize);
             if (ZSTD_isError(countSize)) return countSize;
             if (MLtype == set_compressed)
                 lastNCount = op;
@@ -2328,19 +2528,24 @@
         }
     }
 
+    DEBUGLOG(5, "compressed block size : %u", (unsigned)(op - ostart));
     return op - ostart;
 }
 
-MEM_STATIC size_t ZSTD_compressSequences(seqStore_t* seqStorePtr,
-                        const ZSTD_entropyCTables_t* prevEntropy,
-                              ZSTD_entropyCTables_t* nextEntropy,
-                        const ZSTD_CCtx_params* cctxParams,
-                              void* dst, size_t dstCapacity,
-                              size_t srcSize, U32* workspace, int bmi2)
+MEM_STATIC size_t
+ZSTD_compressSequences(seqStore_t* seqStorePtr,
+                       const ZSTD_entropyCTables_t* prevEntropy,
+                             ZSTD_entropyCTables_t* nextEntropy,
+                       const ZSTD_CCtx_params* cctxParams,
+                             void* dst, size_t dstCapacity,
+                             size_t srcSize,
+                             void* workspace, size_t wkspSize,
+                             int bmi2)
 {
     size_t const cSize = ZSTD_compressSequences_internal(
-            seqStorePtr, prevEntropy, nextEntropy, cctxParams, dst, dstCapacity,
-            workspace, bmi2);
+                            seqStorePtr, prevEntropy, nextEntropy, cctxParams,
+                            dst, dstCapacity,
+                            workspace, wkspSize, bmi2);
     if (cSize == 0) return 0;
     /* When srcSize <= dstCapacity, there is enough space to write a raw uncompressed block.
      * Since we ran out of space, block must be not compressible, so fall back to raw uncompressed block.
@@ -2362,7 +2567,7 @@
  * assumption : strat is a valid strategy */
 ZSTD_blockCompressor ZSTD_selectBlockCompressor(ZSTD_strategy strat, ZSTD_dictMode_e dictMode)
 {
-    static const ZSTD_blockCompressor blockCompressor[3][(unsigned)ZSTD_btultra+1] = {
+    static const ZSTD_blockCompressor blockCompressor[3][ZSTD_STRATEGY_MAX+1] = {
         { ZSTD_compressBlock_fast  /* default for 0 */,
           ZSTD_compressBlock_fast,
           ZSTD_compressBlock_doubleFast,
@@ -2371,7 +2576,8 @@
           ZSTD_compressBlock_lazy2,
           ZSTD_compressBlock_btlazy2,
           ZSTD_compressBlock_btopt,
-          ZSTD_compressBlock_btultra },
+          ZSTD_compressBlock_btultra,
+          ZSTD_compressBlock_btultra2 },
         { ZSTD_compressBlock_fast_extDict  /* default for 0 */,
           ZSTD_compressBlock_fast_extDict,
           ZSTD_compressBlock_doubleFast_extDict,
@@ -2380,6 +2586,7 @@
           ZSTD_compressBlock_lazy2_extDict,
           ZSTD_compressBlock_btlazy2_extDict,
           ZSTD_compressBlock_btopt_extDict,
+          ZSTD_compressBlock_btultra_extDict,
           ZSTD_compressBlock_btultra_extDict },
         { ZSTD_compressBlock_fast_dictMatchState  /* default for 0 */,
           ZSTD_compressBlock_fast_dictMatchState,
@@ -2389,14 +2596,14 @@
           ZSTD_compressBlock_lazy2_dictMatchState,
           ZSTD_compressBlock_btlazy2_dictMatchState,
           ZSTD_compressBlock_btopt_dictMatchState,
+          ZSTD_compressBlock_btultra_dictMatchState,
           ZSTD_compressBlock_btultra_dictMatchState }
     };
     ZSTD_blockCompressor selectedCompressor;
     ZSTD_STATIC_ASSERT((unsigned)ZSTD_fast == 1);
 
-    assert((U32)strat >= (U32)ZSTD_fast);
-    assert((U32)strat <= (U32)ZSTD_btultra);
-    selectedCompressor = blockCompressor[(int)dictMode][(U32)strat];
+    assert(ZSTD_cParam_withinBounds(ZSTD_c_strategy, strat));
+    selectedCompressor = blockCompressor[(int)dictMode][(int)strat];
     assert(selectedCompressor != NULL);
     return selectedCompressor;
 }
@@ -2421,15 +2628,15 @@
 {
     ZSTD_matchState_t* const ms = &zc->blockState.matchState;
     size_t cSize;
-    DEBUGLOG(5, "ZSTD_compressBlock_internal (dstCapacity=%zu, dictLimit=%u, nextToUpdate=%u)",
-                dstCapacity, ms->window.dictLimit, ms->nextToUpdate);
+    DEBUGLOG(5, "ZSTD_compressBlock_internal (dstCapacity=%u, dictLimit=%u, nextToUpdate=%u)",
+                (unsigned)dstCapacity, (unsigned)ms->window.dictLimit, (unsigned)ms->nextToUpdate);
     assert(srcSize <= ZSTD_BLOCKSIZE_MAX);
 
     /* Assert that we have correctly flushed the ctx params into the ms's copy */
     ZSTD_assertEqualCParams(zc->appliedParams.cParams, ms->cParams);
 
     if (srcSize < MIN_CBLOCK_SIZE+ZSTD_blockHeaderSize+1) {
-        ZSTD_ldm_skipSequences(&zc->externSeqStore, srcSize, zc->appliedParams.cParams.searchLength);
+        ZSTD_ldm_skipSequences(&zc->externSeqStore, srcSize, zc->appliedParams.cParams.minMatch);
         cSize = 0;
         goto out;  /* don't even attempt compression below a certain srcSize */
     }
@@ -2437,8 +2644,8 @@
     ms->opt.symbolCosts = &zc->blockState.prevCBlock->entropy;   /* required for optimal parser to read stats from dictionary */
 
     /* a gap between an attached dict and the current window is not safe,
-     * they must remain adjacent, and when that stops being the case, the dict
-     * must be unset */
+     * they must remain adjacent,
+     * and when that stops being the case, the dict must be unset */
     assert(ms->dictMatchState == NULL || ms->loadedDictEnd == ms->window.dictLimit);
 
     /* limited update after a very long match */
@@ -2495,7 +2702,9 @@
             &zc->blockState.prevCBlock->entropy, &zc->blockState.nextCBlock->entropy,
             &zc->appliedParams,
             dst, dstCapacity,
-            srcSize, zc->entropyWorkspace, zc->bmi2);
+            srcSize,
+            zc->entropyWorkspace, HUF_WORKSPACE_SIZE /* statically allocated in resetCCtx */,
+            zc->bmi2);
 
 out:
     if (!ZSTD_isError(cSize) && cSize != 0) {
@@ -2535,7 +2744,7 @@
     U32 const maxDist = (U32)1 << cctx->appliedParams.cParams.windowLog;
     assert(cctx->appliedParams.cParams.windowLog <= 31);
 
-    DEBUGLOG(5, "ZSTD_compress_frameChunk (blockSize=%u)", (U32)blockSize);
+    DEBUGLOG(5, "ZSTD_compress_frameChunk (blockSize=%u)", (unsigned)blockSize);
     if (cctx->appliedParams.fParams.checksumFlag && srcSize)
         XXH64_update(&cctx->xxhState, src, srcSize);
 
@@ -2583,7 +2792,7 @@
             assert(dstCapacity >= cSize);
             dstCapacity -= cSize;
             DEBUGLOG(5, "ZSTD_compress_frameChunk: adding a block of size %u",
-                        (U32)cSize);
+                        (unsigned)cSize);
     }   }
 
     if (lastFrameChunk && (op>ostart)) cctx->stage = ZSTDcs_ending;
@@ -2606,9 +2815,9 @@
     size_t pos=0;
 
     assert(!(params.fParams.contentSizeFlag && pledgedSrcSize == ZSTD_CONTENTSIZE_UNKNOWN));
-    if (dstCapacity < ZSTD_frameHeaderSize_max) return ERROR(dstSize_tooSmall);
+    if (dstCapacity < ZSTD_FRAMEHEADERSIZE_MAX) return ERROR(dstSize_tooSmall);
     DEBUGLOG(4, "ZSTD_writeFrameHeader : dictIDFlag : %u ; dictID : %u ; dictIDSizeCode : %u",
-                !params.fParams.noDictIDFlag, dictID,  dictIDSizeCode);
+                !params.fParams.noDictIDFlag, (unsigned)dictID, (unsigned)dictIDSizeCode);
 
     if (params.format == ZSTD_f_zstd1) {
         MEM_writeLE32(dst, ZSTD_MAGICNUMBER);
@@ -2672,7 +2881,7 @@
     size_t fhSize = 0;
 
     DEBUGLOG(5, "ZSTD_compressContinue_internal, stage: %u, srcSize: %u",
-                cctx->stage, (U32)srcSize);
+                cctx->stage, (unsigned)srcSize);
     if (cctx->stage==ZSTDcs_created) return ERROR(stage_wrong);   /* missing init (ZSTD_compressBegin) */
 
     if (frame && (cctx->stage==ZSTDcs_init)) {
@@ -2709,7 +2918,7 @@
         }
     }
 
-    DEBUGLOG(5, "ZSTD_compressContinue_internal (blockSize=%u)", (U32)cctx->blockSize);
+    DEBUGLOG(5, "ZSTD_compressContinue_internal (blockSize=%u)", (unsigned)cctx->blockSize);
     {   size_t const cSize = frame ?
                              ZSTD_compress_frameChunk (cctx, dst, dstCapacity, src, srcSize, lastFrameChunk) :
                              ZSTD_compressBlock_internal (cctx, dst, dstCapacity, src, srcSize);
@@ -2721,7 +2930,7 @@
             ZSTD_STATIC_ASSERT(ZSTD_CONTENTSIZE_UNKNOWN == (unsigned long long)-1);
             if (cctx->consumedSrcSize+1 > cctx->pledgedSrcSizePlusOne) {
                 DEBUGLOG(4, "error : pledgedSrcSize = %u, while realSrcSize >= %u",
-                    (U32)cctx->pledgedSrcSizePlusOne-1, (U32)cctx->consumedSrcSize);
+                    (unsigned)cctx->pledgedSrcSizePlusOne-1, (unsigned)cctx->consumedSrcSize);
                 return ERROR(srcSize_wrong);
             }
         }
@@ -2733,7 +2942,7 @@
                               void* dst, size_t dstCapacity,
                         const void* src, size_t srcSize)
 {
-    DEBUGLOG(5, "ZSTD_compressContinue (srcSize=%u)", (U32)srcSize);
+    DEBUGLOG(5, "ZSTD_compressContinue (srcSize=%u)", (unsigned)srcSize);
     return ZSTD_compressContinue_internal(cctx, dst, dstCapacity, src, srcSize, 1 /* frame mode */, 0 /* last chunk */);
 }
 
@@ -2791,6 +3000,7 @@
     case ZSTD_btlazy2:   /* we want the dictionary table fully sorted */
     case ZSTD_btopt:
     case ZSTD_btultra:
+    case ZSTD_btultra2:
         if (srcSize >= HASH_READ_SIZE)
             ZSTD_updateTree(ms, iend-HASH_READ_SIZE, iend);
         break;
@@ -2861,7 +3071,9 @@
         if (offcodeLog > OffFSELog) return ERROR(dictionary_corrupted);
         /* Defer checking offcodeMaxValue because we need to know the size of the dictionary content */
         /* fill all offset symbols to avoid garbage at end of table */
-        CHECK_E( FSE_buildCTable_wksp(bs->entropy.fse.offcodeCTable, offcodeNCount, MaxOff, offcodeLog, workspace, HUF_WORKSPACE_SIZE),
+        CHECK_E( FSE_buildCTable_wksp(bs->entropy.fse.offcodeCTable,
+                                    offcodeNCount, MaxOff, offcodeLog,
+                                    workspace, HUF_WORKSPACE_SIZE),
                  dictionary_corrupted);
         dictPtr += offcodeHeaderSize;
     }
@@ -2873,7 +3085,9 @@
         if (matchlengthLog > MLFSELog) return ERROR(dictionary_corrupted);
         /* Every match length code must have non-zero probability */
         CHECK_F( ZSTD_checkDictNCount(matchlengthNCount, matchlengthMaxValue, MaxML));
-        CHECK_E( FSE_buildCTable_wksp(bs->entropy.fse.matchlengthCTable, matchlengthNCount, matchlengthMaxValue, matchlengthLog, workspace, HUF_WORKSPACE_SIZE),
+        CHECK_E( FSE_buildCTable_wksp(bs->entropy.fse.matchlengthCTable,
+                                    matchlengthNCount, matchlengthMaxValue, matchlengthLog,
+                                    workspace, HUF_WORKSPACE_SIZE),
                  dictionary_corrupted);
         dictPtr += matchlengthHeaderSize;
     }
@@ -2885,7 +3099,9 @@
         if (litlengthLog > LLFSELog) return ERROR(dictionary_corrupted);
         /* Every literal length code must have non-zero probability */
         CHECK_F( ZSTD_checkDictNCount(litlengthNCount, litlengthMaxValue, MaxLL));
-        CHECK_E( FSE_buildCTable_wksp(bs->entropy.fse.litlengthCTable, litlengthNCount, litlengthMaxValue, litlengthLog, workspace, HUF_WORKSPACE_SIZE),
+        CHECK_E( FSE_buildCTable_wksp(bs->entropy.fse.litlengthCTable,
+                                    litlengthNCount, litlengthMaxValue, litlengthLog,
+                                    workspace, HUF_WORKSPACE_SIZE),
                  dictionary_corrupted);
         dictPtr += litlengthHeaderSize;
     }
@@ -3023,7 +3239,7 @@
     ZSTD_parameters const params = ZSTD_getParams(compressionLevel, ZSTD_CONTENTSIZE_UNKNOWN, dictSize);
     ZSTD_CCtx_params const cctxParams =
             ZSTD_assignParamsToCCtxParams(cctx->requestedParams, params);
-    DEBUGLOG(4, "ZSTD_compressBegin_usingDict (dictSize=%u)", (U32)dictSize);
+    DEBUGLOG(4, "ZSTD_compressBegin_usingDict (dictSize=%u)", (unsigned)dictSize);
     return ZSTD_compressBegin_internal(cctx, dict, dictSize, ZSTD_dct_auto, ZSTD_dtlm_fast, NULL,
                                        cctxParams, ZSTD_CONTENTSIZE_UNKNOWN, ZSTDb_not_buffered);
 }
@@ -3067,7 +3283,7 @@
     if (cctx->appliedParams.fParams.checksumFlag) {
         U32 const checksum = (U32) XXH64_digest(&cctx->xxhState);
         if (dstCapacity<4) return ERROR(dstSize_tooSmall);
-        DEBUGLOG(4, "ZSTD_writeEpilogue: write checksum : %08X", checksum);
+        DEBUGLOG(4, "ZSTD_writeEpilogue: write checksum : %08X", (unsigned)checksum);
         MEM_writeLE32(op, checksum);
         op += 4;
     }
@@ -3093,7 +3309,7 @@
         DEBUGLOG(4, "end of frame : controlling src size");
         if (cctx->pledgedSrcSizePlusOne != cctx->consumedSrcSize+1) {
             DEBUGLOG(4, "error : pledgedSrcSize = %u, while realSrcSize = %u",
-                (U32)cctx->pledgedSrcSizePlusOne-1, (U32)cctx->consumedSrcSize);
+                (unsigned)cctx->pledgedSrcSizePlusOne-1, (unsigned)cctx->consumedSrcSize);
             return ERROR(srcSize_wrong);
     }   }
     return cSize + endResult;
@@ -3139,7 +3355,7 @@
         const void* dict,size_t dictSize,
         ZSTD_CCtx_params params)
 {
-    DEBUGLOG(4, "ZSTD_compress_advanced_internal (srcSize:%u)", (U32)srcSize);
+    DEBUGLOG(4, "ZSTD_compress_advanced_internal (srcSize:%u)", (unsigned)srcSize);
     CHECK_F( ZSTD_compressBegin_internal(cctx,
                          dict, dictSize, ZSTD_dct_auto, ZSTD_dtlm_fast, NULL,
                          params, srcSize, ZSTDb_not_buffered) );
@@ -3163,7 +3379,7 @@
                    const void* src, size_t srcSize,
                          int compressionLevel)
 {
-    DEBUGLOG(4, "ZSTD_compressCCtx (srcSize=%u)", (U32)srcSize);
+    DEBUGLOG(4, "ZSTD_compressCCtx (srcSize=%u)", (unsigned)srcSize);
     assert(cctx != NULL);
     return ZSTD_compress_usingDict(cctx, dst, dstCapacity, src, srcSize, NULL, 0, compressionLevel);
 }
@@ -3189,7 +3405,7 @@
         size_t dictSize, ZSTD_compressionParameters cParams,
         ZSTD_dictLoadMethod_e dictLoadMethod)
 {
-    DEBUGLOG(5, "sizeof(ZSTD_CDict) : %u", (U32)sizeof(ZSTD_CDict));
+    DEBUGLOG(5, "sizeof(ZSTD_CDict) : %u", (unsigned)sizeof(ZSTD_CDict));
     return sizeof(ZSTD_CDict) + HUF_WORKSPACE_SIZE + ZSTD_sizeof_matchState(&cParams, /* forCCtx */ 0)
            + (dictLoadMethod == ZSTD_dlm_byRef ? 0 : dictSize);
 }
@@ -3203,7 +3419,7 @@
 size_t ZSTD_sizeof_CDict(const ZSTD_CDict* cdict)
 {
     if (cdict==NULL) return 0;   /* support sizeof on NULL */
-    DEBUGLOG(5, "sizeof(*cdict) : %u", (U32)sizeof(*cdict));
+    DEBUGLOG(5, "sizeof(*cdict) : %u", (unsigned)sizeof(*cdict));
     return cdict->workspaceSize + (cdict->dictBuffer ? cdict->dictContentSize : 0) + sizeof(*cdict);
 }
 
@@ -3214,7 +3430,7 @@
                     ZSTD_dictContentType_e dictContentType,
                     ZSTD_compressionParameters cParams)
 {
-    DEBUGLOG(3, "ZSTD_initCDict_internal (dictContentType:%u)", (U32)dictContentType);
+    DEBUGLOG(3, "ZSTD_initCDict_internal (dictContentType:%u)", (unsigned)dictContentType);
     assert(!ZSTD_checkCParams(cParams));
     cdict->matchState.cParams = cParams;
     if ((dictLoadMethod == ZSTD_dlm_byRef) || (!dictBuffer) || (!dictSize)) {
@@ -3264,7 +3480,7 @@
                                       ZSTD_dictContentType_e dictContentType,
                                       ZSTD_compressionParameters cParams, ZSTD_customMem customMem)
 {
-    DEBUGLOG(3, "ZSTD_createCDict_advanced, mode %u", (U32)dictContentType);
+    DEBUGLOG(3, "ZSTD_createCDict_advanced, mode %u", (unsigned)dictContentType);
     if (!customMem.customAlloc ^ !customMem.customFree) return NULL;
 
     {   ZSTD_CDict* const cdict = (ZSTD_CDict*)ZSTD_malloc(sizeof(ZSTD_CDict), customMem);
@@ -3345,7 +3561,7 @@
     void* ptr;
     if ((size_t)workspace & 7) return NULL;  /* 8-aligned */
     DEBUGLOG(4, "(workspaceSize < neededSize) : (%u < %u) => %u",
-        (U32)workspaceSize, (U32)neededSize, (U32)(workspaceSize < neededSize));
+        (unsigned)workspaceSize, (unsigned)neededSize, (unsigned)(workspaceSize < neededSize));
     if (workspaceSize < neededSize) return NULL;
 
     if (dictLoadMethod == ZSTD_dlm_byCopy) {
@@ -3505,7 +3721,7 @@
 size_t ZSTD_resetCStream(ZSTD_CStream* zcs, unsigned long long pledgedSrcSize)
 {
     ZSTD_CCtx_params params = zcs->requestedParams;
-    DEBUGLOG(4, "ZSTD_resetCStream: pledgedSrcSize = %u", (U32)pledgedSrcSize);
+    DEBUGLOG(4, "ZSTD_resetCStream: pledgedSrcSize = %u", (unsigned)pledgedSrcSize);
     if (pledgedSrcSize==0) pledgedSrcSize = ZSTD_CONTENTSIZE_UNKNOWN;
     params.fParams.contentSizeFlag = 1;
     return ZSTD_resetCStream_internal(zcs, NULL, 0, ZSTD_dct_auto, zcs->cdict, params, pledgedSrcSize);
@@ -3525,7 +3741,7 @@
     assert(!((dict) && (cdict)));  /* either dict or cdict, not both */
 
     if (dict && dictSize >= 8) {
-        DEBUGLOG(4, "loading dictionary of size %u", (U32)dictSize);
+        DEBUGLOG(4, "loading dictionary of size %u", (unsigned)dictSize);
         if (zcs->staticSize) {   /* static CCtx : never uses malloc */
             /* incompatible with internal cdict creation */
             return ERROR(memory_allocation);
@@ -3584,7 +3800,7 @@
                                  ZSTD_parameters params, unsigned long long pledgedSrcSize)
 {
     DEBUGLOG(4, "ZSTD_initCStream_advanced: pledgedSrcSize=%u, flag=%u",
-                (U32)pledgedSrcSize, params.fParams.contentSizeFlag);
+                (unsigned)pledgedSrcSize, params.fParams.contentSizeFlag);
     CHECK_F( ZSTD_checkCParams(params.cParams) );
     if ((pledgedSrcSize==0) && (params.fParams.contentSizeFlag==0)) pledgedSrcSize = ZSTD_CONTENTSIZE_UNKNOWN;  /* for compatibility with older programs relying on this behavior. Users should now specify ZSTD_CONTENTSIZE_UNKNOWN. This line will be removed in the future. */
     zcs->requestedParams = ZSTD_assignParamsToCCtxParams(zcs->requestedParams, params);
@@ -3612,8 +3828,15 @@
 
 /*======   Compression   ======*/
 
-MEM_STATIC size_t ZSTD_limitCopy(void* dst, size_t dstCapacity,
-                           const void* src, size_t srcSize)
+static size_t ZSTD_nextInputSizeHint(const ZSTD_CCtx* cctx)
+{
+    size_t hintInSize = cctx->inBuffTarget - cctx->inBuffPos;
+    if (hintInSize==0) hintInSize = cctx->blockSize;
+    return hintInSize;
+}
+
+static size_t ZSTD_limitCopy(void* dst, size_t dstCapacity,
+                       const void* src, size_t srcSize)
 {
     size_t const length = MIN(dstCapacity, srcSize);
     if (length) memcpy(dst, src, length);
@@ -3621,7 +3844,7 @@
 }
 
 /** ZSTD_compressStream_generic():
- *  internal function for all *compressStream*() variants and *compress_generic()
+ *  internal function for all *compressStream*() variants
  *  non-static, because can be called from zstdmt_compress.c
  * @return : hint size for next input */
 size_t ZSTD_compressStream_generic(ZSTD_CStream* zcs,
@@ -3638,7 +3861,7 @@
     U32 someMoreWork = 1;
 
     /* check expectations */
-    DEBUGLOG(5, "ZSTD_compressStream_generic, flush=%u", (U32)flushMode);
+    DEBUGLOG(5, "ZSTD_compressStream_generic, flush=%u", (unsigned)flushMode);
     assert(zcs->inBuff != NULL);
     assert(zcs->inBuffSize > 0);
     assert(zcs->outBuff !=  NULL);
@@ -3660,12 +3883,12 @@
                 /* shortcut to compression pass directly into output buffer */
                 size_t const cSize = ZSTD_compressEnd(zcs,
                                                 op, oend-op, ip, iend-ip);
-                DEBUGLOG(4, "ZSTD_compressEnd : %u", (U32)cSize);
+                DEBUGLOG(4, "ZSTD_compressEnd : cSize=%u", (unsigned)cSize);
                 if (ZSTD_isError(cSize)) return cSize;
                 ip = iend;
                 op += cSize;
                 zcs->frameEnded = 1;
-                ZSTD_CCtx_reset(zcs);
+                ZSTD_CCtx_reset(zcs, ZSTD_reset_session_only);
                 someMoreWork = 0; break;
             }
             /* complete loading into inBuffer */
@@ -3709,7 +3932,7 @@
                 if (zcs->inBuffTarget > zcs->inBuffSize)
                     zcs->inBuffPos = 0, zcs->inBuffTarget = zcs->blockSize;
                 DEBUGLOG(5, "inBuffTarget:%u / inBuffSize:%u",
-                         (U32)zcs->inBuffTarget, (U32)zcs->inBuffSize);
+                         (unsigned)zcs->inBuffTarget, (unsigned)zcs->inBuffSize);
                 if (!lastBlock)
                     assert(zcs->inBuffTarget <= zcs->inBuffSize);
                 zcs->inToCompress = zcs->inBuffPos;
@@ -3718,7 +3941,7 @@
                     if (zcs->frameEnded) {
                         DEBUGLOG(5, "Frame completed directly in outBuffer");
                         someMoreWork = 0;
-                        ZSTD_CCtx_reset(zcs);
+                        ZSTD_CCtx_reset(zcs, ZSTD_reset_session_only);
                     }
                     break;
                 }
@@ -3733,7 +3956,7 @@
                 size_t const flushed = ZSTD_limitCopy(op, oend-op,
                             zcs->outBuff + zcs->outBuffFlushedSize, toFlush);
                 DEBUGLOG(5, "toFlush: %u into %u ==> flushed: %u",
-                            (U32)toFlush, (U32)(oend-op), (U32)flushed);
+                            (unsigned)toFlush, (unsigned)(oend-op), (unsigned)flushed);
                 op += flushed;
                 zcs->outBuffFlushedSize += flushed;
                 if (toFlush!=flushed) {
@@ -3746,7 +3969,7 @@
                 if (zcs->frameEnded) {
                     DEBUGLOG(5, "Frame completed on flush");
                     someMoreWork = 0;
-                    ZSTD_CCtx_reset(zcs);
+                    ZSTD_CCtx_reset(zcs, ZSTD_reset_session_only);
                     break;
                 }
                 zcs->streamStage = zcss_load;
@@ -3761,28 +3984,34 @@
     input->pos = ip - istart;
     output->pos = op - ostart;
     if (zcs->frameEnded) return 0;
-    {   size_t hintInSize = zcs->inBuffTarget - zcs->inBuffPos;
-        if (hintInSize==0) hintInSize = zcs->blockSize;
-        return hintInSize;
+    return ZSTD_nextInputSizeHint(zcs);
+}
+
+static size_t ZSTD_nextInputSizeHint_MTorST(const ZSTD_CCtx* cctx)
+{
+#ifdef ZSTD_MULTITHREAD
+    if (cctx->appliedParams.nbWorkers >= 1) {
+        assert(cctx->mtctx != NULL);
+        return ZSTDMT_nextInputSizeHint(cctx->mtctx);
     }
+#endif
+    return ZSTD_nextInputSizeHint(cctx);
 }
 
 size_t ZSTD_compressStream(ZSTD_CStream* zcs, ZSTD_outBuffer* output, ZSTD_inBuffer* input)
 {
-    /* check conditions */
-    if (output->pos > output->size) return ERROR(GENERIC);
-    if (input->pos  > input->size)  return ERROR(GENERIC);
-
-    return ZSTD_compressStream_generic(zcs, output, input, ZSTD_e_continue);
+    CHECK_F( ZSTD_compressStream2(zcs, output, input, ZSTD_e_continue) );
+    return ZSTD_nextInputSizeHint_MTorST(zcs);
 }
 
 
-size_t ZSTD_compress_generic (ZSTD_CCtx* cctx,
-                              ZSTD_outBuffer* output,
-                              ZSTD_inBuffer* input,
-                              ZSTD_EndDirective endOp)
+size_t ZSTD_compressStream2( ZSTD_CCtx* cctx,
+                             ZSTD_outBuffer* output,
+                             ZSTD_inBuffer* input,
+                             ZSTD_EndDirective endOp)
 {
-    DEBUGLOG(5, "ZSTD_compress_generic, endOp=%u ", (U32)endOp);
+    DEBUGLOG(5, "ZSTD_compressStream2, endOp=%u ", (unsigned)endOp);
     /* check conditions */
     if (output->pos > output->size) return ERROR(GENERIC);
     if (input->pos  > input->size)  return ERROR(GENERIC);
@@ -3792,9 +4021,9 @@
     if (cctx->streamStage == zcss_init) {
         ZSTD_CCtx_params params = cctx->requestedParams;
         ZSTD_prefixDict const prefixDict = cctx->prefixDict;
-        memset(&cctx->prefixDict, 0, sizeof(cctx->prefixDict));  /* single usage */
-        assert(prefixDict.dict==NULL || cctx->cdict==NULL);   /* only one can be set */
-        DEBUGLOG(4, "ZSTD_compress_generic : transparent init stage");
+        memset(&cctx->prefixDict, 0, sizeof(cctx->prefixDict));   /* single usage */
+        assert(prefixDict.dict==NULL || cctx->cdict==NULL);    /* only one can be set */
+        DEBUGLOG(4, "ZSTD_compressStream2 : transparent init stage");
         if (endOp == ZSTD_e_end) cctx->pledgedSrcSizePlusOne = input->size + 1;  /* auto-fix pledgedSrcSize */
         params.cParams = ZSTD_getCParamsFromCCtxParams(
                 &cctx->requestedParams, cctx->pledgedSrcSizePlusOne-1, 0 /*dictSize*/);
@@ -3807,7 +4036,7 @@
         if (params.nbWorkers > 0) {
             /* mt context creation */
             if (cctx->mtctx == NULL) {
-                DEBUGLOG(4, "ZSTD_compress_generic: creating new mtctx for nbWorkers=%u",
+                DEBUGLOG(4, "ZSTD_compressStream2: creating new mtctx for nbWorkers=%u",
                             params.nbWorkers);
                 cctx->mtctx = ZSTDMT_createCCtx_advanced(params.nbWorkers, cctx->customMem);
                 if (cctx->mtctx == NULL) return ERROR(memory_allocation);
@@ -3829,6 +4058,7 @@
             assert(cctx->streamStage == zcss_load);
             assert(cctx->appliedParams.nbWorkers == 0);
     }   }
+    /* end of transparent initialization stage */
 
     /* compression stage */
 #ifdef ZSTD_MULTITHREAD
@@ -3840,18 +4070,18 @@
         {   size_t const flushMin = ZSTDMT_compressStream_generic(cctx->mtctx, output, input, endOp);
             if ( ZSTD_isError(flushMin)
               || (endOp == ZSTD_e_end && flushMin == 0) ) { /* compression completed */
-                ZSTD_CCtx_reset(cctx);
+                ZSTD_CCtx_reset(cctx, ZSTD_reset_session_only);
             }
-            DEBUGLOG(5, "completed ZSTD_compress_generic delegating to ZSTDMT_compressStream_generic");
+            DEBUGLOG(5, "completed ZSTD_compressStream2 delegating to ZSTDMT_compressStream_generic");
             return flushMin;
     }   }
 #endif
     CHECK_F( ZSTD_compressStream_generic(cctx, output, input, endOp) );
-    DEBUGLOG(5, "completed ZSTD_compress_generic");
+    DEBUGLOG(5, "completed ZSTD_compressStream2");
     return cctx->outBuffContentSize - cctx->outBuffFlushedSize; /* remaining to flush */
 }
 
-size_t ZSTD_compress_generic_simpleArgs (
+size_t ZSTD_compressStream2_simpleArgs (
                             ZSTD_CCtx* cctx,
                             void* dst, size_t dstCapacity, size_t* dstPos,
                       const void* src, size_t srcSize, size_t* srcPos,
@@ -3859,13 +4089,33 @@
 {
     ZSTD_outBuffer output = { dst, dstCapacity, *dstPos };
     ZSTD_inBuffer  input  = { src, srcSize, *srcPos };
-    /* ZSTD_compress_generic() will check validity of dstPos and srcPos */
-    size_t const cErr = ZSTD_compress_generic(cctx, &output, &input, endOp);
+    /* ZSTD_compressStream2() will check validity of dstPos and srcPos */
+    size_t const cErr = ZSTD_compressStream2(cctx, &output, &input, endOp);
     *dstPos = output.pos;
     *srcPos = input.pos;
     return cErr;
 }
 
+size_t ZSTD_compress2(ZSTD_CCtx* cctx,
+                      void* dst, size_t dstCapacity,
+                      const void* src, size_t srcSize)
+{
+    ZSTD_CCtx_reset(cctx, ZSTD_reset_session_only);
+    {   size_t oPos = 0;
+        size_t iPos = 0;
+        size_t const result = ZSTD_compressStream2_simpleArgs(cctx,
+                                        dst, dstCapacity, &oPos,
+                                        src, srcSize, &iPos,
+                                        ZSTD_e_end);
+        if (ZSTD_isError(result)) return result;
+        if (result != 0) {  /* compression not completed, due to lack of output space */
+            assert(oPos == dstCapacity);
+            return ERROR(dstSize_tooSmall);
+        }
+        assert(iPos == srcSize);   /* all input is expected to be consumed */
+        return oPos;
+    }
+}
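ZSTD_compress2() above wraps the streaming core in a one-shot call. A minimal usage sketch, assuming the public zstd v1.4.0 API (buffer contents and names are hypothetical):

    #include <stdio.h>
    #include <stdlib.h>
    #include <zstd.h>

    int main(void)
    {
        const char src[] = "example payload for the one-shot API";
        size_t const srcSize = sizeof(src);
        size_t const dstCapacity = ZSTD_compressBound(srcSize);  /* worst-case output size */
        void* const dst = malloc(dstCapacity);
        ZSTD_CCtx* const cctx = ZSTD_createCCtx();
        if (dst == NULL || cctx == NULL) return 1;

        /* parameters are sticky on the context; ZSTD_compress2() resets only the session */
        ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, 19);

        {   size_t const cSize = ZSTD_compress2(cctx, dst, dstCapacity, src, srcSize);
            if (ZSTD_isError(cSize)) {
                fprintf(stderr, "compression failed: %s\n", ZSTD_getErrorName(cSize));
                return 1;
            }
            printf("compressed %u -> %u bytes\n", (unsigned)srcSize, (unsigned)cSize);
        }
        ZSTD_freeCCtx(cctx);
        free(dst);
        return 0;
    }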
 
 /*======   Finalize   ======*/
 
@@ -3874,21 +4124,21 @@
 size_t ZSTD_flushStream(ZSTD_CStream* zcs, ZSTD_outBuffer* output)
 {
     ZSTD_inBuffer input = { NULL, 0, 0 };
-    if (output->pos > output->size) return ERROR(GENERIC);
-    CHECK_F( ZSTD_compressStream_generic(zcs, output, &input, ZSTD_e_flush) );
-    return zcs->outBuffContentSize - zcs->outBuffFlushedSize;  /* remaining to flush */
+    return ZSTD_compressStream2(zcs, output, &input, ZSTD_e_flush);
 }
 
 
 size_t ZSTD_endStream(ZSTD_CStream* zcs, ZSTD_outBuffer* output)
 {
     ZSTD_inBuffer input = { NULL, 0, 0 };
-    if (output->pos > output->size) return ERROR(GENERIC);
-    CHECK_F( ZSTD_compressStream_generic(zcs, output, &input, ZSTD_e_end) );
+    size_t const remainingToFlush = ZSTD_compressStream2(zcs, output, &input, ZSTD_e_end);
+    CHECK_F( remainingToFlush );
+    if (zcs->appliedParams.nbWorkers > 0) return remainingToFlush;   /* minimal estimation */
+    /* single thread mode : attempt to calculate remaining to flush more precisely */
     {   size_t const lastBlockSize = zcs->frameEnded ? 0 : ZSTD_BLOCKHEADERSIZE;
         size_t const checksumSize = zcs->frameEnded ? 0 : zcs->appliedParams.fParams.checksumFlag * 4;
-        size_t const toFlush = zcs->outBuffContentSize - zcs->outBuffFlushedSize + lastBlockSize + checksumSize;
-        DEBUGLOG(4, "ZSTD_endStream : remaining to flush : %u", (U32)toFlush);
+        size_t const toFlush = remainingToFlush + lastBlockSize + checksumSize;
+        DEBUGLOG(4, "ZSTD_endStream : remaining to flush : %u", (unsigned)toFlush);
         return toFlush;
     }
 }
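Taken together, ZSTD_compressStream(), ZSTD_flushStream() and ZSTD_endStream() are now thin shims over ZSTD_compressStream2(). A sketch of the driving loop they imply, with hypothetical read_chunk()/write_out() helpers standing in for real I/O:

    #include <zstd.h>

    size_t read_chunk(void* buf, size_t capacity);      /* hypothetical: returns 0 on EOF */
    void   write_out(const void* buf, size_t length);   /* hypothetical output sink */

    static int stream_compress(ZSTD_CCtx* cctx)
    {
        char inBuf[1 << 17];
        char outBuf[1 << 17];
        for (;;) {
            size_t const readSize = read_chunk(inBuf, sizeof(inBuf));
            ZSTD_EndDirective const mode = readSize ? ZSTD_e_continue : ZSTD_e_end;
            ZSTD_inBuffer input = { inBuf, readSize, 0 };
            int finished;
            do {
                ZSTD_outBuffer output = { outBuf, sizeof(outBuf), 0 };
                size_t const remaining = ZSTD_compressStream2(cctx, &output, &input, mode);
                if (ZSTD_isError(remaining)) return -1;
                write_out(outBuf, output.pos);
                /* ZSTD_e_end : done when the return value reaches 0 (epilogue flushed);
                 * ZSTD_e_continue : done once this chunk is fully consumed */
                finished = (mode == ZSTD_e_end) ? (remaining == 0)
                                                : (input.pos == input.size);
            } while (!finished);
            if (mode == ZSTD_e_end) return 0;
        }
    }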
@@ -3905,27 +4155,27 @@
     /* W,  C,  H,  S,  L, TL, strat */
     { 19, 12, 13,  1,  6,  1, ZSTD_fast    },  /* base for negative levels */
     { 19, 13, 14,  1,  7,  0, ZSTD_fast    },  /* level  1 */
-    { 19, 15, 16,  1,  6,  0, ZSTD_fast    },  /* level  2 */
-    { 20, 16, 17,  1,  5,  1, ZSTD_dfast   },  /* level  3 */
-    { 20, 18, 18,  1,  5,  1, ZSTD_dfast   },  /* level  4 */
-    { 20, 18, 18,  2,  5,  2, ZSTD_greedy  },  /* level  5 */
-    { 21, 18, 19,  2,  5,  4, ZSTD_lazy    },  /* level  6 */
-    { 21, 18, 19,  3,  5,  8, ZSTD_lazy2   },  /* level  7 */
+    { 20, 15, 16,  1,  6,  0, ZSTD_fast    },  /* level  2 */
+    { 21, 16, 17,  1,  5,  1, ZSTD_dfast   },  /* level  3 */
+    { 21, 18, 18,  1,  5,  1, ZSTD_dfast   },  /* level  4 */
+    { 21, 18, 19,  2,  5,  2, ZSTD_greedy  },  /* level  5 */
+    { 21, 19, 19,  3,  5,  4, ZSTD_greedy  },  /* level  6 */
+    { 21, 19, 19,  3,  5,  8, ZSTD_lazy    },  /* level  7 */
     { 21, 19, 19,  3,  5, 16, ZSTD_lazy2   },  /* level  8 */
     { 21, 19, 20,  4,  5, 16, ZSTD_lazy2   },  /* level  9 */
-    { 21, 20, 21,  4,  5, 16, ZSTD_lazy2   },  /* level 10 */
-    { 21, 21, 22,  4,  5, 16, ZSTD_lazy2   },  /* level 11 */
-    { 22, 20, 22,  5,  5, 16, ZSTD_lazy2   },  /* level 12 */
-    { 22, 21, 22,  4,  5, 32, ZSTD_btlazy2 },  /* level 13 */
-    { 22, 21, 22,  5,  5, 32, ZSTD_btlazy2 },  /* level 14 */
-    { 22, 22, 22,  6,  5, 32, ZSTD_btlazy2 },  /* level 15 */
-    { 22, 21, 22,  4,  5, 48, ZSTD_btopt   },  /* level 16 */
-    { 23, 22, 22,  4,  4, 64, ZSTD_btopt   },  /* level 17 */
-    { 23, 23, 22,  6,  3,256, ZSTD_btopt   },  /* level 18 */
-    { 23, 24, 22,  7,  3,256, ZSTD_btultra },  /* level 19 */
-    { 25, 25, 23,  7,  3,256, ZSTD_btultra },  /* level 20 */
-    { 26, 26, 24,  7,  3,512, ZSTD_btultra },  /* level 21 */
-    { 27, 27, 25,  9,  3,999, ZSTD_btultra },  /* level 22 */
+    { 22, 20, 21,  4,  5, 16, ZSTD_lazy2   },  /* level 10 */
+    { 22, 21, 22,  4,  5, 16, ZSTD_lazy2   },  /* level 11 */
+    { 22, 21, 22,  5,  5, 16, ZSTD_lazy2   },  /* level 12 */
+    { 22, 21, 22,  5,  5, 32, ZSTD_btlazy2 },  /* level 13 */
+    { 22, 22, 23,  5,  5, 32, ZSTD_btlazy2 },  /* level 14 */
+    { 22, 23, 23,  6,  5, 32, ZSTD_btlazy2 },  /* level 15 */
+    { 22, 22, 22,  5,  5, 48, ZSTD_btopt   },  /* level 16 */
+    { 23, 23, 22,  5,  4, 64, ZSTD_btopt   },  /* level 17 */
+    { 23, 23, 22,  6,  3, 64, ZSTD_btultra },  /* level 18 */
+    { 23, 24, 22,  7,  3,256, ZSTD_btultra2},  /* level 19 */
+    { 25, 25, 23,  7,  3,256, ZSTD_btultra2},  /* level 20 */
+    { 26, 26, 24,  7,  3,512, ZSTD_btultra2},  /* level 21 */
+    { 27, 27, 25,  9,  3,999, ZSTD_btultra2},  /* level 22 */
 },
 {   /* for srcSize <= 256 KB */
     /* W,  C,  H,  S,  L,  T, strat */
@@ -3940,18 +4190,18 @@
     { 18, 18, 19,  4,  4,  8, ZSTD_lazy2   },  /* level  8 */
     { 18, 18, 19,  5,  4,  8, ZSTD_lazy2   },  /* level  9 */
     { 18, 18, 19,  6,  4,  8, ZSTD_lazy2   },  /* level 10 */
-    { 18, 18, 19,  5,  4, 16, ZSTD_btlazy2 },  /* level 11.*/
-    { 18, 19, 19,  6,  4, 16, ZSTD_btlazy2 },  /* level 12.*/
-    { 18, 19, 19,  8,  4, 16, ZSTD_btlazy2 },  /* level 13 */
-    { 18, 18, 19,  4,  4, 24, ZSTD_btopt   },  /* level 14.*/
-    { 18, 18, 19,  4,  3, 24, ZSTD_btopt   },  /* level 15.*/
-    { 18, 19, 19,  6,  3, 64, ZSTD_btopt   },  /* level 16.*/
-    { 18, 19, 19,  8,  3,128, ZSTD_btopt   },  /* level 17.*/
-    { 18, 19, 19, 10,  3,256, ZSTD_btopt   },  /* level 18.*/
-    { 18, 19, 19, 10,  3,256, ZSTD_btultra },  /* level 19.*/
-    { 18, 19, 19, 11,  3,512, ZSTD_btultra },  /* level 20.*/
-    { 18, 19, 19, 12,  3,512, ZSTD_btultra },  /* level 21.*/
-    { 18, 19, 19, 13,  3,999, ZSTD_btultra },  /* level 22.*/
+    { 18, 18, 19,  5,  4, 12, ZSTD_btlazy2 },  /* level 11.*/
+    { 18, 19, 19,  7,  4, 12, ZSTD_btlazy2 },  /* level 12.*/
+    { 18, 18, 19,  4,  4, 16, ZSTD_btopt   },  /* level 13 */
+    { 18, 18, 19,  4,  3, 32, ZSTD_btopt   },  /* level 14.*/
+    { 18, 18, 19,  6,  3,128, ZSTD_btopt   },  /* level 15.*/
+    { 18, 19, 19,  6,  3,128, ZSTD_btultra },  /* level 16.*/
+    { 18, 19, 19,  8,  3,256, ZSTD_btultra },  /* level 17.*/
+    { 18, 19, 19,  6,  3,128, ZSTD_btultra2},  /* level 18.*/
+    { 18, 19, 19,  8,  3,256, ZSTD_btultra2},  /* level 19.*/
+    { 18, 19, 19, 10,  3,512, ZSTD_btultra2},  /* level 20.*/
+    { 18, 19, 19, 12,  3,512, ZSTD_btultra2},  /* level 21.*/
+    { 18, 19, 19, 13,  3,999, ZSTD_btultra2},  /* level 22.*/
 },
 {   /* for srcSize <= 128 KB */
     /* W,  C,  H,  S,  L,  T, strat */
@@ -3966,26 +4216,26 @@
     { 17, 17, 17,  4,  4,  8, ZSTD_lazy2   },  /* level  8 */
     { 17, 17, 17,  5,  4,  8, ZSTD_lazy2   },  /* level  9 */
     { 17, 17, 17,  6,  4,  8, ZSTD_lazy2   },  /* level 10 */
-    { 17, 17, 17,  7,  4,  8, ZSTD_lazy2   },  /* level 11 */
-    { 17, 18, 17,  6,  4, 16, ZSTD_btlazy2 },  /* level 12 */
-    { 17, 18, 17,  8,  4, 16, ZSTD_btlazy2 },  /* level 13.*/
-    { 17, 18, 17,  4,  4, 32, ZSTD_btopt   },  /* level 14.*/
-    { 17, 18, 17,  6,  3, 64, ZSTD_btopt   },  /* level 15.*/
-    { 17, 18, 17,  7,  3,128, ZSTD_btopt   },  /* level 16.*/
-    { 17, 18, 17,  7,  3,256, ZSTD_btopt   },  /* level 17.*/
-    { 17, 18, 17,  8,  3,256, ZSTD_btopt   },  /* level 18.*/
-    { 17, 18, 17,  8,  3,256, ZSTD_btultra },  /* level 19.*/
-    { 17, 18, 17,  9,  3,256, ZSTD_btultra },  /* level 20.*/
-    { 17, 18, 17, 10,  3,256, ZSTD_btultra },  /* level 21.*/
-    { 17, 18, 17, 11,  3,512, ZSTD_btultra },  /* level 22.*/
+    { 17, 17, 17,  5,  4,  8, ZSTD_btlazy2 },  /* level 11 */
+    { 17, 18, 17,  7,  4, 12, ZSTD_btlazy2 },  /* level 12 */
+    { 17, 18, 17,  3,  4, 12, ZSTD_btopt   },  /* level 13.*/
+    { 17, 18, 17,  4,  3, 32, ZSTD_btopt   },  /* level 14.*/
+    { 17, 18, 17,  6,  3,256, ZSTD_btopt   },  /* level 15.*/
+    { 17, 18, 17,  6,  3,128, ZSTD_btultra },  /* level 16.*/
+    { 17, 18, 17,  8,  3,256, ZSTD_btultra },  /* level 17.*/
+    { 17, 18, 17, 10,  3,512, ZSTD_btultra },  /* level 18.*/
+    { 17, 18, 17,  5,  3,256, ZSTD_btultra2},  /* level 19.*/
+    { 17, 18, 17,  7,  3,512, ZSTD_btultra2},  /* level 20.*/
+    { 17, 18, 17,  9,  3,512, ZSTD_btultra2},  /* level 21.*/
+    { 17, 18, 17, 11,  3,999, ZSTD_btultra2},  /* level 22.*/
 },
 {   /* for srcSize <= 16 KB */
     /* W,  C,  H,  S,  L,  T, strat */
     { 14, 12, 13,  1,  5,  1, ZSTD_fast    },  /* base for negative levels */
     { 14, 14, 15,  1,  5,  0, ZSTD_fast    },  /* level  1 */
     { 14, 14, 15,  1,  4,  0, ZSTD_fast    },  /* level  2 */
-    { 14, 14, 14,  2,  4,  1, ZSTD_dfast   },  /* level  3.*/
-    { 14, 14, 14,  4,  4,  2, ZSTD_greedy  },  /* level  4.*/
+    { 14, 14, 15,  2,  4,  1, ZSTD_dfast   },  /* level  3 */
+    { 14, 14, 14,  4,  4,  2, ZSTD_greedy  },  /* level  4 */
     { 14, 14, 14,  3,  4,  4, ZSTD_lazy    },  /* level  5.*/
     { 14, 14, 14,  4,  4,  8, ZSTD_lazy2   },  /* level  6 */
     { 14, 14, 14,  6,  4,  8, ZSTD_lazy2   },  /* level  7 */
@@ -3993,17 +4243,17 @@
     { 14, 15, 14,  5,  4,  8, ZSTD_btlazy2 },  /* level  9.*/
     { 14, 15, 14,  9,  4,  8, ZSTD_btlazy2 },  /* level 10.*/
     { 14, 15, 14,  3,  4, 12, ZSTD_btopt   },  /* level 11.*/
-    { 14, 15, 14,  6,  3, 16, ZSTD_btopt   },  /* level 12.*/
-    { 14, 15, 14,  6,  3, 24, ZSTD_btopt   },  /* level 13.*/
-    { 14, 15, 15,  6,  3, 48, ZSTD_btopt   },  /* level 14.*/
-    { 14, 15, 15,  6,  3, 64, ZSTD_btopt   },  /* level 15.*/
-    { 14, 15, 15,  6,  3, 96, ZSTD_btopt   },  /* level 16.*/
-    { 14, 15, 15,  6,  3,128, ZSTD_btopt   },  /* level 17.*/
-    { 14, 15, 15,  8,  3,256, ZSTD_btopt   },  /* level 18.*/
-    { 14, 15, 15,  6,  3,256, ZSTD_btultra },  /* level 19.*/
-    { 14, 15, 15,  8,  3,256, ZSTD_btultra },  /* level 20.*/
-    { 14, 15, 15,  9,  3,256, ZSTD_btultra },  /* level 21.*/
-    { 14, 15, 15, 10,  3,512, ZSTD_btultra },  /* level 22.*/
+    { 14, 15, 14,  4,  3, 24, ZSTD_btopt   },  /* level 12.*/
+    { 14, 15, 14,  5,  3, 32, ZSTD_btultra },  /* level 13.*/
+    { 14, 15, 15,  6,  3, 64, ZSTD_btultra },  /* level 14.*/
+    { 14, 15, 15,  7,  3,256, ZSTD_btultra },  /* level 15.*/
+    { 14, 15, 15,  5,  3, 48, ZSTD_btultra2},  /* level 16.*/
+    { 14, 15, 15,  6,  3,128, ZSTD_btultra2},  /* level 17.*/
+    { 14, 15, 15,  7,  3,256, ZSTD_btultra2},  /* level 18.*/
+    { 14, 15, 15,  8,  3,256, ZSTD_btultra2},  /* level 19.*/
+    { 14, 15, 15,  8,  3,512, ZSTD_btultra2},  /* level 20.*/
+    { 14, 15, 15,  9,  3,512, ZSTD_btultra2},  /* level 21.*/
+    { 14, 15, 15, 10,  3,999, ZSTD_btultra2},  /* level 22.*/
 },
 };
 
@@ -4022,8 +4272,8 @@
     if (compressionLevel > ZSTD_MAX_CLEVEL) row = ZSTD_MAX_CLEVEL;
     {   ZSTD_compressionParameters cp = ZSTD_defaultCParameters[tableID][row];
         if (compressionLevel < 0) cp.targetLength = (unsigned)(-compressionLevel);   /* acceleration factor */
-        return ZSTD_adjustCParams_internal(cp, srcSizeHint, dictSize); }
-
+        return ZSTD_adjustCParams_internal(cp, srcSizeHint, dictSize);
+    }
 }
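A hedged worked example of the selection above, with the row values read from the "for srcSize <= 128 KB" table (the tableID derivation happens earlier in this function, outside the hunk, as in upstream zstd):

    /* assumed inputs: compressionLevel = 8, srcSizeHint = 100000, dictSize = 0
     *   tableID = (100000 <= 256 KB) + (100000 <= 128 KB) + (100000 <= 16 KB)
     *           = 1 + 1 + 0 = 2              -> "for srcSize <= 128 KB" table
     *   row     = 8                          -> { 17, 17, 17,  4,  4,  8, ZSTD_lazy2 }
     * For a negative level such as -3, the "base for negative levels" row is used
     * and cp.targetLength is overridden to 3, acting as an acceleration factor. */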
 
 /*! ZSTD_getParams() :
--- a/contrib/python-zstandard/zstd/compress/zstd_compress_internal.h	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/compress/zstd_compress_internal.h	Wed Apr 17 13:41:18 2019 -0400
@@ -48,12 +48,6 @@
 typedef enum { ZSTDcs_created=0, ZSTDcs_init, ZSTDcs_ongoing, ZSTDcs_ending } ZSTD_compressionStage_e;
 typedef enum { zcss_init=0, zcss_load, zcss_flush } ZSTD_cStreamStage;
 
-typedef enum {
-    ZSTD_dictDefaultAttach = 0,
-    ZSTD_dictForceAttach = 1,
-    ZSTD_dictForceCopy = -1,
-} ZSTD_dictAttachPref_e;
-
 typedef struct ZSTD_prefixDict_s {
     const void* dict;
     size_t dictSize;
@@ -96,10 +90,10 @@
 
 typedef struct {
     /* All tables are allocated inside cctx->workspace by ZSTD_resetCCtx_internal() */
-    U32* litFreq;                /* table of literals statistics, of size 256 */
-    U32* litLengthFreq;          /* table of litLength statistics, of size (MaxLL+1) */
-    U32* matchLengthFreq;        /* table of matchLength statistics, of size (MaxML+1) */
-    U32* offCodeFreq;            /* table of offCode statistics, of size (MaxOff+1) */
+    unsigned* litFreq;           /* table of literals statistics, of size 256 */
+    unsigned* litLengthFreq;     /* table of litLength statistics, of size (MaxLL+1) */
+    unsigned* matchLengthFreq;   /* table of matchLength statistics, of size (MaxML+1) */
+    unsigned* offCodeFreq;       /* table of offCode statistics, of size (MaxOff+1) */
     ZSTD_match_t* matchTable;    /* list of found matches, of size ZSTD_OPT_NUM+1 */
     ZSTD_optimal_t* priceTable;  /* All positions tracked by optimal parser, of size ZSTD_OPT_NUM+1 */
 
@@ -139,7 +133,7 @@
     U32* hashTable3;
     U32* chainTable;
     optState_t opt;         /* optimal parser state */
-    const ZSTD_matchState_t *dictMatchState;
+    const ZSTD_matchState_t * dictMatchState;
     ZSTD_compressionParameters cParams;
 };
 
@@ -167,7 +161,7 @@
     U32 hashLog;            /* Log size of hashTable */
     U32 bucketSizeLog;      /* Log bucket size for collision resolution, at most 8 */
     U32 minMatchLength;     /* Minimum match length */
-    U32 hashEveryLog;       /* Log number of entries to skip */
+    U32 hashRateLog;        /* Log number of entries to skip */
     U32 windowLog;          /* Window log for the LDM */
 } ldmParams_t;
 
@@ -196,9 +190,10 @@
     ZSTD_dictAttachPref_e attachDictPref;
 
     /* Multithreading: used to pass parameters to mtctx */
-    unsigned nbWorkers;
-    unsigned jobSize;
-    unsigned overlapSizeLog;
+    int nbWorkers;
+    size_t jobSize;
+    int overlapLog;
+    int rsyncable;
 
     /* Long distance matching parameters */
     ldmParams_t ldmParams;
@@ -498,6 +493,64 @@
     }
 }
 
+/** ZSTD_ipow() :
+ * Return base^exponent.
+ */
+static U64 ZSTD_ipow(U64 base, U64 exponent)
+{
+    U64 power = 1;
+    while (exponent) {
+      if (exponent & 1) power *= base;
+      exponent >>= 1;
+      base *= base;
+    }
+    return power;
+}
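ZSTD_ipow() is square-and-multiply exponentiation in U64 (i.e. mod 2^64), costing O(log exponent) multiplies. A short trace with assumed inputs base = 3, exponent = 5 (binary 101):

    /* iteration 1: exponent bit0 = 1 -> power = 1 * 3  = 3;   base = 9;    exponent = 2
     * iteration 2: exponent bit0 = 0 -> power unchanged (3);  base = 81;   exponent = 1
     * iteration 3: exponent bit0 = 1 -> power = 3 * 81 = 243; base = 6561; exponent = 0
     * result: 243 = 3^5 */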
+
+#define ZSTD_ROLL_HASH_CHAR_OFFSET 10
+
+/** ZSTD_rollingHash_append() :
+ * Add the buffer to the hash value.
+ */
+static U64 ZSTD_rollingHash_append(U64 hash, void const* buf, size_t size)
+{
+    BYTE const* istart = (BYTE const*)buf;
+    size_t pos;
+    for (pos = 0; pos < size; ++pos) {
+        hash *= prime8bytes;
+        hash += istart[pos] + ZSTD_ROLL_HASH_CHAR_OFFSET;
+    }
+    return hash;
+}
+
+/** ZSTD_rollingHash_compute() :
+ * Compute the rolling hash value of the buffer.
+ */
+MEM_STATIC U64 ZSTD_rollingHash_compute(void const* buf, size_t size)
+{
+    return ZSTD_rollingHash_append(0, buf, size);
+}
+
+/** ZSTD_rollingHash_primePower() :
+ * Compute the primePower to be passed to ZSTD_rollingHash_rotate() for a hash
+ * over a window of 'length' bytes.
+ */
+MEM_STATIC U64 ZSTD_rollingHash_primePower(U32 length)
+{
+    return ZSTD_ipow(prime8bytes, length - 1);
+}
+
+/** ZSTD_rollingHash_rotate() :
+ * Rotate the rolling hash by one byte.
+ */
+MEM_STATIC U64 ZSTD_rollingHash_rotate(U64 hash, BYTE toRemove, BYTE toAdd, U64 primePower)
+{
+    hash -= (toRemove + ZSTD_ROLL_HASH_CHAR_OFFSET) * primePower;
+    hash *= prime8bytes;
+    hash += toAdd + ZSTD_ROLL_HASH_CHAR_OFFSET;
+    return hash;
+}
+
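These helpers form a Rabin-Karp style polynomial rolling hash over U64 (arithmetic mod 2^64). A sketch of the invariant they are built around, usable as a self-check when compiled in the same translation unit (the function name is illustrative):

    #include <assert.h>

    /* For a window of n bytes (buf must hold at least n+1 bytes), rotating out
     * buf[0] and in buf[n] must yield the hash of the shifted window buf[1..n]. */
    static void ZSTD_rollingHash_selfTest(const BYTE* buf, size_t n)
    {
        U64 const power = ZSTD_rollingHash_primePower((U32)n);
        U64 const h0 = ZSTD_rollingHash_compute(buf, n);        /* hash of buf[0..n-1] */
        U64 const h1 = ZSTD_rollingHash_rotate(h0, buf[0], buf[n], power);
        assert(h1 == ZSTD_rollingHash_compute(buf + 1, n));     /* hash of buf[1..n]   */
        (void)h1;
    }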
 /*-*************************************
 *  Round buffer management
 ***************************************/
@@ -626,20 +679,23 @@
  * dictMatchState mode, lowLimit and dictLimit are the same, and the dictionary
  * is below them. forceWindow and dictMatchState are therefore incompatible.
  */
-MEM_STATIC void ZSTD_window_enforceMaxDist(ZSTD_window_t* window,
-                                           void const* srcEnd, U32 maxDist,
-                                           U32* loadedDictEndPtr,
-                                           const ZSTD_matchState_t** dictMatchStatePtr)
+MEM_STATIC void
+ZSTD_window_enforceMaxDist(ZSTD_window_t* window,
+                           void const* srcEnd,
+                           U32 maxDist,
+                           U32* loadedDictEndPtr,
+                     const ZSTD_matchState_t** dictMatchStatePtr)
 {
-    U32 const current = (U32)((BYTE const*)srcEnd - window->base);
-    U32 loadedDictEnd = loadedDictEndPtr != NULL ? *loadedDictEndPtr : 0;
-    DEBUGLOG(5, "ZSTD_window_enforceMaxDist: current=%u, maxDist=%u", current, maxDist);
-    if (current > maxDist + loadedDictEnd) {
-        U32 const newLowLimit = current - maxDist;
+    U32 const blockEndIdx = (U32)((BYTE const*)srcEnd - window->base);
+    U32 loadedDictEnd = (loadedDictEndPtr != NULL) ? *loadedDictEndPtr : 0;
+    DEBUGLOG(5, "ZSTD_window_enforceMaxDist: blockEndIdx=%u, maxDist=%u",
+                (unsigned)blockEndIdx, (unsigned)maxDist);
+    if (blockEndIdx > maxDist + loadedDictEnd) {
+        U32 const newLowLimit = blockEndIdx - maxDist;
         if (window->lowLimit < newLowLimit) window->lowLimit = newLowLimit;
         if (window->dictLimit < window->lowLimit) {
             DEBUGLOG(5, "Update dictLimit to match lowLimit, from %u to %u",
-                        window->dictLimit, window->lowLimit);
+                        (unsigned)window->dictLimit, (unsigned)window->lowLimit);
             window->dictLimit = window->lowLimit;
         }
         if (loadedDictEndPtr)
@@ -690,20 +746,23 @@
 
 
 /* debug functions */
+#if (DEBUGLEVEL>=2)
 
 MEM_STATIC double ZSTD_fWeight(U32 rawStat)
 {
     U32 const fp_accuracy = 8;
     U32 const fp_multiplier = (1 << fp_accuracy);
-    U32 const stat = rawStat + 1;
-    U32 const hb = ZSTD_highbit32(stat);
+    U32 const newStat = rawStat + 1;
+    U32 const hb = ZSTD_highbit32(newStat);
     U32 const BWeight = hb * fp_multiplier;
-    U32 const FWeight = (stat << fp_accuracy) >> hb;
+    U32 const FWeight = (newStat << fp_accuracy) >> hb;
     U32 const weight = BWeight + FWeight;
     assert(hb + fp_accuracy < 31);
     return (double)weight / fp_multiplier;
 }
 
+/* display a table content,
+ * listing each element, its frequency, and its predicted bit cost */
 MEM_STATIC void ZSTD_debugTable(const U32* table, U32 max)
 {
     unsigned u, sum;
@@ -715,6 +774,9 @@
     }
 }
 
+#endif
+
+
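ZSTD_fWeight() returns roughly log2(rawStat+1) + 1 in fixed point: the high bit supplies the integer part and the shifted statistic a piecewise-linear fraction. A worked example:

    /* worked example: rawStat = 15
     *   newStat = 16,  hb = ZSTD_highbit32(16) = 4
     *   BWeight = 4 * 256        = 1024
     *   FWeight = (16 << 8) >> 4 = 256
     *   weight  = 1280  ->  1280 / 256 = 5.0
     * i.e. approximately log2(16) + 1 bits. */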
 #if defined (__cplusplus)
 }
 #endif
--- a/contrib/python-zstandard/zstd/compress/zstd_double_fast.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/compress/zstd_double_fast.c	Wed Apr 17 13:41:18 2019 -0400
@@ -18,7 +18,7 @@
     const ZSTD_compressionParameters* const cParams = &ms->cParams;
     U32* const hashLarge = ms->hashTable;
     U32  const hBitsL = cParams->hashLog;
-    U32  const mls = cParams->searchLength;
+    U32  const mls = cParams->minMatch;
     U32* const hashSmall = ms->chainTable;
     U32  const hBitsS = cParams->chainLog;
     const BYTE* const base = ms->window.base;
@@ -309,7 +309,7 @@
         ZSTD_matchState_t* ms, seqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
         void const* src, size_t srcSize)
 {
-    const U32 mls = ms->cParams.searchLength;
+    const U32 mls = ms->cParams.minMatch;
     switch(mls)
     {
     default: /* includes case 3 */
@@ -329,7 +329,7 @@
         ZSTD_matchState_t* ms, seqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
         void const* src, size_t srcSize)
 {
-    const U32 mls = ms->cParams.searchLength;
+    const U32 mls = ms->cParams.minMatch;
     switch(mls)
     {
     default: /* includes case 3 */
@@ -483,7 +483,7 @@
         ZSTD_matchState_t* ms, seqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
         void const* src, size_t srcSize)
 {
-    U32 const mls = ms->cParams.searchLength;
+    U32 const mls = ms->cParams.minMatch;
     switch(mls)
     {
     default: /* includes case 3 */
--- a/contrib/python-zstandard/zstd/compress/zstd_fast.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/compress/zstd_fast.c	Wed Apr 17 13:41:18 2019 -0400
@@ -18,7 +18,7 @@
     const ZSTD_compressionParameters* const cParams = &ms->cParams;
     U32* const hashTable = ms->hashTable;
     U32  const hBits = cParams->hashLog;
-    U32  const mls = cParams->searchLength;
+    U32  const mls = cParams->minMatch;
     const BYTE* const base = ms->window.base;
     const BYTE* ip = base + ms->nextToUpdate;
     const BYTE* const iend = ((const BYTE*)end) - HASH_READ_SIZE;
@@ -27,18 +27,18 @@
     /* Always insert every fastHashFillStep position into the hash table.
      * Insert the other positions if their hash entry is empty.
      */
-    for (; ip + fastHashFillStep - 1 <= iend; ip += fastHashFillStep) {
+    for ( ; ip + fastHashFillStep < iend + 2; ip += fastHashFillStep) {
         U32 const current = (U32)(ip - base);
-        U32 i;
-        for (i = 0; i < fastHashFillStep; ++i) {
-            size_t const hash = ZSTD_hashPtr(ip + i, hBits, mls);
-            if (i == 0 || hashTable[hash] == 0)
-                hashTable[hash] = current + i;
-            /* Only load extra positions for ZSTD_dtlm_full */
-            if (dtlm == ZSTD_dtlm_fast)
-                break;
-        }
-    }
+        size_t const hash0 = ZSTD_hashPtr(ip, hBits, mls);
+        hashTable[hash0] = current;
+        if (dtlm == ZSTD_dtlm_fast) continue;
+        /* Only load extra positions for ZSTD_dtlm_full */
+        {   U32 p;
+            for (p = 1; p < fastHashFillStep; ++p) {
+                size_t const hash = ZSTD_hashPtr(ip + p, hBits, mls);
+                if (hashTable[hash] == 0) {  /* not yet filled */
+                    hashTable[hash] = current + p;
+    }   }   }   }
 }
 
 FORCE_INLINE_TEMPLATE
@@ -235,7 +235,7 @@
         void const* src, size_t srcSize)
 {
     ZSTD_compressionParameters const* cParams = &ms->cParams;
-    U32 const mls = cParams->searchLength;
+    U32 const mls = cParams->minMatch;
     assert(ms->dictMatchState == NULL);
     switch(mls)
     {
@@ -256,7 +256,7 @@
         void const* src, size_t srcSize)
 {
     ZSTD_compressionParameters const* cParams = &ms->cParams;
-    U32 const mls = cParams->searchLength;
+    U32 const mls = cParams->minMatch;
     assert(ms->dictMatchState != NULL);
     switch(mls)
     {
@@ -375,7 +375,7 @@
         void const* src, size_t srcSize)
 {
     ZSTD_compressionParameters const* cParams = &ms->cParams;
-    U32 const mls = cParams->searchLength;
+    U32 const mls = cParams->minMatch;
     switch(mls)
     {
     default: /* includes case 3 */
--- a/contrib/python-zstandard/zstd/compress/zstd_lazy.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/compress/zstd_lazy.c	Wed Apr 17 13:41:18 2019 -0400
@@ -63,12 +63,13 @@
 static void
 ZSTD_insertDUBT1(ZSTD_matchState_t* ms,
                  U32 current, const BYTE* inputEnd,
-                 U32 nbCompares, U32 btLow, const ZSTD_dictMode_e dictMode)
+                 U32 nbCompares, U32 btLow,
+                 const ZSTD_dictMode_e dictMode)
 {
     const ZSTD_compressionParameters* const cParams = &ms->cParams;
-    U32*   const bt = ms->chainTable;
-    U32    const btLog  = cParams->chainLog - 1;
-    U32    const btMask = (1 << btLog) - 1;
+    U32* const bt = ms->chainTable;
+    U32  const btLog  = cParams->chainLog - 1;
+    U32  const btMask = (1 << btLog) - 1;
     size_t commonLengthSmaller=0, commonLengthLarger=0;
     const BYTE* const base = ms->window.base;
     const BYTE* const dictBase = ms->window.dictBase;
@@ -80,7 +81,7 @@
     const BYTE* match;
     U32* smallerPtr = bt + 2*(current&btMask);
     U32* largerPtr  = smallerPtr + 1;
-    U32 matchIndex = *smallerPtr;
+    U32 matchIndex = *smallerPtr;   /* this candidate is unsorted : next sorted candidate is reached through *smallerPtr, while *largerPtr contains previous unsorted candidate (which is already saved and can be overwritten) */
     U32 dummy32;   /* to be nullified at the end */
     U32 const windowLow = ms->window.lowLimit;
 
@@ -93,6 +94,9 @@
         U32* const nextPtr = bt + 2*(matchIndex & btMask);
         size_t matchLength = MIN(commonLengthSmaller, commonLengthLarger);   /* guaranteed minimum nb of common bytes */
         assert(matchIndex < current);
+        /* note : all candidates are now supposed to be sorted,
+         * but it's still possible to have nextPtr[1] == ZSTD_DUBT_UNSORTED_MARK
+         * when a real index has the same value as ZSTD_DUBT_UNSORTED_MARK */
 
         if ( (dictMode != ZSTD_extDict)
           || (matchIndex+matchLength >= dictLimit)  /* both in current segment*/
@@ -108,7 +112,7 @@
             match = dictBase + matchIndex;
             matchLength += ZSTD_count_2segments(ip+matchLength, match+matchLength, iend, dictEnd, prefixStart);
             if (matchIndex+matchLength >= dictLimit)
-                match = base + matchIndex;   /* to prepare for next usage of match[matchLength] */
+                match = base + matchIndex;   /* preparation for next read of match[matchLength] */
         }
 
         DEBUGLOG(8, "ZSTD_insertDUBT1: comparing %u with %u : found %u common bytes ",
@@ -147,6 +151,7 @@
         ZSTD_matchState_t* ms,
         const BYTE* const ip, const BYTE* const iend,
         size_t* offsetPtr,
+        size_t bestLength,
         U32 nbCompares,
         U32 const mls,
         const ZSTD_dictMode_e dictMode)
@@ -172,8 +177,7 @@
     U32         const btMask = (1 << btLog) - 1;
     U32         const btLow = (btMask >= dictHighLimit - dictLowLimit) ? dictLowLimit : dictHighLimit - btMask;
 
-    size_t commonLengthSmaller=0, commonLengthLarger=0, bestLength=0;
-    U32 matchEndIdx = current+8+1;
+    size_t commonLengthSmaller=0, commonLengthLarger=0;
 
     (void)dictMode;
     assert(dictMode == ZSTD_dictMatchState);
@@ -188,10 +192,8 @@
 
         if (matchLength > bestLength) {
             U32 matchIndex = dictMatchIndex + dictIndexDelta;
-            if (matchLength > matchEndIdx - matchIndex)
-                matchEndIdx = matchIndex + (U32)matchLength;
             if ( (4*(int)(matchLength-bestLength)) > (int)(ZSTD_highbit32(current-matchIndex+1) - ZSTD_highbit32((U32)offsetPtr[0]+1)) ) {
-                DEBUGLOG(2, "ZSTD_DUBT_findBestDictMatch(%u) : found better match length %u -> %u and offsetCode %u -> %u (dictMatchIndex %u, matchIndex %u)",
+                DEBUGLOG(9, "ZSTD_DUBT_findBetterDictMatch(%u) : found better match length %u -> %u and offsetCode %u -> %u (dictMatchIndex %u, matchIndex %u)",
                     current, (U32)bestLength, (U32)matchLength, (U32)*offsetPtr, ZSTD_REP_MOVE + current - matchIndex, dictMatchIndex, matchIndex);
                 bestLength = matchLength, *offsetPtr = ZSTD_REP_MOVE + current - matchIndex;
             }
@@ -200,7 +202,6 @@
             }
         }
 
-        DEBUGLOG(2, "matchLength:%6zu, match:%p, prefixStart:%p, ip:%p", matchLength, match, prefixStart, ip);
         if (match[matchLength] < ip[matchLength]) {
             if (dictMatchIndex <= btLow) { break; }   /* beyond tree size, stop the search */
             commonLengthSmaller = matchLength;    /* all smaller will now have at least this guaranteed common length */
@@ -215,7 +216,7 @@
 
     if (bestLength >= MINMATCH) {
         U32 const mIndex = current - ((U32)*offsetPtr - ZSTD_REP_MOVE); (void)mIndex;
-        DEBUGLOG(2, "ZSTD_DUBT_findBestDictMatch(%u) : found match of length %u and offsetCode %u (pos %u)",
+        DEBUGLOG(8, "ZSTD_DUBT_findBetterDictMatch(%u) : found match of length %u and offsetCode %u (pos %u)",
                     current, (U32)bestLength, (U32)*offsetPtr, mIndex);
     }
     return bestLength;
@@ -261,7 +262,7 @@
          && (nbCandidates > 1) ) {
         DEBUGLOG(8, "ZSTD_DUBT_findBestMatch: candidate %u is unsorted",
                     matchIndex);
-        *unsortedMark = previousCandidate;
+        *unsortedMark = previousCandidate;  /* the unsortedMark becomes a reversed chain, used to walk back up to the original position */
         previousCandidate = matchIndex;
         matchIndex = *nextCandidate;
         nextCandidate = bt + 2*(matchIndex&btMask);
@@ -269,11 +270,13 @@
         nbCandidates --;
     }
 
+    /* nullify last candidate if it's still unsorted
+     * (a simplification: detrimental to compression ratio, beneficial for speed) */
     if ( (matchIndex > unsortLimit)
       && (*unsortedMark==ZSTD_DUBT_UNSORTED_MARK) ) {
         DEBUGLOG(7, "ZSTD_DUBT_findBestMatch: nullify last unsorted candidate %u",
                     matchIndex);
-        *nextCandidate = *unsortedMark = 0;   /* nullify next candidate if it's still unsorted (note : simplification, detrimental to compression ratio, beneficial for speed) */
+        *nextCandidate = *unsortedMark = 0;
     }
 
     /* batch sort stacked candidates */
@@ -288,14 +291,14 @@
     }
 
     /* find longest match */
-    {   size_t commonLengthSmaller=0, commonLengthLarger=0;
+    {   size_t commonLengthSmaller = 0, commonLengthLarger = 0;
         const BYTE* const dictBase = ms->window.dictBase;
         const U32 dictLimit = ms->window.dictLimit;
         const BYTE* const dictEnd = dictBase + dictLimit;
         const BYTE* const prefixStart = base + dictLimit;
         U32* smallerPtr = bt + 2*(current&btMask);
         U32* largerPtr  = bt + 2*(current&btMask) + 1;
-        U32 matchEndIdx = current+8+1;
+        U32 matchEndIdx = current + 8 + 1;
         U32 dummy32;   /* to be nullified at the end */
         size_t bestLength = 0;
 
@@ -323,6 +326,11 @@
                 if ( (4*(int)(matchLength-bestLength)) > (int)(ZSTD_highbit32(current-matchIndex+1) - ZSTD_highbit32((U32)offsetPtr[0]+1)) )
                     bestLength = matchLength, *offsetPtr = ZSTD_REP_MOVE + current - matchIndex;
                 if (ip+matchLength == iend) {   /* equal : no way to know if inf or sup */
+                    if (dictMode == ZSTD_dictMatchState) {
+                        nbCompares = 0; /* in addition to avoiding checking any
+                                         * further in this loop, make sure we
+                                         * skip checking in the dictionary. */
+                    }
                     break;   /* drop, to guarantee consistency (miss a little bit of compression) */
                 }
             }
@@ -346,7 +354,10 @@
         *smallerPtr = *largerPtr = 0;
 
         if (dictMode == ZSTD_dictMatchState && nbCompares) {
-            bestLength = ZSTD_DUBT_findBetterDictMatch(ms, ip, iend, offsetPtr, nbCompares, mls, dictMode);
+            bestLength = ZSTD_DUBT_findBetterDictMatch(
+                    ms, ip, iend,
+                    offsetPtr, bestLength, nbCompares,
+                    mls, dictMode);
         }
 
         assert(matchEndIdx > current+8); /* ensure nextToUpdate is increased */
@@ -381,7 +392,7 @@
                             const BYTE* ip, const BYTE* const iLimit,
                                   size_t* offsetPtr)
 {
-    switch(ms->cParams.searchLength)
+    switch(ms->cParams.minMatch)
     {
     default : /* includes case 3 */
     case 4 : return ZSTD_BtFindBestMatch(ms, ip, iLimit, offsetPtr, 4, ZSTD_noDict);
@@ -397,7 +408,7 @@
                         const BYTE* ip, const BYTE* const iLimit,
                         size_t* offsetPtr)
 {
-    switch(ms->cParams.searchLength)
+    switch(ms->cParams.minMatch)
     {
     default : /* includes case 3 */
     case 4 : return ZSTD_BtFindBestMatch(ms, ip, iLimit, offsetPtr, 4, ZSTD_dictMatchState);
@@ -413,7 +424,7 @@
                         const BYTE* ip, const BYTE* const iLimit,
                         size_t* offsetPtr)
 {
-    switch(ms->cParams.searchLength)
+    switch(ms->cParams.minMatch)
     {
     default : /* includes case 3 */
     case 4 : return ZSTD_BtFindBestMatch(ms, ip, iLimit, offsetPtr, 4, ZSTD_extDict);
@@ -428,7 +439,7 @@
 /* *********************************
 *  Hash Chain
 ***********************************/
-#define NEXT_IN_CHAIN(d, mask)   chainTable[(d) & mask]
+#define NEXT_IN_CHAIN(d, mask)   chainTable[(d) & (mask)]
 
 /* Update chains up to ip (excluded)
    Assumption : always within prefix (i.e. not within extDict) */
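The added parentheses around mask in NEXT_IN_CHAIN are the usual macro-hygiene fix: `&` binds more tightly than `|` and `^`, so a composite argument would otherwise regroup. A hypothetical caller illustrates:

    /* hypothetical: NEXT_IN_CHAIN(d, baseMask | highBit)
     * old expansion: chainTable[(d) & baseMask | highBit]
     *                parses as chainTable[((d) & baseMask) | highBit]  -- wrong slot
     * new expansion: chainTable[(d) & (baseMask | highBit)]            -- intended  */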
@@ -458,7 +469,7 @@
 
 U32 ZSTD_insertAndFindFirstIndex(ZSTD_matchState_t* ms, const BYTE* ip) {
     const ZSTD_compressionParameters* const cParams = &ms->cParams;
-    return ZSTD_insertAndFindFirstIndex_internal(ms, cParams, ip, ms->cParams.searchLength);
+    return ZSTD_insertAndFindFirstIndex_internal(ms, cParams, ip, ms->cParams.minMatch);
 }
 
 
@@ -492,6 +503,7 @@
         size_t currentMl=0;
         if ((dictMode != ZSTD_extDict) || matchIndex >= dictLimit) {
             const BYTE* const match = base + matchIndex;
+            assert(matchIndex >= dictLimit);   /* ensures this is true if dictMode != ZSTD_extDict */
             if (match[ml] == ip[ml])   /* potentially better */
                 currentMl = ZSTD_count(ip, match, iLimit);
         } else {
@@ -554,7 +566,7 @@
                         const BYTE* ip, const BYTE* const iLimit,
                         size_t* offsetPtr)
 {
-    switch(ms->cParams.searchLength)
+    switch(ms->cParams.minMatch)
     {
     default : /* includes case 3 */
     case 4 : return ZSTD_HcFindBestMatch_generic(ms, ip, iLimit, offsetPtr, 4, ZSTD_noDict);
@@ -570,7 +582,7 @@
                         const BYTE* ip, const BYTE* const iLimit,
                         size_t* offsetPtr)
 {
-    switch(ms->cParams.searchLength)
+    switch(ms->cParams.minMatch)
     {
     default : /* includes case 3 */
     case 4 : return ZSTD_HcFindBestMatch_generic(ms, ip, iLimit, offsetPtr, 4, ZSTD_dictMatchState);
@@ -586,7 +598,7 @@
                         const BYTE* ip, const BYTE* const iLimit,
                         size_t* offsetPtr)
 {
-    switch(ms->cParams.searchLength)
+    switch(ms->cParams.minMatch)
     {
     default : /* includes case 3 */
     case 4 : return ZSTD_HcFindBestMatch_generic(ms, ip, iLimit, offsetPtr, 4, ZSTD_extDict);
--- a/contrib/python-zstandard/zstd/compress/zstd_ldm.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/compress/zstd_ldm.c	Wed Apr 17 13:41:18 2019 -0400
@@ -37,8 +37,8 @@
         params->hashLog = MAX(ZSTD_HASHLOG_MIN, params->windowLog - LDM_HASH_RLOG);
         assert(params->hashLog <= ZSTD_HASHLOG_MAX);
     }
-    if (params->hashEveryLog == 0) {
-        params->hashEveryLog = params->windowLog < params->hashLog
+    if (params->hashRateLog == 0) {
+        params->hashRateLog = params->windowLog < params->hashLog
                                    ? 0
                                    : params->windowLog - params->hashLog;
     }
@@ -119,20 +119,20 @@
  *
  *  Gets the small hash, checksum, and tag from the rollingHash.
  *
- *  If the tag matches (1 << ldmParams.hashEveryLog)-1, then
+ *  If the tag matches (1 << ldmParams.hashRateLog)-1, then
  *  creates an ldmEntry from the offset, and inserts it into the hash table.
  *
  *  hBits is the length of the small hash, which is the most significant hBits
  *  of rollingHash. The checksum is the next 32 most significant bits, followed
- *  by ldmParams.hashEveryLog bits that make up the tag. */
+ *  by ldmParams.hashRateLog bits that make up the tag. */
 static void ZSTD_ldm_makeEntryAndInsertByTag(ldmState_t* ldmState,
                                              U64 const rollingHash,
                                              U32 const hBits,
                                              U32 const offset,
                                              ldmParams_t const ldmParams)
 {
-    U32 const tag = ZSTD_ldm_getTag(rollingHash, hBits, ldmParams.hashEveryLog);
-    U32 const tagMask = ((U32)1 << ldmParams.hashEveryLog) - 1;
+    U32 const tag = ZSTD_ldm_getTag(rollingHash, hBits, ldmParams.hashRateLog);
+    U32 const tagMask = ((U32)1 << ldmParams.hashRateLog) - 1;
     if (tag == tagMask) {
         U32 const hash = ZSTD_ldm_getSmallHash(rollingHash, hBits);
         U32 const checksum = ZSTD_ldm_getChecksum(rollingHash, hBits);
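Since the tag is hashRateLog bits of the rolling hash, the test above admits on average one position in every 2^hashRateLog. A worked example with assumed parameters:

    /* assumed parameters: windowLog = 27, hashLog = 20
     *   default hashRateLog = windowLog - hashLog = 7   (see ZSTD_ldm_adjustParameters)
     *   tagMask = (1U << 7) - 1 = 0x7F
     * tag == tagMask holds for roughly 1 in 2^7 = 128 positions, so about
     * srcSize / 128 candidate positions get inserted into the LDM hash table. */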
@@ -143,56 +143,6 @@
     }
 }
 
-/** ZSTD_ldm_getRollingHash() :
- *  Get a 64-bit hash using the first len bytes from buf.
- *
- *  Giving bytes s = s_1, s_2, ... s_k, the hash is defined to be
- *  H(s) = s_1*(a^(k-1)) + s_2*(a^(k-2)) + ... + s_k*(a^0)
- *
- *  where the constant a is defined to be prime8bytes.
- *
- *  The implementation adds an offset to each byte, so
- *  H(s) = (s_1 + HASH_CHAR_OFFSET)*(a^(k-1)) + ... */
-static U64 ZSTD_ldm_getRollingHash(const BYTE* buf, U32 len)
-{
-    U64 ret = 0;
-    U32 i;
-    for (i = 0; i < len; i++) {
-        ret *= prime8bytes;
-        ret += buf[i] + LDM_HASH_CHAR_OFFSET;
-    }
-    return ret;
-}
-
-/** ZSTD_ldm_ipow() :
- *  Return base^exp. */
-static U64 ZSTD_ldm_ipow(U64 base, U64 exp)
-{
-    U64 ret = 1;
-    while (exp) {
-        if (exp & 1) { ret *= base; }
-        exp >>= 1;
-        base *= base;
-    }
-    return ret;
-}
-
-U64 ZSTD_ldm_getHashPower(U32 minMatchLength) {
-    DEBUGLOG(4, "ZSTD_ldm_getHashPower: mml=%u", minMatchLength);
-    assert(minMatchLength >= ZSTD_LDM_MINMATCH_MIN);
-    return ZSTD_ldm_ipow(prime8bytes, minMatchLength - 1);
-}
-
-/** ZSTD_ldm_updateHash() :
- *  Updates hash by removing toRemove and adding toAdd. */
-static U64 ZSTD_ldm_updateHash(U64 hash, BYTE toRemove, BYTE toAdd, U64 hashPower)
-{
-    hash -= ((toRemove + LDM_HASH_CHAR_OFFSET) * hashPower);
-    hash *= prime8bytes;
-    hash += toAdd + LDM_HASH_CHAR_OFFSET;
-    return hash;
-}
-
 /** ZSTD_ldm_countBackwardsMatch() :
  *  Returns the number of bytes that match backwards before pIn and pMatch.
  *
@@ -238,6 +188,7 @@
     case ZSTD_btlazy2:
     case ZSTD_btopt:
     case ZSTD_btultra:
+    case ZSTD_btultra2:
         break;
     default:
         assert(0);  /* not possible : not a valid strategy id */
@@ -261,9 +212,9 @@
     const BYTE* cur = lastHashed + 1;
 
     while (cur < iend) {
-        rollingHash = ZSTD_ldm_updateHash(rollingHash, cur[-1],
-                                          cur[ldmParams.minMatchLength-1],
-                                          state->hashPower);
+        rollingHash = ZSTD_rollingHash_rotate(rollingHash, cur[-1],
+                                              cur[ldmParams.minMatchLength-1],
+                                              state->hashPower);
         ZSTD_ldm_makeEntryAndInsertByTag(state,
                                          rollingHash, hBits,
                                          (U32)(cur - base), ldmParams);
@@ -297,8 +248,8 @@
     U64 const hashPower = ldmState->hashPower;
     U32 const hBits = params->hashLog - params->bucketSizeLog;
     U32 const ldmBucketSize = 1U << params->bucketSizeLog;
-    U32 const hashEveryLog = params->hashEveryLog;
-    U32 const ldmTagMask = (1U << params->hashEveryLog) - 1;
+    U32 const hashRateLog = params->hashRateLog;
+    U32 const ldmTagMask = (1U << params->hashRateLog) - 1;
     /* Prefix and extDict parameters */
     U32 const dictLimit = ldmState->window.dictLimit;
     U32 const lowestIndex = extDict ? ldmState->window.lowLimit : dictLimit;
@@ -324,16 +275,16 @@
         size_t forwardMatchLength = 0, backwardMatchLength = 0;
         ldmEntry_t* bestEntry = NULL;
         if (ip != istart) {
-            rollingHash = ZSTD_ldm_updateHash(rollingHash, lastHashed[0],
-                                              lastHashed[minMatchLength],
-                                              hashPower);
+            rollingHash = ZSTD_rollingHash_rotate(rollingHash, lastHashed[0],
+                                                  lastHashed[minMatchLength],
+                                                  hashPower);
         } else {
-            rollingHash = ZSTD_ldm_getRollingHash(ip, minMatchLength);
+            rollingHash = ZSTD_rollingHash_compute(ip, minMatchLength);
         }
         lastHashed = ip;
 
         /* Do not insert and do not look for a match */
-        if (ZSTD_ldm_getTag(rollingHash, hBits, hashEveryLog) != ldmTagMask) {
+        if (ZSTD_ldm_getTag(rollingHash, hBits, hashRateLog) != ldmTagMask) {
            ip++;
            continue;
         }
@@ -593,7 +544,7 @@
     void const* src, size_t srcSize)
 {
     const ZSTD_compressionParameters* const cParams = &ms->cParams;
-    unsigned const minMatch = cParams->searchLength;
+    unsigned const minMatch = cParams->minMatch;
     ZSTD_blockCompressor const blockCompressor =
         ZSTD_selectBlockCompressor(cParams->strategy, ZSTD_matchState_dictMode(ms));
     /* Input bounds */
--- a/contrib/python-zstandard/zstd/compress/zstd_ldm.h	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/compress/zstd_ldm.h	Wed Apr 17 13:41:18 2019 -0400
@@ -21,7 +21,7 @@
 *  Long distance matching
 ***************************************/
 
-#define ZSTD_LDM_DEFAULT_WINDOW_LOG ZSTD_WINDOWLOG_DEFAULTMAX
+#define ZSTD_LDM_DEFAULT_WINDOW_LOG ZSTD_WINDOWLOG_LIMIT_DEFAULT
 
 /**
  * ZSTD_ldm_generateSequences():
@@ -86,12 +86,8 @@
  */
 size_t ZSTD_ldm_getMaxNbSeq(ldmParams_t params, size_t maxChunkSize);
 
-/** ZSTD_ldm_getTableSize() :
- *  Return prime8bytes^(minMatchLength-1) */
-U64 ZSTD_ldm_getHashPower(U32 minMatchLength);
-
 /** ZSTD_ldm_adjustParameters() :
- *  If the params->hashEveryLog is not set, set it to its default value based on
+ *  If the params->hashRateLog is not set, set it to its default value based on
  *  windowLog and params->hashLog.
  *
  *  Ensures that params->bucketSizeLog is <= params->hashLog (setting it to
--- a/contrib/python-zstandard/zstd/compress/zstd_opt.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/compress/zstd_opt.c	Wed Apr 17 13:41:18 2019 -0400
@@ -17,6 +17,8 @@
 #define ZSTD_FREQ_DIV       4   /* log factor when using previous stats to init next stats */
 #define ZSTD_MAX_PRICE     (1<<30)
 
+#define ZSTD_PREDEF_THRESHOLD 1024   /* if srcSize < ZSTD_PREDEF_THRESHOLD, symbols' cost is assumed static, directly determined by pre-defined distributions */
+
 
 /*-*************************************
 *  Price functions for optimal parser
@@ -52,11 +54,15 @@
     return weight;
 }
 
-/* debugging function, @return price in bytes */
+#if (DEBUGLEVEL>=2)
+/* debugging function,
+ * @return price in bytes as a fractional value
+ * for debug messages only */
 MEM_STATIC double ZSTD_fCost(U32 price)
 {
     return (double)price / (BITCOST_MULTIPLIER*8);
 }
+#endif
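
For context on the units: optimal-parser prices are fixed-point, expressed in
fractions of a bit scaled by BITCOST_MULTIPLIER. A tiny sketch, assuming
BITCOST_MULTIPLIER == 1 << 8 (i.e. 1/256th-of-a-bit accuracy)::

   #include <stdio.h>

   #define BITCOST_MULTIPLIER (1 << 8)   /* assumed accuracy setting */

   static double fCost(unsigned price)   /* mirrors ZSTD_fCost() above */
   {
       return (double)price / (BITCOST_MULTIPLIER * 8);
   }

   int main(void)
   {
       printf("%.2f\n", fCost(4096));    /* 4096/256 = 16 bits = 2.00 bytes */
       return 0;
   }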
 
 static void ZSTD_setBasePrices(optState_t* optPtr, int optLevel)
 {
@@ -67,29 +73,44 @@
 }
 
 
-static U32 ZSTD_downscaleStat(U32* table, U32 lastEltIndex, int malus)
+/* ZSTD_downscaleStat() :
+ * reduce all elements in table by a factor 2^(ZSTD_FREQ_DIV+malus)
+ * return the resulting sum of elements */
+static U32 ZSTD_downscaleStat(unsigned* table, U32 lastEltIndex, int malus)
 {
     U32 s, sum=0;
+    DEBUGLOG(5, "ZSTD_downscaleStat (nbElts=%u)", (unsigned)lastEltIndex+1);
     assert(ZSTD_FREQ_DIV+malus > 0 && ZSTD_FREQ_DIV+malus < 31);
-    for (s=0; s<=lastEltIndex; s++) {
+    for (s=0; s<lastEltIndex+1; s++) {
         table[s] = 1 + (table[s] >> (ZSTD_FREQ_DIV+malus));
         sum += table[s];
     }
     return sum;
 }
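
A quick worked example of the downscaling, as a standalone sketch: with
ZSTD_FREQ_DIV == 4 and no malus, each count shrinks to 1 + count/16, so
history fades between blocks while no symbol's frequency ever reaches zero::

   #include <assert.h>

   #define FREQ_DIV 4   /* mirrors ZSTD_FREQ_DIV */

   static unsigned downscale(unsigned count, int malus)
   {
       return 1 + (count >> (FREQ_DIV + malus));
   }

   int main(void)
   {
       assert(downscale(100, 0) == 7);   /* 1 + 100/16 */
       assert(downscale(0, 0)   == 1);   /* floor of 1: never priced as impossible */
       return 0;
   }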
 
-static void ZSTD_rescaleFreqs(optState_t* const optPtr,
-                              const BYTE* const src, size_t const srcSize,
-                              int optLevel)
+/* ZSTD_rescaleFreqs() :
+ * if first block (detected by optPtr->litLengthSum == 0) : init statistics
+ *    take hints from dictionary if there is one
+ *    or init from zero, using src for literals stats, or flat 1 for match symbols
+ * otherwise downscale existing stats, to be used as seed for next block.
+ */
+static void
+ZSTD_rescaleFreqs(optState_t* const optPtr,
+            const BYTE* const src, size_t const srcSize,
+                  int const optLevel)
 {
+    DEBUGLOG(5, "ZSTD_rescaleFreqs (srcSize=%u)", (unsigned)srcSize);
     optPtr->priceType = zop_dynamic;
 
     if (optPtr->litLengthSum == 0) {  /* first block : init */
-        if (srcSize <= 1024)   /* heuristic */
+        if (srcSize <= ZSTD_PREDEF_THRESHOLD) {  /* heuristic */
+            DEBUGLOG(5, "(srcSize <= ZSTD_PREDEF_THRESHOLD) => zop_predef");
             optPtr->priceType = zop_predef;
+        }
 
         assert(optPtr->symbolCosts != NULL);
-        if (optPtr->symbolCosts->huf.repeatMode == HUF_repeat_valid) { /* huffman table presumed generated by dictionary */
+        if (optPtr->symbolCosts->huf.repeatMode == HUF_repeat_valid) {
+            /* huffman table presumed generated by dictionary */
             optPtr->priceType = zop_dynamic;
 
             assert(optPtr->litFreq != NULL);
@@ -208,7 +229,9 @@
 
     /* dynamic statistics */
     {   U32 const llCode = ZSTD_LLcode(litLength);
-        return (LL_bits[llCode] * BITCOST_MULTIPLIER) + (optPtr->litLengthSumBasePrice - WEIGHT(optPtr->litLengthFreq[llCode], optLevel));
+        return (LL_bits[llCode] * BITCOST_MULTIPLIER)
+             + optPtr->litLengthSumBasePrice
+             - WEIGHT(optPtr->litLengthFreq[llCode], optLevel);
     }
 }
 
@@ -253,7 +276,7 @@
 FORCE_INLINE_TEMPLATE U32
 ZSTD_getMatchPrice(U32 const offset,
                    U32 const matchLength,
-                   const optState_t* const optPtr,
+             const optState_t* const optPtr,
                    int const optLevel)
 {
     U32 price;
@@ -385,7 +408,6 @@
     U32* largerPtr  = smallerPtr + 1;
     U32 dummy32;   /* to be nullified at the end */
     U32 const windowLow = ms->window.lowLimit;
-    U32 const matchLow = windowLow ? windowLow : 1;
     U32 matchEndIdx = current+8+1;
     size_t bestLength = 8;
     U32 nbCompares = 1U << cParams->searchLog;
@@ -401,7 +423,8 @@
     assert(ip <= iend-8);   /* required for h calculation */
     hashTable[h] = current;   /* Update Hash Table */
 
-    while (nbCompares-- && (matchIndex >= matchLow)) {
+    assert(windowLow > 0);
+    while (nbCompares-- && (matchIndex >= windowLow)) {
         U32* const nextPtr = bt + 2*(matchIndex & btMask);
         size_t matchLength = MIN(commonLengthSmaller, commonLengthLarger);   /* guaranteed minimum nb of common bytes */
         assert(matchIndex < current);
@@ -479,7 +502,7 @@
     const BYTE* const base = ms->window.base;
     U32 const target = (U32)(ip - base);
     U32 idx = ms->nextToUpdate;
-    DEBUGLOG(5, "ZSTD_updateTree_internal, from %u to %u  (dictMode:%u)",
+    DEBUGLOG(6, "ZSTD_updateTree_internal, from %u to %u  (dictMode:%u)",
                 idx, target, dictMode);
 
     while(idx < target)
@@ -488,15 +511,18 @@
 }
 
 void ZSTD_updateTree(ZSTD_matchState_t* ms, const BYTE* ip, const BYTE* iend) {
-    ZSTD_updateTree_internal(ms, ip, iend, ms->cParams.searchLength, ZSTD_noDict);
+    ZSTD_updateTree_internal(ms, ip, iend, ms->cParams.minMatch, ZSTD_noDict);
 }
 
 FORCE_INLINE_TEMPLATE
 U32 ZSTD_insertBtAndGetAllMatches (
                     ZSTD_matchState_t* ms,
                     const BYTE* const ip, const BYTE* const iLimit, const ZSTD_dictMode_e dictMode,
-                    U32 rep[ZSTD_REP_NUM], U32 const ll0,
-                    ZSTD_match_t* matches, const U32 lengthToBeat, U32 const mls /* template */)
+                    U32 rep[ZSTD_REP_NUM],
+                    U32 const ll0,   /* tells if associated literal length is 0 or not. This value must be 0 or 1 */
+                    ZSTD_match_t* matches,
+                    const U32 lengthToBeat,
+                    U32 const mls /* template */)
 {
     const ZSTD_compressionParameters* const cParams = &ms->cParams;
     U32 const sufficient_len = MIN(cParams->targetLength, ZSTD_OPT_NUM -1);
@@ -542,6 +568,7 @@
     DEBUGLOG(8, "ZSTD_insertBtAndGetAllMatches: current=%u", current);
 
     /* check repCode */
+    assert(ll0 <= 1);   /* necessarily 1 or 0 */
     {   U32 const lastR = ZSTD_REP_NUM + ll0;
         U32 repCode;
         for (repCode = ll0; repCode < lastR; repCode++) {
@@ -724,7 +751,7 @@
                         ZSTD_match_t* matches, U32 const lengthToBeat)
 {
     const ZSTD_compressionParameters* const cParams = &ms->cParams;
-    U32 const matchLengthSearch = cParams->searchLength;
+    U32 const matchLengthSearch = cParams->minMatch;
     DEBUGLOG(8, "ZSTD_BtGetAllMatches");
     if (ip < ms->window.base + ms->nextToUpdate) return 0;   /* skipped area */
     ZSTD_updateTree_internal(ms, ip, iHighLimit, matchLengthSearch, dictMode);
@@ -774,12 +801,30 @@
     return sol.litlen + sol.mlen;
 }
 
+#if 0 /* debug */
+
+static void
+listStats(const U32* table, int lastEltID)
+{
+    int const nbElts = lastEltID + 1;
+    int enb;
+    for (enb=0; enb < nbElts; enb++) {
+        (void)table;
+        //RAWLOG(2, "%3i:%3i,  ", enb, table[enb]);
+        RAWLOG(2, "%4i,", table[enb]);
+    }
+    RAWLOG(2, " \n");
+}
+
+#endif
+
 FORCE_INLINE_TEMPLATE size_t
 ZSTD_compressBlock_opt_generic(ZSTD_matchState_t* ms,
                                seqStore_t* seqStore,
                                U32 rep[ZSTD_REP_NUM],
-                               const void* src, size_t srcSize,
-                               const int optLevel, const ZSTD_dictMode_e dictMode)
+                         const void* src, size_t srcSize,
+                         const int optLevel,
+                         const ZSTD_dictMode_e dictMode)
 {
     optState_t* const optStatePtr = &ms->opt;
     const BYTE* const istart = (const BYTE*)src;
@@ -792,14 +837,15 @@
     const ZSTD_compressionParameters* const cParams = &ms->cParams;
 
     U32 const sufficient_len = MIN(cParams->targetLength, ZSTD_OPT_NUM -1);
-    U32 const minMatch = (cParams->searchLength == 3) ? 3 : 4;
+    U32 const minMatch = (cParams->minMatch == 3) ? 3 : 4;
 
     ZSTD_optimal_t* const opt = optStatePtr->priceTable;
     ZSTD_match_t* const matches = optStatePtr->matchTable;
     ZSTD_optimal_t lastSequence;
 
     /* init */
-    DEBUGLOG(5, "ZSTD_compressBlock_opt_generic");
+    DEBUGLOG(5, "ZSTD_compressBlock_opt_generic: current=%u, prefix=%u, nextToUpdate=%u",
+                (U32)(ip - base), ms->window.dictLimit, ms->nextToUpdate);
     assert(optLevel <= 2);
     ms->nextToUpdate3 = ms->nextToUpdate;
     ZSTD_rescaleFreqs(optStatePtr, (const BYTE*)src, srcSize, optLevel);
@@ -999,7 +1045,7 @@
                     U32 const offCode = opt[storePos].off;
                     U32 const advance = llen + mlen;
                     DEBUGLOG(6, "considering seq starting at %zi, llen=%u, mlen=%u",
-                                anchor - istart, llen, mlen);
+                                anchor - istart, (unsigned)llen, (unsigned)mlen);
 
                     if (mlen==0) {  /* only literals => must be last "sequence", actually starting a new stream of sequences */
                         assert(storePos == storeEnd);   /* must be last sequence */
@@ -1047,11 +1093,11 @@
 
 
 /* used in 2-pass strategy */
-static U32 ZSTD_upscaleStat(U32* table, U32 lastEltIndex, int bonus)
+static U32 ZSTD_upscaleStat(unsigned* table, U32 lastEltIndex, int bonus)
 {
     U32 s, sum=0;
-    assert(ZSTD_FREQ_DIV+bonus > 0);
-    for (s=0; s<=lastEltIndex; s++) {
+    assert(ZSTD_FREQ_DIV+bonus >= 0);
+    for (s=0; s<lastEltIndex+1; s++) {
         table[s] <<= ZSTD_FREQ_DIV+bonus;
         table[s]--;
         sum += table[s];
@@ -1063,9 +1109,43 @@
 MEM_STATIC void ZSTD_upscaleStats(optState_t* optPtr)
 {
     optPtr->litSum = ZSTD_upscaleStat(optPtr->litFreq, MaxLit, 0);
-    optPtr->litLengthSum = ZSTD_upscaleStat(optPtr->litLengthFreq, MaxLL, 1);
-    optPtr->matchLengthSum = ZSTD_upscaleStat(optPtr->matchLengthFreq, MaxML, 1);
-    optPtr->offCodeSum = ZSTD_upscaleStat(optPtr->offCodeFreq, MaxOff, 1);
+    optPtr->litLengthSum = ZSTD_upscaleStat(optPtr->litLengthFreq, MaxLL, 0);
+    optPtr->matchLengthSum = ZSTD_upscaleStat(optPtr->matchLengthFreq, MaxML, 0);
+    optPtr->offCodeSum = ZSTD_upscaleStat(optPtr->offCodeFreq, MaxOff, 0);
+}
+
+/* ZSTD_initStats_ultra():
+ * make a first compression pass, just to seed stats with more accurate starting values.
+ * only works on the first block, with no dictionary and no ldm.
+ * this function cannot error, hence its contract must be respected.
+ */
+static void
+ZSTD_initStats_ultra(ZSTD_matchState_t* ms,
+                     seqStore_t* seqStore,
+                     U32 rep[ZSTD_REP_NUM],
+               const void* src, size_t srcSize)
+{
+    U32 tmpRep[ZSTD_REP_NUM];  /* updated rep codes will sink here */
+    memcpy(tmpRep, rep, sizeof(tmpRep));
+
+    DEBUGLOG(4, "ZSTD_initStats_ultra (srcSize=%zu)", srcSize);
+    assert(ms->opt.litLengthSum == 0);    /* first block */
+    assert(seqStore->sequences == seqStore->sequencesStart);   /* no ldm */
+    assert(ms->window.dictLimit == ms->window.lowLimit);   /* no dictionary */
+    assert(ms->window.dictLimit - ms->nextToUpdate <= 1);  /* no prefix (note: intentional overflow, defined as two's complement) */
+
+    ZSTD_compressBlock_opt_generic(ms, seqStore, tmpRep, src, srcSize, 2 /*optLevel*/, ZSTD_noDict);   /* generate stats into ms->opt*/
+
+    /* invalidate first scan from history */
+    ZSTD_resetSeqStore(seqStore);
+    ms->window.base -= srcSize;
+    ms->window.dictLimit += (U32)srcSize;
+    ms->window.lowLimit = ms->window.dictLimit;
+    ms->nextToUpdate = ms->window.dictLimit;
+    ms->nextToUpdate3 = ms->window.dictLimit;
+
+    /* reinforce the weight of collected statistics */
+    ZSTD_upscaleStats(&ms->opt);
 }
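
The window rewind above deserves a note. A sketch of the index arithmetic,
under the assumption that match indices are computed as ptr - window.base::

   /* before pass 1 :  base = B,           dictLimit = lowLimit = D,
    *                  table entries inserted by the pass lie in [D, D+srcSize)
    * after rewind  :  base = B - srcSize, dictLimit = lowLimit = D + srcSize
    * the pass-1 entries now all sit below the new lowLimit, i.e. out of
    * window, so the second pass (which restarts at index D+srcSize) can
    * never match against them -- only the statistics in ms->opt survive. */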
 
 size_t ZSTD_compressBlock_btultra(
@@ -1073,33 +1153,34 @@
         const void* src, size_t srcSize)
 {
     DEBUGLOG(5, "ZSTD_compressBlock_btultra (srcSize=%zu)", srcSize);
-#if 0
-    /* 2-pass strategy (disabled)
+    return ZSTD_compressBlock_opt_generic(ms, seqStore, rep, src, srcSize, 2 /*optLevel*/, ZSTD_noDict);
+}
+
+size_t ZSTD_compressBlock_btultra2(
+        ZSTD_matchState_t* ms, seqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
+        const void* src, size_t srcSize)
+{
+    U32 const current = (U32)((const BYTE*)src - ms->window.base);
+    DEBUGLOG(5, "ZSTD_compressBlock_btultra2 (srcSize=%zu)", srcSize);
+
+    /* 2-pass strategy:
      * this strategy makes a first pass over first block to collect statistics
      * and seed next round's statistics with it.
+     * After the 1st pass, the function forgets everything and starts a new block.
+     * Consequently, this can only work if no data has been previously loaded in tables,
+     * aka, no dictionary, no prefix, no ldm preprocessing.
      * The compression ratio gain is generally small (~0.5% on first block),
      * the cost is 2x cpu time on first block. */
     assert(srcSize <= ZSTD_BLOCKSIZE_MAX);
     if ( (ms->opt.litLengthSum==0)   /* first block */
-      && (seqStore->sequences == seqStore->sequencesStart)   /* no ldm */
-      && (ms->window.dictLimit == ms->window.lowLimit) ) {   /* no dictionary */
-        U32 tmpRep[ZSTD_REP_NUM];
-        DEBUGLOG(5, "ZSTD_compressBlock_btultra: first block: collecting statistics");
-        assert(ms->nextToUpdate >= ms->window.dictLimit
-            && ms->nextToUpdate <= ms->window.dictLimit + 1);
-        memcpy(tmpRep, rep, sizeof(tmpRep));
-        ZSTD_compressBlock_opt_generic(ms, seqStore, tmpRep, src, srcSize, 2 /*optLevel*/, ZSTD_noDict);   /* generate stats into ms->opt*/
-        ZSTD_resetSeqStore(seqStore);
-        /* invalidate first scan from history */
-        ms->window.base -= srcSize;
-        ms->window.dictLimit += (U32)srcSize;
-        ms->window.lowLimit = ms->window.dictLimit;
-        ms->nextToUpdate = ms->window.dictLimit;
-        ms->nextToUpdate3 = ms->window.dictLimit;
-        /* re-inforce weight of collected statistics */
-        ZSTD_upscaleStats(&ms->opt);
+      && (seqStore->sequences == seqStore->sequencesStart)  /* no ldm */
+      && (ms->window.dictLimit == ms->window.lowLimit)   /* no dictionary */
+      && (current == ms->window.dictLimit)   /* start of frame, nothing already loaded nor skipped */
+      && (srcSize > ZSTD_PREDEF_THRESHOLD)
+      ) {
+        ZSTD_initStats_ultra(ms, seqStore, rep, src, srcSize);
     }
-#endif
+
     return ZSTD_compressBlock_opt_generic(ms, seqStore, rep, src, srcSize, 2 /*optLevel*/, ZSTD_noDict);
 }
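
For callers, btultra2 is reached through the regular API rather than by
invoking this function directly. A minimal sketch, assuming a zstd build where
the advanced API is public (ZSTD_CCtx_setParameter / ZSTD_compress2, stable
since v1.4.0); the 2-pass seeding only engages when the first block exceeds
ZSTD_PREDEF_THRESHOLD (1024 bytes)::

   #include <stdio.h>
   #include <stdlib.h>
   #include <string.h>
   #include <zstd.h>

   int main(void)
   {
       size_t const srcSize = 64 * 1024;   /* comfortably > 1024 bytes */
       char* const src = malloc(srcSize);
       size_t const bound = ZSTD_compressBound(srcSize);
       void* const dst = malloc(bound);
       ZSTD_CCtx* const cctx = ZSTD_createCCtx();
       size_t csize;
       if (!src || !dst || !cctx) return 1;
       memset(src, 'z', srcSize);
       /* select btultra2 explicitly (it also backs the top compression levels) */
       ZSTD_CCtx_setParameter(cctx, ZSTD_c_strategy, ZSTD_btultra2);
       csize = ZSTD_compress2(cctx, dst, bound, src, srcSize);
       if (ZSTD_isError(csize)) return 1;
       printf("%zu -> %zu bytes\n", srcSize, csize);
       ZSTD_freeCCtx(cctx); free(dst); free(src);
       return 0;
   }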
 
@@ -1130,3 +1211,7 @@
 {
     return ZSTD_compressBlock_opt_generic(ms, seqStore, rep, src, srcSize, 2 /*optLevel*/, ZSTD_extDict);
 }
+
+/* note : no btultra2 variant for extDict or dictMatchState,
+ * because btultra2 is not meant to work with dictionaries
+ * and is specific to the first block (no prefix) */
--- a/contrib/python-zstandard/zstd/compress/zstd_opt.h	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/compress/zstd_opt.h	Wed Apr 17 13:41:18 2019 -0400
@@ -26,6 +26,10 @@
 size_t ZSTD_compressBlock_btultra(
         ZSTD_matchState_t* ms, seqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
         void const* src, size_t srcSize);
+size_t ZSTD_compressBlock_btultra2(
+        ZSTD_matchState_t* ms, seqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
+        void const* src, size_t srcSize);
+
 
 size_t ZSTD_compressBlock_btopt_dictMatchState(
         ZSTD_matchState_t* ms, seqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
@@ -41,6 +45,10 @@
         ZSTD_matchState_t* ms, seqStore_t* seqStore, U32 rep[ZSTD_REP_NUM],
         void const* src, size_t srcSize);
 
+        /* note : no btultra2 variant for extDict nor dictMatchState,
+         * because btultra2 is not meant to work with dictionaries
+         * and is only specific for the first block (no prefix) */
+
 #if defined (__cplusplus)
 }
 #endif
--- a/contrib/python-zstandard/zstd/compress/zstdmt_compress.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/compress/zstdmt_compress.c	Wed Apr 17 13:41:18 2019 -0400
@@ -9,21 +9,19 @@
  */
 
 
-/* ======   Tuning parameters   ====== */
-#define ZSTDMT_NBWORKERS_MAX 200
-#define ZSTDMT_JOBSIZE_MAX  (MEM_32bits() ? (512 MB) : (2 GB))  /* note : limited by `jobSize` type, which is `unsigned` */
-#define ZSTDMT_OVERLAPLOG_DEFAULT 6
-
-
 /* ======   Compiler specifics   ====== */
 #if defined(_MSC_VER)
 #  pragma warning(disable : 4204)   /* disable: C4204: non-constant aggregate initializer */
 #endif
 
 
+/* ======   Constants   ====== */
+#define ZSTDMT_OVERLAPLOG_DEFAULT 0
+
+
 /* ======   Dependencies   ====== */
 #include <string.h>      /* memcpy, memset */
-#include <limits.h>      /* INT_MAX */
+#include <limits.h>      /* INT_MAX, UINT_MAX */
 #include "pool.h"        /* threadpool */
 #include "threading.h"   /* mutex */
 #include "zstd_compress_internal.h"  /* MIN, ERROR, ZSTD_*, ZSTD_highbit32 */
@@ -57,9 +55,9 @@
    static clock_t _ticksPerSecond = 0;
    if (_ticksPerSecond <= 0) _ticksPerSecond = sysconf(_SC_CLK_TCK);
 
-   { struct tms junk; clock_t newTicks = (clock_t) times(&junk);
-     return ((((unsigned long long)newTicks)*(1000000))/_ticksPerSecond); }
-}
+   {   struct tms junk; clock_t newTicks = (clock_t) times(&junk);
+       return ((((unsigned long long)newTicks)*(1000000))/_ticksPerSecond);
+}  }
 
 #define MUTEX_WAIT_TIME_DLEVEL 6
 #define ZSTD_PTHREAD_MUTEX_LOCK(mutex) {          \
@@ -342,8 +340,8 @@
 
 typedef struct {
     ZSTD_pthread_mutex_t poolMutex;
-    unsigned totalCCtx;
-    unsigned availCCtx;
+    int totalCCtx;
+    int availCCtx;
     ZSTD_customMem cMem;
     ZSTD_CCtx* cctx[1];   /* variable size */
 } ZSTDMT_CCtxPool;
@@ -351,16 +349,16 @@
 /* note : all CCtx borrowed from the pool should be released back to the pool _before_ freeing the pool */
 static void ZSTDMT_freeCCtxPool(ZSTDMT_CCtxPool* pool)
 {
-    unsigned u;
-    for (u=0; u<pool->totalCCtx; u++)
-        ZSTD_freeCCtx(pool->cctx[u]);  /* note : compatible with free on NULL */
+    int cid;
+    for (cid=0; cid<pool->totalCCtx; cid++)
+        ZSTD_freeCCtx(pool->cctx[cid]);  /* note : compatible with free on NULL */
     ZSTD_pthread_mutex_destroy(&pool->poolMutex);
     ZSTD_free(pool, pool->cMem);
 }
 
 /* ZSTDMT_createCCtxPool() :
  * implies nbWorkers >= 1 , checked by caller ZSTDMT_createCCtx() */
-static ZSTDMT_CCtxPool* ZSTDMT_createCCtxPool(unsigned nbWorkers,
+static ZSTDMT_CCtxPool* ZSTDMT_createCCtxPool(int nbWorkers,
                                               ZSTD_customMem cMem)
 {
     ZSTDMT_CCtxPool* const cctxPool = (ZSTDMT_CCtxPool*) ZSTD_calloc(
@@ -381,7 +379,7 @@
 }
 
 static ZSTDMT_CCtxPool* ZSTDMT_expandCCtxPool(ZSTDMT_CCtxPool* srcPool,
-                                              unsigned nbWorkers)
+                                              int nbWorkers)
 {
     if (srcPool==NULL) return NULL;
     if (nbWorkers <= srcPool->totalCCtx) return srcPool;   /* good enough */
@@ -469,9 +467,9 @@
         DEBUGLOG(4, "LDM window size = %u KB", (1U << params.cParams.windowLog) >> 10);
         ZSTD_ldm_adjustParameters(&params.ldmParams, &params.cParams);
         assert(params.ldmParams.hashLog >= params.ldmParams.bucketSizeLog);
-        assert(params.ldmParams.hashEveryLog < 32);
+        assert(params.ldmParams.hashRateLog < 32);
         serialState->ldmState.hashPower =
-                ZSTD_ldm_getHashPower(params.ldmParams.minMatchLength);
+                ZSTD_rollingHash_primePower(params.ldmParams.minMatchLength);
     } else {
         memset(&params.ldmParams, 0, sizeof(params.ldmParams));
     }
@@ -674,7 +672,7 @@
         if (ZSTD_isError(initError)) JOB_ERROR(initError);
     } else {  /* srcStart points at reloaded section */
         U64 const pledgedSrcSize = job->firstJob ? job->fullFrameSize : job->src.size;
-        {   size_t const forceWindowError = ZSTD_CCtxParam_setParameter(&jobParams, ZSTD_p_forceMaxWindow, !job->firstJob);
+        {   size_t const forceWindowError = ZSTD_CCtxParam_setParameter(&jobParams, ZSTD_c_forceMaxWindow, !job->firstJob);
             if (ZSTD_isError(forceWindowError)) JOB_ERROR(forceWindowError);
         }
         {   size_t const initError = ZSTD_compressBegin_advanced_internal(cctx,
@@ -777,6 +775,14 @@
 
 static const roundBuff_t kNullRoundBuff = {NULL, 0, 0};
 
+#define RSYNC_LENGTH 32
+
+typedef struct {
+  U64 hash;
+  U64 hitMask;
+  U64 primePower;
+} rsyncState_t;
+
 struct ZSTDMT_CCtx_s {
     POOL_ctx* factory;
     ZSTDMT_jobDescription* jobs;
@@ -790,6 +796,7 @@
     inBuff_t inBuff;
     roundBuff_t roundBuff;
     serialState_t serial;
+    rsyncState_t rsync;
     unsigned singleBlockingThread;
     unsigned jobIDMask;
     unsigned doneJobID;
@@ -859,7 +866,7 @@
 {
     if (nbWorkers > ZSTDMT_NBWORKERS_MAX) nbWorkers = ZSTDMT_NBWORKERS_MAX;
     params->nbWorkers = nbWorkers;
-    params->overlapSizeLog = ZSTDMT_OVERLAPLOG_DEFAULT;
+    params->overlapLog = ZSTDMT_OVERLAPLOG_DEFAULT;
     params->jobSize = 0;
     return nbWorkers;
 }
@@ -969,52 +976,59 @@
 }
 
 /* Internal only */
-size_t ZSTDMT_CCtxParam_setMTCtxParameter(ZSTD_CCtx_params* params,
-                                ZSTDMT_parameter parameter, unsigned value) {
+size_t
+ZSTDMT_CCtxParam_setMTCtxParameter(ZSTD_CCtx_params* params,
+                                   ZSTDMT_parameter parameter,
+                                   int value)
+{
     DEBUGLOG(4, "ZSTDMT_CCtxParam_setMTCtxParameter");
     switch(parameter)
     {
     case ZSTDMT_p_jobSize :
-        DEBUGLOG(4, "ZSTDMT_CCtxParam_setMTCtxParameter : set jobSize to %u", value);
-        if ( (value > 0)  /* value==0 => automatic job size */
-           & (value < ZSTDMT_JOBSIZE_MIN) )
+        DEBUGLOG(4, "ZSTDMT_CCtxParam_setMTCtxParameter : set jobSize to %i", value);
+        if ( value != 0  /* default */
+          && value < ZSTDMT_JOBSIZE_MIN)
             value = ZSTDMT_JOBSIZE_MIN;
-        if (value > ZSTDMT_JOBSIZE_MAX)
-            value = ZSTDMT_JOBSIZE_MAX;
+        assert(value >= 0);
+        if (value > ZSTDMT_JOBSIZE_MAX) value = ZSTDMT_JOBSIZE_MAX;
         params->jobSize = value;
         return value;
-    case ZSTDMT_p_overlapSectionLog :
-        if (value > 9) value = 9;
-        DEBUGLOG(4, "ZSTDMT_p_overlapSectionLog : %u", value);
-        params->overlapSizeLog = (value >= 9) ? 9 : value;
+
+    case ZSTDMT_p_overlapLog :
+        DEBUGLOG(4, "ZSTDMT_p_overlapLog : %i", value);
+        if (value < ZSTD_OVERLAPLOG_MIN) value = ZSTD_OVERLAPLOG_MIN;
+        if (value > ZSTD_OVERLAPLOG_MAX) value = ZSTD_OVERLAPLOG_MAX;
+        params->overlapLog = value;
         return value;
+
+    case ZSTDMT_p_rsyncable :
+        value = (value != 0);
+        params->rsyncable = value;
+        return value;
+
     default :
         return ERROR(parameter_unsupported);
     }
 }
 
-size_t ZSTDMT_setMTCtxParameter(ZSTDMT_CCtx* mtctx, ZSTDMT_parameter parameter, unsigned value)
+size_t ZSTDMT_setMTCtxParameter(ZSTDMT_CCtx* mtctx, ZSTDMT_parameter parameter, int value)
 {
     DEBUGLOG(4, "ZSTDMT_setMTCtxParameter");
-    switch(parameter)
-    {
-    case ZSTDMT_p_jobSize :
-        return ZSTDMT_CCtxParam_setMTCtxParameter(&mtctx->params, parameter, value);
-    case ZSTDMT_p_overlapSectionLog :
-        return ZSTDMT_CCtxParam_setMTCtxParameter(&mtctx->params, parameter, value);
-    default :
-        return ERROR(parameter_unsupported);
-    }
+    return ZSTDMT_CCtxParam_setMTCtxParameter(&mtctx->params, parameter, value);
 }
 
-size_t ZSTDMT_getMTCtxParameter(ZSTDMT_CCtx* mtctx, ZSTDMT_parameter parameter, unsigned* value)
+size_t ZSTDMT_getMTCtxParameter(ZSTDMT_CCtx* mtctx, ZSTDMT_parameter parameter, int* value)
 {
     switch (parameter) {
     case ZSTDMT_p_jobSize:
-        *value = mtctx->params.jobSize;
+        assert(mtctx->params.jobSize <= INT_MAX);
+        *value = (int)(mtctx->params.jobSize);
         break;
-    case ZSTDMT_p_overlapSectionLog:
-        *value = mtctx->params.overlapSizeLog;
+    case ZSTDMT_p_overlapLog:
+        *value = mtctx->params.overlapLog;
+        break;
+    case ZSTDMT_p_rsyncable:
+        *value = mtctx->params.rsyncable;
         break;
     default:
         return ERROR(parameter_unsupported);
@@ -1140,22 +1154,66 @@
 /* =====   Multi-threaded compression   ===== */
 /* ------------------------------------------ */
 
-static size_t ZSTDMT_computeTargetJobLog(ZSTD_CCtx_params const params)
+static unsigned ZSTDMT_computeTargetJobLog(ZSTD_CCtx_params const params)
 {
     if (params.ldmParams.enableLdm)
+        /* In Long Range Mode, the windowLog is typically oversized.
+         * In that case, it's preferable to determine the jobSize
+         * based on chainLog instead. */
         return MAX(21, params.cParams.chainLog + 4);
     return MAX(20, params.cParams.windowLog + 2);
 }
 
-static size_t ZSTDMT_computeOverlapLog(ZSTD_CCtx_params const params)
+static int ZSTDMT_overlapLog_default(ZSTD_strategy strat)
 {
-    unsigned const overlapRLog = (params.overlapSizeLog>9) ? 0 : 9-params.overlapSizeLog;
-    if (params.ldmParams.enableLdm)
-        return (MIN(params.cParams.windowLog, ZSTDMT_computeTargetJobLog(params) - 2) - overlapRLog);
-    return overlapRLog >= 9 ? 0 : (params.cParams.windowLog - overlapRLog);
+    switch(strat)
+    {
+        case ZSTD_btultra2:
+            return 9;
+        case ZSTD_btultra:
+        case ZSTD_btopt:
+            return 8;
+        case ZSTD_btlazy2:
+        case ZSTD_lazy2:
+            return 7;
+        case ZSTD_lazy:
+        case ZSTD_greedy:
+        case ZSTD_dfast:
+        case ZSTD_fast:
+        default:;
+    }
+    return 6;
 }
 
-static unsigned ZSTDMT_computeNbJobs(ZSTD_CCtx_params params, size_t srcSize, unsigned nbWorkers) {
+static int ZSTDMT_overlapLog(int ovlog, ZSTD_strategy strat)
+{
+    assert(0 <= ovlog && ovlog <= 9);
+    if (ovlog == 0) return ZSTDMT_overlapLog_default(strat);
+    return ovlog;
+}
+
+static size_t ZSTDMT_computeOverlapSize(ZSTD_CCtx_params const params)
+{
+    int const overlapRLog = 9 - ZSTDMT_overlapLog(params.overlapLog, params.cParams.strategy);
+    int ovLog = (overlapRLog >= 8) ? 0 : (params.cParams.windowLog - overlapRLog);
+    assert(0 <= overlapRLog && overlapRLog <= 8);
+    if (params.ldmParams.enableLdm) {
+        /* In Long Range Mode, the windowLog is typically oversized.
+         * In that case, it's preferable to determine the jobSize
+         * based on chainLog instead.
+         * Then, ovLog becomes a fraction of the jobSize, rather than windowSize */
+        ovLog = MIN(params.cParams.windowLog, ZSTDMT_computeTargetJobLog(params) - 2)
+                - overlapRLog;
+    }
+    assert(0 <= ovLog && ovLog <= 30);
+    DEBUGLOG(4, "overlapLog : %i", params.overlapLog);
+    DEBUGLOG(4, "overlap size : %i", 1 << ovLog);
+    return (ovLog==0) ? 0 : (size_t)1 << ovLog;
+}
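
Putting the two sizing functions together, a worked example for a 16 MB window
with the btultra2 overlap default and no LDM (standalone sketch)::

   #include <assert.h>
   #include <stddef.h>

   #define MAX(a,b) ((a)>(b)?(a):(b))

   int main(void)
   {
       unsigned const windowLog = 24;                      /* 16 MB window */
       unsigned const jobLog    = MAX(20, windowLog + 2);  /* 26 -> 64 MB jobs */
       int const overlapLog     = 9;                       /* btultra2 default */
       int const overlapRLog    = 9 - overlapLog;          /* 0 */
       int const ovLog = (overlapRLog >= 8) ? 0 : (int)windowLog - overlapRLog;
       size_t const overlapSize = (ovLog == 0) ? 0 : (size_t)1 << ovLog;
       assert(jobLog == 26);
       assert(overlapSize == (size_t)1 << 24);   /* each job reloads the full window */
       return 0;
   }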
+
+static unsigned
+ZSTDMT_computeNbJobs(ZSTD_CCtx_params params, size_t srcSize, unsigned nbWorkers)
+{
     assert(nbWorkers>0);
     {   size_t const jobSizeTarget = (size_t)1 << ZSTDMT_computeTargetJobLog(params);
         size_t const jobMaxSize = jobSizeTarget << 2;
@@ -1178,7 +1236,7 @@
                 ZSTD_CCtx_params params)
 {
     ZSTD_CCtx_params const jobParams = ZSTDMT_initJobCCtxParams(params);
-    size_t const overlapSize = (size_t)1 << ZSTDMT_computeOverlapLog(params);
+    size_t const overlapSize = ZSTDMT_computeOverlapSize(params);
     unsigned const nbJobs = ZSTDMT_computeNbJobs(params, srcSize, params.nbWorkers);
     size_t const proposedJobSize = (srcSize + (nbJobs-1)) / nbJobs;
     size_t const avgJobSize = (((proposedJobSize-1) & 0x1FFFF) < 0x7FFF) ? proposedJobSize + 0xFFFF : proposedJobSize;   /* avoid too small last block */
@@ -1289,16 +1347,17 @@
 }
 
 size_t ZSTDMT_compress_advanced(ZSTDMT_CCtx* mtctx,
-                               void* dst, size_t dstCapacity,
-                         const void* src, size_t srcSize,
-                         const ZSTD_CDict* cdict,
-                               ZSTD_parameters params,
-                               unsigned overlapLog)
+                                void* dst, size_t dstCapacity,
+                          const void* src, size_t srcSize,
+                          const ZSTD_CDict* cdict,
+                                ZSTD_parameters params,
+                                int overlapLog)
 {
     ZSTD_CCtx_params cctxParams = mtctx->params;
     cctxParams.cParams = params.cParams;
     cctxParams.fParams = params.fParams;
-    cctxParams.overlapSizeLog = overlapLog;
+    assert(ZSTD_OVERLAPLOG_MIN <= overlapLog && overlapLog <= ZSTD_OVERLAPLOG_MAX);
+    cctxParams.overlapLog = overlapLog;
     return ZSTDMT_compress_advanced_internal(mtctx,
                                              dst, dstCapacity,
                                              src, srcSize,
@@ -1311,8 +1370,8 @@
                      const void* src, size_t srcSize,
                            int compressionLevel)
 {
-    U32 const overlapLog = (compressionLevel >= ZSTD_maxCLevel()) ? 9 : ZSTDMT_OVERLAPLOG_DEFAULT;
     ZSTD_parameters params = ZSTD_getParams(compressionLevel, srcSize, 0);
+    int const overlapLog = ZSTDMT_overlapLog_default(params.cParams.strategy);
     params.fParams.contentSizeFlag = 1;
     return ZSTDMT_compress_advanced(mtctx, dst, dstCapacity, src, srcSize, NULL, params, overlapLog);
 }
@@ -1339,8 +1398,8 @@
     if (params.nbWorkers != mtctx->params.nbWorkers)
         CHECK_F( ZSTDMT_resize(mtctx, params.nbWorkers) );
 
-    if (params.jobSize > 0 && params.jobSize < ZSTDMT_JOBSIZE_MIN) params.jobSize = ZSTDMT_JOBSIZE_MIN;
-    if (params.jobSize > ZSTDMT_JOBSIZE_MAX) params.jobSize = ZSTDMT_JOBSIZE_MAX;
+    if (params.jobSize != 0 && params.jobSize < ZSTDMT_JOBSIZE_MIN) params.jobSize = ZSTDMT_JOBSIZE_MIN;
+    if (params.jobSize > (size_t)ZSTDMT_JOBSIZE_MAX) params.jobSize = ZSTDMT_JOBSIZE_MAX;
 
     mtctx->singleBlockingThread = (pledgedSrcSize <= ZSTDMT_JOBSIZE_MIN);  /* do not trigger multi-threading when srcSize is too small */
     if (mtctx->singleBlockingThread) {
@@ -1375,14 +1434,24 @@
         mtctx->cdict = cdict;
     }
 
-    mtctx->targetPrefixSize = (size_t)1 << ZSTDMT_computeOverlapLog(params);
-    DEBUGLOG(4, "overlapLog=%u => %u KB", params.overlapSizeLog, (U32)(mtctx->targetPrefixSize>>10));
+    mtctx->targetPrefixSize = ZSTDMT_computeOverlapSize(params);
+    DEBUGLOG(4, "overlapLog=%i => %u KB", params.overlapLog, (U32)(mtctx->targetPrefixSize>>10));
     mtctx->targetSectionSize = params.jobSize;
     if (mtctx->targetSectionSize == 0) {
         mtctx->targetSectionSize = 1ULL << ZSTDMT_computeTargetJobLog(params);
     }
+    if (params.rsyncable) {
+        /* Aim for the targetSectionSize as the average job size. */
+        U32 const jobSizeMB = (U32)(mtctx->targetSectionSize >> 20);
+        U32 const rsyncBits = ZSTD_highbit32(jobSizeMB) + 20;
+        assert(jobSizeMB >= 1);
+        DEBUGLOG(4, "rsyncLog = %u", rsyncBits);
+        mtctx->rsync.hash = 0;
+        mtctx->rsync.hitMask = (1ULL << rsyncBits) - 1;
+        mtctx->rsync.primePower = ZSTD_rollingHash_primePower(RSYNC_LENGTH);
+    }
     if (mtctx->targetSectionSize < mtctx->targetPrefixSize) mtctx->targetSectionSize = mtctx->targetPrefixSize;  /* job size must be >= overlap size */
-    DEBUGLOG(4, "Job Size : %u KB (note : set to %u)", (U32)(mtctx->targetSectionSize>>10), params.jobSize);
+    DEBUGLOG(4, "Job Size : %u KB (note : set to %u)", (U32)(mtctx->targetSectionSize>>10), (U32)params.jobSize);
     DEBUGLOG(4, "inBuff Size : %u KB", (U32)(mtctx->targetSectionSize>>10));
     ZSTDMT_setBufferSize(mtctx->bufPool, ZSTD_compressBound(mtctx->targetSectionSize));
     {
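
The rsyncable arithmetic just above ties the average distance between
synchronization points to the job size. A standalone worked example
(highbit32 re-implemented here for illustration)::

   #include <assert.h>
   #include <stdint.h>

   static unsigned highbit32(uint32_t v)   /* position of the highest set bit */
   {
       unsigned n = 0;
       assert(v != 0);
       while (v >>= 1) n++;
       return n;
   }

   int main(void)
   {
       uint64_t const jobSize   = 32u << 20;                  /* 32 MB sections */
       uint32_t const jobSizeMB = (uint32_t)(jobSize >> 20);
       unsigned const rsyncBits = highbit32(jobSizeMB) + 20;  /* = 25 */
       uint64_t const hitMask   = (1ULL << rsyncBits) - 1;
       /* a random 64-bit hash satisfies (hash & hitMask) == hitMask with
        * probability 2^-25, i.e. on average once every 32 MB of input,
        * so sync points track the average job size */
       assert(rsyncBits == 25);
       assert(hitMask == (1ULL << 25) - 1);
       return 0;
   }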
@@ -1818,6 +1887,89 @@
     return 1;
 }
 
+typedef struct {
+  size_t toLoad;  /* The number of bytes to load from the input. */
+  int flush;      /* Boolean indicating whether we must flush because we found a synchronization point. */
+} syncPoint_t;
+
+/**
+ * Searches through the input for a synchronization point. If one is found, we
+ * will instruct the caller to flush, and return the number of bytes to load.
+ * Otherwise, we will load as many bytes as possible and instruct the caller
+ * to continue as normal.
+ */
+static syncPoint_t
+findSynchronizationPoint(ZSTDMT_CCtx const* mtctx, ZSTD_inBuffer const input)
+{
+    BYTE const* const istart = (BYTE const*)input.src + input.pos;
+    U64 const primePower = mtctx->rsync.primePower;
+    U64 const hitMask = mtctx->rsync.hitMask;
+
+    syncPoint_t syncPoint;
+    U64 hash;
+    BYTE const* prev;
+    size_t pos;
+
+    syncPoint.toLoad = MIN(input.size - input.pos, mtctx->targetSectionSize - mtctx->inBuff.filled);
+    syncPoint.flush = 0;
+    if (!mtctx->params.rsyncable)
+        /* Rsync is disabled. */
+        return syncPoint;
+    if (mtctx->inBuff.filled + syncPoint.toLoad < RSYNC_LENGTH)
+        /* Not enough to compute the hash.
+         * We will miss any synchronization points in this RSYNC_LENGTH byte
+         * window. However, since it depends only on the internal buffers, if the
+         * state is already synchronized, we will remain synchronized.
+         * Additionally, the probability that we miss a synchronization point is
+         * low: RSYNC_LENGTH / targetSectionSize.
+         */
+        return syncPoint;
+    /* Initialize the loop variables. */
+    if (mtctx->inBuff.filled >= RSYNC_LENGTH) {
+        /* We have enough bytes buffered to initialize the hash.
+         * Start scanning at the beginning of the input.
+         */
+        pos = 0;
+        prev = (BYTE const*)mtctx->inBuff.buffer.start + mtctx->inBuff.filled - RSYNC_LENGTH;
+        hash = ZSTD_rollingHash_compute(prev, RSYNC_LENGTH);
+    } else {
+        /* We don't have enough bytes buffered to initialize the hash, but
+         * we know we have at least RSYNC_LENGTH bytes total.
+         * Start scanning after the first RSYNC_LENGTH bytes less the bytes
+         * already buffered.
+         */
+        pos = RSYNC_LENGTH - mtctx->inBuff.filled;
+        prev = (BYTE const*)mtctx->inBuff.buffer.start - pos;
+        hash = ZSTD_rollingHash_compute(mtctx->inBuff.buffer.start, mtctx->inBuff.filled);
+        hash = ZSTD_rollingHash_append(hash, istart, pos);
+    }
+    /* Starting with the hash of the previous RSYNC_LENGTH bytes, roll
+     * through the input. If we hit a synchronization point, then cut the
+     * job off, and tell the compressor to flush the job. Otherwise, load
+     * all the bytes and continue as normal.
+     * If we go too long without a synchronization point (targetSectionSize)
+     * then a block will be emitted anyway, but this is okay, since if we
+     * are already synchronized we will remain synchronized.
+     */
+    for (; pos < syncPoint.toLoad; ++pos) {
+        BYTE const toRemove = pos < RSYNC_LENGTH ? prev[pos] : istart[pos - RSYNC_LENGTH];
+        /* if (pos >= RSYNC_LENGTH) assert(ZSTD_rollingHash_compute(istart + pos - RSYNC_LENGTH, RSYNC_LENGTH) == hash); */
+        hash = ZSTD_rollingHash_rotate(hash, toRemove, istart[pos], primePower);
+        if ((hash & hitMask) == hitMask) {
+            syncPoint.toLoad = pos + 1;
+            syncPoint.flush = 1;
+            break;
+        }
+    }
+    return syncPoint;
+}
+
+size_t ZSTDMT_nextInputSizeHint(const ZSTDMT_CCtx* mtctx)
+{
+    size_t hintInSize = mtctx->targetSectionSize - mtctx->inBuff.filled;
+    if (hintInSize==0) hintInSize = mtctx->targetSectionSize;
+    return hintInSize;
+}
 
 /** ZSTDMT_compressStream_generic() :
  *  internal use only - exposed to be invoked from zstd_compress.c
@@ -1844,7 +1996,8 @@
     }
 
     /* single-pass shortcut (note : synchronous-mode) */
-    if ( (mtctx->nextJobID == 0)      /* just started */
+    if ( (!mtctx->params.rsyncable)   /* rsyncable mode is disabled */
+      && (mtctx->nextJobID == 0)      /* just started */
       && (mtctx->inBuff.filled == 0)  /* nothing buffered */
       && (!mtctx->jobReady)           /* no job already created */
       && (endOp == ZSTD_e_end)        /* end order */
@@ -1876,14 +2029,17 @@
                 DEBUGLOG(5, "ZSTDMT_tryGetInputRange completed successfully : mtctx->inBuff.buffer.start = %p", mtctx->inBuff.buffer.start);
         }
         if (mtctx->inBuff.buffer.start != NULL) {
-            size_t const toLoad = MIN(input->size - input->pos, mtctx->targetSectionSize - mtctx->inBuff.filled);
+            syncPoint_t const syncPoint = findSynchronizationPoint(mtctx, *input);
+            if (syncPoint.flush && endOp == ZSTD_e_continue) {
+                endOp = ZSTD_e_flush;
+            }
             assert(mtctx->inBuff.buffer.capacity >= mtctx->targetSectionSize);
             DEBUGLOG(5, "ZSTDMT_compressStream_generic: adding %u bytes on top of %u to buffer of size %u",
-                        (U32)toLoad, (U32)mtctx->inBuff.filled, (U32)mtctx->targetSectionSize);
-            memcpy((char*)mtctx->inBuff.buffer.start + mtctx->inBuff.filled, (const char*)input->src + input->pos, toLoad);
-            input->pos += toLoad;
-            mtctx->inBuff.filled += toLoad;
-            forwardInputProgress = toLoad>0;
+                        (U32)syncPoint.toLoad, (U32)mtctx->inBuff.filled, (U32)mtctx->targetSectionSize);
+            memcpy((char*)mtctx->inBuff.buffer.start + mtctx->inBuff.filled, (const char*)input->src + input->pos, syncPoint.toLoad);
+            input->pos += syncPoint.toLoad;
+            mtctx->inBuff.filled += syncPoint.toLoad;
+            forwardInputProgress = syncPoint.toLoad>0;
         }
         if ((input->pos < input->size) && (endOp == ZSTD_e_end))
             endOp = ZSTD_e_flush;   /* can't end now : not all input consumed */
--- a/contrib/python-zstandard/zstd/compress/zstdmt_compress.h	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/compress/zstdmt_compress.h	Wed Apr 17 13:41:18 2019 -0400
@@ -28,6 +28,16 @@
 #include "zstd.h"            /* ZSTD_inBuffer, ZSTD_outBuffer, ZSTDLIB_API */
 
 
+/* ===   Constants   === */
+#ifndef ZSTDMT_NBWORKERS_MAX
+#  define ZSTDMT_NBWORKERS_MAX 200
+#endif
+#ifndef ZSTDMT_JOBSIZE_MIN
+#  define ZSTDMT_JOBSIZE_MIN (1 MB)
+#endif
+#define ZSTDMT_JOBSIZE_MAX  (MEM_32bits() ? (512 MB) : (1024 MB))
+
+
 /* ===   Memory management   === */
 typedef struct ZSTDMT_CCtx_s ZSTDMT_CCtx;
 ZSTDLIB_API ZSTDMT_CCtx* ZSTDMT_createCCtx(unsigned nbWorkers);
@@ -52,6 +62,7 @@
 ZSTDLIB_API size_t ZSTDMT_initCStream(ZSTDMT_CCtx* mtctx, int compressionLevel);
 ZSTDLIB_API size_t ZSTDMT_resetCStream(ZSTDMT_CCtx* mtctx, unsigned long long pledgedSrcSize);  /**< if srcSize is not known at reset time, use ZSTD_CONTENTSIZE_UNKNOWN. Note: for compatibility with older programs, 0 means the same as ZSTD_CONTENTSIZE_UNKNOWN, but it will change in the future to mean "empty" */
 
+ZSTDLIB_API size_t ZSTDMT_nextInputSizeHint(const ZSTDMT_CCtx* mtctx);
 ZSTDLIB_API size_t ZSTDMT_compressStream(ZSTDMT_CCtx* mtctx, ZSTD_outBuffer* output, ZSTD_inBuffer* input);
 
 ZSTDLIB_API size_t ZSTDMT_flushStream(ZSTDMT_CCtx* mtctx, ZSTD_outBuffer* output);   /**< @return : 0 == all flushed; >0 : still some data to be flushed; or an error code (ZSTD_isError()) */
@@ -60,16 +71,12 @@
 
 /* ===   Advanced functions and parameters  === */
 
-#ifndef ZSTDMT_JOBSIZE_MIN
-#  define ZSTDMT_JOBSIZE_MIN (1U << 20)   /* 1 MB - Minimum size of each compression job */
-#endif
-
 ZSTDLIB_API size_t ZSTDMT_compress_advanced(ZSTDMT_CCtx* mtctx,
                                            void* dst, size_t dstCapacity,
                                      const void* src, size_t srcSize,
                                      const ZSTD_CDict* cdict,
                                            ZSTD_parameters params,
-                                           unsigned overlapLog);
+                                           int overlapLog);
 
 ZSTDLIB_API size_t ZSTDMT_initCStream_advanced(ZSTDMT_CCtx* mtctx,
                                         const void* dict, size_t dictSize,   /* dict can be released after init, a local copy is preserved within zcs */
@@ -84,8 +91,9 @@
 /* ZSTDMT_parameter :
  * List of parameters that can be set using ZSTDMT_setMTCtxParameter() */
 typedef enum {
-    ZSTDMT_p_jobSize,           /* Each job is compressed in parallel. By default, this value is dynamically determined depending on compression parameters. Can be set explicitly here. */
-    ZSTDMT_p_overlapSectionLog  /* Each job may reload a part of previous job to enhance compressionr ratio; 0 == no overlap, 6(default) == use 1/8th of window, >=9 == use full window. This is a "sticky" parameter : its value will be re-used on next compression job */
+    ZSTDMT_p_jobSize,     /* Each job is compressed in parallel. By default, this value is dynamically determined depending on compression parameters. Can be set explicitly here. */
+    ZSTDMT_p_overlapLog,  /* Each job may reload a part of the previous job to enhance compression ratio; 0 == no overlap, 6(default) == use 1/8th of window, >=9 == use full window. This is a "sticky" parameter : its value will be re-used on the next compression job */
+    ZSTDMT_p_rsyncable    /* Enables rsyncable mode. */
 } ZSTDMT_parameter;
 
 /* ZSTDMT_setMTCtxParameter() :
@@ -93,12 +101,12 @@
  * The function must be called typically after ZSTD_createCCtx() but __before ZSTDMT_init*() !__
  * Parameters not explicitly reset by ZSTDMT_init*() remain the same in consecutive compression sessions.
  * @return : 0, or an error code (which can be tested using ZSTD_isError()) */
-ZSTDLIB_API size_t ZSTDMT_setMTCtxParameter(ZSTDMT_CCtx* mtctx, ZSTDMT_parameter parameter, unsigned value);
+ZSTDLIB_API size_t ZSTDMT_setMTCtxParameter(ZSTDMT_CCtx* mtctx, ZSTDMT_parameter parameter, int value);
 
 /* ZSTDMT_getMTCtxParameter() :
  * Query the ZSTDMT_CCtx for a parameter value.
  * @return : 0, or an error code (which can be tested using ZSTD_isError()) */
-ZSTDLIB_API size_t ZSTDMT_getMTCtxParameter(ZSTDMT_CCtx* mtctx, ZSTDMT_parameter parameter, unsigned* value);
+ZSTDLIB_API size_t ZSTDMT_getMTCtxParameter(ZSTDMT_CCtx* mtctx, ZSTDMT_parameter parameter, int* value);
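
A minimal usage sketch for the new parameter, using only the declarations in
this header (later releases also expose the same knob on the regular
ZSTD_CCtx API, as the experimental ZSTD_c_rsyncable parameter)::

   #include "zstdmt_compress.h"   /* internal header, as used in this tree */

   int main(void)
   {
       ZSTDMT_CCtx* const mtctx = ZSTDMT_createCCtx(4 /* workers */);
       if (mtctx == NULL) return 1;
       if (ZSTD_isError(ZSTDMT_setMTCtxParameter(mtctx, ZSTDMT_p_rsyncable, 1)))
           return 1;
       /* ... feed data through ZSTDMT_compressStream() as usual ... */
       ZSTDMT_freeCCtx(mtctx);
       return 0;
   }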
 
 
 /*! ZSTDMT_compressStream_generic() :
@@ -129,7 +137,7 @@
 
 /*! ZSTDMT_CCtxParam_setMTCtxParameter()
  *  like ZSTDMT_setMTCtxParameter(), but into a ZSTD_CCtx_Params */
-size_t ZSTDMT_CCtxParam_setMTCtxParameter(ZSTD_CCtx_params* params, ZSTDMT_parameter parameter, unsigned value);
+size_t ZSTDMT_CCtxParam_setMTCtxParameter(ZSTD_CCtx_params* params, ZSTDMT_parameter parameter, int value);
 
 /*! ZSTDMT_CCtxParam_setNbWorkers()
  *  Set nbWorkers, and clamp it.
--- a/contrib/python-zstandard/zstd/decompress/huf_decompress.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/decompress/huf_decompress.c	Wed Apr 17 13:41:18 2019 -0400
@@ -43,6 +43,19 @@
 #include "huf.h"
 #include "error_private.h"
 
+/* **************************************************************
+*  Macros
+****************************************************************/
+
+/* These two optional macros force the use of one of the two Huffman
+ * decompression implementations. You cannot force both
+ * at the same time.
+ */
+#if defined(HUF_FORCE_DECOMPRESS_X1) && \
+    defined(HUF_FORCE_DECOMPRESS_X2)
+#error "Cannot force the use of the X1 and X2 decoders at the same time!"
+#endif
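
These are compile-time knobs for size-constrained builds: defining exactly one
of them strips the other decoder (X1 = single-symbol tables, X2 = double-symbol
tables). For example, a hypothetical build keeping only the single-symbol
decoder::

   cc -c -DHUF_FORCE_DECOMPRESS_X1 huf_decompress.c

Defining both triggers the #error above.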
+
 
 /* **************************************************************
 *  Error Management
@@ -58,6 +71,51 @@
 #define HUF_ALIGN_MASK(x, mask) (((x) + (mask)) & ~(mask))
 
 
+/* **************************************************************
+*  BMI2 Variant Wrappers
+****************************************************************/
+#if DYNAMIC_BMI2
+
+#define HUF_DGEN(fn)                                                        \
+                                                                            \
+    static size_t fn##_default(                                             \
+                  void* dst,  size_t dstSize,                               \
+            const void* cSrc, size_t cSrcSize,                              \
+            const HUF_DTable* DTable)                                       \
+    {                                                                       \
+        return fn##_body(dst, dstSize, cSrc, cSrcSize, DTable);             \
+    }                                                                       \
+                                                                            \
+    static TARGET_ATTRIBUTE("bmi2") size_t fn##_bmi2(                       \
+                  void* dst,  size_t dstSize,                               \
+            const void* cSrc, size_t cSrcSize,                              \
+            const HUF_DTable* DTable)                                       \
+    {                                                                       \
+        return fn##_body(dst, dstSize, cSrc, cSrcSize, DTable);             \
+    }                                                                       \
+                                                                            \
+    static size_t fn(void* dst, size_t dstSize, void const* cSrc,           \
+                     size_t cSrcSize, HUF_DTable const* DTable, int bmi2)   \
+    {                                                                       \
+        if (bmi2) {                                                         \
+            return fn##_bmi2(dst, dstSize, cSrc, cSrcSize, DTable);         \
+        }                                                                   \
+        return fn##_default(dst, dstSize, cSrc, cSrcSize, DTable);          \
+    }
+
+#else
+
+#define HUF_DGEN(fn)                                                        \
+    static size_t fn(void* dst, size_t dstSize, void const* cSrc,           \
+                     size_t cSrcSize, HUF_DTable const* DTable, int bmi2)   \
+    {                                                                       \
+        (void)bmi2;                                                         \
+        return fn##_body(dst, dstSize, cSrc, cSrcSize, DTable);             \
+    }
+
+#endif
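
The relocated HUF_DGEN macro is a runtime-dispatch pattern: the same body is
compiled twice, once with BMI2 code generation enabled through a target
attribute, and a thin wrapper picks a variant from a runtime CPU flag. A
stripped-down sketch of the same pattern (GCC/Clang target attribute on
x86-64 assumed)::

   #include <stdio.h>

   static int work_default(int x) { return x * 2; }

   #if defined(__GNUC__) && defined(__x86_64__)
   __attribute__((target("bmi2")))
   static int work_bmi2(int x) { return x * 2; }   /* same body, BMI2 codegen */
   #  define HAS_BMI2_VARIANT 1
   #else
   #  define HAS_BMI2_VARIANT 0
   #endif

   static int work(int x, int bmi2)   /* mirrors the fn(..., int bmi2) wrapper */
   {
   #if HAS_BMI2_VARIANT
       if (bmi2) return work_bmi2(x);
   #endif
       (void)bmi2;
       return work_default(x);
   }

   int main(void) { printf("%d\n", work(21, 0)); return 0; }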
+
+
 /*-***************************/
 /*  generic DTableDesc       */
 /*-***************************/
@@ -71,6 +129,8 @@
 }
 
 
+#ifndef HUF_FORCE_DECOMPRESS_X2
+
 /*-***************************/
 /*  single-symbol decoding   */
 /*-***************************/
@@ -307,46 +367,6 @@
                                                const void *cSrc,
                                                size_t cSrcSize,
                                                const HUF_DTable *DTable);
-#if DYNAMIC_BMI2
-
-#define HUF_DGEN(fn)                                                               \
-                                                                            \
-    static size_t fn##_default(                                             \
-                  void* dst,  size_t dstSize,                               \
-            const void* cSrc, size_t cSrcSize,                              \
-            const HUF_DTable* DTable)                                       \
-    {                                                                       \
-        return fn##_body(dst, dstSize, cSrc, cSrcSize, DTable);             \
-    }                                                                       \
-                                                                            \
-    static TARGET_ATTRIBUTE("bmi2") size_t fn##_bmi2(                       \
-                  void* dst,  size_t dstSize,                               \
-            const void* cSrc, size_t cSrcSize,                              \
-            const HUF_DTable* DTable)                                       \
-    {                                                                       \
-        return fn##_body(dst, dstSize, cSrc, cSrcSize, DTable);             \
-    }                                                                       \
-                                                                            \
-    static size_t fn(void* dst, size_t dstSize, void const* cSrc,           \
-                     size_t cSrcSize, HUF_DTable const* DTable, int bmi2)   \
-    {                                                                       \
-        if (bmi2) {                                                         \
-            return fn##_bmi2(dst, dstSize, cSrc, cSrcSize, DTable);         \
-        }                                                                   \
-        return fn##_default(dst, dstSize, cSrc, cSrcSize, DTable);          \
-    }
-
-#else
-
-#define HUF_DGEN(fn)                                                               \
-    static size_t fn(void* dst, size_t dstSize, void const* cSrc,           \
-                     size_t cSrcSize, HUF_DTable const* DTable, int bmi2)   \
-    {                                                                       \
-        (void)bmi2;                                                         \
-        return fn##_body(dst, dstSize, cSrc, cSrcSize, DTable);             \
-    }
-
-#endif
 
 HUF_DGEN(HUF_decompress1X1_usingDTable_internal)
 HUF_DGEN(HUF_decompress4X1_usingDTable_internal)
@@ -437,6 +457,10 @@
     return HUF_decompress4X1_DCtx(DTable, dst, dstSize, cSrc, cSrcSize);
 }
 
+#endif /* HUF_FORCE_DECOMPRESS_X2 */
+
+
+#ifndef HUF_FORCE_DECOMPRESS_X1
 
 /* *************************/
 /* double-symbols decoding */
@@ -911,6 +935,8 @@
     return HUF_decompress4X2_DCtx(DTable, dst, dstSize, cSrc, cSrcSize);
 }
 
+#endif /* HUF_FORCE_DECOMPRESS_X1 */
+
 
 /* ***********************************/
 /* Universal decompression selectors */
@@ -921,8 +947,18 @@
                                     const HUF_DTable* DTable)
 {
     DTableDesc const dtd = HUF_getDTableDesc(DTable);
+#if defined(HUF_FORCE_DECOMPRESS_X1)
+    (void)dtd;
+    assert(dtd.tableType == 0);
+    return HUF_decompress1X1_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable, /* bmi2 */ 0);
+#elif defined(HUF_FORCE_DECOMPRESS_X2)
+    (void)dtd;
+    assert(dtd.tableType == 1);
+    return HUF_decompress1X2_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable, /* bmi2 */ 0);
+#else
     return dtd.tableType ? HUF_decompress1X2_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable, /* bmi2 */ 0) :
                            HUF_decompress1X1_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable, /* bmi2 */ 0);
+#endif
 }
 
 size_t HUF_decompress4X_usingDTable(void* dst, size_t maxDstSize,
@@ -930,11 +966,22 @@
                                     const HUF_DTable* DTable)
 {
     DTableDesc const dtd = HUF_getDTableDesc(DTable);
+#if defined(HUF_FORCE_DECOMPRESS_X1)
+    (void)dtd;
+    assert(dtd.tableType == 0);
+    return HUF_decompress4X1_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable, /* bmi2 */ 0);
+#elif defined(HUF_FORCE_DECOMPRESS_X2)
+    (void)dtd;
+    assert(dtd.tableType == 1);
+    return HUF_decompress4X2_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable, /* bmi2 */ 0);
+#else
     return dtd.tableType ? HUF_decompress4X2_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable, /* bmi2 */ 0) :
                            HUF_decompress4X1_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable, /* bmi2 */ 0);
+#endif
 }
 
 
+#if !defined(HUF_FORCE_DECOMPRESS_X1) && !defined(HUF_FORCE_DECOMPRESS_X2)
 typedef struct { U32 tableTime; U32 decode256Time; } algo_time_t;
 static const algo_time_t algoTime[16 /* Quantization */][3 /* single, double, quad */] =
 {
@@ -956,6 +1003,7 @@
     {{1455,128}, {2422,124}, {4174,124}},   /* Q ==14 : 87-93% */
     {{ 722,128}, {1891,145}, {1936,146}},   /* Q ==15 : 93-99% */
 };
+#endif
 
 /** HUF_selectDecoder() :
  *  Tells which decoder is likely to decode faster,
@@ -966,6 +1014,15 @@
 {
     assert(dstSize > 0);
     assert(dstSize <= 128*1024);
+#if defined(HUF_FORCE_DECOMPRESS_X1)
+    (void)dstSize;
+    (void)cSrcSize;
+    return 0;
+#elif defined(HUF_FORCE_DECOMPRESS_X2)
+    (void)dstSize;
+    (void)cSrcSize;
+    return 1;
+#else
     /* decoder timing evaluation */
     {   U32 const Q = (cSrcSize >= dstSize) ? 15 : (U32)(cSrcSize * 16 / dstSize);   /* Q < 16 */
         U32 const D256 = (U32)(dstSize >> 8);
@@ -973,14 +1030,18 @@
         U32 DTime1 = algoTime[Q][1].tableTime + (algoTime[Q][1].decode256Time * D256);
         DTime1 += DTime1 >> 3;  /* advantage to algorithm using less memory, to reduce cache eviction */
         return DTime1 < DTime0;
-}   }
+    }
+#endif
+}
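
The selector buckets the observed compression ratio into one of 16
quantization levels, then compares tabulated timing estimates. A worked
example of the bucketing, as a standalone sketch::

   #include <assert.h>
   #include <stddef.h>
   #include <stdint.h>

   int main(void)
   {
       size_t const dstSize  = 100 * 1024;
       size_t const cSrcSize = 40 * 1024;
       uint32_t const Q = (cSrcSize >= dstSize) ? 15
                        : (uint32_t)(cSrcSize * 16 / dstSize);
       uint32_t const D256 = (uint32_t)(dstSize >> 8);   /* decode cost scales per 256 B */
       assert(Q == 6);      /* 40/100 of the original size -> bucket 6 */
       assert(D256 == 400);
       return 0;
   }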
 
 
 typedef size_t (*decompressionAlgo)(void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize);
 
 size_t HUF_decompress (void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize)
 {
+#if !defined(HUF_FORCE_DECOMPRESS_X1) && !defined(HUF_FORCE_DECOMPRESS_X2)
     static const decompressionAlgo decompress[2] = { HUF_decompress4X1, HUF_decompress4X2 };
+#endif
 
     /* validation checks */
     if (dstSize == 0) return ERROR(dstSize_tooSmall);
@@ -989,7 +1050,17 @@
     if (cSrcSize == 1) { memset(dst, *(const BYTE*)cSrc, dstSize); return dstSize; }   /* RLE */
 
     {   U32 const algoNb = HUF_selectDecoder(dstSize, cSrcSize);
+#if defined(HUF_FORCE_DECOMPRESS_X1)
+        (void)algoNb;
+        assert(algoNb == 0);
+        return HUF_decompress4X1(dst, dstSize, cSrc, cSrcSize);
+#elif defined(HUF_FORCE_DECOMPRESS_X2)
+        (void)algoNb;
+        assert(algoNb == 1);
+        return HUF_decompress4X2(dst, dstSize, cSrc, cSrcSize);
+#else
         return decompress[algoNb](dst, dstSize, cSrc, cSrcSize);
+#endif
     }
 }
 
@@ -1002,8 +1073,18 @@
     if (cSrcSize == 1) { memset(dst, *(const BYTE*)cSrc, dstSize); return dstSize; }   /* RLE */
 
     {   U32 const algoNb = HUF_selectDecoder(dstSize, cSrcSize);
+#if defined(HUF_FORCE_DECOMPRESS_X1)
+        (void)algoNb;
+        assert(algoNb == 0);
+        return HUF_decompress4X1_DCtx(dctx, dst, dstSize, cSrc, cSrcSize);
+#elif defined(HUF_FORCE_DECOMPRESS_X2)
+        (void)algoNb;
+        assert(algoNb == 1);
+        return HUF_decompress4X2_DCtx(dctx, dst, dstSize, cSrc, cSrcSize);
+#else
         return algoNb ? HUF_decompress4X2_DCtx(dctx, dst, dstSize, cSrc, cSrcSize) :
                         HUF_decompress4X1_DCtx(dctx, dst, dstSize, cSrc, cSrcSize) ;
+#endif
     }
 }
 
@@ -1025,8 +1106,19 @@
     if (cSrcSize == 0) return ERROR(corruption_detected);
 
     {   U32 const algoNb = HUF_selectDecoder(dstSize, cSrcSize);
-        return algoNb ? HUF_decompress4X2_DCtx_wksp(dctx, dst, dstSize, cSrc, cSrcSize, workSpace, wkspSize):
+#if defined(HUF_FORCE_DECOMPRESS_X1)
+        (void)algoNb;
+        assert(algoNb == 0);
+        return HUF_decompress4X1_DCtx_wksp(dctx, dst, dstSize, cSrc, cSrcSize, workSpace, wkspSize);
+#elif defined(HUF_FORCE_DECOMPRESS_X2)
+        (void)algoNb;
+        assert(algoNb == 1);
+        return HUF_decompress4X2_DCtx_wksp(dctx, dst, dstSize, cSrc, cSrcSize, workSpace, wkspSize);
+#else
+        return algoNb ? HUF_decompress4X2_DCtx_wksp(dctx, dst, dstSize, cSrc,
+                            cSrcSize, workSpace, wkspSize):
                         HUF_decompress4X1_DCtx_wksp(dctx, dst, dstSize, cSrc, cSrcSize, workSpace, wkspSize);
+#endif
     }
 }
 
@@ -1041,10 +1133,22 @@
     if (cSrcSize == 1) { memset(dst, *(const BYTE*)cSrc, dstSize); return dstSize; }   /* RLE */
 
     {   U32 const algoNb = HUF_selectDecoder(dstSize, cSrcSize);
+#if defined(HUF_FORCE_DECOMPRESS_X1)
+        (void)algoNb;
+        assert(algoNb == 0);
+        return HUF_decompress1X1_DCtx_wksp(dctx, dst, dstSize, cSrc,
+                                cSrcSize, workSpace, wkspSize);
+#elif defined(HUF_FORCE_DECOMPRESS_X2)
+        (void)algoNb;
+        assert(algoNb == 1);
+        return HUF_decompress1X2_DCtx_wksp(dctx, dst, dstSize, cSrc,
+                                cSrcSize, workSpace, wkspSize);
+#else
         return algoNb ? HUF_decompress1X2_DCtx_wksp(dctx, dst, dstSize, cSrc,
                                 cSrcSize, workSpace, wkspSize):
                         HUF_decompress1X1_DCtx_wksp(dctx, dst, dstSize, cSrc,
                                 cSrcSize, workSpace, wkspSize);
+#endif
     }
 }
 
@@ -1060,10 +1164,21 @@
 size_t HUF_decompress1X_usingDTable_bmi2(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const HUF_DTable* DTable, int bmi2)
 {
     DTableDesc const dtd = HUF_getDTableDesc(DTable);
+#if defined(HUF_FORCE_DECOMPRESS_X1)
+    (void)dtd;
+    assert(dtd.tableType == 0);
+    return HUF_decompress1X1_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable, bmi2);
+#elif defined(HUF_FORCE_DECOMPRESS_X2)
+    (void)dtd;
+    assert(dtd.tableType == 1);
+    return HUF_decompress1X2_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable, bmi2);
+#else
     return dtd.tableType ? HUF_decompress1X2_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable, bmi2) :
                            HUF_decompress1X1_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable, bmi2);
+#endif
 }
 
+#ifndef HUF_FORCE_DECOMPRESS_X2
 size_t HUF_decompress1X1_DCtx_wksp_bmi2(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize, void* workSpace, size_t wkspSize, int bmi2)
 {
     const BYTE* ip = (const BYTE*) cSrc;
@@ -1075,12 +1190,23 @@
 
     return HUF_decompress1X1_usingDTable_internal(dst, dstSize, ip, cSrcSize, dctx, bmi2);
 }
+#endif
 
 size_t HUF_decompress4X_usingDTable_bmi2(void* dst, size_t maxDstSize, const void* cSrc, size_t cSrcSize, const HUF_DTable* DTable, int bmi2)
 {
     DTableDesc const dtd = HUF_getDTableDesc(DTable);
+#if defined(HUF_FORCE_DECOMPRESS_X1)
+    (void)dtd;
+    assert(dtd.tableType == 0);
+    return HUF_decompress4X1_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable, bmi2);
+#elif defined(HUF_FORCE_DECOMPRESS_X2)
+    (void)dtd;
+    assert(dtd.tableType == 1);
+    return HUF_decompress4X2_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable, bmi2);
+#else
     return dtd.tableType ? HUF_decompress4X2_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable, bmi2) :
                            HUF_decompress4X1_usingDTable_internal(dst, maxDstSize, cSrc, cSrcSize, DTable, bmi2);
+#endif
 }
 
 size_t HUF_decompress4X_hufOnly_wksp_bmi2(HUF_DTable* dctx, void* dst, size_t dstSize, const void* cSrc, size_t cSrcSize, void* workSpace, size_t wkspSize, int bmi2)
@@ -1090,7 +1216,17 @@
     if (cSrcSize == 0) return ERROR(corruption_detected);
 
     {   U32 const algoNb = HUF_selectDecoder(dstSize, cSrcSize);
+#if defined(HUF_FORCE_DECOMPRESS_X1)
+        (void)algoNb;
+        assert(algoNb == 0);
+        return HUF_decompress4X1_DCtx_wksp_bmi2(dctx, dst, dstSize, cSrc, cSrcSize, workSpace, wkspSize, bmi2);
+#elif defined(HUF_FORCE_DECOMPRESS_X2)
+        (void)algoNb;
+        assert(algoNb == 1);
+        return HUF_decompress4X2_DCtx_wksp_bmi2(dctx, dst, dstSize, cSrc, cSrcSize, workSpace, wkspSize, bmi2);
+#else
         return algoNb ? HUF_decompress4X2_DCtx_wksp_bmi2(dctx, dst, dstSize, cSrc, cSrcSize, workSpace, wkspSize, bmi2) :
                         HUF_decompress4X1_DCtx_wksp_bmi2(dctx, dst, dstSize, cSrc, cSrcSize, workSpace, wkspSize, bmi2);
+#endif
     }
 }
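
Every hunk in this file follows the same shape: with HUF_FORCE_DECOMPRESS_X1 or HUF_FORCE_DECOMPRESS_X2 defined at build time, the table-type dispatch collapses to a direct call and the unused decoder can be dropped from the binary. A minimal sketch of that pattern, with decodeX1/decodeX2 as hypothetical stand-ins for the real decoders::

    #include <assert.h>
    #include <stdio.h>

    static const char* decodeX1(void) { return "X1 (single-symbol)"; }
    static const char* decodeX2(void) { return "X2 (double-symbol)"; }

    /* Compile with -DHUF_FORCE_DECOMPRESS_X1 (or _X2) to collapse the branch. */
    static const char* dispatch(int tableType)
    {
    #if defined(HUF_FORCE_DECOMPRESS_X1)
        (void)tableType;
        assert(tableType == 0);
        return decodeX1();
    #elif defined(HUF_FORCE_DECOMPRESS_X2)
        (void)tableType;
        assert(tableType == 1);
        return decodeX2();
    #else
        return tableType ? decodeX2() : decodeX1();
    #endif
    }

    int main(void)
    {
        printf("%s\n", dispatch(0));
        return 0;
    }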
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/python-zstandard/zstd/decompress/zstd_ddict.c	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,240 @@
+/*
+ * Copyright (c) 2016-present, Yann Collet, Facebook, Inc.
+ * All rights reserved.
+ *
+ * This source code is licensed under both the BSD-style license (found in the
+ * LICENSE file in the root directory of this source tree) and the GPLv2 (found
+ * in the COPYING file in the root directory of this source tree).
+ * You may select, at your option, one of the above-listed licenses.
+ */
+
+/* zstd_ddict.c :
+ * concentrates all logic that needs to know the internals of ZSTD_DDict object */
+
+/*-*******************************************************
+*  Dependencies
+*********************************************************/
+#include <string.h>      /* memcpy, memmove, memset */
+#include "cpu.h"         /* bmi2 */
+#include "mem.h"         /* low level memory routines */
+#define FSE_STATIC_LINKING_ONLY
+#include "fse.h"
+#define HUF_STATIC_LINKING_ONLY
+#include "huf.h"
+#include "zstd_decompress_internal.h"
+#include "zstd_ddict.h"
+
+#if defined(ZSTD_LEGACY_SUPPORT) && (ZSTD_LEGACY_SUPPORT>=1)
+#  include "zstd_legacy.h"
+#endif
+
+
+
+/*-*******************************************************
+*  Types
+*********************************************************/
+struct ZSTD_DDict_s {
+    void* dictBuffer;
+    const void* dictContent;
+    size_t dictSize;
+    ZSTD_entropyDTables_t entropy;
+    U32 dictID;
+    U32 entropyPresent;
+    ZSTD_customMem cMem;
+};  /* typedef'd to ZSTD_DDict within "zstd.h" */
+
+const void* ZSTD_DDict_dictContent(const ZSTD_DDict* ddict)
+{
+    assert(ddict != NULL);
+    return ddict->dictContent;
+}
+
+size_t ZSTD_DDict_dictSize(const ZSTD_DDict* ddict)
+{
+    assert(ddict != NULL);
+    return ddict->dictSize;
+}
+
+void ZSTD_copyDDictParameters(ZSTD_DCtx* dctx, const ZSTD_DDict* ddict)
+{
+    DEBUGLOG(4, "ZSTD_copyDDictParameters");
+    assert(dctx != NULL);
+    assert(ddict != NULL);
+    dctx->dictID = ddict->dictID;
+    dctx->prefixStart = ddict->dictContent;
+    dctx->virtualStart = ddict->dictContent;
+    dctx->dictEnd = (const BYTE*)ddict->dictContent + ddict->dictSize;
+    dctx->previousDstEnd = dctx->dictEnd;
+    if (ddict->entropyPresent) {
+        dctx->litEntropy = 1;
+        dctx->fseEntropy = 1;
+        dctx->LLTptr = ddict->entropy.LLTable;
+        dctx->MLTptr = ddict->entropy.MLTable;
+        dctx->OFTptr = ddict->entropy.OFTable;
+        dctx->HUFptr = ddict->entropy.hufTable;
+        dctx->entropy.rep[0] = ddict->entropy.rep[0];
+        dctx->entropy.rep[1] = ddict->entropy.rep[1];
+        dctx->entropy.rep[2] = ddict->entropy.rep[2];
+    } else {
+        dctx->litEntropy = 0;
+        dctx->fseEntropy = 0;
+    }
+}
+
+
+static size_t
+ZSTD_loadEntropy_intoDDict(ZSTD_DDict* ddict,
+                           ZSTD_dictContentType_e dictContentType)
+{
+    ddict->dictID = 0;
+    ddict->entropyPresent = 0;
+    if (dictContentType == ZSTD_dct_rawContent) return 0;
+
+    if (ddict->dictSize < 8) {
+        if (dictContentType == ZSTD_dct_fullDict)
+            return ERROR(dictionary_corrupted);   /* only accept specified dictionaries */
+        return 0;   /* pure content mode */
+    }
+    {   U32 const magic = MEM_readLE32(ddict->dictContent);
+        if (magic != ZSTD_MAGIC_DICTIONARY) {
+            if (dictContentType == ZSTD_dct_fullDict)
+                return ERROR(dictionary_corrupted);   /* only accept specified dictionaries */
+            return 0;   /* pure content mode */
+        }
+    }
+    ddict->dictID = MEM_readLE32((const char*)ddict->dictContent + ZSTD_FRAMEIDSIZE);
+
+    /* load entropy tables */
+    CHECK_E( ZSTD_loadDEntropy(&ddict->entropy,
+                                ddict->dictContent, ddict->dictSize),
+             dictionary_corrupted );
+    ddict->entropyPresent = 1;
+    return 0;
+}
+
+
+static size_t ZSTD_initDDict_internal(ZSTD_DDict* ddict,
+                                      const void* dict, size_t dictSize,
+                                      ZSTD_dictLoadMethod_e dictLoadMethod,
+                                      ZSTD_dictContentType_e dictContentType)
+{
+    if ((dictLoadMethod == ZSTD_dlm_byRef) || (!dict) || (!dictSize)) {
+        ddict->dictBuffer = NULL;
+        ddict->dictContent = dict;
+        if (!dict) dictSize = 0;
+    } else {
+        void* const internalBuffer = ZSTD_malloc(dictSize, ddict->cMem);
+        ddict->dictBuffer = internalBuffer;
+        ddict->dictContent = internalBuffer;
+        if (!internalBuffer) return ERROR(memory_allocation);
+        memcpy(internalBuffer, dict, dictSize);
+    }
+    ddict->dictSize = dictSize;
+    ddict->entropy.hufTable[0] = (HUF_DTable)((HufLog)*0x1000001);  /* cover both little and big endian */
+
+    /* parse dictionary content */
+    CHECK_F( ZSTD_loadEntropy_intoDDict(ddict, dictContentType) );
+
+    return 0;
+}
+
+ZSTD_DDict* ZSTD_createDDict_advanced(const void* dict, size_t dictSize,
+                                      ZSTD_dictLoadMethod_e dictLoadMethod,
+                                      ZSTD_dictContentType_e dictContentType,
+                                      ZSTD_customMem customMem)
+{
+    if (!customMem.customAlloc ^ !customMem.customFree) return NULL;
+
+    {   ZSTD_DDict* const ddict = (ZSTD_DDict*) ZSTD_malloc(sizeof(ZSTD_DDict), customMem);
+        if (ddict == NULL) return NULL;
+        ddict->cMem = customMem;
+        {   size_t const initResult = ZSTD_initDDict_internal(ddict,
+                                            dict, dictSize,
+                                            dictLoadMethod, dictContentType);
+            if (ZSTD_isError(initResult)) {
+                ZSTD_freeDDict(ddict);
+                return NULL;
+        }   }
+        return ddict;
+    }
+}
+
+/*! ZSTD_createDDict() :
+*   Create a digested dictionary, to start decompression without startup delay.
+*   `dict` content is copied inside DDict.
+*   Consequently, `dict` can be released after `ZSTD_DDict` creation */
+ZSTD_DDict* ZSTD_createDDict(const void* dict, size_t dictSize)
+{
+    ZSTD_customMem const allocator = { NULL, NULL, NULL };
+    return ZSTD_createDDict_advanced(dict, dictSize, ZSTD_dlm_byCopy, ZSTD_dct_auto, allocator);
+}
+
+/*! ZSTD_createDDict_byReference() :
+ *  Create a digested dictionary, to start decompression without startup delay.
+ *  Dictionary content is simply referenced, it will be accessed during decompression.
+ *  Warning : dictBuffer must outlive DDict (DDict must be freed before dictBuffer) */
+ZSTD_DDict* ZSTD_createDDict_byReference(const void* dictBuffer, size_t dictSize)
+{
+    ZSTD_customMem const allocator = { NULL, NULL, NULL };
+    return ZSTD_createDDict_advanced(dictBuffer, dictSize, ZSTD_dlm_byRef, ZSTD_dct_auto, allocator);
+}
+
+
+const ZSTD_DDict* ZSTD_initStaticDDict(
+                                void* sBuffer, size_t sBufferSize,
+                                const void* dict, size_t dictSize,
+                                ZSTD_dictLoadMethod_e dictLoadMethod,
+                                ZSTD_dictContentType_e dictContentType)
+{
+    size_t const neededSpace = sizeof(ZSTD_DDict)
+                             + (dictLoadMethod == ZSTD_dlm_byRef ? 0 : dictSize);
+    ZSTD_DDict* const ddict = (ZSTD_DDict*)sBuffer;
+    assert(sBuffer != NULL);
+    assert(dict != NULL);
+    if ((size_t)sBuffer & 7) return NULL;   /* 8-aligned */
+    if (sBufferSize < neededSpace) return NULL;
+    if (dictLoadMethod == ZSTD_dlm_byCopy) {
+        memcpy(ddict+1, dict, dictSize);  /* local copy */
+        dict = ddict+1;
+    }
+    if (ZSTD_isError( ZSTD_initDDict_internal(ddict,
+                                              dict, dictSize,
+                                              ZSTD_dlm_byRef, dictContentType) ))
+        return NULL;
+    return ddict;
+}
+
+
+size_t ZSTD_freeDDict(ZSTD_DDict* ddict)
+{
+    if (ddict==NULL) return 0;   /* support free on NULL */
+    {   ZSTD_customMem const cMem = ddict->cMem;
+        ZSTD_free(ddict->dictBuffer, cMem);
+        ZSTD_free(ddict, cMem);
+        return 0;
+    }
+}
+
+/*! ZSTD_estimateDDictSize() :
+ *  Estimate amount of memory that will be needed to create a dictionary for decompression.
+ *  Note : dictionary created by reference using ZSTD_dlm_byRef are smaller */
+size_t ZSTD_estimateDDictSize(size_t dictSize, ZSTD_dictLoadMethod_e dictLoadMethod)
+{
+    return sizeof(ZSTD_DDict) + (dictLoadMethod == ZSTD_dlm_byRef ? 0 : dictSize);
+}
+
+size_t ZSTD_sizeof_DDict(const ZSTD_DDict* ddict)
+{
+    if (ddict==NULL) return 0;   /* support sizeof on NULL */
+    return sizeof(*ddict) + (ddict->dictBuffer ? ddict->dictSize : 0) ;
+}
+
+/*! ZSTD_getDictID_fromDDict() :
+ *  Provides the dictID of the dictionary loaded into `ddict`.
+ *  If @return == 0, the dictionary is not conformant to Zstandard specification, or empty.
+ *  Non-conformant dictionaries can still be loaded, but as content-only dictionaries. */
+unsigned ZSTD_getDictID_fromDDict(const ZSTD_DDict* ddict)
+{
+    if (ddict==NULL) return 0;
+    return ZSTD_getDictID_fromDict(ddict->dictContent, ddict->dictSize);
+}
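
For reference, the public API this new file backs is typically used as below: digest the dictionary once, then reuse it across frames. A sketch against the documented entry points in zstd.h, with allocation checks omitted for brevity (ZSTD_freeDCtx and ZSTD_freeDDict accept NULL)::

    #include <zstd.h>   /* ZSTD_createDDict, ZSTD_decompress_usingDDict, ... */

    static size_t decompress_with_dict(void* dst, size_t dstCap,
                                       const void* src, size_t srcSize,
                                       const void* dict, size_t dictSize)
    {
        ZSTD_DDict* const ddict = ZSTD_createDDict(dict, dictSize); /* copies dict */
        ZSTD_DCtx*  const dctx  = ZSTD_createDCtx();
        size_t const ret = ZSTD_decompress_usingDDict(dctx, dst, dstCap,
                                                      src, srcSize, ddict);
        ZSTD_freeDCtx(dctx);
        ZSTD_freeDDict(ddict);
        return ret;   /* check with ZSTD_isError() */
    }

When the dictionary buffer is guaranteed to outlive the DDict, ZSTD_createDDict_byReference() avoids the internal copy, per the warning above.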
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/python-zstandard/zstd/decompress/zstd_ddict.h	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,44 @@
+/*
+ * Copyright (c) 2016-present, Yann Collet, Facebook, Inc.
+ * All rights reserved.
+ *
+ * This source code is licensed under both the BSD-style license (found in the
+ * LICENSE file in the root directory of this source tree) and the GPLv2 (found
+ * in the COPYING file in the root directory of this source tree).
+ * You may select, at your option, one of the above-listed licenses.
+ */
+
+
+#ifndef ZSTD_DDICT_H
+#define ZSTD_DDICT_H
+
+/*-*******************************************************
+ *  Dependencies
+ *********************************************************/
+#include <stddef.h>   /* size_t */
+#include "zstd.h"     /* ZSTD_DDict, and several public functions */
+
+
+/*-*******************************************************
+ *  Interface
+ *********************************************************/
+
+/* note: several prototypes are already published in `zstd.h` :
+ * ZSTD_createDDict()
+ * ZSTD_createDDict_byReference()
+ * ZSTD_createDDict_advanced()
+ * ZSTD_freeDDict()
+ * ZSTD_initStaticDDict()
+ * ZSTD_sizeof_DDict()
+ * ZSTD_estimateDDictSize()
+ * ZSTD_getDictID_fromDict()
+ */
+
+const void* ZSTD_DDict_dictContent(const ZSTD_DDict* ddict);
+size_t ZSTD_DDict_dictSize(const ZSTD_DDict* ddict);
+
+void ZSTD_copyDDictParameters(ZSTD_DCtx* dctx, const ZSTD_DDict* ddict);
+
+
+
+#endif /* ZSTD_DDICT_H */
--- a/contrib/python-zstandard/zstd/decompress/zstd_decompress.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/decompress/zstd_decompress.c	Wed Apr 17 13:41:18 2019 -0400
@@ -37,12 +37,12 @@
  *  It's possible to set a different limit using ZSTD_DCtx_setMaxWindowSize().
  */
 #ifndef ZSTD_MAXWINDOWSIZE_DEFAULT
-#  define ZSTD_MAXWINDOWSIZE_DEFAULT (((U32)1 << ZSTD_WINDOWLOG_DEFAULTMAX) + 1)
+#  define ZSTD_MAXWINDOWSIZE_DEFAULT (((U32)1 << ZSTD_WINDOWLOG_LIMIT_DEFAULT) + 1)
 #endif
 
 /*!
  *  NO_FORWARD_PROGRESS_MAX :
- *  maximum allowed nb of calls to ZSTD_decompressStream() and ZSTD_decompress_generic()
+ *  maximum allowed nb of calls to ZSTD_decompressStream()
  *  without any forward progress
  *  (defined as: no byte read from input, and no byte flushed to output)
  *  before triggering an error.
@@ -56,128 +56,25 @@
 *  Dependencies
 *********************************************************/
 #include <string.h>      /* memcpy, memmove, memset */
-#include "compiler.h"    /* prefetch */
 #include "cpu.h"         /* bmi2 */
 #include "mem.h"         /* low level memory routines */
 #define FSE_STATIC_LINKING_ONLY
 #include "fse.h"
 #define HUF_STATIC_LINKING_ONLY
 #include "huf.h"
-#include "zstd_internal.h"
+#include "zstd_internal.h"  /* blockProperties_t */
+#include "zstd_decompress_internal.h"   /* ZSTD_DCtx */
+#include "zstd_ddict.h"  /* ZSTD_DDictDictContent */
+#include "zstd_decompress_block.h"   /* ZSTD_decompressBlock_internal */
 
 #if defined(ZSTD_LEGACY_SUPPORT) && (ZSTD_LEGACY_SUPPORT>=1)
 #  include "zstd_legacy.h"
 #endif
 
-static const void* ZSTD_DDictDictContent(const ZSTD_DDict* ddict);
-static size_t ZSTD_DDictDictSize(const ZSTD_DDict* ddict);
-
-
-/*-*************************************
-*  Errors
-***************************************/
-#define ZSTD_isError ERR_isError   /* for inlining */
-#define FSE_isError  ERR_isError
-#define HUF_isError  ERR_isError
-
-
-/*_*******************************************************
-*  Memory operations
-**********************************************************/
-static void ZSTD_copy4(void* dst, const void* src) { memcpy(dst, src, 4); }
-
 
 /*-*************************************************************
 *   Context management
 ***************************************************************/
-typedef enum { ZSTDds_getFrameHeaderSize, ZSTDds_decodeFrameHeader,
-               ZSTDds_decodeBlockHeader, ZSTDds_decompressBlock,
-               ZSTDds_decompressLastBlock, ZSTDds_checkChecksum,
-               ZSTDds_decodeSkippableHeader, ZSTDds_skipFrame } ZSTD_dStage;
-
-typedef enum { zdss_init=0, zdss_loadHeader,
-               zdss_read, zdss_load, zdss_flush } ZSTD_dStreamStage;
-
-
-typedef struct {
-    U32 fastMode;
-    U32 tableLog;
-} ZSTD_seqSymbol_header;
-
-typedef struct {
-    U16  nextState;
-    BYTE nbAdditionalBits;
-    BYTE nbBits;
-    U32  baseValue;
-} ZSTD_seqSymbol;
-
-#define SEQSYMBOL_TABLE_SIZE(log)   (1 + (1 << (log)))
-
-typedef struct {
-    ZSTD_seqSymbol LLTable[SEQSYMBOL_TABLE_SIZE(LLFSELog)];    /* Note : Space reserved for FSE Tables */
-    ZSTD_seqSymbol OFTable[SEQSYMBOL_TABLE_SIZE(OffFSELog)];   /* is also used as temporary workspace while building hufTable during DDict creation */
-    ZSTD_seqSymbol MLTable[SEQSYMBOL_TABLE_SIZE(MLFSELog)];    /* and therefore must be at least HUF_DECOMPRESS_WORKSPACE_SIZE large */
-    HUF_DTable hufTable[HUF_DTABLE_SIZE(HufLog)];  /* can accommodate HUF_decompress4X */
-    U32 rep[ZSTD_REP_NUM];
-} ZSTD_entropyDTables_t;
-
-struct ZSTD_DCtx_s
-{
-    const ZSTD_seqSymbol* LLTptr;
-    const ZSTD_seqSymbol* MLTptr;
-    const ZSTD_seqSymbol* OFTptr;
-    const HUF_DTable* HUFptr;
-    ZSTD_entropyDTables_t entropy;
-    U32 workspace[HUF_DECOMPRESS_WORKSPACE_SIZE_U32];   /* space needed when building huffman tables */
-    const void* previousDstEnd;   /* detect continuity */
-    const void* prefixStart;      /* start of current segment */
-    const void* virtualStart;     /* virtual start of previous segment if it was just before current one */
-    const void* dictEnd;          /* end of previous segment */
-    size_t expected;
-    ZSTD_frameHeader fParams;
-    U64 decodedSize;
-    blockType_e bType;            /* used in ZSTD_decompressContinue(), store blockType between block header decoding and block decompression stages */
-    ZSTD_dStage stage;
-    U32 litEntropy;
-    U32 fseEntropy;
-    XXH64_state_t xxhState;
-    size_t headerSize;
-    ZSTD_format_e format;
-    const BYTE* litPtr;
-    ZSTD_customMem customMem;
-    size_t litSize;
-    size_t rleSize;
-    size_t staticSize;
-    int bmi2;                     /* == 1 if the CPU supports BMI2 and 0 otherwise. CPU support is determined dynamically once per context lifetime. */
-
-    /* dictionary */
-    ZSTD_DDict* ddictLocal;
-    const ZSTD_DDict* ddict;     /* set by ZSTD_initDStream_usingDDict(), or ZSTD_DCtx_refDDict() */
-    U32 dictID;
-    int ddictIsCold;             /* if == 1 : dictionary is "new" for working context, and presumed "cold" (not in cpu cache) */
-
-    /* streaming */
-    ZSTD_dStreamStage streamStage;
-    char*  inBuff;
-    size_t inBuffSize;
-    size_t inPos;
-    size_t maxWindowSize;
-    char*  outBuff;
-    size_t outBuffSize;
-    size_t outStart;
-    size_t outEnd;
-    size_t lhSize;
-    void* legacyContext;
-    U32 previousLegacyVersion;
-    U32 legacyVersion;
-    U32 hostageByte;
-    int noForwardProgress;
-
-    /* workspace */
-    BYTE litBuffer[ZSTD_BLOCKSIZE_MAX + WILDCOPY_OVERLENGTH];
-    BYTE headerBuffer[ZSTD_FRAMEHEADERSIZE_MAX];
-};  /* typedef'd to ZSTD_DCtx within "zstd.h" */
-
 size_t ZSTD_sizeof_DCtx (const ZSTD_DCtx* dctx)
 {
     if (dctx==NULL) return 0;   /* support sizeof NULL */
@@ -192,8 +89,8 @@
 static size_t ZSTD_startingInputLength(ZSTD_format_e format)
 {
     size_t const startingInputLength = (format==ZSTD_f_zstd1_magicless) ?
-                    ZSTD_frameHeaderSize_prefix - ZSTD_FRAMEIDSIZE :
-                    ZSTD_frameHeaderSize_prefix;
+                    ZSTD_FRAMEHEADERSIZE_PREFIX - ZSTD_FRAMEIDSIZE :
+                    ZSTD_FRAMEHEADERSIZE_PREFIX;
     ZSTD_STATIC_ASSERT(ZSTD_FRAMEHEADERSIZE_PREFIX >= ZSTD_FRAMEIDSIZE);
     /* only supports formats ZSTD_f_zstd1 and ZSTD_f_zstd1_magicless */
     assert( (format == ZSTD_f_zstd1) || (format == ZSTD_f_zstd1_magicless) );
@@ -290,7 +187,7 @@
     if (size < ZSTD_FRAMEIDSIZE) return 0;
     {   U32 const magic = MEM_readLE32(buffer);
         if (magic == ZSTD_MAGICNUMBER) return 1;
-        if ((magic & 0xFFFFFFF0U) == ZSTD_MAGIC_SKIPPABLE_START) return 1;
+        if ((magic & ZSTD_MAGIC_SKIPPABLE_MASK) == ZSTD_MAGIC_SKIPPABLE_START) return 1;
     }
 #if defined(ZSTD_LEGACY_SUPPORT) && (ZSTD_LEGACY_SUPPORT >= 1)
     if (ZSTD_isLegacy(buffer, size)) return 1;
@@ -345,10 +242,10 @@
 
     if ( (format != ZSTD_f_zstd1_magicless)
       && (MEM_readLE32(src) != ZSTD_MAGICNUMBER) ) {
-        if ((MEM_readLE32(src) & 0xFFFFFFF0U) == ZSTD_MAGIC_SKIPPABLE_START) {
+        if ((MEM_readLE32(src) & ZSTD_MAGIC_SKIPPABLE_MASK) == ZSTD_MAGIC_SKIPPABLE_START) {
             /* skippable frame */
-            if (srcSize < ZSTD_skippableHeaderSize)
-                return ZSTD_skippableHeaderSize; /* magic number + frame length */
+            if (srcSize < ZSTD_SKIPPABLEHEADERSIZE)
+                return ZSTD_SKIPPABLEHEADERSIZE; /* magic number + frame length */
             memset(zfhPtr, 0, sizeof(*zfhPtr));
             zfhPtr->frameContentSize = MEM_readLE32((const char *)src + ZSTD_FRAMEIDSIZE);
             zfhPtr->frameType = ZSTD_skippableFrame;
@@ -446,6 +343,21 @@
     }   }
 }
 
+static size_t readSkippableFrameSize(void const* src, size_t srcSize)
+{
+    size_t const skippableHeaderSize = ZSTD_SKIPPABLEHEADERSIZE;
+    U32 sizeU32;
+
+    if (srcSize < ZSTD_SKIPPABLEHEADERSIZE)
+        return ERROR(srcSize_wrong);
+
+    sizeU32 = MEM_readLE32((BYTE const*)src + ZSTD_FRAMEIDSIZE);
+    if ((U32)(sizeU32 + ZSTD_SKIPPABLEHEADERSIZE) < sizeU32)
+        return ERROR(frameParameter_unsupported);
+
+    return skippableHeaderSize + sizeU32;
+}
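
The layout decoded here is fixed by the frame format: a 4-byte little-endian magic whose top 28 bits match ZSTD_MAGIC_SKIPPABLE_START, a 4-byte little-endian payload size, then the payload. A toy writer for that 8-byte header, as a sketch (the magic constant is inlined from the format spec)::

    #include <stdint.h>
    #include <string.h>

    static void writeLE32(uint8_t* p, uint32_t v)
    {
        p[0] = (uint8_t)v;         p[1] = (uint8_t)(v >> 8);
        p[2] = (uint8_t)(v >> 16); p[3] = (uint8_t)(v >> 24);
    }

    /* Emits header + payload; the total equals what readSkippableFrameSize()
     * would report for this frame. The low nibble of the magic is free. */
    static size_t write_skippable(uint8_t* dst, const void* payload, uint32_t size)
    {
        writeLE32(dst, 0x184D2A50u);   /* ZSTD_MAGIC_SKIPPABLE_START */
        writeLE32(dst + 4, size);      /* payload size, header excluded */
        memcpy(dst + 8, payload, size);
        return 8 + (size_t)size;
    }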
+
 /** ZSTD_findDecompressedSize() :
  *  compatible with legacy mode
  *  `srcSize` must be the exact length of some number of ZSTD compressed and/or
@@ -455,15 +367,13 @@
 {
     unsigned long long totalDstSize = 0;
 
-    while (srcSize >= ZSTD_frameHeaderSize_prefix) {
+    while (srcSize >= ZSTD_FRAMEHEADERSIZE_PREFIX) {
         U32 const magicNumber = MEM_readLE32(src);
 
-        if ((magicNumber & 0xFFFFFFF0U) == ZSTD_MAGIC_SKIPPABLE_START) {
-            size_t skippableSize;
-            if (srcSize < ZSTD_skippableHeaderSize)
-                return ERROR(srcSize_wrong);
-            skippableSize = MEM_readLE32((const BYTE *)src + ZSTD_FRAMEIDSIZE)
-                          + ZSTD_skippableHeaderSize;
+        if ((magicNumber & ZSTD_MAGIC_SKIPPABLE_MASK) == ZSTD_MAGIC_SKIPPABLE_START) {
+            size_t const skippableSize = readSkippableFrameSize(src, srcSize);
+            if (ZSTD_isError(skippableSize))
+                return skippableSize;
             if (srcSize < skippableSize) {
                 return ZSTD_CONTENTSIZE_ERROR;
             }
@@ -496,9 +406,9 @@
 }
 
 /** ZSTD_getDecompressedSize() :
-*   compatible with legacy mode
-*   @return : decompressed size if known, 0 otherwise
-              note : 0 can mean any of the following :
+ *  compatible with legacy mode
+ * @return : decompressed size if known, 0 otherwise
+             note : 0 can mean any of the following :
                    - frame content is empty
                    - decompressed size field is not present in frame header
                    - frame header unknown / not supported
@@ -512,8 +422,8 @@
 
 
 /** ZSTD_decodeFrameHeader() :
-*   `headerSize` must be the size provided by ZSTD_frameHeaderSize().
-*   @return : 0 if success, or an error code, which can be tested using ZSTD_isError() */
+ * `headerSize` must be the size provided by ZSTD_frameHeaderSize().
+ * @return : 0 if success, or an error code, which can be tested using ZSTD_isError() */
 static size_t ZSTD_decodeFrameHeader(ZSTD_DCtx* dctx, const void* src, size_t headerSize)
 {
     size_t const result = ZSTD_getFrameHeader_advanced(&(dctx->fParams), src, headerSize, dctx->format);
@@ -526,1275 +436,6 @@
 }
 
 
-/*-*************************************************************
- *   Block decoding
- ***************************************************************/
-
-/*! ZSTD_getcBlockSize() :
-*   Provides the size of compressed block from block header `src` */
-size_t ZSTD_getcBlockSize(const void* src, size_t srcSize,
-                          blockProperties_t* bpPtr)
-{
-    if (srcSize < ZSTD_blockHeaderSize) return ERROR(srcSize_wrong);
-    {   U32 const cBlockHeader = MEM_readLE24(src);
-        U32 const cSize = cBlockHeader >> 3;
-        bpPtr->lastBlock = cBlockHeader & 1;
-        bpPtr->blockType = (blockType_e)((cBlockHeader >> 1) & 3);
-        bpPtr->origSize = cSize;   /* only useful for RLE */
-        if (bpPtr->blockType == bt_rle) return 1;
-        if (bpPtr->blockType == bt_reserved) return ERROR(corruption_detected);
-        return cSize;
-    }
-}
-
-
-static size_t ZSTD_copyRawBlock(void* dst, size_t dstCapacity,
-                          const void* src, size_t srcSize)
-{
-    if (dst==NULL) return ERROR(dstSize_tooSmall);
-    if (srcSize > dstCapacity) return ERROR(dstSize_tooSmall);
-    memcpy(dst, src, srcSize);
-    return srcSize;
-}
-
-
-static size_t ZSTD_setRleBlock(void* dst, size_t dstCapacity,
-                         const void* src, size_t srcSize,
-                               size_t regenSize)
-{
-    if (srcSize != 1) return ERROR(srcSize_wrong);
-    if (regenSize > dstCapacity) return ERROR(dstSize_tooSmall);
-    memset(dst, *(const BYTE*)src, regenSize);
-    return regenSize;
-}
-
-/* Hidden declaration for fullbench */
-size_t ZSTD_decodeLiteralsBlock(ZSTD_DCtx* dctx,
-                          const void* src, size_t srcSize);
-/*! ZSTD_decodeLiteralsBlock() :
- * @return : nb of bytes read from src (< srcSize )
- *  note : symbol not declared but exposed for fullbench */
-size_t ZSTD_decodeLiteralsBlock(ZSTD_DCtx* dctx,
-                          const void* src, size_t srcSize)   /* note : srcSize < BLOCKSIZE */
-{
-    if (srcSize < MIN_CBLOCK_SIZE) return ERROR(corruption_detected);
-
-    {   const BYTE* const istart = (const BYTE*) src;
-        symbolEncodingType_e const litEncType = (symbolEncodingType_e)(istart[0] & 3);
-
-        switch(litEncType)
-        {
-        case set_repeat:
-            if (dctx->litEntropy==0) return ERROR(dictionary_corrupted);
-            /* fall-through */
-
-        case set_compressed:
-            if (srcSize < 5) return ERROR(corruption_detected);   /* srcSize >= MIN_CBLOCK_SIZE == 3; here we need up to 5 for case 3 */
-            {   size_t lhSize, litSize, litCSize;
-                U32 singleStream=0;
-                U32 const lhlCode = (istart[0] >> 2) & 3;
-                U32 const lhc = MEM_readLE32(istart);
-                switch(lhlCode)
-                {
-                case 0: case 1: default:   /* note : default is impossible, since lhlCode is in [0..3] */
-                    /* 2 - 2 - 10 - 10 */
-                    singleStream = !lhlCode;
-                    lhSize = 3;
-                    litSize  = (lhc >> 4) & 0x3FF;
-                    litCSize = (lhc >> 14) & 0x3FF;
-                    break;
-                case 2:
-                    /* 2 - 2 - 14 - 14 */
-                    lhSize = 4;
-                    litSize  = (lhc >> 4) & 0x3FFF;
-                    litCSize = lhc >> 18;
-                    break;
-                case 3:
-                    /* 2 - 2 - 18 - 18 */
-                    lhSize = 5;
-                    litSize  = (lhc >> 4) & 0x3FFFF;
-                    litCSize = (lhc >> 22) + (istart[4] << 10);
-                    break;
-                }
-                if (litSize > ZSTD_BLOCKSIZE_MAX) return ERROR(corruption_detected);
-                if (litCSize + lhSize > srcSize) return ERROR(corruption_detected);
-
-                /* prefetch huffman table if cold */
-                if (dctx->ddictIsCold && (litSize > 768 /* heuristic */)) {
-                    PREFETCH_AREA(dctx->HUFptr, sizeof(dctx->entropy.hufTable));
-                }
-
-                if (HUF_isError((litEncType==set_repeat) ?
-                                    ( singleStream ?
-                                        HUF_decompress1X_usingDTable_bmi2(dctx->litBuffer, litSize, istart+lhSize, litCSize, dctx->HUFptr, dctx->bmi2) :
-                                        HUF_decompress4X_usingDTable_bmi2(dctx->litBuffer, litSize, istart+lhSize, litCSize, dctx->HUFptr, dctx->bmi2) ) :
-                                    ( singleStream ?
-                                        HUF_decompress1X1_DCtx_wksp_bmi2(dctx->entropy.hufTable, dctx->litBuffer, litSize, istart+lhSize, litCSize,
-                                                                         dctx->workspace, sizeof(dctx->workspace), dctx->bmi2) :
-                                        HUF_decompress4X_hufOnly_wksp_bmi2(dctx->entropy.hufTable, dctx->litBuffer, litSize, istart+lhSize, litCSize,
-                                                                           dctx->workspace, sizeof(dctx->workspace), dctx->bmi2))))
-                    return ERROR(corruption_detected);
-
-                dctx->litPtr = dctx->litBuffer;
-                dctx->litSize = litSize;
-                dctx->litEntropy = 1;
-                if (litEncType==set_compressed) dctx->HUFptr = dctx->entropy.hufTable;
-                memset(dctx->litBuffer + dctx->litSize, 0, WILDCOPY_OVERLENGTH);
-                return litCSize + lhSize;
-            }
-
-        case set_basic:
-            {   size_t litSize, lhSize;
-                U32 const lhlCode = ((istart[0]) >> 2) & 3;
-                switch(lhlCode)
-                {
-                case 0: case 2: default:   /* note : default is impossible, since lhlCode is in [0..3] */
-                    lhSize = 1;
-                    litSize = istart[0] >> 3;
-                    break;
-                case 1:
-                    lhSize = 2;
-                    litSize = MEM_readLE16(istart) >> 4;
-                    break;
-                case 3:
-                    lhSize = 3;
-                    litSize = MEM_readLE24(istart) >> 4;
-                    break;
-                }
-
-                if (lhSize+litSize+WILDCOPY_OVERLENGTH > srcSize) {  /* risk reading beyond src buffer with wildcopy */
-                    if (litSize+lhSize > srcSize) return ERROR(corruption_detected);
-                    memcpy(dctx->litBuffer, istart+lhSize, litSize);
-                    dctx->litPtr = dctx->litBuffer;
-                    dctx->litSize = litSize;
-                    memset(dctx->litBuffer + dctx->litSize, 0, WILDCOPY_OVERLENGTH);
-                    return lhSize+litSize;
-                }
-                /* direct reference into compressed stream */
-                dctx->litPtr = istart+lhSize;
-                dctx->litSize = litSize;
-                return lhSize+litSize;
-            }
-
-        case set_rle:
-            {   U32 const lhlCode = ((istart[0]) >> 2) & 3;
-                size_t litSize, lhSize;
-                switch(lhlCode)
-                {
-                case 0: case 2: default:   /* note : default is impossible, since lhlCode is in [0..3] */
-                    lhSize = 1;
-                    litSize = istart[0] >> 3;
-                    break;
-                case 1:
-                    lhSize = 2;
-                    litSize = MEM_readLE16(istart) >> 4;
-                    break;
-                case 3:
-                    lhSize = 3;
-                    litSize = MEM_readLE24(istart) >> 4;
-                    if (srcSize<4) return ERROR(corruption_detected);   /* srcSize >= MIN_CBLOCK_SIZE == 3; here we need lhSize+1 = 4 */
-                    break;
-                }
-                if (litSize > ZSTD_BLOCKSIZE_MAX) return ERROR(corruption_detected);
-                memset(dctx->litBuffer, istart[lhSize], litSize + WILDCOPY_OVERLENGTH);
-                dctx->litPtr = dctx->litBuffer;
-                dctx->litSize = litSize;
-                return lhSize+1;
-            }
-        default:
-            return ERROR(corruption_detected);   /* impossible */
-        }
-    }
-}
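
Each literals header above packs a 2-bit block type, a 2-bit size format, and the sizes themselves into 1-5 bytes. A toy decode of the 2-byte raw-literals case (set_basic, lhlCode==1), with hand-assembled bytes rather than real frame data::

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t const hdr[2] = { 0xC4, 0x12 };            /* encodes litSize = 300 */
        unsigned const lhc     = hdr[0] | ((unsigned)hdr[1] << 8);  /* LE16 */
        unsigned const litType = hdr[0] & 3;              /* 0 == set_basic */
        unsigned const lhlCode = (hdr[0] >> 2) & 3;       /* 1 -> 2-byte header */
        unsigned const litSize = lhc >> 4;
        assert(litType == 0 && lhlCode == 1 && litSize == 300);
        printf("raw literals: lhSize=2, litSize=%u\n", litSize);
        return 0;
    }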
-
-/* Default FSE distribution tables.
- * These are pre-calculated FSE decoding tables using default distributions as defined in the specification :
- * https://github.com/facebook/zstd/blob/master/doc/zstd_compression_format.md#default-distributions
- * They were generated programmatically with the following method :
- * - start from default distributions, present in /lib/common/zstd_internal.h
- * - generate tables normally, using ZSTD_buildFSETable()
- * - print out the content of the tables
- * - prettify the output, report it below, test with fuzzer to ensure it's correct */
-
-/* Default FSE distribution table for Literal Lengths */
-static const ZSTD_seqSymbol LL_defaultDTable[(1<<LL_DEFAULTNORMLOG)+1] = {
-     {  1,  1,  1, LL_DEFAULTNORMLOG},  /* header : fastMode, tableLog */
-     /* nextState, nbAddBits, nbBits, baseVal */
-     {  0,  0,  4,    0},  { 16,  0,  4,    0},
-     { 32,  0,  5,    1},  {  0,  0,  5,    3},
-     {  0,  0,  5,    4},  {  0,  0,  5,    6},
-     {  0,  0,  5,    7},  {  0,  0,  5,    9},
-     {  0,  0,  5,   10},  {  0,  0,  5,   12},
-     {  0,  0,  6,   14},  {  0,  1,  5,   16},
-     {  0,  1,  5,   20},  {  0,  1,  5,   22},
-     {  0,  2,  5,   28},  {  0,  3,  5,   32},
-     {  0,  4,  5,   48},  { 32,  6,  5,   64},
-     {  0,  7,  5,  128},  {  0,  8,  6,  256},
-     {  0, 10,  6, 1024},  {  0, 12,  6, 4096},
-     { 32,  0,  4,    0},  {  0,  0,  4,    1},
-     {  0,  0,  5,    2},  { 32,  0,  5,    4},
-     {  0,  0,  5,    5},  { 32,  0,  5,    7},
-     {  0,  0,  5,    8},  { 32,  0,  5,   10},
-     {  0,  0,  5,   11},  {  0,  0,  6,   13},
-     { 32,  1,  5,   16},  {  0,  1,  5,   18},
-     { 32,  1,  5,   22},  {  0,  2,  5,   24},
-     { 32,  3,  5,   32},  {  0,  3,  5,   40},
-     {  0,  6,  4,   64},  { 16,  6,  4,   64},
-     { 32,  7,  5,  128},  {  0,  9,  6,  512},
-     {  0, 11,  6, 2048},  { 48,  0,  4,    0},
-     { 16,  0,  4,    1},  { 32,  0,  5,    2},
-     { 32,  0,  5,    3},  { 32,  0,  5,    5},
-     { 32,  0,  5,    6},  { 32,  0,  5,    8},
-     { 32,  0,  5,    9},  { 32,  0,  5,   11},
-     { 32,  0,  5,   12},  {  0,  0,  6,   15},
-     { 32,  1,  5,   18},  { 32,  1,  5,   20},
-     { 32,  2,  5,   24},  { 32,  2,  5,   28},
-     { 32,  3,  5,   40},  { 32,  4,  5,   48},
-     {  0, 16,  6,65536},  {  0, 15,  6,32768},
-     {  0, 14,  6,16384},  {  0, 13,  6, 8192},
-};   /* LL_defaultDTable */
-
-/* Default FSE distribution table for Offset Codes */
-static const ZSTD_seqSymbol OF_defaultDTable[(1<<OF_DEFAULTNORMLOG)+1] = {
-    {  1,  1,  1, OF_DEFAULTNORMLOG},  /* header : fastMode, tableLog */
-    /* nextState, nbAddBits, nbBits, baseVal */
-    {  0,  0,  5,    0},     {  0,  6,  4,   61},
-    {  0,  9,  5,  509},     {  0, 15,  5,32765},
-    {  0, 21,  5,2097149},   {  0,  3,  5,    5},
-    {  0,  7,  4,  125},     {  0, 12,  5, 4093},
-    {  0, 18,  5,262141},    {  0, 23,  5,8388605},
-    {  0,  5,  5,   29},     {  0,  8,  4,  253},
-    {  0, 14,  5,16381},     {  0, 20,  5,1048573},
-    {  0,  2,  5,    1},     { 16,  7,  4,  125},
-    {  0, 11,  5, 2045},     {  0, 17,  5,131069},
-    {  0, 22,  5,4194301},   {  0,  4,  5,   13},
-    { 16,  8,  4,  253},     {  0, 13,  5, 8189},
-    {  0, 19,  5,524285},    {  0,  1,  5,    1},
-    { 16,  6,  4,   61},     {  0, 10,  5, 1021},
-    {  0, 16,  5,65533},     {  0, 28,  5,268435453},
-    {  0, 27,  5,134217725}, {  0, 26,  5,67108861},
-    {  0, 25,  5,33554429},  {  0, 24,  5,16777213},
-};   /* OF_defaultDTable */
-
-
-/* Default FSE distribution table for Match Lengths */
-static const ZSTD_seqSymbol ML_defaultDTable[(1<<ML_DEFAULTNORMLOG)+1] = {
-    {  1,  1,  1, ML_DEFAULTNORMLOG},  /* header : fastMode, tableLog */
-    /* nextState, nbAddBits, nbBits, baseVal */
-    {  0,  0,  6,    3},  {  0,  0,  4,    4},
-    { 32,  0,  5,    5},  {  0,  0,  5,    6},
-    {  0,  0,  5,    8},  {  0,  0,  5,    9},
-    {  0,  0,  5,   11},  {  0,  0,  6,   13},
-    {  0,  0,  6,   16},  {  0,  0,  6,   19},
-    {  0,  0,  6,   22},  {  0,  0,  6,   25},
-    {  0,  0,  6,   28},  {  0,  0,  6,   31},
-    {  0,  0,  6,   34},  {  0,  1,  6,   37},
-    {  0,  1,  6,   41},  {  0,  2,  6,   47},
-    {  0,  3,  6,   59},  {  0,  4,  6,   83},
-    {  0,  7,  6,  131},  {  0,  9,  6,  515},
-    { 16,  0,  4,    4},  {  0,  0,  4,    5},
-    { 32,  0,  5,    6},  {  0,  0,  5,    7},
-    { 32,  0,  5,    9},  {  0,  0,  5,   10},
-    {  0,  0,  6,   12},  {  0,  0,  6,   15},
-    {  0,  0,  6,   18},  {  0,  0,  6,   21},
-    {  0,  0,  6,   24},  {  0,  0,  6,   27},
-    {  0,  0,  6,   30},  {  0,  0,  6,   33},
-    {  0,  1,  6,   35},  {  0,  1,  6,   39},
-    {  0,  2,  6,   43},  {  0,  3,  6,   51},
-    {  0,  4,  6,   67},  {  0,  5,  6,   99},
-    {  0,  8,  6,  259},  { 32,  0,  4,    4},
-    { 48,  0,  4,    4},  { 16,  0,  4,    5},
-    { 32,  0,  5,    7},  { 32,  0,  5,    8},
-    { 32,  0,  5,   10},  { 32,  0,  5,   11},
-    {  0,  0,  6,   14},  {  0,  0,  6,   17},
-    {  0,  0,  6,   20},  {  0,  0,  6,   23},
-    {  0,  0,  6,   26},  {  0,  0,  6,   29},
-    {  0,  0,  6,   32},  {  0, 16,  6,65539},
-    {  0, 15,  6,32771},  {  0, 14,  6,16387},
-    {  0, 13,  6, 8195},  {  0, 12,  6, 4099},
-    {  0, 11,  6, 2051},  {  0, 10,  6, 1027},
-};   /* ML_defaultDTable */
-
-
-static void ZSTD_buildSeqTable_rle(ZSTD_seqSymbol* dt, U32 baseValue, U32 nbAddBits)
-{
-    void* ptr = dt;
-    ZSTD_seqSymbol_header* const DTableH = (ZSTD_seqSymbol_header*)ptr;
-    ZSTD_seqSymbol* const cell = dt + 1;
-
-    DTableH->tableLog = 0;
-    DTableH->fastMode = 0;
-
-    cell->nbBits = 0;
-    cell->nextState = 0;
-    assert(nbAddBits < 255);
-    cell->nbAdditionalBits = (BYTE)nbAddBits;
-    cell->baseValue = baseValue;
-}
-
-
-/* ZSTD_buildFSETable() :
- * generate FSE decoding table for one symbol (ll, ml or off) */
-static void
-ZSTD_buildFSETable(ZSTD_seqSymbol* dt,
-    const short* normalizedCounter, unsigned maxSymbolValue,
-    const U32* baseValue, const U32* nbAdditionalBits,
-    unsigned tableLog)
-{
-    ZSTD_seqSymbol* const tableDecode = dt+1;
-    U16 symbolNext[MaxSeq+1];
-
-    U32 const maxSV1 = maxSymbolValue + 1;
-    U32 const tableSize = 1 << tableLog;
-    U32 highThreshold = tableSize-1;
-
-    /* Sanity Checks */
-    assert(maxSymbolValue <= MaxSeq);
-    assert(tableLog <= MaxFSELog);
-
-    /* Init, lay down lowprob symbols */
-    {   ZSTD_seqSymbol_header DTableH;
-        DTableH.tableLog = tableLog;
-        DTableH.fastMode = 1;
-        {   S16 const largeLimit= (S16)(1 << (tableLog-1));
-            U32 s;
-            for (s=0; s<maxSV1; s++) {
-                if (normalizedCounter[s]==-1) {
-                    tableDecode[highThreshold--].baseValue = s;
-                    symbolNext[s] = 1;
-                } else {
-                    if (normalizedCounter[s] >= largeLimit) DTableH.fastMode=0;
-                    symbolNext[s] = normalizedCounter[s];
-        }   }   }
-        memcpy(dt, &DTableH, sizeof(DTableH));
-    }
-
-    /* Spread symbols */
-    {   U32 const tableMask = tableSize-1;
-        U32 const step = FSE_TABLESTEP(tableSize);
-        U32 s, position = 0;
-        for (s=0; s<maxSV1; s++) {
-            int i;
-            for (i=0; i<normalizedCounter[s]; i++) {
-                tableDecode[position].baseValue = s;
-                position = (position + step) & tableMask;
-                while (position > highThreshold) position = (position + step) & tableMask;   /* lowprob area */
-        }   }
-        assert(position == 0); /* position must reach all cells once, otherwise normalizedCounter is incorrect */
-    }
-
-    /* Build Decoding table */
-    {   U32 u;
-        for (u=0; u<tableSize; u++) {
-            U32 const symbol = tableDecode[u].baseValue;
-            U32 const nextState = symbolNext[symbol]++;
-            tableDecode[u].nbBits = (BYTE) (tableLog - BIT_highbit32(nextState) );
-            tableDecode[u].nextState = (U16) ( (nextState << tableDecode[u].nbBits) - tableSize);
-            assert(nbAdditionalBits[symbol] < 255);
-            tableDecode[u].nbAdditionalBits = (BYTE)nbAdditionalBits[symbol];
-            tableDecode[u].baseValue = baseValue[symbol];
-    }   }
-}
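
The spread step works because FSE's step size (tableSize/2 + tableSize/8 + 3) is odd for the table sizes used here, hence coprime with the power-of-two table size: tableSize hops visit every cell exactly once and land back on 0, which is exactly what the assert above checks. A toy run at tableLog=5 with no low-probability symbols (the real code parks those at highThreshold first)::

    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        enum { tableLog = 5, tableSize = 1 << tableLog };
        short const counts[3] = { 20, 8, 4 };   /* normalized: sums to tableSize */
        unsigned char table[tableSize];
        unsigned const step = (tableSize >> 1) + (tableSize >> 3) + 3;   /* 23 */
        unsigned pos = 0;
        int s, i;
        memset(table, 0xFF, sizeof(table));
        for (s = 0; s < 3; s++) {
            for (i = 0; i < counts[s]; i++) {
                assert(table[pos] == 0xFF);     /* coprime step: no cell hit twice */
                table[pos] = (unsigned char)s;
                pos = (pos + step) & (tableSize - 1);
            }
        }
        assert(pos == 0);                       /* mirrors the assert above */
        for (i = 0; i < tableSize; i++) printf("%u", table[i]);
        printf("\n");
        return 0;
    }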
-
-
-/*! ZSTD_buildSeqTable() :
- * @return : nb bytes read from src,
- *           or an error code if it fails */
-static size_t ZSTD_buildSeqTable(ZSTD_seqSymbol* DTableSpace, const ZSTD_seqSymbol** DTablePtr,
-                                 symbolEncodingType_e type, U32 max, U32 maxLog,
-                                 const void* src, size_t srcSize,
-                                 const U32* baseValue, const U32* nbAdditionalBits,
-                                 const ZSTD_seqSymbol* defaultTable, U32 flagRepeatTable,
-                                 int ddictIsCold, int nbSeq)
-{
-    switch(type)
-    {
-    case set_rle :
-        if (!srcSize) return ERROR(srcSize_wrong);
-        if ( (*(const BYTE*)src) > max) return ERROR(corruption_detected);
-        {   U32 const symbol = *(const BYTE*)src;
-            U32 const baseline = baseValue[symbol];
-            U32 const nbBits = nbAdditionalBits[symbol];
-            ZSTD_buildSeqTable_rle(DTableSpace, baseline, nbBits);
-        }
-        *DTablePtr = DTableSpace;
-        return 1;
-    case set_basic :
-        *DTablePtr = defaultTable;
-        return 0;
-    case set_repeat:
-        if (!flagRepeatTable) return ERROR(corruption_detected);
-        /* prefetch FSE table if used */
-        if (ddictIsCold && (nbSeq > 24 /* heuristic */)) {
-            const void* const pStart = *DTablePtr;
-            size_t const pSize = sizeof(ZSTD_seqSymbol) * (SEQSYMBOL_TABLE_SIZE(maxLog));
-            PREFETCH_AREA(pStart, pSize);
-        }
-        return 0;
-    case set_compressed :
-        {   U32 tableLog;
-            S16 norm[MaxSeq+1];
-            size_t const headerSize = FSE_readNCount(norm, &max, &tableLog, src, srcSize);
-            if (FSE_isError(headerSize)) return ERROR(corruption_detected);
-            if (tableLog > maxLog) return ERROR(corruption_detected);
-            ZSTD_buildFSETable(DTableSpace, norm, max, baseValue, nbAdditionalBits, tableLog);
-            *DTablePtr = DTableSpace;
-            return headerSize;
-        }
-    default :   /* impossible */
-        assert(0);
-        return ERROR(GENERIC);
-    }
-}
-
-static const U32 LL_base[MaxLL+1] = {
-                 0,    1,    2,     3,     4,     5,     6,      7,
-                 8,    9,   10,    11,    12,    13,    14,     15,
-                16,   18,   20,    22,    24,    28,    32,     40,
-                48,   64, 0x80, 0x100, 0x200, 0x400, 0x800, 0x1000,
-                0x2000, 0x4000, 0x8000, 0x10000 };
-
-static const U32 OF_base[MaxOff+1] = {
-                 0,        1,       1,       5,     0xD,     0x1D,     0x3D,     0x7D,
-                 0xFD,   0x1FD,   0x3FD,   0x7FD,   0xFFD,   0x1FFD,   0x3FFD,   0x7FFD,
-                 0xFFFD, 0x1FFFD, 0x3FFFD, 0x7FFFD, 0xFFFFD, 0x1FFFFD, 0x3FFFFD, 0x7FFFFD,
-                 0xFFFFFD, 0x1FFFFFD, 0x3FFFFFD, 0x7FFFFFD, 0xFFFFFFD, 0x1FFFFFFD, 0x3FFFFFFD, 0x7FFFFFFD };
-
-static const U32 OF_bits[MaxOff+1] = {
-                     0,  1,  2,  3,  4,  5,  6,  7,
-                     8,  9, 10, 11, 12, 13, 14, 15,
-                    16, 17, 18, 19, 20, 21, 22, 23,
-                    24, 25, 26, 27, 28, 29, 30, 31 };
-
-static const U32 ML_base[MaxML+1] = {
-                     3,  4,  5,    6,     7,     8,     9,    10,
-                    11, 12, 13,   14,    15,    16,    17,    18,
-                    19, 20, 21,   22,    23,    24,    25,    26,
-                    27, 28, 29,   30,    31,    32,    33,    34,
-                    35, 37, 39,   41,    43,    47,    51,    59,
-                    67, 83, 99, 0x83, 0x103, 0x203, 0x403, 0x803,
-                    0x1003, 0x2003, 0x4003, 0x8003, 0x10003 };
-
-/* Hidden declaration for fullbench */
-size_t ZSTD_decodeSeqHeaders(ZSTD_DCtx* dctx, int* nbSeqPtr,
-                             const void* src, size_t srcSize);
-
-size_t ZSTD_decodeSeqHeaders(ZSTD_DCtx* dctx, int* nbSeqPtr,
-                             const void* src, size_t srcSize)
-{
-    const BYTE* const istart = (const BYTE* const)src;
-    const BYTE* const iend = istart + srcSize;
-    const BYTE* ip = istart;
-    int nbSeq;
-    DEBUGLOG(5, "ZSTD_decodeSeqHeaders");
-
-    /* check */
-    if (srcSize < MIN_SEQUENCES_SIZE) return ERROR(srcSize_wrong);
-
-    /* SeqHead */
-    nbSeq = *ip++;
-    if (!nbSeq) { *nbSeqPtr=0; return 1; }
-    if (nbSeq > 0x7F) {
-        if (nbSeq == 0xFF) {
-            if (ip+2 > iend) return ERROR(srcSize_wrong);
-            nbSeq = MEM_readLE16(ip) + LONGNBSEQ, ip+=2;
-        } else {
-            if (ip >= iend) return ERROR(srcSize_wrong);
-            nbSeq = ((nbSeq-0x80)<<8) + *ip++;
-        }
-    }
-    *nbSeqPtr = nbSeq;
-
-    /* FSE table descriptors */
-    if (ip+4 > iend) return ERROR(srcSize_wrong); /* minimum possible size */
-    {   symbolEncodingType_e const LLtype = (symbolEncodingType_e)(*ip >> 6);
-        symbolEncodingType_e const OFtype = (symbolEncodingType_e)((*ip >> 4) & 3);
-        symbolEncodingType_e const MLtype = (symbolEncodingType_e)((*ip >> 2) & 3);
-        ip++;
-
-        /* Build DTables */
-        {   size_t const llhSize = ZSTD_buildSeqTable(dctx->entropy.LLTable, &dctx->LLTptr,
-                                                      LLtype, MaxLL, LLFSELog,
-                                                      ip, iend-ip,
-                                                      LL_base, LL_bits,
-                                                      LL_defaultDTable, dctx->fseEntropy,
-                                                      dctx->ddictIsCold, nbSeq);
-            if (ZSTD_isError(llhSize)) return ERROR(corruption_detected);
-            ip += llhSize;
-        }
-
-        {   size_t const ofhSize = ZSTD_buildSeqTable(dctx->entropy.OFTable, &dctx->OFTptr,
-                                                      OFtype, MaxOff, OffFSELog,
-                                                      ip, iend-ip,
-                                                      OF_base, OF_bits,
-                                                      OF_defaultDTable, dctx->fseEntropy,
-                                                      dctx->ddictIsCold, nbSeq);
-            if (ZSTD_isError(ofhSize)) return ERROR(corruption_detected);
-            ip += ofhSize;
-        }
-
-        {   size_t const mlhSize = ZSTD_buildSeqTable(dctx->entropy.MLTable, &dctx->MLTptr,
-                                                      MLtype, MaxML, MLFSELog,
-                                                      ip, iend-ip,
-                                                      ML_base, ML_bits,
-                                                      ML_defaultDTable, dctx->fseEntropy,
-                                                      dctx->ddictIsCold, nbSeq);
-            if (ZSTD_isError(mlhSize)) return ERROR(corruption_detected);
-            ip += mlhSize;
-        }
-    }
-
-    /* prefetch dictionary content */
-    if (dctx->ddictIsCold) {
-        size_t const dictSize = (const char*)dctx->prefixStart - (const char*)dctx->virtualStart;
-        size_t const psmin = MIN(dictSize, (size_t)(64*nbSeq) /* heuristic */ );
-        size_t const pSize = MIN(psmin, 128 KB /* protection */ );
-        const void* const pStart = (const char*)dctx->dictEnd - pSize;
-        PREFETCH_AREA(pStart, pSize);
-        dctx->ddictIsCold = 0;
-    }
-
-    return ip-istart;
-}
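
The sequence count parsed at the top of this function is a 1-3 byte variable-length integer: one byte for 0-127, two bytes up to 0x7EFF, and a 0xFF escape followed by an LE16 plus LONGNBSEQ. A toy decoder mirroring that logic (LONGNBSEQ taken from zstd_internal.h)::

    #include <assert.h>
    #include <stdio.h>

    #define LONGNBSEQ 0x7F00   /* as in zstd_internal.h */

    static int read_nbseq(const unsigned char* ip)
    {
        int nbSeq = *ip++;
        if (nbSeq > 0x7F) {
            if (nbSeq == 0xFF)
                nbSeq = (ip[0] | (ip[1] << 8)) + LONGNBSEQ;  /* LE16 + 0x7F00 */
            else
                nbSeq = ((nbSeq - 0x80) << 8) + *ip;
        }
        return nbSeq;
    }

    int main(void)
    {
        unsigned char const one[]   = { 0x42 };             /* 1 byte: 66 */
        unsigned char const two[]   = { 0x81, 0x05 };       /* 2 bytes: 0x105 */
        unsigned char const three[] = { 0xFF, 0x34, 0x12 }; /* 3 bytes: 0x1234 + LONGNBSEQ */
        assert(read_nbseq(one) == 0x42);
        assert(read_nbseq(two) == 0x105);
        assert(read_nbseq(three) == 0x1234 + LONGNBSEQ);
        printf("nbSeq encodings ok\n");
        return 0;
    }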
-
-
-typedef struct {
-    size_t litLength;
-    size_t matchLength;
-    size_t offset;
-    const BYTE* match;
-} seq_t;
-
-typedef struct {
-    size_t state;
-    const ZSTD_seqSymbol* table;
-} ZSTD_fseState;
-
-typedef struct {
-    BIT_DStream_t DStream;
-    ZSTD_fseState stateLL;
-    ZSTD_fseState stateOffb;
-    ZSTD_fseState stateML;
-    size_t prevOffset[ZSTD_REP_NUM];
-    const BYTE* prefixStart;
-    const BYTE* dictEnd;
-    size_t pos;
-} seqState_t;
-
-
-FORCE_NOINLINE
-size_t ZSTD_execSequenceLast7(BYTE* op,
-                              BYTE* const oend, seq_t sequence,
-                              const BYTE** litPtr, const BYTE* const litLimit,
-                              const BYTE* const base, const BYTE* const vBase, const BYTE* const dictEnd)
-{
-    BYTE* const oLitEnd = op + sequence.litLength;
-    size_t const sequenceLength = sequence.litLength + sequence.matchLength;
-    BYTE* const oMatchEnd = op + sequenceLength;   /* risk : address space overflow (32-bits) */
-    BYTE* const oend_w = oend - WILDCOPY_OVERLENGTH;
-    const BYTE* const iLitEnd = *litPtr + sequence.litLength;
-    const BYTE* match = oLitEnd - sequence.offset;
-
-    /* check */
-    if (oMatchEnd>oend) return ERROR(dstSize_tooSmall); /* last match must start at a minimum distance of WILDCOPY_OVERLENGTH from oend */
-    if (iLitEnd > litLimit) return ERROR(corruption_detected);   /* over-read beyond lit buffer */
-    if (oLitEnd <= oend_w) return ERROR(GENERIC);   /* Precondition */
-
-    /* copy literals */
-    if (op < oend_w) {
-        ZSTD_wildcopy(op, *litPtr, oend_w - op);
-        *litPtr += oend_w - op;
-        op = oend_w;
-    }
-    while (op < oLitEnd) *op++ = *(*litPtr)++;
-
-    /* copy Match */
-    if (sequence.offset > (size_t)(oLitEnd - base)) {
-        /* offset beyond prefix */
-        if (sequence.offset > (size_t)(oLitEnd - vBase)) return ERROR(corruption_detected);
-        match = dictEnd - (base-match);
-        if (match + sequence.matchLength <= dictEnd) {
-            memmove(oLitEnd, match, sequence.matchLength);
-            return sequenceLength;
-        }
-        /* span extDict & currentPrefixSegment */
-        {   size_t const length1 = dictEnd - match;
-            memmove(oLitEnd, match, length1);
-            op = oLitEnd + length1;
-            sequence.matchLength -= length1;
-            match = base;
-    }   }
-    while (op < oMatchEnd) *op++ = *match++;
-    return sequenceLength;
-}
-
-
-HINT_INLINE
-size_t ZSTD_execSequence(BYTE* op,
-                         BYTE* const oend, seq_t sequence,
-                         const BYTE** litPtr, const BYTE* const litLimit,
-                         const BYTE* const prefixStart, const BYTE* const virtualStart, const BYTE* const dictEnd)
-{
-    BYTE* const oLitEnd = op + sequence.litLength;
-    size_t const sequenceLength = sequence.litLength + sequence.matchLength;
-    BYTE* const oMatchEnd = op + sequenceLength;   /* risk : address space overflow (32-bits) */
-    BYTE* const oend_w = oend - WILDCOPY_OVERLENGTH;
-    const BYTE* const iLitEnd = *litPtr + sequence.litLength;
-    const BYTE* match = oLitEnd - sequence.offset;
-
-    /* check */
-    if (oMatchEnd>oend) return ERROR(dstSize_tooSmall); /* last match must start at a minimum distance of WILDCOPY_OVERLENGTH from oend */
-    if (iLitEnd > litLimit) return ERROR(corruption_detected);   /* over-read beyond lit buffer */
-    if (oLitEnd>oend_w) return ZSTD_execSequenceLast7(op, oend, sequence, litPtr, litLimit, prefixStart, virtualStart, dictEnd);
-
-    /* copy Literals */
-    ZSTD_copy8(op, *litPtr);
-    if (sequence.litLength > 8)
-        ZSTD_wildcopy(op+8, (*litPtr)+8, sequence.litLength - 8);   /* note : since oLitEnd <= oend-WILDCOPY_OVERLENGTH, no risk of overwrite beyond oend */
-    op = oLitEnd;
-    *litPtr = iLitEnd;   /* update for next sequence */
-
-    /* copy Match */
-    if (sequence.offset > (size_t)(oLitEnd - prefixStart)) {
-        /* offset beyond prefix -> go into extDict */
-        if (sequence.offset > (size_t)(oLitEnd - virtualStart))
-            return ERROR(corruption_detected);
-        match = dictEnd + (match - prefixStart);
-        if (match + sequence.matchLength <= dictEnd) {
-            memmove(oLitEnd, match, sequence.matchLength);
-            return sequenceLength;
-        }
-        /* span extDict & currentPrefixSegment */
-        {   size_t const length1 = dictEnd - match;
-            memmove(oLitEnd, match, length1);
-            op = oLitEnd + length1;
-            sequence.matchLength -= length1;
-            match = prefixStart;
-            if (op > oend_w || sequence.matchLength < MINMATCH) {
-              U32 i;
-              for (i = 0; i < sequence.matchLength; ++i) op[i] = match[i];
-              return sequenceLength;
-            }
-    }   }
-    /* Requirement: op <= oend_w && sequence.matchLength >= MINMATCH */
-
-    /* match within prefix */
-    if (sequence.offset < 8) {
-        /* close range match, overlap */
-        static const U32 dec32table[] = { 0, 1, 2, 1, 4, 4, 4, 4 };   /* added */
-        static const int dec64table[] = { 8, 8, 8, 7, 8, 9,10,11 };   /* subtracted */
-        int const sub2 = dec64table[sequence.offset];
-        op[0] = match[0];
-        op[1] = match[1];
-        op[2] = match[2];
-        op[3] = match[3];
-        match += dec32table[sequence.offset];
-        ZSTD_copy4(op+4, match);
-        match -= sub2;
-    } else {
-        ZSTD_copy8(op, match);
-    }
-    op += 8; match += 8;
-
-    if (oMatchEnd > oend-(16-MINMATCH)) {
-        if (op < oend_w) {
-            ZSTD_wildcopy(op, match, oend_w - op);
-            match += oend_w - op;
-            op = oend_w;
-        }
-        while (op < oMatchEnd) *op++ = *match++;
-    } else {
-        ZSTD_wildcopy(op, match, (ptrdiff_t)sequence.matchLength-8);   /* works even if matchLength < 8 */
-    }
-    return sequenceLength;
-}
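
The byte-at-a-time tail loops above, and the dec32table/dec64table shuffle for offsets under 8, exist because a match may overlap its own output: with offset 1 the copy effectively run-length expands the previous byte, which memcpy is not guaranteed to handle (overlap is undefined behaviour). A toy illustration::

    #include <stdio.h>

    int main(void)
    {
        char buf[16] = "ab";            /* output produced so far */
        char* op = buf + 2;
        const char* match = op - 1;     /* sequence.offset == 1 */
        int i;
        for (i = 0; i < 6; i++)         /* matchLength == 6, overlap-safe */
            op[i] = match[i];
        buf[8] = '\0';
        printf("%s\n", buf);            /* prints "abbbbbbb" */
        return 0;
    }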
-
-
-HINT_INLINE
-size_t ZSTD_execSequenceLong(BYTE* op,
-                             BYTE* const oend, seq_t sequence,
-                             const BYTE** litPtr, const BYTE* const litLimit,
-                             const BYTE* const prefixStart, const BYTE* const dictStart, const BYTE* const dictEnd)
-{
-    BYTE* const oLitEnd = op + sequence.litLength;
-    size_t const sequenceLength = sequence.litLength + sequence.matchLength;
-    BYTE* const oMatchEnd = op + sequenceLength;   /* risk : address space overflow (32-bits) */
-    BYTE* const oend_w = oend - WILDCOPY_OVERLENGTH;
-    const BYTE* const iLitEnd = *litPtr + sequence.litLength;
-    const BYTE* match = sequence.match;
-
-    /* check */
-    if (oMatchEnd > oend) return ERROR(dstSize_tooSmall); /* last match must start at a minimum distance of WILDCOPY_OVERLENGTH from oend */
-    if (iLitEnd > litLimit) return ERROR(corruption_detected);   /* over-read beyond lit buffer */
-    if (oLitEnd > oend_w) return ZSTD_execSequenceLast7(op, oend, sequence, litPtr, litLimit, prefixStart, dictStart, dictEnd);
-
-    /* copy Literals */
-    ZSTD_copy8(op, *litPtr);  /* note : op <= oLitEnd <= oend_w == oend - 8 */
-    if (sequence.litLength > 8)
-        ZSTD_wildcopy(op+8, (*litPtr)+8, sequence.litLength - 8);   /* note : since oLitEnd <= oend-WILDCOPY_OVERLENGTH, no risk of overwrite beyond oend */
-    op = oLitEnd;
-    *litPtr = iLitEnd;   /* update for next sequence */
-
-    /* copy Match */
-    if (sequence.offset > (size_t)(oLitEnd - prefixStart)) {
-        /* offset beyond prefix */
-        if (sequence.offset > (size_t)(oLitEnd - dictStart)) return ERROR(corruption_detected);
-        if (match + sequence.matchLength <= dictEnd) {
-            memmove(oLitEnd, match, sequence.matchLength);
-            return sequenceLength;
-        }
-        /* span extDict & currentPrefixSegment */
-        {   size_t const length1 = dictEnd - match;
-            memmove(oLitEnd, match, length1);
-            op = oLitEnd + length1;
-            sequence.matchLength -= length1;
-            match = prefixStart;
-            if (op > oend_w || sequence.matchLength < MINMATCH) {
-              U32 i;
-              for (i = 0; i < sequence.matchLength; ++i) op[i] = match[i];
-              return sequenceLength;
-            }
-    }   }
-    assert(op <= oend_w);
-    assert(sequence.matchLength >= MINMATCH);
-
-    /* match within prefix */
-    if (sequence.offset < 8) {
-        /* close range match, overlap */
-        static const U32 dec32table[] = { 0, 1, 2, 1, 4, 4, 4, 4 };   /* added */
-        static const int dec64table[] = { 8, 8, 8, 7, 8, 9,10,11 };   /* subtracted */
-        int const sub2 = dec64table[sequence.offset];
-        op[0] = match[0];
-        op[1] = match[1];
-        op[2] = match[2];
-        op[3] = match[3];
-        match += dec32table[sequence.offset];
-        ZSTD_copy4(op+4, match);
-        match -= sub2;
-    } else {
-        ZSTD_copy8(op, match);
-    }
-    op += 8; match += 8;
-
-    if (oMatchEnd > oend-(16-MINMATCH)) {
-        if (op < oend_w) {
-            ZSTD_wildcopy(op, match, oend_w - op);
-            match += oend_w - op;
-            op = oend_w;
-        }
-        while (op < oMatchEnd) *op++ = *match++;
-    } else {
-        ZSTD_wildcopy(op, match, (ptrdiff_t)sequence.matchLength-8);   /* works even if matchLength < 8 */
-    }
-    return sequenceLength;
-}
-
-static void
-ZSTD_initFseState(ZSTD_fseState* DStatePtr, BIT_DStream_t* bitD, const ZSTD_seqSymbol* dt)
-{
-    const void* ptr = dt;
-    const ZSTD_seqSymbol_header* const DTableH = (const ZSTD_seqSymbol_header*)ptr;
-    DStatePtr->state = BIT_readBits(bitD, DTableH->tableLog);
-    DEBUGLOG(6, "ZSTD_initFseState : val=%u using %u bits",
-                (U32)DStatePtr->state, DTableH->tableLog);
-    BIT_reloadDStream(bitD);
-    DStatePtr->table = dt + 1;
-}
-
-FORCE_INLINE_TEMPLATE void
-ZSTD_updateFseState(ZSTD_fseState* DStatePtr, BIT_DStream_t* bitD)
-{
-    ZSTD_seqSymbol const DInfo = DStatePtr->table[DStatePtr->state];
-    U32 const nbBits = DInfo.nbBits;
-    size_t const lowBits = BIT_readBits(bitD, nbBits);
-    DStatePtr->state = DInfo.nextState + lowBits;
-}
-
-/* We need to add at most (ZSTD_WINDOWLOG_MAX_32 - 1) bits to read the maximum
- * offset bits. But we can only read at most (STREAM_ACCUMULATOR_MIN_32 - 1)
- * bits before reloading. This value is the maximum number of bits we read
- * after reloading when we are decoding long offsets.
- */
-#define LONG_OFFSETS_MAX_EXTRA_BITS_32                       \
-    (ZSTD_WINDOWLOG_MAX_32 > STREAM_ACCUMULATOR_MIN_32       \
-        ? ZSTD_WINDOWLOG_MAX_32 - STREAM_ACCUMULATOR_MIN_32  \
-        : 0)
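-/* worked example (assuming the usual limits ZSTD_WINDOWLOG_MAX_32 == 30 and
- * STREAM_ACCUMULATOR_MIN_32 == 25) : the macro evaluates to 30 - 25 == 5,
- * which is the value the ZSTD_STATIC_ASSERT in the sequence decoders below
- * depends on. */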
-
-typedef enum { ZSTD_lo_isRegularOffset, ZSTD_lo_isLongOffset=1 } ZSTD_longOffset_e;
-
-FORCE_INLINE_TEMPLATE seq_t
-ZSTD_decodeSequence(seqState_t* seqState, const ZSTD_longOffset_e longOffsets)
-{
-    seq_t seq;
-    U32 const llBits = seqState->stateLL.table[seqState->stateLL.state].nbAdditionalBits;
-    U32 const mlBits = seqState->stateML.table[seqState->stateML.state].nbAdditionalBits;
-    U32 const ofBits = seqState->stateOffb.table[seqState->stateOffb.state].nbAdditionalBits;
-    U32 const totalBits = llBits+mlBits+ofBits;
-    U32 const llBase = seqState->stateLL.table[seqState->stateLL.state].baseValue;
-    U32 const mlBase = seqState->stateML.table[seqState->stateML.state].baseValue;
-    U32 const ofBase = seqState->stateOffb.table[seqState->stateOffb.state].baseValue;
-
-    /* sequence */
-    {   size_t offset;
-        if (!ofBits)
-            offset = 0;
-        else {
-            ZSTD_STATIC_ASSERT(ZSTD_lo_isLongOffset == 1);
-            ZSTD_STATIC_ASSERT(LONG_OFFSETS_MAX_EXTRA_BITS_32 == 5);
-            assert(ofBits <= MaxOff);
-            if (MEM_32bits() && longOffsets && (ofBits >= STREAM_ACCUMULATOR_MIN_32)) {
-                U32 const extraBits = ofBits - MIN(ofBits, 32 - seqState->DStream.bitsConsumed);
-                offset = ofBase + (BIT_readBitsFast(&seqState->DStream, ofBits - extraBits) << extraBits);
-                BIT_reloadDStream(&seqState->DStream);
-                if (extraBits) offset += BIT_readBitsFast(&seqState->DStream, extraBits);
-                assert(extraBits <= LONG_OFFSETS_MAX_EXTRA_BITS_32);   /* to avoid another reload */
-            } else {
-                offset = ofBase + BIT_readBitsFast(&seqState->DStream, ofBits/*>0*/);   /* <=  (ZSTD_WINDOWLOG_MAX-1) bits */
-                if (MEM_32bits()) BIT_reloadDStream(&seqState->DStream);
-            }
-        }
-
-        if (ofBits <= 1) {
-            offset += (llBase==0);
-            if (offset) {
-                size_t temp = (offset==3) ? seqState->prevOffset[0] - 1 : seqState->prevOffset[offset];
-                temp += !temp;   /* 0 is not valid; input is corrupted; force offset to 1 */
-                if (offset != 1) seqState->prevOffset[2] = seqState->prevOffset[1];
-                seqState->prevOffset[1] = seqState->prevOffset[0];
-                seqState->prevOffset[0] = offset = temp;
-            } else {  /* offset == 0 */
-                offset = seqState->prevOffset[0];
-            }
-        } else {
-            seqState->prevOffset[2] = seqState->prevOffset[1];
-            seqState->prevOffset[1] = seqState->prevOffset[0];
-            seqState->prevOffset[0] = offset;
-        }
-        seq.offset = offset;
-    }
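-    /* note on the ofBits <= 1 branch above : the decoded value (plus 1 when
-     * llBase==0, i.e. when litLength is 0) selects a repeat offset : 0
-     * reuses prevOffset[0] unchanged, 1 and 2 select prevOffset[1] and
-     * prevOffset[2], and 3 means prevOffset[0]-1; every choice except 0
-     * rotates the 3-entry history. e.g. with prevOffset == {8, 16, 32},
-     * a value of 1 yields offset 16 and leaves the history as {16, 8, 32}. */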
-
-    seq.matchLength = mlBase
-                    + ((mlBits>0) ? BIT_readBitsFast(&seqState->DStream, mlBits/*>0*/) : 0);  /* <=  16 bits */
-    if (MEM_32bits() && (mlBits+llBits >= STREAM_ACCUMULATOR_MIN_32-LONG_OFFSETS_MAX_EXTRA_BITS_32))
-        BIT_reloadDStream(&seqState->DStream);
-    if (MEM_64bits() && (totalBits >= STREAM_ACCUMULATOR_MIN_64-(LLFSELog+MLFSELog+OffFSELog)))
-        BIT_reloadDStream(&seqState->DStream);
-    /* Ensure there are enough bits to read the rest of the data in 64-bit mode. */
-    ZSTD_STATIC_ASSERT(16+LLFSELog+MLFSELog+OffFSELog < STREAM_ACCUMULATOR_MIN_64);
-
-    seq.litLength = llBase
-                  + ((llBits>0) ? BIT_readBitsFast(&seqState->DStream, llBits/*>0*/) : 0);    /* <=  16 bits */
-    if (MEM_32bits())
-        BIT_reloadDStream(&seqState->DStream);
-
-    DEBUGLOG(6, "seq: litL=%u, matchL=%u, offset=%u",
-                (U32)seq.litLength, (U32)seq.matchLength, (U32)seq.offset);
-
-    /* ANS state update */
-    ZSTD_updateFseState(&seqState->stateLL, &seqState->DStream);    /* <=  9 bits */
-    ZSTD_updateFseState(&seqState->stateML, &seqState->DStream);    /* <=  9 bits */
-    if (MEM_32bits()) BIT_reloadDStream(&seqState->DStream);    /* <= 18 bits */
-    ZSTD_updateFseState(&seqState->stateOffb, &seqState->DStream);  /* <=  8 bits */
-
-    return seq;
-}
-
-FORCE_INLINE_TEMPLATE size_t
-ZSTD_decompressSequences_body( ZSTD_DCtx* dctx,
-                               void* dst, size_t maxDstSize,
-                         const void* seqStart, size_t seqSize, int nbSeq,
-                         const ZSTD_longOffset_e isLongOffset)
-{
-    const BYTE* ip = (const BYTE*)seqStart;
-    const BYTE* const iend = ip + seqSize;
-    BYTE* const ostart = (BYTE* const)dst;
-    BYTE* const oend = ostart + maxDstSize;
-    BYTE* op = ostart;
-    const BYTE* litPtr = dctx->litPtr;
-    const BYTE* const litEnd = litPtr + dctx->litSize;
-    const BYTE* const prefixStart = (const BYTE*) (dctx->prefixStart);
-    const BYTE* const vBase = (const BYTE*) (dctx->virtualStart);
-    const BYTE* const dictEnd = (const BYTE*) (dctx->dictEnd);
-    DEBUGLOG(5, "ZSTD_decompressSequences_body");
-
-    /* Regen sequences */
-    if (nbSeq) {
-        seqState_t seqState;
-        dctx->fseEntropy = 1;
-        { U32 i; for (i=0; i<ZSTD_REP_NUM; i++) seqState.prevOffset[i] = dctx->entropy.rep[i]; }
-        CHECK_E(BIT_initDStream(&seqState.DStream, ip, iend-ip), corruption_detected);
-        ZSTD_initFseState(&seqState.stateLL, &seqState.DStream, dctx->LLTptr);
-        ZSTD_initFseState(&seqState.stateOffb, &seqState.DStream, dctx->OFTptr);
-        ZSTD_initFseState(&seqState.stateML, &seqState.DStream, dctx->MLTptr);
-
-        for ( ; (BIT_reloadDStream(&(seqState.DStream)) <= BIT_DStream_completed) && nbSeq ; ) {
-            nbSeq--;
-            {   seq_t const sequence = ZSTD_decodeSequence(&seqState, isLongOffset);
-                size_t const oneSeqSize = ZSTD_execSequence(op, oend, sequence, &litPtr, litEnd, prefixStart, vBase, dictEnd);
-                DEBUGLOG(6, "regenerated sequence size : %u", (U32)oneSeqSize);
-                if (ZSTD_isError(oneSeqSize)) return oneSeqSize;
-                op += oneSeqSize;
-        }   }
-
-        /* check if reached exact end */
-        DEBUGLOG(5, "ZSTD_decompressSequences_body: after decode loop, remaining nbSeq : %i", nbSeq);
-        if (nbSeq) return ERROR(corruption_detected);
-        /* save reps for next block */
-        { U32 i; for (i=0; i<ZSTD_REP_NUM; i++) dctx->entropy.rep[i] = (U32)(seqState.prevOffset[i]); }
-    }
-
-    /* last literal segment */
-    {   size_t const lastLLSize = litEnd - litPtr;
-        if (lastLLSize > (size_t)(oend-op)) return ERROR(dstSize_tooSmall);
-        memcpy(op, litPtr, lastLLSize);
-        op += lastLLSize;
-    }
-
-    return op-ostart;
-}
-
-static size_t
-ZSTD_decompressSequences_default(ZSTD_DCtx* dctx,
-                                 void* dst, size_t maxDstSize,
-                           const void* seqStart, size_t seqSize, int nbSeq,
-                           const ZSTD_longOffset_e isLongOffset)
-{
-    return ZSTD_decompressSequences_body(dctx, dst, maxDstSize, seqStart, seqSize, nbSeq, isLongOffset);
-}
-
-
-
-FORCE_INLINE_TEMPLATE seq_t
-ZSTD_decodeSequenceLong(seqState_t* seqState, ZSTD_longOffset_e const longOffsets)
-{
-    seq_t seq;
-    U32 const llBits = seqState->stateLL.table[seqState->stateLL.state].nbAdditionalBits;
-    U32 const mlBits = seqState->stateML.table[seqState->stateML.state].nbAdditionalBits;
-    U32 const ofBits = seqState->stateOffb.table[seqState->stateOffb.state].nbAdditionalBits;
-    U32 const totalBits = llBits+mlBits+ofBits;
-    U32 const llBase = seqState->stateLL.table[seqState->stateLL.state].baseValue;
-    U32 const mlBase = seqState->stateML.table[seqState->stateML.state].baseValue;
-    U32 const ofBase = seqState->stateOffb.table[seqState->stateOffb.state].baseValue;
-
-    /* sequence */
-    {   size_t offset;
-        if (!ofBits)
-            offset = 0;
-        else {
-            ZSTD_STATIC_ASSERT(ZSTD_lo_isLongOffset == 1);
-            ZSTD_STATIC_ASSERT(LONG_OFFSETS_MAX_EXTRA_BITS_32 == 5);
-            assert(ofBits <= MaxOff);
-            if (MEM_32bits() && longOffsets) {
-                U32 const extraBits = ofBits - MIN(ofBits, STREAM_ACCUMULATOR_MIN_32-1);
-                offset = ofBase + (BIT_readBitsFast(&seqState->DStream, ofBits - extraBits) << extraBits);
-                if (MEM_32bits() || extraBits) BIT_reloadDStream(&seqState->DStream);
-                if (extraBits) offset += BIT_readBitsFast(&seqState->DStream, extraBits);
-            } else {
-                offset = ofBase + BIT_readBitsFast(&seqState->DStream, ofBits);   /* <=  (ZSTD_WINDOWLOG_MAX-1) bits */
-                if (MEM_32bits()) BIT_reloadDStream(&seqState->DStream);
-            }
-        }
-
-        if (ofBits <= 1) {
-            offset += (llBase==0);
-            if (offset) {
-                size_t temp = (offset==3) ? seqState->prevOffset[0] - 1 : seqState->prevOffset[offset];
-                temp += !temp;   /* 0 is not valid; input is corrupted; force offset to 1 */
-                if (offset != 1) seqState->prevOffset[2] = seqState->prevOffset[1];
-                seqState->prevOffset[1] = seqState->prevOffset[0];
-                seqState->prevOffset[0] = offset = temp;
-            } else {
-                offset = seqState->prevOffset[0];
-            }
-        } else {
-            seqState->prevOffset[2] = seqState->prevOffset[1];
-            seqState->prevOffset[1] = seqState->prevOffset[0];
-            seqState->prevOffset[0] = offset;
-        }
-        seq.offset = offset;
-    }
-
-    seq.matchLength = mlBase + ((mlBits>0) ? BIT_readBitsFast(&seqState->DStream, mlBits) : 0);  /* <=  16 bits */
-    if (MEM_32bits() && (mlBits+llBits >= STREAM_ACCUMULATOR_MIN_32-LONG_OFFSETS_MAX_EXTRA_BITS_32))
-        BIT_reloadDStream(&seqState->DStream);
-    if (MEM_64bits() && (totalBits >= STREAM_ACCUMULATOR_MIN_64-(LLFSELog+MLFSELog+OffFSELog)))
-        BIT_reloadDStream(&seqState->DStream);
-    /* Verify that there are enough bits to read the rest of the data in 64-bit mode. */
-    ZSTD_STATIC_ASSERT(16+LLFSELog+MLFSELog+OffFSELog < STREAM_ACCUMULATOR_MIN_64);
-
-    seq.litLength = llBase + ((llBits>0) ? BIT_readBitsFast(&seqState->DStream, llBits) : 0);    /* <=  16 bits */
-    if (MEM_32bits())
-        BIT_reloadDStream(&seqState->DStream);
-
-    {   size_t const pos = seqState->pos + seq.litLength;
-        const BYTE* const matchBase = (seq.offset > pos) ? seqState->dictEnd : seqState->prefixStart;
-        seq.match = matchBase + pos - seq.offset;  /* note : this operation can overflow when seq.offset is really too large, which can only happen when input is corrupted.
-                                                    * No consequence though : no memory access will occur; an overly large offset will be detected in ZSTD_execSequenceLong() */
-        seqState->pos = pos + seq.matchLength;
-    }
-
-    /* ANS state update */
-    ZSTD_updateFseState(&seqState->stateLL, &seqState->DStream);    /* <=  9 bits */
-    ZSTD_updateFseState(&seqState->stateML, &seqState->DStream);    /* <=  9 bits */
-    if (MEM_32bits()) BIT_reloadDStream(&seqState->DStream);    /* <= 18 bits */
-    ZSTD_updateFseState(&seqState->stateOffb, &seqState->DStream);  /* <=  8 bits */
-
-    return seq;
-}
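-/* note : compared to ZSTD_decodeSequence(), this variant also resolves
- * seq.match up front, so the caller can PREFETCH the match address several
- * sequences before executing the copy, hiding cache-miss latency on the
- * large offsets this path is selected for. */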
-
-FORCE_INLINE_TEMPLATE size_t
-ZSTD_decompressSequencesLong_body(
-                               ZSTD_DCtx* dctx,
-                               void* dst, size_t maxDstSize,
-                         const void* seqStart, size_t seqSize, int nbSeq,
-                         const ZSTD_longOffset_e isLongOffset)
-{
-    const BYTE* ip = (const BYTE*)seqStart;
-    const BYTE* const iend = ip + seqSize;
-    BYTE* const ostart = (BYTE* const)dst;
-    BYTE* const oend = ostart + maxDstSize;
-    BYTE* op = ostart;
-    const BYTE* litPtr = dctx->litPtr;
-    const BYTE* const litEnd = litPtr + dctx->litSize;
-    const BYTE* const prefixStart = (const BYTE*) (dctx->prefixStart);
-    const BYTE* const dictStart = (const BYTE*) (dctx->virtualStart);
-    const BYTE* const dictEnd = (const BYTE*) (dctx->dictEnd);
-
-    /* Regen sequences */
-    if (nbSeq) {
-#define STORED_SEQS 4
-#define STOSEQ_MASK (STORED_SEQS-1)
-#define ADVANCED_SEQS 4
-        seq_t sequences[STORED_SEQS];
-        int const seqAdvance = MIN(nbSeq, ADVANCED_SEQS);
-        seqState_t seqState;
-        int seqNb;
-        dctx->fseEntropy = 1;
-        { U32 i; for (i=0; i<ZSTD_REP_NUM; i++) seqState.prevOffset[i] = dctx->entropy.rep[i]; }
-        seqState.prefixStart = prefixStart;
-        seqState.pos = (size_t)(op-prefixStart);
-        seqState.dictEnd = dictEnd;
-        CHECK_E(BIT_initDStream(&seqState.DStream, ip, iend-ip), corruption_detected);
-        ZSTD_initFseState(&seqState.stateLL, &seqState.DStream, dctx->LLTptr);
-        ZSTD_initFseState(&seqState.stateOffb, &seqState.DStream, dctx->OFTptr);
-        ZSTD_initFseState(&seqState.stateML, &seqState.DStream, dctx->MLTptr);
-
-        /* prepare in advance */
-        for (seqNb=0; (BIT_reloadDStream(&seqState.DStream) <= BIT_DStream_completed) && (seqNb<seqAdvance); seqNb++) {
-            sequences[seqNb] = ZSTD_decodeSequenceLong(&seqState, isLongOffset);
-        }
-        if (seqNb<seqAdvance) return ERROR(corruption_detected);
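-        /* The queue above forms a 4-deep software pipeline : each iteration
-         * below decodes sequence seqNb (and prefetches its match) while
-         * executing sequence seqNb - ADVANCED_SEQS, whose match address was
-         * prefetched four iterations earlier. */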
-
-        /* decode and decompress */
-        for ( ; (BIT_reloadDStream(&(seqState.DStream)) <= BIT_DStream_completed) && (seqNb<nbSeq) ; seqNb++) {
-            seq_t const sequence = ZSTD_decodeSequenceLong(&seqState, isLongOffset);
-            size_t const oneSeqSize = ZSTD_execSequenceLong(op, oend, sequences[(seqNb-ADVANCED_SEQS) & STOSEQ_MASK], &litPtr, litEnd, prefixStart, dictStart, dictEnd);
-            if (ZSTD_isError(oneSeqSize)) return oneSeqSize;
-            PREFETCH(sequence.match);  /* note : it's safe to invoke PREFETCH() on any memory address, including invalid ones */
-            sequences[seqNb&STOSEQ_MASK] = sequence;
-            op += oneSeqSize;
-        }
-        if (seqNb<nbSeq) return ERROR(corruption_detected);
-
-        /* finish queue */
-        seqNb -= seqAdvance;
-        for ( ; seqNb<nbSeq ; seqNb++) {
-            size_t const oneSeqSize = ZSTD_execSequenceLong(op, oend, sequences[seqNb&STOSEQ_MASK], &litPtr, litEnd, prefixStart, dictStart, dictEnd);
-            if (ZSTD_isError(oneSeqSize)) return oneSeqSize;
-            op += oneSeqSize;
-        }
-
-        /* save reps for next block */
-        { U32 i; for (i=0; i<ZSTD_REP_NUM; i++) dctx->entropy.rep[i] = (U32)(seqState.prevOffset[i]); }
-#undef STORED_SEQS
-#undef STOSEQ_MASK
-#undef ADVANCED_SEQS
-    }
-
-    /* last literal segment */
-    {   size_t const lastLLSize = litEnd - litPtr;
-        if (lastLLSize > (size_t)(oend-op)) return ERROR(dstSize_tooSmall);
-        memcpy(op, litPtr, lastLLSize);
-        op += lastLLSize;
-    }
-
-    return op-ostart;
-}
-
-static size_t
-ZSTD_decompressSequencesLong_default(ZSTD_DCtx* dctx,
-                                 void* dst, size_t maxDstSize,
-                           const void* seqStart, size_t seqSize, int nbSeq,
-                           const ZSTD_longOffset_e isLongOffset)
-{
-    return ZSTD_decompressSequencesLong_body(dctx, dst, maxDstSize, seqStart, seqSize, nbSeq, isLongOffset);
-}
-
-
-
-#if DYNAMIC_BMI2
-
-static TARGET_ATTRIBUTE("bmi2") size_t
-ZSTD_decompressSequences_bmi2(ZSTD_DCtx* dctx,
-                                 void* dst, size_t maxDstSize,
-                           const void* seqStart, size_t seqSize, int nbSeq,
-                           const ZSTD_longOffset_e isLongOffset)
-{
-    return ZSTD_decompressSequences_body(dctx, dst, maxDstSize, seqStart, seqSize, nbSeq, isLongOffset);
-}
-
-static TARGET_ATTRIBUTE("bmi2") size_t
-ZSTD_decompressSequencesLong_bmi2(ZSTD_DCtx* dctx,
-                                 void* dst, size_t maxDstSize,
-                           const void* seqStart, size_t seqSize, int nbSeq,
-                           const ZSTD_longOffset_e isLongOffset)
-{
-    return ZSTD_decompressSequencesLong_body(dctx, dst, maxDstSize, seqStart, seqSize, nbSeq, isLongOffset);
-}
-
-#endif
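-/* note : with DYNAMIC_BMI2, both a generic and a BMI2-targeted body are
- * instantiated from the same FORCE_INLINE_TEMPLATE source, and the
- * dispatchers below select one at run time based on dctx->bmi2. */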
-
-typedef size_t (*ZSTD_decompressSequences_t)(
-    ZSTD_DCtx *dctx, void *dst, size_t maxDstSize,
-    const void *seqStart, size_t seqSize, int nbSeq,
-    const ZSTD_longOffset_e isLongOffset);
-
-static size_t ZSTD_decompressSequences(ZSTD_DCtx* dctx, void* dst, size_t maxDstSize,
-                                const void* seqStart, size_t seqSize, int nbSeq,
-                                const ZSTD_longOffset_e isLongOffset)
-{
-    DEBUGLOG(5, "ZSTD_decompressSequences");
-#if DYNAMIC_BMI2
-    if (dctx->bmi2) {
-        return ZSTD_decompressSequences_bmi2(dctx, dst, maxDstSize, seqStart, seqSize, nbSeq, isLongOffset);
-    }
-#endif
-  return ZSTD_decompressSequences_default(dctx, dst, maxDstSize, seqStart, seqSize, nbSeq, isLongOffset);
-}
-
-static size_t ZSTD_decompressSequencesLong(ZSTD_DCtx* dctx,
-                                void* dst, size_t maxDstSize,
-                                const void* seqStart, size_t seqSize, int nbSeq,
-                                const ZSTD_longOffset_e isLongOffset)
-{
-    DEBUGLOG(5, "ZSTD_decompressSequencesLong");
-#if DYNAMIC_BMI2
-    if (dctx->bmi2) {
-        return ZSTD_decompressSequencesLong_bmi2(dctx, dst, maxDstSize, seqStart, seqSize, nbSeq, isLongOffset);
-    }
-#endif
-  return ZSTD_decompressSequencesLong_default(dctx, dst, maxDstSize, seqStart, seqSize, nbSeq, isLongOffset);
-}
-
-/* ZSTD_getLongOffsetsShare() :
- * condition : offTable must be valid
- * @return : "share" of long offsets (arbitrarily defined as > (1<<23))
- *           compared to maximum possible of (1<<OffFSELog) */
-static unsigned
-ZSTD_getLongOffsetsShare(const ZSTD_seqSymbol* offTable)
-{
-    const void* ptr = offTable;
-    U32 const tableLog = ((const ZSTD_seqSymbol_header*)ptr)[0].tableLog;
-    const ZSTD_seqSymbol* table = offTable + 1;
-    U32 const max = 1 << tableLog;
-    U32 u, total = 0;
-    DEBUGLOG(5, "ZSTD_getLongOffsetsShare: (tableLog=%u)", tableLog);
-
-    assert(max <= (1 << OffFSELog));  /* max not too large */
-    for (u=0; u<max; u++) {
-        if (table[u].nbAdditionalBits > 22) total += 1;
-    }
-
-    assert(tableLog <= OffFSELog);
-    total <<= (OffFSELog - tableLog);  /* scale to OffFSELog */
-
-    return total;
-}
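-/* note : the returned share is scaled to 1<<OffFSELog == 256, so the
- * minShare thresholds of 7 and 20 used by ZSTD_decompressBlock_internal()
- * correspond to 7/256 ~= 2.73% and 20/256 ~= 7.81% of the offset table. */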
-
-
-static size_t ZSTD_decompressBlock_internal(ZSTD_DCtx* dctx,
-                            void* dst, size_t dstCapacity,
-                      const void* src, size_t srcSize, const int frame)
-{   /* blockType == blockCompressed */
-    const BYTE* ip = (const BYTE*)src;
-    /* isLongOffset must be true if there are long offsets.
-     * Offsets are long if they are larger than 2^STREAM_ACCUMULATOR_MIN.
-     * We don't expect that to be the case in 64-bit mode.
-     * In block mode, window size is not known, so we have to be conservative.
-     * (note: but it could be evaluated from current-lowLimit)
-     */
-    ZSTD_longOffset_e const isLongOffset = (ZSTD_longOffset_e)(MEM_32bits() && (!frame || dctx->fParams.windowSize > (1ULL << STREAM_ACCUMULATOR_MIN)));
-    DEBUGLOG(5, "ZSTD_decompressBlock_internal (size : %u)", (U32)srcSize);
-
-    if (srcSize >= ZSTD_BLOCKSIZE_MAX) return ERROR(srcSize_wrong);
-
-    /* Decode literals section */
-    {   size_t const litCSize = ZSTD_decodeLiteralsBlock(dctx, src, srcSize);
-        DEBUGLOG(5, "ZSTD_decodeLiteralsBlock : %u", (U32)litCSize);
-        if (ZSTD_isError(litCSize)) return litCSize;
-        ip += litCSize;
-        srcSize -= litCSize;
-    }
-
-    /* Build Decoding Tables */
-    {   int nbSeq;
-        size_t const seqHSize = ZSTD_decodeSeqHeaders(dctx, &nbSeq, ip, srcSize);
-        if (ZSTD_isError(seqHSize)) return seqHSize;
-        ip += seqHSize;
-        srcSize -= seqHSize;
-
-        if ( (!frame || dctx->fParams.windowSize > (1<<24))
-          && (nbSeq>0) ) {  /* could probably use a larger nbSeq limit */
-            U32 const shareLongOffsets = ZSTD_getLongOffsetsShare(dctx->OFTptr);
-            U32 const minShare = MEM_64bits() ? 7 : 20; /* heuristic values, correspond to 2.73% and 7.81% */
-            if (shareLongOffsets >= minShare)
-                return ZSTD_decompressSequencesLong(dctx, dst, dstCapacity, ip, srcSize, nbSeq, isLongOffset);
-        }
-
-        return ZSTD_decompressSequences(dctx, dst, dstCapacity, ip, srcSize, nbSeq, isLongOffset);
-    }
-}
-
-
-static void ZSTD_checkContinuity(ZSTD_DCtx* dctx, const void* dst)
-{
-    if (dst != dctx->previousDstEnd) {   /* not contiguous */
-        dctx->dictEnd = dctx->previousDstEnd;
-        dctx->virtualStart = (const char*)dst - ((const char*)(dctx->previousDstEnd) - (const char*)(dctx->prefixStart));
-        dctx->prefixStart = dst;
-        dctx->previousDstEnd = dst;
-    }
-}
-
-size_t ZSTD_decompressBlock(ZSTD_DCtx* dctx,
-                            void* dst, size_t dstCapacity,
-                      const void* src, size_t srcSize)
-{
-    size_t dSize;
-    ZSTD_checkContinuity(dctx, dst);
-    dSize = ZSTD_decompressBlock_internal(dctx, dst, dstCapacity, src, srcSize, /* frame */ 0);
-    dctx->previousDstEnd = (char*)dst + dSize;
-    return dSize;
-}
-
-
-/** ZSTD_insertBlock() :
-    insert `src` block into `dctx` history. Useful to track uncompressed blocks. */
-ZSTDLIB_API size_t ZSTD_insertBlock(ZSTD_DCtx* dctx, const void* blockStart, size_t blockSize)
-{
-    ZSTD_checkContinuity(dctx, blockStart);
-    dctx->previousDstEnd = (const char*)blockStart + blockSize;
-    return blockSize;
-}
-
-
-static size_t ZSTD_generateNxBytes(void* dst, size_t dstCapacity, BYTE value, size_t length)
-{
-    if (length > dstCapacity) return ERROR(dstSize_tooSmall);
-    memset(dst, value, length);
-    return length;
-}
-
 /** ZSTD_findFrameCompressedSize() :
  *  compatible with legacy mode
  *  `src` must point to the start of a ZSTD frame, ZSTD legacy frame, or skippable frame
@@ -1806,9 +447,9 @@
     if (ZSTD_isLegacy(src, srcSize))
         return ZSTD_findFrameCompressedSizeLegacy(src, srcSize);
 #endif
-    if ( (srcSize >= ZSTD_skippableHeaderSize)
-      && (MEM_readLE32(src) & 0xFFFFFFF0U) == ZSTD_MAGIC_SKIPPABLE_START ) {
-        return ZSTD_skippableHeaderSize + MEM_readLE32((const BYTE*)src + ZSTD_FRAMEIDSIZE);
+    if ( (srcSize >= ZSTD_SKIPPABLEHEADERSIZE)
+      && (MEM_readLE32(src) & ZSTD_MAGIC_SKIPPABLE_MASK) == ZSTD_MAGIC_SKIPPABLE_START ) {
+        return readSkippableFrameSize(src, srcSize);
     } else {
         const BYTE* ip = (const BYTE*)src;
         const BYTE* const ipstart = ip;
@@ -1848,8 +489,64 @@
     }
 }
 
+
+
+/*-*************************************************************
+ *   Frame decoding
+ ***************************************************************/
+
+
+void ZSTD_checkContinuity(ZSTD_DCtx* dctx, const void* dst)
+{
+    if (dst != dctx->previousDstEnd) {   /* not contiguous */
+        dctx->dictEnd = dctx->previousDstEnd;
+        dctx->virtualStart = (const char*)dst - ((const char*)(dctx->previousDstEnd) - (const char*)(dctx->prefixStart));
+        dctx->prefixStart = dst;
+        dctx->previousDstEnd = dst;
+    }
+}
+
+/** ZSTD_insertBlock() :
+    insert `src` block into `dctx` history. Useful to track uncompressed blocks. */
+size_t ZSTD_insertBlock(ZSTD_DCtx* dctx, const void* blockStart, size_t blockSize)
+{
+    ZSTD_checkContinuity(dctx, blockStart);
+    dctx->previousDstEnd = (const char*)blockStart + blockSize;
+    return blockSize;
+}
+
+
+static size_t ZSTD_copyRawBlock(void* dst, size_t dstCapacity,
+                          const void* src, size_t srcSize)
+{
+    DEBUGLOG(5, "ZSTD_copyRawBlock");
+    if (dst == NULL) {
+        if (srcSize == 0) return 0;
+        return ERROR(dstBuffer_null);
+    }
+    if (srcSize > dstCapacity) return ERROR(dstSize_tooSmall);
+    memcpy(dst, src, srcSize);
+    return srcSize;
+}
+
+static size_t ZSTD_setRleBlock(void* dst, size_t dstCapacity,
+                               BYTE b,
+                               size_t regenSize)
+{
+    if (dst == NULL) {
+        if (regenSize == 0) return 0;
+        return ERROR(dstBuffer_null);
+    }
+    if (regenSize > dstCapacity) return ERROR(dstSize_tooSmall);
+    memset(dst, b, regenSize);
+    return regenSize;
+}
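+/* note : an RLE block stores a single payload byte `b` which regenerates
+ * `regenSize` bytes of output, so dstCapacity is the only meaningful bound :
+ * a 1-byte block may legally expand up to the block size limit (128 KB). */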
+
+
 /*! ZSTD_decompressFrame() :
-*   @dctx must be properly initialized */
+ * @dctx must be properly initialized.
+ *  Will update *srcPtr and *srcSizePtr
+ *  to make *srcPtr progress by one frame. */
 static size_t ZSTD_decompressFrame(ZSTD_DCtx* dctx,
                                    void* dst, size_t dstCapacity,
                              const void** srcPtr, size_t *srcSizePtr)
@@ -1858,31 +555,33 @@
     BYTE* const ostart = (BYTE* const)dst;
     BYTE* const oend = ostart + dstCapacity;
     BYTE* op = ostart;
-    size_t remainingSize = *srcSizePtr;
+    size_t remainingSrcSize = *srcSizePtr;
+
+    DEBUGLOG(4, "ZSTD_decompressFrame (srcSize:%i)", (int)*srcSizePtr);
 
     /* check */
-    if (remainingSize < ZSTD_frameHeaderSize_min+ZSTD_blockHeaderSize)
+    if (remainingSrcSize < ZSTD_FRAMEHEADERSIZE_MIN+ZSTD_blockHeaderSize)
         return ERROR(srcSize_wrong);
 
     /* Frame Header */
-    {   size_t const frameHeaderSize = ZSTD_frameHeaderSize(ip, ZSTD_frameHeaderSize_prefix);
+    {   size_t const frameHeaderSize = ZSTD_frameHeaderSize(ip, ZSTD_FRAMEHEADERSIZE_PREFIX);
         if (ZSTD_isError(frameHeaderSize)) return frameHeaderSize;
-        if (remainingSize < frameHeaderSize+ZSTD_blockHeaderSize)
+        if (remainingSrcSize < frameHeaderSize+ZSTD_blockHeaderSize)
             return ERROR(srcSize_wrong);
         CHECK_F( ZSTD_decodeFrameHeader(dctx, ip, frameHeaderSize) );
-        ip += frameHeaderSize; remainingSize -= frameHeaderSize;
+        ip += frameHeaderSize; remainingSrcSize -= frameHeaderSize;
     }
 
     /* Loop on each block */
     while (1) {
         size_t decodedSize;
         blockProperties_t blockProperties;
-        size_t const cBlockSize = ZSTD_getcBlockSize(ip, remainingSize, &blockProperties);
+        size_t const cBlockSize = ZSTD_getcBlockSize(ip, remainingSrcSize, &blockProperties);
         if (ZSTD_isError(cBlockSize)) return cBlockSize;
 
         ip += ZSTD_blockHeaderSize;
-        remainingSize -= ZSTD_blockHeaderSize;
-        if (cBlockSize > remainingSize) return ERROR(srcSize_wrong);
+        remainingSrcSize -= ZSTD_blockHeaderSize;
+        if (cBlockSize > remainingSrcSize) return ERROR(srcSize_wrong);
 
         switch(blockProperties.blockType)
         {
@@ -1893,7 +592,7 @@
             decodedSize = ZSTD_copyRawBlock(op, oend-op, ip, cBlockSize);
             break;
         case bt_rle :
-            decodedSize = ZSTD_generateNxBytes(op, oend-op, *ip, blockProperties.origSize);
+            decodedSize = ZSTD_setRleBlock(op, oend-op, *ip, blockProperties.origSize);
             break;
         case bt_reserved :
         default:
@@ -1905,7 +604,7 @@
             XXH64_update(&dctx->xxhState, op, decodedSize);
         op += decodedSize;
         ip += cBlockSize;
-        remainingSize -= cBlockSize;
+        remainingSrcSize -= cBlockSize;
         if (blockProperties.lastBlock) break;
     }
 
@@ -1916,16 +615,16 @@
     if (dctx->fParams.checksumFlag) { /* Frame content checksum verification */
         U32 const checkCalc = (U32)XXH64_digest(&dctx->xxhState);
         U32 checkRead;
-        if (remainingSize<4) return ERROR(checksum_wrong);
+        if (remainingSrcSize<4) return ERROR(checksum_wrong);
         checkRead = MEM_readLE32(ip);
         if (checkRead != checkCalc) return ERROR(checksum_wrong);
         ip += 4;
-        remainingSize -= 4;
+        remainingSrcSize -= 4;
     }
 
     /* Allow caller to get size read */
     *srcPtr = ip;
-    *srcSizePtr = remainingSize;
+    *srcSizePtr = remainingSrcSize;
     return op-ostart;
 }
 
@@ -1942,11 +641,11 @@
     assert(dict==NULL || ddict==NULL);  /* either dict or ddict set, not both */
 
     if (ddict) {
-        dict = ZSTD_DDictDictContent(ddict);
-        dictSize = ZSTD_DDictDictSize(ddict);
+        dict = ZSTD_DDict_dictContent(ddict);
+        dictSize = ZSTD_DDict_dictSize(ddict);
     }
 
-    while (srcSize >= ZSTD_frameHeaderSize_prefix) {
+    while (srcSize >= ZSTD_FRAMEHEADERSIZE_PREFIX) {
 
 #if defined(ZSTD_LEGACY_SUPPORT) && (ZSTD_LEGACY_SUPPORT >= 1)
         if (ZSTD_isLegacy(src, srcSize)) {
@@ -1957,7 +656,9 @@
             if (dctx->staticSize) return ERROR(memory_allocation);
 
             decodedSize = ZSTD_decompressLegacy(dst, dstCapacity, src, frameSize, dict, dictSize);
+            if (ZSTD_isError(decodedSize)) return decodedSize;
 
+            assert(decodedSize <= dstCapacity);
             dst = (BYTE*)dst + decodedSize;
             dstCapacity -= decodedSize;
 
@@ -1970,13 +671,11 @@
 
         {   U32 const magicNumber = MEM_readLE32(src);
             DEBUGLOG(4, "reading magic number %08X (expecting %08X)",
-                        (U32)magicNumber, (U32)ZSTD_MAGICNUMBER);
-            if ((magicNumber & 0xFFFFFFF0U) == ZSTD_MAGIC_SKIPPABLE_START) {
-                size_t skippableSize;
-                if (srcSize < ZSTD_skippableHeaderSize)
-                    return ERROR(srcSize_wrong);
-                skippableSize = MEM_readLE32((const BYTE*)src + ZSTD_FRAMEIDSIZE)
-                              + ZSTD_skippableHeaderSize;
+                        (unsigned)magicNumber, ZSTD_MAGICNUMBER);
+            if ((magicNumber & ZSTD_MAGIC_SKIPPABLE_MASK) == ZSTD_MAGIC_SKIPPABLE_START) {
+                size_t const skippableSize = readSkippableFrameSize(src, srcSize);
+                if (ZSTD_isError(skippableSize))
+                    return skippableSize;
                 if (srcSize < skippableSize) return ERROR(srcSize_wrong);
 
                 src = (const BYTE *)src + skippableSize;
@@ -2010,7 +709,7 @@
                 return ERROR(srcSize_wrong);
             }
             if (ZSTD_isError(res)) return res;
-            /* no need to bound check, ZSTD_decompressFrame already has */
+            assert(res <= dstCapacity);
             dst = (BYTE*)dst + res;
             dstCapacity -= res;
         }
@@ -2090,9 +789,10 @@
  *            or an error code, which can be tested using ZSTD_isError() */
 size_t ZSTD_decompressContinue(ZSTD_DCtx* dctx, void* dst, size_t dstCapacity, const void* src, size_t srcSize)
 {
-    DEBUGLOG(5, "ZSTD_decompressContinue (srcSize:%u)", (U32)srcSize);
+    DEBUGLOG(5, "ZSTD_decompressContinue (srcSize:%u)", (unsigned)srcSize);
     /* Sanity check */
-    if (srcSize != dctx->expected) return ERROR(srcSize_wrong);  /* not allowed */
+    if (srcSize != dctx->expected)
+        return ERROR(srcSize_wrong);  /* not allowed */
     if (dstCapacity) ZSTD_checkContinuity(dctx, dst);
 
     switch (dctx->stage)
@@ -2101,9 +801,9 @@
         assert(src != NULL);
         if (dctx->format == ZSTD_f_zstd1) {  /* allows header */
             assert(srcSize >= ZSTD_FRAMEIDSIZE);  /* to read skippable magic number */
-            if ((MEM_readLE32(src) & 0xFFFFFFF0U) == ZSTD_MAGIC_SKIPPABLE_START) {        /* skippable frame */
+            if ((MEM_readLE32(src) & ZSTD_MAGIC_SKIPPABLE_MASK) == ZSTD_MAGIC_SKIPPABLE_START) {        /* skippable frame */
                 memcpy(dctx->headerBuffer, src, srcSize);
-                dctx->expected = ZSTD_skippableHeaderSize - srcSize;  /* remaining to load to get full skippable frame header */
+                dctx->expected = ZSTD_SKIPPABLEHEADERSIZE - srcSize;  /* remaining to load to get full skippable frame header */
                 dctx->stage = ZSTDds_decodeSkippableHeader;
                 return 0;
         }   }
@@ -2163,19 +863,19 @@
                 rSize = ZSTD_copyRawBlock(dst, dstCapacity, src, srcSize);
                 break;
             case bt_rle :
-                rSize = ZSTD_setRleBlock(dst, dstCapacity, src, srcSize, dctx->rleSize);
+                rSize = ZSTD_setRleBlock(dst, dstCapacity, *(const BYTE*)src, dctx->rleSize);
                 break;
             case bt_reserved :   /* should never happen */
             default:
                 return ERROR(corruption_detected);
             }
             if (ZSTD_isError(rSize)) return rSize;
-            DEBUGLOG(5, "ZSTD_decompressContinue: decoded size from block : %u", (U32)rSize);
+            DEBUGLOG(5, "ZSTD_decompressContinue: decoded size from block : %u", (unsigned)rSize);
             dctx->decodedSize += rSize;
             if (dctx->fParams.checksumFlag) XXH64_update(&dctx->xxhState, dst, rSize);
 
             if (dctx->stage == ZSTDds_decompressLastBlock) {   /* end of frame */
-                DEBUGLOG(4, "ZSTD_decompressContinue: decoded size from frame : %u", (U32)dctx->decodedSize);
+                DEBUGLOG(4, "ZSTD_decompressContinue: decoded size from frame : %u", (unsigned)dctx->decodedSize);
                 if (dctx->fParams.frameContentSize != ZSTD_CONTENTSIZE_UNKNOWN) {
                     if (dctx->decodedSize != dctx->fParams.frameContentSize) {
                         return ERROR(corruption_detected);
@@ -2199,7 +899,7 @@
         assert(srcSize == 4);  /* guaranteed by dctx->expected */
         {   U32 const h32 = (U32)XXH64_digest(&dctx->xxhState);
             U32 const check32 = MEM_readLE32(src);
-            DEBUGLOG(4, "ZSTD_decompressContinue: checksum : calculated %08X :: %08X read", h32, check32);
+            DEBUGLOG(4, "ZSTD_decompressContinue: checksum : calculated %08X :: %08X read", (unsigned)h32, (unsigned)check32);
             if (check32 != h32) return ERROR(checksum_wrong);
             dctx->expected = 0;
             dctx->stage = ZSTDds_getFrameHeaderSize;
@@ -2208,8 +908,8 @@
 
     case ZSTDds_decodeSkippableHeader:
         assert(src != NULL);
-        assert(srcSize <= ZSTD_skippableHeaderSize);
-        memcpy(dctx->headerBuffer + (ZSTD_skippableHeaderSize - srcSize), src, srcSize);   /* complete skippable header */
+        assert(srcSize <= ZSTD_SKIPPABLEHEADERSIZE);
+        memcpy(dctx->headerBuffer + (ZSTD_SKIPPABLEHEADERSIZE - srcSize), src, srcSize);   /* complete skippable header */
         dctx->expected = MEM_readLE32(dctx->headerBuffer + ZSTD_FRAMEIDSIZE);   /* note : dctx->expected can grow seriously large, beyond local buffer size */
         dctx->stage = ZSTDds_skipFrame;
         return 0;
@@ -2220,7 +920,8 @@
         return 0;
 
     default:
-        return ERROR(GENERIC);   /* impossible */
+        assert(0);   /* impossible */
+        return ERROR(GENERIC);   /* some compilers require a default case to return something */
     }
 }
 
@@ -2234,11 +935,12 @@
     return 0;
 }
 
-/*! ZSTD_loadEntropy() :
+/*! ZSTD_loadDEntropy() :
  *  dict : must point at beginning of a valid zstd dictionary.
  * @return : size of entropy tables read */
-static size_t ZSTD_loadEntropy(ZSTD_entropyDTables_t* entropy,
-                         const void* const dict, size_t const dictSize)
+size_t
+ZSTD_loadDEntropy(ZSTD_entropyDTables_t* entropy,
+                  const void* const dict, size_t const dictSize)
 {
     const BYTE* dictPtr = (const BYTE*)dict;
     const BYTE* const dictEnd = dictPtr + dictSize;
@@ -2252,15 +954,22 @@
     ZSTD_STATIC_ASSERT(sizeof(entropy->LLTable) + sizeof(entropy->OFTable) + sizeof(entropy->MLTable) >= HUF_DECOMPRESS_WORKSPACE_SIZE);
     {   void* const workspace = &entropy->LLTable;   /* use fse tables as temporary workspace; implies fse tables are grouped together */
         size_t const workspaceSize = sizeof(entropy->LLTable) + sizeof(entropy->OFTable) + sizeof(entropy->MLTable);
+#ifdef HUF_FORCE_DECOMPRESS_X1
+        /* in minimal huffman, we always use X1 variants */
+        size_t const hSize = HUF_readDTableX1_wksp(entropy->hufTable,
+                                                dictPtr, dictEnd - dictPtr,
+                                                workspace, workspaceSize);
+#else
         size_t const hSize = HUF_readDTableX2_wksp(entropy->hufTable,
                                                 dictPtr, dictEnd - dictPtr,
                                                 workspace, workspaceSize);
+#endif
         if (HUF_isError(hSize)) return ERROR(dictionary_corrupted);
         dictPtr += hSize;
     }
 
     {   short offcodeNCount[MaxOff+1];
-        U32 offcodeMaxValue = MaxOff, offcodeLog;
+        unsigned offcodeMaxValue = MaxOff, offcodeLog;
         size_t const offcodeHeaderSize = FSE_readNCount(offcodeNCount, &offcodeMaxValue, &offcodeLog, dictPtr, dictEnd-dictPtr);
         if (FSE_isError(offcodeHeaderSize)) return ERROR(dictionary_corrupted);
         if (offcodeMaxValue > MaxOff) return ERROR(dictionary_corrupted);
@@ -2320,7 +1029,7 @@
     dctx->dictID = MEM_readLE32((const char*)dict + ZSTD_FRAMEIDSIZE);
 
     /* load entropy tables */
-    {   size_t const eSize = ZSTD_loadEntropy(&dctx->entropy, dict, dictSize);
+    {   size_t const eSize = ZSTD_loadDEntropy(&dctx->entropy, dict, dictSize);
         if (ZSTD_isError(eSize)) return ERROR(dictionary_corrupted);
         dict = (const char*)dict + eSize;
         dictSize -= eSize;
@@ -2364,209 +1073,25 @@
 
 /* ======   ZSTD_DDict   ====== */
 
-struct ZSTD_DDict_s {
-    void* dictBuffer;
-    const void* dictContent;
-    size_t dictSize;
-    ZSTD_entropyDTables_t entropy;
-    U32 dictID;
-    U32 entropyPresent;
-    ZSTD_customMem cMem;
-};  /* typedef'd to ZSTD_DDict within "zstd.h" */
-
-static const void* ZSTD_DDictDictContent(const ZSTD_DDict* ddict)
-{
-    assert(ddict != NULL);
-    return ddict->dictContent;
-}
-
-static size_t ZSTD_DDictDictSize(const ZSTD_DDict* ddict)
-{
-    assert(ddict != NULL);
-    return ddict->dictSize;
-}
-
 size_t ZSTD_decompressBegin_usingDDict(ZSTD_DCtx* dctx, const ZSTD_DDict* ddict)
 {
     DEBUGLOG(4, "ZSTD_decompressBegin_usingDDict");
     assert(dctx != NULL);
     if (ddict) {
-        dctx->ddictIsCold = (dctx->dictEnd != (const char*)ddict->dictContent + ddict->dictSize);
+        const char* const dictStart = (const char*)ZSTD_DDict_dictContent(ddict);
+        size_t const dictSize = ZSTD_DDict_dictSize(ddict);
+        const void* const dictEnd = dictStart + dictSize;
+        dctx->ddictIsCold = (dctx->dictEnd != dictEnd);
         DEBUGLOG(4, "DDict is %s",
                     dctx->ddictIsCold ? "~cold~" : "hot!");
     }
     CHECK_F( ZSTD_decompressBegin(dctx) );
     if (ddict) {   /* NULL ddict is equivalent to no dictionary */
-        dctx->dictID = ddict->dictID;
-        dctx->prefixStart = ddict->dictContent;
-        dctx->virtualStart = ddict->dictContent;
-        dctx->dictEnd = (const BYTE*)ddict->dictContent + ddict->dictSize;
-        dctx->previousDstEnd = dctx->dictEnd;
-        if (ddict->entropyPresent) {
-            dctx->litEntropy = 1;
-            dctx->fseEntropy = 1;
-            dctx->LLTptr = ddict->entropy.LLTable;
-            dctx->MLTptr = ddict->entropy.MLTable;
-            dctx->OFTptr = ddict->entropy.OFTable;
-            dctx->HUFptr = ddict->entropy.hufTable;
-            dctx->entropy.rep[0] = ddict->entropy.rep[0];
-            dctx->entropy.rep[1] = ddict->entropy.rep[1];
-            dctx->entropy.rep[2] = ddict->entropy.rep[2];
-        } else {
-            dctx->litEntropy = 0;
-            dctx->fseEntropy = 0;
-        }
+        ZSTD_copyDDictParameters(dctx, ddict);
     }
     return 0;
 }
 
-static size_t
-ZSTD_loadEntropy_inDDict(ZSTD_DDict* ddict,
-                         ZSTD_dictContentType_e dictContentType)
-{
-    ddict->dictID = 0;
-    ddict->entropyPresent = 0;
-    if (dictContentType == ZSTD_dct_rawContent) return 0;
-
-    if (ddict->dictSize < 8) {
-        if (dictContentType == ZSTD_dct_fullDict)
-            return ERROR(dictionary_corrupted);   /* only accept specified dictionaries */
-        return 0;   /* pure content mode */
-    }
-    {   U32 const magic = MEM_readLE32(ddict->dictContent);
-        if (magic != ZSTD_MAGIC_DICTIONARY) {
-            if (dictContentType == ZSTD_dct_fullDict)
-                return ERROR(dictionary_corrupted);   /* only accept specified dictionaries */
-            return 0;   /* pure content mode */
-        }
-    }
-    ddict->dictID = MEM_readLE32((const char*)ddict->dictContent + ZSTD_FRAMEIDSIZE);
-
-    /* load entropy tables */
-    CHECK_E( ZSTD_loadEntropy(&ddict->entropy,
-                              ddict->dictContent, ddict->dictSize),
-             dictionary_corrupted );
-    ddict->entropyPresent = 1;
-    return 0;
-}
-
-
-static size_t ZSTD_initDDict_internal(ZSTD_DDict* ddict,
-                                      const void* dict, size_t dictSize,
-                                      ZSTD_dictLoadMethod_e dictLoadMethod,
-                                      ZSTD_dictContentType_e dictContentType)
-{
-    if ((dictLoadMethod == ZSTD_dlm_byRef) || (!dict) || (!dictSize)) {
-        ddict->dictBuffer = NULL;
-        ddict->dictContent = dict;
-        if (!dict) dictSize = 0;
-    } else {
-        void* const internalBuffer = ZSTD_malloc(dictSize, ddict->cMem);
-        ddict->dictBuffer = internalBuffer;
-        ddict->dictContent = internalBuffer;
-        if (!internalBuffer) return ERROR(memory_allocation);
-        memcpy(internalBuffer, dict, dictSize);
-    }
-    ddict->dictSize = dictSize;
-    ddict->entropy.hufTable[0] = (HUF_DTable)((HufLog)*0x1000001);  /* cover both little and big endian */
-
-    /* parse dictionary content */
-    CHECK_F( ZSTD_loadEntropy_inDDict(ddict, dictContentType) );
-
-    return 0;
-}
-
-ZSTD_DDict* ZSTD_createDDict_advanced(const void* dict, size_t dictSize,
-                                      ZSTD_dictLoadMethod_e dictLoadMethod,
-                                      ZSTD_dictContentType_e dictContentType,
-                                      ZSTD_customMem customMem)
-{
-    if (!customMem.customAlloc ^ !customMem.customFree) return NULL;
-
-    {   ZSTD_DDict* const ddict = (ZSTD_DDict*) ZSTD_malloc(sizeof(ZSTD_DDict), customMem);
-        if (ddict == NULL) return NULL;
-        ddict->cMem = customMem;
-        {   size_t const initResult = ZSTD_initDDict_internal(ddict,
-                                            dict, dictSize,
-                                            dictLoadMethod, dictContentType);
-            if (ZSTD_isError(initResult)) {
-                ZSTD_freeDDict(ddict);
-                return NULL;
-        }   }
-        return ddict;
-    }
-}
-
-/*! ZSTD_createDDict() :
-*   Create a digested dictionary, to start decompression without startup delay.
-*   `dict` content is copied inside DDict.
-*   Consequently, `dict` can be released after `ZSTD_DDict` creation */
-ZSTD_DDict* ZSTD_createDDict(const void* dict, size_t dictSize)
-{
-    ZSTD_customMem const allocator = { NULL, NULL, NULL };
-    return ZSTD_createDDict_advanced(dict, dictSize, ZSTD_dlm_byCopy, ZSTD_dct_auto, allocator);
-}
-
-/*! ZSTD_createDDict_byReference() :
- *  Create a digested dictionary, to start decompression without startup delay.
- *  Dictionary content is simply referenced, it will be accessed during decompression.
- *  Warning : dictBuffer must outlive DDict (DDict must be freed before dictBuffer) */
-ZSTD_DDict* ZSTD_createDDict_byReference(const void* dictBuffer, size_t dictSize)
-{
-    ZSTD_customMem const allocator = { NULL, NULL, NULL };
-    return ZSTD_createDDict_advanced(dictBuffer, dictSize, ZSTD_dlm_byRef, ZSTD_dct_auto, allocator);
-}
-
-
-const ZSTD_DDict* ZSTD_initStaticDDict(
-                                void* sBuffer, size_t sBufferSize,
-                                const void* dict, size_t dictSize,
-                                ZSTD_dictLoadMethod_e dictLoadMethod,
-                                ZSTD_dictContentType_e dictContentType)
-{
-    size_t const neededSpace = sizeof(ZSTD_DDict)
-                             + (dictLoadMethod == ZSTD_dlm_byRef ? 0 : dictSize);
-    ZSTD_DDict* const ddict = (ZSTD_DDict*)sBuffer;
-    assert(sBuffer != NULL);
-    assert(dict != NULL);
-    if ((size_t)sBuffer & 7) return NULL;   /* 8-aligned */
-    if (sBufferSize < neededSpace) return NULL;
-    if (dictLoadMethod == ZSTD_dlm_byCopy) {
-        memcpy(ddict+1, dict, dictSize);  /* local copy */
-        dict = ddict+1;
-    }
-    if (ZSTD_isError( ZSTD_initDDict_internal(ddict,
-                                              dict, dictSize,
-                                              ZSTD_dlm_byRef, dictContentType) ))
-        return NULL;
-    return ddict;
-}
-
-
-size_t ZSTD_freeDDict(ZSTD_DDict* ddict)
-{
-    if (ddict==NULL) return 0;   /* support free on NULL */
-    {   ZSTD_customMem const cMem = ddict->cMem;
-        ZSTD_free(ddict->dictBuffer, cMem);
-        ZSTD_free(ddict, cMem);
-        return 0;
-    }
-}
-
-/*! ZSTD_estimateDDictSize() :
- *  Estimate amount of memory that will be needed to create a dictionary for decompression.
- *  Note : dictionary created by reference using ZSTD_dlm_byRef are smaller */
-size_t ZSTD_estimateDDictSize(size_t dictSize, ZSTD_dictLoadMethod_e dictLoadMethod)
-{
-    return sizeof(ZSTD_DDict) + (dictLoadMethod == ZSTD_dlm_byRef ? 0 : dictSize);
-}
-
-size_t ZSTD_sizeof_DDict(const ZSTD_DDict* ddict)
-{
-    if (ddict==NULL) return 0;   /* support sizeof on NULL */
-    return sizeof(*ddict) + (ddict->dictBuffer ? ddict->dictSize : 0) ;
-}
-
 /*! ZSTD_getDictID_fromDict() :
  *  Provides the dictID stored within dictionary.
  *  if @return == 0, the dictionary is not conformant with Zstandard specification.
@@ -2578,16 +1103,6 @@
     return MEM_readLE32((const char*)dict + ZSTD_FRAMEIDSIZE);
 }
 
-/*! ZSTD_getDictID_fromDDict() :
- *  Provides the dictID of the dictionary loaded into `ddict`.
- *  If @return == 0, the dictionary is not conformant to Zstandard specification, or empty.
- *  Non-conformant dictionaries can still be loaded, but as content-only dictionaries. */
-unsigned ZSTD_getDictID_fromDDict(const ZSTD_DDict* ddict)
-{
-    if (ddict==NULL) return 0;
-    return ZSTD_getDictID_fromDict(ddict->dictContent, ddict->dictSize);
-}
-
 /*! ZSTD_getDictID_fromFrame() :
 *  Provides the dictID required to decompress the frame stored within `src`.
  *  If @return == 0, the dictID could not be decoded.
@@ -2695,7 +1210,7 @@
 
 
 /* ZSTD_initDStream_usingDict() :
- * return : expected size, aka ZSTD_frameHeaderSize_prefix.
+ * return : expected size, aka ZSTD_FRAMEHEADERSIZE_PREFIX.
  * this function cannot fail */
 size_t ZSTD_initDStream_usingDict(ZSTD_DStream* zds, const void* dict, size_t dictSize)
 {
@@ -2703,7 +1218,7 @@
     zds->streamStage = zdss_init;
     zds->noForwardProgress = 0;
     CHECK_F( ZSTD_DCtx_loadDictionary(zds, dict, dictSize) );
-    return ZSTD_frameHeaderSize_prefix;
+    return ZSTD_FRAMEHEADERSIZE_PREFIX;
 }
 
 /* note : this variant can't fail */
@@ -2724,7 +1239,7 @@
 }
 
 /* ZSTD_resetDStream() :
- * return : expected size, aka ZSTD_frameHeaderSize_prefix.
+ * return : expected size, aka ZSTD_FRAMEHEADERSIZE_PREFIX.
  * this function cannot fail */
 size_t ZSTD_resetDStream(ZSTD_DStream* dctx)
 {
@@ -2733,23 +1248,9 @@
     dctx->lhSize = dctx->inPos = dctx->outStart = dctx->outEnd = 0;
     dctx->legacyVersion = 0;
     dctx->hostageByte = 0;
-    return ZSTD_frameHeaderSize_prefix;
+    return ZSTD_FRAMEHEADERSIZE_PREFIX;
 }
 
-size_t ZSTD_setDStreamParameter(ZSTD_DStream* dctx,
-                                ZSTD_DStreamParameter_e paramType, unsigned paramValue)
-{
-    if (dctx->streamStage != zdss_init) return ERROR(stage_wrong);
-    switch(paramType)
-    {
-        default : return ERROR(parameter_unsupported);
-        case DStream_p_maxWindowSize :
-            DEBUGLOG(4, "setting maxWindowSize = %u KB", paramValue >> 10);
-            dctx->maxWindowSize = paramValue ? paramValue : (U32)(-1);
-            break;
-    }
-    return 0;
-}
 
 size_t ZSTD_DCtx_refDDict(ZSTD_DCtx* dctx, const ZSTD_DDict* ddict)
 {
@@ -2758,18 +1259,92 @@
     return 0;
 }
 
+/* ZSTD_DCtx_setMaxWindowSize() :
+ * note : no direct equivalence in ZSTD_DCtx_setParameter,
+ * since this version sets windowSize, and the other sets windowLog */
 size_t ZSTD_DCtx_setMaxWindowSize(ZSTD_DCtx* dctx, size_t maxWindowSize)
 {
+    ZSTD_bounds const bounds = ZSTD_dParam_getBounds(ZSTD_d_windowLogMax);
+    size_t const min = (size_t)1 << bounds.lowerBound;
+    size_t const max = (size_t)1 << bounds.upperBound;
     if (dctx->streamStage != zdss_init) return ERROR(stage_wrong);
+    if (maxWindowSize < min) return ERROR(parameter_outOfBound);
+    if (maxWindowSize > max) return ERROR(parameter_outOfBound);
     dctx->maxWindowSize = maxWindowSize;
     return 0;
 }
 
 size_t ZSTD_DCtx_setFormat(ZSTD_DCtx* dctx, ZSTD_format_e format)
 {
-    DEBUGLOG(4, "ZSTD_DCtx_setFormat : %u", (unsigned)format);
+    return ZSTD_DCtx_setParameter(dctx, ZSTD_d_format, format);
+}
+
+ZSTD_bounds ZSTD_dParam_getBounds(ZSTD_dParameter dParam)
+{
+    ZSTD_bounds bounds = { 0, 0, 0 };
+    switch(dParam) {
+        case ZSTD_d_windowLogMax:
+            bounds.lowerBound = ZSTD_WINDOWLOG_ABSOLUTEMIN;
+            bounds.upperBound = ZSTD_WINDOWLOG_MAX;
+            return bounds;
+        case ZSTD_d_format:
+            bounds.lowerBound = (int)ZSTD_f_zstd1;
+            bounds.upperBound = (int)ZSTD_f_zstd1_magicless;
+            ZSTD_STATIC_ASSERT(ZSTD_f_zstd1 < ZSTD_f_zstd1_magicless);
+            return bounds;
+        default:;
+    }
+    bounds.error = ERROR(parameter_unsupported);
+    return bounds;
+}
+
+/* ZSTD_dParam_withinBounds:
+ * @return 1 if value is within dParam bounds,
+ * 0 otherwise */
+static int ZSTD_dParam_withinBounds(ZSTD_dParameter dParam, int value)
+{
+    ZSTD_bounds const bounds = ZSTD_dParam_getBounds(dParam);
+    if (ZSTD_isError(bounds.error)) return 0;
+    if (value < bounds.lowerBound) return 0;
+    if (value > bounds.upperBound) return 0;
+    return 1;
+}
+
+#define CHECK_DBOUNDS(p,v) {                \
+    if (!ZSTD_dParam_withinBounds(p, v))    \
+        return ERROR(parameter_outOfBound); \
+}
+
+size_t ZSTD_DCtx_setParameter(ZSTD_DCtx* dctx, ZSTD_dParameter dParam, int value)
+{
     if (dctx->streamStage != zdss_init) return ERROR(stage_wrong);
-    dctx->format = format;
+    switch(dParam) {
+        case ZSTD_d_windowLogMax:
+            CHECK_DBOUNDS(ZSTD_d_windowLogMax, value);
+            dctx->maxWindowSize = ((size_t)1) << value;
+            return 0;
+        case ZSTD_d_format:
+            CHECK_DBOUNDS(ZSTD_d_format, value);
+            dctx->format = (ZSTD_format_e)value;
+            return 0;
+        default:;
+    }
+    return ERROR(parameter_unsupported);
+}
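+/* usage sketch (hypothetical caller) :
+ *     ZSTD_DCtx* const dctx = ZSTD_createDCtx();
+ *     size_t const err = ZSTD_DCtx_setParameter(dctx, ZSTD_d_windowLogMax, 27);
+ *     assert(!ZSTD_isError(err));
+ * on success, frames requiring a window larger than 1<<27 (128 MiB) are
+ * rejected instead of triggering an oversized allocation. */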
+
+size_t ZSTD_DCtx_reset(ZSTD_DCtx* dctx, ZSTD_ResetDirective reset)
+{
+    if ( (reset == ZSTD_reset_session_only)
+      || (reset == ZSTD_reset_session_and_parameters) ) {
+        (void)ZSTD_initDStream(dctx);
+    }
+    if ( (reset == ZSTD_reset_parameters)
+      || (reset == ZSTD_reset_session_and_parameters) ) {
+        if (dctx->streamStage != zdss_init)
+            return ERROR(stage_wrong);
+        dctx->format = ZSTD_f_zstd1;
+        dctx->maxWindowSize = ZSTD_MAXWINDOWSIZE_DEFAULT;
+    }
     return 0;
 }
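+/* note : ZSTD_reset_session_only can be applied at any point, while
+ * parameters may only be reset between frames (streamStage == zdss_init);
+ * ZSTD_reset_session_and_parameters performs the session reset first, so
+ * the stage check that follows it passes. */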
 
@@ -2799,7 +1374,7 @@
 
 size_t ZSTD_estimateDStreamSize_fromFrame(const void* src, size_t srcSize)
 {
-    U32 const windowSizeMax = 1U << ZSTD_WINDOWLOG_MAX;   /* note : should be user-selectable */
+    U32 const windowSizeMax = 1U << ZSTD_WINDOWLOG_MAX;   /* note : should be user-selectable, but requires an additional parameter (or a dctx) */
     ZSTD_frameHeader zfh;
     size_t const err = ZSTD_getFrameHeader(&zfh, src, srcSize);
     if (ZSTD_isError(err)) return err;
@@ -2868,8 +1443,8 @@
 #if defined(ZSTD_LEGACY_SUPPORT) && (ZSTD_LEGACY_SUPPORT>=1)
                     U32 const legacyVersion = ZSTD_isLegacy(istart, iend-istart);
                     if (legacyVersion) {
-                        const void* const dict = zds->ddict ? zds->ddict->dictContent : NULL;
-                        size_t const dictSize = zds->ddict ? zds->ddict->dictSize : 0;
+                        const void* const dict = zds->ddict ? ZSTD_DDict_dictContent(zds->ddict) : NULL;
+                        size_t const dictSize = zds->ddict ? ZSTD_DDict_dictSize(zds->ddict) : 0;
                         DEBUGLOG(5, "ZSTD_decompressStream: detected legacy version v0.%u", legacyVersion);
                         /* legacy support is incompatible with static dctx */
                         if (zds->staticSize) return ERROR(memory_allocation);
@@ -2894,7 +1469,7 @@
                             zds->lhSize += remainingInput;
                         }
                         input->pos = input->size;
-                        return (MAX(ZSTD_frameHeaderSize_min, hSize) - zds->lhSize) + ZSTD_blockHeaderSize;   /* remaining header bytes + next block header */
+                        return (MAX(ZSTD_FRAMEHEADERSIZE_MIN, hSize) - zds->lhSize) + ZSTD_blockHeaderSize;   /* remaining header bytes + next block header */
                     }
                     assert(ip != NULL);
                     memcpy(zds->headerBuffer + zds->lhSize, ip, toLoad); zds->lhSize = hSize; ip += toLoad;
@@ -2922,7 +1497,7 @@
             DEBUGLOG(4, "Consume header");
             CHECK_F(ZSTD_decompressBegin_usingDDict(zds, zds->ddict));
 
-            if ((MEM_readLE32(zds->headerBuffer) & 0xFFFFFFF0U) == ZSTD_MAGIC_SKIPPABLE_START) {  /* skippable frame */
+            if ((MEM_readLE32(zds->headerBuffer) & ZSTD_MAGIC_SKIPPABLE_MASK) == ZSTD_MAGIC_SKIPPABLE_START) {  /* skippable frame */
                 zds->expected = MEM_readLE32(zds->headerBuffer + ZSTD_FRAMEIDSIZE);
                 zds->stage = ZSTDds_skipFrame;
             } else {
@@ -3038,7 +1613,9 @@
             someMoreWork = 0;
             break;
 
-        default: return ERROR(GENERIC);   /* impossible */
+        default:
+            assert(0);    /* impossible */
+            return ERROR(GENERIC);   /* some compilers require default to do something */
     }   }
 
     /* result */
@@ -3080,13 +1657,7 @@
     }
 }
 
-
-size_t ZSTD_decompress_generic(ZSTD_DCtx* dctx, ZSTD_outBuffer* output, ZSTD_inBuffer* input)
-{
-    return ZSTD_decompressStream(dctx, output, input);
-}
-
-size_t ZSTD_decompress_generic_simpleArgs (
+size_t ZSTD_decompressStream_simpleArgs (
                             ZSTD_DCtx* dctx,
                             void* dst, size_t dstCapacity, size_t* dstPos,
                       const void* src, size_t srcSize, size_t* srcPos)
@@ -3094,15 +1665,8 @@
     ZSTD_outBuffer output = { dst, dstCapacity, *dstPos };
     ZSTD_inBuffer  input  = { src, srcSize, *srcPos };
     /* ZSTD_compress_generic() will check validity of dstPos and srcPos */
-    size_t const cErr = ZSTD_decompress_generic(dctx, &output, &input);
+    size_t const cErr = ZSTD_decompressStream(dctx, &output, &input);
     *dstPos = output.pos;
     *srcPos = input.pos;
     return cErr;
 }
-
-void ZSTD_DCtx_reset(ZSTD_DCtx* dctx)
-{
-    (void)ZSTD_initDStream(dctx);
-    dctx->format = ZSTD_f_zstd1;
-    dctx->maxWindowSize = ZSTD_MAXWINDOWSIZE_DEFAULT;
-}
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/python-zstandard/zstd/decompress/zstd_decompress_block.c	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,1307 @@
+/*
+ * Copyright (c) 2016-present, Yann Collet, Facebook, Inc.
+ * All rights reserved.
+ *
+ * This source code is licensed under both the BSD-style license (found in the
+ * LICENSE file in the root directory of this source tree) and the GPLv2 (found
+ * in the COPYING file in the root directory of this source tree).
+ * You may select, at your option, one of the above-listed licenses.
+ */
+
+/* zstd_decompress_block :
+ * this module takes care of decompressing _compressed_ blocks */
+
+/*-*******************************************************
+*  Dependencies
+*********************************************************/
+#include <string.h>      /* memcpy, memmove, memset */
+#include "compiler.h"    /* prefetch */
+#include "cpu.h"         /* bmi2 */
+#include "mem.h"         /* low level memory routines */
+#define FSE_STATIC_LINKING_ONLY
+#include "fse.h"
+#define HUF_STATIC_LINKING_ONLY
+#include "huf.h"
+#include "zstd_internal.h"
+#include "zstd_decompress_internal.h"   /* ZSTD_DCtx */
+#include "zstd_ddict.h"  /* ZSTD_DDictDictContent */
+#include "zstd_decompress_block.h"
+
+/*_*******************************************************
+*  Macros
+**********************************************************/
+
+/* These two optional macros each force the use of one of the two
+ * ZSTD_decompressSequences implementations. They cannot both be defined
+ * at the same time.
+ */
+#if defined(ZSTD_FORCE_DECOMPRESS_SEQUENCES_SHORT) && \
+    defined(ZSTD_FORCE_DECOMPRESS_SEQUENCES_LONG)
+#error "Cannot force the use of the short and the long ZSTD_decompressSequences variants!"
+#endif
+
+
+/*_*******************************************************
+*  Memory operations
+**********************************************************/
+static void ZSTD_copy4(void* dst, const void* src) { memcpy(dst, src, 4); }
+
+
+/*-*************************************************************
+ *   Block decoding
+ ***************************************************************/
+
+/*! ZSTD_getcBlockSize() :
+ *  Provides the size of the compressed block from the block header `src` */
+size_t ZSTD_getcBlockSize(const void* src, size_t srcSize,
+                          blockProperties_t* bpPtr)
+{
+    if (srcSize < ZSTD_blockHeaderSize) return ERROR(srcSize_wrong);
+    {   U32 const cBlockHeader = MEM_readLE24(src);
+        U32 const cSize = cBlockHeader >> 3;
+        bpPtr->lastBlock = cBlockHeader & 1;
+        bpPtr->blockType = (blockType_e)((cBlockHeader >> 1) & 3);
+        bpPtr->origSize = cSize;   /* only useful for RLE */
+        if (bpPtr->blockType == bt_rle) return 1;
+        if (bpPtr->blockType == bt_reserved) return ERROR(corruption_detected);
+        return cSize;
+    }
+}
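+
+/* Worked example (editorial note) : header bytes {0x25, 0x01, 0x00} read
+ * little-endian give cBlockHeader = 0x000125, hence :
+ *   lastBlock = 0x125 & 1        = 1   (final block of the frame)
+ *   blockType = (0x125 >> 1) & 3 = 2   (bt_compressed)
+ *   cSize     = 0x125 >> 3       = 36  (36 bytes of compressed payload follow)
+ */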
+
+
+/* Hidden declaration for fullbench */
+size_t ZSTD_decodeLiteralsBlock(ZSTD_DCtx* dctx,
+                          const void* src, size_t srcSize);
+/*! ZSTD_decodeLiteralsBlock() :
+ * @return : nb of bytes read from src (< srcSize)
+ *  note : symbol is not declared in any public header, but is exposed for fullbench via the hidden declaration above */
+size_t ZSTD_decodeLiteralsBlock(ZSTD_DCtx* dctx,
+                          const void* src, size_t srcSize)   /* note : srcSize < BLOCKSIZE */
+{
+    if (srcSize < MIN_CBLOCK_SIZE) return ERROR(corruption_detected);
+
+    {   const BYTE* const istart = (const BYTE*) src;
+        symbolEncodingType_e const litEncType = (symbolEncodingType_e)(istart[0] & 3);
+
+        switch(litEncType)
+        {
+        case set_repeat:
+            if (dctx->litEntropy==0) return ERROR(dictionary_corrupted);
+            /* fall-through */
+
+        case set_compressed:
+            if (srcSize < 5) return ERROR(corruption_detected);   /* srcSize >= MIN_CBLOCK_SIZE == 3; here we need up to 5 for case 3 */
+            {   size_t lhSize, litSize, litCSize;
+                U32 singleStream=0;
+                U32 const lhlCode = (istart[0] >> 2) & 3;
+                U32 const lhc = MEM_readLE32(istart);
+                size_t hufSuccess;
+                switch(lhlCode)
+                {
+                case 0: case 1: default:   /* note : default is impossible, since lhlCode is in [0..3] */
+                    /* 2 - 2 - 10 - 10 */
+                    singleStream = !lhlCode;
+                    lhSize = 3;
+                    litSize  = (lhc >> 4) & 0x3FF;
+                    litCSize = (lhc >> 14) & 0x3FF;
+                    break;
+                case 2:
+                    /* 2 - 2 - 14 - 14 */
+                    lhSize = 4;
+                    litSize  = (lhc >> 4) & 0x3FFF;
+                    litCSize = lhc >> 18;
+                    break;
+                case 3:
+                    /* 2 - 2 - 18 - 18 */
+                    lhSize = 5;
+                    litSize  = (lhc >> 4) & 0x3FFFF;
+                    litCSize = (lhc >> 22) + (istart[4] << 10);
+                    break;
+                }
+                if (litSize > ZSTD_BLOCKSIZE_MAX) return ERROR(corruption_detected);
+                if (litCSize + lhSize > srcSize) return ERROR(corruption_detected);
+
+                /* prefetch huffman table if cold */
+                if (dctx->ddictIsCold && (litSize > 768 /* heuristic */)) {
+                    PREFETCH_AREA(dctx->HUFptr, sizeof(dctx->entropy.hufTable));
+                }
+
+                if (litEncType==set_repeat) {
+                    if (singleStream) {
+                        hufSuccess = HUF_decompress1X_usingDTable_bmi2(
+                            dctx->litBuffer, litSize, istart+lhSize, litCSize,
+                            dctx->HUFptr, dctx->bmi2);
+                    } else {
+                        hufSuccess = HUF_decompress4X_usingDTable_bmi2(
+                            dctx->litBuffer, litSize, istart+lhSize, litCSize,
+                            dctx->HUFptr, dctx->bmi2);
+                    }
+                } else {
+                    if (singleStream) {
+#if defined(HUF_FORCE_DECOMPRESS_X2)
+                        hufSuccess = HUF_decompress1X_DCtx_wksp(
+                            dctx->entropy.hufTable, dctx->litBuffer, litSize,
+                            istart+lhSize, litCSize, dctx->workspace,
+                            sizeof(dctx->workspace));
+#else
+                        hufSuccess = HUF_decompress1X1_DCtx_wksp_bmi2(
+                            dctx->entropy.hufTable, dctx->litBuffer, litSize,
+                            istart+lhSize, litCSize, dctx->workspace,
+                            sizeof(dctx->workspace), dctx->bmi2);
+#endif
+                    } else {
+                        hufSuccess = HUF_decompress4X_hufOnly_wksp_bmi2(
+                            dctx->entropy.hufTable, dctx->litBuffer, litSize,
+                            istart+lhSize, litCSize, dctx->workspace,
+                            sizeof(dctx->workspace), dctx->bmi2);
+                    }
+                }
+
+                if (HUF_isError(hufSuccess)) return ERROR(corruption_detected);
+
+                dctx->litPtr = dctx->litBuffer;
+                dctx->litSize = litSize;
+                dctx->litEntropy = 1;
+                if (litEncType==set_compressed) dctx->HUFptr = dctx->entropy.hufTable;
+                memset(dctx->litBuffer + dctx->litSize, 0, WILDCOPY_OVERLENGTH);
+                return litCSize + lhSize;
+            }
+
+        case set_basic:
+            {   size_t litSize, lhSize;
+                U32 const lhlCode = ((istart[0]) >> 2) & 3;
+                switch(lhlCode)
+                {
+                case 0: case 2: default:   /* note : default is impossible, since lhlCode is in [0..3] */
+                    lhSize = 1;
+                    litSize = istart[0] >> 3;
+                    break;
+                case 1:
+                    lhSize = 2;
+                    litSize = MEM_readLE16(istart) >> 4;
+                    break;
+                case 3:
+                    lhSize = 3;
+                    litSize = MEM_readLE24(istart) >> 4;
+                    break;
+                }
+
+                if (lhSize+litSize+WILDCOPY_OVERLENGTH > srcSize) {  /* risk reading beyond src buffer with wildcopy */
+                    if (litSize+lhSize > srcSize) return ERROR(corruption_detected);
+                    memcpy(dctx->litBuffer, istart+lhSize, litSize);
+                    dctx->litPtr = dctx->litBuffer;
+                    dctx->litSize = litSize;
+                    memset(dctx->litBuffer + dctx->litSize, 0, WILDCOPY_OVERLENGTH);
+                    return lhSize+litSize;
+                }
+                /* direct reference into compressed stream */
+                dctx->litPtr = istart+lhSize;
+                dctx->litSize = litSize;
+                return lhSize+litSize;
+            }
+
+        case set_rle:
+            {   U32 const lhlCode = ((istart[0]) >> 2) & 3;
+                size_t litSize, lhSize;
+                switch(lhlCode)
+                {
+                case 0: case 2: default:   /* note : default is impossible, since lhlCode is in [0..3] */
+                    lhSize = 1;
+                    litSize = istart[0] >> 3;
+                    break;
+                case 1:
+                    lhSize = 2;
+                    litSize = MEM_readLE16(istart) >> 4;
+                    break;
+                case 3:
+                    lhSize = 3;
+                    litSize = MEM_readLE24(istart) >> 4;
+                    if (srcSize<4) return ERROR(corruption_detected);   /* srcSize >= MIN_CBLOCK_SIZE == 3; here we need lhSize+1 = 4 */
+                    break;
+                }
+                if (litSize > ZSTD_BLOCKSIZE_MAX) return ERROR(corruption_detected);
+                memset(dctx->litBuffer, istart[lhSize], litSize + WILDCOPY_OVERLENGTH);
+                dctx->litPtr = dctx->litBuffer;
+                dctx->litSize = litSize;
+                return lhSize+1;
+            }
+        default:
+            return ERROR(corruption_detected);   /* impossible */
+        }
+    }
+}
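+
+/* Worked example (editorial note) : a set_basic header with lhlCode = 1 and
+ * litSize = 100 is encoded as the 2 bytes {0x44, 0x06} : byte0 & 3 = 0
+ * (set_basic), (byte0 >> 2) & 3 = 1 (so lhSize = 2), MEM_readLE16() >> 4 = 0x64 = 100.
+ * The 100 raw literals follow the header, and the function returns 102. */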
+
+/* Default FSE distribution tables.
+ * These are pre-calculated FSE decoding tables using default distributions as defined in the specification :
+ * https://github.com/facebook/zstd/blob/master/doc/zstd_compression_format.md#default-distributions
+ * They were generated programmatically with the following method :
+ * - start from the default distributions, present in /lib/common/zstd_internal.h
+ * - generate the tables normally, using ZSTD_buildFSETable()
+ * - print out the contents of the tables
+ * - prettify the output, paste it below, and test with the fuzzer to ensure it's correct */
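+
+/* A minimal regeneration sketch (editorial note, not upstream code ; it
+ * additionally needs <stdio.h>, and the table-header row is written by hand) :
+ */
+#if 0
+static void LL_dumpDefaultDTable(void)
+{
+    static ZSTD_seqSymbol dt[(1<<LL_DEFAULTNORMLOG)+1];
+    U32 u;
+    ZSTD_buildFSETable(dt, LL_defaultNorm, MaxLL, LL_base, LL_bits, LL_DEFAULTNORMLOG);
+    for (u=1; u < (1u<<LL_DEFAULTNORMLOG) + 1; u++)   /* dt[0] holds the table header */
+        printf("{%3u,%3u,%3u,%5u},\n",
+               (unsigned)dt[u].nextState, (unsigned)dt[u].nbAdditionalBits,
+               (unsigned)dt[u].nbBits, (unsigned)dt[u].baseValue);
+}
+#endif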
+
+/* Default FSE distribution table for Literal Lengths */
+static const ZSTD_seqSymbol LL_defaultDTable[(1<<LL_DEFAULTNORMLOG)+1] = {
+     {  1,  1,  1, LL_DEFAULTNORMLOG},  /* header : fastMode, tableLog */
+     /* nextState, nbAddBits, nbBits, baseVal */
+     {  0,  0,  4,    0},  { 16,  0,  4,    0},
+     { 32,  0,  5,    1},  {  0,  0,  5,    3},
+     {  0,  0,  5,    4},  {  0,  0,  5,    6},
+     {  0,  0,  5,    7},  {  0,  0,  5,    9},
+     {  0,  0,  5,   10},  {  0,  0,  5,   12},
+     {  0,  0,  6,   14},  {  0,  1,  5,   16},
+     {  0,  1,  5,   20},  {  0,  1,  5,   22},
+     {  0,  2,  5,   28},  {  0,  3,  5,   32},
+     {  0,  4,  5,   48},  { 32,  6,  5,   64},
+     {  0,  7,  5,  128},  {  0,  8,  6,  256},
+     {  0, 10,  6, 1024},  {  0, 12,  6, 4096},
+     { 32,  0,  4,    0},  {  0,  0,  4,    1},
+     {  0,  0,  5,    2},  { 32,  0,  5,    4},
+     {  0,  0,  5,    5},  { 32,  0,  5,    7},
+     {  0,  0,  5,    8},  { 32,  0,  5,   10},
+     {  0,  0,  5,   11},  {  0,  0,  6,   13},
+     { 32,  1,  5,   16},  {  0,  1,  5,   18},
+     { 32,  1,  5,   22},  {  0,  2,  5,   24},
+     { 32,  3,  5,   32},  {  0,  3,  5,   40},
+     {  0,  6,  4,   64},  { 16,  6,  4,   64},
+     { 32,  7,  5,  128},  {  0,  9,  6,  512},
+     {  0, 11,  6, 2048},  { 48,  0,  4,    0},
+     { 16,  0,  4,    1},  { 32,  0,  5,    2},
+     { 32,  0,  5,    3},  { 32,  0,  5,    5},
+     { 32,  0,  5,    6},  { 32,  0,  5,    8},
+     { 32,  0,  5,    9},  { 32,  0,  5,   11},
+     { 32,  0,  5,   12},  {  0,  0,  6,   15},
+     { 32,  1,  5,   18},  { 32,  1,  5,   20},
+     { 32,  2,  5,   24},  { 32,  2,  5,   28},
+     { 32,  3,  5,   40},  { 32,  4,  5,   48},
+     {  0, 16,  6,65536},  {  0, 15,  6,32768},
+     {  0, 14,  6,16384},  {  0, 13,  6, 8192},
+};   /* LL_defaultDTable */
+
+/* Default FSE distribution table for Offset Codes */
+static const ZSTD_seqSymbol OF_defaultDTable[(1<<OF_DEFAULTNORMLOG)+1] = {
+    {  1,  1,  1, OF_DEFAULTNORMLOG},  /* header : fastMode, tableLog */
+    /* nextState, nbAddBits, nbBits, baseVal */
+    {  0,  0,  5,    0},     {  0,  6,  4,   61},
+    {  0,  9,  5,  509},     {  0, 15,  5,32765},
+    {  0, 21,  5,2097149},   {  0,  3,  5,    5},
+    {  0,  7,  4,  125},     {  0, 12,  5, 4093},
+    {  0, 18,  5,262141},    {  0, 23,  5,8388605},
+    {  0,  5,  5,   29},     {  0,  8,  4,  253},
+    {  0, 14,  5,16381},     {  0, 20,  5,1048573},
+    {  0,  2,  5,    1},     { 16,  7,  4,  125},
+    {  0, 11,  5, 2045},     {  0, 17,  5,131069},
+    {  0, 22,  5,4194301},   {  0,  4,  5,   13},
+    { 16,  8,  4,  253},     {  0, 13,  5, 8189},
+    {  0, 19,  5,524285},    {  0,  1,  5,    1},
+    { 16,  6,  4,   61},     {  0, 10,  5, 1021},
+    {  0, 16,  5,65533},     {  0, 28,  5,268435453},
+    {  0, 27,  5,134217725}, {  0, 26,  5,67108861},
+    {  0, 25,  5,33554429},  {  0, 24,  5,16777213},
+};   /* OF_defaultDTable */
+
+
+/* Default FSE distribution table for Match Lengths */
+static const ZSTD_seqSymbol ML_defaultDTable[(1<<ML_DEFAULTNORMLOG)+1] = {
+    {  1,  1,  1, ML_DEFAULTNORMLOG},  /* header : fastMode, tableLog */
+    /* nextState, nbAddBits, nbBits, baseVal */
+    {  0,  0,  6,    3},  {  0,  0,  4,    4},
+    { 32,  0,  5,    5},  {  0,  0,  5,    6},
+    {  0,  0,  5,    8},  {  0,  0,  5,    9},
+    {  0,  0,  5,   11},  {  0,  0,  6,   13},
+    {  0,  0,  6,   16},  {  0,  0,  6,   19},
+    {  0,  0,  6,   22},  {  0,  0,  6,   25},
+    {  0,  0,  6,   28},  {  0,  0,  6,   31},
+    {  0,  0,  6,   34},  {  0,  1,  6,   37},
+    {  0,  1,  6,   41},  {  0,  2,  6,   47},
+    {  0,  3,  6,   59},  {  0,  4,  6,   83},
+    {  0,  7,  6,  131},  {  0,  9,  6,  515},
+    { 16,  0,  4,    4},  {  0,  0,  4,    5},
+    { 32,  0,  5,    6},  {  0,  0,  5,    7},
+    { 32,  0,  5,    9},  {  0,  0,  5,   10},
+    {  0,  0,  6,   12},  {  0,  0,  6,   15},
+    {  0,  0,  6,   18},  {  0,  0,  6,   21},
+    {  0,  0,  6,   24},  {  0,  0,  6,   27},
+    {  0,  0,  6,   30},  {  0,  0,  6,   33},
+    {  0,  1,  6,   35},  {  0,  1,  6,   39},
+    {  0,  2,  6,   43},  {  0,  3,  6,   51},
+    {  0,  4,  6,   67},  {  0,  5,  6,   99},
+    {  0,  8,  6,  259},  { 32,  0,  4,    4},
+    { 48,  0,  4,    4},  { 16,  0,  4,    5},
+    { 32,  0,  5,    7},  { 32,  0,  5,    8},
+    { 32,  0,  5,   10},  { 32,  0,  5,   11},
+    {  0,  0,  6,   14},  {  0,  0,  6,   17},
+    {  0,  0,  6,   20},  {  0,  0,  6,   23},
+    {  0,  0,  6,   26},  {  0,  0,  6,   29},
+    {  0,  0,  6,   32},  {  0, 16,  6,65539},
+    {  0, 15,  6,32771},  {  0, 14,  6,16387},
+    {  0, 13,  6, 8195},  {  0, 12,  6, 4099},
+    {  0, 11,  6, 2051},  {  0, 10,  6, 1027},
+};   /* ML_defaultDTable */
+
+
+static void ZSTD_buildSeqTable_rle(ZSTD_seqSymbol* dt, U32 baseValue, U32 nbAddBits)
+{
+    void* ptr = dt;
+    ZSTD_seqSymbol_header* const DTableH = (ZSTD_seqSymbol_header*)ptr;
+    ZSTD_seqSymbol* const cell = dt + 1;
+
+    DTableH->tableLog = 0;
+    DTableH->fastMode = 0;
+
+    cell->nbBits = 0;
+    cell->nextState = 0;
+    assert(nbAddBits < 255);
+    cell->nbAdditionalBits = (BYTE)nbAddBits;
+    cell->baseValue = baseValue;
+}
+
+
+/* ZSTD_buildFSETable() :
+ * generate FSE decoding table for one symbol (ll, ml or off)
+ * cannot fail if input is valid =>
+ * all inputs are presumed validated at this stage */
+void
+ZSTD_buildFSETable(ZSTD_seqSymbol* dt,
+            const short* normalizedCounter, unsigned maxSymbolValue,
+            const U32* baseValue, const U32* nbAdditionalBits,
+            unsigned tableLog)
+{
+    ZSTD_seqSymbol* const tableDecode = dt+1;
+    U16 symbolNext[MaxSeq+1];
+
+    U32 const maxSV1 = maxSymbolValue + 1;
+    U32 const tableSize = 1 << tableLog;
+    U32 highThreshold = tableSize-1;
+
+    /* Sanity Checks */
+    assert(maxSymbolValue <= MaxSeq);
+    assert(tableLog <= MaxFSELog);
+
+    /* Init, lay down lowprob symbols */
+    {   ZSTD_seqSymbol_header DTableH;
+        DTableH.tableLog = tableLog;
+        DTableH.fastMode = 1;
+        {   S16 const largeLimit= (S16)(1 << (tableLog-1));
+            U32 s;
+            for (s=0; s<maxSV1; s++) {
+                if (normalizedCounter[s]==-1) {
+                    tableDecode[highThreshold--].baseValue = s;
+                    symbolNext[s] = 1;
+                } else {
+                    if (normalizedCounter[s] >= largeLimit) DTableH.fastMode=0;
+                    symbolNext[s] = normalizedCounter[s];
+        }   }   }
+        memcpy(dt, &DTableH, sizeof(DTableH));
+    }
+
+    /* Spread symbols */
+    {   U32 const tableMask = tableSize-1;
+        U32 const step = FSE_TABLESTEP(tableSize);
+        U32 s, position = 0;
+        for (s=0; s<maxSV1; s++) {
+            int i;
+            for (i=0; i<normalizedCounter[s]; i++) {
+                tableDecode[position].baseValue = s;
+                position = (position + step) & tableMask;
+                while (position > highThreshold) position = (position + step) & tableMask;   /* lowprob area */
+        }   }
+        assert(position == 0); /* position must reach all cells once, otherwise normalizedCounter is incorrect */
+    }
+
+    /* Build Decoding table */
+    {   U32 u;
+        for (u=0; u<tableSize; u++) {
+            U32 const symbol = tableDecode[u].baseValue;
+            U32 const nextState = symbolNext[symbol]++;
+            tableDecode[u].nbBits = (BYTE) (tableLog - BIT_highbit32(nextState) );
+            tableDecode[u].nextState = (U16) ( (nextState << tableDecode[u].nbBits) - tableSize);
+            assert(nbAdditionalBits[symbol] < 255);
+            tableDecode[u].nbAdditionalBits = (BYTE)nbAdditionalBits[symbol];
+            tableDecode[u].baseValue = baseValue[symbol];
+    }   }
+}
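+
+/* Editorial note : for the power-of-two table sizes used here (tableLog >= 5),
+ * FSE_TABLESTEP(tableSize) = (tableSize>>1) + (tableSize>>3) + 3 is odd, hence
+ * coprime with tableSize, so the spreading walk visits every cell exactly once.
+ * e.g. tableSize = 32 gives step = 16+4+3 = 23, and the sequence
+ * 0, 23, 14, 5, 28, ... returns to 0 only after all 32 cells are visited,
+ * which is why the assert(position == 0) above holds for a valid counter. */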
+
+
+/*! ZSTD_buildSeqTable() :
+ * @return : nb bytes read from src,
+ *           or an error code if it fails */
+static size_t ZSTD_buildSeqTable(ZSTD_seqSymbol* DTableSpace, const ZSTD_seqSymbol** DTablePtr,
+                                 symbolEncodingType_e type, unsigned max, U32 maxLog,
+                                 const void* src, size_t srcSize,
+                                 const U32* baseValue, const U32* nbAdditionalBits,
+                                 const ZSTD_seqSymbol* defaultTable, U32 flagRepeatTable,
+                                 int ddictIsCold, int nbSeq)
+{
+    switch(type)
+    {
+    case set_rle :
+        if (!srcSize) return ERROR(srcSize_wrong);
+        if ( (*(const BYTE*)src) > max) return ERROR(corruption_detected);
+        {   U32 const symbol = *(const BYTE*)src;
+            U32 const baseline = baseValue[symbol];
+            U32 const nbBits = nbAdditionalBits[symbol];
+            ZSTD_buildSeqTable_rle(DTableSpace, baseline, nbBits);
+        }
+        *DTablePtr = DTableSpace;
+        return 1;
+    case set_basic :
+        *DTablePtr = defaultTable;
+        return 0;
+    case set_repeat:
+        if (!flagRepeatTable) return ERROR(corruption_detected);
+        /* prefetch FSE table if used */
+        if (ddictIsCold && (nbSeq > 24 /* heuristic */)) {
+            const void* const pStart = *DTablePtr;
+            size_t const pSize = sizeof(ZSTD_seqSymbol) * (SEQSYMBOL_TABLE_SIZE(maxLog));
+            PREFETCH_AREA(pStart, pSize);
+        }
+        return 0;
+    case set_compressed :
+        {   unsigned tableLog;
+            S16 norm[MaxSeq+1];
+            size_t const headerSize = FSE_readNCount(norm, &max, &tableLog, src, srcSize);
+            if (FSE_isError(headerSize)) return ERROR(corruption_detected);
+            if (tableLog > maxLog) return ERROR(corruption_detected);
+            ZSTD_buildFSETable(DTableSpace, norm, max, baseValue, nbAdditionalBits, tableLog);
+            *DTablePtr = DTableSpace;
+            return headerSize;
+        }
+    default :   /* impossible */
+        assert(0);
+        return ERROR(GENERIC);
+    }
+}
+
+size_t ZSTD_decodeSeqHeaders(ZSTD_DCtx* dctx, int* nbSeqPtr,
+                             const void* src, size_t srcSize)
+{
+    const BYTE* const istart = (const BYTE* const)src;
+    const BYTE* const iend = istart + srcSize;
+    const BYTE* ip = istart;
+    int nbSeq;
+    DEBUGLOG(5, "ZSTD_decodeSeqHeaders");
+
+    /* check */
+    if (srcSize < MIN_SEQUENCES_SIZE) return ERROR(srcSize_wrong);
+
+    /* SeqHead */
+    nbSeq = *ip++;
+    if (!nbSeq) {
+        *nbSeqPtr=0;
+        if (srcSize != 1) return ERROR(srcSize_wrong);
+        return 1;
+    }
+    if (nbSeq > 0x7F) {
+        if (nbSeq == 0xFF) {
+            if (ip+2 > iend) return ERROR(srcSize_wrong);
+            nbSeq = MEM_readLE16(ip) + LONGNBSEQ, ip+=2;
+        } else {
+            if (ip >= iend) return ERROR(srcSize_wrong);
+            nbSeq = ((nbSeq-0x80)<<8) + *ip++;
+        }
+    }
+    *nbSeqPtr = nbSeq;
+
+    /* FSE table descriptors */
+    if (ip+4 > iend) return ERROR(srcSize_wrong); /* minimum possible size */
+    {   symbolEncodingType_e const LLtype = (symbolEncodingType_e)(*ip >> 6);
+        symbolEncodingType_e const OFtype = (symbolEncodingType_e)((*ip >> 4) & 3);
+        symbolEncodingType_e const MLtype = (symbolEncodingType_e)((*ip >> 2) & 3);
+        ip++;
+
+        /* Build DTables */
+        {   size_t const llhSize = ZSTD_buildSeqTable(dctx->entropy.LLTable, &dctx->LLTptr,
+                                                      LLtype, MaxLL, LLFSELog,
+                                                      ip, iend-ip,
+                                                      LL_base, LL_bits,
+                                                      LL_defaultDTable, dctx->fseEntropy,
+                                                      dctx->ddictIsCold, nbSeq);
+            if (ZSTD_isError(llhSize)) return ERROR(corruption_detected);
+            ip += llhSize;
+        }
+
+        {   size_t const ofhSize = ZSTD_buildSeqTable(dctx->entropy.OFTable, &dctx->OFTptr,
+                                                      OFtype, MaxOff, OffFSELog,
+                                                      ip, iend-ip,
+                                                      OF_base, OF_bits,
+                                                      OF_defaultDTable, dctx->fseEntropy,
+                                                      dctx->ddictIsCold, nbSeq);
+            if (ZSTD_isError(ofhSize)) return ERROR(corruption_detected);
+            ip += ofhSize;
+        }
+
+        {   size_t const mlhSize = ZSTD_buildSeqTable(dctx->entropy.MLTable, &dctx->MLTptr,
+                                                      MLtype, MaxML, MLFSELog,
+                                                      ip, iend-ip,
+                                                      ML_base, ML_bits,
+                                                      ML_defaultDTable, dctx->fseEntropy,
+                                                      dctx->ddictIsCold, nbSeq);
+            if (ZSTD_isError(mlhSize)) return ERROR(corruption_detected);
+            ip += mlhSize;
+        }
+    }
+
+    return ip-istart;
+}
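+
+/* Worked example (editorial note) : the sequence-count header is 1 to 3 bytes :
+ *   {0x20}             -> nbSeq = 0x20 = 32                 (single byte <= 0x7F)
+ *   {0x81, 0x34}       -> nbSeq = ((0x81-0x80)<<8) + 0x34 = 308
+ *   {0xFF, 0x10, 0x00} -> nbSeq = 0x0010 + LONGNBSEQ = 16 + 0x7F00 = 32528
+ */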
+
+
+typedef struct {
+    size_t litLength;
+    size_t matchLength;
+    size_t offset;
+    const BYTE* match;
+} seq_t;
+
+typedef struct {
+    size_t state;
+    const ZSTD_seqSymbol* table;
+} ZSTD_fseState;
+
+typedef struct {
+    BIT_DStream_t DStream;
+    ZSTD_fseState stateLL;
+    ZSTD_fseState stateOffb;
+    ZSTD_fseState stateML;
+    size_t prevOffset[ZSTD_REP_NUM];
+    const BYTE* prefixStart;
+    const BYTE* dictEnd;
+    size_t pos;
+} seqState_t;
+
+
+/* ZSTD_execSequenceLast7():
+ * exceptional case : decompress a match starting within the last 7 bytes of the output buffer.
+ * requires more careful checks, to ensure there is no overflow.
+ * performance does not matter though.
+ * note : this case is supposed never to be generated "naturally" by the reference encoder,
+ *        since in most cases it needs at least 8 bytes to look for a match.
+ *        but it's allowed by the specification. */
+FORCE_NOINLINE
+size_t ZSTD_execSequenceLast7(BYTE* op,
+                              BYTE* const oend, seq_t sequence,
+                              const BYTE** litPtr, const BYTE* const litLimit,
+                              const BYTE* const base, const BYTE* const vBase, const BYTE* const dictEnd)
+{
+    BYTE* const oLitEnd = op + sequence.litLength;
+    size_t const sequenceLength = sequence.litLength + sequence.matchLength;
+    BYTE* const oMatchEnd = op + sequenceLength;   /* risk : address space overflow (32-bits) */
+    const BYTE* const iLitEnd = *litPtr + sequence.litLength;
+    const BYTE* match = oLitEnd - sequence.offset;
+
+    /* check */
+    if (oMatchEnd>oend) return ERROR(dstSize_tooSmall);   /* last match must fit within dstBuffer */
+    if (iLitEnd > litLimit) return ERROR(corruption_detected);   /* try to read beyond literal buffer */
+
+    /* copy literals */
+    while (op < oLitEnd) *op++ = *(*litPtr)++;
+
+    /* copy Match */
+    if (sequence.offset > (size_t)(oLitEnd - base)) {
+        /* offset beyond prefix */
+        if (sequence.offset > (size_t)(oLitEnd - vBase)) return ERROR(corruption_detected);
+        match = dictEnd - (base-match);
+        if (match + sequence.matchLength <= dictEnd) {
+            memmove(oLitEnd, match, sequence.matchLength);
+            return sequenceLength;
+        }
+        /* span extDict & currentPrefixSegment */
+        {   size_t const length1 = dictEnd - match;
+            memmove(oLitEnd, match, length1);
+            op = oLitEnd + length1;
+            sequence.matchLength -= length1;
+            match = base;
+    }   }
+    while (op < oMatchEnd) *op++ = *match++;
+    return sequenceLength;
+}
+
+
+HINT_INLINE
+size_t ZSTD_execSequence(BYTE* op,
+                         BYTE* const oend, seq_t sequence,
+                         const BYTE** litPtr, const BYTE* const litLimit,
+                         const BYTE* const prefixStart, const BYTE* const virtualStart, const BYTE* const dictEnd)
+{
+    BYTE* const oLitEnd = op + sequence.litLength;
+    size_t const sequenceLength = sequence.litLength + sequence.matchLength;
+    BYTE* const oMatchEnd = op + sequenceLength;   /* risk : address space overflow (32-bits) */
+    BYTE* const oend_w = oend - WILDCOPY_OVERLENGTH;
+    const BYTE* const iLitEnd = *litPtr + sequence.litLength;
+    const BYTE* match = oLitEnd - sequence.offset;
+
+    /* check */
+    if (oMatchEnd>oend) return ERROR(dstSize_tooSmall); /* last match must fit within dstBuffer */
+    if (iLitEnd > litLimit) return ERROR(corruption_detected);   /* over-read beyond lit buffer */
+    if (oLitEnd>oend_w) return ZSTD_execSequenceLast7(op, oend, sequence, litPtr, litLimit, prefixStart, virtualStart, dictEnd);
+
+    /* copy Literals */
+    ZSTD_copy8(op, *litPtr);
+    if (sequence.litLength > 8)
+        ZSTD_wildcopy(op+8, (*litPtr)+8, sequence.litLength - 8);   /* note : since oLitEnd <= oend-WILDCOPY_OVERLENGTH, no risk of overwrite beyond oend */
+    op = oLitEnd;
+    *litPtr = iLitEnd;   /* update for next sequence */
+
+    /* copy Match */
+    if (sequence.offset > (size_t)(oLitEnd - prefixStart)) {
+        /* offset beyond prefix -> go into extDict */
+        if (sequence.offset > (size_t)(oLitEnd - virtualStart))
+            return ERROR(corruption_detected);
+        match = dictEnd + (match - prefixStart);
+        if (match + sequence.matchLength <= dictEnd) {
+            memmove(oLitEnd, match, sequence.matchLength);
+            return sequenceLength;
+        }
+        /* span extDict & currentPrefixSegment */
+        {   size_t const length1 = dictEnd - match;
+            memmove(oLitEnd, match, length1);
+            op = oLitEnd + length1;
+            sequence.matchLength -= length1;
+            match = prefixStart;
+            if (op > oend_w || sequence.matchLength < MINMATCH) {
+              U32 i;
+              for (i = 0; i < sequence.matchLength; ++i) op[i] = match[i];
+              return sequenceLength;
+            }
+    }   }
+    /* Requirement: op <= oend_w && sequence.matchLength >= MINMATCH */
+
+    /* match within prefix */
+    if (sequence.offset < 8) {
+        /* close range match, overlap */
+        static const U32 dec32table[] = { 0, 1, 2, 1, 4, 4, 4, 4 };   /* added */
+        static const int dec64table[] = { 8, 8, 8, 7, 8, 9,10,11 };   /* subtracted */
+        int const sub2 = dec64table[sequence.offset];
+        op[0] = match[0];
+        op[1] = match[1];
+        op[2] = match[2];
+        op[3] = match[3];
+        match += dec32table[sequence.offset];
+        ZSTD_copy4(op+4, match);
+        match -= sub2;
+    } else {
+        ZSTD_copy8(op, match);
+    }
+    op += 8; match += 8;
+
+    if (oMatchEnd > oend-(16-MINMATCH)) {
+        if (op < oend_w) {
+            ZSTD_wildcopy(op, match, oend_w - op);
+            match += oend_w - op;
+            op = oend_w;
+        }
+        while (op < oMatchEnd) *op++ = *match++;
+    } else {
+        ZSTD_wildcopy(op, match, (ptrdiff_t)sequence.matchLength-8);   /* works even if matchLength < 8 */
+    }
+    return sequenceLength;
+}
+
+
+HINT_INLINE
+size_t ZSTD_execSequenceLong(BYTE* op,
+                             BYTE* const oend, seq_t sequence,
+                             const BYTE** litPtr, const BYTE* const litLimit,
+                             const BYTE* const prefixStart, const BYTE* const dictStart, const BYTE* const dictEnd)
+{
+    BYTE* const oLitEnd = op + sequence.litLength;
+    size_t const sequenceLength = sequence.litLength + sequence.matchLength;
+    BYTE* const oMatchEnd = op + sequenceLength;   /* risk : address space overflow (32-bits) */
+    BYTE* const oend_w = oend - WILDCOPY_OVERLENGTH;
+    const BYTE* const iLitEnd = *litPtr + sequence.litLength;
+    const BYTE* match = sequence.match;
+
+    /* check */
+    if (oMatchEnd > oend) return ERROR(dstSize_tooSmall); /* last match must fit within dstBuffer */
+    if (iLitEnd > litLimit) return ERROR(corruption_detected);   /* over-read beyond lit buffer */
+    if (oLitEnd > oend_w) return ZSTD_execSequenceLast7(op, oend, sequence, litPtr, litLimit, prefixStart, dictStart, dictEnd);
+
+    /* copy Literals */
+    ZSTD_copy8(op, *litPtr);  /* note : op <= oLitEnd <= oend_w == oend - 8 */
+    if (sequence.litLength > 8)
+        ZSTD_wildcopy(op+8, (*litPtr)+8, sequence.litLength - 8);   /* note : since oLitEnd <= oend-WILDCOPY_OVERLENGTH, no risk of overwrite beyond oend */
+    op = oLitEnd;
+    *litPtr = iLitEnd;   /* update for next sequence */
+
+    /* copy Match */
+    if (sequence.offset > (size_t)(oLitEnd - prefixStart)) {
+        /* offset beyond prefix */
+        if (sequence.offset > (size_t)(oLitEnd - dictStart)) return ERROR(corruption_detected);
+        if (match + sequence.matchLength <= dictEnd) {
+            memmove(oLitEnd, match, sequence.matchLength);
+            return sequenceLength;
+        }
+        /* span extDict & currentPrefixSegment */
+        {   size_t const length1 = dictEnd - match;
+            memmove(oLitEnd, match, length1);
+            op = oLitEnd + length1;
+            sequence.matchLength -= length1;
+            match = prefixStart;
+            if (op > oend_w || sequence.matchLength < MINMATCH) {
+              U32 i;
+              for (i = 0; i < sequence.matchLength; ++i) op[i] = match[i];
+              return sequenceLength;
+            }
+    }   }
+    assert(op <= oend_w);
+    assert(sequence.matchLength >= MINMATCH);
+
+    /* match within prefix */
+    if (sequence.offset < 8) {
+        /* close range match, overlap */
+        static const U32 dec32table[] = { 0, 1, 2, 1, 4, 4, 4, 4 };   /* added */
+        static const int dec64table[] = { 8, 8, 8, 7, 8, 9,10,11 };   /* subtracted */
+        int const sub2 = dec64table[sequence.offset];
+        op[0] = match[0];
+        op[1] = match[1];
+        op[2] = match[2];
+        op[3] = match[3];
+        match += dec32table[sequence.offset];
+        ZSTD_copy4(op+4, match);
+        match -= sub2;
+    } else {
+        ZSTD_copy8(op, match);
+    }
+    op += 8; match += 8;
+
+    if (oMatchEnd > oend-(16-MINMATCH)) {
+        if (op < oend_w) {
+            ZSTD_wildcopy(op, match, oend_w - op);
+            match += oend_w - op;
+            op = oend_w;
+        }
+        while (op < oMatchEnd) *op++ = *match++;
+    } else {
+        ZSTD_wildcopy(op, match, (ptrdiff_t)sequence.matchLength-8);   /* works even if matchLength < 8 */
+    }
+    return sequenceLength;
+}
+
+static void
+ZSTD_initFseState(ZSTD_fseState* DStatePtr, BIT_DStream_t* bitD, const ZSTD_seqSymbol* dt)
+{
+    const void* ptr = dt;
+    const ZSTD_seqSymbol_header* const DTableH = (const ZSTD_seqSymbol_header*)ptr;
+    DStatePtr->state = BIT_readBits(bitD, DTableH->tableLog);
+    DEBUGLOG(6, "ZSTD_initFseState : val=%u using %u bits",
+                (U32)DStatePtr->state, DTableH->tableLog);
+    BIT_reloadDStream(bitD);
+    DStatePtr->table = dt + 1;
+}
+
+FORCE_INLINE_TEMPLATE void
+ZSTD_updateFseState(ZSTD_fseState* DStatePtr, BIT_DStream_t* bitD)
+{
+    ZSTD_seqSymbol const DInfo = DStatePtr->table[DStatePtr->state];
+    U32 const nbBits = DInfo.nbBits;
+    size_t const lowBits = BIT_readBits(bitD, nbBits);
+    DStatePtr->state = DInfo.nextState + lowBits;
+}
+
+/* We need to add at most (ZSTD_WINDOWLOG_MAX_32 - 1) bits to read the maximum
+ * offset bits. But we can only read at most (STREAM_ACCUMULATOR_MIN_32 - 1)
+ * bits before reloading. This value is the maximum number of bits we read
+ * after reloading when we are decoding long offsets.
+ */
+#define LONG_OFFSETS_MAX_EXTRA_BITS_32                       \
+    (ZSTD_WINDOWLOG_MAX_32 > STREAM_ACCUMULATOR_MIN_32       \
+        ? ZSTD_WINDOWLOG_MAX_32 - STREAM_ACCUMULATOR_MIN_32  \
+        : 0)
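+
+/* e.g. (editorial note) on 32-bit builds ZSTD_WINDOWLOG_MAX_32 = 30 and
+ * STREAM_ACCUMULATOR_MIN_32 = 25, so at most 30 - 25 = 5 extra bits remain
+ * to be read after the reload, matching the ZSTD_STATIC_ASSERT(... == 5)
+ * checks in the decoders below. */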
+
+typedef enum { ZSTD_lo_isRegularOffset, ZSTD_lo_isLongOffset=1 } ZSTD_longOffset_e;
+
+#ifndef ZSTD_FORCE_DECOMPRESS_SEQUENCES_LONG
+FORCE_INLINE_TEMPLATE seq_t
+ZSTD_decodeSequence(seqState_t* seqState, const ZSTD_longOffset_e longOffsets)
+{
+    seq_t seq;
+    U32 const llBits = seqState->stateLL.table[seqState->stateLL.state].nbAdditionalBits;
+    U32 const mlBits = seqState->stateML.table[seqState->stateML.state].nbAdditionalBits;
+    U32 const ofBits = seqState->stateOffb.table[seqState->stateOffb.state].nbAdditionalBits;
+    U32 const totalBits = llBits+mlBits+ofBits;
+    U32 const llBase = seqState->stateLL.table[seqState->stateLL.state].baseValue;
+    U32 const mlBase = seqState->stateML.table[seqState->stateML.state].baseValue;
+    U32 const ofBase = seqState->stateOffb.table[seqState->stateOffb.state].baseValue;
+
+    /* sequence */
+    {   size_t offset;
+        if (!ofBits)
+            offset = 0;
+        else {
+            ZSTD_STATIC_ASSERT(ZSTD_lo_isLongOffset == 1);
+            ZSTD_STATIC_ASSERT(LONG_OFFSETS_MAX_EXTRA_BITS_32 == 5);
+            assert(ofBits <= MaxOff);
+            if (MEM_32bits() && longOffsets && (ofBits >= STREAM_ACCUMULATOR_MIN_32)) {
+                U32 const extraBits = ofBits - MIN(ofBits, 32 - seqState->DStream.bitsConsumed);
+                offset = ofBase + (BIT_readBitsFast(&seqState->DStream, ofBits - extraBits) << extraBits);
+                BIT_reloadDStream(&seqState->DStream);
+                if (extraBits) offset += BIT_readBitsFast(&seqState->DStream, extraBits);
+                assert(extraBits <= LONG_OFFSETS_MAX_EXTRA_BITS_32);   /* to avoid another reload */
+            } else {
+                offset = ofBase + BIT_readBitsFast(&seqState->DStream, ofBits/*>0*/);   /* <=  (ZSTD_WINDOWLOG_MAX-1) bits */
+                if (MEM_32bits()) BIT_reloadDStream(&seqState->DStream);
+            }
+        }
+
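+        /* Repeat-offset convention (editorial note) : offset codes with
+         * ofBits <= 1 select one of the 3 previous offsets. After the
+         * litLength==0 adjustment, 0 reuses prevOffset[0] unchanged,
+         * 1 and 2 promote prevOffset[1] / prevOffset[2] to the front,
+         * and 3 means prevOffset[0]-1 ; history is kept most-recent first. */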
+        if (ofBits <= 1) {
+            offset += (llBase==0);
+            if (offset) {
+                size_t temp = (offset==3) ? seqState->prevOffset[0] - 1 : seqState->prevOffset[offset];
+                temp += !temp;   /* 0 is not valid; input is corrupted; force offset to 1 */
+                if (offset != 1) seqState->prevOffset[2] = seqState->prevOffset[1];
+                seqState->prevOffset[1] = seqState->prevOffset[0];
+                seqState->prevOffset[0] = offset = temp;
+            } else {  /* offset == 0 */
+                offset = seqState->prevOffset[0];
+            }
+        } else {
+            seqState->prevOffset[2] = seqState->prevOffset[1];
+            seqState->prevOffset[1] = seqState->prevOffset[0];
+            seqState->prevOffset[0] = offset;
+        }
+        seq.offset = offset;
+    }
+
+    seq.matchLength = mlBase
+                    + ((mlBits>0) ? BIT_readBitsFast(&seqState->DStream, mlBits/*>0*/) : 0);  /* <=  16 bits */
+    if (MEM_32bits() && (mlBits+llBits >= STREAM_ACCUMULATOR_MIN_32-LONG_OFFSETS_MAX_EXTRA_BITS_32))
+        BIT_reloadDStream(&seqState->DStream);
+    if (MEM_64bits() && (totalBits >= STREAM_ACCUMULATOR_MIN_64-(LLFSELog+MLFSELog+OffFSELog)))
+        BIT_reloadDStream(&seqState->DStream);
+    /* Ensure there are enough bits to read the rest of the data in 64-bit mode. */
+    ZSTD_STATIC_ASSERT(16+LLFSELog+MLFSELog+OffFSELog < STREAM_ACCUMULATOR_MIN_64);
+
+    seq.litLength = llBase
+                  + ((llBits>0) ? BIT_readBitsFast(&seqState->DStream, llBits/*>0*/) : 0);    /* <=  16 bits */
+    if (MEM_32bits())
+        BIT_reloadDStream(&seqState->DStream);
+
+    DEBUGLOG(6, "seq: litL=%u, matchL=%u, offset=%u",
+                (U32)seq.litLength, (U32)seq.matchLength, (U32)seq.offset);
+
+    /* ANS state update */
+    ZSTD_updateFseState(&seqState->stateLL, &seqState->DStream);    /* <=  9 bits */
+    ZSTD_updateFseState(&seqState->stateML, &seqState->DStream);    /* <=  9 bits */
+    if (MEM_32bits()) BIT_reloadDStream(&seqState->DStream);    /* <= 18 bits */
+    ZSTD_updateFseState(&seqState->stateOffb, &seqState->DStream);  /* <=  8 bits */
+
+    return seq;
+}
+
+FORCE_INLINE_TEMPLATE size_t
+ZSTD_decompressSequences_body( ZSTD_DCtx* dctx,
+                               void* dst, size_t maxDstSize,
+                         const void* seqStart, size_t seqSize, int nbSeq,
+                         const ZSTD_longOffset_e isLongOffset)
+{
+    const BYTE* ip = (const BYTE*)seqStart;
+    const BYTE* const iend = ip + seqSize;
+    BYTE* const ostart = (BYTE* const)dst;
+    BYTE* const oend = ostart + maxDstSize;
+    BYTE* op = ostart;
+    const BYTE* litPtr = dctx->litPtr;
+    const BYTE* const litEnd = litPtr + dctx->litSize;
+    const BYTE* const prefixStart = (const BYTE*) (dctx->prefixStart);
+    const BYTE* const vBase = (const BYTE*) (dctx->virtualStart);
+    const BYTE* const dictEnd = (const BYTE*) (dctx->dictEnd);
+    DEBUGLOG(5, "ZSTD_decompressSequences_body");
+
+    /* Regen sequences */
+    if (nbSeq) {
+        seqState_t seqState;
+        dctx->fseEntropy = 1;
+        { U32 i; for (i=0; i<ZSTD_REP_NUM; i++) seqState.prevOffset[i] = dctx->entropy.rep[i]; }
+        CHECK_E(BIT_initDStream(&seqState.DStream, ip, iend-ip), corruption_detected);
+        ZSTD_initFseState(&seqState.stateLL, &seqState.DStream, dctx->LLTptr);
+        ZSTD_initFseState(&seqState.stateOffb, &seqState.DStream, dctx->OFTptr);
+        ZSTD_initFseState(&seqState.stateML, &seqState.DStream, dctx->MLTptr);
+
+        for ( ; (BIT_reloadDStream(&(seqState.DStream)) <= BIT_DStream_completed) && nbSeq ; ) {
+            nbSeq--;
+            {   seq_t const sequence = ZSTD_decodeSequence(&seqState, isLongOffset);
+                size_t const oneSeqSize = ZSTD_execSequence(op, oend, sequence, &litPtr, litEnd, prefixStart, vBase, dictEnd);
+                DEBUGLOG(6, "regenerated sequence size : %u", (U32)oneSeqSize);
+                if (ZSTD_isError(oneSeqSize)) return oneSeqSize;
+                op += oneSeqSize;
+        }   }
+
+        /* check if reached exact end */
+        DEBUGLOG(5, "ZSTD_decompressSequences_body: after decode loop, remaining nbSeq : %i", nbSeq);
+        if (nbSeq) return ERROR(corruption_detected);
+        /* save reps for next block */
+        { U32 i; for (i=0; i<ZSTD_REP_NUM; i++) dctx->entropy.rep[i] = (U32)(seqState.prevOffset[i]); }
+    }
+
+    /* last literal segment */
+    {   size_t const lastLLSize = litEnd - litPtr;
+        if (lastLLSize > (size_t)(oend-op)) return ERROR(dstSize_tooSmall);
+        memcpy(op, litPtr, lastLLSize);
+        op += lastLLSize;
+    }
+
+    return op-ostart;
+}
+
+static size_t
+ZSTD_decompressSequences_default(ZSTD_DCtx* dctx,
+                                 void* dst, size_t maxDstSize,
+                           const void* seqStart, size_t seqSize, int nbSeq,
+                           const ZSTD_longOffset_e isLongOffset)
+{
+    return ZSTD_decompressSequences_body(dctx, dst, maxDstSize, seqStart, seqSize, nbSeq, isLongOffset);
+}
+#endif /* ZSTD_FORCE_DECOMPRESS_SEQUENCES_LONG */
+
+
+
+#ifndef ZSTD_FORCE_DECOMPRESS_SEQUENCES_SHORT
+FORCE_INLINE_TEMPLATE seq_t
+ZSTD_decodeSequenceLong(seqState_t* seqState, ZSTD_longOffset_e const longOffsets)
+{
+    seq_t seq;
+    U32 const llBits = seqState->stateLL.table[seqState->stateLL.state].nbAdditionalBits;
+    U32 const mlBits = seqState->stateML.table[seqState->stateML.state].nbAdditionalBits;
+    U32 const ofBits = seqState->stateOffb.table[seqState->stateOffb.state].nbAdditionalBits;
+    U32 const totalBits = llBits+mlBits+ofBits;
+    U32 const llBase = seqState->stateLL.table[seqState->stateLL.state].baseValue;
+    U32 const mlBase = seqState->stateML.table[seqState->stateML.state].baseValue;
+    U32 const ofBase = seqState->stateOffb.table[seqState->stateOffb.state].baseValue;
+
+    /* sequence */
+    {   size_t offset;
+        if (!ofBits)
+            offset = 0;
+        else {
+            ZSTD_STATIC_ASSERT(ZSTD_lo_isLongOffset == 1);
+            ZSTD_STATIC_ASSERT(LONG_OFFSETS_MAX_EXTRA_BITS_32 == 5);
+            assert(ofBits <= MaxOff);
+            if (MEM_32bits() && longOffsets) {
+                U32 const extraBits = ofBits - MIN(ofBits, STREAM_ACCUMULATOR_MIN_32-1);
+                offset = ofBase + (BIT_readBitsFast(&seqState->DStream, ofBits - extraBits) << extraBits);
+                if (MEM_32bits() || extraBits) BIT_reloadDStream(&seqState->DStream);
+                if (extraBits) offset += BIT_readBitsFast(&seqState->DStream, extraBits);
+            } else {
+                offset = ofBase + BIT_readBitsFast(&seqState->DStream, ofBits);   /* <=  (ZSTD_WINDOWLOG_MAX-1) bits */
+                if (MEM_32bits()) BIT_reloadDStream(&seqState->DStream);
+            }
+        }
+
+        if (ofBits <= 1) {
+            offset += (llBase==0);
+            if (offset) {
+                size_t temp = (offset==3) ? seqState->prevOffset[0] - 1 : seqState->prevOffset[offset];
+                temp += !temp;   /* 0 is not valid; input is corrupted; force offset to 1 */
+                if (offset != 1) seqState->prevOffset[2] = seqState->prevOffset[1];
+                seqState->prevOffset[1] = seqState->prevOffset[0];
+                seqState->prevOffset[0] = offset = temp;
+            } else {
+                offset = seqState->prevOffset[0];
+            }
+        } else {
+            seqState->prevOffset[2] = seqState->prevOffset[1];
+            seqState->prevOffset[1] = seqState->prevOffset[0];
+            seqState->prevOffset[0] = offset;
+        }
+        seq.offset = offset;
+    }
+
+    seq.matchLength = mlBase + ((mlBits>0) ? BIT_readBitsFast(&seqState->DStream, mlBits) : 0);  /* <=  16 bits */
+    if (MEM_32bits() && (mlBits+llBits >= STREAM_ACCUMULATOR_MIN_32-LONG_OFFSETS_MAX_EXTRA_BITS_32))
+        BIT_reloadDStream(&seqState->DStream);
+    if (MEM_64bits() && (totalBits >= STREAM_ACCUMULATOR_MIN_64-(LLFSELog+MLFSELog+OffFSELog)))
+        BIT_reloadDStream(&seqState->DStream);
+    /* Verify that there are enough bits to read the rest of the data in 64-bit mode. */
+    ZSTD_STATIC_ASSERT(16+LLFSELog+MLFSELog+OffFSELog < STREAM_ACCUMULATOR_MIN_64);
+
+    seq.litLength = llBase + ((llBits>0) ? BIT_readBitsFast(&seqState->DStream, llBits) : 0);    /* <=  16 bits */
+    if (MEM_32bits())
+        BIT_reloadDStream(&seqState->DStream);
+
+    {   size_t const pos = seqState->pos + seq.litLength;
+        const BYTE* const matchBase = (seq.offset > pos) ? seqState->dictEnd : seqState->prefixStart;
+        seq.match = matchBase + pos - seq.offset;  /* note : this operation can overflow when seq.offset is really too large, which can only happen when input is corrupted.
+                                                    * No consequence though : no memory access will occur, overly large offset will be detected in ZSTD_execSequenceLong() */
+        seqState->pos = pos + seq.matchLength;
+    }
+
+    /* ANS state update */
+    ZSTD_updateFseState(&seqState->stateLL, &seqState->DStream);    /* <=  9 bits */
+    ZSTD_updateFseState(&seqState->stateML, &seqState->DStream);    /* <=  9 bits */
+    if (MEM_32bits()) BIT_reloadDStream(&seqState->DStream);    /* <= 18 bits */
+    ZSTD_updateFseState(&seqState->stateOffb, &seqState->DStream);  /* <=  8 bits */
+
+    return seq;
+}
+
+FORCE_INLINE_TEMPLATE size_t
+ZSTD_decompressSequencesLong_body(
+                               ZSTD_DCtx* dctx,
+                               void* dst, size_t maxDstSize,
+                         const void* seqStart, size_t seqSize, int nbSeq,
+                         const ZSTD_longOffset_e isLongOffset)
+{
+    const BYTE* ip = (const BYTE*)seqStart;
+    const BYTE* const iend = ip + seqSize;
+    BYTE* const ostart = (BYTE* const)dst;
+    BYTE* const oend = ostart + maxDstSize;
+    BYTE* op = ostart;
+    const BYTE* litPtr = dctx->litPtr;
+    const BYTE* const litEnd = litPtr + dctx->litSize;
+    const BYTE* const prefixStart = (const BYTE*) (dctx->prefixStart);
+    const BYTE* const dictStart = (const BYTE*) (dctx->virtualStart);
+    const BYTE* const dictEnd = (const BYTE*) (dctx->dictEnd);
+
+    /* Regen sequences */
+    if (nbSeq) {
+#define STORED_SEQS 4
+#define STORED_SEQS_MASK (STORED_SEQS-1)
+#define ADVANCED_SEQS 4
+        seq_t sequences[STORED_SEQS];
+        int const seqAdvance = MIN(nbSeq, ADVANCED_SEQS);
+        seqState_t seqState;
+        int seqNb;
+        dctx->fseEntropy = 1;
+        { int i; for (i=0; i<ZSTD_REP_NUM; i++) seqState.prevOffset[i] = dctx->entropy.rep[i]; }
+        seqState.prefixStart = prefixStart;
+        seqState.pos = (size_t)(op-prefixStart);
+        seqState.dictEnd = dictEnd;
+        assert(iend >= ip);
+        CHECK_E(BIT_initDStream(&seqState.DStream, ip, iend-ip), corruption_detected);
+        ZSTD_initFseState(&seqState.stateLL, &seqState.DStream, dctx->LLTptr);
+        ZSTD_initFseState(&seqState.stateOffb, &seqState.DStream, dctx->OFTptr);
+        ZSTD_initFseState(&seqState.stateML, &seqState.DStream, dctx->MLTptr);
+
+        /* prepare in advance */
+        for (seqNb=0; (BIT_reloadDStream(&seqState.DStream) <= BIT_DStream_completed) && (seqNb<seqAdvance); seqNb++) {
+            sequences[seqNb] = ZSTD_decodeSequenceLong(&seqState, isLongOffset);
+            PREFETCH_L1(sequences[seqNb].match); PREFETCH_L1(sequences[seqNb].match + sequences[seqNb].matchLength - 1); /* note : it's safe to invoke PREFETCH_L1() on any memory address, including invalid ones */
+        }
+        if (seqNb<seqAdvance) return ERROR(corruption_detected);
+
+        /* decode and decompress */
+        for ( ; (BIT_reloadDStream(&(seqState.DStream)) <= BIT_DStream_completed) && (seqNb<nbSeq) ; seqNb++) {
+            seq_t const sequence = ZSTD_decodeSequenceLong(&seqState, isLongOffset);
+            size_t const oneSeqSize = ZSTD_execSequenceLong(op, oend, sequences[(seqNb-ADVANCED_SEQS) & STORED_SEQS_MASK], &litPtr, litEnd, prefixStart, dictStart, dictEnd);
+            if (ZSTD_isError(oneSeqSize)) return oneSeqSize;
+            PREFETCH_L1(sequence.match); PREFETCH_L1(sequence.match + sequence.matchLength - 1); /* note : it's safe to invoke PREFETCH_L1() on any memory address, including invalid ones */
+            sequences[seqNb & STORED_SEQS_MASK] = sequence;
+            op += oneSeqSize;
+        }
+        if (seqNb<nbSeq) return ERROR(corruption_detected);
+
+        /* finish queue */
+        seqNb -= seqAdvance;
+        for ( ; seqNb<nbSeq ; seqNb++) {
+            size_t const oneSeqSize = ZSTD_execSequenceLong(op, oend, sequences[seqNb&STORED_SEQS_MASK], &litPtr, litEnd, prefixStart, dictStart, dictEnd);
+            if (ZSTD_isError(oneSeqSize)) return oneSeqSize;
+            op += oneSeqSize;
+        }
+
+        /* save reps for next block */
+        { U32 i; for (i=0; i<ZSTD_REP_NUM; i++) dctx->entropy.rep[i] = (U32)(seqState.prevOffset[i]); }
+    }
+
+    /* last literal segment */
+    {   size_t const lastLLSize = litEnd - litPtr;
+        if (lastLLSize > (size_t)(oend-op)) return ERROR(dstSize_tooSmall);
+        memcpy(op, litPtr, lastLLSize);
+        op += lastLLSize;
+    }
+
+    return op-ostart;
+}
+
+static size_t
+ZSTD_decompressSequencesLong_default(ZSTD_DCtx* dctx,
+                                 void* dst, size_t maxDstSize,
+                           const void* seqStart, size_t seqSize, int nbSeq,
+                           const ZSTD_longOffset_e isLongOffset)
+{
+    return ZSTD_decompressSequencesLong_body(dctx, dst, maxDstSize, seqStart, seqSize, nbSeq, isLongOffset);
+}
+#endif /* ZSTD_FORCE_DECOMPRESS_SEQUENCES_SHORT */
+
+
+
+#if DYNAMIC_BMI2
+
+#ifndef ZSTD_FORCE_DECOMPRESS_SEQUENCES_LONG
+static TARGET_ATTRIBUTE("bmi2") size_t
+ZSTD_decompressSequences_bmi2(ZSTD_DCtx* dctx,
+                                 void* dst, size_t maxDstSize,
+                           const void* seqStart, size_t seqSize, int nbSeq,
+                           const ZSTD_longOffset_e isLongOffset)
+{
+    return ZSTD_decompressSequences_body(dctx, dst, maxDstSize, seqStart, seqSize, nbSeq, isLongOffset);
+}
+#endif /* ZSTD_FORCE_DECOMPRESS_SEQUENCES_LONG */
+
+#ifndef ZSTD_FORCE_DECOMPRESS_SEQUENCES_SHORT
+static TARGET_ATTRIBUTE("bmi2") size_t
+ZSTD_decompressSequencesLong_bmi2(ZSTD_DCtx* dctx,
+                                 void* dst, size_t maxDstSize,
+                           const void* seqStart, size_t seqSize, int nbSeq,
+                           const ZSTD_longOffset_e isLongOffset)
+{
+    return ZSTD_decompressSequencesLong_body(dctx, dst, maxDstSize, seqStart, seqSize, nbSeq, isLongOffset);
+}
+#endif /* ZSTD_FORCE_DECOMPRESS_SEQUENCES_SHORT */
+
+#endif /* DYNAMIC_BMI2 */
+
+typedef size_t (*ZSTD_decompressSequences_t)(
+                            ZSTD_DCtx* dctx,
+                            void* dst, size_t maxDstSize,
+                            const void* seqStart, size_t seqSize, int nbSeq,
+                            const ZSTD_longOffset_e isLongOffset);
+
+#ifndef ZSTD_FORCE_DECOMPRESS_SEQUENCES_LONG
+static size_t
+ZSTD_decompressSequences(ZSTD_DCtx* dctx, void* dst, size_t maxDstSize,
+                   const void* seqStart, size_t seqSize, int nbSeq,
+                   const ZSTD_longOffset_e isLongOffset)
+{
+    DEBUGLOG(5, "ZSTD_decompressSequences");
+#if DYNAMIC_BMI2
+    if (dctx->bmi2) {
+        return ZSTD_decompressSequences_bmi2(dctx, dst, maxDstSize, seqStart, seqSize, nbSeq, isLongOffset);
+    }
+#endif
+  return ZSTD_decompressSequences_default(dctx, dst, maxDstSize, seqStart, seqSize, nbSeq, isLongOffset);
+}
+#endif /* ZSTD_FORCE_DECOMPRESS_SEQUENCES_LONG */
+
+
+#ifndef ZSTD_FORCE_DECOMPRESS_SEQUENCES_SHORT
+/* ZSTD_decompressSequencesLong() :
+ * decompression function triggered when a minimum share of offsets is considered "long",
+ * aka out of cache.
+ * note : "long" is somewhat overloaded here, sometimes meaning "wider than the bitstream register", and sometimes meaning "farther than the memory cache distance".
+ * This function will try to mitigate main memory latency through the use of prefetching */
+static size_t
+ZSTD_decompressSequencesLong(ZSTD_DCtx* dctx,
+                             void* dst, size_t maxDstSize,
+                             const void* seqStart, size_t seqSize, int nbSeq,
+                             const ZSTD_longOffset_e isLongOffset)
+{
+    DEBUGLOG(5, "ZSTD_decompressSequencesLong");
+#if DYNAMIC_BMI2
+    if (dctx->bmi2) {
+        return ZSTD_decompressSequencesLong_bmi2(dctx, dst, maxDstSize, seqStart, seqSize, nbSeq, isLongOffset);
+    }
+#endif
+  return ZSTD_decompressSequencesLong_default(dctx, dst, maxDstSize, seqStart, seqSize, nbSeq, isLongOffset);
+}
+#endif /* ZSTD_FORCE_DECOMPRESS_SEQUENCES_SHORT */
+
+
+
+#if !defined(ZSTD_FORCE_DECOMPRESS_SEQUENCES_SHORT) && \
+    !defined(ZSTD_FORCE_DECOMPRESS_SEQUENCES_LONG)
+/* ZSTD_getLongOffsetsShare() :
+ * condition : offTable must be valid
+ * @return : "share" of long offsets (arbitrarily defined as > (1<<23))
+ *           compared to maximum possible of (1<<OffFSELog) */
+static unsigned
+ZSTD_getLongOffsetsShare(const ZSTD_seqSymbol* offTable)
+{
+    const void* ptr = offTable;
+    U32 const tableLog = ((const ZSTD_seqSymbol_header*)ptr)[0].tableLog;
+    const ZSTD_seqSymbol* table = offTable + 1;
+    U32 const max = 1 << tableLog;
+    U32 u, total = 0;
+    DEBUGLOG(5, "ZSTD_getLongOffsetsShare: (tableLog=%u)", tableLog);
+
+    assert(max <= (1 << OffFSELog));  /* max not too large */
+    for (u=0; u<max; u++) {
+        if (table[u].nbAdditionalBits > 22) total += 1;
+    }
+
+    assert(tableLog <= OffFSELog);
+    total <<= (OffFSELog - tableLog);  /* scale to OffFSELog */
+
+    return total;
+}
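+
+/* Editorial note : with OffFSELog = 8 the scaled maximum is 256, so the
+ * minShare thresholds used by ZSTD_decompressBlock_internal() (7 on 64-bit,
+ * 20 on 32-bit) correspond to 7/256 = 2.73% and 20/256 = 7.81% of decoding
+ * states carrying offsets larger than 2^23. */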
+#endif
+
+
+size_t
+ZSTD_decompressBlock_internal(ZSTD_DCtx* dctx,
+                              void* dst, size_t dstCapacity,
+                        const void* src, size_t srcSize, const int frame)
+{   /* blockType == blockCompressed */
+    const BYTE* ip = (const BYTE*)src;
+    /* isLongOffset must be true if there are long offsets.
+     * Offsets are long if they are larger than 2^STREAM_ACCUMULATOR_MIN.
+     * We don't expect that to be the case in 64-bit mode.
+     * In block mode, window size is not known, so we have to be conservative.
+     * (note: but it could be evaluated from current-lowLimit)
+     */
+    ZSTD_longOffset_e const isLongOffset = (ZSTD_longOffset_e)(MEM_32bits() && (!frame || (dctx->fParams.windowSize > (1ULL << STREAM_ACCUMULATOR_MIN))));
+    DEBUGLOG(5, "ZSTD_decompressBlock_internal (size : %u)", (U32)srcSize);
+
+    if (srcSize >= ZSTD_BLOCKSIZE_MAX) return ERROR(srcSize_wrong);
+
+    /* Decode literals section */
+    {   size_t const litCSize = ZSTD_decodeLiteralsBlock(dctx, src, srcSize);
+        DEBUGLOG(5, "ZSTD_decodeLiteralsBlock : %u", (U32)litCSize);
+        if (ZSTD_isError(litCSize)) return litCSize;
+        ip += litCSize;
+        srcSize -= litCSize;
+    }
+
+    /* Build Decoding Tables */
+    {
+        /* These macros control at build-time which decompressor implementation
+         * we use. If neither is defined, we do some inspection and dispatch at
+         * runtime.
+         */
+#if !defined(ZSTD_FORCE_DECOMPRESS_SEQUENCES_SHORT) && \
+    !defined(ZSTD_FORCE_DECOMPRESS_SEQUENCES_LONG)
+        int usePrefetchDecoder = dctx->ddictIsCold;
+#endif
+        int nbSeq;
+        size_t const seqHSize = ZSTD_decodeSeqHeaders(dctx, &nbSeq, ip, srcSize);
+        if (ZSTD_isError(seqHSize)) return seqHSize;
+        ip += seqHSize;
+        srcSize -= seqHSize;
+
+#if !defined(ZSTD_FORCE_DECOMPRESS_SEQUENCES_SHORT) && \
+    !defined(ZSTD_FORCE_DECOMPRESS_SEQUENCES_LONG)
+        if ( !usePrefetchDecoder
+          && (!frame || (dctx->fParams.windowSize > (1<<24)))
+          && (nbSeq>ADVANCED_SEQS) ) {  /* could probably use a larger nbSeq limit */
+            U32 const shareLongOffsets = ZSTD_getLongOffsetsShare(dctx->OFTptr);
+            U32 const minShare = MEM_64bits() ? 7 : 20; /* heuristic values : 7/256 ~= 2.73% and 20/256 ~= 7.81% of the (1<<OffFSELog) scale */
+            usePrefetchDecoder = (shareLongOffsets >= minShare);
+        }
+#endif
+
+        dctx->ddictIsCold = 0;
+
+#if !defined(ZSTD_FORCE_DECOMPRESS_SEQUENCES_SHORT) && \
+    !defined(ZSTD_FORCE_DECOMPRESS_SEQUENCES_LONG)
+        if (usePrefetchDecoder)
+#endif
+#ifndef ZSTD_FORCE_DECOMPRESS_SEQUENCES_SHORT
+            return ZSTD_decompressSequencesLong(dctx, dst, dstCapacity, ip, srcSize, nbSeq, isLongOffset);
+#endif
+
+#ifndef ZSTD_FORCE_DECOMPRESS_SEQUENCES_LONG
+        /* else */
+        return ZSTD_decompressSequences(dctx, dst, dstCapacity, ip, srcSize, nbSeq, isLongOffset);
+#endif
+    }
+}
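A sanity note on the two force-macros used above: they are mutually exclusive, since defining both would compile out every decoder variant. Upstream zstd keeps a guard to this effect near the top of the file, along these lines::

   #if defined(ZSTD_FORCE_DECOMPRESS_SEQUENCES_SHORT) && \
       defined(ZSTD_FORCE_DECOMPRESS_SEQUENCES_LONG)
   #  error "Cannot force both ZSTD_decompressSequences variants at once"
   #endif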
+
+
+size_t ZSTD_decompressBlock(ZSTD_DCtx* dctx,
+                            void* dst, size_t dstCapacity,
+                      const void* src, size_t srcSize)
+{
+    size_t dSize;
+    ZSTD_checkContinuity(dctx, dst);
+    dSize = ZSTD_decompressBlock_internal(dctx, dst, dstCapacity, src, srcSize, /* frame */ 0);
+    dctx->previousDstEnd = (char*)dst + dSize;
+    return dSize;
+}
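A minimal usage sketch for ZSTD_decompressBlock(), assuming the experimental block-level API (ZSTD_decompressBegin() lives behind ZSTD_STATIC_LINKING_ONLY); the block is assumed self-contained, with no history carried over from earlier blocks::

   #define ZSTD_STATIC_LINKING_ONLY   /* block-level API is experimental */
   #include "zstd.h"

   /* Decompress a single raw block previously produced by ZSTD_compressBlock().
    * Returns the regenerated size, or 0 on any failure. Sketch only. */
   static size_t decodeOneBlock(void* dst, size_t dstCapacity,
                                const void* blk, size_t blkSize)
   {
       ZSTD_DCtx* const dctx = ZSTD_createDCtx();
       size_t rSize;
       if (dctx == NULL) return 0;
       ZSTD_decompressBegin(dctx);   /* reset tracking state before the first block */
       rSize = ZSTD_decompressBlock(dctx, dst, dstCapacity, blk, blkSize);
       ZSTD_freeDCtx(dctx);
       return ZSTD_isError(rSize) ? 0 : rSize;
   }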
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/python-zstandard/zstd/decompress/zstd_decompress_block.h	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,59 @@
+/*
+ * Copyright (c) 2016-present, Yann Collet, Facebook, Inc.
+ * All rights reserved.
+ *
+ * This source code is licensed under both the BSD-style license (found in the
+ * LICENSE file in the root directory of this source tree) and the GPLv2 (found
+ * in the COPYING file in the root directory of this source tree).
+ * You may select, at your option, one of the above-listed licenses.
+ */
+
+
+#ifndef ZSTD_DEC_BLOCK_H
+#define ZSTD_DEC_BLOCK_H
+
+/*-*******************************************************
+ *  Dependencies
+ *********************************************************/
+#include <stddef.h>   /* size_t */
+#include "zstd.h"    /* DCtx, and some public functions */
+#include "zstd_internal.h"  /* blockProperties_t, and some public functions */
+#include "zstd_decompress_internal.h"  /* ZSTD_seqSymbol */
+
+
+/* ===   Prototypes   === */
+
+/* note: prototypes already published within `zstd.h` :
+ * ZSTD_decompressBlock()
+ */
+
+/* note: prototypes already published within `zstd_internal.h` :
+ * ZSTD_getcBlockSize()
+ * ZSTD_decodeSeqHeaders()
+ */
+
+
+/* ZSTD_decompressBlock_internal() :
+ * decompress block, starting at `src`,
+ * into destination buffer `dst`.
+ * @return : decompressed block size,
+ *           or an error code (which can be tested using ZSTD_isError())
+ */
+size_t ZSTD_decompressBlock_internal(ZSTD_DCtx* dctx,
+                               void* dst, size_t dstCapacity,
+                         const void* src, size_t srcSize, const int frame);
+
+/* ZSTD_buildFSETable() :
+ * generate FSE decoding table for one symbol (ll, ml or off)
+ * this function must be called with valid parameters only
+ * (dt is large enough, normalizedCounter distribution total is a power of 2, max is within range, etc.)
+ * in which case it cannot fail.
+ * Internal use only.
+ */
+void ZSTD_buildFSETable(ZSTD_seqSymbol* dt,
+             const short* normalizedCounter, unsigned maxSymbolValue,
+             const U32* baseValue, const U32* nbAdditionalBits,
+                   unsigned tableLog);
+
+
+#endif /* ZSTD_DEC_BLOCK_H */
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/contrib/python-zstandard/zstd/decompress/zstd_decompress_internal.h	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,168 @@
+/*
+ * Copyright (c) 2016-present, Yann Collet, Facebook, Inc.
+ * All rights reserved.
+ *
+ * This source code is licensed under both the BSD-style license (found in the
+ * LICENSE file in the root directory of this source tree) and the GPLv2 (found
+ * in the COPYING file in the root directory of this source tree).
+ * You may select, at your option, one of the above-listed licenses.
+ */
+
+
+/* zstd_decompress_internal:
+ * objects and definitions shared within lib/decompress modules */
+
+#ifndef ZSTD_DECOMPRESS_INTERNAL_H
+#define ZSTD_DECOMPRESS_INTERNAL_H
+
+
+/*-*******************************************************
+ *  Dependencies
+ *********************************************************/
+#include "mem.h"             /* BYTE, U16, U32 */
+#include "zstd_internal.h"   /* ZSTD_seqSymbol */
+
+
+
+/*-*******************************************************
+ *  Constants
+ *********************************************************/
+static const U32 LL_base[MaxLL+1] = {
+                 0,    1,    2,     3,     4,     5,     6,      7,
+                 8,    9,   10,    11,    12,    13,    14,     15,
+                16,   18,   20,    22,    24,    28,    32,     40,
+                48,   64, 0x80, 0x100, 0x200, 0x400, 0x800, 0x1000,
+                0x2000, 0x4000, 0x8000, 0x10000 };
+
+static const U32 OF_base[MaxOff+1] = {
+                 0,        1,       1,       5,     0xD,     0x1D,     0x3D,     0x7D,
+                 0xFD,   0x1FD,   0x3FD,   0x7FD,   0xFFD,   0x1FFD,   0x3FFD,   0x7FFD,
+                 0xFFFD, 0x1FFFD, 0x3FFFD, 0x7FFFD, 0xFFFFD, 0x1FFFFD, 0x3FFFFD, 0x7FFFFD,
+                 0xFFFFFD, 0x1FFFFFD, 0x3FFFFFD, 0x7FFFFFD, 0xFFFFFFD, 0x1FFFFFFD, 0x3FFFFFFD, 0x7FFFFFFD };
+
+static const U32 OF_bits[MaxOff+1] = {
+                     0,  1,  2,  3,  4,  5,  6,  7,
+                     8,  9, 10, 11, 12, 13, 14, 15,
+                    16, 17, 18, 19, 20, 21, 22, 23,
+                    24, 25, 26, 27, 28, 29, 30, 31 };
+
+static const U32 ML_base[MaxML+1] = {
+                     3,  4,  5,    6,     7,     8,     9,    10,
+                    11, 12, 13,   14,    15,    16,    17,    18,
+                    19, 20, 21,   22,    23,    24,    25,    26,
+                    27, 28, 29,   30,    31,    32,    33,    34,
+                    35, 37, 39,   41,    43,    47,    51,    59,
+                    67, 83, 99, 0x83, 0x103, 0x203, 0x403, 0x803,
+                    0x1003, 0x2003, 0x4003, 0x8003, 0x10003 };
+
+
+/*-*******************************************************
+ *  Decompression types
+ *********************************************************/
+typedef struct {
+    U32 fastMode;
+    U32 tableLog;
+} ZSTD_seqSymbol_header;
+
+typedef struct {
+    U16  nextState;
+    BYTE nbAdditionalBits;
+    BYTE nbBits;
+    U32  baseValue;
+} ZSTD_seqSymbol;
+
+#define SEQSYMBOL_TABLE_SIZE(log)   (1 + (1 << (log)))
+
+typedef struct {
+    ZSTD_seqSymbol LLTable[SEQSYMBOL_TABLE_SIZE(LLFSELog)];    /* Note : Space reserved for FSE Tables */
+    ZSTD_seqSymbol OFTable[SEQSYMBOL_TABLE_SIZE(OffFSELog)];   /* is also used as temporary workspace while building hufTable during DDict creation */
+    ZSTD_seqSymbol MLTable[SEQSYMBOL_TABLE_SIZE(MLFSELog)];    /* and therefore must be at least HUF_DECOMPRESS_WORKSPACE_SIZE large */
+    HUF_DTable hufTable[HUF_DTABLE_SIZE(HufLog)];  /* can accommodate HUF_decompress4X */
+    U32 rep[ZSTD_REP_NUM];
+} ZSTD_entropyDTables_t;
+
+typedef enum { ZSTDds_getFrameHeaderSize, ZSTDds_decodeFrameHeader,
+               ZSTDds_decodeBlockHeader, ZSTDds_decompressBlock,
+               ZSTDds_decompressLastBlock, ZSTDds_checkChecksum,
+               ZSTDds_decodeSkippableHeader, ZSTDds_skipFrame } ZSTD_dStage;
+
+typedef enum { zdss_init=0, zdss_loadHeader,
+               zdss_read, zdss_load, zdss_flush } ZSTD_dStreamStage;
+
+struct ZSTD_DCtx_s
+{
+    const ZSTD_seqSymbol* LLTptr;
+    const ZSTD_seqSymbol* MLTptr;
+    const ZSTD_seqSymbol* OFTptr;
+    const HUF_DTable* HUFptr;
+    ZSTD_entropyDTables_t entropy;
+    U32 workspace[HUF_DECOMPRESS_WORKSPACE_SIZE_U32];   /* space needed when building huffman tables */
+    const void* previousDstEnd;   /* detect continuity */
+    const void* prefixStart;      /* start of current segment */
+    const void* virtualStart;     /* virtual start of previous segment if it was just before current one */
+    const void* dictEnd;          /* end of previous segment */
+    size_t expected;
+    ZSTD_frameHeader fParams;
+    U64 decodedSize;
+    blockType_e bType;            /* used in ZSTD_decompressContinue(), store blockType between block header decoding and block decompression stages */
+    ZSTD_dStage stage;
+    U32 litEntropy;
+    U32 fseEntropy;
+    XXH64_state_t xxhState;
+    size_t headerSize;
+    ZSTD_format_e format;
+    const BYTE* litPtr;
+    ZSTD_customMem customMem;
+    size_t litSize;
+    size_t rleSize;
+    size_t staticSize;
+    int bmi2;                     /* == 1 if the CPU supports BMI2 and 0 otherwise. CPU support is determined dynamically once per context lifetime. */
+
+    /* dictionary */
+    ZSTD_DDict* ddictLocal;
+    const ZSTD_DDict* ddict;     /* set by ZSTD_initDStream_usingDDict(), or ZSTD_DCtx_refDDict() */
+    U32 dictID;
+    int ddictIsCold;             /* if == 1 : dictionary is "new" for working context, and presumed "cold" (not in cpu cache) */
+
+    /* streaming */
+    ZSTD_dStreamStage streamStage;
+    char*  inBuff;
+    size_t inBuffSize;
+    size_t inPos;
+    size_t maxWindowSize;
+    char*  outBuff;
+    size_t outBuffSize;
+    size_t outStart;
+    size_t outEnd;
+    size_t lhSize;
+    void* legacyContext;
+    U32 previousLegacyVersion;
+    U32 legacyVersion;
+    U32 hostageByte;
+    int noForwardProgress;
+
+    /* workspace */
+    BYTE litBuffer[ZSTD_BLOCKSIZE_MAX + WILDCOPY_OVERLENGTH];
+    BYTE headerBuffer[ZSTD_FRAMEHEADERSIZE_MAX];
+};  /* typedef'd to ZSTD_DCtx within "zstd.h" */
+
+
+/*-*******************************************************
+ *  Shared internal functions
+ *********************************************************/
+
+/*! ZSTD_loadDEntropy() :
+ *  dict : must point at beginning of a valid zstd dictionary.
+ * @return : size of entropy tables read */
+size_t ZSTD_loadDEntropy(ZSTD_entropyDTables_t* entropy,
+                   const void* const dict, size_t const dictSize);
+
+/*! ZSTD_checkContinuity() :
+ *  check if next `dst` follows previous position, where decompression ended.
+ *  If yes, do nothing (continue on current segment).
+ *  If not, classify previous segment as "external dictionary", and start a new segment.
+ *  This function cannot fail. */
+void ZSTD_checkContinuity(ZSTD_DCtx* dctx, const void* dst);
+
+
+#endif /* ZSTD_DECOMPRESS_INTERNAL_H */
--- a/contrib/python-zstandard/zstd/dictBuilder/cover.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/dictBuilder/cover.c	Wed Apr 17 13:41:18 2019 -0400
@@ -39,7 +39,7 @@
 /*-*************************************
 *  Constants
 ***************************************/
-#define COVER_MAX_SAMPLES_SIZE (sizeof(size_t) == 8 ? ((U32)-1) : ((U32)1 GB))
+#define COVER_MAX_SAMPLES_SIZE (sizeof(size_t) == 8 ? ((unsigned)-1) : ((unsigned)1 GB))
 #define DEFAULT_SPLITPOINT 1.0
 
 /*-*************************************
@@ -543,7 +543,7 @@
   if (totalSamplesSize < MAX(d, sizeof(U64)) ||
       totalSamplesSize >= (size_t)COVER_MAX_SAMPLES_SIZE) {
     DISPLAYLEVEL(1, "Total samples size is too large (%u MB), maximum size is %u MB\n",
-                 (U32)(totalSamplesSize>>20), (COVER_MAX_SAMPLES_SIZE >> 20));
+                 (unsigned)(totalSamplesSize>>20), (COVER_MAX_SAMPLES_SIZE >> 20));
     return 0;
   }
   /* Check if there are at least 5 training samples */
@@ -559,9 +559,9 @@
   /* Zero the context */
   memset(ctx, 0, sizeof(*ctx));
   DISPLAYLEVEL(2, "Training on %u samples of total size %u\n", nbTrainSamples,
-               (U32)trainingSamplesSize);
+               (unsigned)trainingSamplesSize);
   DISPLAYLEVEL(2, "Testing on %u samples of total size %u\n", nbTestSamples,
-               (U32)testSamplesSize);
+               (unsigned)testSamplesSize);
   ctx->samples = samples;
   ctx->samplesSizes = samplesSizes;
   ctx->nbSamples = nbSamples;
@@ -639,11 +639,11 @@
   /* Divide the data up into epochs of equal size.
    * We will select at least one segment from each epoch.
    */
-  const U32 epochs = MAX(1, (U32)(dictBufferCapacity / parameters.k / 4));
-  const U32 epochSize = (U32)(ctx->suffixSize / epochs);
+  const unsigned epochs = MAX(1, (U32)(dictBufferCapacity / parameters.k / 4));
+  const unsigned epochSize = (U32)(ctx->suffixSize / epochs);
   size_t epoch;
-  DISPLAYLEVEL(2, "Breaking content into %u epochs of size %u\n", epochs,
-               epochSize);
+  DISPLAYLEVEL(2, "Breaking content into %u epochs of size %u\n",
+                epochs, epochSize);
   /* Loop through the epochs until there are no more segments or the dictionary
    * is full.
    */
@@ -670,7 +670,7 @@
     memcpy(dict + tail, ctx->samples + segment.begin, segmentSize);
     DISPLAYUPDATE(
         2, "\r%u%%       ",
-        (U32)(((dictBufferCapacity - tail) * 100) / dictBufferCapacity));
+        (unsigned)(((dictBufferCapacity - tail) * 100) / dictBufferCapacity));
   }
   DISPLAYLEVEL(2, "\r%79s\r", "");
   return tail;
@@ -722,7 +722,7 @@
         samplesBuffer, samplesSizes, nbSamples, parameters.zParams);
     if (!ZSTD_isError(dictionarySize)) {
       DISPLAYLEVEL(2, "Constructed dictionary of size %u\n",
-                   (U32)dictionarySize);
+                   (unsigned)dictionarySize);
     }
     COVER_ctx_destroy(&ctx);
     COVER_map_destroy(&activeDmers);
@@ -868,6 +868,8 @@
         if (!best->dict) {
           best->compressedSize = ERROR(GENERIC);
           best->dictSize = 0;
+          ZSTD_pthread_cond_signal(&best->cond);
+          ZSTD_pthread_mutex_unlock(&best->mutex);
           return;
         }
       }
@@ -1054,7 +1056,7 @@
       }
       /* Print status */
       LOCALDISPLAYUPDATE(displayLevel, 2, "\r%u%%       ",
-                         (U32)((iteration * 100) / kIterations));
+                         (unsigned)((iteration * 100) / kIterations));
       ++iteration;
     }
     COVER_best_wait(&best);
--- a/contrib/python-zstandard/zstd/dictBuilder/fastcover.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/dictBuilder/fastcover.c	Wed Apr 17 13:41:18 2019 -0400
@@ -20,7 +20,7 @@
 /*-*************************************
 *  Constants
 ***************************************/
-#define FASTCOVER_MAX_SAMPLES_SIZE (sizeof(size_t) == 8 ? ((U32)-1) : ((U32)1 GB))
+#define FASTCOVER_MAX_SAMPLES_SIZE (sizeof(size_t) == 8 ? ((unsigned)-1) : ((unsigned)1 GB))
 #define FASTCOVER_MAX_F 31
 #define FASTCOVER_MAX_ACCEL 10
 #define DEFAULT_SPLITPOINT 0.75
@@ -159,15 +159,15 @@
    */
   while (activeSegment.end < end) {
     /* Get hash value of current dmer */
-    const size_t index = FASTCOVER_hashPtrToIndex(ctx->samples + activeSegment.end, f, d);
+    const size_t idx = FASTCOVER_hashPtrToIndex(ctx->samples + activeSegment.end, f, d);
 
     /* Add frequency of this index to score if this is the first occurrence of index in active segment */
-    if (segmentFreqs[index] == 0) {
-      activeSegment.score += freqs[index];
+    if (segmentFreqs[idx] == 0) {
+      activeSegment.score += freqs[idx];
     }
     /* Increment end of segment and segmentFreqs*/
     activeSegment.end += 1;
-    segmentFreqs[index] += 1;
+    segmentFreqs[idx] += 1;
     /* If the window is now too large, drop the first position */
     if (activeSegment.end - activeSegment.begin == dmersInK + 1) {
       /* Get hash value of the dmer to be eliminated from active segment */
@@ -309,7 +309,7 @@
     if (totalSamplesSize < MAX(d, sizeof(U64)) ||
         totalSamplesSize >= (size_t)FASTCOVER_MAX_SAMPLES_SIZE) {
         DISPLAYLEVEL(1, "Total samples size is too large (%u MB), maximum size is %u MB\n",
-                    (U32)(totalSamplesSize >> 20), (FASTCOVER_MAX_SAMPLES_SIZE >> 20));
+                    (unsigned)(totalSamplesSize >> 20), (FASTCOVER_MAX_SAMPLES_SIZE >> 20));
         return 0;
     }
 
@@ -328,9 +328,9 @@
     /* Zero the context */
     memset(ctx, 0, sizeof(*ctx));
     DISPLAYLEVEL(2, "Training on %u samples of total size %u\n", nbTrainSamples,
-                    (U32)trainingSamplesSize);
+                    (unsigned)trainingSamplesSize);
     DISPLAYLEVEL(2, "Testing on %u samples of total size %u\n", nbTestSamples,
-                    (U32)testSamplesSize);
+                    (unsigned)testSamplesSize);
 
     ctx->samples = samples;
     ctx->samplesSizes = samplesSizes;
@@ -389,11 +389,11 @@
   /* Divide the data up into epochs of equal size.
    * We will select at least one segment from each epoch.
    */
-  const U32 epochs = MAX(1, (U32)(dictBufferCapacity / parameters.k));
-  const U32 epochSize = (U32)(ctx->nbDmers / epochs);
+  const unsigned epochs = MAX(1, (U32)(dictBufferCapacity / parameters.k));
+  const unsigned epochSize = (U32)(ctx->nbDmers / epochs);
   size_t epoch;
-  DISPLAYLEVEL(2, "Breaking content into %u epochs of size %u\n", epochs,
-               epochSize);
+  DISPLAYLEVEL(2, "Breaking content into %u epochs of size %u\n",
+                epochs, epochSize);
   /* Loop through the epochs until there are no more segments or the dictionary
    * is full.
    */
@@ -423,7 +423,7 @@
     memcpy(dict + tail, ctx->samples + segment.begin, segmentSize);
     DISPLAYUPDATE(
         2, "\r%u%%       ",
-        (U32)(((dictBufferCapacity - tail) * 100) / dictBufferCapacity));
+        (unsigned)(((dictBufferCapacity - tail) * 100) / dictBufferCapacity));
   }
   DISPLAYLEVEL(2, "\r%79s\r", "");
   return tail;
@@ -577,7 +577,7 @@
           samplesBuffer, samplesSizes, nbFinalizeSamples, coverParams.zParams);
       if (!ZSTD_isError(dictionarySize)) {
           DISPLAYLEVEL(2, "Constructed dictionary of size %u\n",
-                      (U32)dictionarySize);
+                      (unsigned)dictionarySize);
       }
       FASTCOVER_ctx_destroy(&ctx);
       free(segmentFreqs);
@@ -702,7 +702,7 @@
         }
         /* Print status */
         LOCALDISPLAYUPDATE(displayLevel, 2, "\r%u%%       ",
-                           (U32)((iteration * 100) / kIterations));
+                           (unsigned)((iteration * 100) / kIterations));
         ++iteration;
       }
       COVER_best_wait(&best);
--- a/contrib/python-zstandard/zstd/dictBuilder/zdict.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/dictBuilder/zdict.c	Wed Apr 17 13:41:18 2019 -0400
@@ -255,15 +255,15 @@
     }
 
     {   int i;
-        U32 searchLength;
+        U32 mml;
         U32 refinedStart = start;
         U32 refinedEnd = end;
 
         DISPLAYLEVEL(4, "\n");
-        DISPLAYLEVEL(4, "found %3u matches of length >= %i at pos %7u  ", (U32)(end-start), MINMATCHLENGTH, (U32)pos);
+        DISPLAYLEVEL(4, "found %3u matches of length >= %i at pos %7u  ", (unsigned)(end-start), MINMATCHLENGTH, (unsigned)pos);
         DISPLAYLEVEL(4, "\n");
 
-        for (searchLength = MINMATCHLENGTH ; ; searchLength++) {
+        for (mml = MINMATCHLENGTH ; ; mml++) {
             BYTE currentChar = 0;
             U32 currentCount = 0;
             U32 currentID = refinedStart;
@@ -271,13 +271,13 @@
             U32 selectedCount = 0;
             U32 selectedID = currentID;
             for (id =refinedStart; id < refinedEnd; id++) {
-                if (b[suffix[id] + searchLength] != currentChar) {
+                if (b[suffix[id] + mml] != currentChar) {
                     if (currentCount > selectedCount) {
                         selectedCount = currentCount;
                         selectedID = currentID;
                     }
                     currentID = id;
-                    currentChar = b[ suffix[id] + searchLength];
+                    currentChar = b[ suffix[id] + mml];
                     currentCount = 0;
                 }
                 currentCount ++;
@@ -342,7 +342,7 @@
             savings[i] = savings[i-1] + (lengthList[i] * (i-3));
 
         DISPLAYLEVEL(4, "Selected dict at position %u, of length %u : saves %u (ratio: %.2f)  \n",
-                     (U32)pos, (U32)maxLength, savings[maxLength], (double)savings[maxLength] / maxLength);
+                     (unsigned)pos, (unsigned)maxLength, (unsigned)savings[maxLength], (double)savings[maxLength] / maxLength);
 
         solution.pos = (U32)pos;
         solution.length = (U32)maxLength;
@@ -497,7 +497,7 @@
 static size_t ZDICT_trainBuffer_legacy(dictItem* dictList, U32 dictListSize,
                             const void* const buffer, size_t bufferSize,   /* buffer must end with noisy guard band */
                             const size_t* fileSizes, unsigned nbFiles,
-                            U32 minRatio, U32 notificationLevel)
+                            unsigned minRatio, U32 notificationLevel)
 {
     int* const suffix0 = (int*)malloc((bufferSize+2)*sizeof(*suffix0));
     int* const suffix = suffix0+1;
@@ -523,11 +523,11 @@
     memset(doneMarks, 0, bufferSize+16);
 
     /* limit sample set size (divsufsort limitation)*/
-    if (bufferSize > ZDICT_MAX_SAMPLES_SIZE) DISPLAYLEVEL(3, "sample set too large : reduced to %u MB ...\n", (U32)(ZDICT_MAX_SAMPLES_SIZE>>20));
+    if (bufferSize > ZDICT_MAX_SAMPLES_SIZE) DISPLAYLEVEL(3, "sample set too large : reduced to %u MB ...\n", (unsigned)(ZDICT_MAX_SAMPLES_SIZE>>20));
     while (bufferSize > ZDICT_MAX_SAMPLES_SIZE) bufferSize -= fileSizes[--nbFiles];
 
     /* sort */
-    DISPLAYLEVEL(2, "sorting %u files of total size %u MB ...\n", nbFiles, (U32)(bufferSize>>20));
+    DISPLAYLEVEL(2, "sorting %u files of total size %u MB ...\n", nbFiles, (unsigned)(bufferSize>>20));
     {   int const divSuftSortResult = divsufsort((const unsigned char*)buffer, suffix, (int)bufferSize, 0);
         if (divSuftSortResult != 0) { result = ERROR(GENERIC); goto _cleanup; }
     }
@@ -589,7 +589,7 @@
 #define MAXREPOFFSET 1024
 
 static void ZDICT_countEStats(EStats_ress_t esr, ZSTD_parameters params,
-                              U32* countLit, U32* offsetcodeCount, U32* matchlengthCount, U32* litlengthCount, U32* repOffsets,
+                              unsigned* countLit, unsigned* offsetcodeCount, unsigned* matchlengthCount, unsigned* litlengthCount, U32* repOffsets,
                               const void* src, size_t srcSize,
                               U32 notificationLevel)
 {
@@ -602,7 +602,7 @@
 
     }
     cSize = ZSTD_compressBlock(esr.zc, esr.workPlace, ZSTD_BLOCKSIZE_MAX, src, srcSize);
-    if (ZSTD_isError(cSize)) { DISPLAYLEVEL(3, "warning : could not compress sample size %u \n", (U32)srcSize); return; }
+    if (ZSTD_isError(cSize)) { DISPLAYLEVEL(3, "warning : could not compress sample size %u \n", (unsigned)srcSize); return; }
 
     if (cSize) {  /* if == 0; block is not compressible */
         const seqStore_t* const seqStorePtr = ZSTD_getSeqStore(esr.zc);
@@ -671,7 +671,7 @@
  * rewrite `countLit` to contain a mostly flat but still compressible distribution of literals.
  * necessary to avoid generating a non-compressible distribution that HUF_writeCTable() cannot encode.
  */
-static void ZDICT_flatLit(U32* countLit)
+static void ZDICT_flatLit(unsigned* countLit)
 {
     int u;
     for (u=1; u<256; u++) countLit[u] = 2;
@@ -687,14 +687,14 @@
                              const void* dictBuffer, size_t  dictBufferSize,
                                    unsigned notificationLevel)
 {
-    U32 countLit[256];
+    unsigned countLit[256];
     HUF_CREATE_STATIC_CTABLE(hufTable, 255);
-    U32 offcodeCount[OFFCODE_MAX+1];
+    unsigned offcodeCount[OFFCODE_MAX+1];
     short offcodeNCount[OFFCODE_MAX+1];
     U32 offcodeMax = ZSTD_highbit32((U32)(dictBufferSize + 128 KB));
-    U32 matchLengthCount[MaxML+1];
+    unsigned matchLengthCount[MaxML+1];
     short matchLengthNCount[MaxML+1];
-    U32 litLengthCount[MaxLL+1];
+    unsigned litLengthCount[MaxLL+1];
     short litLengthNCount[MaxLL+1];
     U32 repOffset[MAXREPOFFSET];
     offsetCount_t bestRepOffset[ZSTD_REP_NUM+1];
@@ -983,33 +983,33 @@
 
     /* display best matches */
     if (params.zParams.notificationLevel>= 3) {
-        U32 const nb = MIN(25, dictList[0].pos);
-        U32 const dictContentSize = ZDICT_dictSize(dictList);
-        U32 u;
-        DISPLAYLEVEL(3, "\n %u segments found, of total size %u \n", dictList[0].pos-1, dictContentSize);
+        unsigned const nb = MIN(25, dictList[0].pos);
+        unsigned const dictContentSize = ZDICT_dictSize(dictList);
+        unsigned u;
+        DISPLAYLEVEL(3, "\n %u segments found, of total size %u \n", (unsigned)dictList[0].pos-1, dictContentSize);
         DISPLAYLEVEL(3, "list %u best segments \n", nb-1);
         for (u=1; u<nb; u++) {
-            U32 const pos = dictList[u].pos;
-            U32 const length = dictList[u].length;
+            unsigned const pos = dictList[u].pos;
+            unsigned const length = dictList[u].length;
             U32 const printedLength = MIN(40, length);
             if ((pos > samplesBuffSize) || ((pos + length) > samplesBuffSize)) {
                 free(dictList);
                 return ERROR(GENERIC);   /* should never happen */
             }
             DISPLAYLEVEL(3, "%3u:%3u bytes at pos %8u, savings %7u bytes |",
-                         u, length, pos, dictList[u].savings);
+                         u, length, pos, (unsigned)dictList[u].savings);
             ZDICT_printHex((const char*)samplesBuffer+pos, printedLength);
             DISPLAYLEVEL(3, "| \n");
     }   }
 
 
     /* create dictionary */
-    {   U32 dictContentSize = ZDICT_dictSize(dictList);
+    {   unsigned dictContentSize = ZDICT_dictSize(dictList);
         if (dictContentSize < ZDICT_CONTENTSIZE_MIN) { free(dictList); return ERROR(dictionaryCreation_failed); }   /* dictionary content too small */
         if (dictContentSize < targetDictSize/4) {
-            DISPLAYLEVEL(2, "!  warning : selected content significantly smaller than requested (%u < %u) \n", dictContentSize, (U32)maxDictSize);
+            DISPLAYLEVEL(2, "!  warning : selected content significantly smaller than requested (%u < %u) \n", dictContentSize, (unsigned)maxDictSize);
             if (samplesBuffSize < 10 * targetDictSize)
-                DISPLAYLEVEL(2, "!  consider increasing the number of samples (total size : %u MB)\n", (U32)(samplesBuffSize>>20));
+                DISPLAYLEVEL(2, "!  consider increasing the number of samples (total size : %u MB)\n", (unsigned)(samplesBuffSize>>20));
             if (minRep > MINRATIO) {
                 DISPLAYLEVEL(2, "!  consider increasing selectivity to produce larger dictionary (-s%u) \n", selectivity+1);
                 DISPLAYLEVEL(2, "!  note : larger dictionaries are not necessarily better, test its efficiency on samples \n");
@@ -1017,9 +1017,9 @@
         }
 
         if ((dictContentSize > targetDictSize*3) && (nbSamples > 2*MINRATIO) && (selectivity>1)) {
-            U32 proposedSelectivity = selectivity-1;
+            unsigned proposedSelectivity = selectivity-1;
             while ((nbSamples >> proposedSelectivity) <= MINRATIO) { proposedSelectivity--; }
-            DISPLAYLEVEL(2, "!  note : calculated dictionary significantly larger than requested (%u > %u) \n", dictContentSize, (U32)maxDictSize);
+            DISPLAYLEVEL(2, "!  note : calculated dictionary significantly larger than requested (%u > %u) \n", dictContentSize, (unsigned)maxDictSize);
             DISPLAYLEVEL(2, "!  consider increasing dictionary size, or produce denser dictionary (-s%u) \n", proposedSelectivity);
             DISPLAYLEVEL(2, "!  always test dictionary efficiency on real samples \n");
         }
--- a/contrib/python-zstandard/zstd/zstd.h	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python-zstandard/zstd/zstd.h	Wed Apr 17 13:41:18 2019 -0400
@@ -71,16 +71,16 @@
 /*------   Version   ------*/
 #define ZSTD_VERSION_MAJOR    1
 #define ZSTD_VERSION_MINOR    3
-#define ZSTD_VERSION_RELEASE  6
+#define ZSTD_VERSION_RELEASE  8
 
 #define ZSTD_VERSION_NUMBER  (ZSTD_VERSION_MAJOR *100*100 + ZSTD_VERSION_MINOR *100 + ZSTD_VERSION_RELEASE)
-ZSTDLIB_API unsigned ZSTD_versionNumber(void);   /**< useful to check dll version */
+ZSTDLIB_API unsigned ZSTD_versionNumber(void);   /**< to check runtime library version */
 
 #define ZSTD_LIB_VERSION ZSTD_VERSION_MAJOR.ZSTD_VERSION_MINOR.ZSTD_VERSION_RELEASE
 #define ZSTD_QUOTE(str) #str
 #define ZSTD_EXPAND_AND_QUOTE(str) ZSTD_QUOTE(str)
 #define ZSTD_VERSION_STRING ZSTD_EXPAND_AND_QUOTE(ZSTD_LIB_VERSION)
-ZSTDLIB_API const char* ZSTD_versionString(void);   /* v1.3.0+ */
+ZSTDLIB_API const char* ZSTD_versionString(void);   /* requires v1.3.0+ */
 
 /***************************************
 *  Default constant
@@ -110,7 +110,7 @@
 ZSTDLIB_API size_t ZSTD_decompress( void* dst, size_t dstCapacity,
                               const void* src, size_t compressedSize);
 
-/*! ZSTD_getFrameContentSize() : added in v1.3.0
+/*! ZSTD_getFrameContentSize() : requires v1.3.0+
  *  `src` should point to the start of a ZSTD encoded frame.
  *  `srcSize` must be at least as large as the frame header.
  *            hint : any size >= `ZSTD_frameHeaderSize_max` is large enough.
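A short sketch of the sentinel checks this doc-comment implies; ZSTD_CONTENTSIZE_UNKNOWN and ZSTD_CONTENTSIZE_ERROR are the constants published alongside ZSTD_getFrameContentSize() in this header::

   #include "zstd.h"

   /* Sketch : decide a decompression strategy from the frame header. */
   static int frameSizeKnown(const void* src, size_t srcSize,
                             unsigned long long* contentSize)
   {
       unsigned long long const r = ZSTD_getFrameContentSize(src, srcSize);
       if (r == ZSTD_CONTENTSIZE_ERROR) return -1;   /* not a valid zstd frame */
       if (r == ZSTD_CONTENTSIZE_UNKNOWN) return 0;  /* fall back to streaming */
       *contentSize = r;       /* safe to size the destination buffer from this */
       return 1;
   }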
@@ -167,8 +167,10 @@
 ZSTDLIB_API size_t     ZSTD_freeCCtx(ZSTD_CCtx* cctx);
 
 /*! ZSTD_compressCCtx() :
- *  Same as ZSTD_compress(), requires an allocated ZSTD_CCtx (see ZSTD_createCCtx()). */
-ZSTDLIB_API size_t ZSTD_compressCCtx(ZSTD_CCtx* ctx,
+ *  Same as ZSTD_compress(), using an explicit ZSTD_CCtx
+ *  The function will compress at requested compression level,
+ *  ignoring any other parameter */
+ZSTDLIB_API size_t ZSTD_compressCCtx(ZSTD_CCtx* cctx,
                                      void* dst, size_t dstCapacity,
                                const void* src, size_t srcSize,
                                      int compressionLevel);
@@ -184,8 +186,11 @@
 ZSTDLIB_API size_t     ZSTD_freeDCtx(ZSTD_DCtx* dctx);
 
 /*! ZSTD_decompressDCtx() :
- *  Same as ZSTD_decompress(), requires an allocated ZSTD_DCtx (see ZSTD_createDCtx()) */
-ZSTDLIB_API size_t ZSTD_decompressDCtx(ZSTD_DCtx* ctx,
+ *  Same as ZSTD_decompress(),
+ *  requires an allocated ZSTD_DCtx.
+ *  Compatible with sticky parameters.
+ */
+ZSTDLIB_API size_t ZSTD_decompressDCtx(ZSTD_DCtx* dctx,
                                        void* dst, size_t dstCapacity,
                                  const void* src, size_t srcSize);
 
@@ -194,9 +199,12 @@
 *  Simple dictionary API
 ***************************/
 /*! ZSTD_compress_usingDict() :
- *  Compression using a predefined Dictionary (see dictBuilder/zdict.h).
+ *  Compression at an explicit compression level using a Dictionary.
+ *  A dictionary can be any arbitrary data segment (also called a prefix),
+ *  or a buffer with specified information (see dictBuilder/zdict.h).
  *  Note : This function loads the dictionary, resulting in significant startup delay.
- *  Note : When `dict == NULL || dictSize < 8` no dictionary is used. */
+ *         It's intended for a dictionary used only once.
+ *  Note 2 : When `dict == NULL || dictSize < 8` no dictionary is used. */
 ZSTDLIB_API size_t ZSTD_compress_usingDict(ZSTD_CCtx* ctx,
                                            void* dst, size_t dstCapacity,
                                      const void* src, size_t srcSize,
@@ -204,9 +212,10 @@
                                            int compressionLevel);
 
 /*! ZSTD_decompress_usingDict() :
- *  Decompression using a predefined Dictionary (see dictBuilder/zdict.h).
+ *  Decompression using a known Dictionary.
  *  Dictionary must be identical to the one used during compression.
  *  Note : This function loads the dictionary, resulting in significant startup delay.
+ *         It's intended for a dictionary used only once.
  *  Note : When `dict == NULL || dictSize < 8` no dictionary is used. */
 ZSTDLIB_API size_t ZSTD_decompress_usingDict(ZSTD_DCtx* dctx,
                                              void* dst, size_t dstCapacity,
@@ -214,17 +223,18 @@
                                       const void* dict, size_t dictSize);
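A sketch of the round-trip these two entry points support; the dictionary would typically come from ZDICT_trainFromBuffer(), and level 3 is just a default chosen for illustration::

   #include "zstd.h"

   static size_t roundTripWithDict(void* dst, size_t dstCap,
                                   void* rt,  size_t rtCap,
                                   const void* src, size_t srcSize,
                                   const void* dict, size_t dictSize)
   {
       ZSTD_CCtx* const cctx = ZSTD_createCCtx();
       ZSTD_DCtx* const dctx = ZSTD_createDCtx();
       size_t result = ZSTD_compress_usingDict(cctx, dst, dstCap,
                                               src, srcSize,
                                               dict, dictSize, 3 /* level */);
       if (!ZSTD_isError(result))
           result = ZSTD_decompress_usingDict(dctx, rt, rtCap,
                                              dst, result, dict, dictSize);
       ZSTD_freeCCtx(cctx);
       ZSTD_freeDCtx(dctx);
       return result;   /* regenerated size, or an error code (test with ZSTD_isError) */
   }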
 
 
-/**********************************
+/***********************************
  *  Bulk processing dictionary API
- *********************************/
+ **********************************/
 typedef struct ZSTD_CDict_s ZSTD_CDict;
 
 /*! ZSTD_createCDict() :
- *  When compressing multiple messages / blocks with the same dictionary, it's recommended to load it just once.
- *  ZSTD_createCDict() will create a digested dictionary, ready to start future compression operations without startup delay.
+ *  When compressing multiple messages / blocks using the same dictionary, it's recommended to load it only once.
+ *  ZSTD_createCDict() will create a digested dictionary, ready to start future compression operations without startup cost.
  *  ZSTD_CDict can be created once and shared by multiple threads concurrently, since its usage is read-only.
- *  `dictBuffer` can be released after ZSTD_CDict creation, since its content is copied within CDict
- *  Note : A ZSTD_CDict can be created with an empty dictionary, but it is inefficient for small data. */
+ * `dictBuffer` can be released after ZSTD_CDict creation, because its content is copied within CDict.
+ *  Consider experimental function `ZSTD_createCDict_byReference()` if you prefer to not duplicate `dictBuffer` content.
+ *  Note : A ZSTD_CDict can be created from an empty dictBuffer, but it is inefficient when used to compress small data. */
 ZSTDLIB_API ZSTD_CDict* ZSTD_createCDict(const void* dictBuffer, size_t dictSize,
                                          int compressionLevel);
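A sketch of the reuse pattern recommended above: digest the dictionary once, then compress many messages with it; buffers and sizes are assumed supplied by the caller, and error handling is reduced to a boolean::

   #include "zstd.h"

   static int compressMany(void* const dst[], size_t const dstCap[],
                           const void* const msg[], size_t const msgSize[],
                           size_t nbMessages,
                           const void* dictBuffer, size_t dictSize)
   {
       ZSTD_CDict* const cdict = ZSTD_createCDict(dictBuffer, dictSize, 3);
       ZSTD_CCtx*  const cctx  = ZSTD_createCCtx();
       int ok = (cdict != NULL) && (cctx != NULL);
       size_t i;
       for (i = 0; ok && i < nbMessages; i++) {
           size_t const cSize = ZSTD_compress_usingCDict(cctx, dst[i], dstCap[i],
                                                         msg[i], msgSize[i], cdict);
           ok = !ZSTD_isError(cSize);
       }
       ZSTD_freeCCtx(cctx);
       ZSTD_freeCDict(cdict);
       return ok;
   }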
 
@@ -234,11 +244,9 @@
 
 /*! ZSTD_compress_usingCDict() :
  *  Compression using a digested Dictionary.
- *  Faster startup than ZSTD_compress_usingDict(), recommended when same dictionary is used multiple times.
- *  Note that compression level is decided during dictionary creation.
- *  Frame parameters are hardcoded (dictID=yes, contentSize=yes, checksum=no)
- *  Note : ZSTD_compress_usingCDict() can be used with a ZSTD_CDict created from an empty dictionary.
- *         But it is inefficient for small data, and it is recommended to use ZSTD_compressCCtx(). */
+ *  Recommended when same dictionary is used multiple times.
+ *  Note : compression level is _decided at dictionary creation time_,
+ *     and frame parameters are hardcoded (dictID=yes, contentSize=yes, checksum=no) */
 ZSTDLIB_API size_t ZSTD_compress_usingCDict(ZSTD_CCtx* cctx,
                                             void* dst, size_t dstCapacity,
                                       const void* src, size_t srcSize,
@@ -249,7 +257,7 @@
 
 /*! ZSTD_createDDict() :
  *  Create a digested dictionary, ready to start decompression operation without startup delay.
- *  dictBuffer can be released after DDict creation, as its content is copied inside DDict */
+ *  dictBuffer can be released after DDict creation, as its content is copied inside DDict. */
 ZSTDLIB_API ZSTD_DDict* ZSTD_createDDict(const void* dictBuffer, size_t dictSize);
 
 /*! ZSTD_freeDDict() :
@@ -258,7 +266,7 @@
 
 /*! ZSTD_decompress_usingDDict() :
  *  Decompression using a digested Dictionary.
- *  Faster startup than ZSTD_decompress_usingDict(), recommended when same dictionary is used multiple times. */
+ *  Recommended when same dictionary is used multiple times. */
 ZSTDLIB_API size_t ZSTD_decompress_usingDDict(ZSTD_DCtx* dctx,
                                               void* dst, size_t dstCapacity,
                                         const void* src, size_t srcSize,
@@ -289,13 +297,17 @@
 *  A ZSTD_CStream object is required to track streaming operation.
 *  Use ZSTD_createCStream() and ZSTD_freeCStream() to create/release resources.
 *  ZSTD_CStream objects can be reused multiple times on consecutive compression operations.
-*  It is recommended to re-use ZSTD_CStream in situations where many streaming operations will be achieved consecutively,
-*  since it will play nicer with system's memory, by re-using already allocated memory.
-*  Use one separate ZSTD_CStream per thread for parallel execution.
+*  It is recommended to re-use ZSTD_CStream since it will play nicer with the system's memory, by re-using already allocated memory.
+*
+*  For parallel execution, use one separate ZSTD_CStream per thread.
+*
+*  note : since v1.3.0, ZSTD_CStream and ZSTD_CCtx are the same thing.
 *
-*  Start a new compression by initializing ZSTD_CStream context.
-*  Use ZSTD_initCStream() to start a new compression operation.
-*  Use variants ZSTD_initCStream_usingDict() or ZSTD_initCStream_usingCDict() for streaming with dictionary (experimental section)
+*  Parameters are sticky : when starting a new compression on the same context,
+*  it will re-use the same sticky parameters as previous compression session.
+*  When in doubt, it's recommended to fully initialize the context before usage.
+*  Use ZSTD_initCStream() to set the parameter to a selected compression level.
+*  Use advanced API (ZSTD_CCtx_setParameter(), etc.) to set more specific parameters.
 *
 *  Use ZSTD_compressStream() as many times as necessary to consume input stream.
 *  The function will automatically update both `pos` fields within `input` and `output`.
@@ -304,12 +316,11 @@
 *  in which case `input.pos < input.size`.
 *  The caller must check if input has been entirely consumed.
 *  If not, the caller must make some room to receive more compressed data,
-*  typically by emptying output buffer, or allocating a new output buffer,
 *  and then present again remaining input data.
-*  @return : a size hint, preferred nb of bytes to use as input for next function call
-*            or an error code, which can be tested using ZSTD_isError().
-*            Note 1 : it's just a hint, to help latency a little, any other value will work fine.
-*            Note 2 : size hint is guaranteed to be <= ZSTD_CStreamInSize()
+* @return : a size hint, preferred nb of bytes to use as input for next function call
+*           or an error code, which can be tested using ZSTD_isError().
+*           Note 1 : it's just a hint, to help latency a little, any value will work fine.
+*           Note 2 : size hint is guaranteed to be <= ZSTD_CStreamInSize()
 *
 *  At any moment, it's possible to flush whatever data might remain stuck within internal buffer,
 *  using ZSTD_flushStream(). `output->pos` will be updated.
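Putting the loop described above together, a compression sketch for a single in-memory buffer; it assumes dstCapacity is at least ZSTD_compressBound(srcSize), so ZSTD_endStream() can always finish flushing::

   #include "zstd.h"

   static size_t streamCompress(void* dst, size_t dstCapacity,
                                const void* src, size_t srcSize, int level)
   {
       ZSTD_CStream* const zcs = ZSTD_createCStream();
       ZSTD_inBuffer  input  = { src, srcSize, 0 };
       ZSTD_outBuffer output = { dst, dstCapacity, 0 };
       size_t remaining;
       ZSTD_initCStream(zcs, level);
       while (input.pos < input.size) {   /* consume all input */
           size_t const hint = ZSTD_compressStream(zcs, &output, &input);
           if (ZSTD_isError(hint)) { ZSTD_freeCStream(zcs); return hint; }
       }
       do {   /* write frame epilogue; 0 means fully flushed */
           remaining = ZSTD_endStream(zcs, &output);
           if (ZSTD_isError(remaining)) { ZSTD_freeCStream(zcs); return remaining; }
       } while (remaining > 0);
       ZSTD_freeCStream(zcs);
       return output.pos;   /* compressed size */
   }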
@@ -353,23 +364,28 @@
 *  Use ZSTD_createDStream() and ZSTD_freeDStream() to create/release resources.
 *  ZSTD_DStream objects can be re-used multiple times.
 *
-*  Use ZSTD_initDStream() to start a new decompression operation,
-*   or ZSTD_initDStream_usingDict() if decompression requires a dictionary.
-*   @return : recommended first input size
+*  Use ZSTD_initDStream() to start a new decompression operation.
+* @return : recommended first input size
+*  Alternatively, use advanced API to set specific properties.
 *
 *  Use ZSTD_decompressStream() repetitively to consume your input.
 *  The function will update both `pos` fields.
 *  If `input.pos < input.size`, some input has not been consumed.
 *  It's up to the caller to present again remaining data.
+*  The function tries to flush all data decoded immediately, respecting output buffer size.
 *  If `output.pos < output.size`, decoder has flushed everything it could.
-*  @return : 0 when a frame is completely decoded and fully flushed,
-*            an error code, which can be tested using ZSTD_isError(),
-*            any other value > 0, which means there is still some decoding to do to complete current frame.
-*            The return value is a suggested next input size (a hint to improve latency) that will never load more than the current frame.
+*  But if `output.pos == output.size`, there might be some data left within internal buffers,
+*  in which case, call ZSTD_decompressStream() again to flush whatever remains in the buffer.
+*  Note : with no additional input provided, amount of data flushed is necessarily <= ZSTD_BLOCKSIZE_MAX.
+* @return : 0 when a frame is completely decoded and fully flushed,
+*        or an error code, which can be tested using ZSTD_isError(),
+*        or any other value > 0, which means there is still some decoding or flushing to do to complete current frame :
+*                                the return value is a suggested next input size (just a hint for better latency)
+*                                that will never request more than the remaining frame size.
 * *******************************************************************************/
 
 typedef ZSTD_DCtx ZSTD_DStream;  /**< DCtx and DStream are now effectively same object (>= v1.3.0) */
-                                 /* For compatibility with versions <= v1.2.0, continue to consider them separated. */
+                                 /* For compatibility with versions <= v1.2.0, prefer differentiating them. */
 /*===== ZSTD_DStream management functions =====*/
 ZSTDLIB_API ZSTD_DStream* ZSTD_createDStream(void);
 ZSTDLIB_API size_t ZSTD_freeDStream(ZSTD_DStream* zds);
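The matching decompression loop, as a sketch: keep calling ZSTD_decompressStream() until it returns 0 (frame fully decoded and flushed) or an error; dstCapacity is assumed large enough for the whole regenerated content::

   #include "zstd.h"

   static size_t streamDecompress(void* dst, size_t dstCapacity,
                                  const void* src, size_t srcSize)
   {
       ZSTD_DStream* const zds = ZSTD_createDStream();
       ZSTD_inBuffer  input  = { src, srcSize, 0 };
       ZSTD_outBuffer output = { dst, dstCapacity, 0 };
       size_t ret = 1;   /* >0 : more decoding or flushing to do */
       ZSTD_initDStream(zds);
       while (ret != 0) {
           ret = ZSTD_decompressStream(zds, &output, &input);
           if (ZSTD_isError(ret)) break;
           if (input.pos == input.size && ret != 0) break;   /* input truncated */
       }
       ZSTD_freeDStream(zds);
       return ZSTD_isError(ret) ? ret : output.pos;   /* regenerated size (partial if truncated) */
   }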
@@ -386,77 +402,602 @@
 
 
 
-#if defined(ZSTD_STATIC_LINKING_ONLY) && !defined(ZSTD_H_ZSTD_STATIC_LINKING_ONLY)
-#define ZSTD_H_ZSTD_STATIC_LINKING_ONLY
-
 /****************************************************************************************
  *   ADVANCED AND EXPERIMENTAL FUNCTIONS
  ****************************************************************************************
- * The definitions in this section are considered experimental.
+ * The definitions in the following section are considered experimental.
+ * They are provided for advanced scenarios.
  * They should never be used with a dynamic library, as prototypes may change in the future.
- * They are provided for advanced scenarios.
  * Use them only in association with static linking.
  * ***************************************************************************************/
 
+#if defined(ZSTD_STATIC_LINKING_ONLY) && !defined(ZSTD_H_ZSTD_STATIC_LINKING_ONLY)
+#define ZSTD_H_ZSTD_STATIC_LINKING_ONLY
+
+
+/****************************************************************************************
+ *   Candidate API for promotion to stable status
+ ****************************************************************************************
+ * The following symbols and constants form the "staging area" :
+ * they are expected to join the "stable API" by v1.4.0.
+ * The proposal is written so that it can be made stable "as is",
+ * though it's still possible to suggest improvements.
+ * Staging is in fact the last chance for changes :
+ * the API is locked once it reaches "stable" status.
+ * ***************************************************************************************/
+
+
+/* ===  Constants   === */
+
+/* all magic numbers are supposed to be read/written to/from files/memory using the little-endian convention */
+#define ZSTD_MAGICNUMBER            0xFD2FB528    /* valid since v0.8.0 */
+#define ZSTD_MAGIC_DICTIONARY       0xEC30A437    /* valid since v0.7.0 */
+#define ZSTD_MAGIC_SKIPPABLE_START  0x184D2A50    /* all 16 values, from 0x184D2A50 to 0x184D2A5F, signal the beginning of a skippable frame */
+#define ZSTD_MAGIC_SKIPPABLE_MASK   0xFFFFFFF0
+
+#define ZSTD_BLOCKSIZELOG_MAX  17
+#define ZSTD_BLOCKSIZE_MAX     (1<<ZSTD_BLOCKSIZELOG_MAX)
+
+
+/* ===   query limits   === */
+
 ZSTDLIB_API int ZSTD_minCLevel(void);  /*!< minimum negative compression level allowed */
 
-/* ---  Constants  ---*/
-#define ZSTD_MAGICNUMBER            0xFD2FB528   /* v0.8+ */
-#define ZSTD_MAGIC_DICTIONARY       0xEC30A437   /* v0.7+ */
-#define ZSTD_MAGIC_SKIPPABLE_START  0x184D2A50U
+
+/* ===   frame size   === */
+
+/*! ZSTD_findFrameCompressedSize() :
+ * `src` should point to the start of a ZSTD frame or skippable frame.
+ * `srcSize` must be >= first frame size
+ * @return : the compressed size of the first frame starting at `src`,
+ *           suitable to pass as `srcSize` to `ZSTD_decompress` or similar,
+ *        or an error code if input is invalid */
+ZSTDLIB_API size_t ZSTD_findFrameCompressedSize(const void* src, size_t srcSize);
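A sketch of the typical use: walking a buffer of concatenated frames (zstd or skippable) one frame boundary at a time::

   #include "zstd.h"

   static size_t countFrames(const char* buf, size_t totalSize)
   {
       size_t pos = 0, nbFrames = 0;
       while (pos < totalSize) {
           size_t const frameSize =
               ZSTD_findFrameCompressedSize(buf + pos, totalSize - pos);
           if (ZSTD_isError(frameSize)) break;   /* invalid data : stop counting */
           pos += frameSize;
           nbFrames++;
       }
       return nbFrames;
   }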
+
+
+/* ===   Memory management   === */
+
+/*! ZSTD_sizeof_*() :
+ *  These functions give the _current_ memory usage of selected object.
+ *  Note that object memory usage can evolve (increase or decrease) over time. */
+ZSTDLIB_API size_t ZSTD_sizeof_CCtx(const ZSTD_CCtx* cctx);
+ZSTDLIB_API size_t ZSTD_sizeof_DCtx(const ZSTD_DCtx* dctx);
+ZSTDLIB_API size_t ZSTD_sizeof_CStream(const ZSTD_CStream* zcs);
+ZSTDLIB_API size_t ZSTD_sizeof_DStream(const ZSTD_DStream* zds);
+ZSTDLIB_API size_t ZSTD_sizeof_CDict(const ZSTD_CDict* cdict);
+ZSTDLIB_API size_t ZSTD_sizeof_DDict(const ZSTD_DDict* ddict);
+
+
+/***************************************
+*  Advanced compression API
+***************************************/
+
+/* API design :
+ *   Parameters are pushed one by one into an existing context,
+ *   using ZSTD_CCtx_set*() functions.
+ *   Pushed parameters are sticky : they are valid for next compressed frame, and any subsequent frame.
+ *   "sticky" parameters are applicable to `ZSTD_compress2()` and `ZSTD_compressStream*()` !
+ *   They do not apply to "simple" one-shot variants such as ZSTD_compressCCtx()
+ *
+ *   It's possible to reset all parameters to "default" using ZSTD_CCtx_reset().
+ *
+ *   This API supersedes all other "advanced" API entry points in the experimental section.
+ *   In the future, we expect to remove from experimental API entry points which are redundant with this API.
+ */
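A sketch of the push-parameter flow just described; ZSTD_compress2() and the ZSTD_c_* values used here are declared further down in this staging section::

   #include "zstd.h"

   /* Set sticky parameters once, then compress : they remain in effect for
    * this and all subsequent frames, until changed or ZSTD_CCtx_reset(). */
   static size_t compressWithParams(ZSTD_CCtx* cctx,
                                    void* dst, size_t dstCap,
                                    const void* src, size_t srcSize)
   {
       ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, 19);
       ZSTD_CCtx_setParameter(cctx, ZSTD_c_checksumFlag, 1);
       /* each setter also returns a size_t testable with ZSTD_isError() */
       return ZSTD_compress2(cctx, dst, dstCap, src, srcSize);
   }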
+
+
+/* Compression strategies, listed from fastest to strongest */
+typedef enum { ZSTD_fast=1,
+               ZSTD_dfast=2,
+               ZSTD_greedy=3,
+               ZSTD_lazy=4,
+               ZSTD_lazy2=5,
+               ZSTD_btlazy2=6,
+               ZSTD_btopt=7,
+               ZSTD_btultra=8,
+               ZSTD_btultra2=9
+               /* note : new strategies _might_ be added in the future.
+                         Only the order (from fast to strong) is guaranteed */
+} ZSTD_strategy;
+
+
+typedef enum {
 
-#define ZSTD_BLOCKSIZELOG_MAX 17
-#define ZSTD_BLOCKSIZE_MAX   (1<<ZSTD_BLOCKSIZELOG_MAX)   /* define, for static allocation */
+    /* compression parameters */
+    ZSTD_c_compressionLevel=100, /* Update all compression parameters according to pre-defined cLevel table
+                              * Default level is ZSTD_CLEVEL_DEFAULT==3.
+                              * Special: value 0 means default, which is controlled by ZSTD_CLEVEL_DEFAULT.
+                              * Note 1 : it's possible to pass a negative compression level.
+                              * Note 2 : setting a level sets all default values of other compression parameters */
+    ZSTD_c_windowLog=101,    /* Maximum allowed back-reference distance, expressed as power of 2.
+                              * Must be clamped between ZSTD_WINDOWLOG_MIN and ZSTD_WINDOWLOG_MAX.
+                              * Special: value 0 means "use default windowLog".
+                              * Note: Using a windowLog greater than ZSTD_WINDOWLOG_LIMIT_DEFAULT
+                              *       requires explicitly allowing such window size at decompression stage if using streaming. */
+    ZSTD_c_hashLog=102,      /* Size of the initial probe table, as a power of 2.
+                              * Resulting memory usage is (1 << (hashLog+2)).
+                              * Must be clamped between ZSTD_HASHLOG_MIN and ZSTD_HASHLOG_MAX.
+                              * Larger tables improve compression ratio of strategies <= dFast,
+                              * and improve speed of strategies > dFast.
+                              * Special: value 0 means "use default hashLog". */
+    ZSTD_c_chainLog=103,     /* Size of the multi-probe search table, as a power of 2.
+                              * Resulting memory usage is (1 << (chainLog+2)).
+                              * Must be clamped between ZSTD_CHAINLOG_MIN and ZSTD_CHAINLOG_MAX.
+                              * Larger tables result in better and slower compression.
+                              * This parameter is useless when using "fast" strategy.
+                              * It's still useful when using "dfast" strategy,
+                              * in which case it defines a secondary probe table.
+                              * Special: value 0 means "use default chainLog". */
+    ZSTD_c_searchLog=104,    /* Number of search attempts, as a power of 2.
+                              * More attempts result in better and slower compression.
+                              * This parameter is useless when using "fast" and "dFast" strategies.
+                              * Special: value 0 means "use default searchLog". */
+    ZSTD_c_minMatch=105,     /* Minimum size of searched matches.
+                              * Note that Zstandard can still find matches of smaller size,
+                              * it just tweaks its search algorithm to look for this size and larger.
+                              * Larger values increase compression and decompression speed, but decrease ratio.
+                              * Must be clamped between ZSTD_MINMATCH_MIN and ZSTD_MINMATCH_MAX.
+                              * Note that currently, for all strategies < btopt, the effective minimum is 4,
+                              * and for all strategies > fast, the effective maximum is 6.
+                              * Special: value 0 means "use default minMatchLength". */
+    ZSTD_c_targetLength=106, /* Impact of this field depends on strategy.
+                              * For strategies btopt, btultra & btultra2:
+                              *     Length of Match considered "good enough" to stop search.
+                              *     Larger values make compression stronger, and slower.
+                              * For strategy fast:
+                              *     Distance between match sampling.
+                              *     Larger values make compression faster, and weaker.
+                              * Special: value 0 means "use default targetLength". */
+    ZSTD_c_strategy=107,     /* See ZSTD_strategy enum definition.
+                              * The higher the value of selected strategy, the more complex it is,
+                              * resulting in stronger and slower compression.
+                              * Special: value 0 means "use default strategy". */
+
+    /* LDM mode parameters */
+    ZSTD_c_enableLongDistanceMatching=160, /* Enable long distance matching.
+                                     * This parameter is designed to improve compression ratio
+                                     * for large inputs, by finding large matches at long distance.
+                                     * It increases memory usage and window size.
+                                     * Note: enabling this parameter increases the default ZSTD_c_windowLog to 27 (128 MB window),
+                                     * except when expressly set to a different value. */
+    ZSTD_c_ldmHashLog=161,   /* Size of the table for long distance matching, as a power of 2.
+                              * Larger values increase memory usage and compression ratio,
+                              * but decrease compression speed.
+                              * Must be clamped between ZSTD_HASHLOG_MIN and ZSTD_HASHLOG_MAX
+                              * default: windowlog - 7.
+                              * Special: value 0 means "automatically determine hashlog". */
+    ZSTD_c_ldmMinMatch=162,  /* Minimum match size for long distance matcher.
+                              * Larger/too small values usually decrease compression ratio.
+                              * Must be clamped between ZSTD_LDM_MINMATCH_MIN and ZSTD_LDM_MINMATCH_MAX.
+                              * Special: value 0 means "use default value" (default: 64). */
+    ZSTD_c_ldmBucketSizeLog=163, /* Log size of each bucket in the LDM hash table for collision resolution.
+                              * Larger values improve collision resolution but decrease compression speed.
+                              * The maximum value is ZSTD_LDM_BUCKETSIZELOG_MAX.
+                              * Special: value 0 means "use default value" (default: 3). */
+    ZSTD_c_ldmHashRateLog=164, /* Frequency of inserting/looking up entries into the LDM hash table.
+                              * Must be clamped between 0 and (ZSTD_WINDOWLOG_MAX - ZSTD_HASHLOG_MIN).
+                              * Default is MAX(0, (windowLog - ldmHashLog)), optimizing hash table usage.
+                              * Larger values improve compression speed.
+                              * Deviating far from default value will likely result in a compression ratio decrease.
+                              * Special: value 0 means "automatically determine hashRateLog". */
+
+    /* frame parameters */
+    ZSTD_c_contentSizeFlag=200, /* Content size will be written into frame header _whenever known_ (default:1)
+                              * Content size must be known at the beginning of compression.
+                              * This is automatically the case when using ZSTD_compress2().
+                              * For streaming variants, content size must be provided with ZSTD_CCtx_setPledgedSrcSize() */
+    ZSTD_c_checksumFlag=201, /* A 32-bit checksum of content is written at end of frame (default:0) */
+    ZSTD_c_dictIDFlag=202,   /* When applicable, dictionary's ID is written into frame header (default:1) */
 
-#define ZSTD_WINDOWLOG_MAX_32   30
-#define ZSTD_WINDOWLOG_MAX_64   31
-#define ZSTD_WINDOWLOG_MAX    ((unsigned)(sizeof(size_t) == 4 ? ZSTD_WINDOWLOG_MAX_32 : ZSTD_WINDOWLOG_MAX_64))
-#define ZSTD_WINDOWLOG_MIN      10
-#define ZSTD_HASHLOG_MAX      ((ZSTD_WINDOWLOG_MAX < 30) ? ZSTD_WINDOWLOG_MAX : 30)
-#define ZSTD_HASHLOG_MIN         6
-#define ZSTD_CHAINLOG_MAX_32    29
-#define ZSTD_CHAINLOG_MAX_64    30
-#define ZSTD_CHAINLOG_MAX     ((unsigned)(sizeof(size_t) == 4 ? ZSTD_CHAINLOG_MAX_32 : ZSTD_CHAINLOG_MAX_64))
-#define ZSTD_CHAINLOG_MIN       ZSTD_HASHLOG_MIN
-#define ZSTD_HASHLOG3_MAX       17
-#define ZSTD_SEARCHLOG_MAX     (ZSTD_WINDOWLOG_MAX-1)
-#define ZSTD_SEARCHLOG_MIN       1
-#define ZSTD_SEARCHLENGTH_MAX    7   /* only for ZSTD_fast, other strategies are limited to 6 */
-#define ZSTD_SEARCHLENGTH_MIN    3   /* only for ZSTD_btopt, other strategies are limited to 4 */
-#define ZSTD_TARGETLENGTH_MAX  ZSTD_BLOCKSIZE_MAX
-#define ZSTD_TARGETLENGTH_MIN    0   /* note : comparing this constant to an unsigned results in a tautological test */
-#define ZSTD_LDM_MINMATCH_MAX 4096
-#define ZSTD_LDM_MINMATCH_MIN    4
-#define ZSTD_LDM_BUCKETSIZELOG_MAX 8
+    /* multi-threading parameters */
+    /* These parameters are only useful if multi-threading is enabled (compiled with build macro ZSTD_MULTITHREAD).
+     * They return an error otherwise. */
+    ZSTD_c_nbWorkers=400,    /* Select how many threads will be spawned to compress in parallel.
+                              * When nbWorkers >= 1, triggers asynchronous mode when used with ZSTD_compressStream*() :
+                              * ZSTD_compressStream*() consumes input and flushes output if possible, but immediately gives control back to the caller,
+                              * while compression work is performed in parallel, within worker threads.
+                              * (note : a strong exception to this rule is when first invocation of ZSTD_compressStream2() sets ZSTD_e_end :
+                              *  in which case, ZSTD_compressStream2() delegates to ZSTD_compress2(), which is always a blocking call).
+                              * More workers improve speed, but also increase memory usage.
+                              * Default value is `0`, aka "single-threaded mode" : no worker is spawned, compression is performed inside the caller's thread, and all invocations are blocking */
+    ZSTD_c_jobSize=401,      /* Size of a compression job. This value is enforced only when nbWorkers >= 1.
+                              * Each compression job is completed in parallel, so this value can indirectly impact the nb of active threads.
+                              * 0 means default, which is dynamically determined based on compression parameters.
+                              * Job size must be at least the overlap size, or 1 MB, whichever is larger.
+                              * The minimum size is automatically and transparently enforced */
+    ZSTD_c_overlapLog=402,   /* Control the overlap size, as a fraction of window size.
+                              * The overlap size is an amount of data reloaded from previous job at the beginning of a new job.
+                              * It helps preserve compression ratio, while each job is compressed in parallel.
+                              * This value is enforced only when nbWorkers >= 1.
+                              * Larger values increase compression ratio, but decrease speed.
+                              * Possible values range from 0 to 9 :
+                              * - 0 means "default" : value will be determined by the library, depending on strategy
+                              * - 1 means "no overlap"
+                              * - 9 means "full overlap", using a full window size.
+                              * Each intermediate rank increases/decreases load size by a factor 2 :
+                              * 9: full window;  8: w/2;  7: w/4;  6: w/8;  5: w/16;  4: w/32;  3: w/64;  2: w/128;  1: no overlap;  0: default
+                              * default value varies between 6 and 9, depending on strategy */
+
+    /* note : additional experimental parameters are also available
+     * within the experimental section of the API.
+     * At the time of this writing, they include :
+     * ZSTD_c_rsyncable
+     * ZSTD_c_format
+     * ZSTD_c_forceMaxWindow
+     * ZSTD_c_forceAttachDict
+     * Because they are not stable, it's necessary to define ZSTD_STATIC_LINKING_ONLY to access them.
+     * note : never ever use experimentalParam? names directly;
+     *        also, the enum values themselves are unstable and can still change.
+     */
+     ZSTD_c_experimentalParam1=500,
+     ZSTD_c_experimentalParam2=10,
+     ZSTD_c_experimentalParam3=1000,
+     ZSTD_c_experimentalParam4=1001
+} ZSTD_cParameter;
+
+
+typedef struct {
+    size_t error;
+    int lowerBound;
+    int upperBound;
+} ZSTD_bounds;
+
+/*! ZSTD_cParam_getBounds() :
+ *  All parameters must belong to an interval with lower and upper bounds,
+ *  otherwise they will either trigger an error or be automatically clamped.
+ * @return : a structure, ZSTD_bounds, which contains
+ *         - an error status field, which must be tested using ZSTD_isError()
+ *         - lower and upper bounds, both inclusive
+ */
+ZSTDLIB_API ZSTD_bounds ZSTD_cParam_getBounds(ZSTD_cParameter cParam);
+
+/*! ZSTD_CCtx_setParameter() :
+ *  Set one compression parameter, selected by enum ZSTD_cParameter.
+ *  All parameters have valid bounds. Bounds can be queried using ZSTD_cParam_getBounds().
+ *  Providing a value beyond these bounds will either clamp it, or trigger an error (depending on the parameter).
+ *  Setting a parameter is generally only possible during frame initialization (before starting compression).
+ *  Exception : when using multi-threading mode (nbWorkers >= 1),
+ *              the following parameters can be updated _during_ compression (within same frame):
+ *              => compressionLevel, hashLog, chainLog, searchLog, minMatch, targetLength and strategy.
+ *              new parameters will be active for next job only (after a flush()).
+ * @return : an error code (which can be tested using ZSTD_isError()).
+ */
+ZSTDLIB_API size_t ZSTD_CCtx_setParameter(ZSTD_CCtx* cctx, ZSTD_cParameter param, int value);
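+
+/* Illustrative usage sketch (editor's addition, not from upstream zstd.h) :
+ * query a parameter's bounds with ZSTD_cParam_getBounds(),
+ * clamp a candidate value, then push it with ZSTD_CCtx_setParameter().
+ * `cctx` is assumed to be a valid context from ZSTD_createCCtx().
+ *
+ *     ZSTD_bounds const b = ZSTD_cParam_getBounds(ZSTD_c_windowLog);
+ *     int wlog = 27;
+ *     if (!ZSTD_isError(b.error)) {
+ *         if (wlog < b.lowerBound) wlog = b.lowerBound;
+ *         if (wlog > b.upperBound) wlog = b.upperBound;
+ *     }
+ *     {   size_t const err = ZSTD_CCtx_setParameter(cctx, ZSTD_c_windowLog, wlog);
+ *         if (ZSTD_isError(err)) return err;
+ *     }
+ */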
 
-#define ZSTD_FRAMEHEADERSIZE_PREFIX 5   /* minimum input size to know frame header size */
-#define ZSTD_FRAMEHEADERSIZE_MIN    6
-#define ZSTD_FRAMEHEADERSIZE_MAX   18   /* for static allocation */
-static const size_t ZSTD_frameHeaderSize_prefix = ZSTD_FRAMEHEADERSIZE_PREFIX;
-static const size_t ZSTD_frameHeaderSize_min = ZSTD_FRAMEHEADERSIZE_MIN;
-static const size_t ZSTD_frameHeaderSize_max = ZSTD_FRAMEHEADERSIZE_MAX;
-static const size_t ZSTD_skippableHeaderSize = 8;  /* magic number + skippable frame length */
+/*! ZSTD_CCtx_setPledgedSrcSize() :
+ *  Total input data size to be compressed as a single frame.
+ *  Value will be written in frame header, unless explicitly forbidden using ZSTD_c_contentSizeFlag.
+ *  This value will also be checked at end of frame, and trigger an error if not respected.
+ * @result : 0, or an error code (which can be tested with ZSTD_isError()).
+ *  Note 1 : pledgedSrcSize==0 actually means zero, aka an empty frame.
+ *           In order to mean "unknown content size", pass constant ZSTD_CONTENTSIZE_UNKNOWN.
+ *           ZSTD_CONTENTSIZE_UNKNOWN is default value for any new frame.
+ *  Note 2 : pledgedSrcSize is only valid once, for the next frame.
+ *           It's discarded at the end of the frame, and replaced by ZSTD_CONTENTSIZE_UNKNOWN.
+ *  Note 3 : Whenever all input data is provided and consumed in a single round,
+ *           for example with ZSTD_compress2(),
+ *           or invoking immediately ZSTD_compressStream2(,,,ZSTD_e_end),
+ *           this value is automatically overridden by srcSize instead.
+ */
+ZSTDLIB_API size_t ZSTD_CCtx_setPledgedSrcSize(ZSTD_CCtx* cctx, unsigned long long pledgedSrcSize);
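+
+/* Illustrative usage sketch (editor's addition, not from upstream zstd.h) :
+ * pledge the total input size before streaming, so it lands in the frame header.
+ * `cctx` and `srcSize` are assumed to be provided by the caller.
+ *
+ *     size_t const err = ZSTD_CCtx_setPledgedSrcSize(cctx, (unsigned long long)srcSize);
+ *     if (ZSTD_isError(err)) return err;
+ *
+ * Exactly `srcSize` bytes must then be fed through ZSTD_compressStream2(),
+ * ending with ZSTD_e_end; any mismatch triggers an error at end of frame.
+ */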
+
+/*! ZSTD_CCtx_loadDictionary() :
+ *  Create an internal CDict from `dict` buffer.
+ *  Decompression will have to use same dictionary.
+ * @result : 0, or an error code (which can be tested with ZSTD_isError()).
+ *  Special: Loading a NULL (or 0-size) dictionary invalidates previous dictionary,
+ *           meaning "return to no-dictionary mode".
+ *  Note 1 : Dictionary is sticky, it will be used for all future compressed frames.
+ *           To return to "no-dictionary" situation, load a NULL dictionary (or reset parameters).
+ *  Note 2 : Loading a dictionary involves building tables.
+ *           It's also a CPU consuming operation, with non-negligible impact on latency.
+ *           Tables are dependent on compression parameters, and for this reason,
+ *           compression parameters can no longer be changed after loading a dictionary.
+ *  Note 3 :`dict` content will be copied internally.
+ *           Use experimental ZSTD_CCtx_loadDictionary_byReference() to reference content instead.
+ *           In such a case, dictionary buffer must outlive its users.
+ *  Note 4 : Use ZSTD_CCtx_loadDictionary_advanced()
+ *           to precisely select how dictionary content must be interpreted. */
+ZSTDLIB_API size_t ZSTD_CCtx_loadDictionary(ZSTD_CCtx* cctx, const void* dict, size_t dictSize);
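+
+/* Illustrative usage sketch (editor's addition, not from upstream zstd.h) :
+ * load a dictionary once, then reuse it across many frames.
+ * `cctx` is assumed valid; `dictBuf`/`dictSize` are assumed to hold a prepared dictionary.
+ *
+ *     size_t const err = ZSTD_CCtx_loadDictionary(cctx, dictBuf, dictSize);
+ *     if (ZSTD_isError(err)) return err;
+ *     (compress any number of frames with ZSTD_compress2() or ZSTD_compressStream2())
+ *     ZSTD_CCtx_loadDictionary(cctx, NULL, 0);    (back to no-dictionary mode)
+ */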
+
+/*! ZSTD_CCtx_refCDict() :
+ *  Reference a prepared dictionary, to be used for all next compressed frames.
+ *  Note that compression parameters are enforced from within CDict,
+ *  and supersede any compression parameter previously set within CCtx.
+ *  The dictionary will remain valid for future compressed frames using same CCtx.
+ * @result : 0, or an error code (which can be tested with ZSTD_isError()).
+ *  Special : Referencing a NULL CDict means "return to no-dictionary mode".
+ *  Note 1 : Currently, only one dictionary can be managed.
+ *           Referencing a new dictionary effectively "discards" any previous one.
+ *  Note 2 : CDict is just referenced, its lifetime must outlive its usage within CCtx. */
+ZSTDLIB_API size_t ZSTD_CCtx_refCDict(ZSTD_CCtx* cctx, const ZSTD_CDict* cdict);
+
+/*! ZSTD_CCtx_refPrefix() :
+ *  Reference a prefix (single-usage dictionary) for next compressed frame.
+ *  A prefix is **only used once**. Tables are discarded at end of frame (ZSTD_e_end).
+ *  Decompression will need same prefix to properly regenerate data.
+ *  Compressing with a prefix is similar in outcome to performing a diff and then compressing it,
+ *  but performs much faster, especially during decompression (compression speed is tunable with compression level).
+ * @result : 0, or an error code (which can be tested with ZSTD_isError()).
+ *  Special: Adding any prefix (including NULL) invalidates any previous prefix or dictionary
+ *  Note 1 : Prefix buffer is referenced. It **must** outlive compression.
+ *           Its content must remain unmodified during compression.
+ *  Note 2 : If the intention is to diff some large src data blob with some prior version of itself,
+ *           ensure that the window size is large enough to contain the entire source.
+ *           See ZSTD_c_windowLog.
+ *  Note 3 : Referencing a prefix involves building tables, which are dependent on compression parameters.
+ *           It's a CPU consuming operation, with non-negligible impact on latency.
+ *           If there is a need to use the same prefix multiple times, consider loadDictionary instead.
+ *  Note 4 : By default, the prefix is interpreted as raw content (ZSTD_dct_rawContent).
+ *           Use experimental ZSTD_CCtx_refPrefix_advanced() to alter dictionary interpretation. */
+ZSTDLIB_API size_t ZSTD_CCtx_refPrefix(ZSTD_CCtx* cctx,
+                                 const void* prefix, size_t prefixSize);
+
+
+typedef enum {
+    ZSTD_reset_session_only = 1,
+    ZSTD_reset_parameters = 2,
+    ZSTD_reset_session_and_parameters = 3
+} ZSTD_ResetDirective;
+
+/*! ZSTD_CCtx_reset() :
+ *  There are 2 different things that can be reset, independently or jointly :
+ *  - The session : will stop compressing current frame, and make CCtx ready to start a new one.
+ *                  Useful after an error, or to interrupt any ongoing compression.
+ *                  Any internal data not yet flushed is cancelled.
+ *                  Compression parameters and dictionary remain unchanged.
+ *                  They will be used to compress next frame.
+ *                  Resetting session never fails.
+ *  - The parameters : changes all parameters back to "default".
+ *                  This removes any reference to any dictionary too.
+ *                  Parameters can only be changed between 2 sessions (i.e. no compression is currently ongoing)
+ *                  otherwise the reset fails, and function returns an error value (which can be tested using ZSTD_isError())
+ *  - Both : similar to resetting the session, followed by resetting parameters.
+ */
+ZSTDLIB_API size_t ZSTD_CCtx_reset(ZSTD_CCtx* cctx, ZSTD_ResetDirective reset);
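+
+/* Illustrative usage sketch (editor's addition, not from upstream zstd.h) :
+ * abandon a partially-written frame after an error,
+ * keeping parameters and dictionary for the next frame.
+ * `cctx`, `out` and `in` are assumed to be set up by the caller.
+ *
+ *     size_t const r = ZSTD_compressStream2(cctx, &out, &in, ZSTD_e_continue);
+ *     if (ZSTD_isError(r))
+ *         ZSTD_CCtx_reset(cctx, ZSTD_reset_session_only);    (never fails)
+ *
+ * ZSTD_reset_parameters (or ZSTD_reset_session_and_parameters) would additionally
+ * drop all parameters and any dictionary reference.
+ */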
 
 
 
+/*! ZSTD_compress2() :
+ *  Behaves the same as ZSTD_compressCCtx(), but compression parameters are set using the advanced API.
+ *  ZSTD_compress2() always starts a new frame.
+ *  Should cctx hold data from a previously unfinished frame, everything about it is forgotten.
+ *  - Compression parameters are pushed into CCtx before starting compression, using ZSTD_CCtx_set*()
+ *  - The function is always blocking, returns when compression is completed.
+ *  Hint : compression runs faster if `dstCapacity` >=  `ZSTD_compressBound(srcSize)`.
+ * @return : compressed size written into `dst` (<= `dstCapacity`),
+ *           or an error code if it fails (which can be tested using ZSTD_isError()).
+ */
+ZSTDLIB_API size_t ZSTD_compress2( ZSTD_CCtx* cctx,
+                                   void* dst, size_t dstCapacity,
+                             const void* src, size_t srcSize);
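+
+/* Illustrative usage sketch (editor's addition, not from upstream zstd.h) :
+ * one-shot compression with parameters set through the advanced API.
+ * `src`/`srcSize` are assumed to be provided by the caller.
+ *
+ *     ZSTD_CCtx* const cctx = ZSTD_createCCtx();
+ *     ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, 19);
+ *     ZSTD_CCtx_setParameter(cctx, ZSTD_c_checksumFlag, 1);
+ *     {   size_t const dstCap = ZSTD_compressBound(srcSize);
+ *         void* const dst = malloc(dstCap);
+ *         size_t const cSize = ZSTD_compress2(cctx, dst, dstCap, src, srcSize);
+ *         if (ZSTD_isError(cSize)) { (handle error) }
+ *     }
+ *     ZSTD_freeCCtx(cctx);
+ */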
+
+typedef enum {
+    ZSTD_e_continue=0, /* collect more data, encoder decides when to output compressed result, for optimal compression ratio */
+    ZSTD_e_flush=1,    /* flush any data provided so far,
+                        * it creates (at least) one new block, that can be decoded immediately on reception;
+                        * frame will continue: any future data can still reference previously compressed data, improving compression. */
+    ZSTD_e_end=2       /* flush any remaining data _and_ close current frame.
+                        * note that frame is only closed after compressed data is fully flushed (return value == 0).
+                        * After that point, any additional data starts a new frame.
+                        * note : each frame is independent (does not reference any content from previous frame). */
+} ZSTD_EndDirective;
+
+/*! ZSTD_compressStream2() :
+ *  Behaves about the same as ZSTD_compressStream(), with additional control over the end directive.
+ *  - Compression parameters are pushed into CCtx before starting compression, using ZSTD_CCtx_set*()
+ *  - Compression parameters cannot be changed once compression is started (save a list of exceptions in multi-threading mode)
+ *  - output->pos must be <= dstCapacity, input->pos must be <= srcSize
+ *  - output->pos and input->pos will be updated. They are guaranteed to remain below their respective limit.
+ *  - When nbWorkers==0 (default), function is blocking : it completes its job before returning to caller.
+ *  - When nbWorkers>=1, function is non-blocking : it just acquires a copy of input, distributes jobs to internal worker threads, flushes whatever is available,
+ *                                                  and then immediately returns, just indicating that there is some data remaining to be flushed.
+ *                                                  The function nonetheless guarantees forward progress : it will return only after it has read or written at least one byte.
+ *  - Exception : if the first call requests a ZSTD_e_end directive and provides enough dstCapacity, the function delegates to ZSTD_compress2() which is always blocking.
+ *  - @return provides a minimum amount of data remaining to be flushed from internal buffers
+ *            or an error code, which can be tested using ZSTD_isError().
+ *            if @return != 0, flush is not fully completed, there is still some data left within internal buffers.
+ *            This is useful for ZSTD_e_flush, since in this case more flushes are necessary to empty all buffers.
+ *            For ZSTD_e_end, @return == 0 when internal buffers are fully flushed and frame is completed.
+ *  - after a ZSTD_e_end directive, if internal buffer is not fully flushed (@return != 0),
+ *            only ZSTD_e_end or ZSTD_e_flush operations are allowed.
+ *            Before starting a new compression job, or changing compression parameters,
+ *            it is required to fully flush internal buffers.
+ */
+ZSTDLIB_API size_t ZSTD_compressStream2( ZSTD_CCtx* cctx,
+                                         ZSTD_outBuffer* output,
+                                         ZSTD_inBuffer* input,
+                                         ZSTD_EndDirective endOp);
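+
+/* Illustrative usage sketch (editor's addition, not from upstream zstd.h) :
+ * a typical streaming loop; with ZSTD_c_nbWorkers >= 1 the very same loop
+ * becomes asynchronous, with no other change required.
+ * `readChunk` and `writeChunk` are hypothetical caller-provided I/O helpers.
+ *
+ *     ZSTD_CCtx_setParameter(cctx, ZSTD_c_nbWorkers, 4);
+ *     for (;;) {
+ *         size_t const n = readChunk(inBuf, inBufSize);
+ *         ZSTD_EndDirective const mode = (n < inBufSize) ? ZSTD_e_end : ZSTD_e_continue;
+ *         ZSTD_inBuffer in = { inBuf, n, 0 };
+ *         size_t remaining;
+ *         do {
+ *             ZSTD_outBuffer out = { outBuf, outBufSize, 0 };
+ *             remaining = ZSTD_compressStream2(cctx, &out, &in, mode);
+ *             if (ZSTD_isError(remaining)) return remaining;
+ *             writeChunk(outBuf, out.pos);
+ *         } while (mode == ZSTD_e_end ? (remaining != 0) : (in.pos != in.size));
+ *         if (mode == ZSTD_e_end) break;
+ *     }
+ */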
+
+
+
+/* ============================== */
+/*   Advanced decompression API   */
+/* ============================== */
+
+/* The advanced API pushes parameters one by one into an existing DCtx context.
+ * Parameters are sticky, and remain valid for all following frames
+ * using the same DCtx context.
+ * It's possible to reset parameters to default values using ZSTD_DCtx_reset().
+ * Note : This API is compatible with existing ZSTD_decompressDCtx() and ZSTD_decompressStream().
+ *        Therefore, no new decompression function is necessary.
+ */
+
+
+typedef enum {
+
+    ZSTD_d_windowLogMax=100, /* Select a size limit (in power of 2) beyond which
+                              * the streaming API will refuse to allocate memory buffer
+                              * in order to protect the host from unreasonable memory requirements.
+                              * This parameter is only useful in streaming mode, since no internal buffer is allocated in single-pass mode.
+                              * By default, a decompression context accepts window sizes <= (1 << ZSTD_WINDOWLOG_LIMIT_DEFAULT) */
+
+    /* note : additional experimental parameters are also available
+     * within the experimental section of the API.
+     * At the time of this writing, they include :
+     * ZSTD_d_format
+     * Because they are not stable, it's necessary to define ZSTD_STATIC_LINKING_ONLY to access them.
+     * note : never ever use experimentalParam? names directly
+     */
+     ZSTD_d_experimentalParam1=1000
+
+} ZSTD_dParameter;
+
+
+/*! ZSTD_dParam_getBounds() :
+ *  All parameters must belong to an interval with lower and upper bounds,
+ *  otherwise they will either trigger an error or be automatically clamped.
+ * @return : a structure, ZSTD_bounds, which contains
+ *         - an error status field, which must be tested using ZSTD_isError()
+ *         - both lower and upper bounds, inclusive
+ */
+ZSTDLIB_API ZSTD_bounds ZSTD_dParam_getBounds(ZSTD_dParameter dParam);
+
+/*! ZSTD_DCtx_setParameter() :
+ *  Set one decompression parameter, selected by enum ZSTD_dParameter.
+ *  All parameters have valid bounds. Bounds can be queried using ZSTD_dParam_getBounds().
+ *  Providing a value beyond these bounds will either clamp it, or trigger an error (depending on the parameter).
+ *  Setting a parameter is only possible during frame initialization (before starting decompression).
+ * @return : 0, or an error code (which can be tested using ZSTD_isError()).
+ */
+ZSTDLIB_API size_t ZSTD_DCtx_setParameter(ZSTD_DCtx* dctx, ZSTD_dParameter param, int value);
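+
+/* Illustrative usage sketch (editor's addition, not from upstream zstd.h) :
+ * cap the window size a streaming decoder will accept,
+ * bounding memory usage on untrusted input.
+ *
+ *     ZSTD_DCtx* const dctx = ZSTD_createDCtx();
+ *     size_t const err = ZSTD_DCtx_setParameter(dctx, ZSTD_d_windowLogMax, 24);    (max 16 MB window)
+ *     if (ZSTD_isError(err)) { (handle error) }
+ */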
+
+
+/*! ZSTD_DCtx_loadDictionary() :
+ *  Create an internal DDict from dict buffer,
+ *  to be used to decompress next frames.
+ *  The dictionary remains valid for all future frames, until explicitly invalidated.
+ * @result : 0, or an error code (which can be tested with ZSTD_isError()).
+ *  Special : Adding a NULL (or 0-size) dictionary invalidates any previous dictionary,
+ *            meaning "return to no-dictionary mode".
+ *  Note 1 : Loading a dictionary involves building tables,
+ *           which has a non-negligible impact on CPU usage and latency.
+ *           It's recommended to "load once, use many times", to amortize the cost.
+ *  Note 2 :`dict` content will be copied internally, so `dict` can be released after loading.
+ *           Use ZSTD_DCtx_loadDictionary_byReference() to reference dictionary content instead.
+ *  Note 3 : Use ZSTD_DCtx_loadDictionary_advanced() to take control of
+ *           how dictionary content is loaded and interpreted.
+ */
+ZSTDLIB_API size_t ZSTD_DCtx_loadDictionary(ZSTD_DCtx* dctx, const void* dict, size_t dictSize);
+
+/*! ZSTD_DCtx_refDDict() :
+ *  Reference a prepared dictionary, to be used to decompress next frames.
+ *  The dictionary remains active for decompression of future frames using same DCtx.
+ * @result : 0, or an error code (which can be tested with ZSTD_isError()).
+ *  Note 1 : Currently, only one dictionary can be managed.
+ *           Referencing a new dictionary effectively "discards" any previous one.
+ *  Special: referencing a NULL DDict means "return to no-dictionary mode".
+ *  Note 2 : DDict is just referenced, its lifetime must outlive its usage from DCtx.
+ */
+ZSTDLIB_API size_t ZSTD_DCtx_refDDict(ZSTD_DCtx* dctx, const ZSTD_DDict* ddict);
+
+/*! ZSTD_DCtx_refPrefix() :
+ *  Reference a prefix (single-usage dictionary) to decompress next frame.
+ *  This is the reverse operation of ZSTD_CCtx_refPrefix(),
+ *  and must use the same prefix as the one used during compression.
+ *  Prefix is **only used once**. Reference is discarded at end of frame.
+ *  End of frame is reached when ZSTD_decompressStream() returns 0.
+ * @result : 0, or an error code (which can be tested with ZSTD_isError()).
+ *  Note 1 : Adding any prefix (including NULL) invalidates any previously set prefix or dictionary
+ *  Note 2 : Prefix buffer is referenced. It **must** outlive decompression.
+ *           Prefix buffer must remain unmodified up to the end of frame,
+ *           reached when ZSTD_decompressStream() returns 0.
+ *  Note 3 : By default, the prefix is treated as raw content (ZSTD_dct_rawContent).
+ *           Use ZSTD_DCtx_refPrefix_advanced() to alter dictMode (Experimental section)
+ *  Note 4 : Referencing a raw content prefix has almost no cpu nor memory cost.
+ *           A full dictionary is more costly, as it requires building tables.
+ */
+ZSTDLIB_API size_t ZSTD_DCtx_refPrefix(ZSTD_DCtx* dctx,
+                                 const void* prefix, size_t prefixSize);
+
+/*! ZSTD_DCtx_reset() :
+ *  Return a DCtx to clean state.
+ *  Session and parameters can be reset jointly or separately.
+ *  Parameters can only be reset when no active frame is being decompressed.
+ * @return : 0, or an error code, which can be tested with ZSTD_isError()
+ */
+ZSTDLIB_API size_t ZSTD_DCtx_reset(ZSTD_DCtx* dctx, ZSTD_ResetDirective reset);
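+
+/* Illustrative usage sketch (editor's addition, not from upstream zstd.h) :
+ * streaming decompression of one input chunk; ZSTD_decompressStream()
+ * returning 0 marks end of frame, after which the same DCtx can decode
+ * the next frame. `readChunk`/`writeChunk` are hypothetical I/O helpers.
+ *
+ *     ZSTD_inBuffer in = { inBuf, readChunk(inBuf, inBufSize), 0 };
+ *     while (in.pos < in.size) {
+ *         ZSTD_outBuffer out = { outBuf, outBufSize, 0 };
+ *         size_t const r = ZSTD_decompressStream(dctx, &out, &in);
+ *         if (ZSTD_isError(r)) {
+ *             ZSTD_DCtx_reset(dctx, ZSTD_reset_session_only);    (ready for next attempt)
+ *             return r;
+ *         }
+ *         writeChunk(outBuf, out.pos);
+ *     }
+ */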
+
+
+
+/****************************************************************************************
+ *   experimental API (static linking only)
+ ****************************************************************************************
+ * The following symbols and constants
+ * are not planned to join "stable API" status in the near future.
+ * They can still change in future versions.
+ * Some of them are planned to remain in the static_only section indefinitely.
+ * Some of them might be removed in the future (especially when redundant with existing stable functions)
+ * ***************************************************************************************/
+
+#define ZSTD_FRAMEHEADERSIZE_PREFIX 5   /* minimum input size required to query frame header size */
+#define ZSTD_FRAMEHEADERSIZE_MIN    6
+#define ZSTD_FRAMEHEADERSIZE_MAX   18   /* can be useful for static allocation */
+#define ZSTD_SKIPPABLEHEADERSIZE    8
+
+/* compression parameter bounds */
+#define ZSTD_WINDOWLOG_MAX_32    30
+#define ZSTD_WINDOWLOG_MAX_64    31
+#define ZSTD_WINDOWLOG_MAX     ((int)(sizeof(size_t) == 4 ? ZSTD_WINDOWLOG_MAX_32 : ZSTD_WINDOWLOG_MAX_64))
+#define ZSTD_WINDOWLOG_MIN       10
+#define ZSTD_HASHLOG_MAX       ((ZSTD_WINDOWLOG_MAX < 30) ? ZSTD_WINDOWLOG_MAX : 30)
+#define ZSTD_HASHLOG_MIN          6
+#define ZSTD_CHAINLOG_MAX_32     29
+#define ZSTD_CHAINLOG_MAX_64     30
+#define ZSTD_CHAINLOG_MAX      ((int)(sizeof(size_t) == 4 ? ZSTD_CHAINLOG_MAX_32 : ZSTD_CHAINLOG_MAX_64))
+#define ZSTD_CHAINLOG_MIN        ZSTD_HASHLOG_MIN
+#define ZSTD_SEARCHLOG_MAX      (ZSTD_WINDOWLOG_MAX-1)
+#define ZSTD_SEARCHLOG_MIN        1
+#define ZSTD_MINMATCH_MAX         7   /* only for ZSTD_fast, other strategies are limited to 6 */
+#define ZSTD_MINMATCH_MIN         3   /* only for ZSTD_btopt+, faster strategies are limited to 4 */
+#define ZSTD_TARGETLENGTH_MAX    ZSTD_BLOCKSIZE_MAX
+#define ZSTD_TARGETLENGTH_MIN     0   /* note : comparing this constant to an unsigned results in a tautological test */
+#define ZSTD_STRATEGY_MIN        ZSTD_fast
+#define ZSTD_STRATEGY_MAX        ZSTD_btultra2
+
+
+#define ZSTD_OVERLAPLOG_MIN       0
+#define ZSTD_OVERLAPLOG_MAX       9
+
+#define ZSTD_WINDOWLOG_LIMIT_DEFAULT 27   /* by default, the streaming decoder will refuse any frame
+                                           * requiring larger than (1<<ZSTD_WINDOWLOG_LIMIT_DEFAULT) window size,
+                                           * to preserve host's memory from unreasonable requirements.
+                                           * This limit can be overridden using ZSTD_DCtx_setParameter(,ZSTD_d_windowLogMax,).
+                                           * The limit does not apply for one-pass decoders (such as ZSTD_decompress()), since no additional memory is allocated */
+
+
+/* LDM parameter bounds */
+#define ZSTD_LDM_HASHLOG_MIN      ZSTD_HASHLOG_MIN
+#define ZSTD_LDM_HASHLOG_MAX      ZSTD_HASHLOG_MAX
+#define ZSTD_LDM_MINMATCH_MIN        4
+#define ZSTD_LDM_MINMATCH_MAX     4096
+#define ZSTD_LDM_BUCKETSIZELOG_MIN   1
+#define ZSTD_LDM_BUCKETSIZELOG_MAX   8
+#define ZSTD_LDM_HASHRATELOG_MIN     0
+#define ZSTD_LDM_HASHRATELOG_MAX (ZSTD_WINDOWLOG_MAX - ZSTD_HASHLOG_MIN)
+
+/* internal */
+#define ZSTD_HASHLOG3_MAX           17
+
+
 /* ---  Advanced types  --- */
-typedef enum { ZSTD_fast=1, ZSTD_dfast, ZSTD_greedy, ZSTD_lazy, ZSTD_lazy2,
-               ZSTD_btlazy2, ZSTD_btopt, ZSTD_btultra } ZSTD_strategy;   /* from faster to stronger */
+
+typedef struct ZSTD_CCtx_params_s ZSTD_CCtx_params;
 
 typedef struct {
-    unsigned windowLog;      /**< largest match distance : larger == more compression, more memory needed during decompression */
-    unsigned chainLog;       /**< fully searched segment : larger == more compression, slower, more memory (useless for fast) */
-    unsigned hashLog;        /**< dispatch table : larger == faster, more memory */
-    unsigned searchLog;      /**< nb of searches : larger == more compression, slower */
-    unsigned searchLength;   /**< match length searched : larger == faster decompression, sometimes less compression */
-    unsigned targetLength;   /**< acceptable match size for optimal parser (only) : larger == more compression, slower */
-    ZSTD_strategy strategy;
+    unsigned windowLog;       /**< largest match distance : larger == more compression, more memory needed during decompression */
+    unsigned chainLog;        /**< fully searched segment : larger == more compression, slower, more memory (useless for fast) */
+    unsigned hashLog;         /**< dispatch table : larger == faster, more memory */
+    unsigned searchLog;       /**< nb of searches : larger == more compression, slower */
+    unsigned minMatch;        /**< match length searched : larger == faster decompression, sometimes less compression */
+    unsigned targetLength;    /**< acceptable match size for optimal parser (only) : larger == more compression, slower */
+    ZSTD_strategy strategy;   /**< see ZSTD_strategy definition above */
 } ZSTD_compressionParameters;
 
 typedef struct {
-    unsigned contentSizeFlag; /**< 1: content size will be in frame header (when known) */
-    unsigned checksumFlag;    /**< 1: generate a 32-bits checksum at end of frame, for error detection */
-    unsigned noDictIDFlag;    /**< 1: no dictID will be saved into frame header (if dictionary compression) */
+    int contentSizeFlag; /**< 1: content size will be in frame header (when known) */
+    int checksumFlag;    /**< 1: generate a 32-bits checksum using XXH64 algorithm at end of frame, for error detection */
+    int noDictIDFlag;    /**< 1: no dictID will be saved into frame header (dictID is only useful for dictionary compression) */
 } ZSTD_frameParameters;
 
 typedef struct {
@@ -464,33 +1005,70 @@
     ZSTD_frameParameters fParams;
 } ZSTD_parameters;
 
-typedef struct ZSTD_CCtx_params_s ZSTD_CCtx_params;
-
 typedef enum {
-    ZSTD_dct_auto=0,      /* dictionary is "full" when starting with ZSTD_MAGIC_DICTIONARY, otherwise it is "rawContent" */
-    ZSTD_dct_rawContent,  /* ensures dictionary is always loaded as rawContent, even if it starts with ZSTD_MAGIC_DICTIONARY */
-    ZSTD_dct_fullDict     /* refuses to load a dictionary if it does not respect Zstandard's specification */
+    ZSTD_dct_auto = 0,       /* dictionary is "full" when starting with ZSTD_MAGIC_DICTIONARY, otherwise it is "rawContent" */
+    ZSTD_dct_rawContent = 1, /* ensures dictionary is always loaded as rawContent, even if it starts with ZSTD_MAGIC_DICTIONARY */
+    ZSTD_dct_fullDict = 2    /* refuses to load a dictionary if it does not respect Zstandard's specification, starting with ZSTD_MAGIC_DICTIONARY */
 } ZSTD_dictContentType_e;
 
 typedef enum {
-    ZSTD_dlm_byCopy = 0, /**< Copy dictionary content internally */
-    ZSTD_dlm_byRef,      /**< Reference dictionary content -- the dictionary buffer must outlive its users. */
+    ZSTD_dlm_byCopy = 0,  /**< Copy dictionary content internally */
+    ZSTD_dlm_byRef = 1,   /**< Reference dictionary content -- the dictionary buffer must outlive its users. */
 } ZSTD_dictLoadMethod_e;
 
+typedef enum {
+    /* Opened question : should we have a format ZSTD_f_auto ?
+     * Today, it would mean exactly the same as ZSTD_f_zstd1.
+     * But, in the future, should several formats become supported,
+     * on the compression side, it would mean "default format".
+     * On the decompression side, it would mean "automatic format detection",
+     * so that ZSTD_f_zstd1 would mean "accept *only* zstd frames".
+     * Since meaning is a little different, another option could be to define different enums for compression and decompression.
+     * This question could be kept for later, when there are actually multiple formats to support,
+     * but there is also the question of pinning enum values, and pinning value `0` is especially important */
+    ZSTD_f_zstd1 = 0,           /* zstd frame format, specified in zstd_compression_format.md (default) */
+    ZSTD_f_zstd1_magicless = 1, /* Variant of zstd frame format, without initial 4-bytes magic number.
+                                 * Useful to save 4 bytes per generated frame.
+                                 * The decoder cannot automatically recognise this format, so it must be explicitly instructed. */
+} ZSTD_format_e;
+
+typedef enum {
+    /* Note: this enum and the behavior it controls are effectively internal
+     * implementation details of the compressor. They are expected to continue
+     * to evolve and should be considered only in the context of extremely
+     * advanced performance tuning.
+     *
+     * Zstd currently supports the use of a CDict in two ways:
+     *
+     * - The contents of the CDict can be copied into the working context. This
+     *   means that the compression can search both the dictionary and input
+     *   while operating on a single set of internal tables. This makes
+     *   the compression faster per-byte of input. However, the initial copy of
+     *   the CDict's tables incurs a fixed cost at the beginning of the
+     *   compression. For small compressions (< 8 KB), that copy can dominate
+     *   the cost of the compression.
+     *
+     * - The CDict's tables can be used in-place. In this model, compression is
+     *   slower per input byte, because the compressor has to search two sets of
+     *   tables. However, this model incurs no start-up cost (as long as the
+     *   working context's tables can be reused). For small inputs, this can be
+     *   faster than copying the CDict's tables.
+     *
+     * Zstd has a simple internal heuristic that selects which strategy to use
+     * at the beginning of a compression. However, if experimentation shows that
+     * Zstd is making poor choices, it is possible to override that choice with
+     * this enum.
+     */
+    ZSTD_dictDefaultAttach = 0, /* Use the default heuristic. */
+    ZSTD_dictForceAttach   = 1, /* Never copy the dictionary. */
+    ZSTD_dictForceCopy     = 2, /* Always copy the dictionary. */
+} ZSTD_dictAttachPref_e;
 
 
 /***************************************
 *  Frame size functions
 ***************************************/
 
-/*! ZSTD_findFrameCompressedSize() :
- *  `src` should point to the start of a ZSTD encoded frame or skippable frame
- *  `srcSize` must be >= first frame size
- *  @return : the compressed size of the first frame starting at `src`,
- *            suitable to pass to `ZSTD_decompress` or similar,
- *            or an error code if input is invalid */
-ZSTDLIB_API size_t ZSTD_findFrameCompressedSize(const void* src, size_t srcSize);
-
 /*! ZSTD_findDecompressedSize() :
  *  `src` should point the start of a series of ZSTD encoded and/or skippable frames
  *  `srcSize` must be the _exact_ size of this series
@@ -515,7 +1093,7 @@
 ZSTDLIB_API unsigned long long ZSTD_findDecompressedSize(const void* src, size_t srcSize);
 
 /*! ZSTD_frameHeaderSize() :
- *  srcSize must be >= ZSTD_frameHeaderSize_prefix.
+ *  srcSize must be >= ZSTD_FRAMEHEADERSIZE_PREFIX.
  * @return : size of the Frame Header,
  *           or an error code (if srcSize is too small) */
 ZSTDLIB_API size_t ZSTD_frameHeaderSize(const void* src, size_t srcSize);
@@ -525,16 +1103,6 @@
 *  Memory management
 ***************************************/
 
-/*! ZSTD_sizeof_*() :
- *  These functions give the current memory usage of selected object.
- *  Object memory usage can evolve when re-used. */
-ZSTDLIB_API size_t ZSTD_sizeof_CCtx(const ZSTD_CCtx* cctx);
-ZSTDLIB_API size_t ZSTD_sizeof_DCtx(const ZSTD_DCtx* dctx);
-ZSTDLIB_API size_t ZSTD_sizeof_CStream(const ZSTD_CStream* zcs);
-ZSTDLIB_API size_t ZSTD_sizeof_DStream(const ZSTD_DStream* zds);
-ZSTDLIB_API size_t ZSTD_sizeof_CDict(const ZSTD_CDict* cdict);
-ZSTDLIB_API size_t ZSTD_sizeof_DDict(const ZSTD_DDict* ddict);
-
 /*! ZSTD_estimate*() :
  *  These functions make it possible to estimate memory usage
  *  of a future {D,C}Ctx, before its creation.
@@ -542,7 +1110,7 @@
  *  It will also consider src size to be arbitrarily "large", which is worst case.
  *  If srcSize is known to always be small, ZSTD_estimateCCtxSize_usingCParams() can provide a tighter estimation.
  *  ZSTD_estimateCCtxSize_usingCParams() can be used in tandem with ZSTD_getCParams() to create cParams from compressionLevel.
- *  ZSTD_estimateCCtxSize_usingCCtxParams() can be used in tandem with ZSTD_CCtxParam_setParameter(). Only single-threaded compression is supported. This function will return an error code if ZSTD_p_nbWorkers is >= 1.
+ *  ZSTD_estimateCCtxSize_usingCCtxParams() can be used in tandem with ZSTD_CCtxParam_setParameter(). Only single-threaded compression is supported. This function will return an error code if ZSTD_c_nbWorkers is >= 1.
  *  Note : CCtx size estimation is only correct for single-threaded compression. */
 ZSTDLIB_API size_t ZSTD_estimateCCtxSize(int compressionLevel);
 ZSTDLIB_API size_t ZSTD_estimateCCtxSize_usingCParams(ZSTD_compressionParameters cParams);
@@ -554,7 +1122,7 @@
  *  It will also consider src size to be arbitrarily "large", which is worst case.
  *  If srcSize is known to always be small, ZSTD_estimateCStreamSize_usingCParams() can provide a tighter estimation.
  *  ZSTD_estimateCStreamSize_usingCParams() can be used in tandem with ZSTD_getCParams() to create cParams from compressionLevel.
- *  ZSTD_estimateCStreamSize_usingCCtxParams() can be used in tandem with ZSTD_CCtxParam_setParameter(). Only single-threaded compression is supported. This function will return an error code if ZSTD_p_nbWorkers is >= 1.
+ *  ZSTD_estimateCStreamSize_usingCCtxParams() can be used in tandem with ZSTD_CCtxParam_setParameter(). Only single-threaded compression is supported. This function will return an error code if ZSTD_c_nbWorkers is >= 1.
  *  Note : CStream size estimation is only correct for single-threaded compression.
  *  ZSTD_DStream memory budget depends on window Size.
  *  This information can be passed manually, using ZSTD_estimateDStreamSize,
@@ -617,6 +1185,7 @@
                                         ZSTD_dictLoadMethod_e dictLoadMethod,
                                         ZSTD_dictContentType_e dictContentType);
 
+
 /*! Custom memory allocation :
  *  These prototypes make it possible to pass your own allocation/free functions.
  *  ZSTD_customMem is provided at creation time, using ZSTD_create*_advanced() variants listed below.
@@ -651,8 +1220,9 @@
 
 /*! ZSTD_createCDict_byReference() :
  *  Create a digested dictionary for compression
- *  Dictionary content is simply referenced, and therefore stays in dictBuffer.
- *  It is important that dictBuffer outlives CDict, it must remain read accessible throughout the lifetime of CDict */
+ *  Dictionary content is just referenced, not duplicated.
+ *  As a consequence, `dictBuffer` **must** outlive CDict,
+ *  and its content must remain unmodified throughout the lifetime of CDict. */
 ZSTDLIB_API ZSTD_CDict* ZSTD_createCDict_byReference(const void* dictBuffer, size_t dictSize, int compressionLevel);
 
 /*! ZSTD_getCParams() :
@@ -675,22 +1245,161 @@
 ZSTDLIB_API ZSTD_compressionParameters ZSTD_adjustCParams(ZSTD_compressionParameters cPar, unsigned long long srcSize, size_t dictSize);
 
 /*! ZSTD_compress_advanced() :
-*   Same as ZSTD_compress_usingDict(), with fine-tune control over each compression parameter */
-ZSTDLIB_API size_t ZSTD_compress_advanced (ZSTD_CCtx* cctx,
-                                  void* dst, size_t dstCapacity,
-                            const void* src, size_t srcSize,
-                            const void* dict,size_t dictSize,
-                                  ZSTD_parameters params);
+ *  Same as ZSTD_compress_usingDict(), with fine-tune control over compression parameters (by structure) */
+ZSTDLIB_API size_t ZSTD_compress_advanced(ZSTD_CCtx* cctx,
+                                          void* dst, size_t dstCapacity,
+                                    const void* src, size_t srcSize,
+                                    const void* dict,size_t dictSize,
+                                          ZSTD_parameters params);
 
 /*! ZSTD_compress_usingCDict_advanced() :
-*   Same as ZSTD_compress_usingCDict(), with fine-tune control over frame parameters */
+ *  Same as ZSTD_compress_usingCDict(), with fine-tune control over frame parameters */
 ZSTDLIB_API size_t ZSTD_compress_usingCDict_advanced(ZSTD_CCtx* cctx,
-                                  void* dst, size_t dstCapacity,
-                            const void* src, size_t srcSize,
-                            const ZSTD_CDict* cdict, ZSTD_frameParameters fParams);
+                                              void* dst, size_t dstCapacity,
+                                        const void* src, size_t srcSize,
+                                        const ZSTD_CDict* cdict,
+                                              ZSTD_frameParameters fParams);
+
+
+/*! ZSTD_CCtx_loadDictionary_byReference() :
+ *  Same as ZSTD_CCtx_loadDictionary(), but dictionary content is referenced, instead of being copied into CCtx.
+ *  It saves some memory, but also requires that `dict` outlives its usage within `cctx` */
+ZSTDLIB_API size_t ZSTD_CCtx_loadDictionary_byReference(ZSTD_CCtx* cctx, const void* dict, size_t dictSize);
+
+/*! ZSTD_CCtx_loadDictionary_advanced() :
+ *  Same as ZSTD_CCtx_loadDictionary(), but gives finer control over
+ *  how to load the dictionary (by copy ? by reference ?)
+ *  and how to interpret it (automatic ? force raw mode ? full mode only ?) */
+ZSTDLIB_API size_t ZSTD_CCtx_loadDictionary_advanced(ZSTD_CCtx* cctx, const void* dict, size_t dictSize, ZSTD_dictLoadMethod_e dictLoadMethod, ZSTD_dictContentType_e dictContentType);
+
+/*! ZSTD_CCtx_refPrefix_advanced() :
+ *  Same as ZSTD_CCtx_refPrefix(), but gives finer control over
+ *  how to interpret prefix content (automatic ? force raw mode (default) ? full mode only ?) */
+ZSTDLIB_API size_t ZSTD_CCtx_refPrefix_advanced(ZSTD_CCtx* cctx, const void* prefix, size_t prefixSize, ZSTD_dictContentType_e dictContentType);
+
+/* ===   experimental parameters   === */
+/* these parameters can be used with ZSTD_setParameter()
+ * they are not guaranteed to remain supported in the future */
+
+ /* Enables rsyncable mode,
+  * which makes compressed files more rsync friendly
+  * by adding periodic synchronization points to the compressed data.
+  * The target average block size is ZSTD_c_jobSize / 2.
+  * It's possible to modify the job size to increase or decrease
+  * the granularity of the synchronization point.
+  * Once the jobSize is smaller than the window size,
+  * it will result in compression ratio degradation.
+  * NOTE 1: rsyncable mode only works when multithreading is enabled.
+  * NOTE 2: rsyncable performs poorly in combination with long range mode,
+  * since it will decrease the effectiveness of synchronization points,
+  * though mileage may vary.
+  * NOTE 3: Rsyncable mode limits maximum compression speed to ~400 MB/s.
+  * If the selected compression level is already running significantly slower,
+  * the overall speed won't be significantly impacted.
+  */
+ #define ZSTD_c_rsyncable ZSTD_c_experimentalParam1
+
+/* Select a compression format.
+ * The value must be of type ZSTD_format_e.
+ * See ZSTD_format_e enum definition for details */
+#define ZSTD_c_format ZSTD_c_experimentalParam2
+
+/* Force back-reference distances to remain < windowSize,
+ * even when referencing into Dictionary content (default:0) */
+#define ZSTD_c_forceMaxWindow ZSTD_c_experimentalParam3
+
+/* Controls whether the contents of a CDict
+ * are used in place, or copied into the working context.
+ * Accepts values from the ZSTD_dictAttachPref_e enum.
+ * See the comments on that enum for an explanation of the feature. */
+#define ZSTD_c_forceAttachDict ZSTD_c_experimentalParam4
+
+/*! ZSTD_CCtx_getParameter() :
+ *  Get the requested compression parameter value, selected by enum ZSTD_cParameter,
+ *  and store it into int* value.
+ * @return : 0, or an error code (which can be tested with ZSTD_isError()).
+ */
+ZSTDLIB_API size_t ZSTD_CCtx_getParameter(ZSTD_CCtx* cctx, ZSTD_cParameter param, int* value);
 
 
-/*--- Advanced decompression functions ---*/
+/*! ZSTD_CCtx_params :
+ *  Quick howto :
+ *  - ZSTD_createCCtxParams() : Create a ZSTD_CCtx_params structure
+ *  - ZSTD_CCtxParam_setParameter() : Push parameters one by one into
+ *                                    an existing ZSTD_CCtx_params structure.
+ *                                    This is similar to
+ *                                    ZSTD_CCtx_setParameter().
+ *  - ZSTD_CCtx_setParametersUsingCCtxParams() : Apply parameters to
+ *                                    an existing CCtx.
+ *                                    These parameters will be applied to
+ *                                    all subsequent frames.
+ *  - ZSTD_compressStream2() : Do compression using the CCtx.
+ *  - ZSTD_freeCCtxParams() : Free the memory.
+ *
+ *  This can be used with ZSTD_estimateCCtxSize_advanced_usingCCtxParams()
+ *  for static allocation of CCtx for single-threaded compression.
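+ *
+ *  (See the illustrative sketch after ZSTD_CCtx_setParametersUsingCCtxParams(), below.)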
+ */
+ZSTDLIB_API ZSTD_CCtx_params* ZSTD_createCCtxParams(void);
+ZSTDLIB_API size_t ZSTD_freeCCtxParams(ZSTD_CCtx_params* params);
+
+/*! ZSTD_CCtxParams_reset() :
+ *  Reset params to default values.
+ */
+ZSTDLIB_API size_t ZSTD_CCtxParams_reset(ZSTD_CCtx_params* params);
+
+/*! ZSTD_CCtxParams_init() :
+ *  Initializes the compression parameters of cctxParams according to
+ *  compression level. All other parameters are reset to their default values.
+ */
+ZSTDLIB_API size_t ZSTD_CCtxParams_init(ZSTD_CCtx_params* cctxParams, int compressionLevel);
+
+/*! ZSTD_CCtxParams_init_advanced() :
+ *  Initializes the compression and frame parameters of cctxParams according to
+ *  params. All other parameters are reset to their default values.
+ */
+ZSTDLIB_API size_t ZSTD_CCtxParams_init_advanced(ZSTD_CCtx_params* cctxParams, ZSTD_parameters params);
+
+/*! ZSTD_CCtxParam_setParameter() :
+ *  Similar to ZSTD_CCtx_setParameter.
+ *  Set one compression parameter, selected by enum ZSTD_cParameter.
+ *  Parameters must be applied to a ZSTD_CCtx using ZSTD_CCtx_setParametersUsingCCtxParams().
+ * @result : 0, or an error code (which can be tested with ZSTD_isError()).
+ */
+ZSTDLIB_API size_t ZSTD_CCtxParam_setParameter(ZSTD_CCtx_params* params, ZSTD_cParameter param, int value);
+
+/*! ZSTD_CCtxParam_getParameter() :
+ * Similar to ZSTD_CCtx_getParameter.
+ * Get the requested value of one compression parameter, selected by enum ZSTD_cParameter.
+ * @result : 0, or an error code (which can be tested with ZSTD_isError()).
+ */
+ZSTDLIB_API size_t ZSTD_CCtxParam_getParameter(ZSTD_CCtx_params* params, ZSTD_cParameter param, int* value);
+
+/*! ZSTD_CCtx_setParametersUsingCCtxParams() :
+ *  Apply a set of ZSTD_CCtx_params to the compression context.
+ *  This can be done even after compression is started,
+ *    if nbWorkers==0, this will have no impact until a new compression is started.
+ *    if nbWorkers>=1, new parameters will be picked up at next job,
+ *       with a few restrictions (windowLog, pledgedSrcSize, nbWorkers, jobSize, and overlapLog are not updated).
+ */
+ZSTDLIB_API size_t ZSTD_CCtx_setParametersUsingCCtxParams(
+        ZSTD_CCtx* cctx, const ZSTD_CCtx_params* params);
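+
+/* Illustrative sketch of the "Quick howto" above (editor's addition,
+ * not from upstream zstd.h). `cctx` is assumed to already exist.
+ *
+ *     ZSTD_CCtx_params* const p = ZSTD_createCCtxParams();
+ *     ZSTD_CCtxParam_setParameter(p, ZSTD_c_compressionLevel, 9);
+ *     ZSTD_CCtxParam_setParameter(p, ZSTD_c_checksumFlag, 1);
+ *     {   size_t const err = ZSTD_CCtx_setParametersUsingCCtxParams(cctx, p);
+ *         if (ZSTD_isError(err)) return err;
+ *     }
+ *     ZSTD_freeCCtxParams(p);
+ *     (all subsequent frames compressed with `cctx` now use these settings)
+ */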
+
+/*! ZSTD_compressStream2_simpleArgs() :
+ *  Same as ZSTD_compressStream2(),
+ *  but using only integral types as arguments.
+ *  This variant might be helpful for binders from dynamic languages
+ *  which have trouble handling structures containing memory pointers.
+ */
+ZSTDLIB_API size_t ZSTD_compressStream2_simpleArgs (
+                            ZSTD_CCtx* cctx,
+                            void* dst, size_t dstCapacity, size_t* dstPos,
+                      const void* src, size_t srcSize, size_t* srcPos,
+                            ZSTD_EndDirective endOp);
+
+
+/***************************************
+*  Advanced decompression functions
+***************************************/
 
 /*! ZSTD_isFrame() :
  *  Tells if the content of `buffer` starts with a valid Frame Identifier.
@@ -731,9 +1440,64 @@
  *  When identifying the exact failure cause, it's possible to use ZSTD_getFrameHeader(), which will provide a more precise error code. */
 ZSTDLIB_API unsigned ZSTD_getDictID_fromFrame(const void* src, size_t srcSize);
 
+/*! ZSTD_DCtx_loadDictionary_byReference() :
+ *  Same as ZSTD_DCtx_loadDictionary(),
+ *  but references `dict` content instead of copying it into `dctx`.
+ *  This saves memory if `dict` remains around.
+ *  However, it's imperative that `dict` remains accessible (and unmodified) while being used, so it must outlive decompression. */
+ZSTDLIB_API size_t ZSTD_DCtx_loadDictionary_byReference(ZSTD_DCtx* dctx, const void* dict, size_t dictSize);
+
+/*! ZSTD_DCtx_loadDictionary_advanced() :
+ *  Same as ZSTD_DCtx_loadDictionary(),
+ *  but gives direct control over
+ *  how to load the dictionary (by copy ? by reference ?)
+ *  and how to interpret it (automatic ? force raw mode ? full mode only ?). */
+ZSTDLIB_API size_t ZSTD_DCtx_loadDictionary_advanced(ZSTD_DCtx* dctx, const void* dict, size_t dictSize, ZSTD_dictLoadMethod_e dictLoadMethod, ZSTD_dictContentType_e dictContentType);
+
+/*! ZSTD_DCtx_refPrefix_advanced() :
+ *  Same as ZSTD_DCtx_refPrefix(), but gives finer control over
+ *  how to interpret prefix content (automatic ? force raw mode (default) ? full mode only ?) */
+ZSTDLIB_API size_t ZSTD_DCtx_refPrefix_advanced(ZSTD_DCtx* dctx, const void* prefix, size_t prefixSize, ZSTD_dictContentType_e dictContentType);
+
+/*! ZSTD_DCtx_setMaxWindowSize() :
+ *  Refuses to allocate internal buffers for frames requiring a window size larger than the provided limit.
+ *  This protects a decoder context from reserving too much memory for itself (potential attack scenario).
+ *  This parameter is only useful in streaming mode, since no internal buffer is allocated in single-pass mode.
+ *  By default, a decompression context accepts all window sizes <= (1 << ZSTD_WINDOWLOG_LIMIT_DEFAULT)
+ * @return : 0, or an error code (which can be tested using ZSTD_isError()).
+ */
+ZSTDLIB_API size_t ZSTD_DCtx_setMaxWindowSize(ZSTD_DCtx* dctx, size_t maxWindowSize);
+
+/* ZSTD_d_format
+ * experimental parameter,
+ * allowing selection between ZSTD_format_e input compression formats
+ */
+#define ZSTD_d_format ZSTD_d_experimentalParam1
+
+/*! ZSTD_DCtx_setFormat() :
+ *  Instruct the decoder context about what kind of data to decode next.
+ *  This instruction is mandatory to decode data without a fully-formed header,
+ *  such as ZSTD_f_zstd1_magicless for example.
+ * @return : 0, or an error code (which can be tested using ZSTD_isError()). */
+ZSTDLIB_API size_t ZSTD_DCtx_setFormat(ZSTD_DCtx* dctx, ZSTD_format_e format);
+
+/*! ZSTD_decompressStream_simpleArgs() :
+ *  Same as ZSTD_decompressStream(),
+ *  but using only integral types as arguments.
+ *  This can be helpful for binders from dynamic languages
+ *  which have trouble handling structures containing memory pointers.
+ */
+ZSTDLIB_API size_t ZSTD_decompressStream_simpleArgs (
+                            ZSTD_DCtx* dctx,
+                            void* dst, size_t dstCapacity, size_t* dstPos,
+                      const void* src, size_t srcSize, size_t* srcPos);
+
 
 /********************************************************************
 *  Advanced streaming functions
+*  Warning : most of these functions are now redundant with the Advanced API.
+*  Once Advanced API reaches "stable" status,
+*  redundant functions will be deprecated, and then at some point removed.
 ********************************************************************/
 
 /*=====   Advanced Streaming compression functions  =====*/
@@ -745,7 +1509,7 @@
 ZSTDLIB_API size_t ZSTD_initCStream_usingCDict_advanced(ZSTD_CStream* zcs, const ZSTD_CDict* cdict, ZSTD_frameParameters fParams, unsigned long long pledgedSrcSize);  /**< same as ZSTD_initCStream_usingCDict(), with control over frame parameters. pledgedSrcSize must be correct. If srcSize is not known at init time, use value ZSTD_CONTENTSIZE_UNKNOWN. */
 
 /*! ZSTD_resetCStream() :
- *  start a new compression job, using same parameters from previous job.
+ *  start a new frame, using same parameters from previous frame.
  *  This is typically useful to skip dictionary loading stage, since it will re-use it in-place.
  *  Note that zcs must be init at least once before using ZSTD_resetCStream().
  *  If pledgedSrcSize is not known at reset time, use macro ZSTD_CONTENTSIZE_UNKNOWN.
@@ -784,16 +1548,13 @@
  *  + there is no active job (could be checked with ZSTD_frameProgression()), or
  *  + oldest job is still actively compressing data,
  *    but everything it has produced has also been flushed so far,
- *    therefore flushing speed is currently limited by production speed of oldest job
- *    irrespective of the speed of concurrent newer jobs.
+ *    therefore flush speed is limited by production speed of oldest job
+ *    irrespective of the speed of concurrent (and newer) jobs.
  */
 ZSTDLIB_API size_t ZSTD_toFlushNow(ZSTD_CCtx* cctx);
 
 
-
 /*=====   Advanced Streaming decompression functions  =====*/
-typedef enum { DStream_p_maxWindowSize } ZSTD_DStreamParameter_e;
-ZSTDLIB_API size_t ZSTD_setDStreamParameter(ZSTD_DStream* zds, ZSTD_DStreamParameter_e paramType, unsigned paramValue);   /* obsolete : this API will be removed in a future version */
 ZSTDLIB_API size_t ZSTD_initDStream_usingDict(ZSTD_DStream* zds, const void* dict, size_t dictSize); /**< note: no dictionary will be used if dict == NULL or dictSize < 8 */
 ZSTDLIB_API size_t ZSTD_initDStream_usingDDict(ZSTD_DStream* zds, const ZSTD_DDict* ddict);  /**< note : ddict is referenced, it must outlive decompression session */
 ZSTDLIB_API size_t ZSTD_resetDStream(ZSTD_DStream* zds);  /**< re-use decompression parameters from previous init; saves dictionary loading */
@@ -934,12 +1695,17 @@
     unsigned dictID;
     unsigned checksumFlag;
 } ZSTD_frameHeader;
+
 /** ZSTD_getFrameHeader() :
  *  decode Frame Header, or requires larger `srcSize`.
  * @return : 0, `zfhPtr` is correctly filled,
  *          >0, `srcSize` is too small, value is wanted `srcSize` amount,
  *           or an error code, which can be tested using ZSTD_isError() */
 ZSTDLIB_API size_t ZSTD_getFrameHeader(ZSTD_frameHeader* zfhPtr, const void* src, size_t srcSize);   /**< doesn't consume input */
+/*! ZSTD_getFrameHeader_advanced() :
+ *  same as ZSTD_getFrameHeader(),
+ *  with added capability to select a format (like ZSTD_f_zstd1_magicless) */
+ZSTDLIB_API size_t ZSTD_getFrameHeader_advanced(ZSTD_frameHeader* zfhPtr, const void* src, size_t srcSize, ZSTD_format_e format);
 ZSTDLIB_API size_t ZSTD_decodingBufferSize_min(unsigned long long windowSize, unsigned long long frameContentSize);  /**< when frame content size is not known, pass in frameContentSize == ZSTD_CONTENTSIZE_UNKNOWN */
 
 ZSTDLIB_API size_t ZSTD_decompressBegin(ZSTD_DCtx* dctx);
@@ -956,522 +1722,6 @@
 
 
 
-/* ============================================ */
-/**       New advanced API (experimental)       */
-/* ============================================ */
-
-/* API design :
- *   In this advanced API, parameters are pushed one by one into an existing context,
- *   using ZSTD_CCtx_set*() functions.
- *   Pushed parameters are sticky : they are applied to next job, and any subsequent job.
- *   It's possible to reset parameters to "default" using ZSTD_CCtx_reset().
- *   Important : "sticky" parameters only work with `ZSTD_compress_generic()` !
- *               For any other entry point, "sticky" parameters are ignored !
- *
- *   This API is intended to replace all others advanced / experimental API entry points.
- */
-
-/* note on enum design :
- * All enum will be pinned to explicit values before reaching "stable API" status */
-
-typedef enum {
-    /* Opened question : should we have a format ZSTD_f_auto ?
-     * Today, it would mean exactly the same as ZSTD_f_zstd1.
-     * But, in the future, should several formats become supported,
-     * on the compression side, it would mean "default format".
-     * On the decompression side, it would mean "automatic format detection",
-     * so that ZSTD_f_zstd1 would mean "accept *only* zstd frames".
-     * Since meaning is a little different, another option could be to define different enums for compression and decompression.
-     * This question could be kept for later, when there are actually multiple formats to support,
-     * but there is also the question of pinning enum values, and pinning value `0` is especially important */
-    ZSTD_f_zstd1 = 0,        /* zstd frame format, specified in zstd_compression_format.md (default) */
-    ZSTD_f_zstd1_magicless,  /* Variant of zstd frame format, without initial 4-bytes magic number.
-                              * Useful to save 4 bytes per generated frame.
-                              * Decoder cannot recognise automatically this format, requiring instructions. */
-} ZSTD_format_e;
-
-typedef enum {
-    /* compression format */
-    ZSTD_p_format = 10,      /* See ZSTD_format_e enum definition.
-                              * Cast selected format as unsigned for ZSTD_CCtx_setParameter() compatibility. */
-
-    /* compression parameters */
-    ZSTD_p_compressionLevel=100, /* Update all compression parameters according to pre-defined cLevel table
-                              * Default level is ZSTD_CLEVEL_DEFAULT==3.
-                              * Special: value 0 means default, which is controlled by ZSTD_CLEVEL_DEFAULT.
-                              * Note 1 : it's possible to pass a negative compression level by casting it to unsigned type.
-                              * Note 2 : setting a level sets all default values of other compression parameters.
-                              * Note 3 : setting compressionLevel automatically updates ZSTD_p_compressLiterals. */
-    ZSTD_p_windowLog,        /* Maximum allowed back-reference distance, expressed as power of 2.
-                              * Must be clamped between ZSTD_WINDOWLOG_MIN and ZSTD_WINDOWLOG_MAX.
-                              * Special: value 0 means "use default windowLog".
-                              * Note: Using a window size greater than ZSTD_MAXWINDOWSIZE_DEFAULT (default: 2^27)
-                              *       requires explicitly allowing such window size during decompression stage. */
-    ZSTD_p_hashLog,          /* Size of the initial probe table, as a power of 2.
-                              * Resulting table size is (1 << (hashLog+2)).
-                              * Must be clamped between ZSTD_HASHLOG_MIN and ZSTD_HASHLOG_MAX.
-                              * Larger tables improve compression ratio of strategies <= dFast,
-                              * and improve speed of strategies > dFast.
-                              * Special: value 0 means "use default hashLog". */
-    ZSTD_p_chainLog,         /* Size of the multi-probe search table, as a power of 2.
-                              * Resulting table size is (1 << (chainLog+2)).
-                              * Must be clamped between ZSTD_CHAINLOG_MIN and ZSTD_CHAINLOG_MAX.
-                              * Larger tables result in better and slower compression.
-                              * This parameter is useless when using "fast" strategy.
-                              * Note it's still useful when using "dfast" strategy,
-                              * in which case it defines a secondary probe table.
-                              * Special: value 0 means "use default chainLog". */
-    ZSTD_p_searchLog,        /* Number of search attempts, as a power of 2.
-                              * More attempts result in better and slower compression.
-                              * This parameter is useless when using "fast" and "dFast" strategies.
-                              * Special: value 0 means "use default searchLog". */
-    ZSTD_p_minMatch,         /* Minimum size of searched matches (note : repCode matches can be smaller).
-                              * Larger values make compression and decompression faster, but decrease ratio.
-                              * Must be clamped between ZSTD_SEARCHLENGTH_MIN and ZSTD_SEARCHLENGTH_MAX.
-                              * Note that currently, for all strategies < btopt, the effective minimum is 4,
-                              *                      and for all strategies > fast, the effective maximum is 6.
-                              * Special: value 0 means "use default minMatchLength". */
-    ZSTD_p_targetLength,     /* Impact of this field depends on strategy.
-                              * For strategies btopt & btultra:
-                              *     Length of Match considered "good enough" to stop search.
-                              *     Larger values make compression stronger, and slower.
-                              * For strategy fast:
-                              *     Distance between match sampling positions.
-                              *     Larger values make compression faster, and weaker.
-                              * Special: value 0 means "use default targetLength". */
-    ZSTD_p_compressionStrategy, /* See ZSTD_strategy enum definition.
-                              * Cast selected strategy as unsigned for ZSTD_CCtx_setParameter() compatibility.
-                              * The higher the value of selected strategy, the more complex it is,
-                              * resulting in stronger and slower compression.
-                              * Special: value 0 means "use default strategy". */
-
-    ZSTD_p_enableLongDistanceMatching=160, /* Enable long distance matching.
-                                         * This parameter is designed to improve compression ratio
-                                         * for large inputs, by finding large matches at long distance.
-                                         * It increases memory usage and window size.
-                                         * Note: enabling this parameter increases the default ZSTD_p_windowLog to 27 (a 128 MB window),
-                                         * except when windowLog is expressly set to a different value. */
-    ZSTD_p_ldmHashLog,       /* Size of the table for long distance matching, as a power of 2.
-                              * Larger values increase memory usage and compression ratio,
-                              * but decrease compression speed.
-                              * Must be clamped between ZSTD_HASHLOG_MIN and ZSTD_HASHLOG_MAX.
-                              * Default : windowLog - 7.
-                              * Special: value 0 means "automatically determine hashlog". */
-    ZSTD_p_ldmMinMatch,      /* Minimum match size for long distance matcher.
-                              * Values that are too large or too small usually decrease compression ratio.
-                              * Must be clamped between ZSTD_LDM_MINMATCH_MIN and ZSTD_LDM_MINMATCH_MAX.
-                              * Special: value 0 means "use default value" (default: 64). */
-    ZSTD_p_ldmBucketSizeLog, /* Log size of each bucket in the LDM hash table for collision resolution.
-                              * Larger values improve collision resolution but decrease compression speed.
-                              * The maximum value is ZSTD_LDM_BUCKETSIZELOG_MAX.
-                              * Special: value 0 means "use default value" (default: 3). */
-    ZSTD_p_ldmHashEveryLog,  /* Frequency of inserting/looking up entries in the LDM hash table.
-                              * Must be clamped between 0 and (ZSTD_WINDOWLOG_MAX - ZSTD_HASHLOG_MIN).
-                              * Default is MAX(0, (windowLog - ldmHashLog)), optimizing hash table usage.
-                              * Larger values improve compression speed.
-                              * Deviating far from default value will likely result in a compression ratio decrease.
-                              * Special: value 0 means "automatically determine hashEveryLog". */
-
-    /* frame parameters */
-    ZSTD_p_contentSizeFlag=200, /* Content size will be written into frame header _whenever known_ (default:1)
-                              * Content size must be known at the beginning of compression,
-                              * it is provided using ZSTD_CCtx_setPledgedSrcSize() */
-    ZSTD_p_checksumFlag,     /* A 32-bit checksum of content is written at end of frame (default:0) */
-    ZSTD_p_dictIDFlag,       /* When applicable, dictionary's ID is written into frame header (default:1) */
-
-    /* multi-threading parameters */
-    /* These parameters are only useful if multi-threading is enabled (ZSTD_MULTITHREAD).
-     * They return an error otherwise. */
-    ZSTD_p_nbWorkers=400,    /* Select how many threads will be spawned to compress in parallel.
-                              * When nbWorkers >= 1, triggers asynchronous mode :
-                              * ZSTD_compress_generic() consumes some input, flushes some output if possible, and immediately gives back control to the caller,
-                              * while compression work is performed in parallel, within worker threads.
-                              * (note : a strong exception to this rule is when the first invocation sets ZSTD_e_end : it becomes a blocking call).
-                              * More workers improve speed, but also increase memory usage.
-                              * Default value is `0`, aka "single-threaded mode" : no worker is spawned, compression is performed inside the caller's thread, and all invocations are blocking */
-    ZSTD_p_jobSize,          /* Size of a compression job. This value is enforced only in non-blocking mode.
-                              * Each compression job is completed in parallel, so this value indirectly controls the number of active threads.
-                              * 0 means default, which is dynamically determined based on compression parameters.
-                              * Job size must be at least overlapSize, or 1 MB, whichever is larger.
-                              * The minimum size is automatically and transparently enforced */
-    ZSTD_p_overlapSizeLog,   /* Size of previous input reloaded at the beginning of each job.
-                              * 0 => no overlap, 6(default) => use 1/8th of windowSize, >=9 => use full windowSize */
-
-    /* =================================================================== */
-    /* experimental parameters - no stability guaranteed                   */
-    /* =================================================================== */
-
-    ZSTD_p_forceMaxWindow=1100, /* Force back-reference distances to remain < windowSize,
-                              * even when referencing into Dictionary content (default:0) */
-    ZSTD_p_forceAttachDict,  /* ZSTD supports usage of a CDict in-place
-                              * (avoiding having to copy the compression tables
-                              * from the CDict into the working context). Using
-                              * a CDict in this way saves an initial setup step,
-                              * but comes at the cost of more work per byte of
-                              * input. ZSTD has a simple internal heuristic that
-                              * guesses which strategy will be faster. You can
-                              * use this flag to override that guess.
-                              *
-                              * Note that the by-reference, in-place strategy is
-                              * only used when reusing a compression context
-                              * with compatible compression parameters. (If
-                              * incompatible / uninitialized, the working
-                              * context needs to be cleared anyways, which is
-                              * about as expensive as overwriting it with the
-                              * dictionary context, so there's no savings in
-                              * using the CDict by-ref.)
-                              *
-                              * Values greater than 0 force attaching the dict.
-                              * Values less than 0 force copying the dict.
-                              * 0 selects the default heuristic-guided behavior.
-                              */
-
-} ZSTD_cParameter;
-
-
-/*! ZSTD_CCtx_setParameter() :
- *  Set one compression parameter, selected by enum ZSTD_cParameter.
- *  Setting a parameter is generally only possible during frame initialization (before starting compression).
- *  Exception : when using multi-threading mode (nbWorkers >= 1),
- *              the following parameters can be updated _during_ compression (within the same frame) :
- *              => compressionLevel, hashLog, chainLog, searchLog, minMatch, targetLength and strategy.
- *              New parameters will be active on the next job, or after a flush().
- *  Note : when `value` type is not unsigned (int, or enum), cast it to unsigned for proper type checking.
- *  @result : informational value (typically, value being set, correctly clamped),
- *            or an error code (which can be tested with ZSTD_isError()). */
-ZSTDLIB_API size_t ZSTD_CCtx_setParameter(ZSTD_CCtx* cctx, ZSTD_cParameter param, unsigned value);
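
As an illustration, a minimal sketch of driving this parameter API from
Python through the cffi bindings deleted later in this patch (assuming the
same ``_zstd_cffi`` module those bindings import)::

   from _zstd_cffi import ffi, lib

   cctx = ffi.gc(lib.ZSTD_createCCtx(), lib.ZSTD_freeCCtx)

   # Pick a compression level; ints and enums are cast to unsigned, as the
   # header asks.
   zresult = lib.ZSTD_CCtx_setParameter(cctx, lib.ZSTD_p_compressionLevel,
                                        ffi.cast('unsigned', 5))
   assert not lib.ZSTD_isError(zresult)

   # Ask for a content checksum on the next frame.
   zresult = lib.ZSTD_CCtx_setParameter(cctx, lib.ZSTD_p_checksumFlag,
                                        ffi.cast('unsigned', 1))
   assert not lib.ZSTD_isError(zresult)
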
-
-/*! ZSTD_CCtx_getParameter() :
- * Get the requested value of one compression parameter, selected by enum ZSTD_cParameter.
- * @result : 0, or an error code (which can be tested with ZSTD_isError()).
- */
-ZSTDLIB_API size_t ZSTD_CCtx_getParameter(ZSTD_CCtx* cctx, ZSTD_cParameter param, unsigned* value);
-
-/*! ZSTD_CCtx_setPledgedSrcSize() :
- *  Total input data size to be compressed as a single frame.
- *  This value will be checked at the end of compression, and compression will end with an error if it was not respected.
- * @result : 0, or an error code (which can be tested with ZSTD_isError()).
- *  Note 1 : 0 means zero, empty.
- *           In order to mean "unknown content size", pass constant ZSTD_CONTENTSIZE_UNKNOWN.
- *           ZSTD_CONTENTSIZE_UNKNOWN is default value for any new compression job.
- *  Note 2 : If all data is provided and consumed in a single round,
- *           this value is overridden by srcSize instead. */
-ZSTDLIB_API size_t ZSTD_CCtx_setPledgedSrcSize(ZSTD_CCtx* cctx, unsigned long long pledgedSrcSize);
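
A small hedged sketch of pledging a source size through the same cffi
bindings::

   from _zstd_cffi import ffi, lib

   data = b'data to compress'
   cctx = ffi.gc(lib.ZSTD_createCCtx(), lib.ZSTD_freeCCtx)

   # Declare the total input size up front so it can be written into the
   # frame header; ZSTD_CONTENTSIZE_UNKNOWN would leave it out.
   zresult = lib.ZSTD_CCtx_setPledgedSrcSize(cctx, len(data))
   if lib.ZSTD_isError(zresult):
       raise Exception(ffi.string(lib.ZSTD_getErrorName(zresult)))
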
-
-/*! ZSTD_CCtx_loadDictionary() :
- *  Create an internal CDict from `dict` buffer.
- *  Decompression will have to use same dictionary.
- * @result : 0, or an error code (which can be tested with ZSTD_isError()).
- *  Special: Adding a NULL (or 0-size) dictionary invalidates previous dictionary,
- *           meaning "return to no-dictionary mode".
- *  Note 1 : Dictionary will be used for all future compression jobs.
- *           To return to "no-dictionary" situation, load a NULL dictionary
- *  Note 2 : Loading a dictionary involves building tables, which are dependent on compression parameters.
- *           For this reason, compression parameters cannot be changed anymore after loading a dictionary.
- *           It's also a CPU consuming operation, with non-negligible impact on latency.
- *  Note 3 : `dict` content will be copied internally.
- *           Use ZSTD_CCtx_loadDictionary_byReference() to reference dictionary content instead.
- *           In such a case, dictionary buffer must outlive its users.
- *  Note 4 : Use ZSTD_CCtx_loadDictionary_advanced()
- *           to precisely select how dictionary content must be interpreted. */
-ZSTDLIB_API size_t ZSTD_CCtx_loadDictionary(ZSTD_CCtx* cctx, const void* dict, size_t dictSize);
-ZSTDLIB_API size_t ZSTD_CCtx_loadDictionary_byReference(ZSTD_CCtx* cctx, const void* dict, size_t dictSize);
-ZSTDLIB_API size_t ZSTD_CCtx_loadDictionary_advanced(ZSTD_CCtx* cctx, const void* dict, size_t dictSize, ZSTD_dictLoadMethod_e dictLoadMethod, ZSTD_dictContentType_e dictContentType);
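
For example, loading a dictionary via the cffi bindings (``samples.dict``
is a hypothetical file produced by dictionary training)::

   from _zstd_cffi import ffi, lib

   with open('samples.dict', 'rb') as fh:  # hypothetical trained dictionary
       dict_data = fh.read()

   cctx = ffi.gc(lib.ZSTD_createCCtx(), lib.ZSTD_freeCCtx)

   # The non-byReference variant copies the dictionary internally, so
   # dict_data does not need to outlive the context.
   zresult = lib.ZSTD_CCtx_loadDictionary(cctx, dict_data, len(dict_data))
   if lib.ZSTD_isError(zresult):
       raise Exception(ffi.string(lib.ZSTD_getErrorName(zresult)))
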
-
-
-/*! ZSTD_CCtx_refCDict() :
- *  Reference a prepared dictionary, to be used for all next compression jobs.
- *  Note that compression parameters are enforced from within CDict,
- *  and supersede any compression parameter previously set within CCtx.
- *  The dictionary will remain valid for future compression jobs using same CCtx.
- * @result : 0, or an error code (which can be tested with ZSTD_isError()).
- *  Special : adding a NULL CDict means "return to no-dictionary mode".
- *  Note 1 : Currently, only one dictionary can be managed.
- *           Adding a new dictionary effectively "discards" any previous one.
- *  Note 2 : CDict is just referenced, its lifetime must outlive CCtx. */
-ZSTDLIB_API size_t ZSTD_CCtx_refCDict(ZSTD_CCtx* cctx, const ZSTD_CDict* cdict);
-
-/*! ZSTD_CCtx_refPrefix() :
- *  Reference a prefix (single-usage dictionary) for next compression job.
- *  Decompression will need same prefix to properly regenerate data.
- *  Compressing with a prefix is similar in outcome to performing a diff and compressing it,
- *  but performs much faster, especially during decompression (compression speed is tunable with compression level).
- *  Note that prefix is **only used once**. Tables are discarded at end of compression job (ZSTD_e_end).
- * @result : 0, or an error code (which can be tested with ZSTD_isError()).
- *  Special: Adding any prefix (including NULL) invalidates any previous prefix or dictionary
- *  Note 1 : Prefix buffer is referenced. It **must** outlive compression job.
- *           Its content must remain unmodified up to end of compression (ZSTD_e_end).
- *  Note 2 : If the intention is to diff some large src data blob with some prior version of itself,
- *           ensure that the window size is large enough to contain the entire source.
- *           See ZSTD_p_windowLog.
- *  Note 3 : Referencing a prefix involves building tables, which are dependent on compression parameters.
- *           It's a CPU consuming operation, with non-negligible impact on latency.
- *           If there is a need to use same prefix multiple times, consider loadDictionary instead.
- *  Note 4 : By default, the prefix is treated as raw content (ZSTD_dct_rawContent).
- *           Use ZSTD_CCtx_refPrefix_advanced() to alter dictContentType. */
-ZSTDLIB_API size_t ZSTD_CCtx_refPrefix(ZSTD_CCtx* cctx,
-                                       const void* prefix, size_t prefixSize);
-ZSTDLIB_API size_t ZSTD_CCtx_refPrefix_advanced(ZSTD_CCtx* cctx,
-                                       const void* prefix, size_t prefixSize,
-                                       ZSTD_dictContentType_e dictContentType);
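
A sketch of referencing a prefix from Python; note the by-reference
lifetime rule from Note 1 (``ffi.from_buffer()`` keeps the backing bytes
alive, mirroring the idiom used by the bindings below)::

   from _zstd_cffi import ffi, lib

   cctx = ffi.gc(lib.ZSTD_createCCtx(), lib.ZSTD_freeCCtx)

   prefix = b'previous version of the data'
   prefix_buffer = ffi.from_buffer(prefix)

   # The prefix is referenced, not copied: keep prefix_buffer alive and
   # unmodified until the frame is finished (ZSTD_e_end).
   zresult = lib.ZSTD_CCtx_refPrefix(cctx, prefix_buffer, len(prefix_buffer))
   if lib.ZSTD_isError(zresult):
       raise Exception(ffi.string(lib.ZSTD_getErrorName(zresult)))
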
-
-/*! ZSTD_CCtx_reset() :
- *  Return a CCtx to clean state.
- *  Useful after an error, or to interrupt an ongoing compression job and start a new one.
- *  Any internal data not yet flushed is cancelled.
- *  The parameters and dictionary are kept unchanged ; to reset them, use ZSTD_CCtx_resetParameters().
- */
-ZSTDLIB_API void ZSTD_CCtx_reset(ZSTD_CCtx* cctx);
-
-/*! ZSTD_CCtx_resetParameters() :
- *  All parameters are back to default values (compression level is ZSTD_CLEVEL_DEFAULT).
- *  Dictionary (if any) is dropped.
- *  Resetting parameters is only possible during frame initialization (before starting compression).
- *  To reset the context use ZSTD_CCtx_reset().
- *  @return 0 or an error code (which can be checked with ZSTD_isError()).
- */
-ZSTDLIB_API size_t ZSTD_CCtx_resetParameters(ZSTD_CCtx* cctx);
-
-
-
-typedef enum {
-    ZSTD_e_continue=0, /* collect more data, encoder decides when to output compressed result, for optimal conditions */
-    ZSTD_e_flush,      /* flush any data provided so far - frame will continue, future data can still reference previous data for better compression */
-    ZSTD_e_end         /* flush any remaining data and close current frame. Any additional data starts a new frame. */
-} ZSTD_EndDirective;
-
-/*! ZSTD_compress_generic() :
- *  Behaves about the same as ZSTD_compressStream(). Points to note :
- *  - Compression parameters are pushed into CCtx before starting compression, using ZSTD_CCtx_setParameter()
- *  - Compression parameters cannot be changed once compression is started.
- *  - output->pos must be <= dstCapacity, input->pos must be <= srcSize
- *  - output->pos and input->pos will be updated. They are guaranteed to remain below their respective limit.
- *  - In single-thread mode (default), function is blocking : it completes its job before returning to the caller.
- *  - In multi-thread mode, function is non-blocking : it just acquires a copy of input, distributes jobs to internal worker threads,
- *                                                     and then immediately returns, just indicating that there is some data remaining to be flushed.
- *                                                     The function nonetheless guarantees forward progress : it will return only after it has read or written at least one byte.
- *  - Exception : in multi-threading mode, if the first call requests a ZSTD_e_end directive, it is blocking : it will complete compression before giving back control to the caller.
- *  - @return provides a minimum amount of data remaining to be flushed from internal buffers
- *            or an error code, which can be tested using ZSTD_isError().
- *            if @return != 0, flush is not fully completed, there is still some data left within internal buffers.
- *            This is useful for ZSTD_e_flush, since in this case more flushes are necessary to empty all buffers.
- *            For ZSTD_e_end, @return == 0 when internal buffers are fully flushed and frame is completed.
- *  - after a ZSTD_e_end directive, if internal buffer is not fully flushed (@return != 0),
- *            only ZSTD_e_end or ZSTD_e_flush operations are allowed.
- *            Before starting a new compression job, or changing compression parameters,
- *            it is required to fully flush internal buffers.
- */
-ZSTDLIB_API size_t ZSTD_compress_generic (ZSTD_CCtx* cctx,
-                                          ZSTD_outBuffer* output,
-                                          ZSTD_inBuffer* input,
-                                          ZSTD_EndDirective endOp);
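
Putting the pieces together, a hedged single-threaded sketch that
compresses a bytes object by looping on ZSTD_e_end until the frame is
fully flushed (the same loop shape as ``ZstdCompressionWriter.__exit__``
in the deleted bindings below)::

   from _zstd_cffi import ffi, lib

   def compress_bytes(data, level=3):
       cctx = ffi.gc(lib.ZSTD_createCCtx(), lib.ZSTD_freeCCtx)
       lib.ZSTD_CCtx_setParameter(cctx, lib.ZSTD_p_compressionLevel,
                                  ffi.cast('unsigned', level))

       data_buffer = ffi.from_buffer(data)
       in_buffer = ffi.new('ZSTD_inBuffer *')
       in_buffer.src = data_buffer
       in_buffer.size = len(data_buffer)
       in_buffer.pos = 0

       dst_size = lib.ZSTD_CStreamOutSize()
       dst_buffer = ffi.new('char[]', dst_size)
       out_buffer = ffi.new('ZSTD_outBuffer *')
       out_buffer.dst = dst_buffer
       out_buffer.size = dst_size
       out_buffer.pos = 0

       chunks = []
       while True:
           zresult = lib.ZSTD_compress_generic(cctx, out_buffer, in_buffer,
                                               lib.ZSTD_e_end)
           if lib.ZSTD_isError(zresult):
               raise Exception(ffi.string(lib.ZSTD_getErrorName(zresult)))
           if out_buffer.pos:
               chunks.append(ffi.buffer(out_buffer.dst, out_buffer.pos)[:])
               out_buffer.pos = 0
           if zresult == 0:  # frame fully flushed
               return b''.join(chunks)
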
-
-
-/*! ZSTD_compress_generic_simpleArgs() :
- *  Same as ZSTD_compress_generic(),
- *  but using only integral types as arguments.
- *  Argument list is larger than ZSTD_{in,out}Buffer,
- *  but can be helpful for binders from dynamic languages
- *  which have trouble handling structures containing memory pointers.
- */
-ZSTDLIB_API size_t ZSTD_compress_generic_simpleArgs (
-                            ZSTD_CCtx* cctx,
-                            void* dst, size_t dstCapacity, size_t* dstPos,
-                      const void* src, size_t srcSize, size_t* srcPos,
-                            ZSTD_EndDirective endOp);
-
-
-/*! ZSTD_CCtx_params :
- *  Quick howto :
- *  - ZSTD_createCCtxParams() : Create a ZSTD_CCtx_params structure
- *  - ZSTD_CCtxParam_setParameter() : Push parameters one by one into
- *                                    an existing ZSTD_CCtx_params structure.
- *                                    This is similar to
- *                                    ZSTD_CCtx_setParameter().
- *  - ZSTD_CCtx_setParametersUsingCCtxParams() : Apply parameters to
- *                                    an existing CCtx.
- *                                    These parameters will be applied to
- *                                    all subsequent compression jobs.
- *  - ZSTD_compress_generic() : Do compression using the CCtx.
- *  - ZSTD_freeCCtxParams() : Free the memory.
- *
- *  This can be used with ZSTD_estimateCCtxSize_usingCCtxParams()
- *  for static allocation for single-threaded compression.
- */
-ZSTDLIB_API ZSTD_CCtx_params* ZSTD_createCCtxParams(void);
-ZSTDLIB_API size_t ZSTD_freeCCtxParams(ZSTD_CCtx_params* params);
-
-
-/*! ZSTD_CCtxParams_reset() :
- *  Reset params to default values.
- */
-ZSTDLIB_API size_t ZSTD_CCtxParams_reset(ZSTD_CCtx_params* params);
-
-/*! ZSTD_CCtxParams_init() :
- *  Initializes the compression parameters of cctxParams according to
- *  compression level. All other parameters are reset to their default values.
- */
-ZSTDLIB_API size_t ZSTD_CCtxParams_init(ZSTD_CCtx_params* cctxParams, int compressionLevel);
-
-/*! ZSTD_CCtxParams_init_advanced() :
- *  Initializes the compression and frame parameters of cctxParams according to
- *  params. All other parameters are reset to their default values.
- */
-ZSTDLIB_API size_t ZSTD_CCtxParams_init_advanced(ZSTD_CCtx_params* cctxParams, ZSTD_parameters params);
-
-
-/*! ZSTD_CCtxParam_setParameter() :
- *  Similar to ZSTD_CCtx_setParameter.
- *  Set one compression parameter, selected by enum ZSTD_cParameter.
- *  Parameters must be applied to a ZSTD_CCtx using ZSTD_CCtx_setParametersUsingCCtxParams().
- *  Note : when `value` is an enum, cast it to unsigned for proper type checking.
- * @result : 0, or an error code (which can be tested with ZSTD_isError()).
- */
-ZSTDLIB_API size_t ZSTD_CCtxParam_setParameter(ZSTD_CCtx_params* params, ZSTD_cParameter param, unsigned value);
-
-/*! ZSTD_CCtxParam_getParameter() :
- * Similar to ZSTD_CCtx_getParameter.
- * Get the requested value of one compression parameter, selected by enum ZSTD_cParameter.
- * @result : 0, or an error code (which can be tested with ZSTD_isError()).
- */
-ZSTDLIB_API size_t ZSTD_CCtxParam_getParameter(ZSTD_CCtx_params* params, ZSTD_cParameter param, unsigned* value);
-
-/*! ZSTD_CCtx_setParametersUsingCCtxParams() :
- *  Apply a set of ZSTD_CCtx_params to the compression context.
- *  This can be done even after compression is started :
- *    if nbWorkers==0, this will have no impact until a new compression is started ;
- *    if nbWorkers>=1, new parameters will be picked up at the next job,
- *       with a few restrictions (windowLog, pledgedSrcSize, nbWorkers, jobSize, and overlapLog are not updated).
- */
-ZSTDLIB_API size_t ZSTD_CCtx_setParametersUsingCCtxParams(
-        ZSTD_CCtx* cctx, const ZSTD_CCtx_params* params);
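
A minimal sketch of the intended workflow, mirroring ``_make_cctx_params()``
and ``ZstdCompressor._setup_cctx()`` in the bindings deleted below::

   from _zstd_cffi import ffi, lib

   params = ffi.gc(lib.ZSTD_createCCtxParams(), lib.ZSTD_freeCCtxParams)

   # Accumulate settings in the parameters object...
   lib.ZSTD_CCtxParam_setParameter(params, lib.ZSTD_p_compressionLevel,
                                   ffi.cast('unsigned', 3))
   lib.ZSTD_CCtxParam_setParameter(params, lib.ZSTD_p_checksumFlag,
                                   ffi.cast('unsigned', 1))

   # ...then apply them to a context in one step.
   cctx = ffi.gc(lib.ZSTD_createCCtx(), lib.ZSTD_freeCCtx)
   zresult = lib.ZSTD_CCtx_setParametersUsingCCtxParams(cctx, params)
   if lib.ZSTD_isError(zresult):
       raise Exception(ffi.string(lib.ZSTD_getErrorName(zresult)))
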
-
-
-/* ==================================== */
-/*===   Advanced decompression API   ===*/
-/* ==================================== */
-
-/* The following API works the same way as the advanced compression API :
- * a context is created, parameters are pushed into it one by one,
- * then the context can be used to decompress data using an interface similar to the streaming API.
- */
-
-/*! ZSTD_DCtx_loadDictionary() :
- *  Create an internal DDict from dict buffer,
- *  to be used to decompress next frames.
- * @result : 0, or an error code (which can be tested with ZSTD_isError()).
- *  Special : Adding a NULL (or 0-size) dictionary invalidates any previous dictionary,
- *            meaning "return to no-dictionary mode".
- *  Note 1 : `dict` content will be copied internally.
- *            Use ZSTD_DCtx_loadDictionary_byReference()
- *            to reference dictionary content instead.
- *            In which case, the dictionary buffer must outlive its users.
- *  Note 2 : Loading a dictionary involves building tables,
- *           which has a non-negligible impact on CPU usage and latency.
- *  Note 3 : Use ZSTD_DCtx_loadDictionary_advanced() to select
- *           how dictionary content will be interpreted and loaded.
- */
-ZSTDLIB_API size_t ZSTD_DCtx_loadDictionary(ZSTD_DCtx* dctx, const void* dict, size_t dictSize);
-ZSTDLIB_API size_t ZSTD_DCtx_loadDictionary_byReference(ZSTD_DCtx* dctx, const void* dict, size_t dictSize);
-ZSTDLIB_API size_t ZSTD_DCtx_loadDictionary_advanced(ZSTD_DCtx* dctx, const void* dict, size_t dictSize, ZSTD_dictLoadMethod_e dictLoadMethod, ZSTD_dictContentType_e dictContentType);
-
-
-/*! ZSTD_DCtx_refDDict() :
- *  Reference a prepared dictionary, to be used to decompress next frames.
- *  The dictionary remains active for decompression of future frames using same DCtx.
- * @result : 0, or an error code (which can be tested with ZSTD_isError()).
- *  Note 1 : Currently, only one dictionary can be managed.
- *           Referencing a new dictionary effectively "discards" any previous one.
- *  Special : adding a NULL DDict means "return to no-dictionary mode".
- *  Note 2 : DDict is just referenced, its lifetime must outlive its usage from DCtx.
- */
-ZSTDLIB_API size_t ZSTD_DCtx_refDDict(ZSTD_DCtx* dctx, const ZSTD_DDict* ddict);
-
-
-/*! ZSTD_DCtx_refPrefix() :
- *  Reference a prefix (single-usage dictionary) for the next decompression job.
- *  This is the reverse operation of ZSTD_CCtx_refPrefix(),
- *  and must use the same prefix as the one used during compression.
- *  Prefix is **only used once**. Reference is discarded at end of frame.
- *  End of frame is reached when ZSTD_decompress_generic() returns 0.
- * @result : 0, or an error code (which can be tested with ZSTD_isError()).
- *  Note 1 : Adding any prefix (including NULL) invalidates any previously set prefix or dictionary
- *  Note 2 : Prefix buffer is referenced. It **must** outlive decompression job.
- *           Prefix buffer must remain unmodified up to the end of frame,
- *           reached when ZSTD_decompress_generic() returns 0.
- *  Note 3 : By default, the prefix is treated as raw content (ZSTD_dct_rawContent).
- *           Use ZSTD_DCtx_refPrefix_advanced() to alter dictContentType.
- *  Note 4 : Referencing a raw content prefix has almost no CPU nor memory cost.
- *           A fulldict prefix is more costly though.
- */
-ZSTDLIB_API size_t ZSTD_DCtx_refPrefix(ZSTD_DCtx* dctx,
-                                    const void* prefix, size_t prefixSize);
-ZSTDLIB_API size_t ZSTD_DCtx_refPrefix_advanced(ZSTD_DCtx* dctx,
-                                    const void* prefix, size_t prefixSize,
-                                    ZSTD_dictContentType_e dictContentType);
-
-
-/*! ZSTD_DCtx_setMaxWindowSize() :
- *  Refuse to allocate internal buffers for frames requiring a window size larger than the provided limit.
- *  This is useful to prevent a decoder context from reserving too much memory for itself (potential attack scenario).
- *  This parameter is only useful in streaming mode, since no internal buffer is allocated in direct mode.
- *  By default, a decompression context accepts all window sizes <= (1 << ZSTD_WINDOWLOG_MAX)
- * @return : 0, or an error code (which can be tested using ZSTD_isError()).
- */
-ZSTDLIB_API size_t ZSTD_DCtx_setMaxWindowSize(ZSTD_DCtx* dctx, size_t maxWindowSize);
-
-
-/*! ZSTD_DCtx_setFormat() :
- *  Instruct the decoder context about what kind of data to decode next.
- *  This instruction is mandatory to decode data without a fully-formed header,
- *  such as ZSTD_f_zstd1_magicless for example.
- * @return : 0, or an error code (which can be tested using ZSTD_isError()).
- */
-ZSTDLIB_API size_t ZSTD_DCtx_setFormat(ZSTD_DCtx* dctx, ZSTD_format_e format);
-
-
-/*! ZSTD_getFrameHeader_advanced() :
- *  same as ZSTD_getFrameHeader(),
- *  with added capability to select a format (like ZSTD_f_zstd1_magicless) */
-ZSTDLIB_API size_t ZSTD_getFrameHeader_advanced(ZSTD_frameHeader* zfhPtr,
-                        const void* src, size_t srcSize, ZSTD_format_e format);
-
-
-/*! ZSTD_decompress_generic() :
- *  Behave the same as ZSTD_decompressStream.
- *  Decompression parameters cannot be changed once decompression is started.
- * @return : an error code, which can be tested using ZSTD_isError() ;
- *           if > 0, a hint : the number of input bytes expected for the next invocation ;
- *           `0` means a frame has just been fully decoded and flushed.
- */
-ZSTDLIB_API size_t ZSTD_decompress_generic(ZSTD_DCtx* dctx,
-                                           ZSTD_outBuffer* output,
-                                           ZSTD_inBuffer* input);
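
A hedged decompression counterpart to the compression sketch above. It
assumes the bindings also expose the stable ``ZSTD_createDCtx`` /
``ZSTD_freeDCtx`` constructors, and that ``compressed`` holds at least one
complete frame::

   from _zstd_cffi import ffi, lib

   def decompress_bytes(compressed):
       dctx = ffi.gc(lib.ZSTD_createDCtx(), lib.ZSTD_freeDCtx)

       data_buffer = ffi.from_buffer(compressed)
       in_buffer = ffi.new('ZSTD_inBuffer *')
       in_buffer.src = data_buffer
       in_buffer.size = len(data_buffer)
       in_buffer.pos = 0

       dst_size = lib.ZSTD_DStreamOutSize()
       dst_buffer = ffi.new('char[]', dst_size)
       out_buffer = ffi.new('ZSTD_outBuffer *')
       out_buffer.dst = dst_buffer
       out_buffer.size = dst_size
       out_buffer.pos = 0

       chunks = []
       while in_buffer.pos < in_buffer.size:
           zresult = lib.ZSTD_decompress_generic(dctx, out_buffer, in_buffer)
           if lib.ZSTD_isError(zresult):
               raise Exception(ffi.string(lib.ZSTD_getErrorName(zresult)))
           if out_buffer.pos:
               chunks.append(ffi.buffer(out_buffer.dst, out_buffer.pos)[:])
               out_buffer.pos = 0
           if zresult == 0:  # a frame was fully decoded and flushed
               break
       return b''.join(chunks)
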
-
-
-/*! ZSTD_decompress_generic_simpleArgs() :
- *  Same as ZSTD_decompress_generic(),
- *  but using only integral types as arguments.
- *  Argument list is larger than ZSTD_{in,out}Buffer,
- *  but can be helpful for binders from dynamic languages
- *  which have troubles handling structures containing memory pointers.
- */
-ZSTDLIB_API size_t ZSTD_decompress_generic_simpleArgs (
-                            ZSTD_DCtx* dctx,
-                            void* dst, size_t dstCapacity, size_t* dstPos,
-                      const void* src, size_t srcSize, size_t* srcPos);
-
-
-/*! ZSTD_DCtx_reset() :
- *  Return a DCtx to clean state.
- *  If a decompression was ongoing, any internal data not yet flushed is cancelled.
- *  All parameters are back to default values, including sticky ones.
- *  Dictionary (if any) is dropped.
- *  Parameters can be modified again after a reset.
- */
-ZSTDLIB_API void ZSTD_DCtx_reset(ZSTD_DCtx* dctx);
-
-
 
 /* ============================ */
 /**       Block level API       */
@@ -1491,10 +1741,10 @@
       + copyCCtx() and copyDCtx() can be used too
     - Block size is limited, it must be <= ZSTD_getBlockSize() <= ZSTD_BLOCKSIZE_MAX == 128 KB
       + If input is larger than a block size, it's necessary to split input data into multiple blocks
-      + For inputs larger than a single block size, consider using the regular ZSTD_compress() instead.
+      + For inputs larger than a single block, really consider using regular ZSTD_compress() instead.
         Frame metadata is not that costly, and quickly becomes negligible as source size grows larger.
     - When a block is considered not compressible enough, ZSTD_compressBlock() result will be zero.
-      In which case, nothing is produced into `dst`.
+      In which case, nothing is produced into `dst` !
       + User must test for such outcome and deal directly with uncompressed data
       + ZSTD_decompressBlock() doesn't accept uncompressed data as input !!!
       + In case of multiple successive blocks, should some of them be uncompressed,
--- a/contrib/python-zstandard/zstd_cffi.py	Tue Mar 19 09:23:35 2019 -0400
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,1952 +0,0 @@
-# Copyright (c) 2016-present, Gregory Szorc
-# All rights reserved.
-#
-# This software may be modified and distributed under the terms
-# of the BSD license. See the LICENSE file for details.
-
-"""Python interface to the Zstandard (zstd) compression library."""
-
-from __future__ import absolute_import, unicode_literals
-
-# This should match what the C extension exports.
-__all__ = [
-    #'BufferSegment',
-    #'BufferSegments',
-    #'BufferWithSegments',
-    #'BufferWithSegmentsCollection',
-    'CompressionParameters',
-    'ZstdCompressionDict',
-    'ZstdCompressionParameters',
-    'ZstdCompressor',
-    'ZstdError',
-    'ZstdDecompressor',
-    'FrameParameters',
-    'estimate_decompression_context_size',
-    'frame_content_size',
-    'frame_header_size',
-    'get_frame_parameters',
-    'train_dictionary',
-
-    # Constants.
-    'COMPRESSOBJ_FLUSH_FINISH',
-    'COMPRESSOBJ_FLUSH_BLOCK',
-    'ZSTD_VERSION',
-    'FRAME_HEADER',
-    'CONTENTSIZE_UNKNOWN',
-    'CONTENTSIZE_ERROR',
-    'MAX_COMPRESSION_LEVEL',
-    'COMPRESSION_RECOMMENDED_INPUT_SIZE',
-    'COMPRESSION_RECOMMENDED_OUTPUT_SIZE',
-    'DECOMPRESSION_RECOMMENDED_INPUT_SIZE',
-    'DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE',
-    'MAGIC_NUMBER',
-    'BLOCKSIZELOG_MAX',
-    'BLOCKSIZE_MAX',
-    'WINDOWLOG_MIN',
-    'WINDOWLOG_MAX',
-    'CHAINLOG_MIN',
-    'CHAINLOG_MAX',
-    'HASHLOG_MIN',
-    'HASHLOG_MAX',
-    'HASHLOG3_MAX',
-    'SEARCHLOG_MIN',
-    'SEARCHLOG_MAX',
-    'SEARCHLENGTH_MIN',
-    'SEARCHLENGTH_MAX',
-    'TARGETLENGTH_MIN',
-    'TARGETLENGTH_MAX',
-    'LDM_MINMATCH_MIN',
-    'LDM_MINMATCH_MAX',
-    'LDM_BUCKETSIZELOG_MAX',
-    'STRATEGY_FAST',
-    'STRATEGY_DFAST',
-    'STRATEGY_GREEDY',
-    'STRATEGY_LAZY',
-    'STRATEGY_LAZY2',
-    'STRATEGY_BTLAZY2',
-    'STRATEGY_BTOPT',
-    'STRATEGY_BTULTRA',
-    'DICT_TYPE_AUTO',
-    'DICT_TYPE_RAWCONTENT',
-    'DICT_TYPE_FULLDICT',
-    'FORMAT_ZSTD1',
-    'FORMAT_ZSTD1_MAGICLESS',
-]
-
-import io
-import os
-import sys
-
-from _zstd_cffi import (
-    ffi,
-    lib,
-)
-
-if sys.version_info[0] == 2:
-    bytes_type = str
-    int_type = long
-else:
-    bytes_type = bytes
-    int_type = int
-
-
-COMPRESSION_RECOMMENDED_INPUT_SIZE = lib.ZSTD_CStreamInSize()
-COMPRESSION_RECOMMENDED_OUTPUT_SIZE = lib.ZSTD_CStreamOutSize()
-DECOMPRESSION_RECOMMENDED_INPUT_SIZE = lib.ZSTD_DStreamInSize()
-DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE = lib.ZSTD_DStreamOutSize()
-
-new_nonzero = ffi.new_allocator(should_clear_after_alloc=False)
-
-
-MAX_COMPRESSION_LEVEL = lib.ZSTD_maxCLevel()
-MAGIC_NUMBER = lib.ZSTD_MAGICNUMBER
-FRAME_HEADER = b'\x28\xb5\x2f\xfd'
-CONTENTSIZE_UNKNOWN = lib.ZSTD_CONTENTSIZE_UNKNOWN
-CONTENTSIZE_ERROR = lib.ZSTD_CONTENTSIZE_ERROR
-ZSTD_VERSION = (lib.ZSTD_VERSION_MAJOR, lib.ZSTD_VERSION_MINOR, lib.ZSTD_VERSION_RELEASE)
-
-BLOCKSIZELOG_MAX = lib.ZSTD_BLOCKSIZELOG_MAX
-BLOCKSIZE_MAX = lib.ZSTD_BLOCKSIZE_MAX
-WINDOWLOG_MIN = lib.ZSTD_WINDOWLOG_MIN
-WINDOWLOG_MAX = lib.ZSTD_WINDOWLOG_MAX
-CHAINLOG_MIN = lib.ZSTD_CHAINLOG_MIN
-CHAINLOG_MAX = lib.ZSTD_CHAINLOG_MAX
-HASHLOG_MIN = lib.ZSTD_HASHLOG_MIN
-HASHLOG_MAX = lib.ZSTD_HASHLOG_MAX
-HASHLOG3_MAX = lib.ZSTD_HASHLOG3_MAX
-SEARCHLOG_MIN = lib.ZSTD_SEARCHLOG_MIN
-SEARCHLOG_MAX = lib.ZSTD_SEARCHLOG_MAX
-SEARCHLENGTH_MIN = lib.ZSTD_SEARCHLENGTH_MIN
-SEARCHLENGTH_MAX = lib.ZSTD_SEARCHLENGTH_MAX
-TARGETLENGTH_MIN = lib.ZSTD_TARGETLENGTH_MIN
-TARGETLENGTH_MAX = lib.ZSTD_TARGETLENGTH_MAX
-LDM_MINMATCH_MIN = lib.ZSTD_LDM_MINMATCH_MIN
-LDM_MINMATCH_MAX = lib.ZSTD_LDM_MINMATCH_MAX
-LDM_BUCKETSIZELOG_MAX = lib.ZSTD_LDM_BUCKETSIZELOG_MAX
-
-STRATEGY_FAST = lib.ZSTD_fast
-STRATEGY_DFAST = lib.ZSTD_dfast
-STRATEGY_GREEDY = lib.ZSTD_greedy
-STRATEGY_LAZY = lib.ZSTD_lazy
-STRATEGY_LAZY2 = lib.ZSTD_lazy2
-STRATEGY_BTLAZY2 = lib.ZSTD_btlazy2
-STRATEGY_BTOPT = lib.ZSTD_btopt
-STRATEGY_BTULTRA = lib.ZSTD_btultra
-
-DICT_TYPE_AUTO = lib.ZSTD_dct_auto
-DICT_TYPE_RAWCONTENT = lib.ZSTD_dct_rawContent
-DICT_TYPE_FULLDICT = lib.ZSTD_dct_fullDict
-
-FORMAT_ZSTD1 = lib.ZSTD_f_zstd1
-FORMAT_ZSTD1_MAGICLESS = lib.ZSTD_f_zstd1_magicless
-
-COMPRESSOBJ_FLUSH_FINISH = 0
-COMPRESSOBJ_FLUSH_BLOCK = 1
-
-
-def _cpu_count():
-    # os.cpu_count() was introduced in Python 3.4.
-    try:
-        return os.cpu_count() or 0
-    except AttributeError:
-        pass
-
-    # Linux.
-    try:
-        if sys.version_info[0] == 2:
-            return os.sysconf(b'SC_NPROCESSORS_ONLN')
-        else:
-            return os.sysconf(u'SC_NPROCESSORS_ONLN')
-    except (AttributeError, ValueError):
-        pass
-
-    # TODO implement on other platforms.
-    return 0
-
-
-class ZstdError(Exception):
-    pass
-
-
-def _zstd_error(zresult):
-    # Resolves to bytes on Python 2 and 3. We use the string for formatting
-    # into error messages, which will be literal unicode. So convert it to
-    # unicode.
-    return ffi.string(lib.ZSTD_getErrorName(zresult)).decode('utf-8')
-
-
-def _make_cctx_params(params):
-    res = lib.ZSTD_createCCtxParams()
-    if res == ffi.NULL:
-        raise MemoryError()
-
-    res = ffi.gc(res, lib.ZSTD_freeCCtxParams)
-
-    attrs = [
-        (lib.ZSTD_p_format, params.format),
-        (lib.ZSTD_p_compressionLevel, params.compression_level),
-        (lib.ZSTD_p_windowLog, params.window_log),
-        (lib.ZSTD_p_hashLog, params.hash_log),
-        (lib.ZSTD_p_chainLog, params.chain_log),
-        (lib.ZSTD_p_searchLog, params.search_log),
-        (lib.ZSTD_p_minMatch, params.min_match),
-        (lib.ZSTD_p_targetLength, params.target_length),
-        (lib.ZSTD_p_compressionStrategy, params.compression_strategy),
-        (lib.ZSTD_p_contentSizeFlag, params.write_content_size),
-        (lib.ZSTD_p_checksumFlag, params.write_checksum),
-        (lib.ZSTD_p_dictIDFlag, params.write_dict_id),
-        (lib.ZSTD_p_nbWorkers, params.threads),
-        (lib.ZSTD_p_jobSize, params.job_size),
-        (lib.ZSTD_p_overlapSizeLog, params.overlap_size_log),
-        (lib.ZSTD_p_forceMaxWindow, params.force_max_window),
-        (lib.ZSTD_p_enableLongDistanceMatching, params.enable_ldm),
-        (lib.ZSTD_p_ldmHashLog, params.ldm_hash_log),
-        (lib.ZSTD_p_ldmMinMatch, params.ldm_min_match),
-        (lib.ZSTD_p_ldmBucketSizeLog, params.ldm_bucket_size_log),
-        (lib.ZSTD_p_ldmHashEveryLog, params.ldm_hash_every_log),
-    ]
-
-    for param, value in attrs:
-        _set_compression_parameter(res, param, value)
-
-    return res
-
-
-class ZstdCompressionParameters(object):
-    @staticmethod
-    def from_level(level, source_size=0, dict_size=0, **kwargs):
-        params = lib.ZSTD_getCParams(level, source_size, dict_size)
-
-        args = {
-            'window_log': 'windowLog',
-            'chain_log': 'chainLog',
-            'hash_log': 'hashLog',
-            'search_log': 'searchLog',
-            'min_match': 'searchLength',
-            'target_length': 'targetLength',
-            'compression_strategy': 'strategy',
-        }
-
-        for arg, attr in args.items():
-            if arg not in kwargs:
-                kwargs[arg] = getattr(params, attr)
-
-        return ZstdCompressionParameters(**kwargs)
-
-    def __init__(self, format=0, compression_level=0, window_log=0, hash_log=0,
-                 chain_log=0, search_log=0, min_match=0, target_length=0,
-                 compression_strategy=0, write_content_size=1, write_checksum=0,
-                 write_dict_id=0, job_size=0, overlap_size_log=0,
-                 force_max_window=0, enable_ldm=0, ldm_hash_log=0,
-                 ldm_min_match=0, ldm_bucket_size_log=0, ldm_hash_every_log=0,
-                 threads=0):
-
-        if threads < 0:
-            threads = _cpu_count()
-
-        self.format = format
-        self.compression_level = compression_level
-        self.window_log = window_log
-        self.hash_log = hash_log
-        self.chain_log = chain_log
-        self.search_log = search_log
-        self.min_match = min_match
-        self.target_length = target_length
-        self.compression_strategy = compression_strategy
-        self.write_content_size = write_content_size
-        self.write_checksum = write_checksum
-        self.write_dict_id = write_dict_id
-        self.job_size = job_size
-        self.overlap_size_log = overlap_size_log
-        self.force_max_window = force_max_window
-        self.enable_ldm = enable_ldm
-        self.ldm_hash_log = ldm_hash_log
-        self.ldm_min_match = ldm_min_match
-        self.ldm_bucket_size_log = ldm_bucket_size_log
-        self.ldm_hash_every_log = ldm_hash_every_log
-        self.threads = threads
-
-        self.params = _make_cctx_params(self)
-
-    def estimated_compression_context_size(self):
-        return lib.ZSTD_estimateCCtxSize_usingCCtxParams(self.params)
-
-CompressionParameters = ZstdCompressionParameters
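
For illustration, a short usage sketch of these parameter objects (all the
names are defined in this file; the override value is arbitrary)::

   # Derive tuned parameters from level 3, but force a 4 MB window.
   params = ZstdCompressionParameters.from_level(3, window_log=22)

   # Feed them to a compressor and inspect the estimated memory cost.
   cctx = ZstdCompressor(compression_params=params)
   print(params.estimated_compression_context_size())
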
-
-def estimate_decompression_context_size():
-    return lib.ZSTD_estimateDCtxSize()
-
-
-def _set_compression_parameter(params, param, value):
-    zresult = lib.ZSTD_CCtxParam_setParameter(params, param,
-                                              ffi.cast('unsigned', value))
-    if lib.ZSTD_isError(zresult):
-        raise ZstdError('unable to set compression context parameter: %s' %
-                        _zstd_error(zresult))
-
-
-class ZstdCompressionWriter(object):
-    def __init__(self, compressor, writer, source_size, write_size):
-        self._compressor = compressor
-        self._writer = writer
-        self._source_size = source_size
-        self._write_size = write_size
-        self._entered = False
-        self._bytes_compressed = 0
-
-    def __enter__(self):
-        if self._entered:
-            raise ZstdError('cannot __enter__ multiple times')
-
-        zresult = lib.ZSTD_CCtx_setPledgedSrcSize(self._compressor._cctx,
-                                                  self._source_size)
-        if lib.ZSTD_isError(zresult):
-            raise ZstdError('error setting source size: %s' %
-                            _zstd_error(zresult))
-
-        self._entered = True
-        return self
-
-    def __exit__(self, exc_type, exc_value, exc_tb):
-        self._entered = False
-
-        if not exc_type and not exc_value and not exc_tb:
-            dst_buffer = ffi.new('char[]', self._write_size)
-
-            out_buffer = ffi.new('ZSTD_outBuffer *')
-            in_buffer = ffi.new('ZSTD_inBuffer *')
-
-            out_buffer.dst = dst_buffer
-            out_buffer.size = len(dst_buffer)
-            out_buffer.pos = 0
-
-            in_buffer.src = ffi.NULL
-            in_buffer.size = 0
-            in_buffer.pos = 0
-
-            while True:
-                zresult = lib.ZSTD_compress_generic(self._compressor._cctx,
-                                                    out_buffer, in_buffer,
-                                                    lib.ZSTD_e_end)
-
-                if lib.ZSTD_isError(zresult):
-                    raise ZstdError('error ending compression stream: %s' %
-                                    _zstd_error(zresult))
-
-                if out_buffer.pos:
-                    self._writer.write(ffi.buffer(out_buffer.dst, out_buffer.pos)[:])
-                    out_buffer.pos = 0
-
-                if zresult == 0:
-                    break
-
-        self._compressor = None
-
-        return False
-
-    def memory_size(self):
-        if not self._entered:
-            raise ZstdError('cannot determine size of an inactive compressor; '
-                            'call when a context manager is active')
-
-        return lib.ZSTD_sizeof_CCtx(self._compressor._cctx)
-
-    def write(self, data):
-        if not self._entered:
-            raise ZstdError('write() must be called from an active context '
-                            'manager')
-
-        total_write = 0
-
-        data_buffer = ffi.from_buffer(data)
-
-        in_buffer = ffi.new('ZSTD_inBuffer *')
-        in_buffer.src = data_buffer
-        in_buffer.size = len(data_buffer)
-        in_buffer.pos = 0
-
-        out_buffer = ffi.new('ZSTD_outBuffer *')
-        dst_buffer = ffi.new('char[]', self._write_size)
-        out_buffer.dst = dst_buffer
-        out_buffer.size = self._write_size
-        out_buffer.pos = 0
-
-        while in_buffer.pos < in_buffer.size:
-            zresult = lib.ZSTD_compress_generic(self._compressor._cctx,
-                                                out_buffer, in_buffer,
-                                                lib.ZSTD_e_continue)
-            if lib.ZSTD_isError(zresult):
-                raise ZstdError('zstd compress error: %s' %
-                                _zstd_error(zresult))
-
-            if out_buffer.pos:
-                self._writer.write(ffi.buffer(out_buffer.dst, out_buffer.pos)[:])
-                total_write += out_buffer.pos
-                self._bytes_compressed += out_buffer.pos
-                out_buffer.pos = 0
-
-        return total_write
-
-    def flush(self):
-        if not self._entered:
-            raise ZstdError('flush must be called from an active context manager')
-
-        total_write = 0
-
-        out_buffer = ffi.new('ZSTD_outBuffer *')
-        dst_buffer = ffi.new('char[]', self._write_size)
-        out_buffer.dst = dst_buffer
-        out_buffer.size = self._write_size
-        out_buffer.pos = 0
-
-        in_buffer = ffi.new('ZSTD_inBuffer *')
-        in_buffer.src = ffi.NULL
-        in_buffer.size = 0
-        in_buffer.pos = 0
-
-        while True:
-            zresult = lib.ZSTD_compress_generic(self._compressor._cctx,
-                                                out_buffer, in_buffer,
-                                                lib.ZSTD_e_flush)
-            if lib.ZSTD_isError(zresult):
-                raise ZstdError('zstd compress error: %s' %
-                                _zstd_error(zresult))
-
-            if out_buffer.pos:
-                self._writer.write(ffi.buffer(out_buffer.dst, out_buffer.pos)[:])
-                total_write += out_buffer.pos
-                self._bytes_compressed += out_buffer.pos
-                out_buffer.pos = 0
-
-            if not zresult:
-                break
-
-        return total_write
-
-    def tell(self):
-        return self._bytes_compressed
-
-
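A short usage sketch for the writer. In the full bindings it is normally
obtained through a helper on ``ZstdCompressor`` rather than constructed
directly, but direct construction works with the code above::

   import io

   destination = io.BytesIO()
   compressor = ZstdCompressor()

   # Pledge an unknown content size; write() streams compressed data into
   # destination, and __exit__ finishes the frame with ZSTD_e_end.
   writer = ZstdCompressionWriter(compressor, destination,
                                  CONTENTSIZE_UNKNOWN,
                                  COMPRESSION_RECOMMENDED_OUTPUT_SIZE)
   with writer:
       writer.write(b'data to compress')
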
-class ZstdCompressionObj(object):
-    def compress(self, data):
-        if self._finished:
-            raise ZstdError('cannot call compress() after compressor finished')
-
-        data_buffer = ffi.from_buffer(data)
-        source = ffi.new('ZSTD_inBuffer *')
-        source.src = data_buffer
-        source.size = len(data_buffer)
-        source.pos = 0
-
-        chunks = []
-
-        while source.pos < len(data):
-            zresult = lib.ZSTD_compress_generic(self._compressor._cctx,
-                                                self._out,
-                                                source,
-                                                lib.ZSTD_e_continue)
-            if lib.ZSTD_isError(zresult):
-                raise ZstdError('zstd compress error: %s' %
-                                _zstd_error(zresult))
-
-            if self._out.pos:
-                chunks.append(ffi.buffer(self._out.dst, self._out.pos)[:])
-                self._out.pos = 0
-
-        return b''.join(chunks)
-
-    def flush(self, flush_mode=COMPRESSOBJ_FLUSH_FINISH):
-        if flush_mode not in (COMPRESSOBJ_FLUSH_FINISH, COMPRESSOBJ_FLUSH_BLOCK):
-            raise ValueError('flush mode not recognized')
-
-        if self._finished:
-            raise ZstdError('compressor object already finished')
-
-        if flush_mode == COMPRESSOBJ_FLUSH_BLOCK:
-            z_flush_mode = lib.ZSTD_e_flush
-        elif flush_mode == COMPRESSOBJ_FLUSH_FINISH:
-            z_flush_mode = lib.ZSTD_e_end
-            self._finished = True
-        else:
-            raise ZstdError('unhandled flush mode')
-
-        assert self._out.pos == 0
-
-        in_buffer = ffi.new('ZSTD_inBuffer *')
-        in_buffer.src = ffi.NULL
-        in_buffer.size = 0
-        in_buffer.pos = 0
-
-        chunks = []
-
-        while True:
-            zresult = lib.ZSTD_compress_generic(self._compressor._cctx,
-                                                self._out,
-                                                in_buffer,
-                                                z_flush_mode)
-            if lib.ZSTD_isError(zresult):
-                raise ZstdError('error ending compression stream: %s' %
-                                _zstd_error(zresult))
-
-            if self._out.pos:
-                chunks.append(ffi.buffer(self._out.dst, self._out.pos)[:])
-                self._out.pos = 0
-
-            if not zresult:
-                break
-
-        return b''.join(chunks)
-
-
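Usage follows the zlib ``compressobj`` pattern; the factory that builds
these objects lives elsewhere in this file (outside this hunk), so the
``compressobj()`` call below is an assumption::

   compressor = ZstdCompressor()
   cobj = compressor.compressobj()  # assumed factory, not shown in this hunk

   # Feed data incrementally; flush() with the default
   # COMPRESSOBJ_FLUSH_FINISH ends the frame.
   frame = cobj.compress(b'chunk one')
   frame += cobj.compress(b'chunk two')
   frame += cobj.flush()
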
-class ZstdCompressionChunker(object):
-    def __init__(self, compressor, chunk_size):
-        self._compressor = compressor
-        self._out = ffi.new('ZSTD_outBuffer *')
-        self._dst_buffer = ffi.new('char[]', chunk_size)
-        self._out.dst = self._dst_buffer
-        self._out.size = chunk_size
-        self._out.pos = 0
-
-        self._in = ffi.new('ZSTD_inBuffer *')
-        self._in.src = ffi.NULL
-        self._in.size = 0
-        self._in.pos = 0
-        self._finished = False
-
-    def compress(self, data):
-        if self._finished:
-            raise ZstdError('cannot call compress() after compression finished')
-
-        if self._in.src != ffi.NULL:
-            raise ZstdError('cannot perform operation before consuming output '
-                            'from previous operation')
-
-        data_buffer = ffi.from_buffer(data)
-
-        if not len(data_buffer):
-            return
-
-        self._in.src = data_buffer
-        self._in.size = len(data_buffer)
-        self._in.pos = 0
-
-        while self._in.pos < self._in.size:
-            zresult = lib.ZSTD_compress_generic(self._compressor._cctx,
-                                                self._out,
-                                                self._in,
-                                                lib.ZSTD_e_continue)
-
-            if self._in.pos == self._in.size:
-                self._in.src = ffi.NULL
-                self._in.size = 0
-                self._in.pos = 0
-
-            if lib.ZSTD_isError(zresult):
-                raise ZstdError('zstd compress error: %s' %
-                                _zstd_error(zresult))
-
-            if self._out.pos == self._out.size:
-                yield ffi.buffer(self._out.dst, self._out.pos)[:]
-                self._out.pos = 0
-
-    def flush(self):
-        if self._finished:
-            raise ZstdError('cannot call flush() after compression finished')
-
-        if self._in.src != ffi.NULL:
-            raise ZstdError('cannot call flush() before consuming output from '
-                            'previous operation')
-
-        while True:
-            zresult = lib.ZSTD_compress_generic(self._compressor._cctx,
-                                                self._out, self._in,
-                                                lib.ZSTD_e_flush)
-            if lib.ZSTD_isError(zresult):
-                raise ZstdError('zstd compress error: %s' % _zstd_error(zresult))
-
-            if self._out.pos:
-                yield ffi.buffer(self._out.dst, self._out.pos)[:]
-                self._out.pos = 0
-
-            if not zresult:
-                return
-
-    def finish(self):
-        if self._finished:
-            raise ZstdError('cannot call finish() after compression finished')
-
-        if self._in.src != ffi.NULL:
-            raise ZstdError('cannot call finish() before consuming output from '
-                            'previous operation')
-
-        while True:
-            zresult = lib.ZSTD_compress_generic(self._compressor._cctx,
-                                                self._out, self._in,
-                                                lib.ZSTD_e_end)
-            if lib.ZSTD_isError(zresult):
-                raise ZstdError('zstd compress error: %s' % _zstd_error(zresult))
-
-            if self._out.pos:
-                yield ffi.buffer(self._out.dst, self._out.pos)[:]
-                self._out.pos = 0
-
-            if not zresult:
-                self._finished = True
-                return
-
-
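A usage sketch for the chunker (normally built by a ``ZstdCompressor``
helper outside this hunk; direct construction works with the code above)::

   compressor = ZstdCompressor()
   chunker = ZstdCompressionChunker(compressor, chunk_size=32768)

   chunks = []
   for piece in (b'first buffer', b'second buffer'):
       # compress() is a generator: it only yields once a full
       # 32768-byte chunk has accumulated.
       chunks.extend(chunker.compress(piece))

   # finish() flushes what remains and closes the frame.
   chunks.extend(chunker.finish())
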
-class CompressionReader(object):
-    def __init__(self, compressor, source, read_size):
-        self._compressor = compressor
-        self._source = source
-        self._read_size = read_size
-        self._entered = False
-        self._closed = False
-        self._bytes_compressed = 0
-        self._finished_input = False
-        self._finished_output = False
-
-        self._in_buffer = ffi.new('ZSTD_inBuffer *')
-        # Holds a ref so backing bytes in self._in_buffer stay alive.
-        self._source_buffer = None
-
-    def __enter__(self):
-        if self._entered:
-            raise ValueError('cannot __enter__ multiple times')
-
-        self._entered = True
-        return self
-
-    def __exit__(self, exc_type, exc_value, exc_tb):
-        self._entered = False
-        self._closed = True
-        self._source = None
-        self._compressor = None
-
-        return False
-
-    def readable(self):
-        return True
-
-    def writable(self):
-        return False
-
-    def seekable(self):
-        return False
-
-    def readline(self):
-        raise io.UnsupportedOperation()
-
-    def readlines(self):
-        raise io.UnsupportedOperation()
-
-    def write(self, data):
-        raise OSError('stream is not writable')
-
-    def writelines(self, ignored):
-        raise OSError('stream is not writable')
-
-    def isatty(self):
-        return False
-
-    def flush(self):
-        return None
-
-    def close(self):
-        self._closed = True
-        return None
-
-    @property
-    def closed(self):
-        return self._closed
-
-    def tell(self):
-        return self._bytes_compressed
-
-    def readall(self):
-        raise NotImplementedError()
-
-    def __iter__(self):
-        raise io.UnsupportedOperation()
-
-    def __next__(self):
-        raise io.UnsupportedOperation()
-
-    next = __next__
-
-    def read(self, size=-1):
-        if self._closed:
-            raise ValueError('stream is closed')
-
-        if self._finished_output:
-            return b''
-
-        # Note: despite the size=-1 default, a positive size is currently
-        # required; there is no "read everything" mode here.
-        if size < 1:
-            raise ValueError('cannot read negative or size 0 amounts')
-
-        # Need a dedicated ref to dest buffer otherwise it gets collected.
-        dst_buffer = ffi.new('char[]', size)
-        out_buffer = ffi.new('ZSTD_outBuffer *')
-        out_buffer.dst = dst_buffer
-        out_buffer.size = size
-        out_buffer.pos = 0
-
-        def compress_input():
-            if self._in_buffer.pos >= self._in_buffer.size:
-                return
-
-            old_pos = out_buffer.pos
-
-            zresult = lib.ZSTD_compress_generic(self._compressor._cctx,
-                                                out_buffer, self._in_buffer,
-                                                lib.ZSTD_e_continue)
-
-            self._bytes_compressed += out_buffer.pos - old_pos
-
-            if self._in_buffer.pos == self._in_buffer.size:
-                self._in_buffer.src = ffi.NULL
-                self._in_buffer.pos = 0
-                self._in_buffer.size = 0
-                self._source_buffer = None
-
-                if not hasattr(self._source, 'read'):
-                    self._finished_input = True
-
-            if lib.ZSTD_isError(zresult):
-                raise ZstdError('zstd compress error: %s' %
-                                _zstd_error(zresult))
-
-            if out_buffer.pos and out_buffer.pos == out_buffer.size:
-                return ffi.buffer(out_buffer.dst, out_buffer.pos)[:]
-
-        def get_input():
-            if self._finished_input:
-                return
-
-            if hasattr(self._source, 'read'):
-                data = self._source.read(self._read_size)
-
-                if not data:
-                    self._finished_input = True
-                    return
-
-                self._source_buffer = ffi.from_buffer(data)
-                self._in_buffer.src = self._source_buffer
-                self._in_buffer.size = len(self._source_buffer)
-                self._in_buffer.pos = 0
-            else:
-                self._source_buffer = ffi.from_buffer(self._source)
-                self._in_buffer.src = self._source_buffer
-                self._in_buffer.size = len(self._source_buffer)
-                self._in_buffer.pos = 0
-
-        result = compress_input()
-        if result:
-            return result
-
-        while not self._finished_input:
-            get_input()
-            result = compress_input()
-            if result:
-                return result
-
-        # EOF
-        old_pos = out_buffer.pos
-
-        zresult = lib.ZSTD_compress_generic(self._compressor._cctx,
-                                            out_buffer, self._in_buffer,
-                                            lib.ZSTD_e_end)
-
-        self._bytes_compressed += out_buffer.pos - old_pos
-
-        if lib.ZSTD_isError(zresult):
-            raise ZstdError('error ending compression stream: %s' %
-                            _zstd_error(zresult))
-
-        if zresult == 0:
-            self._finished_output = True
-
-        return ffi.buffer(out_buffer.dst, out_buffer.pos)[:]
-
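A usage sketch for the reader (normally built by a ``ZstdCompressor``
helper outside this hunk; note that read() currently requires an explicit
positive size)::

   import io

   source = io.BytesIO(b'data to compress' * 1024)
   compressor = ZstdCompressor()

   reader = CompressionReader(compressor, source,
                              COMPRESSION_RECOMMENDED_INPUT_SIZE)
   chunks = []
   with reader:
       while True:
           chunk = reader.read(16384)
           if not chunk:
               break
           chunks.append(chunk)
   frame = b''.join(chunks)
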
-class ZstdCompressor(object):
-    def __init__(self, level=3, dict_data=None, compression_params=None,
-                 write_checksum=None, write_content_size=None,
-                 write_dict_id=None, threads=0):
-        if level > lib.ZSTD_maxCLevel():
-            raise ValueError('level must be less than or equal to %d' %
-                             lib.ZSTD_maxCLevel())
-
-        if threads < 0:
-            threads = _cpu_count()
-
-        if compression_params and write_checksum is not None:
-            raise ValueError('cannot define compression_params and '
-                             'write_checksum')
-
-        if compression_params and write_content_size is not None:
-            raise ValueError('cannot define compression_params and '
-                             'write_content_size')
-
-        if compression_params and write_dict_id is not None:
-            raise ValueError('cannot define compression_params and '
-                             'write_dict_id')
-
-        if compression_params and threads:
-            raise ValueError('cannot define compression_params and threads')
-
-        if compression_params:
-            self._params = _make_cctx_params(compression_params)
-        else:
-            if write_dict_id is None:
-                write_dict_id = True
-
-            params = lib.ZSTD_createCCtxParams()
-            if params == ffi.NULL:
-                raise MemoryError()
-
-            self._params = ffi.gc(params, lib.ZSTD_freeCCtxParams)
-
-            _set_compression_parameter(self._params,
-                                       lib.ZSTD_p_compressionLevel,
-                                       level)
-
-            _set_compression_parameter(
-                self._params,
-                lib.ZSTD_p_contentSizeFlag,
-                write_content_size if write_content_size is not None else 1)
-
-            _set_compression_parameter(self._params,
-                                       lib.ZSTD_p_checksumFlag,
-                                       1 if write_checksum else 0)
-
-            _set_compression_parameter(self._params,
-                                       lib.ZSTD_p_dictIDFlag,
-                                       1 if write_dict_id else 0)
-
-            if threads:
-                _set_compression_parameter(self._params,
-                                           lib.ZSTD_p_nbWorkers,
-                                           threads)
-
-        cctx = lib.ZSTD_createCCtx()
-        if cctx == ffi.NULL:
-            raise MemoryError()
-
-        self._cctx = cctx
-        self._dict_data = dict_data
-
-        # We defer setting up garbage collection until after calling
-        # _setup_cctx() to ensure the memory size estimate is more accurate.
-        try:
-            self._setup_cctx()
-        finally:
-            self._cctx = ffi.gc(cctx, lib.ZSTD_freeCCtx,
-                                size=lib.ZSTD_sizeof_CCtx(cctx))
-
-    def _setup_cctx(self):
-        zresult = lib.ZSTD_CCtx_setParametersUsingCCtxParams(self._cctx,
-                                                             self._params)
-        if lib.ZSTD_isError(zresult):
-            raise ZstdError('could not set compression parameters: %s' %
-                            _zstd_error(zresult))
-
-        dict_data = self._dict_data
-
-        if dict_data:
-            if dict_data._cdict:
-                zresult = lib.ZSTD_CCtx_refCDict(self._cctx, dict_data._cdict)
-            else:
-                zresult = lib.ZSTD_CCtx_loadDictionary_advanced(
-                    self._cctx, dict_data.as_bytes(), len(dict_data),
-                    lib.ZSTD_dlm_byRef, dict_data._dict_type)
-
-            if lib.ZSTD_isError(zresult):
-                raise ZstdError('could not load compression dictionary: %s' %
-                                _zstd_error(zresult))
-
-    def memory_size(self):
-        return lib.ZSTD_sizeof_CCtx(self._cctx)
-
-    def compress(self, data):
-        lib.ZSTD_CCtx_reset(self._cctx)
-
-        data_buffer = ffi.from_buffer(data)
-
-        dest_size = lib.ZSTD_compressBound(len(data_buffer))
-        out = new_nonzero('char[]', dest_size)
-
-        zresult = lib.ZSTD_CCtx_setPledgedSrcSize(self._cctx, len(data_buffer))
-        if lib.ZSTD_isError(zresult):
-            raise ZstdError('error setting source size: %s' %
-                            _zstd_error(zresult))
-
-        out_buffer = ffi.new('ZSTD_outBuffer *')
-        in_buffer = ffi.new('ZSTD_inBuffer *')
-
-        out_buffer.dst = out
-        out_buffer.size = dest_size
-        out_buffer.pos = 0
-
-        in_buffer.src = data_buffer
-        in_buffer.size = len(data_buffer)
-        in_buffer.pos = 0
-
-        zresult = lib.ZSTD_compress_generic(self._cctx,
-                                            out_buffer,
-                                            in_buffer,
-                                            lib.ZSTD_e_end)
-
-        if lib.ZSTD_isError(zresult):
-            raise ZstdError('cannot compress: %s' %
-                            _zstd_error(zresult))
-        elif zresult:
-            raise ZstdError('unexpected partial frame flush')
-
-        return ffi.buffer(out, out_buffer.pos)[:]
-
-    def compressobj(self, size=-1):
-        lib.ZSTD_CCtx_reset(self._cctx)
-
-        if size < 0:
-            size = lib.ZSTD_CONTENTSIZE_UNKNOWN
-
-        zresult = lib.ZSTD_CCtx_setPledgedSrcSize(self._cctx, size)
-        if lib.ZSTD_isError(zresult):
-            raise ZstdError('error setting source size: %s' %
-                            _zstd_error(zresult))
-
-        cobj = ZstdCompressionObj()
-        cobj._out = ffi.new('ZSTD_outBuffer *')
-        cobj._dst_buffer = ffi.new('char[]', COMPRESSION_RECOMMENDED_OUTPUT_SIZE)
-        cobj._out.dst = cobj._dst_buffer
-        cobj._out.size = COMPRESSION_RECOMMENDED_OUTPUT_SIZE
-        cobj._out.pos = 0
-        cobj._compressor = self
-        cobj._finished = False
-
-        return cobj
-
-    def chunker(self, size=-1, chunk_size=COMPRESSION_RECOMMENDED_OUTPUT_SIZE):
-        lib.ZSTD_CCtx_reset(self._cctx)
-
-        if size < 0:
-            size = lib.ZSTD_CONTENTSIZE_UNKNOWN
-
-        zresult = lib.ZSTD_CCtx_setPledgedSrcSize(self._cctx, size)
-        if lib.ZSTD_isError(zresult):
-            raise ZstdError('error setting source size: %s' %
-                            _zstd_error(zresult))
-
-        return ZstdCompressionChunker(self, chunk_size=chunk_size)
-
-    def copy_stream(self, ifh, ofh, size=-1,
-                    read_size=COMPRESSION_RECOMMENDED_INPUT_SIZE,
-                    write_size=COMPRESSION_RECOMMENDED_OUTPUT_SIZE):
-
-        if not hasattr(ifh, 'read'):
-            raise ValueError('first argument must have a read() method')
-        if not hasattr(ofh, 'write'):
-            raise ValueError('second argument must have a write() method')
-
-        lib.ZSTD_CCtx_reset(self._cctx)
-
-        if size < 0:
-            size = lib.ZSTD_CONTENTSIZE_UNKNOWN
-
-        zresult = lib.ZSTD_CCtx_setPledgedSrcSize(self._cctx, size)
-        if lib.ZSTD_isError(zresult):
-            raise ZstdError('error setting source size: %s' %
-                            _zstd_error(zresult))
-
-        in_buffer = ffi.new('ZSTD_inBuffer *')
-        out_buffer = ffi.new('ZSTD_outBuffer *')
-
-        dst_buffer = ffi.new('char[]', write_size)
-        out_buffer.dst = dst_buffer
-        out_buffer.size = write_size
-        out_buffer.pos = 0
-
-        total_read, total_write = 0, 0
-
-        while True:
-            data = ifh.read(read_size)
-            if not data:
-                break
-
-            data_buffer = ffi.from_buffer(data)
-            total_read += len(data_buffer)
-            in_buffer.src = data_buffer
-            in_buffer.size = len(data_buffer)
-            in_buffer.pos = 0
-
-            while in_buffer.pos < in_buffer.size:
-                zresult = lib.ZSTD_compress_generic(self._cctx,
-                                                    out_buffer,
-                                                    in_buffer,
-                                                    lib.ZSTD_e_continue)
-                if lib.ZSTD_isError(zresult):
-                    raise ZstdError('zstd compress error: %s' %
-                                    _zstd_error(zresult))
-
-                if out_buffer.pos:
-                    ofh.write(ffi.buffer(out_buffer.dst, out_buffer.pos))
-                    total_write += out_buffer.pos
-                    out_buffer.pos = 0
-
-        # We've finished reading. Flush the compressor.
-        while True:
-            zresult = lib.ZSTD_compress_generic(self._cctx,
-                                                out_buffer,
-                                                in_buffer,
-                                                lib.ZSTD_e_end)
-            if lib.ZSTD_isError(zresult):
-                raise ZstdError('error ending compression stream: %s' %
-                                _zstd_error(zresult))
-
-            if out_buffer.pos:
-                ofh.write(ffi.buffer(out_buffer.dst, out_buffer.pos))
-                total_write += out_buffer.pos
-                out_buffer.pos = 0
-
-            if zresult == 0:
-                break
-
-        return total_read, total_write
-
-    def stream_reader(self, source, size=-1,
-                      read_size=COMPRESSION_RECOMMENDED_INPUT_SIZE):
-        lib.ZSTD_CCtx_reset(self._cctx)
-
-        try:
-            size = len(source)
-        except Exception:
-            pass
-
-        if size < 0:
-            size = lib.ZSTD_CONTENTSIZE_UNKNOWN
-
-        zresult = lib.ZSTD_CCtx_setPledgedSrcSize(self._cctx, size)
-        if lib.ZSTD_isError(zresult):
-            raise ZstdError('error setting source size: %s' %
-                            _zstd_error(zresult))
-
-        return CompressionReader(self, source, read_size)
-
-    def stream_writer(self, writer, size=-1,
-                 write_size=COMPRESSION_RECOMMENDED_OUTPUT_SIZE):
-
-        if not hasattr(writer, 'write'):
-            raise ValueError('must pass an object with a write() method')
-
-        lib.ZSTD_CCtx_reset(self._cctx)
-
-        if size < 0:
-            size = lib.ZSTD_CONTENTSIZE_UNKNOWN
-
-        return ZstdCompressionWriter(self, writer, size, write_size)
-
-    write_to = stream_writer
-
-    def read_to_iter(self, reader, size=-1,
-                     read_size=COMPRESSION_RECOMMENDED_INPUT_SIZE,
-                     write_size=COMPRESSION_RECOMMENDED_OUTPUT_SIZE):
-        if hasattr(reader, 'read'):
-            have_read = True
-        elif hasattr(reader, '__getitem__'):
-            have_read = False
-            buffer_offset = 0
-            size = len(reader)
-        else:
-            raise ValueError('must pass an object with a read() method or '
-                             'that conforms to the buffer protocol')
-
-        lib.ZSTD_CCtx_reset(self._cctx)
-
-        if size < 0:
-            size = lib.ZSTD_CONTENTSIZE_UNKNOWN
-
-        zresult = lib.ZSTD_CCtx_setPledgedSrcSize(self._cctx, size)
-        if lib.ZSTD_isError(zresult):
-            raise ZstdError('error setting source size: %s' %
-                            _zstd_error(zresult))
-
-        in_buffer = ffi.new('ZSTD_inBuffer *')
-        out_buffer = ffi.new('ZSTD_outBuffer *')
-
-        in_buffer.src = ffi.NULL
-        in_buffer.size = 0
-        in_buffer.pos = 0
-
-        dst_buffer = ffi.new('char[]', write_size)
-        out_buffer.dst = dst_buffer
-        out_buffer.size = write_size
-        out_buffer.pos = 0
-
-        while True:
-            # We should never have output data sitting around after a previous
-            # iteration.
-            assert out_buffer.pos == 0
-
-            # Collect input data.
-            if have_read:
-                read_result = reader.read(read_size)
-            else:
-                remaining = len(reader) - buffer_offset
-                slice_size = min(remaining, read_size)
-                read_result = reader[buffer_offset:buffer_offset + slice_size]
-                buffer_offset += slice_size
-
-            # No new input data. Break out of the read loop.
-            if not read_result:
-                break
-
-            # Feed all read data into the compressor and emit output until
-            # exhausted.
-            read_buffer = ffi.from_buffer(read_result)
-            in_buffer.src = read_buffer
-            in_buffer.size = len(read_buffer)
-            in_buffer.pos = 0
-
-            while in_buffer.pos < in_buffer.size:
-                zresult = lib.ZSTD_compress_generic(self._cctx, out_buffer, in_buffer,
-                                                    lib.ZSTD_e_continue)
-                if lib.ZSTD_isError(zresult):
-                    raise ZstdError('zstd compress error: %s' %
-                                    _zstd_error(zresult))
-
-                if out_buffer.pos:
-                    data = ffi.buffer(out_buffer.dst, out_buffer.pos)[:]
-                    out_buffer.pos = 0
-                    yield data
-
-            assert out_buffer.pos == 0
-
-            # And repeat the loop to collect more data.
-            continue
-
-        # If we get here, input is exhausted. End the stream and emit what
-        # remains.
-        while True:
-            assert out_buffer.pos == 0
-            zresult = lib.ZSTD_compress_generic(self._cctx,
-                                                out_buffer,
-                                                in_buffer,
-                                                lib.ZSTD_e_end)
-            if lib.ZSTD_isError(zresult):
-                raise ZstdError('error ending compression stream: %s' %
-                                _zstd_error(zresult))
-
-            if out_buffer.pos:
-                data = ffi.buffer(out_buffer.dst, out_buffer.pos)[:]
-                out_buffer.pos = 0
-                yield data
-
-            if zresult == 0:
-                break
-
-    read_from = read_to_iter
-
-    def frame_progression(self):
-        progression = lib.ZSTD_getFrameProgression(self._cctx)
-
-        return progression.ingested, progression.consumed, progression.produced
-
-
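
The ``ZstdCompressor`` API removed above is the same one exposed by the
public ``zstandard`` package. A minimal sketch of the one-shot and
iterator paths, assuming that package is installed (the payloads are
illustrative)::

   import io
   import zstandard as zstd

   cctx = zstd.ZstdCompressor(level=5, write_checksum=True)
   frame = cctx.compress(b'one-shot payload')  # whole frame in one call

   # read_to_iter() pulls from a read()-able (or buffer) source and
   # yields compressed chunks as they are produced.
   source = io.BytesIO(b'streaming payload ' * 1000)
   chunks = list(cctx.read_to_iter(source))
   ingested, consumed, produced = cctx.frame_progression()
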
-class FrameParameters(object):
-    def __init__(self, fparams):
-        self.content_size = fparams.frameContentSize
-        self.window_size = fparams.windowSize
-        self.dict_id = fparams.dictID
-        self.has_checksum = bool(fparams.checksumFlag)
-
-
-def frame_content_size(data):
-    data_buffer = ffi.from_buffer(data)
-
-    size = lib.ZSTD_getFrameContentSize(data_buffer, len(data_buffer))
-
-    if size == lib.ZSTD_CONTENTSIZE_ERROR:
-        raise ZstdError('error when determining content size')
-    elif size == lib.ZSTD_CONTENTSIZE_UNKNOWN:
-        return -1
-    else:
-        return size
-
-
-def frame_header_size(data):
-    data_buffer = ffi.from_buffer(data)
-
-    zresult = lib.ZSTD_frameHeaderSize(data_buffer, len(data_buffer))
-    if lib.ZSTD_isError(zresult):
-        raise ZstdError('could not determine frame header size: %s' %
-                        _zstd_error(zresult))
-
-    return zresult
-
-
-def get_frame_parameters(data):
-    params = ffi.new('ZSTD_frameHeader *')
-
-    data_buffer = ffi.from_buffer(data)
-    zresult = lib.ZSTD_getFrameHeader(params, data_buffer, len(data_buffer))
-    if lib.ZSTD_isError(zresult):
-        raise ZstdError('cannot get frame parameters: %s' %
-                        _zstd_error(zresult))
-
-    if zresult:
-        raise ZstdError('not enough data for frame parameters; need %d bytes' %
-                        zresult)
-
-    return FrameParameters(params[0])
-
-
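
The frame-inspection helpers above can be exercised directly; a small
sketch, again assuming the public ``zstandard`` package::

   import zstandard as zstd

   frame = zstd.ZstdCompressor(write_checksum=True).compress(b'abc')

   params = zstd.get_frame_parameters(frame)
   assert params.content_size == 3      # pledged by compress()
   assert params.has_checksum

   assert zstd.frame_content_size(frame) == 3
   assert zstd.frame_header_size(frame) <= len(frame)
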
-class ZstdCompressionDict(object):
-    def __init__(self, data, dict_type=DICT_TYPE_AUTO, k=0, d=0):
-        assert isinstance(data, bytes_type)
-        self._data = data
-        self.k = k
-        self.d = d
-
-        if dict_type not in (DICT_TYPE_AUTO, DICT_TYPE_RAWCONTENT,
-                             DICT_TYPE_FULLDICT):
-            raise ValueError('invalid dictionary load mode: %d; must use '
-                             'DICT_TYPE_* constants' % dict_type)
-
-        self._dict_type = dict_type
-        self._cdict = None
-
-    def __len__(self):
-        return len(self._data)
-
-    def dict_id(self):
-        return int_type(lib.ZDICT_getDictID(self._data, len(self._data)))
-
-    def as_bytes(self):
-        return self._data
-
-    def precompute_compress(self, level=0, compression_params=None):
-        if level and compression_params:
-            raise ValueError('must only specify one of level or '
-                             'compression_params')
-
-        if not level and not compression_params:
-            raise ValueError('must specify one of level or compression_params')
-
-        if level:
-            cparams = lib.ZSTD_getCParams(level, 0, len(self._data))
-        else:
-            cparams = ffi.new('ZSTD_compressionParameters *')[0]
-            cparams.chainLog = compression_params.chain_log
-            cparams.hashLog = compression_params.hash_log
-            cparams.searchLength = compression_params.min_match
-            cparams.searchLog = compression_params.search_log
-            cparams.strategy = compression_params.compression_strategy
-            cparams.targetLength = compression_params.target_length
-            cparams.windowLog = compression_params.window_log
-
-        cdict = lib.ZSTD_createCDict_advanced(self._data, len(self._data),
-                                              lib.ZSTD_dlm_byRef,
-                                              self._dict_type,
-                                              cparams,
-                                              lib.ZSTD_defaultCMem)
-        if cdict == ffi.NULL:
-            raise ZstdError('unable to precompute dictionary')
-
-        self._cdict = ffi.gc(cdict, lib.ZSTD_freeCDict,
-                             size=lib.ZSTD_sizeof_CDict(cdict))
-
-    @property
-    def _ddict(self):
-        ddict = lib.ZSTD_createDDict_advanced(self._data, len(self._data),
-                                              lib.ZSTD_dlm_byRef,
-                                              self._dict_type,
-                                              lib.ZSTD_defaultCMem)
-
-        if ddict == ffi.NULL:
-            raise ZstdError('could not create decompression dict')
-
-        ddict = ffi.gc(ddict, lib.ZSTD_freeDDict,
-                       size=lib.ZSTD_sizeof_DDict(ddict))
-        self.__dict__['_ddict'] = ddict
-
-        return ddict
-
-def train_dictionary(dict_size, samples, k=0, d=0, notifications=0, dict_id=0,
-                     level=0, steps=0, threads=0):
-    if not isinstance(samples, list):
-        raise TypeError('samples must be a list')
-
-    if threads < 0:
-        threads = _cpu_count()
-
-    total_size = sum(map(len, samples))
-
-    samples_buffer = new_nonzero('char[]', total_size)
-    sample_sizes = new_nonzero('size_t[]', len(samples))
-
-    offset = 0
-    for i, sample in enumerate(samples):
-        if not isinstance(sample, bytes_type):
-            raise ValueError('samples must be bytes')
-
-        l = len(sample)
-        ffi.memmove(samples_buffer + offset, sample, l)
-        offset += l
-        sample_sizes[i] = l
-
-    dict_data = new_nonzero('char[]', dict_size)
-
-    dparams = ffi.new('ZDICT_cover_params_t *')[0]
-    dparams.k = k
-    dparams.d = d
-    dparams.steps = steps
-    dparams.nbThreads = threads
-    dparams.zParams.notificationLevel = notifications
-    dparams.zParams.dictID = dict_id
-    dparams.zParams.compressionLevel = level
-
-    if (not dparams.k and not dparams.d and not dparams.steps
-        and not dparams.nbThreads and not dparams.zParams.notificationLevel
-        and not dparams.zParams.dictID
-        and not dparams.zParams.compressionLevel):
-        zresult = lib.ZDICT_trainFromBuffer(
-            ffi.addressof(dict_data), dict_size,
-            ffi.addressof(samples_buffer),
-            ffi.addressof(sample_sizes, 0), len(samples))
-    elif dparams.steps or dparams.nbThreads:
-        zresult = lib.ZDICT_optimizeTrainFromBuffer_cover(
-            ffi.addressof(dict_data), dict_size,
-            ffi.addressof(samples_buffer),
-            ffi.addressof(sample_sizes, 0), len(samples),
-            ffi.addressof(dparams))
-    else:
-        zresult = lib.ZDICT_trainFromBuffer_cover(
-            ffi.addressof(dict_data), dict_size,
-            ffi.addressof(samples_buffer),
-            ffi.addressof(sample_sizes, 0), len(samples),
-            dparams)
-
-    if lib.ZDICT_isError(zresult):
-        msg = ffi.string(lib.ZDICT_getErrorName(zresult)).decode('utf-8')
-        raise ZstdError('cannot train dict: %s' % msg)
-
-    return ZstdCompressionDict(ffi.buffer(dict_data, zresult)[:],
-                               dict_type=DICT_TYPE_FULLDICT,
-                               k=dparams.k, d=dparams.d)
-
-
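
A sketch of the dictionary flow defined above, using a prebuilt
raw-content dictionary to sidestep the corpus-size requirements of
``train_dictionary()`` (the sample bytes are illustrative)::

   import zstandard as zstd

   # Both sides must be handed the same dictionary out-of-band.
   dict_data = zstd.ZstdCompressionDict(b'common preamble ' * 64,
                                        dict_type=zstd.DICT_TYPE_RAWCONTENT)
   dict_data.precompute_compress(level=3)   # optional: reuse one CDict

   cctx = zstd.ZstdCompressor(dict_data=dict_data)
   dctx = zstd.ZstdDecompressor(dict_data=dict_data)
   frame = cctx.compress(b'common preamble plus new data')
   assert dctx.decompress(frame) == b'common preamble plus new data'
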
-class ZstdDecompressionObj(object):
-    def __init__(self, decompressor, write_size):
-        self._decompressor = decompressor
-        self._write_size = write_size
-        self._finished = False
-
-    def decompress(self, data):
-        if self._finished:
-            raise ZstdError('cannot use a decompressobj multiple times')
-
-        in_buffer = ffi.new('ZSTD_inBuffer *')
-        out_buffer = ffi.new('ZSTD_outBuffer *')
-
-        data_buffer = ffi.from_buffer(data)
-        in_buffer.src = data_buffer
-        in_buffer.size = len(data_buffer)
-        in_buffer.pos = 0
-
-        dst_buffer = ffi.new('char[]', self._write_size)
-        out_buffer.dst = dst_buffer
-        out_buffer.size = len(dst_buffer)
-        out_buffer.pos = 0
-
-        chunks = []
-
-        while True:
-            zresult = lib.ZSTD_decompress_generic(self._decompressor._dctx,
-                                                  out_buffer, in_buffer)
-            if lib.ZSTD_isError(zresult):
-                raise ZstdError('zstd decompressor error: %s' %
-                                _zstd_error(zresult))
-
-            if zresult == 0:
-                self._finished = True
-                self._decompressor = None
-
-            if out_buffer.pos:
-                chunks.append(ffi.buffer(out_buffer.dst, out_buffer.pos)[:])
-
-            if (zresult == 0 or
-                    (in_buffer.pos == in_buffer.size and out_buffer.pos == 0)):
-                break
-
-            out_buffer.pos = 0
-
-        return b''.join(chunks)
-
-
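
``ZstdDecompressionObj`` mimics the incremental ``zlib`` interface; a
short sketch feeding one frame in arbitrary slices (the slice size is
illustrative)::

   import zstandard as zstd

   frame = zstd.ZstdCompressor().compress(b'incremental payload')

   dobj = zstd.ZstdDecompressor().decompressobj()
   out = b''.join(dobj.decompress(frame[i:i + 7])
                  for i in range(0, len(frame), 7))
   assert out == b'incremental payload'
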
-class DecompressionReader(object):
-    def __init__(self, decompressor, source, read_size):
-        self._decompressor = decompressor
-        self._source = source
-        self._read_size = read_size
-        self._entered = False
-        self._closed = False
-        self._bytes_decompressed = 0
-        self._finished_input = False
-        self._finished_output = False
-        self._in_buffer = ffi.new('ZSTD_inBuffer *')
-        # Holds a ref to self._in_buffer.src.
-        self._source_buffer = None
-
-    def __enter__(self):
-        if self._entered:
-            raise ValueError('cannot __enter__ multiple times')
-
-        self._entered = True
-        return self
-
-    def __exit__(self, exc_type, exc_value, exc_tb):
-        self._entered = False
-        self._closed = True
-        self._source = None
-        self._decompressor = None
-
-        return False
-
-    def readable(self):
-        return True
-
-    def writable(self):
-        return False
-
-    def seekable(self):
-        return True
-
-    def readline(self):
-        raise NotImplementedError()
-
-    def readlines(self):
-        raise NotImplementedError()
-
-    def write(self, data):
-        raise io.UnsupportedOperation()
-
-    def writelines(self, lines):
-        raise io.UnsupportedOperation()
-
-    def isatty(self):
-        return False
-
-    def flush(self):
-        return None
-
-    def close(self):
-        self._closed = True
-        return None
-
-    @property
-    def closed(self):
-        return self._closed
-
-    def tell(self):
-        return self._bytes_decompressed
-
-    def readall(self):
-        raise NotImplementedError()
-
-    def __iter__(self):
-        raise NotImplementedError()
-
-    def __next__(self):
-        raise NotImplementedError()
-
-    next = __next__
-
-    def read(self, size):
-        if self._closed:
-            raise ValueError('stream is closed')
-
-        if self._finished_output:
-            return b''
-
-        if size < 1:
-            raise ValueError('cannot read a negative or zero amount')
-
-        dst_buffer = ffi.new('char[]', size)
-        out_buffer = ffi.new('ZSTD_outBuffer *')
-        out_buffer.dst = dst_buffer
-        out_buffer.size = size
-        out_buffer.pos = 0
-
-        def decompress():
-            zresult = lib.ZSTD_decompress_generic(self._decompressor._dctx,
-                                                  out_buffer, self._in_buffer)
-
-            if self._in_buffer.pos == self._in_buffer.size:
-                self._in_buffer.src = ffi.NULL
-                self._in_buffer.pos = 0
-                self._in_buffer.size = 0
-                self._source_buffer = None
-
-                if not hasattr(self._source, 'read'):
-                    self._finished_input = True
-
-            if lib.ZSTD_isError(zresult):
-                raise ZstdError('zstd decompress error: %s' %
-                                _zstd_error(zresult))
-            elif zresult == 0:
-                self._finished_output = True
-
-            if out_buffer.pos and out_buffer.pos == out_buffer.size:
-                self._bytes_decompressed += out_buffer.size
-                return ffi.buffer(out_buffer.dst, out_buffer.pos)[:]
-
-        def get_input():
-            if self._finished_input:
-                return
-
-            if hasattr(self._source, 'read'):
-                data = self._source.read(self._read_size)
-
-                if not data:
-                    self._finished_input = True
-                    return
-
-                self._source_buffer = ffi.from_buffer(data)
-                self._in_buffer.src = self._source_buffer
-                self._in_buffer.size = len(self._source_buffer)
-                self._in_buffer.pos = 0
-            else:
-                self._source_buffer = ffi.from_buffer(self._source)
-                self._in_buffer.src = self._source_buffer
-                self._in_buffer.size = len(self._source_buffer)
-                self._in_buffer.pos = 0
-
-        get_input()
-        result = decompress()
-        if result:
-            return result
-
-        while not self._finished_input:
-            get_input()
-            result = decompress()
-            if result:
-                return result
-
-        self._bytes_decompressed += out_buffer.pos
-        return ffi.buffer(out_buffer.dst, out_buffer.pos)[:]
-
-    def seek(self, pos, whence=os.SEEK_SET):
-        if self._closed:
-            raise ValueError('stream is closed')
-
-        read_amount = 0
-
-        if whence == os.SEEK_SET:
-            if pos < 0:
-                raise ValueError('cannot seek to negative position with SEEK_SET')
-
-            if pos < self._bytes_decompressed:
-                raise ValueError('cannot seek zstd decompression stream '
-                                 'backwards')
-
-            read_amount = pos - self._bytes_decompressed
-
-        elif whence == os.SEEK_CUR:
-            if pos < 0:
-                raise ValueError('cannot seek zstd decompression stream '
-                                 'backwards')
-
-            read_amount = pos
-        elif whence == os.SEEK_END:
-            raise ValueError('zstd decompression streams cannot be seeked '
-                             'with SEEK_END')
-
-        while read_amount:
-            result = self.read(min(read_amount,
-                                   DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE))
-
-            if not result:
-                break
-
-            read_amount -= len(result)
-
-        return self._bytes_decompressed
-
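
``DecompressionReader`` gives a file-object view of a frame; a sketch of
forward-only reads and seeks (per the ``seek()`` restrictions above),
assuming the public ``zstandard`` package::

   import io
   import zstandard as zstd

   frame = zstd.ZstdCompressor().compress(b'hello world')
   dctx = zstd.ZstdDecompressor()

   with dctx.stream_reader(io.BytesIO(frame)) as reader:
       assert reader.read(5) == b'hello'
       reader.seek(6)              # forward only; SEEK_END is rejected
       assert reader.read(5) == b'world'
       assert reader.tell() == 11
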
-class ZstdDecompressionWriter(object):
-    def __init__(self, decompressor, writer, write_size):
-        self._decompressor = decompressor
-        self._writer = writer
-        self._write_size = write_size
-        self._entered = False
-
-    def __enter__(self):
-        if self._entered:
-            raise ZstdError('cannot __enter__ multiple times')
-
-        self._decompressor._ensure_dctx()
-        self._entered = True
-
-        return self
-
-    def __exit__(self, exc_type, exc_value, exc_tb):
-        self._entered = False
-
-    def memory_size(self):
-        if not self._decompressor._dctx:
-            raise ZstdError('cannot determine size of inactive decompressor; '
-                            'call when context manager is active')
-
-        return lib.ZSTD_sizeof_DCtx(self._decompressor._dctx)
-
-    def write(self, data):
-        if not self._entered:
-            raise ZstdError('write must be called from an active context manager')
-
-        total_write = 0
-
-        in_buffer = ffi.new('ZSTD_inBuffer *')
-        out_buffer = ffi.new('ZSTD_outBuffer *')
-
-        data_buffer = ffi.from_buffer(data)
-        in_buffer.src = data_buffer
-        in_buffer.size = len(data_buffer)
-        in_buffer.pos = 0
-
-        dst_buffer = ffi.new('char[]', self._write_size)
-        out_buffer.dst = dst_buffer
-        out_buffer.size = len(dst_buffer)
-        out_buffer.pos = 0
-
-        dctx = self._decompressor._dctx
-
-        while in_buffer.pos < in_buffer.size:
-            zresult = lib.ZSTD_decompress_generic(dctx, out_buffer, in_buffer)
-            if lib.ZSTD_isError(zresult):
-                raise ZstdError('zstd decompress error: %s' %
-                                _zstd_error(zresult))
-
-            if out_buffer.pos:
-                self._writer.write(ffi.buffer(out_buffer.dst, out_buffer.pos)[:])
-                total_write += out_buffer.pos
-                out_buffer.pos = 0
-
-        return total_write
-
-
-class ZstdDecompressor(object):
-    def __init__(self, dict_data=None, max_window_size=0, format=FORMAT_ZSTD1):
-        self._dict_data = dict_data
-        self._max_window_size = max_window_size
-        self._format = format
-
-        dctx = lib.ZSTD_createDCtx()
-        if dctx == ffi.NULL:
-            raise MemoryError()
-
-        self._dctx = dctx
-
-        # Defer setting up garbage collection until full state is loaded so
-        # the memory size is more accurate.
-        try:
-            self._ensure_dctx()
-        finally:
-            self._dctx = ffi.gc(dctx, lib.ZSTD_freeDCtx,
-                                size=lib.ZSTD_sizeof_DCtx(dctx))
-
-    def memory_size(self):
-        return lib.ZSTD_sizeof_DCtx(self._dctx)
-
-    def decompress(self, data, max_output_size=0):
-        self._ensure_dctx()
-
-        data_buffer = ffi.from_buffer(data)
-
-        output_size = lib.ZSTD_getFrameContentSize(data_buffer, len(data_buffer))
-
-        if output_size == lib.ZSTD_CONTENTSIZE_ERROR:
-            raise ZstdError('error determining content size from frame header')
-        elif output_size == 0:
-            return b''
-        elif output_size == lib.ZSTD_CONTENTSIZE_UNKNOWN:
-            if not max_output_size:
-                raise ZstdError('could not determine content size in frame header')
-
-            result_buffer = ffi.new('char[]', max_output_size)
-            result_size = max_output_size
-            output_size = 0
-        else:
-            result_buffer = ffi.new('char[]', output_size)
-            result_size = output_size
-
-        out_buffer = ffi.new('ZSTD_outBuffer *')
-        out_buffer.dst = result_buffer
-        out_buffer.size = result_size
-        out_buffer.pos = 0
-
-        in_buffer = ffi.new('ZSTD_inBuffer *')
-        in_buffer.src = data_buffer
-        in_buffer.size = len(data_buffer)
-        in_buffer.pos = 0
-
-        zresult = lib.ZSTD_decompress_generic(self._dctx, out_buffer, in_buffer)
-        if lib.ZSTD_isError(zresult):
-            raise ZstdError('decompression error: %s' %
-                            _zstd_error(zresult))
-        elif zresult:
-            raise ZstdError('decompression error: did not decompress full frame')
-        elif output_size and out_buffer.pos != output_size:
-            raise ZstdError('decompression error: decompressed %d bytes; expected %d' %
-                            (out_buffer.pos, output_size))
-
-        return ffi.buffer(result_buffer, out_buffer.pos)[:]
-
-    def stream_reader(self, source, read_size=DECOMPRESSION_RECOMMENDED_INPUT_SIZE):
-        self._ensure_dctx()
-        return DecompressionReader(self, source, read_size)
-
-    def decompressobj(self, write_size=DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE):
-        if write_size < 1:
-            raise ValueError('write_size must be positive')
-
-        self._ensure_dctx()
-        return ZstdDecompressionObj(self, write_size=write_size)
-
-    def read_to_iter(self, reader, read_size=DECOMPRESSION_RECOMMENDED_INPUT_SIZE,
-                     write_size=DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE,
-                     skip_bytes=0):
-        if skip_bytes >= read_size:
-            raise ValueError('skip_bytes must be smaller than read_size')
-
-        if hasattr(reader, 'read'):
-            have_read = True
-        elif hasattr(reader, '__getitem__'):
-            have_read = False
-            buffer_offset = 0
-            size = len(reader)
-        else:
-            raise ValueError('must pass an object with a read() method or '
-                             'that conforms to the buffer protocol')
-
-        if skip_bytes:
-            if have_read:
-                reader.read(skip_bytes)
-            else:
-                if skip_bytes > size:
-                    raise ValueError('skip_bytes larger than first input chunk')
-
-                buffer_offset = skip_bytes
-
-        self._ensure_dctx()
-
-        in_buffer = ffi.new('ZSTD_inBuffer *')
-        out_buffer = ffi.new('ZSTD_outBuffer *')
-
-        dst_buffer = ffi.new('char[]', write_size)
-        out_buffer.dst = dst_buffer
-        out_buffer.size = len(dst_buffer)
-        out_buffer.pos = 0
-
-        while True:
-            assert out_buffer.pos == 0
-
-            if have_read:
-                read_result = reader.read(read_size)
-            else:
-                remaining = size - buffer_offset
-                slice_size = min(remaining, read_size)
-                read_result = reader[buffer_offset:buffer_offset + slice_size]
-                buffer_offset += slice_size
-
-            # No new input. Break out of read loop.
-            if not read_result:
-                break
-
-            # Feed all read data into decompressor and emit output until
-            # exhausted.
-            read_buffer = ffi.from_buffer(read_result)
-            in_buffer.src = read_buffer
-            in_buffer.size = len(read_buffer)
-            in_buffer.pos = 0
-
-            while in_buffer.pos < in_buffer.size:
-                assert out_buffer.pos == 0
-
-                zresult = lib.ZSTD_decompress_generic(self._dctx, out_buffer, in_buffer)
-                if lib.ZSTD_isError(zresult):
-                    raise ZstdError('zstd decompress error: %s' %
-                                    _zstd_error(zresult))
-
-                if out_buffer.pos:
-                    data = ffi.buffer(out_buffer.dst, out_buffer.pos)[:]
-                    out_buffer.pos = 0
-                    yield data
-
-                if zresult == 0:
-                    return
-
-            # Repeat loop to collect more input data.
-            continue
-
-        # If we get here, input is exhausted.
-
-    read_from = read_to_iter
-
-    def stream_writer(self, writer, write_size=DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE):
-        if not hasattr(writer, 'write'):
-            raise ValueError('must pass an object with a write() method')
-
-        return ZstdDecompressionWriter(self, writer, write_size)
-
-    write_to = stream_writer
-
-    def copy_stream(self, ifh, ofh,
-                    read_size=DECOMPRESSION_RECOMMENDED_INPUT_SIZE,
-                    write_size=DECOMPRESSION_RECOMMENDED_OUTPUT_SIZE):
-        if not hasattr(ifh, 'read'):
-            raise ValueError('first argument must have a read() method')
-        if not hasattr(ofh, 'write'):
-            raise ValueError('second argument must have a write() method')
-
-        self._ensure_dctx()
-
-        in_buffer = ffi.new('ZSTD_inBuffer *')
-        out_buffer = ffi.new('ZSTD_outBuffer *')
-
-        dst_buffer = ffi.new('char[]', write_size)
-        out_buffer.dst = dst_buffer
-        out_buffer.size = write_size
-        out_buffer.pos = 0
-
-        total_read, total_write = 0, 0
-
-        # Read all available input.
-        while True:
-            data = ifh.read(read_size)
-            if not data:
-                break
-
-            data_buffer = ffi.from_buffer(data)
-            total_read += len(data_buffer)
-            in_buffer.src = data_buffer
-            in_buffer.size = len(data_buffer)
-            in_buffer.pos = 0
-
-            # Flush all read data to output.
-            while in_buffer.pos < in_buffer.size:
-                zresult = lib.ZSTD_decompress_generic(self._dctx, out_buffer, in_buffer)
-                if lib.ZSTD_isError(zresult):
-                    raise ZstdError('zstd decompressor error: %s' %
-                                    _zstd_error(zresult))
-
-                if out_buffer.pos:
-                    ofh.write(ffi.buffer(out_buffer.dst, out_buffer.pos))
-                    total_write += out_buffer.pos
-                    out_buffer.pos = 0
-
-            # Continue loop to keep reading.
-
-        return total_read, total_write
-
-    def decompress_content_dict_chain(self, frames):
-        if not isinstance(frames, list):
-            raise TypeError('argument must be a list')
-
-        if not frames:
-            raise ValueError('empty input chain')
-
-        # First chunk should not be using a dictionary. We handle it specially.
-        chunk = frames[0]
-        if not isinstance(chunk, bytes_type):
-            raise ValueError('chunk 0 must be bytes')
-
-        # All chunks should be zstd frames and should have content size set.
-        chunk_buffer = ffi.from_buffer(chunk)
-        params = ffi.new('ZSTD_frameHeader *')
-        zresult = lib.ZSTD_getFrameHeader(params, chunk_buffer, len(chunk_buffer))
-        if lib.ZSTD_isError(zresult):
-            raise ValueError('chunk 0 is not a valid zstd frame')
-        elif zresult:
-            raise ValueError('chunk 0 is too small to contain a zstd frame')
-
-        if params.frameContentSize == lib.ZSTD_CONTENTSIZE_UNKNOWN:
-            raise ValueError('chunk 0 missing content size in frame')
-
-        self._ensure_dctx(load_dict=False)
-
-        last_buffer = ffi.new('char[]', params.frameContentSize)
-
-        out_buffer = ffi.new('ZSTD_outBuffer *')
-        out_buffer.dst = last_buffer
-        out_buffer.size = len(last_buffer)
-        out_buffer.pos = 0
-
-        in_buffer = ffi.new('ZSTD_inBuffer *')
-        in_buffer.src = chunk_buffer
-        in_buffer.size = len(chunk_buffer)
-        in_buffer.pos = 0
-
-        zresult = lib.ZSTD_decompress_generic(self._dctx, out_buffer, in_buffer)
-        if lib.ZSTD_isError(zresult):
-            raise ZstdError('could not decompress chunk 0: %s' %
-                            _zstd_error(zresult))
-        elif zresult:
-            raise ZstdError('chunk 0 did not decompress full frame')
-
-        # Special case of chain length of 1
-        if len(frames) == 1:
-            return ffi.buffer(last_buffer, len(last_buffer))[:]
-
-        i = 1
-        while i < len(frames):
-            chunk = frames[i]
-            if not isinstance(chunk, bytes_type):
-                raise ValueError('chunk %d must be bytes' % i)
-
-            chunk_buffer = ffi.from_buffer(chunk)
-            zresult = lib.ZSTD_getFrameHeader(params, chunk_buffer, len(chunk_buffer))
-            if lib.ZSTD_isError(zresult):
-                raise ValueError('chunk %d is not a valid zstd frame' % i)
-            elif zresult:
-                raise ValueError('chunk %d is too small to contain a zstd frame' % i)
-
-            if params.frameContentSize == lib.ZSTD_CONTENTSIZE_UNKNOWN:
-                raise ValueError('chunk %d missing content size in frame' % i)
-
-            dest_buffer = ffi.new('char[]', params.frameContentSize)
-
-            out_buffer.dst = dest_buffer
-            out_buffer.size = len(dest_buffer)
-            out_buffer.pos = 0
-
-            in_buffer.src = chunk_buffer
-            in_buffer.size = len(chunk_buffer)
-            in_buffer.pos = 0
-
-            zresult = lib.ZSTD_decompress_generic(self._dctx, out_buffer, in_buffer)
-            if lib.ZSTD_isError(zresult):
-                raise ZstdError('could not decompress chunk %d: %s' %
-                                (i, _zstd_error(zresult)))
-            elif zresult:
-                raise ZstdError('chunk %d did not decompress full frame' % i)
-
-            last_buffer = dest_buffer
-            i += 1
-
-        return ffi.buffer(last_buffer, len(last_buffer))[:]
-
-    def _ensure_dctx(self, load_dict=True):
-        lib.ZSTD_DCtx_reset(self._dctx)
-
-        if self._max_window_size:
-            zresult = lib.ZSTD_DCtx_setMaxWindowSize(self._dctx,
-                                                     self._max_window_size)
-            if lib.ZSTD_isError(zresult):
-                raise ZstdError('unable to set max window size: %s' %
-                                _zstd_error(zresult))
-
-        zresult = lib.ZSTD_DCtx_setFormat(self._dctx, self._format)
-        if lib.ZSTD_isError(zresult):
-            raise ZstdError('unable to set decoding format: %s' %
-                            _zstd_error(zresult))
-
-        if self._dict_data and load_dict:
-            zresult = lib.ZSTD_DCtx_refDDict(self._dctx, self._dict_data._ddict)
-            if lib.ZSTD_isError(zresult):
-                raise ZstdError('unable to reference prepared dictionary: %s' %
-                                _zstd_error(zresult))
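
Taken together, the removed classes support whole-stream copies in both
directions; a round-trip sketch, assuming the public ``zstandard``
package (the payload is illustrative)::

   import io
   import zstandard as zstd

   src = io.BytesIO(b'payload ' * 10000)
   compressed = io.BytesIO()
   read_count, write_count = zstd.ZstdCompressor().copy_stream(src, compressed)

   compressed.seek(0)
   restored = io.BytesIO()
   zstd.ZstdDecompressor().copy_stream(compressed, restored)
   assert restored.getvalue() == src.getvalue()
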
--- a/contrib/python3-whitelist	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/python3-whitelist	Wed Apr 17 13:41:18 2019 -0400
@@ -5,6 +5,7 @@
 test-absorb-rename.t
 test-absorb-strip.t
 test-absorb.t
+test-acl.t
 test-add.t
 test-addremove-similar.t
 test-addremove.t
@@ -14,6 +15,7 @@
 test-ancestor.py
 test-annotate.py
 test-annotate.t
+test-arbitraryfilectx.t
 test-archive-symlinks.t
 test-archive.t
 test-atomictempfile.py
@@ -25,6 +27,7 @@
 test-bad-extension.t
 test-bad-pull.t
 test-basic.t
+test-batching.py
 test-bdiff.py
 test-bheads.t
 test-bisect.t
@@ -42,6 +45,7 @@
 test-branch-option.t
 test-branch-tag-confict.t
 test-branches.t
+test-bugzilla.t
 test-bundle-phases.t
 test-bundle-r.t
 test-bundle-type.t
@@ -54,14 +58,15 @@
 test-bundle2-remote-changegroup.t
 test-cache-abuse.t
 test-cappedreader.py
+test-casecollision-merge.t
 test-casecollision.t
+test-casefolding.t
 test-cat.t
 test-cbor.py
 test-censor.t
 test-changelog-exec.t
 test-check-code.t
 test-check-commit.t
-test-check-config.py
 test-check-config.t
 test-check-execute.t
 test-check-help.t
@@ -83,6 +88,7 @@
 test-close-head.t
 test-commandserver.t
 test-commit-amend.t
+test-commit-interactive-curses.t
 test-commit-interactive.t
 test-commit-multiple.t
 test-commit-unresolved.t
@@ -111,11 +117,16 @@
 test-convert-cvsnt-mergepoints.t
 test-convert-datesort.t
 test-convert-filemap.t
+test-convert-git.t
 test-convert-hg-sink.t
 test-convert-hg-source.t
 test-convert-hg-startrev.t
+test-convert-mtn.t
 test-convert-splicemap.t
+test-convert-svn-sink.t
 test-convert-tagsbranch-topology.t
+test-convert.t
+test-copies.t
 test-copy-move-merge.t
 test-copy.t
 test-copytrace-heuristics.t
@@ -127,6 +138,7 @@
 test-debugindexdot.t
 test-debugrename.t
 test-default-push.t
+test-demandimport.py
 test-diff-antipatience.t
 test-diff-binary-file.t
 test-diff-change.t
@@ -149,6 +161,7 @@
 test-dirstate-race.t
 test-dirstate.t
 test-dispatch.py
+test-dispatch.t
 test-doctest.py
 test-double-merge.t
 test-drawdag.t
@@ -159,6 +172,7 @@
 test-empty-group.t
 test-empty.t
 test-encode.t
+test-encoding-align.t
 test-encoding-func.py
 test-encoding-textwrap.t
 test-encoding.t
@@ -198,6 +212,7 @@
 test-extdata.t
 test-extdiff.t
 test-extension-timing.t
+test-extension.t
 test-extensions-afterloaded.t
 test-extensions-wrapfunction.py
 test-extra-filelog-entry.t
@@ -217,6 +232,7 @@
 test-fileset.t
 test-fix-topology.t
 test-fix.t
+test-flagprocessor.t
 test-flags.t
 test-fncache.t
 test-gendoc-da.t
@@ -235,6 +251,7 @@
 test-generaldelta.t
 test-getbundle.t
 test-git-export.t
+test-githelp.t
 test-globalopts.t
 test-glog-beautifygraph.t
 test-glog-topological.t
@@ -251,17 +268,24 @@
 test-hgk.t
 test-hgrc.t
 test-hgweb-annotate-whitespace.t
+test-hgweb-auth.py
 test-hgweb-bundle.t
+test-hgweb-commands.t
 test-hgweb-csp.t
 test-hgweb-descend-empties.t
 test-hgweb-diffs.t
 test-hgweb-empty.t
 test-hgweb-filelog.t
+test-hgweb-json.t
+test-hgweb-no-path-info.t
+test-hgweb-no-request-uri.t
 test-hgweb-non-interactive.t
 test-hgweb-raw.t
 test-hgweb-removed.t
+test-hgweb-symrev.t
 test-hgweb.t
 test-hgwebdir-paths.py
+test-hgwebdir.t
 test-hgwebdirsym.t
 test-histedit-arguments.t
 test-histedit-base.t
@@ -271,6 +295,7 @@
 test-histedit-edit.t
 test-histedit-fold-non-commute.t
 test-histedit-fold.t
+test-histedit-merge-tools.t
 test-histedit-no-backup.t
 test-histedit-no-change.t
 test-histedit-non-commute-abort.t
@@ -278,11 +303,17 @@
 test-histedit-obsolete.t
 test-histedit-outgoing.t
 test-histedit-templates.t
+test-http-api-httpv2.t
+test-http-api.t
+test-http-bad-server.t
 test-http-branchmap.t
 test-http-bundle1.t
 test-http-clone-r.t
 test-http-permissions.t
+test-http-protocol.t
+test-http-proxy.t
 test-http.t
+test-https.t
 test-hybridencode.py
 test-i18n.t
 test-identify.t
@@ -290,6 +321,7 @@
 test-import-bypass.t
 test-import-context.t
 test-import-eol.t
+test-import-git.t
 test-import-merge.t
 test-import-unknown.t
 test-import.t
@@ -300,6 +332,7 @@
 test-infinitepush.t
 test-inherit-mode.t
 test-init.t
+test-install.t
 test-issue1089.t
 test-issue1102.t
 test-issue1175.t
@@ -335,11 +368,14 @@
 test-lfs-bundle.t
 test-lfs-largefiles.t
 test-lfs-pointer.py
+test-lfs-serve.t
+test-lfs-test-server.t
 test-lfs.t
 test-linelog.py
 test-linerange.py
 test-locate.t
 test-lock-badness.t
+test-lock.py
 test-log-exthook.t
 test-log-linerange.t
 test-log.t
@@ -381,11 +417,14 @@
 test-merge9.t
 test-minifileset.py
 test-minirst.py
+test-missing-capability.t
+test-mq-eol.t
 test-mq-git.t
 test-mq-guards.t
 test-mq-header-date.t
 test-mq-header-from.t
 test-mq-merge.t
+test-mq-missingfiles.t
 test-mq-pull-from-bundle.t
 test-mq-qclone-http.t
 test-mq-qdelete.t
@@ -393,6 +432,7 @@
 test-mq-qfold.t
 test-mq-qgoto.t
 test-mq-qimport-fail-cleanup.t
+test-mq-qimport.t
 test-mq-qnew.t
 test-mq-qpush-exact.t
 test-mq-qpush-fail.t
@@ -403,6 +443,7 @@
 test-mq-qrename.t
 test-mq-qsave.t
 test-mq-safety.t
+test-mq-subrepo-svn.t
 test-mq-subrepo.t
 test-mq-symlinks.t
 test-mq.t
@@ -438,8 +479,10 @@
 test-narrow.t
 test-nested-repo.t
 test-newbranch.t
+test-newcgi.t
 test-newercgi.t
 test-nointerrupt.t
+test-notify-changegroup.t
 test-obshistory.t
 test-obsmarker-template.t
 test-obsmarkers-effectflag.t
@@ -451,11 +494,13 @@
 test-obsolete-divergent.t
 test-obsolete-tag-cache.t
 test-obsolete.t
+test-oldcgi.t
 test-origbackup-conflict.t
 test-pager-legacy.t
 test-pager.t
 test-parents.t
 test-parse-date.t
+test-parseindex.t
 test-parseindex2.py
 test-patch-offset.t
 test-patch.t
@@ -468,12 +513,15 @@
 test-pathencode.py
 test-pending.t
 test-permissions.t
+test-phabricator.t
+test-phase-archived.t
 test-phases-exchange.t
 test-phases.t
 test-profile.t
 test-progress.t
 test-propertycache.py
 test-pull-branch.t
+test-pull-bundle.t
 test-pull-http.t
 test-pull-permission.t
 test-pull-pull-corruption.t
@@ -557,16 +605,23 @@
 test-remotefilelog-cacheprocess.t
 test-remotefilelog-clone-tree.t
 test-remotefilelog-clone.t
+test-remotefilelog-corrupt-cache.t
+test-remotefilelog-datapack.py
+test-remotefilelog-gc.t
 test-remotefilelog-gcrepack.t
+test-remotefilelog-histpack.py
 test-remotefilelog-http.t
 test-remotefilelog-keepset.t
+test-remotefilelog-linknodes.t
 test-remotefilelog-local.t
 test-remotefilelog-log.t
 test-remotefilelog-partial-shallow.t
 test-remotefilelog-permissions.t
-test-remotefilelog-permisssions.t
 test-remotefilelog-prefetch.t
 test-remotefilelog-pull-noshallow.t
+test-remotefilelog-push-pull.t
+test-remotefilelog-repack-fast.t
+test-remotefilelog-repack.t
 test-remotefilelog-share.t
 test-remotefilelog-sparse.t
 test-remotefilelog-tags.t
@@ -597,12 +652,15 @@
 test-revset-dirstate-parents.t
 test-revset-legacy-lookup.t
 test-revset-outgoing.t
+test-revset.t
+test-revset2.t
 test-rollback.t
 test-run-tests.py
 test-run-tests.t
 test-rust-ancestor.py
 test-schemes.t
 test-serve.t
+test-server-view.t
 test-setdiscovery.t
 test-share.t
 test-shelve.t
@@ -631,6 +689,7 @@
 test-ssh.t
 test-sshserver.py
 test-stack.t
+test-static-http.t
 test-status-color.t
 test-status-inprocess.py
 test-status-rev.t
@@ -642,10 +701,12 @@
 test-strip-cross.t
 test-strip.t
 test-subrepo-deep-nested-change.t
+test-subrepo-git.t
 test-subrepo-missing.t
 test-subrepo-paths.t
 test-subrepo-recursion.t
 test-subrepo-relative-path.t
+test-subrepo-svn.t
 test-subrepo.t
 test-symlink-os-yes-fs-no.py
 test-symlink-placeholder.t
@@ -658,7 +719,10 @@
 test-template-map.t
 test-tools.t
 test-transplant.t
+test-treediscovery-legacy.t
+test-treediscovery.t
 test-treemanifest.t
+test-trusted.py
 test-ui-color.py
 test-ui-config.py
 test-ui-verbosity.py
@@ -669,6 +733,7 @@
 test-unionrepo.t
 test-unrelated-pull.t
 test-up-local-change.t
+test-update-atomic.t
 test-update-branches.t
 test-update-dest.t
 test-update-issue1456.t
@@ -685,19 +750,26 @@
 test-walkrepo.py
 test-websub.t
 test-win32text.t
+test-wireproto-caching.t
 test-wireproto-clientreactor.py
 test-wireproto-command-branchmap.t
+test-wireproto-command-capabilities.t
 test-wireproto-command-changesetdata.t
 test-wireproto-command-filedata.t
 test-wireproto-command-filesdata.t
 test-wireproto-command-heads.t
+test-wireproto-command-known.t
 test-wireproto-command-listkeys.t
 test-wireproto-command-lookup.t
 test-wireproto-command-manifestdata.t
 test-wireproto-command-pushkey.t
 test-wireproto-command-rawstorefiledata.t
+test-wireproto-content-redirects.t
+test-wireproto-exchangev2.t
 test-wireproto-framing.py
 test-wireproto-serverreactor.py
 test-wireproto.py
+test-wireproto.t
+test-worker.t
 test-wsgirequest.py
 test-xdg.t
--- a/contrib/relnotes	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/relnotes	Wed Apr 17 13:41:18 2019 -0400
@@ -14,6 +14,7 @@
     r"\(issue": 100,
     r"\(BC\)": 100,
     r"\(API\)": 100,
+    r"\(SEC\)": 100,
     # core commands, bump up
     r"(commit|files|log|pull|push|patch|status|tag|summary)(|s|es):": 20,
     r"(annotate|alias|branch|bookmark|clone|graft|import|verify).*:": 20,
@@ -21,6 +22,7 @@
     r"(mq|shelve|rebase):": 20,
     # newsy
     r": deprecate": 20,
+    r": new.*(extension|flag|module)": 10,
     r"( ability|command|feature|option|support)": 10,
     # experimental
     r"hg-experimental": 20,
@@ -29,22 +31,23 @@
     # bug-like?
     r"(fix|don't break|improve)": 7,
     r"(not|n't|avoid|fix|prevent).*crash": 10,
+    r"vulnerab": 10,
     # boring stuff, bump down
     r"^contrib": -5,
     r"debug": -5,
     r"help": -5,
+    r"minor": -5,
     r"(doc|metavar|bundle2|obsolete|obsmarker|rpm|setup|debug\S+:)": -15,
     r"(check-code|check-commit|check-config|import-checker)": -20,
     r"(flake8|lintian|pyflakes|pylint)": -20,
     # cleanups and refactoring
-    r"(cleanup|white ?space|spelling|quoting)": -20,
+    r"(clean ?up|white ?space|spelling|quoting)": -20,
     r"(flatten|dedent|indent|nesting|unnest)": -20,
     r"(typo|hint|note|comment|TODO|FIXME)": -20,
     r"(style:|convention|one-?liner)": -20,
-    r"_": -10,
     r"(argument|absolute_import|attribute|assignment|mutable)": -15,
     r"(scope|True|False)": -10,
-    r"(unused|useless|unnecessary|superfluous|duplicate|deprecated)": -10,
+    r"(unused|useless|unnecessar|superfluous|duplicate|deprecated)": -10,
     r"(redundant|pointless|confusing|uninitialized|meaningless|dead)": -10,
     r": (drop|remove|delete|rip out)": -10,
     r": (inherit|rename|simplify|naming|inline)": -10,
@@ -54,9 +57,12 @@
     r": (move|extract) .* (to|into|from|out of)": -20,
     r": implement ": -5,
     r": use .* implementation": -20,
+    r": use .* instead of": -20,
+    # code
+    r"_": -10,
+    r"__": -5,
+    r"\(\)": -5,
     r"\S\S\S+\.\S\S\S\S+": -5,
-    r": use .* instead of": -20,
-    r"__": -5,
     # dumb keywords
     r"\S+/\S+:": -10,
     r"\S+\.\S+:": -10,
@@ -92,6 +98,15 @@
     (r"shelve|unshelve", "extensions"),
 ]
 
+def wikify(desc):
+    desc = desc.replace("(issue", "(Bts:issue")
+    desc = re.sub(r"\b([0-9a-f]{12})\b", r"Cset:\1", desc)
+    # stop ParseError from being recognized as a (nonexistent) wiki page
+    desc = re.sub(r" ([A-Z][a-z]+[A-Z][a-z]+)\b", r" !\1", desc)
+    # prevent wiki markup of magic methods
+    desc = re.sub(r"\b(\S*__\S*)\b", r"`\1`", desc)
+    return desc
+
 def main():
     desc = "example: %(prog)s 4.7.2 --stoprev 4.8rc0"
     ap = argparse.ArgumentParser(description=desc)
@@ -148,10 +163,8 @@
             if re.search(rule, desc):
                 score += val
 
-        desc = desc.replace("(issue", "(Bts:issue")
-
         if score >= cutoff:
-            commits.append(desc)
+            commits.append(wikify(desc))
     # Group unflagged notes.
     groups = {}
     bcs = []
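
The new ``wikify()`` helper escapes description text for the wiki; a
quick illustration with a made-up commit description (the function body
is copied from the change above)::

   import re

   def wikify(desc):
       desc = desc.replace("(issue", "(Bts:issue")
       desc = re.sub(r"\b([0-9a-f]{12})\b", r"Cset:\1", desc)
       desc = re.sub(r" ([A-Z][a-z]+[A-Z][a-z]+)\b", r" !\1", desc)
       desc = re.sub(r"\b(\S*__\S*)\b", r"`\1`", desc)
       return desc

   print(wikify("fix crash (issue6089) in a1b2c3d4e5f6 raising ParseError"))
   # fix crash (Bts:issue6089) in Cset:a1b2c3d4e5f6 raising !ParseError
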
--- a/contrib/revsetbenchmarks.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/revsetbenchmarks.py	Wed Apr 17 13:41:18 2019 -0400
@@ -71,8 +71,8 @@
             print(exc.output, file=sys.stderr)
         return None
 
-outputre = re.compile(r'! wall (\d+.\d+) comb (\d+.\d+) user (\d+.\d+) '
-                      'sys (\d+.\d+) \(best of (\d+)\)')
+outputre = re.compile(br'! wall (\d+.\d+) comb (\d+.\d+) user (\d+.\d+) '
+                      br'sys (\d+.\d+) \(best of (\d+)\)')
 
 def parseoutput(output):
     """parse a textual output into a dict
--- a/contrib/showstack.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/showstack.py	Wed Apr 17 13:41:18 2019 -0400
@@ -1,7 +1,7 @@
 # showstack.py - extension to dump a Python stack trace on signal
 #
 # binds to both SIGQUIT (Ctrl-\) and SIGINFO (Ctrl-T on BSDs)
-"""dump stack trace when receiving SIGQUIT (Ctrl-\) and SIGINFO (Ctrl-T on BSDs)
+r"""dump stack trace when receiving SIGQUIT (Ctrl-\) or SIGINFO (Ctrl-T on BSDs)
 """
 
 from __future__ import absolute_import, print_function
--- a/contrib/synthrepo.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/synthrepo.py	Wed Apr 17 13:41:18 2019 -0400
@@ -349,7 +349,7 @@
     # to the modeled directory structure.
     initcount = int(opts['initfiles'])
     if initcount and initdirs:
-        pctx = repo[None].parents()[0]
+        pctx = repo['.']
         dirs = set(pctx.dirs())
         files = {}
 
@@ -450,7 +450,6 @@
                 path = fctx.path()
                 changes[path] = '\n'.join(lines) + '\n'
             for __ in xrange(pick(filesremoved)):
-                path = random.choice(mfk)
                 for __ in xrange(10):
                     path = random.choice(mfk)
                     if path not in changes:
--- a/contrib/testparseutil.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/testparseutil.py	Wed Apr 17 13:41:18 2019 -0400
@@ -265,7 +265,7 @@
 class fileheredocmatcher(embeddedmatcher):
     """Detect "cat > FILE << LIMIT" style embedded code
 
-    >>> matcher = fileheredocmatcher(b'heredoc .py file', br'[^<]+\.py')
+    >>> matcher = fileheredocmatcher(b'heredoc .py file', br'[^<]+\\.py')
     >>> b2s(matcher.startsat(b'  $ cat > file.py << EOF\\n'))
     ('file.py', '  > EOF\\n')
     >>> b2s(matcher.startsat(b'  $ cat   >>file.py   <<EOF\\n'))
--- a/contrib/win32/hgwebdir_wsgi.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/win32/hgwebdir_wsgi.py	Wed Apr 17 13:41:18 2019 -0400
@@ -6,7 +6,6 @@
 #
 # Requirements:
 # - Python 2.7, preferably 64 bit
-# - PyWin32 for Python 2.7 (32 or 64 bit)
 # - Mercurial installed from source (python setup.py install) or download the
 #   python module installer from https://www.mercurial-scm.org/wiki/Download
 # - IIS 7 or newer
--- a/contrib/win32/mercurial.iss	Tue Mar 19 09:23:35 2019 -0400
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,120 +0,0 @@
-; Script generated by the Inno Setup Script Wizard.
-; SEE THE DOCUMENTATION FOR DETAILS ON CREATING INNO SETUP SCRIPT FILES!
-
-#ifndef VERSION
-#define FileHandle
-#define FileLine
-#define VERSION = "unknown"
-#if FileHandle = FileOpen(SourcePath + "\..\..\mercurial\__version__.py")
-  #expr FileLine = FileRead(FileHandle)
-  #expr FileLine = FileRead(FileHandle)
-  #define VERSION = Copy(FileLine, Pos('"', FileLine)+1, Len(FileLine)-Pos('"', FileLine)-1)
-#endif
-#if FileHandle
-  #expr FileClose(FileHandle)
-#endif
-#pragma message "Detected Version: " + VERSION
-#endif
-
-#ifndef ARCH
-#define ARCH = "x86"
-#endif
-
-[Setup]
-AppCopyright=Copyright 2005-2019 Matt Mackall and others
-AppName=Mercurial
-AppVersion={#VERSION}
-#if ARCH == "x64"
-AppVerName=Mercurial {#VERSION} (64-bit)
-OutputBaseFilename=Mercurial-{#VERSION}-x64
-ArchitecturesAllowed=x64
-ArchitecturesInstallIn64BitMode=x64
-#else
-AppVerName=Mercurial {#VERSION}
-OutputBaseFilename=Mercurial-{#VERSION}
-#endif
-InfoAfterFile=contrib/win32/postinstall.txt
-LicenseFile=COPYING
-ShowLanguageDialog=yes
-AppPublisher=Matt Mackall and others
-AppPublisherURL=https://mercurial-scm.org/
-AppSupportURL=https://mercurial-scm.org/
-AppUpdatesURL=https://mercurial-scm.org/
-AppID={{4B95A5F1-EF59-4B08-BED8-C891C46121B3}
-AppContact=mercurial@mercurial-scm.org
-DefaultDirName={pf}\Mercurial
-SourceDir=..\..
-VersionInfoDescription=Mercurial distributed SCM (version {#VERSION})
-VersionInfoCopyright=Copyright 2005-2019 Matt Mackall and others
-VersionInfoCompany=Matt Mackall and others
-InternalCompressLevel=max
-SolidCompression=true
-SetupIconFile=contrib\win32\mercurial.ico
-AllowNoIcons=true
-DefaultGroupName=Mercurial
-PrivilegesRequired=none
-
-[Files]
-Source: contrib\mercurial.el; DestDir: {app}/Contrib
-Source: contrib\vim\*.*; DestDir: {app}/Contrib/Vim
-Source: contrib\zsh_completion; DestDir: {app}/Contrib
-Source: contrib\bash_completion; DestDir: {app}/Contrib
-Source: contrib\tcsh_completion; DestDir: {app}/Contrib
-Source: contrib\tcsh_completion_build.sh; DestDir: {app}/Contrib
-Source: contrib\hgk; DestDir: {app}/Contrib; DestName: hgk.tcl
-Source: contrib\xml.rnc; DestDir: {app}/Contrib
-Source: contrib\mercurial.el; DestDir: {app}/Contrib
-Source: contrib\mq.el; DestDir: {app}/Contrib
-Source: contrib\hgweb.fcgi; DestDir: {app}/Contrib
-Source: contrib\hgweb.wsgi; DestDir: {app}/Contrib
-Source: contrib\win32\ReadMe.html; DestDir: {app}; Flags: isreadme
-Source: contrib\win32\postinstall.txt; DestDir: {app}; DestName: ReleaseNotes.txt
-Source: dist\hg.exe; DestDir: {app}; AfterInstall: Touch('{app}\hg.exe.local')
-#if ARCH == "x64"
-Source: dist\lib\*.dll; Destdir: {app}\lib
-Source: dist\lib\*.pyd; Destdir: {app}\lib
-#else
-Source: dist\w9xpopen.exe; DestDir: {app}
-#endif
-Source: dist\python*.dll; Destdir: {app}; Flags: skipifsourcedoesntexist
-Source: dist\msvc*.dll; DestDir: {app}; Flags: skipifsourcedoesntexist
-Source: dist\Microsoft.VC*.CRT.manifest; DestDir: {app}; Flags: skipifsourcedoesntexist
-Source: dist\lib\library.zip; DestDir: {app}\lib
-Source: dist\add_path.exe; DestDir: {app}
-Source: doc\*.html; DestDir: {app}\Docs
-Source: doc\style.css; DestDir: {app}\Docs
-Source: mercurial\help\*.txt; DestDir: {app}\help
-Source: mercurial\help\internals\*.txt; DestDir: {app}\help\internals
-Source: mercurial\default.d\*.rc; DestDir: {app}\default.d
-Source: mercurial\locale\*.*; DestDir: {app}\locale; Flags: recursesubdirs createallsubdirs skipifsourcedoesntexist
-Source: mercurial\templates\*.*; DestDir: {app}\Templates; Flags: recursesubdirs createallsubdirs
-Source: CONTRIBUTORS; DestDir: {app}; DestName: Contributors.txt
-Source: COPYING; DestDir: {app}; DestName: Copying.txt
-
-[INI]
-Filename: {app}\Mercurial.url; Section: InternetShortcut; Key: URL; String: https://mercurial-scm.org/
-Filename: {app}\default.d\editor.rc; Section: ui; Key: editor; String: notepad
-
-[UninstallDelete]
-Type: files; Name: {app}\Mercurial.url
-Type: filesandordirs; Name: {app}\default.d
-Type: files; Name: "{app}\hg.exe.local"
-
-[Icons]
-Name: {group}\Uninstall Mercurial; Filename: {uninstallexe}
-Name: {group}\Mercurial Command Reference; Filename: {app}\Docs\hg.1.html
-Name: {group}\Mercurial Configuration Files; Filename: {app}\Docs\hgrc.5.html
-Name: {group}\Mercurial Ignore Files; Filename: {app}\Docs\hgignore.5.html
-Name: {group}\Mercurial Web Site; Filename: {app}\Mercurial.url
-
-[Run]
-Filename: "{app}\add_path.exe"; Parameters: "{app}"; Flags: postinstall; Description: "Add the installation path to the search path"
-
-[UninstallRun]
-Filename: "{app}\add_path.exe"; Parameters: "/del {app}"
-
-[Code]
-procedure Touch(fn: String);
-begin
-  SaveStringToFile(ExpandConstant(fn), '', False);
-end;
--- a/contrib/win32/win32-build.txt	Tue Mar 19 09:23:35 2019 -0400
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,130 +0,0 @@
-The standalone Windows installer for Mercurial is built in a somewhat
-jury-rigged fashion.
-
-It has the following prerequisites. Ensure to take the packages
-matching the mercurial version you want to build (32-bit or 64-bit).
-
-  Python 2.6 for Windows
-      http://www.python.org/download/releases/
-
-  A compiler:
-    either MinGW
-      http://www.mingw.org/
-    or Microsoft Visual C++ 2008 SP1 Express Edition
-      http://www.microsoft.com/express/Downloads/Download-2008.aspx
-
-  Python for Windows Extensions
-      http://sourceforge.net/projects/pywin32/
-
-  mfc71.dll (just download, don't install; not needed for Python 2.6)
-      http://starship.python.net/crew/mhammond/win32/
-
-  Visual C++ 2008 redistributable package (needed for >= Python 2.6 or if you compile with MSVC)
-    for 32-bit:
-      http://www.microsoft.com/downloads/details.aspx?FamilyID=9b2da534-3e03-4391-8a4d-074b9f2bc1bf
-    for 64-bit:
-      http://www.microsoft.com/downloads/details.aspx?familyid=bd2a6171-e2d6-4230-b809-9a8d7548c1b6
-
-  The py2exe distutils extension
-      http://sourceforge.net/projects/py2exe/
-
-  GnuWin32 gettext utility (if you want to build translations)
-      http://gnuwin32.sourceforge.net/packages/gettext.htm
-
-  Inno Setup
-      http://www.jrsoftware.org/isdl.php#qsp
-
-      Get and install ispack-5.3.10.exe or later (includes Inno Setup Processor),
-      which is necessary to package Mercurial.
-
-  ISTool - optional
-      http://www.istool.org/default.aspx/
-
-  add_path (you need only add_path.exe in the zip file)
-      http://www.barisione.org/apps.html#add_path
-
-  Docutils
-      http://docutils.sourceforge.net/
-
-  CA Certs file
-      http://curl.haxx.se/ca/cacert.pem
-
-And, of course, Mercurial itself.
-
-Once you have all this installed and built, clone a copy of the
-Mercurial repository you want to package, and name the repo
-C:\hg\hg-release.
-
-In a shell, build a standalone copy of the hg.exe program.
-
-Building instructions for MinGW:
-  python setup.py build -c mingw32
-  python setup.py py2exe -b 2
-Note: the previously suggested combined command of "python setup.py build -c
-mingw32 py2exe -b 2" doesn't work correctly anymore as it doesn't include the
-extensions in the mercurial subdirectory.
-If you want to create a file named setup.cfg with the contents:
-[build]
-compiler=mingw32
-you can skip the first build step.
-
-Building instructions with MSVC 2008 Express Edition:
-  for 32-bit:
-    "C:\Program Files\Microsoft Visual Studio 9.0\VC\vcvarsall.bat" x86
-    python setup.py py2exe -b 2
-  for 64-bit:
-    "C:\Program Files\Microsoft Visual Studio 9.0\VC\vcvarsall.bat" x86_amd64
-    python setup.py py2exe -b 3
-
-Copy add_path.exe and cacert.pem files into the dist directory that just got created.
-
-If you are using Python 2.6 or later, or if you are using MSVC 2008 to compile
-mercurial, you must include the C runtime libraries in the installer. To do so,
-install the Visual C++ 2008 redistributable package. Then in your windows\winsxs
-folder, locate the folder containing the dlls version 9.0.21022.8.
-For x86, it should be named like x86_Microsoft.VC90.CRT_(...)_9.0.21022.8(...).
-For x64, it should be named like amd64_Microsoft.VC90.CRT_(...)_9.0.21022.8(...).
-Copy the files named msvcm90.dll, msvcp90.dll and msvcr90.dll into the dist
-directory.
-Then in the windows\winsxs\manifests folder, locate the corresponding manifest
-file (x86_Microsoft.VC90.CRT_(...)_9.0.21022.8(...).manifest for x86,
-amd64_Microsoft.VC90.CRT_(...)_9.0.21022.8(...).manifest for x64), copy it in the
-dist directory and rename it to Microsoft.VC90.CRT.manifest.
-
-Before building the installer, you have to build Mercurial HTML documentation
-(or fix mercurial.iss to not reference the doc directory):
-
-  cd doc
-  mingw32-make html
-  cd ..
-
-If you use ISTool, you open the C:\hg\hg-release\contrib\win32\mercurial.iss
-file and type Ctrl-F9 to compile the installer file.
-
-Otherwise you run the Inno Setup compiler.  Assuming it's in the path
-you should execute:
-
-  iscc contrib\win32\mercurial.iss /dVERSION=foo
-
-Where 'foo' is the version number you would like to see in the
-'Add/Remove Applications' tool.  The installer will be placed into
-a directory named Output/ at the root of your repository.
-If the /dVERSION=foo parameter is not given in the command line, the
-installer will retrieve the version information from the __version__.py file.
-
-If you want to build an installer for a 64-bit mercurial, add /dARCH=x64 to
-your command line:
-  iscc contrib\win32\mercurial.iss /dARCH=x64
-
-To automate the steps above you may want to create a batchfile based on the
-following (MinGW build chain):
-
-  echo [build] > setup.cfg
-  echo compiler=mingw32 >> setup.cfg
-  python setup.py py2exe -b 2
-  cd doc
-  mingw32-make html
-  cd ..
-  iscc contrib\win32\mercurial.iss /dVERSION=snapshot
-
-and run it from the root of the hg repository (c:\hg\hg-release).
--- a/contrib/wix/README.txt	Tue Mar 19 09:23:35 2019 -0400
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,31 +0,0 @@
-WiX installer source files
-==========================
-
-The files in this folder are used by the thg-winbuild [1] package
-building architecture to create a Mercurial MSI installer.   These files
-are versioned within the Mercurial source tree because the WXS files
-must kept up to date with distribution changes within their branch.  In
-other words, the default branch WXS files are expected to diverge from
-the stable branch WXS files.  Storing them within the same repository is
-the only sane way to keep the source tree and the installer in sync.
-
-The MSI installer builder uses only the mercurial.ini file from the
-contrib/win32 folder, the contents of which have been historically used
-to create an InnoSetup based installer.  The rest of the files there are
-ignored.
-
-The MSI packages built by thg-winbuild require elevated (admin)
-privileges to be installed due to the installation of MSVC CRT libraries
-under the C:\WINDOWS\WinSxS folder.  Thus the InnoSetup installers may
-still be useful to some users.
-
-To build your own MSI packages, clone the thg-winbuild [1] repository
-and follow the README.txt [2] instructions closely.  There are fewer
-prerequisites for a WiX [3] installer than an InnoSetup installer, but
-they are more specific.
-
-Direct questions or comments to Steve Borho <steve@borho.org>
-
-[1] http://bitbucket.org/tortoisehg/thg-winbuild
-[2] http://bitbucket.org/tortoisehg/thg-winbuild/src/tip/README.txt
-[3] http://wix.sourceforge.net/
--- a/contrib/wix/contrib.wxs	Tue Mar 19 09:23:35 2019 -0400
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,43 +0,0 @@
-<?xml version="1.0" encoding="utf-8"?>
-<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
-
-  <?include guids.wxi ?>
-  <?include defines.wxi ?>
-
-  <Fragment>
-    <ComponentGroup Id="contribFolder">
-      <ComponentRef Id="contrib" />
-      <ComponentRef Id="contrib.vim" />
-    </ComponentGroup>
-  </Fragment>
-
-  <Fragment>
-    <DirectoryRef Id="INSTALLDIR">
-      <Directory Id="contribdir" Name="contrib" FileSource="$(var.SourceDir)">
-        <Component Id="contrib" Guid="$(var.contrib.guid)" Win64='$(var.IsX64)'>
-          <File Name="bash_completion" KeyPath="yes" />
-          <File Name="hgk" />
-          <File Name="hgweb.fcgi" />
-          <File Name="hgweb.wsgi" />
-          <File Name="logo-droplets.svg" />
-          <File Name="mercurial.el" />
-          <File Name="tcsh_completion" />
-          <File Name="tcsh_completion_build.sh" />
-          <File Name="xml.rnc" />
-          <File Name="zsh_completion" />
-        </Component>
-        <Directory Id="vimdir" Name="vim">
-          <Component Id="contrib.vim" Guid="$(var.contrib.vim.guid)" Win64='$(var.IsX64)'>
-            <File Name="hg-menu.vim" KeyPath="yes" />
-            <File Name="HGAnnotate.vim" />
-            <File Name="hgcommand.vim" />
-            <File Name="patchreview.txt" />
-            <File Name="patchreview.vim" />
-            <File Name="hgtest.vim" />
-          </Component>
-        </Directory>
-      </Directory>
-    </DirectoryRef>
-  </Fragment>
-
-</Wix>
--- a/contrib/wix/defines.wxi	Tue Mar 19 09:23:35 2019 -0400
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,9 +0,0 @@
-<Include>
-
-  <?if $(var.Platform) = "x64" ?>
-    <?define IsX64 = yes ?>
-  <?else?>
-    <?define IsX64 = no ?>
-  <?endif?>
-
-</Include>
--- a/contrib/wix/dist.wxs	Tue Mar 19 09:23:35 2019 -0400
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,37 +0,0 @@
-<?xml version="1.0" encoding="utf-8"?>
-<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
-
-  <?include guids.wxi ?>
-  <?include defines.wxi ?>
-
-  <Fragment>
-    <DirectoryRef Id="INSTALLDIR" FileSource="$(var.SourceDir)">
-      <Component Id="distOutput" Guid="$(var.dist.guid)" Win64='$(var.IsX64)'>
-        <File Name="python27.dll" KeyPath="yes" />
-      </Component>
-      <Directory Id="libdir" Name="lib" FileSource="$(var.SourceDir)/lib">
-        <Component Id="libOutput" Guid="$(var.lib.guid)" Win64='$(var.IsX64)'>
-          <File Name="library.zip" KeyPath="yes" />
-          <File Name="mercurial.cext.base85.pyd" />
-          <File Name="mercurial.cext.bdiff.pyd" />
-          <File Name="mercurial.cext.mpatch.pyd" />
-          <File Name="mercurial.cext.osutil.pyd" />
-          <File Name="mercurial.cext.parsers.pyd" />
-          <File Name="mercurial.zstd.pyd" />
-          <File Name="hgext.fsmonitor.pywatchman.bser.pyd" />
-          <File Name="pyexpat.pyd" />
-          <File Name="bz2.pyd" />
-          <File Name="select.pyd" />
-          <File Name="unicodedata.pyd" />
-          <File Name="_ctypes.pyd" />
-          <File Name="_elementtree.pyd" />
-          <File Name="_testcapi.pyd" />
-          <File Name="_hashlib.pyd" />
-          <File Name="_socket.pyd" />
-          <File Name="_ssl.pyd" />
-        </Component>
-      </Directory>
-    </DirectoryRef>
-  </Fragment>
-
-</Wix>
--- a/contrib/wix/doc.wxs	Tue Mar 19 09:23:35 2019 -0400
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,50 +0,0 @@
-<?xml version="1.0" encoding="utf-8"?>
-<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
-
-  <?include guids.wxi ?>
-  <?include defines.wxi ?>
-
-  <Fragment>
-    <ComponentGroup Id="docFolder">
-      <ComponentRef Id="doc.hg.1.html" />
-      <ComponentRef Id="doc.hgignore.5.html" />
-      <ComponentRef Id="doc.hgrc.5.html" />
-      <ComponentRef Id="doc.style.css" />
-    </ComponentGroup>
-  </Fragment>
-
-  <Fragment>
-    <DirectoryRef Id="INSTALLDIR">
-      <Directory Id="docdir" Name="doc" FileSource="$(var.SourceDir)">
-        <Component Id="doc.hg.1.html" Guid="$(var.doc.hg.1.html.guid)" Win64='$(var.IsX64)'>
-          <File Name="hg.1.html" KeyPath="yes">
-            <Shortcut Id="hg1StartMenu" Directory="ProgramMenuDir"
-                      Name="Mercurial Command Reference"
-                      Icon="hgIcon.ico" IconIndex="0" Advertise="yes"
-            />
-          </File>
-        </Component>
-        <Component Id="doc.hgignore.5.html" Guid="$(var.doc.hgignore.5.html.guid)" Win64='$(var.IsX64)'>
-          <File Name="hgignore.5.html" KeyPath="yes">
-            <Shortcut Id="hgignore5StartMenu" Directory="ProgramMenuDir"
-                      Name="Mercurial Ignore Files"
-                      Icon="hgIcon.ico" IconIndex="0" Advertise="yes"
-            />
-          </File>
-        </Component>
-        <Component Id="doc.hgrc.5.html" Guid="$(var.doc.hgrc.5.html)" Win64='$(var.IsX64)'>
-          <File Name="hgrc.5.html" KeyPath="yes">
-            <Shortcut Id="hgrc5StartMenu" Directory="ProgramMenuDir"
-                      Name="Mercurial Configuration Files"
-                      Icon="hgIcon.ico" IconIndex="0" Advertise="yes"
-            />
-          </File>
-        </Component>
-        <Component Id="doc.style.css" Guid="$(var.doc.style.css)" Win64='$(var.IsX64)'>
-          <File Name="style.css" KeyPath="yes" />
-        </Component>
-      </Directory>
-    </DirectoryRef>
-  </Fragment>
-
-</Wix>
--- a/contrib/wix/guids.wxi	Tue Mar 19 09:23:35 2019 -0400
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,52 +0,0 @@
-<Include>
-  <!-- These are component GUIDs used for Mercurial installers.
-       YOU MUST CHANGE ALL GUIDs below when copying this file
-       and replace 'Mercurial' in this notice with the name of
-       your project. Component GUIDs have global namespace!      -->
-
-  <!-- contrib.wxs -->
-  <?define contrib.guid = {4E11FFC2-E2F7-482A-8460-9394B5489F02} ?>
-  <?define contrib.vim.guid = {BB04903A-652D-4C4F-9590-2BD07A2304F2} ?>
-
-  <!-- dist.wxs -->
-  <?define dist.guid = {CE405FE6-CD1E-4873-9C9A-7683AE5A3D90} ?>
-  <?define lib.guid = {877633b5-0b7e-4b46-8f1c-224a61733297} ?>
-
-  <!-- doc.wxs -->
-  <?define doc.hg.1.html.guid = {AAAA3FDA-EDC5-4220-B59D-D342722358A2} ?>
-  <?define doc.hgignore.5.html.guid = {AA9118C4-F3A0-4429-A5F4-5A1906B2D67F} ?>
-  <?define doc.hgrc.5.html = {E0CEA1EB-FA01-408c-844B-EE5965165BAE} ?>
-  <?define doc.style.css = {172F8262-98E0-4711-BD39-4DAE0D77EF05} ?>
-
-  <!-- help.wxs -->
-  <?define help.root.guid = {9FA957DB-6DFE-44f2-AD03-293B2791CF17} ?>
-  <?define help.internals.guid = {2DD7669D-0DB8-4C39-9806-78E6475E7ACC} ?>
-
-  <!-- i18n.wxs -->
-  <?define i18nFolder.guid = {1BF8026D-CF7C-4174-AEE6-D6B7BF119248} ?>
-
-  <!-- templates.wxs -->
-  <?define templates.root.guid = {437FD55C-7756-4EA0-87E5-FDBE75DC8595} ?>
-  <?define templates.atom.guid = {D30E14A5-8AF0-4268-8B00-00BEE9E09E39} ?>
-  <?define templates.coal.guid = {B63CCAAB-4EAF-43b4-901E-4BD13F5B78FC} ?>
-  <?define templates.gitweb.guid = {827334AF-1EFD-421B-962C-5660A068F612} ?>
-  <?define templates.json.guid = {F535BE7A-EC34-46E0-B9BE-013F3DBAFB19} ?>
-  <?define templates.monoblue.guid = {8060A1E4-BD4C-453E-92CB-9536DC44A9E3} ?>
-  <?define templates.paper.guid = {61AB1DE9-645F-46ED-8AF8-0CF02267FFBB} ?>
-  <?define templates.raw.guid = {834DF8D7-9784-43A6-851D-A96CE1B3575B} ?>
-  <?define templates.rss.guid = {9338FA09-E128-4B1C-B723-1142DBD09E14} ?>
-  <?define templates.spartan.guid = {80222625-FA8F-44b1-86CE-1781EF375D09} ?>
-  <?define templates.static.guid = {6B3D7C24-98DA-4B67-9F18-35F77357B0B4} ?>
-
-  <!-- mercurial.wxs -->
-  <?define ProductUpgradeCode = {A1CC6134-E945-4399-BE36-EB0017FDF7CF} ?>
-
-  <?define ComponentMainExecutableGUID = {D102B8FA-059B-4ACC-9FA3-8C78C3B58EEF} ?>
-
-  <?define ReadMe.guid = {56A8E372-991D-4DCA-B91D-93D775974CF5} ?>
-  <?define COPYING.guid = {B7801DBA-1C49-4BF4-91AD-33C65F5C7895} ?>
-  <?define mercurial.rc.guid = {1D5FAEEE-7E6E-43B1-9F7F-802714316B15} ?>
-  <?define mergetools.rc.guid = {E8A1DC29-FF40-4B5F-BD12-80B9F7BF0CCD} ?>
-  <?define ProgramMenuDir.guid = {D5A63320-1238-489B-B68B-CF053E9577CA} ?>
-
-</Include>
--- a/contrib/wix/help.wxs	Tue Mar 19 09:23:35 2019 -0400
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,64 +0,0 @@
-<?xml version="1.0" encoding="utf-8"?>
-<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
-
-  <?include guids.wxi ?>
-  <?include defines.wxi ?>
-
-  <Fragment>
-    <ComponentGroup Id='helpFolder'>
-      <ComponentRef Id='help.root' />
-      <ComponentRef Id='help.internals' />
-    </ComponentGroup>
-  </Fragment>
-
-  <Fragment>
-    <DirectoryRef Id="INSTALLDIR">
-      <Directory Id="helpdir" Name="help" FileSource="$(var.SourceDir)">
-        <Component Id="help.root" Guid="$(var.help.root.guid)" Win64='$(var.IsX64)'>
-          <File Name="bundlespec.txt" />
-          <File Name="color.txt" />
-          <File Name="config.txt" KeyPath="yes" />
-          <File Name="dates.txt" />
-          <File Name="deprecated.txt" />
-          <File Name="diffs.txt" />
-          <File Name="environment.txt" />
-          <File Name="extensions.txt" />
-          <File Name="filesets.txt" />
-          <File Name="flags.txt" />
-          <File Name="glossary.txt" />
-          <File Name="hgignore.txt" />
-          <File Name="hgweb.txt" />
-          <File Name="merge-tools.txt" />
-          <File Name="pager.txt" />
-          <File Name="patterns.txt" />
-          <File Name="phases.txt" />
-          <File Name="revisions.txt" />
-          <File Name="scripting.txt" />
-          <File Name="subrepos.txt" />
-          <File Name="templates.txt" />
-          <File Name="urls.txt" />
-        </Component>
-
-        <Directory Id="help.internaldir" Name="internals">
-          <Component Id="help.internals" Guid="$(var.help.internals.guid)" Win64='$(var.IsX64)'>
-            <File Id="internals.bundle2.txt"      Name="bundle2.txt" />
-            <File Id="internals.bundles.txt"      Name="bundles.txt" KeyPath="yes" />
-            <File Id="internals.cbor.txt"         Name="cbor.txt" />
-            <File Id="internals.censor.txt"       Name="censor.txt" />
-            <File Id="internals.changegroups.txt" Name="changegroups.txt" />
-            <File Id="internals.config.txt"       Name="config.txt" />
-            <File Id="internals.extensions.txt"   Name="extensions.txt" />
-            <File Id="internals.linelog.txt"      Name="linelog.txt" />
-            <File Id="internals.requirements.txt" Name="requirements.txt" />
-            <File Id="internals.revlogs.txt"      Name="revlogs.txt" />
-            <File Id="internals.wireprotocol.txt" Name="wireprotocol.txt" />
-            <File Id="internals.wireprotocolrpc.txt" Name="wireprotocolrpc.txt" />
-            <File Id="internals.wireprotocolv2.txt" Name="wireprotocolv2.txt" />
-          </Component>
-        </Directory>
-
-      </Directory>
-    </DirectoryRef>
-  </Fragment>
-
-</Wix>
--- a/contrib/wix/hg.cmd	Tue Mar 19 09:23:35 2019 -0400
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,3 +0,0 @@
-@echo off
-rem launch hg.exe from parent folder
-"%~dp0\..\hg.exe" %*
--- a/contrib/wix/i18n.wxs	Tue Mar 19 09:23:35 2019 -0400
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,26 +0,0 @@
-<?xml version="1.0" encoding="utf-8"?>
-<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
-
-  <?include guids.wxi ?>
-  <?include defines.wxi ?>
-
-  <?define hg_po_langs =
-    da;de;el;fr;it;ja;pt_BR;ro;ru;sv;zh_CN;zh_TW
-  ?>
-
-  <Fragment>
-    <DirectoryRef Id="INSTALLDIR">
-      <Directory Id="i18ndir" Name="i18n" FileSource="$(var.SourceDir)">
-        <Component Id="i18nFolder" Guid="$(var.i18nFolder.guid)" Win64='$(var.IsX64)'>
-          <File Name="hggettext" KeyPath="yes" />
-          <?foreach LANG in $(var.hg_po_langs) ?>
-            <File Id="hg.$(var.LANG).po"
-                  Name="$(var.LANG).po"
-            />
-          <?endforeach?>
-        </Component>
-      </Directory>
-    </DirectoryRef>
-  </Fragment>
-
-</Wix>
--- a/contrib/wix/locale.wxs	Tue Mar 19 09:23:35 2019 -0400
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,34 +0,0 @@
-<?xml version="1.0" encoding="utf-8"?>
-<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
-
-  <?include defines.wxi ?>
-
-  <?define hglocales =
-    da;de;el;fr;it;ja;pt_BR;ro;ru;sv;zh_CN;zh_TW
-  ?>
-
-  <Fragment>
-    <ComponentGroup Id="localeFolder">
-      <?foreach LOC in $(var.hglocales) ?>
-        <ComponentRef Id="hg.locale.$(var.LOC)"/>
-      <?endforeach?>
-    </ComponentGroup>
-  </Fragment>
-
-  <Fragment>
-    <DirectoryRef Id="INSTALLDIR">
-      <Directory Id="localedir" Name="locale" FileSource="$(var.SourceDir)">
-        <?foreach LOC in $(var.hglocales) ?>
-          <Directory Id="hg.locale.$(var.LOC)" Name="$(var.LOC)">
-            <Directory Id="hg.locale.$(var.LOC).LC_MESSAGES" Name="LC_MESSAGES">
-              <Component Id="hg.locale.$(var.LOC)" Guid="*" Win64='$(var.IsX64)'>
-                <File Id="hg.mo.$(var.LOC)" Name="hg.mo" KeyPath="yes" />
-              </Component>
-            </Directory>
-          </Directory>
-        <?endforeach?>
-      </Directory>
-    </DirectoryRef>
-  </Fragment>
-
-</Wix>
--- a/contrib/wix/mercurial.wxs	Tue Mar 19 09:23:35 2019 -0400
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,162 +0,0 @@
-<?xml version='1.0' encoding='windows-1252'?>
-<Wix xmlns='http://schemas.microsoft.com/wix/2006/wi'>
-
-  <!-- Copyright 2010 Steve Borho <steve@borho.org>
-
-  This software may be used and distributed according to the terms of the
-  GNU General Public License version 2 or any later version. -->
-
-  <?include guids.wxi ?>
-  <?include defines.wxi ?>
-
-  <?if $(var.Platform) = "x64" ?>
-    <?define PFolder = ProgramFiles64Folder ?>
-  <?else?>
-    <?define PFolder = ProgramFilesFolder ?>
-  <?endif?>
-
-  <Product Id='*'
-    Name='Mercurial $(var.Version) ($(var.Platform))'
-    UpgradeCode='$(var.ProductUpgradeCode)'
-    Language='1033' Codepage='1252' Version='$(var.Version)'
-    Manufacturer='Matt Mackall and others'>
-
-    <Package Id='*'
-      Keywords='Installer'
-      Description="Mercurial distributed SCM (version $(var.Version))"
-      Comments='$(var.Comments)'
-      Platform='$(var.Platform)'
-      Manufacturer='Matt Mackall and others'
-      InstallerVersion='300' Languages='1033' Compressed='yes' SummaryCodepage='1252' />
-
-    <Media Id='1' Cabinet='mercurial.cab' EmbedCab='yes' DiskPrompt='CD-ROM #1'
-           CompressionLevel='high' />
-    <Property Id='DiskPrompt' Value="Mercurial $(var.Version) Installation [1]" />
-
-    <Condition Message='Mercurial MSI installers require Windows XP or higher'>
-        VersionNT >= 501
-    </Condition>
-
-    <Property Id="INSTALLDIR">
-      <ComponentSearch Id='SearchForMainExecutableComponent'
-                       Guid='$(var.ComponentMainExecutableGUID)' />
-    </Property>
-
-    <!--Property Id='ARPCOMMENTS'>any comments</Property-->
-    <Property Id='ARPCONTACT'>mercurial@mercurial-scm.org</Property>
-    <Property Id='ARPHELPLINK'>https://mercurial-scm.org/wiki/</Property>
-    <Property Id='ARPURLINFOABOUT'>https://mercurial-scm.org/about/</Property>
-    <Property Id='ARPURLUPDATEINFO'>https://mercurial-scm.org/downloads/</Property>
-    <Property Id='ARPHELPTELEPHONE'>https://mercurial-scm.org/wiki/Support</Property>
-    <Property Id='ARPPRODUCTICON'>hgIcon.ico</Property>
-
-    <Property Id='INSTALLEDMERCURIALPRODUCTS' Secure='yes'></Property>
-    <Property Id='REINSTALLMODE'>amus</Property>
-
-    <!--Auto-accept the license page-->
-    <Property Id='LicenseAccepted'>1</Property>
-
-    <Directory Id='TARGETDIR' Name='SourceDir'>
-      <Directory Id='$(var.PFolder)' Name='PFiles'>
-        <Directory Id='INSTALLDIR' Name='Mercurial'>
-          <Component Id='MainExecutable' Guid='$(var.ComponentMainExecutableGUID)' Win64='$(var.IsX64)'>
-            <File Id='hgEXE' Name='hg.exe' Source='dist\hg.exe' KeyPath='yes' />
-            <Environment Id="Environment" Name="PATH" Part="last" System="yes"
-                         Permanent="no" Value="[INSTALLDIR]" Action="set" />
-          </Component>
-          <Component Id='ReadMe' Guid='$(var.ReadMe.guid)' Win64='$(var.IsX64)'>
-              <File Id='ReadMe' Name='ReadMe.html' Source='contrib\win32\ReadMe.html'
-                    KeyPath='yes'/>
-          </Component>
-          <Component Id='COPYING' Guid='$(var.COPYING.guid)' Win64='$(var.IsX64)'>
-            <File Id='COPYING' Name='COPYING.rtf' Source='contrib\wix\COPYING.rtf'
-                  KeyPath='yes'/>
-          </Component>
-
-          <Directory Id='HGRCD' Name='hgrc.d'>
-            <Component Id='mercurial.rc' Guid='$(var.mercurial.rc.guid)' Win64='$(var.IsX64)'>
-              <File Id='mercurial.rc' Name='Mercurial.rc' Source='contrib\win32\mercurial.ini'
-                    ReadOnly='yes' KeyPath='yes'/>
-            </Component>
-            <Component Id='mergetools.rc' Guid='$(var.mergetools.rc.guid)' Win64='$(var.IsX64)'>
-              <File Id='mergetools.rc' Name='MergeTools.rc' Source='mercurial\default.d\mergetools.rc'
-                    ReadOnly='yes' KeyPath='yes'/>
-            </Component>
-          </Directory>
-
-        </Directory>
-      </Directory>
-
-      <Directory Id="ProgramMenuFolder" Name="Programs">
-        <Directory Id="ProgramMenuDir" Name="Mercurial $(var.Version)">
-          <Component Id="ProgramMenuDir" Guid="$(var.ProgramMenuDir.guid)" Win64='$(var.IsX64)'>
-            <RemoveFolder Id='ProgramMenuDir' On='uninstall' />
-            <RegistryValue Root='HKCU' Key='Software\Mercurial\InstallDir' Type='string'
-                           Value='[INSTALLDIR]' KeyPath='yes' />
-            <Shortcut Id='UrlShortcut' Directory='ProgramMenuDir' Name='Mercurial Web Site'
-                      Target='[ARPHELPLINK]' Icon="hgIcon.ico" IconIndex='0' />
-          </Component>
-        </Directory>
-      </Directory>
-
-      <?if $(var.Platform) = "x86" ?>
-        <Merge Id='VCRuntime' DiskId='1' Language='1033'
-              SourceFile='$(var.VCRedistSrcDir)\microsoft.vcxx.crt.x86_msm.msm' />
-        <Merge Id='VCRuntimePolicy' DiskId='1' Language='1033'
-              SourceFile='$(var.VCRedistSrcDir)\policy.x.xx.microsoft.vcxx.crt.x86_msm.msm' />
-      <?else?>
-        <Merge Id='VCRuntime' DiskId='1' Language='1033'
-              SourceFile='$(var.VCRedistSrcDir)\microsoft.vcxx.crt.x64_msm.msm' />
-        <Merge Id='VCRuntimePolicy' DiskId='1' Language='1033'
-              SourceFile='$(var.VCRedistSrcDir)\policy.x.xx.microsoft.vcxx.crt.x64_msm.msm' />
-      <?endif?>
-    </Directory>
-
-    <Feature Id='Complete' Title='Mercurial' Description='The complete package'
-        Display='expand' Level='1' ConfigurableDirectory='INSTALLDIR' >
-      <Feature Id='MainProgram' Title='Program' Description='Mercurial command line app'
-             Level='1' Absent='disallow' >
-        <ComponentRef Id='MainExecutable' />
-        <ComponentRef Id='distOutput' />
-        <ComponentRef Id='libOutput' />
-        <ComponentRef Id='ProgramMenuDir' />
-        <ComponentRef Id='ReadMe' />
-        <ComponentRef Id='COPYING' />
-        <ComponentRef Id='mercurial.rc' />
-        <ComponentRef Id='mergetools.rc' />
-        <ComponentGroupRef Id='helpFolder' />
-        <ComponentGroupRef Id='templatesFolder' />
-        <MergeRef Id='VCRuntime' />
-        <MergeRef Id='VCRuntimePolicy' />
-      </Feature>
-      <Feature Id='Locales' Title='Translations' Description='Translations' Level='1'>
-        <ComponentGroupRef Id='localeFolder' />
-        <ComponentRef Id='i18nFolder' />
-      </Feature>
-      <Feature Id='Documentation' Title='Documentation' Description='HTML man pages' Level='1'>
-        <ComponentGroupRef Id='docFolder' />
-      </Feature>
-      <Feature Id='Misc' Title='Miscellaneous' Description='Contributed scripts' Level='1'>
-        <ComponentGroupRef Id='contribFolder' />
-      </Feature>
-    </Feature>
-
-    <UIRef Id="WixUI_FeatureTree" />
-    <UIRef Id="WixUI_ErrorProgressText" />
-
-    <WixVariable Id="WixUILicenseRtf" Value="contrib\wix\COPYING.rtf" />
-
-    <Icon Id="hgIcon.ico" SourceFile="contrib/win32/mercurial.ico" />
-
-    <Upgrade Id='$(var.ProductUpgradeCode)'>
-      <UpgradeVersion
-        IncludeMinimum='yes' Minimum='0.0.0' IncludeMaximum='no' OnlyDetect='no'
-        Property='INSTALLEDMERCURIALPRODUCTS' />
-    </Upgrade>
-
-    <InstallExecuteSequence>
-      <RemoveExistingProducts After='InstallInitialize'/>
-    </InstallExecuteSequence>
-
-  </Product>
-</Wix>
--- a/contrib/wix/templates.wxs	Tue Mar 19 09:23:35 2019 -0400
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,251 +0,0 @@
-<?xml version="1.0" encoding="utf-8"?>
-<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
-
-  <?include guids.wxi ?>
-  <?include defines.wxi ?>
-
-  <Fragment>
-    <ComponentGroup Id="templatesFolder">
-
-      <ComponentRef Id="templates.root" />
-
-      <ComponentRef Id="templates.atom" />
-      <ComponentRef Id="templates.coal" />
-      <ComponentRef Id="templates.gitweb" />
-      <ComponentRef Id="templates.json" />
-      <ComponentRef Id="templates.monoblue" />
-      <ComponentRef Id="templates.paper" />
-      <ComponentRef Id="templates.raw" />
-      <ComponentRef Id="templates.rss" />
-      <ComponentRef Id="templates.spartan" />
-      <ComponentRef Id="templates.static" />
-
-    </ComponentGroup>
-  </Fragment>
-
-  <Fragment>
-    <DirectoryRef Id="INSTALLDIR">
-
-      <Directory Id="templatesdir" Name="templates" FileSource="$(var.SourceDir)">
-
-        <Component Id="templates.root" Guid="$(var.templates.root.guid)" Win64='$(var.IsX64)'>
-          <File Name="map-cmdline.changelog" KeyPath="yes" />
-          <File Name="map-cmdline.compact" />
-          <File Name="map-cmdline.default" />
-          <File Name="map-cmdline.show" />
-          <File Name="map-cmdline.bisect" />
-          <File Name="map-cmdline.xml" />
-          <File Name="map-cmdline.status" />
-          <File Name="map-cmdline.phases" />
-        </Component>
-
-        <Directory Id="templates.jsondir" Name="json">
-          <Component Id="templates.json" Guid="$(var.templates.json.guid)" Win64='$(var.IsX64)'>
-            <File Id="json.changelist.tmpl" Name="changelist.tmpl" KeyPath="yes" />
-            <File Id="json.graph.tmpl"      Name="graph.tmpl" />
-            <File Id="json.map"             Name="map" />
-          </Component>
-        </Directory>
-
-        <Directory Id="templates.atomdir" Name="atom">
-          <Component Id="templates.atom" Guid="$(var.templates.atom.guid)" Win64='$(var.IsX64)'>
-            <File Id="atom.changelog.tmpl"      Name="changelog.tmpl" KeyPath="yes" />
-            <File Id="atom.changelogentry.tmpl" Name="changelogentry.tmpl" />
-            <File Id="atom.error.tmpl"          Name="error.tmpl" />
-            <File Id="atom.filelog.tmpl"        Name="filelog.tmpl" />
-            <File Id="atom.header.tmpl"         Name="header.tmpl" />
-            <File Id="atom.map"                 Name="map" />
-            <File Id="atom.tagentry.tmpl"       Name="tagentry.tmpl" />
-            <File Id="atom.tags.tmpl"           Name="tags.tmpl" />
-            <File Id="atom.branchentry.tmpl"    Name="branchentry.tmpl" />
-            <File Id="atom.branches.tmpl"       Name="branches.tmpl" />
-            <File Id="atom.bookmarks.tmpl"      Name="bookmarks.tmpl" />
-            <File Id="atom.bookmarkentry.tmpl"  Name="bookmarkentry.tmpl" />
-          </Component>
-        </Directory>
-
-        <Directory Id="templates.coaldir" Name="coal">
-          <Component Id="templates.coal" Guid="$(var.templates.coal.guid)" Win64='$(var.IsX64)'>
-            <File Id="coal.header.tmpl" Name="header.tmpl" KeyPath="yes" />
-            <File Id="coal.map"         Name="map" />
-          </Component>
-        </Directory>
-
-        <Directory Id="templates.gitwebdir" Name="gitweb">
-          <Component Id="templates.gitweb" Guid="$(var.templates.gitweb.guid)" Win64='$(var.IsX64)'>
-            <File Id="gitweb.branches.tmpl"       Name="branches.tmpl" KeyPath="yes" />
-            <File Id="gitweb.bookmarks.tmpl"      Name="bookmarks.tmpl" />
-            <File Id="gitweb.changelog.tmpl"      Name="changelog.tmpl" />
-            <File Id="gitweb.changelogentry.tmpl" Name="changelogentry.tmpl" />
-            <File Id="gitweb.changeset.tmpl"      Name="changeset.tmpl" />
-            <File Id="gitweb.error.tmpl"          Name="error.tmpl" />
-            <File Id="gitweb.fileannotate.tmpl"   Name="fileannotate.tmpl" />
-            <File Id="gitweb.filecomparison.tmpl" Name="filecomparison.tmpl" />
-            <File Id="gitweb.filediff.tmpl"       Name="filediff.tmpl" />
-            <File Id="gitweb.filelog.tmpl"        Name="filelog.tmpl" />
-            <File Id="gitweb.filerevision.tmpl"   Name="filerevision.tmpl" />
-            <File Id="gitweb.footer.tmpl"         Name="footer.tmpl" />
-            <File Id="gitweb.graph.tmpl"          Name="graph.tmpl" />
-            <File Id="gitweb.graphentry.tmpl"     Name="graphentry.tmpl" />
-            <File Id="gitweb.header.tmpl"         Name="header.tmpl" />
-            <File Id="gitweb.index.tmpl"          Name="index.tmpl" />
-            <File Id="gitweb.manifest.tmpl"       Name="manifest.tmpl" />
-            <File Id="gitweb.map"                 Name="map" />
-            <File Id="gitweb.notfound.tmpl"       Name="notfound.tmpl" />
-            <File Id="gitweb.search.tmpl"         Name="search.tmpl" />
-            <File Id="gitweb.shortlog.tmpl"       Name="shortlog.tmpl" />
-            <File Id="gitweb.summary.tmpl"        Name="summary.tmpl" />
-            <File Id="gitweb.tags.tmpl"           Name="tags.tmpl" />
-            <File Id="gitweb.help.tmpl"           Name="help.tmpl" />
-            <File Id="gitweb.helptopics.tmpl"     Name="helptopics.tmpl" />
-          </Component>
-        </Directory>
-
-        <Directory Id="templates.monobluedir" Name="monoblue">
-          <Component Id="templates.monoblue" Guid="$(var.templates.monoblue.guid)" Win64='$(var.IsX64)'>
-            <File Id="monoblue.branches.tmpl"       Name="branches.tmpl" KeyPath="yes" />
-            <File Id="monoblue.bookmarks.tmpl"      Name="bookmarks.tmpl" />
-            <File Id="monoblue.changelog.tmpl"      Name="changelog.tmpl" />
-            <File Id="monoblue.changelogentry.tmpl" Name="changelogentry.tmpl" />
-            <File Id="monoblue.changeset.tmpl"      Name="changeset.tmpl" />
-            <File Id="monoblue.error.tmpl"          Name="error.tmpl" />
-            <File Id="monoblue.fileannotate.tmpl"   Name="fileannotate.tmpl" />
-            <File Id="monoblue.filecomparison.tmpl" Name="filecomparison.tmpl" />
-            <File Id="monoblue.filediff.tmpl"       Name="filediff.tmpl" />
-            <File Id="monoblue.filelog.tmpl"        Name="filelog.tmpl" />
-            <File Id="monoblue.filerevision.tmpl"   Name="filerevision.tmpl" />
-            <File Id="monoblue.footer.tmpl"         Name="footer.tmpl" />
-            <File Id="monoblue.graph.tmpl"          Name="graph.tmpl" />
-            <File Id="monoblue.graphentry.tmpl"     Name="graphentry.tmpl" />
-            <File Id="monoblue.header.tmpl"         Name="header.tmpl" />
-            <File Id="monoblue.index.tmpl"          Name="index.tmpl" />
-            <File Id="monoblue.manifest.tmpl"       Name="manifest.tmpl" />
-            <File Id="monoblue.map"                 Name="map" />
-            <File Id="monoblue.notfound.tmpl"       Name="notfound.tmpl" />
-            <File Id="monoblue.search.tmpl"         Name="search.tmpl" />
-            <File Id="monoblue.shortlog.tmpl"       Name="shortlog.tmpl" />
-            <File Id="monoblue.summary.tmpl"        Name="summary.tmpl" />
-            <File Id="monoblue.tags.tmpl"           Name="tags.tmpl" />
-            <File Id="monoblue.help.tmpl"           Name="help.tmpl" />
-            <File Id="monoblue.helptopics.tmpl"     Name="helptopics.tmpl" />
-          </Component>
-        </Directory>
-
-        <Directory Id="templates.paperdir" Name="paper">
-          <Component Id="templates.paper" Guid="$(var.templates.paper.guid)" Win64='$(var.IsX64)'>
-            <File Id="paper.branches.tmpl"      Name="branches.tmpl" KeyPath="yes" />
-            <File Id="paper.bookmarks.tmpl"     Name="bookmarks.tmpl" />
-            <File Id="paper.changeset.tmpl"     Name="changeset.tmpl" />
-            <File Id="paper.diffstat.tmpl"      Name="diffstat.tmpl" />
-            <File Id="paper.error.tmpl"         Name="error.tmpl" />
-            <File Id="paper.fileannotate.tmpl"  Name="fileannotate.tmpl" />
-            <File Id="paper.filecomparison.tmpl" Name="filecomparison.tmpl" />
-            <File Id="paper.filediff.tmpl"      Name="filediff.tmpl" />
-            <File Id="paper.filelog.tmpl"       Name="filelog.tmpl" />
-            <File Id="paper.filelogentry.tmpl"  Name="filelogentry.tmpl" />
-            <File Id="paper.filerevision.tmpl"  Name="filerevision.tmpl" />
-            <File Id="paper.footer.tmpl"        Name="footer.tmpl" />
-            <File Id="paper.graph.tmpl"         Name="graph.tmpl" />
-            <File Id="paper.graphentry.tmpl"    Name="graphentry.tmpl" />
-            <File Id="paper.header.tmpl"        Name="header.tmpl" />
-            <File Id="paper.index.tmpl"         Name="index.tmpl" />
-            <File Id="paper.manifest.tmpl"      Name="manifest.tmpl" />
-            <File Id="paper.map"                Name="map" />
-            <File Id="paper.notfound.tmpl"      Name="notfound.tmpl" />
-            <File Id="paper.search.tmpl"        Name="search.tmpl" />
-            <File Id="paper.shortlog.tmpl"      Name="shortlog.tmpl" />
-            <File Id="paper.shortlogentry.tmpl" Name="shortlogentry.tmpl" />
-            <File Id="paper.tags.tmpl"          Name="tags.tmpl" />
-            <File Id="paper.help.tmpl"          Name="help.tmpl" />
-            <File Id="paper.helptopics.tmpl"    Name="helptopics.tmpl" />
-          </Component>
-        </Directory>
-
-        <Directory Id="templates.rawdir" Name="raw">
-          <Component Id="templates.raw" Guid="$(var.templates.raw.guid)" Win64='$(var.IsX64)'>
-            <File Id="raw.changeset.tmpl"    Name="changeset.tmpl" KeyPath="yes" />
-            <File Id="raw.error.tmpl"        Name="error.tmpl" />
-            <File Id="raw.fileannotate.tmpl" Name="fileannotate.tmpl" />
-            <File Id="raw.filediff.tmpl"     Name="filediff.tmpl" />
-            <File Id="raw.graph.tmpl"        Name="graph.tmpl" />
-            <File Id="raw.graphedge.tmpl"    Name="graphedge.tmpl" />
-            <File Id="raw.graphnode.tmpl"    Name="graphnode.tmpl" />
-            <File Id="raw.index.tmpl"        Name="index.tmpl" />
-            <File Id="raw.manifest.tmpl"     Name="manifest.tmpl" />
-            <File Id="raw.map"               Name="map" />
-            <File Id="raw.notfound.tmpl"     Name="notfound.tmpl" />
-            <File Id="raw.search.tmpl"       Name="search.tmpl" />
-            <File Id="raw.logentry.tmpl"     Name="logentry.tmpl" />
-            <File Id="raw.changelog.tmpl"    Name="changelog.tmpl" />
-          </Component>
-        </Directory>
-
-        <Directory Id="templates.rssdir" Name="rss">
-          <Component Id="templates.rss" Guid="$(var.templates.rss.guid)" Win64='$(var.IsX64)'>
-            <File Id="rss.changelog.tmpl"      Name="changelog.tmpl" KeyPath="yes" />
-            <File Id="rss.changelogentry.tmpl" Name="changelogentry.tmpl" />
-            <File Id="rss.error.tmpl"          Name="error.tmpl" />
-            <File Id="rss.filelog.tmpl"        Name="filelog.tmpl" />
-            <File Id="rss.filelogentry.tmpl"   Name="filelogentry.tmpl" />
-            <File Id="rss.header.tmpl"         Name="header.tmpl" />
-            <File Id="rss.map"                 Name="map" />
-            <File Id="rss.tagentry.tmpl"       Name="tagentry.tmpl" />
-            <File Id="rss.tags.tmpl"           Name="tags.tmpl" />
-            <File Id="rss.bookmarks.tmpl"      Name="bookmarks.tmpl" />
-            <File Id="rss.bookmarkentry.tmpl"  Name="bookmarkentry.tmpl" />
-            <File Id="rss.branchentry.tmpl"    Name="branchentry.tmpl" />
-            <File Id="rss.branches.tmpl"       Name="branches.tmpl" />
-          </Component>
-        </Directory>
-
-        <Directory Id="templates.spartandir" Name="spartan">
-          <Component Id="templates.spartan" Guid="$(var.templates.spartan.guid)" Win64='$(var.IsX64)'>
-            <File Id="spartan.branches.tmpl"       Name="branches.tmpl" KeyPath="yes" />
-            <File Id="spartan.changelog.tmpl"      Name="changelog.tmpl" />
-            <File Id="spartan.changelogentry.tmpl" Name="changelogentry.tmpl" />
-            <File Id="spartan.changeset.tmpl"      Name="changeset.tmpl" />
-            <File Id="spartan.error.tmpl"          Name="error.tmpl" />
-            <File Id="spartan.fileannotate.tmpl"   Name="fileannotate.tmpl" />
-            <File Id="spartan.filediff.tmpl"       Name="filediff.tmpl" />
-            <File Id="spartan.filelog.tmpl"        Name="filelog.tmpl" />
-            <File Id="spartan.filelogentry.tmpl"   Name="filelogentry.tmpl" />
-            <File Id="spartan.filerevision.tmpl"   Name="filerevision.tmpl" />
-            <File Id="spartan.footer.tmpl"         Name="footer.tmpl" />
-            <File Id="spartan.graph.tmpl"          Name="graph.tmpl" />
-            <File Id="spartan.graphentry.tmpl"     Name="graphentry.tmpl" />
-            <File Id="spartan.header.tmpl"         Name="header.tmpl" />
-            <File Id="spartan.index.tmpl"          Name="index.tmpl" />
-            <File Id="spartan.manifest.tmpl"       Name="manifest.tmpl" />
-            <File Id="spartan.map"                 Name="map" />
-            <File Id="spartan.notfound.tmpl"       Name="notfound.tmpl" />
-            <File Id="spartan.search.tmpl"         Name="search.tmpl" />
-            <File Id="spartan.shortlog.tmpl"       Name="shortlog.tmpl" />
-            <File Id="spartan.shortlogentry.tmpl"  Name="shortlogentry.tmpl" />
-            <File Id="spartan.tags.tmpl"           Name="tags.tmpl" />
-          </Component>
-        </Directory>
-
-        <Directory Id="templates.staticdir" Name="static">
-          <Component Id="templates.static" Guid="$(var.templates.static.guid)" Win64='$(var.IsX64)'>
-            <File Id="static.background.png"     Name="background.png" KeyPath="yes" />
-            <File Id="static.coal.file.png"      Name="coal-file.png" />
-            <File Id="static.coal.folder.png"    Name="coal-folder.png" />
-            <File Id="static.followlines.js"     Name="followlines.js" />
-            <File Id="static.mercurial.js"       Name="mercurial.js" />
-            <File Id="static.hgicon.png"         Name="hgicon.png" />
-            <File Id="static.hglogo.png"         Name="hglogo.png" />
-            <File Id="static.style.coal.css"     Name="style-extra-coal.css" />
-            <File Id="static.style.gitweb.css"   Name="style-gitweb.css" />
-            <File Id="static.style.monoblue.css" Name="style-monoblue.css" />
-            <File Id="static.style.paper.css"    Name="style-paper.css" />
-            <File Id="static.style.css"          Name="style.css" />
-            <File Id="static.feed.icon"          Name="feed-icon-14x14.png" />
-          </Component>
-        </Directory>
-
-      </Directory>
-
-    </DirectoryRef>
-  </Fragment>
-
- </Wix>
--- a/contrib/zsh_completion	Tue Mar 19 09:23:35 2019 -0400
+++ b/contrib/zsh_completion	Wed Apr 17 13:41:18 2019 -0400
@@ -248,7 +248,7 @@
 
   [[ -d $PREFIX ]] || PREFIX=$PREFIX:h
 
-  _hg_cmd resolve -l ./$PREFIX | while read rstate rpath
+  _hg_cmd resolve -l ./$PREFIX -T '{mergestatus}\ {relpath\(path\)}\\n' | while read rstate rpath
   do
     [[ $rstate == 'R' ]] && resolved_files+=($rpath)
     [[ $rstate == 'U' ]] && unresolved_files+=($rpath)
--- a/doc/Makefile	Tue Mar 19 09:23:35 2019 -0400
+++ b/doc/Makefile	Wed Apr 17 13:41:18 2019 -0400
@@ -6,7 +6,7 @@
 PREFIX=/usr/local
 MANDIR=$(PREFIX)/share/man
 INSTALL=install -c -m 644
-PYTHON=python
+PYTHON?=python
 RSTARGS=
 
 export HGENCODING=UTF-8
@@ -17,6 +17,7 @@
 
 html: $(HTML)
 
+# This logic is duplicated in setup.py:hgbuilddoc()
 common.txt $(SOURCES) $(SOURCES:%.txt=%.gendoc.txt): $(GENDOC)
 	${PYTHON} gendoc.py "$(basename $@)" > $@.tmp
 	mv $@.tmp $@
--- a/doc/check-seclevel.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/doc/check-seclevel.py	Wed Apr 17 13:41:18 2019 -0400
@@ -163,8 +163,8 @@
     (options, args) = optparser.parse_args()
 
     ui = uimod.ui.load()
-    ui.setconfig('ui', 'verbose', options.verbose, '--verbose')
-    ui.setconfig('ui', 'debug', options.debug, '--debug')
+    ui.setconfig(b'ui', b'verbose', options.verbose, b'--verbose')
+    ui.setconfig(b'ui', b'debug', options.debug, b'--debug')
 
     if options.file:
         if checkfile(ui, options.file, options.initlevel):
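
Mercurial's internals are bytes-first, so on Python 3 the section, name,
and source arguments to ui.setconfig() must be bytes rather than str. A
sketch of the call shape, assuming a loaded ui object as in the script
above::

    def setverbosity(ui, verbose, debug):
        # Config section/key names and the "source" label are bytes
        # throughout Mercurial's API.
        ui.setconfig(b'ui', b'verbose', verbose, b'--verbose')
        ui.setconfig(b'ui', b'debug', debug, b'--debug')
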
--- a/doc/hgmanpage.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/doc/hgmanpage.py	Wed Apr 17 13:41:18 2019 -0400
@@ -263,7 +263,7 @@
             # ensure we get a ".TH" as viewers require it.
             self.head.append(self.header())
         # filter body
-        for i in xrange(len(self.body) - 1, 0, -1):
+        for i in range(len(self.body) - 1, 0, -1):
             # remove superfluous vertical gaps.
             if self.body[i] == '.sp\n':
                 if self.body[i - 1][:4] in ('.BI ','.IP '):
@@ -335,7 +335,7 @@
                 elif style.endswith('roman'):
                     self._indent = 5
 
-            def next(self):
+            def __next__(self):
                 if self._style == 'bullet':
                     return self.enum_style[self._style]
                 elif self._style == 'emdash':
@@ -353,6 +353,9 @@
                     return res.lower()
                 else:
                     return "%d." % self._cnt
+
+            next = __next__
+
             def get_width(self):
                 return self._indent
             def __repr__(self):
@@ -376,7 +379,7 @@
         tmpl = (".TH %(title_upper)s %(manual_section)s"
                 " \"%(date)s\" \"%(version)s\" \"%(manual_group)s\"\n"
                 ".SH NAME\n"
-                "%(title)s \- %(subtitle)s\n")
+                "%(title)s \\- %(subtitle)s\n")
         return tmpl % self._docinfo
 
     def append_header(self):
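
Three Python 3 portability fixes in one file: xrange becomes range, the
list enumerator gains a __next__ method (Python 3's iterator protocol)
with "next = __next__" kept as a Python 2 alias, and the "\-" in the .TH
template becomes "\\-" so the backslash troff needs survives compilation
without an invalid-escape warning. The dual-protocol pattern in
isolation::

    class counter(object):
        """Minimal iterator that works under both protocols."""
        def __init__(self, n):
            self._i, self._n = 0, n
        def __iter__(self):
            return self
        def __next__(self):             # Python 3 protocol
            if self._i >= self._n:
                raise StopIteration
            self._i += 1
            return self._i
        next = __next__                 # Python 2 alias

    assert list(counter(3)) == [1, 2, 3]
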
--- a/hgext/absorb.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/absorb.py	Wed Apr 17 13:41:18 2019 -0400
@@ -50,7 +50,6 @@
     phases,
     pycompat,
     registrar,
-    repair,
     scmutil,
     util,
 )
@@ -191,9 +190,9 @@
             pctx = None # do not add another immutable fctx
             break
         fctxmap[ctx] = fctx # only for mutable fctxs
-        renamed = fctx.renamed()
-        if renamed:
-            path = renamed[0] # follow rename
+        copy = fctx.copysource()
+        if copy:
+            path = copy # follow rename
             if path in ctx: # but do not follow copy
                 pctx = ctx.p1()
                 break
@@ -232,8 +231,8 @@
         else:
             content = fctx.data()
         mode = (fctx.islink(), fctx.isexec())
-        renamed = fctx.renamed() # False or (path, node)
-        return content, mode, (renamed and renamed[0])
+        copy = fctx.copysource()
+        return content, mode, copy
 
 def overlaycontext(memworkingcopy, ctx, parents=None, extra=None):
     """({path: content}, ctx, (p1node, p2node)?, {}?) -> memctx
@@ -683,16 +682,12 @@
 
     def commit(self):
         """commit changes. update self.finalnode, self.replacemap"""
-        with self.repo.wlock(), self.repo.lock():
-            with self.repo.transaction('absorb') as tr:
-                self._commitstack()
-                self._movebookmarks(tr)
-                if self.repo['.'].node() in self.replacemap:
-                    self._moveworkingdirectoryparent()
-                if self._useobsolete:
-                    self._obsoleteoldcommits()
-            if not self._useobsolete: # strip must be outside transactions
-                self._stripoldcommits()
+        with self.repo.transaction('absorb') as tr:
+            self._commitstack()
+            self._movebookmarks(tr)
+            if self.repo['.'].node() in self.replacemap:
+                self._moveworkingdirectoryparent()
+            self._cleanupoldcommits()
         return self.finalnode
 
     def printchunkstats(self):
@@ -726,7 +721,6 @@
                 # nothing changed, nothing commited
                 nextp1 = ctx
                 continue
-            msg = ''
             if self._willbecomenoop(memworkingcopy, ctx, nextp1):
                 # changeset is no longer necessary
                 self.replacemap[ctx.node()] = None
@@ -850,31 +844,19 @@
         if self._useobsolete and self.ui.configbool('absorb', 'add-noise'):
             extra['absorb_source'] = ctx.hex()
         mctx = overlaycontext(memworkingcopy, ctx, parents, extra=extra)
-        # preserve phase
-        with mctx.repo().ui.configoverride({
-            ('phases', 'new-commit'): ctx.phase()}):
-            return mctx.commit()
+        return mctx.commit()
 
     @util.propertycache
     def _useobsolete(self):
         """() -> bool"""
         return obsolete.isenabled(self.repo, obsolete.createmarkersopt)
 
-    def _obsoleteoldcommits(self):
-        relations = [(self.repo[k], v and (self.repo[v],) or ())
-                     for k, v in self.replacemap.iteritems()]
-        if relations:
-            obsolete.createmarkers(self.repo, relations)
-
-    def _stripoldcommits(self):
-        nodelist = self.replacemap.keys()
-        # make sure we don't strip innocent children
-        revs = self.repo.revs('%ln - (::(heads(%ln::)-%ln))', nodelist,
-                              nodelist, nodelist)
-        tonode = self.repo.changelog.node
-        nodelist = [tonode(r) for r in revs]
-        if nodelist:
-            repair.strip(self.repo.ui, self.repo, nodelist)
+    def _cleanupoldcommits(self):
+        replacements = {k: ([v] if v is not None else [])
+                        for k, v in self.replacemap.iteritems()}
+        if replacements:
+            scmutil.cleanupnodes(self.repo, replacements, operation='absorb',
+                                 fixphase=True)
 
 def _parsechunk(hunk):
     """(crecord.uihunk or patch.recordhunk) -> (path, (a1, a2, [bline]))"""
@@ -1023,6 +1005,11 @@
     Returns 0 on success, 1 if all chunks were ignored and nothing amended.
     """
     opts = pycompat.byteskwargs(opts)
-    state = absorb(ui, repo, pats=pats, opts=opts)
-    if sum(s[0] for s in state.chunkstats.values()) == 0:
-        return 1
+
+    with repo.wlock(), repo.lock():
+        if not opts['dry_run']:
+            cmdutil.checkunfinished(repo)
+
+        state = absorb(ui, repo, pats=pats, opts=opts)
+        if sum(s[0] for s in state.chunkstats.values()) == 0:
+            return 1
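
The absorb changes hang together: fctx.copysource() replaces the older
renamed() tuple (it returns just the source path, or None), commit cleanup
goes through scmutil.cleanupnodes() instead of hand-rolled obsolescence
markers or repair.strip() (fixphase=True also makes the explicit
phase-preserving configoverride redundant), and the wlock/lock pair moves
out to the command function so cmdutil.checkunfinished() and the whole
operation run under a single lock scope. The new cleanup call, restated as
a standalone sketch over a {oldnode: newnode-or-None} replacemap::

    from mercurial import scmutil

    def cleanupoldcommits(repo, replacemap):
        # cleanupnodes() takes {old: [successor, ...]}; an empty
        # successor list means "pruned". fixphase=True preserves the
        # phases of the replaced commits, which absorb previously
        # handled by hand with a configoverride.
        replacements = {k: ([v] if v is not None else [])
                        for k, v in replacemap.items()}
        if replacements:
            scmutil.cleanupnodes(repo, replacements, operation='absorb',
                                 fixphase=True)
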
--- a/hgext/acl.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/acl.py	Wed Apr 17 13:41:18 2019 -0400
@@ -293,15 +293,15 @@
             # if ug is a user  name: !username
             # if ug is a group name: !@groupname
             ug = ug[1:]
-            if not ug.startswith('@') and user != ug \
-                or ug.startswith('@') and user not in _getusers(ui, ug[1:]):
+            if (not ug.startswith('@') and user != ug
+                or ug.startswith('@') and user not in _getusers(ui, ug[1:])):
                 return True
 
         # Test for user or group. Format:
         # if ug is a user  name: username
         # if ug is a group name: @groupname
-        elif user == ug \
-             or ug.startswith('@') and user in _getusers(ui, ug[1:]):
+        elif (user == ug
+              or ug.startswith('@') and user in _getusers(ui, ug[1:])):
             return True
 
     return False
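
A pure style fix: parenthesized conditions replace backslash line
continuation, as PEP 8 recommends; precedence ("and" binds tighter than
"or") is unchanged, so both spellings evaluate identically::

    a, b, c, d = True, False, False, True
    old = a and b \
        or c and d
    new = (a and b
           or c and d)
    assert old == new
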
--- a/hgext/automv.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/automv.py	Wed Apr 17 13:41:18 2019 -0400
@@ -64,7 +64,8 @@
         if threshold > 0:
             match = scmutil.match(repo[None], pats, opts)
             added, removed = _interestingfiles(repo, match)
-            renames = _findrenames(repo, match, added, removed,
+            uipathfn = scmutil.getuipathfn(repo, legacyrelativevalue=True)
+            renames = _findrenames(repo, uipathfn, added, removed,
                                    threshold / 100.0)
 
     with repo.wlock():
@@ -89,7 +90,7 @@
 
     return added, removed
 
-def _findrenames(repo, matcher, added, removed, similarity):
+def _findrenames(repo, uipathfn, added, removed, similarity):
     """Find what files in added are really moved files.
 
     Any file named in removed that is at least similarity% similar to a file
@@ -103,7 +104,7 @@
             if repo.ui.verbose:
                 repo.ui.status(
                     _('detected move of %s as %s (%d%% similar)\n') % (
-                        matcher.rel(src), matcher.rel(dst), score * 100))
+                        uipathfn(src), uipathfn(dst), score * 100))
             renames[dst] = src
     if renames:
         repo.ui.status(_('detected move of %d files\n') % len(renames))
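
matcher.rel() is being retired in favour of an explicit path-formatting
callable: scmutil.getuipathfn() returns a function mapping repo-relative
paths to the user-facing form, with legacyrelativevalue=True preserving the
old cwd-relative output. A sketch of the reporting step, assuming uipathfn
was obtained as above and the usual "from mercurial.i18n import _"::

    def reportmove(repo, uipathfn, src, dst, score):
        # uipathfn turns a repo-relative path into what the user should
        # see, matching the output matcher.rel() used to produce.
        repo.ui.status(_('detected move of %s as %s (%d%% similar)\n')
                       % (uipathfn(src), uipathfn(dst), score * 100))
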
--- a/hgext/blackbox.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/blackbox.py	Wed Apr 17 13:41:18 2019 -0400
@@ -118,7 +118,6 @@
         date = dateutil.datestr(default, ui.config('blackbox', 'date-format'))
         user = procutil.getuser()
         pid = '%d' % procutil.getpid()
-        rev = '(unknown)'
         changed = ''
         ctx = self._repo[None]
         parents = ctx.parents()
@@ -191,7 +190,7 @@
             break
 
         # count the commands by matching lines like: 2013/01/23 19:13:36 root>
-        if re.match('^\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2} .*> .*', line):
+        if re.match(br'^\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2} .*> .*', line):
             count += 1
         output.append(line)
 
--- a/hgext/bugzilla.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/bugzilla.py	Wed Apr 17 13:41:18 2019 -0400
@@ -303,6 +303,7 @@
     error,
     logcmdutil,
     mail,
+    pycompat,
     registrar,
     url,
     util,
@@ -342,10 +343,10 @@
     default='bugs',
 )
 configitem('bugzilla', 'fixregexp',
-    default=(r'fix(?:es)?\s*(?:bugs?\s*)?,?\s*'
-             r'(?:nos?\.?|num(?:ber)?s?)?\s*'
-             r'(?P<ids>(?:#?\d+\s*(?:,?\s*(?:and)?)?\s*)+)'
-             r'\.?\s*(?:h(?:ours?)?\s*(?P<hours>\d*(?:\.\d+)?))?')
+    default=(br'fix(?:es)?\s*(?:bugs?\s*)?,?\s*'
+             br'(?:nos?\.?|num(?:ber)?s?)?\s*'
+             br'(?P<ids>(?:#?\d+\s*(?:,?\s*(?:and)?)?\s*)+)'
+             br'\.?\s*(?:h(?:ours?)?\s*(?P<hours>\d*(?:\.\d+)?))?')
 )
 configitem('bugzilla', 'fixresolution',
     default='FIXED',
@@ -363,9 +364,9 @@
     default=None,
 )
 configitem('bugzilla', 'regexp',
-    default=(r'bugs?\s*,?\s*(?:#|nos?\.?|num(?:ber)?s?)?\s*'
-             r'(?P<ids>(?:\d+\s*(?:,?\s*(?:and)?)?\s*)+)'
-             r'\.?\s*(?:h(?:ours?)?\s*(?P<hours>\d*(?:\.\d+)?))?')
+    default=(br'bugs?\s*,?\s*(?:#|nos?\.?|num(?:ber)?s?)?\s*'
+             br'(?P<ids>(?:\d+\s*(?:,?\s*(?:and)?)?\s*)+)'
+             br'\.?\s*(?:h(?:ours?)?\s*(?P<hours>\d*(?:\.\d+)?))?')
 )
 configitem('bugzilla', 'strip',
     default=0,
@@ -599,8 +600,8 @@
 
     def __init__(self, ui):
         bzmysql.__init__(self, ui)
-        self.default_notify = \
-            "cd %(bzdir)s && perl -T contrib/sendbugmail.pl %(id)s %(user)s"
+        self.default_notify = (
+            "cd %(bzdir)s && perl -T contrib/sendbugmail.pl %(id)s %(user)s")
 
 class bzmysql_3_0(bzmysql_2_18):
     '''support for bugzilla 3.0 series.'''
@@ -733,7 +734,7 @@
         c = self.bzproxy.Bug.comments({'ids': [id],
                                        'include_fields': ['text'],
                                        'token': self.bztoken})
-        return ''.join([t['text'] for t in c['bugs'][str(id)]['comments']])
+        return ''.join([t['text'] for t in c['bugs']['%d' % id]['comments']])
 
     def filter_real_bug_ids(self, bugs):
         probe = self.bzproxy.Bug.get({'ids': sorted(bugs.keys()),
@@ -804,11 +805,11 @@
 
     def makecommandline(self, fieldname, value):
         if self.bzvermajor >= 4:
-            return "@%s %s" % (fieldname, str(value))
+            return "@%s %s" % (fieldname, pycompat.bytestr(value))
         else:
             if fieldname == "id":
                 fieldname = "bug_id"
-            return "@%s = %s" % (fieldname, str(value))
+            return "@%s = %s" % (fieldname, pycompat.bytestr(value))
 
     def send_bug_modify_email(self, bugid, commands, comment, committer):
         '''send modification message to Bugzilla bug via email.
@@ -873,7 +874,7 @@
         self.fixresolution = self.ui.config('bugzilla', 'fixresolution')
 
     def apiurl(self, targets, include_fields=None):
-        url = '/'.join([self.bzroot] + [str(t) for t in targets])
+        url = '/'.join([self.bzroot] + [pycompat.bytestr(t) for t in targets])
         qv = {}
         if self.apikey:
             qv['api_key'] = self.apikey
@@ -938,7 +939,7 @@
         for bugid in bugs.keys():
             burl = self.apiurl(('bug', bugid, 'comment'), include_fields='text')
             result = self._fetch(burl)
-            comments = result['bugs'][str(bugid)]['comments']
+            comments = result['bugs'][pycompat.bytestr(bugid)]['comments']
             if any(sn in c['text'] for c in comments):
                 self.ui.status(_('bug %d already knows about changeset %s\n') %
                                (bugid, sn))
@@ -1011,7 +1012,7 @@
             self.ui.config('bugzilla', 'regexp'), re.IGNORECASE)
         self.fix_re = re.compile(
             self.ui.config('bugzilla', 'fixregexp'), re.IGNORECASE)
-        self.split_re = re.compile(r'\D+')
+        self.split_re = re.compile(br'\D+')
 
     def find_bugs(self, ctx):
         '''return bugs dictionary created from commit comment.
@@ -1098,7 +1099,7 @@
         t = logcmdutil.changesettemplater(self.ui, self.repo, spec)
         self.ui.pushbuffer()
         t.show(ctx, changes=ctx.changeset(),
-               bug=str(bugid),
+               bug=pycompat.bytestr(bugid),
                hgweb=self.ui.config('web', 'baseurl'),
                root=self.repo.root,
                webroot=webroot(self.repo.root))
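
``str(bugid)`` yields unicode under Python 3, which cannot be
interpolated into Mercurial's byte strings; ``pycompat.bytestr``
converts a value to bytes on both major versions. A rough equivalent
for simple values such as bug ids::

    import sys

    def bytestr(value):
        """Sketch of pycompat.bytestr for int/str/bytes inputs."""
        if isinstance(value, bytes):
            return value
        if sys.version_info[0] >= 3:
            return str(value).encode('ascii')
        return str(value)

    assert b'@%s %s' % (b'id', bytestr(1234)) == b'@id 1234'
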
--- a/hgext/commitextras.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/commitextras.py	Wed Apr 17 13:41:18 2019 -0400
@@ -58,7 +58,7 @@
                 if not k:
                     msg = _("unable to parse '%s', keys can't be empty")
                     raise error.Abort(msg % raw)
-                if re.search('[^\w-]', k):
+                if re.search(br'[^\w-]', k):
                     msg = _("keys can only contain ascii letters, digits,"
                             " '_' and '-'")
                     raise error.Abort(msg)
--- a/hgext/convert/convcmd.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/convert/convcmd.py	Wed Apr 17 13:41:18 2019 -0400
@@ -123,7 +123,7 @@
             exceptions.append(inst)
     if not ui.quiet:
         for inst in exceptions:
-            ui.write("%s\n" % pycompat.bytestr(inst))
+            ui.write("%s\n" % pycompat.bytestr(inst.args[0]))
     raise error.Abort(_('%s: missing or unsupported repository') % path)
 
 def convertsink(ui, path, type):
--- a/hgext/convert/cvs.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/convert/cvs.py	Wed Apr 17 13:41:18 2019 -0400
@@ -76,7 +76,6 @@
         d = encoding.getcwd()
         try:
             os.chdir(self.path)
-            id = None
 
             cache = 'update'
             if not self.ui.configbool('convert', 'cvsps.cache'):
@@ -219,7 +218,7 @@
         if "UseUnchanged" in r:
             self.writep.write("UseUnchanged\n")
             self.writep.flush()
-            r = self.readp.readline()
+            self.readp.readline()
 
     def getheads(self):
         self._parse()
--- a/hgext/convert/cvsps.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/convert/cvsps.py	Wed Apr 17 13:41:18 2019 -0400
@@ -122,7 +122,7 @@
     re_31 = re.compile(b'----------------------------$')
     re_32 = re.compile(b'======================================='
                        b'======================================$')
-    re_50 = re.compile(b'revision ([\\d.]+)(\s+locked by:\s+.+;)?$')
+    re_50 = re.compile(br'revision ([\d.]+)(\s+locked by:\s+.+;)?$')
     re_60 = re.compile(br'date:\s+(.+);\s+author:\s+(.+);\s+state:\s+(.+?);'
                        br'(\s+lines:\s+(\+\d+)?\s+(-\d+)?;)?'
                        br'(\s+commitid:\s+([^;]+);)?'
@@ -776,8 +776,8 @@
 
             # Ensure no changeset has a synthetic changeset as a parent.
             while p.synthetic:
-                assert len(p.parents) <= 1, \
-                       _('synthetic changeset cannot have multiple parents')
+                assert len(p.parents) <= 1, (
+                       _('synthetic changeset cannot have multiple parents'))
                 if p.parents:
                     p = p.parents[0]
                 else:
@@ -954,12 +954,12 @@
 
         # have we seen the start tag?
         if revisions and off:
-            if revisions[0] == (b"%d" % cs.id) or \
-                revisions[0] in cs.tags:
+            if (revisions[0] == (b"%d" % cs.id) or
+                revisions[0] in cs.tags):
                 off = False
 
         # see if we reached the end tag
         if len(revisions) > 1 and not off:
-            if revisions[1] == (b"%d" % cs.id) or \
-                revisions[1] in cs.tags:
+            if (revisions[1] == (b"%d" % cs.id) or
+                revisions[1] in cs.tags):
                 break
--- a/hgext/convert/git.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/convert/git.py	Wed Apr 17 13:41:18 2019 -0400
@@ -13,6 +13,7 @@
     config,
     error,
     node as nodemod,
+    pycompat,
 )
 
 from . import (
@@ -175,7 +176,8 @@
         self.catfilepipe[0].flush()
         info = self.catfilepipe[1].readline().split()
         if info[1] != ftype:
-            raise error.Abort(_('cannot read %r object at %s') % (ftype, rev))
+            raise error.Abort(_('cannot read %r object at %s') % (
+                pycompat.bytestr(ftype), rev))
         size = int(info[2])
         data = self.catfilepipe[1].read(size)
         if len(data) < size:
@@ -294,7 +296,7 @@
             if not entry:
                 if not l.startswith(':'):
                     continue
-                entry = l.split()
+                entry = tuple(pycompat.bytestr(p) for p in l.split())
                 continue
             f = l
             if entry[4][0] == 'C':
@@ -385,7 +387,7 @@
     def numcommits(self):
         output, ret = self.gitrunlines('rev-list', '--all')
         if ret:
-            raise error.Abort(_('cannot retrieve number of commits in %s') \
+            raise error.Abort(_('cannot retrieve number of commits in %s')
                               % self.path)
         return len(output)
 
--- a/hgext/convert/hg.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/convert/hg.py	Wed Apr 17 13:41:18 2019 -0400
@@ -105,10 +105,6 @@
         if not branch:
             branch = 'default'
         pbranches = [(b[0], b[1] and b[1] or 'default') for b in pbranches]
-        if pbranches:
-            pbranch = pbranches[0][1]
-        else:
-            pbranch = 'default'
 
         branchpath = os.path.join(self.path, branch)
         if setbranch:
@@ -561,7 +557,7 @@
             if name in self.ignored:
                 continue
             try:
-                copysource, _copynode = ctx.filectx(name).renamed()
+                copysource = ctx.filectx(name).copysource()
                 if copysource in self.ignored:
                     continue
                 # Ignore copy sources not in parent revisions
--- a/hgext/convert/monotone.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/convert/monotone.py	Wed Apr 17 13:41:18 2019 -0400
@@ -93,16 +93,16 @@
         kwargs = pycompat.byteskwargs(kwargs)
         command = []
         for k, v in kwargs.iteritems():
-            command.append("%s:%s" % (len(k), k))
+            command.append("%d:%s" % (len(k), k))
             if v:
-                command.append("%s:%s" % (len(v), v))
+                command.append("%d:%s" % (len(v), v))
         if command:
             command.insert(0, 'o')
             command.append('e')
 
         command.append('l')
         for arg in args:
-            command += "%d:%s" % (len(arg), arg)
+            command.append("%d:%s" % (len(arg), arg))
         command.append('e')
         command = ''.join(command)
 
@@ -138,7 +138,7 @@
                 raise error.Abort(_('bad mtn packet - no end of packet size'))
             lengthstr += read
         try:
-            length = long(lengthstr[:-1])
+            length = pycompat.long(lengthstr[:-1])
         except TypeError:
             raise error.Abort(_('bad mtn packet - bad packet size %s')
                 % lengthstr)
@@ -154,7 +154,7 @@
         retval = []
         while True:
             commandnbr, stream, length, output = self.mtnstdioreadpacket()
-            self.ui.debug('mtn: read packet %s:%s:%s\n' %
+            self.ui.debug('mtn: read packet %s:%s:%d\n' %
                 (commandnbr, stream, length))
 
             if stream == 'l':
@@ -214,13 +214,13 @@
         #   key "test@selenic.com"
         # mtn >= 0.45:
         #   key [ff58a7ffb771907c4ff68995eada1c4da068d328]
-        certlist = re.split('\n\n      key ["\[]', certlist)
+        certlist = re.split(br'\n\n      key ["\[]', certlist)
         for e in certlist:
             m = self.cert_re.match(e)
             if m:
                 name, value = m.groups()
-                value = value.replace(r'\"', '"')
-                value = value.replace(r'\\', '\\')
+                value = value.replace(br'\"', '"')
+                value = value.replace(br'\\', '\\')
                 certs[name] = value
         # Monotone may have subsecond dates: 2005-02-05T09:39:12.364306
         # and all times are stored in UTC
@@ -335,7 +335,6 @@
 
     def before(self):
         # Check if we have a new enough version to use automate stdio
-        version = 0.0
         try:
             versionstr = self.mtnrunsingle("interface_version")
             version = float(versionstr)
--- a/hgext/convert/p4.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/convert/p4.py	Wed Apr 17 13:41:18 2019 -0400
@@ -64,12 +64,12 @@
         self.encoding = self.ui.config('convert', 'p4.encoding',
                                        convcmd.orig_encoding)
         self.re_type = re.compile(
-            "([a-z]+)?(text|binary|symlink|apple|resource|unicode|utf\d+)"
-            "(\+\w+)?$")
+            br"([a-z]+)?(text|binary|symlink|apple|resource|unicode|utf\d+)"
+            br"(\+\w+)?$")
         self.re_keywords = re.compile(
-            r"\$(Id|Header|Date|DateTime|Change|File|Revision|Author)"
-            r":[^$\n]*\$")
-        self.re_keywords_old = re.compile("\$(Id|Header):[^$\n]*\$")
+            br"\$(Id|Header|Date|DateTime|Change|File|Revision|Author)"
+            br":[^$\n]*\$")
+        self.re_keywords_old = re.compile(br"\$(Id|Header):[^$\n]*\$")
 
         if revs and len(revs) > 1:
             raise error.Abort(_("p4 source does not support specifying "
@@ -198,8 +198,8 @@
             for filename in copiedfiles:
                 oldname = depotname[filename]
 
-                flcmd = 'p4 -G filelog %s' \
-                      % procutil.shellquote(oldname)
+                flcmd = ('p4 -G filelog %s'
+                         % procutil.shellquote(oldname))
                 flstdout = procutil.popen(flcmd, mode='rb')
 
                 copiedfilename = None
@@ -272,8 +272,8 @@
         return self.heads
 
     def getfile(self, name, rev):
-        cmd = 'p4 -G print %s' \
-            % procutil.shellquote("%s#%s" % (self.depotname[name], rev))
+        cmd = ('p4 -G print %s'
+               % procutil.shellquote("%s#%s" % (self.depotname[name], rev)))
 
         lasterror = None
         while True:
--- a/hgext/convert/subversion.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/convert/subversion.py	Wed Apr 17 13:41:18 2019 -0400
@@ -790,7 +790,7 @@
                         if childpath:
                             removed.add(self.recode(childpath))
                 else:
-                    self.ui.debug('unknown path in revision %d: %s\n' % \
+                    self.ui.debug('unknown path in revision %d: %s\n' %
                                   (revnum, path))
             elif kind == svn.core.svn_node_dir:
                 if ent.action == 'M':
@@ -984,7 +984,6 @@
         # TODO: ra.get_file transmits the whole file instead of diffs.
         if file in self.removed:
             return None, None
-        mode = ''
         try:
             new_module, revnum = revsplit(rev)[1:]
             if self.module != new_module:
@@ -1183,12 +1182,12 @@
         m = set()
         output = self.run0('ls', recursive=True, xml=True)
         doc = xml.dom.minidom.parseString(output)
-        for e in doc.getElementsByTagName('entry'):
+        for e in doc.getElementsByTagName(r'entry'):
             for n in e.childNodes:
-                if n.nodeType != n.ELEMENT_NODE or n.tagName != 'name':
+                if n.nodeType != n.ELEMENT_NODE or n.tagName != r'name':
                     continue
-                name = ''.join(c.data for c in n.childNodes
-                               if c.nodeType == c.TEXT_NODE)
+                name = r''.join(c.data for c in n.childNodes
+                                if c.nodeType == c.TEXT_NODE)
                 # Entries are compared with names coming from
                 # mercurial, so bytes with undefined encoding. Our
                 # best bet is to assume they are in local
@@ -1207,10 +1206,18 @@
                     os.unlink(filename)
             except OSError:
                 pass
+
+            if self.is_exec:
+                # Check the executability of the file before the change,
+                # because `vfs.write` can reset the exec bit.
+                wasexec = False
+                if os.path.exists(self.wjoin(filename)):
+                    wasexec = self.is_exec(self.wjoin(filename))
+
             self.wopener.write(filename, data)
 
             if self.is_exec:
-                if self.is_exec(self.wjoin(filename)):
+                if wasexec:
                     if 'x' not in flags:
                         self.delexec.append(filename)
                 else:
@@ -1325,8 +1332,8 @@
             try:
                 rev = self.commit_re.search(output).group(1)
             except AttributeError:
-                if parents and not files:
-                    return parents[0]
+                if not files:
+                    return parents[0] if parents else None
                 self.ui.warn(_('unexpected svn output:\n'))
                 self.ui.warn(output)
                 raise error.Abort(_('unable to cope with svn output'))
--- a/hgext/extdiff.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/extdiff.py	Wed Apr 17 13:41:18 2019 -0400
@@ -59,6 +59,22 @@
   [diff-tools]
   kdiff3.diffargs=--L1 '$plabel1' --L2 '$clabel' $parent $child
 
+If a program has a graphical interface, it is worth telling Mercurial
+about it: doing so prevents the program from being mistakenly used in
+a terminal-only environment (such as an SSH terminal session), and
+makes :hg:`extdiff --per-file` open multiple file diffs at once instead
+of one by one (if you still want to open file diffs one by one, you
+can use the --confirm option).
+
+Declaring that a tool has a graphical interface can be done with the
+``gui`` flag next to where ``diffargs`` are specified:
+
+::
+
+  [diff-tools]
+  kdiff3.diffargs=--L1 '$plabel1' --L2 '$clabel' $parent $child
+  kdiff3.gui = true
+
 You can use -I/-X and list of file or directory names like normal
 :hg:`diff` command. The extdiff extension makes snapshots of only
 needed files, so running the external diff program will actually be
@@ -71,6 +87,7 @@
 import re
 import shutil
 import stat
+import subprocess
 
 from mercurial.i18n import _
 from mercurial.node import (
@@ -80,6 +97,7 @@
 from mercurial import (
     archival,
     cmdutil,
+    encoding,
     error,
     filemerge,
     formatter,
@@ -104,11 +122,19 @@
     generic=True,
 )
 
+configitem('extdiff', br'gui\..*',
+    generic=True,
+)
+
 configitem('diff-tools', br'.*\.diffargs$',
     default=None,
     generic=True,
 )
 
+configitem('diff-tools', br'.*\.gui$',
+    generic=True,
+)
+
 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
@@ -175,7 +201,97 @@
         cmdline += ' $parent1 $child'
     return re.sub(regex, quote, cmdline)
 
-def dodiff(ui, repo, cmdline, pats, opts):
+def _systembackground(cmd, environ=None, cwd=None):
+    ''' like 'procutil.system', but returns the Popen object directly
+        so we don't have to wait on it.
+    '''
+    cmd = procutil.quotecommand(cmd)
+    env = procutil.shellenviron(environ)
+    proc = subprocess.Popen(procutil.tonativestr(cmd),
+                            shell=True, close_fds=procutil.closefds,
+                            env=procutil.tonativeenv(env),
+                            cwd=pycompat.rapply(procutil.tonativestr, cwd))
+    return proc
+
+def _runperfilediff(cmdline, repo_root, ui, guitool, do3way, confirm,
+                    commonfiles, tmproot, dir1a, dir1b,
+                    dir2root, dir2,
+                    rev1a, rev1b, rev2):
+    # Note that we need to sort the list of files because it was
+    # built in an "unstable" way and it's annoying to get files in a
+    # random order, especially when "confirm" mode is enabled.
+    waitprocs = []
+    totalfiles = len(commonfiles)
+    for idx, commonfile in enumerate(sorted(commonfiles)):
+        path1a = os.path.join(tmproot, dir1a, commonfile)
+        label1a = commonfile + rev1a
+        if not os.path.isfile(path1a):
+            path1a = os.devnull
+
+        path1b = ''
+        label1b = ''
+        if do3way:
+            path1b = os.path.join(tmproot, dir1b, commonfile)
+            label1b = commonfile + rev1b
+            if not os.path.isfile(path1b):
+                path1b = os.devnull
+
+        path2 = os.path.join(dir2root, dir2, commonfile)
+        label2 = commonfile + rev2
+
+        if confirm:
+            # Prompt before showing this diff
+            difffiles = _('diff %s (%d of %d)') % (commonfile, idx + 1,
+                                                   totalfiles)
+            responses = _('[Yns?]'
+                          '$$ &Yes, show diff'
+                          '$$ &No, skip this diff'
+                          '$$ &Skip remaining diffs'
+                          '$$ &? (display help)')
+            r = ui.promptchoice('%s %s' % (difffiles, responses))
+            if r == 3: # ?
+                while r == 3:
+                    for c, t in ui.extractchoices(responses)[1]:
+                        ui.write('%s - %s\n' % (c, encoding.lower(t)))
+                    r = ui.promptchoice('%s %s' % (difffiles, responses))
+            if r == 0: # yes
+                pass
+            elif r == 1: # no
+                continue
+            elif r == 2: # skip
+                break
+
+        curcmdline = formatcmdline(
+            cmdline, repo_root, do3way=do3way,
+            parent1=path1a, plabel1=label1a,
+            parent2=path1b, plabel2=label1b,
+            child=path2, clabel=label2)
+
+        if confirm or not guitool:
+            # Run the comparison program and wait for it to exit
+            # before we show the next file.
+            # This is because either we need to wait for confirmation
+            # from the user between each invocation, or because, as far
+            # as we know, the tool doesn't have a GUI, in which case
+            # we can't run multiple CLI programs at the same time.
+            ui.debug('running %r in %s\n' %
+                     (pycompat.bytestr(curcmdline), tmproot))
+            ui.system(curcmdline, cwd=tmproot, blockedtag='extdiff')
+        else:
+            # Run the comparison program but don't wait, as we're
+            # going to rapid-fire each file diff and then wait on
+            # the whole group.
+            ui.debug('running %r in %s (backgrounded)\n' %
+                     (pycompat.bytestr(curcmdline), tmproot))
+            proc = _systembackground(curcmdline, cwd=tmproot)
+            waitprocs.append(proc)
+
+    if waitprocs:
+        with ui.timeblockedsection('extdiff'):
+            for proc in waitprocs:
+                proc.wait()
+
+def dodiff(ui, repo, cmdline, pats, opts, guitool=False):
     '''Do the actual diff:
 
     - copy to a temp structure if diffing 2 internal revisions
@@ -201,6 +317,9 @@
         else:
             ctx1b = repo[nullid]
 
+    perfile = opts.get('per_file')
+    confirm = opts.get('confirm')
+
     node1a = ctx1a.node()
     node1b = ctx1b.node()
     node2 = ctx2.node()
@@ -217,6 +336,8 @@
     if opts.get('patch'):
         if subrepos:
             raise error.Abort(_('--patch cannot be used with --subrepos'))
+        if perfile:
+            raise error.Abort(_('--patch cannot be used with --per-file'))
         if node2 is None:
             raise error.Abort(_('--patch requires two revisions'))
     else:
@@ -304,15 +425,24 @@
             label1b = None
             fnsandstat = []
 
-        # Run the external tool on the 2 temp directories or the patches
-        cmdline = formatcmdline(
-            cmdline, repo.root, do3way=do3way,
-            parent1=dir1a, plabel1=label1a,
-            parent2=dir1b, plabel2=label1b,
-            child=dir2, clabel=label2)
-        ui.debug('running %r in %s\n' % (pycompat.bytestr(cmdline),
-                                         tmproot))
-        ui.system(cmdline, cwd=tmproot, blockedtag='extdiff')
+        if not perfile:
+            # Run the external tool on the 2 temp directories or the patches
+            cmdline = formatcmdline(
+                cmdline, repo.root, do3way=do3way,
+                parent1=dir1a, plabel1=label1a,
+                parent2=dir1b, plabel2=label1b,
+                child=dir2, clabel=label2)
+            ui.debug('running %r in %s\n' % (pycompat.bytestr(cmdline),
+                                             tmproot))
+            ui.system(cmdline, cwd=tmproot, blockedtag='extdiff')
+        else:
+            # Run the external tool once for each pair of files
+            _runperfilediff(
+                cmdline, repo.root, ui, guitool=guitool,
+                do3way=do3way, confirm=confirm,
+                commonfiles=common, tmproot=tmproot, dir1a=dir1a, dir1b=dir1b,
+                dir2root=dir2root, dir2=dir2,
+                rev1a=rev1a, rev1b=rev1b, rev2=rev2)
 
         for copy_fn, working_fn, st in fnsandstat:
             cpstat = os.lstat(copy_fn)
@@ -340,6 +470,10 @@
      _('pass option to comparison program'), _('OPT')),
     ('r', 'rev', [], _('revision'), _('REV')),
     ('c', 'change', '', _('change made by revision'), _('REV')),
+    ('', 'per-file', False,
+     _('compare each file instead of revision snapshots')),
+    ('', 'confirm', False,
+     _('prompt user before each external program invocation')),
     ('', 'patch', None, _('compare patches for two revisions'))
     ] + cmdutil.walkopts + cmdutil.subrepoopts
 
@@ -357,15 +491,29 @@
     default options "-Npru".
 
     To select a different program, use the -p/--program option. The
-    program will be passed the names of two directories to compare. To
-    pass additional options to the program, use -o/--option. These
-    will be passed before the names of the directories to compare.
+    program will be passed the names of two directories to compare,
+    unless the --per-file option is specified (see below). To pass
+    additional options to the program, use -o/--option. These will be
+    passed before the names of the directories or files to compare.
 
     When two revision arguments are given, then changes are shown
     between those revisions. If only one revision is specified then
     that revision is compared to the working directory, and, when no
     revisions are specified, the working directory files are compared
-    to its parent.'''
+    to its parent.
+
+    The --per-file option runs the external program repeatedly on each
+    file to diff, instead of once on two directories. By default,
+    this happens one file at a time: the next file diff is opened in
+    the external program only after the previous invocation (for the
+    previous file diff) has exited. If the external program has a
+    graphical interface, it can open all the file diffs at once instead
+    of one by one. See :hg:`help -e extdiff` for information about how
+    to tell Mercurial that a given program has a graphical interface.
+
+    The --confirm option will prompt the user before each invocation of
+    the external program. It is ignored if --per-file isn't specified.
+    '''
     opts = pycompat.byteskwargs(opts)
     program = opts.get('program')
     option = opts.get('option')
@@ -390,20 +538,22 @@
     to its parent.
     """
 
-    def __init__(self, path, cmdline):
+    def __init__(self, path, cmdline, isgui):
         # We can't pass non-ASCII through docstrings (and path is
         # in an unknown encoding anyway), but avoid double separators on
         # Windows
         docpath = stringutil.escapestr(path).replace(b'\\\\', b'\\')
         self.__doc__ %= {r'path': pycompat.sysstr(stringutil.uirepr(docpath))}
         self._cmdline = cmdline
+        self._isgui = isgui
 
     def __call__(self, ui, repo, *pats, **opts):
         opts = pycompat.byteskwargs(opts)
         options = ' '.join(map(procutil.shellquote, opts['option']))
         if options:
             options = ' ' + options
-        return dodiff(ui, repo, self._cmdline + options, pats, opts)
+        return dodiff(ui, repo, self._cmdline + options, pats, opts,
+                      guitool=self._isgui)
 
 def uisetup(ui):
     for cmd, path in ui.configitems('extdiff'):
@@ -418,7 +568,8 @@
             cmdline = procutil.shellquote(path)
             if diffopts:
                 cmdline += ' ' + diffopts
-        elif cmd.startswith('opts.'):
+            isgui = ui.configbool('extdiff', 'gui.' + cmd)
+        elif cmd.startswith('opts.') or cmd.startswith('gui.'):
             continue
         else:
             if path:
@@ -432,15 +583,20 @@
                     path = filemerge.findexternaltool(ui, cmd) or cmd
                 cmdline = procutil.shellquote(path)
                 diffopts = False
+            isgui = ui.configbool('extdiff', 'gui.' + cmd)
         # look for diff arguments in [diff-tools] then [merge-tools]
         if not diffopts:
-            args = ui.config('diff-tools', cmd+'.diffargs') or \
-                   ui.config('merge-tools', cmd+'.diffargs')
-            if args:
-                cmdline += ' ' + args
+            key = cmd + '.diffargs'
+            for section in ('diff-tools', 'merge-tools'):
+                args = ui.config(section, key)
+                if args:
+                    cmdline += ' ' + args
+                    if isgui is None:
+                        isgui = ui.configbool(section, cmd + '.gui') or False
+                    break
         command(cmd, extdiffopts[:], _('hg %s [OPTION]... [FILE]...') % cmd,
                 helpcategory=command.CATEGORY_FILE_CONTENTS,
-                inferrepo=True)(savedcmd(path, cmdline))
+                inferrepo=True)(savedcmd(path, cmdline, isgui))
 
 # tell hggettext to extract docstrings from these functions:
 i18nfunctions = [savedcmd]
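
Putting the pieces together: a tool declared with the ``gui`` flag lets
``--per-file`` launch every file comparison in parallel instead of one
at a time. A hypothetical configuration and invocation (the tool name
and path are examples only)::

    [extensions]
    extdiff =

    [extdiff]
    cmd.meld = /usr/bin/meld
    # tell Mercurial this tool has a graphical interface
    gui.meld = true

Then::

    $ hg meld --per-file -r 1.0 -r 2.0        # all file diffs open at once
    $ hg meld --per-file --confirm -r 1.0     # prompt before each file
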
--- a/hgext/fastannotate/commands.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/fastannotate/commands.py	Wed Apr 17 13:41:18 2019 -0400
@@ -198,9 +198,9 @@
         formatter.write(result, lines, existinglines=existinglines)
     formatter.end()
 
-_newopts = set([])
-_knownopts = set([opt[1].replace('-', '_') for opt in
-                  (fastannotatecommandargs[r'options'] + commands.globalopts)])
+_newopts = set()
+_knownopts = {opt[1].replace('-', '_') for opt in
+              (fastannotatecommandargs[r'options'] + commands.globalopts)}
 
 def _annotatewrapper(orig, ui, repo, *pats, **opts):
     """used by wrapdefault"""
--- a/hgext/fastannotate/formatter.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/fastannotate/formatter.py	Wed Apr 17 13:41:18 2019 -0400
@@ -38,8 +38,8 @@
         if self.opts.get('rev') == 'wdir()':
             orig = hexfunc
             hexfunc = lambda x: None if x is None else orig(x)
-            wnode = hexfunc(repo[None].p1().node()) + '+'
-            wrev = '%d' % repo[None].p1().rev()
+            wnode = hexfunc(repo['.'].node()) + '+'
+            wrev = '%d' % repo['.'].rev()
             wrevpad = ''
             if not opts.get('changeset'): # only show + if changeset is hidden
                 wrev += '+'
--- a/hgext/fastannotate/protocol.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/fastannotate/protocol.py	Wed Apr 17 13:41:18 2019 -0400
@@ -71,7 +71,6 @@
             for p in [actx.revmappath, actx.linelogpath]:
                 if not os.path.exists(p):
                     continue
-                content = ''
                 with open(p, 'rb') as f:
                     content = f.read()
                 vfsbaselen = len(repo.vfs.base + '/')
--- a/hgext/fastannotate/support.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/fastannotate/support.py	Wed Apr 17 13:41:18 2019 -0400
@@ -109,7 +109,6 @@
 
 def _remotefctxannotate(orig, self, follow=False, skiprevs=None, diffopts=None):
     # skipset: a set-like used to test if a fctx needs to be downloaded
-    skipset = None
     with context.fctxannotatecontext(self, follow, diffopts) as ac:
         skipset = revmap.revmap(ac.revmappath)
     return orig(self, follow, skiprevs=skiprevs, diffopts=diffopts,
--- a/hgext/fetch.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/fetch.py	Wed Apr 17 13:41:18 2019 -0400
@@ -68,7 +68,7 @@
     if date:
         opts['date'] = dateutil.parsedate(date)
 
-    parent, _p2 = repo.dirstate.parents()
+    parent = repo.dirstate.p1()
     branch = repo.dirstate.branch()
     try:
         branchnode = repo.branchtip(branch)
--- a/hgext/fix.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/fix.py	Wed Apr 17 13:41:18 2019 -0400
@@ -280,10 +280,8 @@
     for rev in sorted(revstofix):
         fixctx = repo[rev]
         match = scmutil.match(fixctx, pats, opts)
-        for path in pathstofix(ui, repo, pats, opts, match, basectxs[rev],
-                               fixctx):
-            if path not in fixctx:
-                continue
+        for path in sorted(pathstofix(
+                        ui, repo, pats, opts, match, basectxs[rev], fixctx)):
             fctx = fixctx[path]
             if fctx.islink():
                 continue
@@ -601,9 +599,7 @@
         if path not in ctx:
             return None
         fctx = ctx[path]
-        copied = fctx.renamed()
-        if copied:
-            copied = copied[0]
+        copysource = fctx.copysource()
         return context.memfilectx(
             repo,
             memctx,
@@ -611,7 +607,7 @@
             data=filedata.get(path, fctx.data()),
             islink=fctx.islink(),
             isexec=fctx.isexec(),
-            copied=copied)
+            copysource=copysource)
 
     extra = ctx.extra().copy()
     extra['fix_source'] = ctx.hex()
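
``fctx.renamed()`` returns a ``(source path, file node)`` pair (or None),
while the newer ``fctx.copysource()`` returns just the source path, which
is all these call sites ever wanted. The relationship, sketched for any
object exposing ``renamed()``::

    def copysource(fctx):
        """Sketch: fctx.copysource() in terms of fctx.renamed()."""
        renamed = fctx.renamed()  # (source, filenode) or None
        return renamed[0] if renamed else None
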
--- a/hgext/fsmonitor/__init__.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/fsmonitor/__init__.py	Wed Apr 17 13:41:18 2019 -0400
@@ -161,6 +161,12 @@
 configitem('fsmonitor', 'blacklistusers',
     default=list,
 )
+configitem('fsmonitor', 'watchman_exe',
+    default='watchman',
+)
+configitem('fsmonitor', 'verbose',
+    default=True,
+)
 configitem('experimental', 'fsmonitor.transaction_notify',
     default=False,
 )
@@ -172,11 +178,15 @@
 def _handleunavailable(ui, state, ex):
     """Exception handler for Watchman interaction exceptions"""
     if isinstance(ex, watchmanclient.Unavailable):
-        if ex.warn:
-            ui.warn(str(ex) + '\n')
+        # experimental config: fsmonitor.verbose
+        if ex.warn and ui.configbool('fsmonitor', 'verbose'):
+            if 'illegal_fstypes' not in str(ex):
+                ui.warn(str(ex) + '\n')
         if ex.invalidate:
             state.invalidate()
-        ui.log('fsmonitor', 'Watchman unavailable: %s\n', ex.msg)
+        # experimental config: fsmonitor.verbose
+        if ui.configbool('fsmonitor', 'verbose'):
+            ui.log('fsmonitor', 'Watchman unavailable: %s\n', ex.msg)
     else:
         ui.log('fsmonitor', 'Watchman exception: %s\n', ex)
 
@@ -240,24 +250,6 @@
         clock = 'c:0:0'
         notefiles = []
 
-    def fwarn(f, msg):
-        self._ui.warn('%s: %s\n' % (self.pathto(f), msg))
-        return False
-
-    def badtype(mode):
-        kind = _('unknown')
-        if stat.S_ISCHR(mode):
-            kind = _('character device')
-        elif stat.S_ISBLK(mode):
-            kind = _('block device')
-        elif stat.S_ISFIFO(mode):
-            kind = _('fifo')
-        elif stat.S_ISSOCK(mode):
-            kind = _('socket')
-        elif stat.S_ISDIR(mode):
-            kind = _('directory')
-        return _('unsupported file type (type is %s)') % kind
-
     ignore = self._ignore
     dirignore = self._dirignore
     if unknown:
@@ -379,6 +371,9 @@
         fexists = entry['exists']
         kind = getkind(fmode)
 
+        if '/.hg/' in fname or fname.endswith('/.hg'):
+            return bail('nested-repo-detected')
+
         if not fexists:
             # if marked as deleted and we don't already have a change
             # record, mark it as deleted.  If we already have an entry
@@ -485,7 +480,7 @@
 
     working = ctx2.rev() is None
     parentworking = working and ctx1 == self['.']
-    match = match or matchmod.always(self.root, self.getcwd())
+    match = match or matchmod.always()
 
     # Maybe we can use this opportunity to update Watchman's state.
     # Mercurial uses workingcommitctx and/or memctx to represent the part of
@@ -752,6 +747,14 @@
             repo, node, branchmerge, force, ancestor, mergeancestor,
             labels, matcher, **kwargs)
 
+def repo_has_depth_one_nested_repo(repo):
+    for f in repo.wvfs.listdir():
+        if os.path.isdir(os.path.join(repo.root, f, '.hg')):
+            msg = 'fsmonitor: sub-repository %r detected, fsmonitor disabled\n'
+            repo.ui.debug(msg % f)
+            return True
+    return False
+
 def reposetup(ui, repo):
     # We don't work with largefiles or inotify
     exts = extensions.enabled()
@@ -769,6 +772,9 @@
         if repo.wvfs.exists('.hgsubstate') or repo.wvfs.exists('.hgsub'):
             return
 
+        if repo_has_depth_one_nested_repo(repo):
+            return
+
         fsmonitorstate = state.state(repo)
         if fsmonitorstate.mode == 'off':
             return
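
Both new knobs are ordinary ``[fsmonitor]`` settings: ``watchman_exe``
names the watchman binary to invoke and ``verbose`` (on by default)
controls whether availability warnings are printed. A hypothetical
configuration (the binary path is an example)::

    [extensions]
    fsmonitor =

    [fsmonitor]
    # use a watchman binary outside of $PATH
    watchman_exe = /opt/watchman/bin/watchman
    # silence "Watchman unavailable" warnings
    verbose = false
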
--- a/hgext/fsmonitor/pywatchman/__init__.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/fsmonitor/pywatchman/__init__.py	Wed Apr 17 13:41:18 2019 -0400
@@ -317,7 +317,7 @@
     """ local unix domain socket transport """
     sock = None
 
-    def __init__(self, sockpath, timeout):
+    def __init__(self, sockpath, timeout, watchman_exe):
         self.sockpath = sockpath
         self.timeout = timeout
 
@@ -397,7 +397,7 @@
 class WindowsNamedPipeTransport(Transport):
     """ connect to a named pipe """
 
-    def __init__(self, sockpath, timeout):
+    def __init__(self, sockpath, timeout, watchman_exe):
         self.sockpath = sockpath
         self.timeout = int(math.ceil(timeout * 1000))
         self._iobuf = None
@@ -563,9 +563,10 @@
     proc = None
     closed = True
 
-    def __init__(self, sockpath, timeout):
+    def __init__(self, sockpath, timeout, watchman_exe):
         self.sockpath = sockpath
         self.timeout = timeout
+        self.watchman_exe = watchman_exe
 
     def close(self):
         if self.proc:
@@ -579,7 +580,7 @@
         if self.proc:
             return self.proc
         args = [
-            'watchman',
+            self.watchman_exe,
             '--sockname={0}'.format(self.sockpath),
             '--logfile=/BOGUS',
             '--statefile=/BOGUS',
@@ -756,6 +757,7 @@
     unilateral = ['log', 'subscription']
     tport = None
     useImmutableBser = None
+    watchman_exe = None
 
     def __init__(self,
                  sockpath=None,
@@ -763,10 +765,12 @@
                  transport=None,
                  sendEncoding=None,
                  recvEncoding=None,
-                 useImmutableBser=False):
+                 useImmutableBser=False,
+                 watchman_exe=None):
         self.sockpath = sockpath
         self.timeout = timeout
         self.useImmutableBser = useImmutableBser
+        self.watchman_exe = watchman_exe
 
         if inspect.isclass(transport) and issubclass(transport, Transport):
             self.transport = transport
@@ -817,7 +821,7 @@
         if path:
             return path
 
-        cmd = ['watchman', '--output-encoding=bser', 'get-sockname']
+        cmd = [self.watchman_exe, '--output-encoding=bser', 'get-sockname']
         try:
             args = dict(stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE,
@@ -858,7 +862,7 @@
         if self.sockpath is None:
             self.sockpath = self._resolvesockname()
 
-        self.tport = self.transport(self.sockpath, self.timeout)
+        self.tport = self.transport(self.sockpath, self.timeout, self.watchman_exe)
         self.sendConn = self.sendCodec(self.tport)
         self.recvConn = self.recvCodec(self.tport)
 
--- a/hgext/fsmonitor/pywatchman/capabilities.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/fsmonitor/pywatchman/capabilities.py	Wed Apr 17 13:41:18 2019 -0400
@@ -62,7 +62,6 @@
     vers['capabilities'] = {}
     for name in opts['optional']:
         vers['capabilities'][name] = check(parsed_version, name)
-    failed = False
     for name in opts['required']:
         have = check(parsed_version, name)
         vers['capabilities'][name] = have
--- a/hgext/fsmonitor/pywatchman/pybser.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/fsmonitor/pywatchman/pybser.py	Wed Apr 17 13:41:18 2019 -0400
@@ -267,7 +267,7 @@
             key = key[3:]
         try:
             return self._values[self._keys.index(key)]
-        except ValueError as ex:
+        except ValueError:
             raise KeyError('_BunserDict has no key %s' % key)
 
     def __len__(self):
@@ -420,7 +420,6 @@
 
 
 def _pdu_info_helper(buf):
-    bser_version = -1
     if buf[0:2] == EMPTY_HEADER[0:2]:
         bser_version = 1
         bser_capabilities = 0
--- a/hgext/fsmonitor/watchmanclient.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/fsmonitor/watchmanclient.py	Wed Apr 17 13:41:18 2019 -0400
@@ -82,9 +82,11 @@
         try:
             if self._watchmanclient is None:
                 self._firsttime = False
+                watchman_exe = self._ui.configpath('fsmonitor', 'watchman_exe')
                 self._watchmanclient = pywatchman.client(
                     timeout=self._timeout,
-                    useImmutableBser=True)
+                    useImmutableBser=True,
+                    watchman_exe=watchman_exe)
             return self._watchmanclient.query(*watchmanargs)
         except pywatchman.CommandError as ex:
             if 'unable to resolve root' in ex.msg:
--- a/hgext/githelp.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/githelp.py	Wed Apr 17 13:41:18 2019 -0400
@@ -25,6 +25,7 @@
     encoding,
     error,
     fancyopts,
+    pycompat,
     registrar,
     scmutil,
 )
@@ -83,21 +84,22 @@
             args = fancyopts.fancyopts(list(args), cmdoptions, opts, True)
             break
         except getopt.GetoptError as ex:
-            flag = None
-            if "requires argument" in ex.msg:
+            if r"requires argument" in ex.msg:
                 raise
-            if ('--' + ex.opt) in ex.msg:
-                flag = '--' + ex.opt
-            elif ('-' + ex.opt) in ex.msg:
-                flag = '-' + ex.opt
+            if (r'--' + ex.opt) in ex.msg:
+                flag = '--' + pycompat.bytestr(ex.opt)
+            elif (r'-' + ex.opt) in ex.msg:
+                flag = '-' + pycompat.bytestr(ex.opt)
             else:
-                raise error.Abort(_("unknown option %s") % ex.opt)
+                raise error.Abort(_("unknown option %s") %
+                                  pycompat.bytestr(ex.opt))
             try:
                 args.remove(flag)
             except Exception:
                 msg = _("unknown option '%s' packed with other options")
                 hint = _("please try passing the option as its own flag: -%s")
-                raise error.Abort(msg % ex.opt, hint=hint % ex.opt)
+                raise error.Abort(msg % pycompat.bytestr(ex.opt),
+                                  hint=hint % pycompat.bytestr(ex.opt))
 
             ui.warn(_("ignoring unknown option %s\n") % flag)
 
@@ -119,7 +121,12 @@
             for k, values in sorted(self.opts.iteritems()):
                 for v in values:
                     if v:
-                        cmd += " %s %s" % (k, v)
+                        if isinstance(v, int):
+                            fmt = ' %s %d'
+                        else:
+                            fmt = ' %s %s'
+
+                        cmd += fmt % (k, v)
                     else:
                         cmd += " %s" % (k,)
         if self.args:
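
The ``%d`` special case exists because Python 3 bytes formatting accepts
integers only through ``%d``; ``%s`` on a bytes template requires a
bytes-like operand. For instance::

    assert b'--depth %d' % 5 == b'--depth 5'
    # b'--depth %s' % 5 raises TypeError on Python 3:
    # %b requires a bytes-like object, not 'int'
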
--- a/hgext/gpg.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/gpg.py	Wed Apr 17 13:41:18 2019 -0400
@@ -297,7 +297,7 @@
         return
 
     if not opts["force"]:
-        msigs = match.exact(repo.root, '', ['.hgsigs'])
+        msigs = match.exact(['.hgsigs'])
         if any(repo.status(match=msigs, unknown=True, ignored=True)):
             raise error.Abort(_("working copy of .hgsigs is changed "),
                              hint=_("please commit .hgsigs manually"))
--- a/hgext/histedit.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/histedit.py	Wed Apr 17 13:41:18 2019 -0400
@@ -156,6 +156,15 @@
   [histedit]
   linelen = 120      # truncate rule lines at 120 characters
 
+The summary of a change can be customized as well::
+
+  [histedit]
+  summary-template = '{rev} {bookmarks} {desc|firstline}'
+
+The customized summary should be kept short enough that rule lines
+fit within the configured line length. See the ``linelen`` setting
+above if that length needs adjusting.
+
 ``hg histedit`` attempts to automatically choose an appropriate base
 revision to use. To change which base revision is used, define a
 revset in your configuration file::
@@ -248,6 +257,8 @@
 configitem('ui', 'interface.histedit',
     default=None,
 )
+configitem('histedit', 'summary-template',
+           default='{rev} {desc|firstline}')
 
 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
@@ -480,8 +491,11 @@
         <hash> <rev> <summary>
         """
         ctx = self.repo[self.node]
-        summary = _getsummary(ctx)
-        line = '%s %s %d %s' % (self.verb, ctx, ctx.rev(), summary)
+        ui = self.repo.ui
+        summary = cmdutil.rendertemplate(
+            ctx, ui.config('histedit', 'summary-template')) or ''
+        summary = summary.splitlines()[0]
+        line = '%s %s %s' % (self.verb, ctx, summary)
         # trim to 75 columns by default so it's not stupidly wide in my editor
         # (the 5 more are left for verb)
         maxlen = self.repo.ui.configint('histedit', 'linelen')
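
A rule line is now ``<verb> <hash> <rendered template>``, so the default
template of ``{rev} {desc|firstline}`` keeps the old shape while a custom
template changes only what follows the hash. Hypothetical rendered rules
(hash, revision and bookmark are made up)::

    # summary-template = '{rev} {desc|firstline}' (default)
    pick 6f2d8c7e5b1a 1234 fix flaky test on Windows

    # summary-template = '{rev} {bookmarks} {desc|firstline}'
    pick 6f2d8c7e5b1a 1234 my-bookmark fix flaky test on Windows
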
@@ -508,17 +522,14 @@
         rulectx = repo[self.node]
         repo.ui.pushbuffer(error=True, labeled=True)
         hg.update(repo, self.state.parentctxnode, quietempty=True)
+        repo.ui.popbuffer()
         stats = applychanges(repo.ui, repo, rulectx, {})
         repo.dirstate.setbranch(rulectx.branch())
         if stats.unresolvedcount:
-            buf = repo.ui.popbuffer()
-            repo.ui.write(buf)
             raise error.InterventionRequired(
                 _('Fix up the change (%s %s)') %
                 (self.verb, node.short(self.node)),
                 hint=_('hg histedit --continue to resume'))
-        else:
-            repo.ui.popbuffer()
 
     def continuedirty(self):
         """Continues the action when changes have been applied to the working
@@ -575,12 +586,14 @@
 
 def applychanges(ui, repo, ctx, opts):
     """Merge changeset from ctx (only) in the current working directory"""
-    wcpar = repo.dirstate.parents()[0]
+    wcpar = repo.dirstate.p1()
     if ctx.p1().node() == wcpar:
         # edits are "in place" we do not need to make any merge,
         # just applies changes on parent for editing
+        ui.pushbuffer()
         cmdutil.revert(ui, repo, ctx, (wcpar, node.nullid), all=True)
         stats = mergemod.updateresult(0, 0, 0, 0)
+        ui.popbuffer()
     else:
         try:
             # ui.forcemerge is an internal variable, do not document
@@ -608,7 +621,7 @@
         if not c.mutable():
             raise error.ParseError(
                 _("cannot fold into public change %s") % node.short(c.node()))
-    base = firstctx.parents()[0]
+    base = firstctx.p1()
 
     # commit a new version of the old changeset, including the update
     # collect all files which might be affected
@@ -631,7 +644,7 @@
                                       fctx.path(), fctx.data(),
                                       islink='l' in flags,
                                       isexec='x' in flags,
-                                      copied=copied.get(path))
+                                      copysource=copied.get(path))
             return mctx
         return None
 
@@ -693,7 +706,7 @@
 class pick(histeditaction):
     def run(self):
         rulectx = self.repo[self.node]
-        if rulectx.parents()[0].node() == self.state.parentctxnode:
+        if rulectx.p1().node() == self.state.parentctxnode:
             self.repo.ui.debug('node %s unchanged\n' % node.short(self.node))
             return rulectx, []
 
@@ -724,7 +737,7 @@
         super(fold, self).verify(prev, expected, seen)
         repo = self.repo
         if not prev:
-            c = repo[self.node].parents()[0]
+            c = repo[self.node].p1()
         elif not prev.verb in ('pick', 'base'):
             return
         else:
@@ -795,7 +808,7 @@
         return False
 
     def finishfold(self, ui, repo, ctx, oldctx, newnode, internalchanges):
-        parent = ctx.parents()[0].node()
+        parent = ctx.p1().node()
         hg.updaterepo(repo, parent, overwrite=False)
         ### prepare new commit data
         commitopts = {}
@@ -934,6 +947,12 @@
 # Curses Support
 try:
     import curses
+
+    # Curses requires setting the locale or it will default to the C
+    # locale. This sets the locale to the user's default system
+    # locale.
+    import locale
+    locale.setlocale(locale.LC_ALL, r'')
 except ImportError:
     curses = None
 
@@ -943,7 +962,8 @@
     'roll': '^roll',
 }
 
-COLOR_HELP, COLOR_SELECTED, COLOR_OK, COLOR_WARN  = 1, 2, 3, 4
+COLOR_HELP, COLOR_SELECTED, COLOR_OK, COLOR_WARN, COLOR_CURRENT  = 1, 2, 3, 4, 5
+COLOR_DIFF_ADD_LINE, COLOR_DIFF_DEL_LINE, COLOR_DIFF_OFFSET = 6, 7, 8
 
 E_QUIT, E_HISTEDIT = 1, 2
 E_PAGEDOWN, E_PAGEUP, E_LINEUP, E_LINEDOWN, E_RESIZE = 3, 4, 5, 6, 7
@@ -1210,19 +1230,29 @@
 def patchcontents(state):
     repo = state['repo']
     rule = state['rules'][state['pos']]
+    repo.ui.verbose = True
     displayer = logcmdutil.changesetdisplayer(repo.ui, repo, {
-        'patch': True, 'verbose': True
+        "patch": True,  "template": "status"
     }, buffered=True)
     displayer.show(rule.ctx)
     displayer.close()
     return displayer.hunk[rule.ctx.rev()].splitlines()
 
 def _chisteditmain(repo, rules, stdscr):
+    try:
+        curses.use_default_colors()
+    except curses.error:
+        pass
+
     # initialize color pattern
     curses.init_pair(COLOR_HELP, curses.COLOR_WHITE, curses.COLOR_BLUE)
     curses.init_pair(COLOR_SELECTED, curses.COLOR_BLACK, curses.COLOR_WHITE)
     curses.init_pair(COLOR_WARN, curses.COLOR_BLACK, curses.COLOR_YELLOW)
     curses.init_pair(COLOR_OK, curses.COLOR_BLACK, curses.COLOR_GREEN)
+    curses.init_pair(COLOR_CURRENT, curses.COLOR_WHITE, curses.COLOR_MAGENTA)
+    curses.init_pair(COLOR_DIFF_ADD_LINE, curses.COLOR_GREEN, -1)
+    curses.init_pair(COLOR_DIFF_DEL_LINE, curses.COLOR_RED, -1)
+    curses.init_pair(COLOR_DIFF_OFFSET, curses.COLOR_MAGENTA, -1)
 
     # don't display the cursor
     try:
@@ -1246,7 +1276,7 @@
         line = "changeset: {0}:{1:<12}".format(ctx.rev(), ctx)
         win.addstr(1, 1, line[:length])
 
-        line = "user:      {0}".format(stringutil.shortuser(ctx.user()))
+        line = "user:      {0}".format(ctx.user())
         win.addstr(2, 1, line[:length])
 
         bms = repo.nodebookmarks(ctx.node())
@@ -1313,21 +1343,36 @@
             if y + start == selected:
                 addln(rulesscr, y, 2, rule, curses.color_pair(COLOR_SELECTED))
             elif y + start == pos:
-                addln(rulesscr, y, 2, rule, curses.A_BOLD)
+                addln(rulesscr, y, 2, rule,
+                      curses.color_pair(COLOR_CURRENT) | curses.A_BOLD)
             else:
                 addln(rulesscr, y, 2, rule)
         rulesscr.noutrefresh()
 
-    def renderstring(win, state, output):
+    def renderstring(win, state, output, diffcolors=False):
         maxy, maxx = win.getmaxyx()
         length = min(maxy - 1, len(output))
         for y in range(0, length):
-            win.addstr(y, 0, output[y])
+            line = output[y]
+            if diffcolors:
+                if line and line[0] == '+':
+                    win.addstr(
+                        y, 0, line, curses.color_pair(COLOR_DIFF_ADD_LINE))
+                elif line and line[0] == '-':
+                    win.addstr(
+                        y, 0, line, curses.color_pair(COLOR_DIFF_DEL_LINE))
+                elif line.startswith('@@ '):
+                    win.addstr(
+                        y, 0, line, curses.color_pair(COLOR_DIFF_OFFSET))
+                else:
+                    win.addstr(y, 0, line)
+            else:
+                win.addstr(y, 0, line)
         win.noutrefresh()
 
     def renderpatch(win, state):
         start = state['modes'][MODE_PATCH]['line_offset']
-        renderstring(win, state, patchcontents(state)[start:])
+        renderstring(win, state, patchcontents(state)[start:], diffcolors=True)
 
     def layout(mode):
         maxy, maxx = stdscr.getmaxyx()
@@ -1459,7 +1504,7 @@
                 'exactly one common root'))
         root = rr[0].node()
 
-        topmost, empty = repo.dirstate.parents()
+        topmost = repo.dirstate.p1()
         revs = between(repo, root, topmost, keep)
         if not revs:
             raise error.Abort(_('%s is not an ancestor of working directory') %
@@ -1472,10 +1517,10 @@
         curses.echo()
         curses.endwin()
         if rc is False:
-            ui.write(_("chistedit aborted\n"))
+            ui.write(_("histedit aborted\n"))
             return 0
         if type(rc) is list:
-            ui.status(_("running histedit\n"))
+            ui.status(_("performing changes\n"))
             rules = makecommands(rc)
             filename = repo.vfs.join('chistedit')
             with open(filename, 'w+') as fp:
@@ -1760,7 +1805,7 @@
             state.write(tr=tr)
             actobj = state.actions[0]
             progress.increment(item=actobj.torule())
-            ui.debug('histedit: processing %s %s\n' % (actobj.verb,\
+            ui.debug('histedit: processing %s %s\n' % (actobj.verb,
                                                        actobj.torule()))
             parentctx, replacement_ = actobj.run()
             state.parentctxnode = parentctx.node()
@@ -1859,7 +1904,7 @@
     else:
         rules = _readfile(ui, rules)
     actions = parserules(rules, state)
-    ctxs = [repo[act.node] \
+    ctxs = [repo[act.node]
             for act in state.actions if act.node]
     warnverifyactions(ui, repo, actions, state, ctxs)
     state.actions = actions
@@ -1873,7 +1918,7 @@
     cmdutil.checkunfinished(repo)
     cmdutil.bailifchanged(repo)
 
-    topmost, empty = repo.dirstate.parents()
+    topmost = repo.dirstate.p1()
     if outg:
         if freeargs:
             remote = freeargs[0]
@@ -1902,7 +1947,7 @@
     actions = parserules(rules, state)
     warnverifyactions(ui, repo, actions, state, ctxs)
 
-    parentctxnode = repo[root].parents()[0].node()
+    parentctxnode = repo[root].p1().node()
 
     state.parentctxnode = parentctxnode
     state.actions = actions
--- a/hgext/infinitepush/__init__.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/infinitepush/__init__.py	Wed Apr 17 13:41:18 2019 -0400
@@ -282,8 +282,8 @@
     scratchbranchpat = ui.config('infinitepush', 'branchpattern')
     if scratchbranchpat:
         global _scratchbranchmatcher
-        kind, pat, _scratchbranchmatcher = \
-                stringutil.stringmatcher(scratchbranchpat)
+        kind, pat, _scratchbranchmatcher = (
+                stringutil.stringmatcher(scratchbranchpat))
 
 def serverextsetup(ui):
     origpushkeyhandler = bundle2.parthandlermapping['pushkey']
@@ -294,8 +294,8 @@
     bundle2.parthandlermapping['pushkey'] = newpushkeyhandler
 
     orighandlephasehandler = bundle2.parthandlermapping['phase-heads']
-    newphaseheadshandler = lambda *args, **kwargs: \
-        bundle2handlephases(orighandlephasehandler, *args, **kwargs)
+    newphaseheadshandler = lambda *args, **kwargs: bundle2handlephases(
+        orighandlephasehandler, *args, **kwargs)
     newphaseheadshandler.params = orighandlephasehandler.params
     bundle2.parthandlermapping['phase-heads'] = newphaseheadshandler
 
@@ -754,10 +754,10 @@
     nametype_idx = 1
     remote_idx = 2
     name_idx = 3
-    remotenames = [remotename for remotename in \
-                   remotenamesext.readremotenames(repo) \
+    remotenames = [remotename for remotename in
+                   remotenamesext.readremotenames(repo)
                    if remotename[remote_idx] == path]
-    remote_bm_names = [remotename[name_idx] for remotename in \
+    remote_bm_names = [remotename[name_idx] for remotename in
                        remotenames if remotename[nametype_idx] == "bookmarks"]
 
     for name in names:
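
The infinitepush hunks are purely stylistic: trailing-backslash line
continuations become parenthesized expressions, which tolerate re-indentation
and trailing whitespace. The two forms are equivalent::

   # fragile: any character after the backslash is a syntax error
   kind, pat, matcher = \
       ('literal', 'x', None)

   # robust: the parentheses carry the continuation
   kind, pat, matcher = (
       ('literal', 'x', None))
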
--- a/hgext/journal.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/journal.py	Wed Apr 17 13:41:18 2019 -0400
@@ -194,8 +194,8 @@
     return orig(ui, repo, repopath)
 
 class journalentry(collections.namedtuple(
-        u'journalentry',
-        u'timestamp user command namespace name oldhashes newhashes')):
+        r'journalentry',
+        r'timestamp user command namespace name oldhashes newhashes')):
     """Individual journal entry
 
     * timestamp: a mercurial (time, timezone) tuple
@@ -348,7 +348,6 @@
 
     def _write(self, vfs, entry):
         with self.jlock(vfs):
-            version = None
             # open file in append mode to ensure it is created if missing
             with vfs('namejournal', mode='a+b') as f:
                 f.seek(0, os.SEEK_SET)
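
The ``journalentry`` change swaps ``u''`` literals for ``r''`` so the
namedtuple's typename and field names are native ``str`` on both Python 2
(bytes) and Python 3 (unicode), which is what ``collections.namedtuple``
expects. A runnable sketch of the same declaration (field values invented)::

   import collections

   journalentry = collections.namedtuple(
       r'journalentry',
       r'timestamp user command namespace name oldhashes newhashes')

   e = journalentry(0, 'alice', 'commit', 'bookmarks', 'foo', (), ())
   assert e.user == 'alice'
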
--- a/hgext/largefiles/basestore.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/largefiles/basestore.py	Wed Apr 17 13:41:18 2019 -0400
@@ -136,7 +136,7 @@
         failed = self._verifyfiles(contents, filestocheck)
 
         numrevs = len(verified)
-        numlfiles = len(set([fname for (fname, fnode) in verified]))
+        numlfiles = len({fname for (fname, fnode) in verified})
         if contents:
             self.ui.status(
                 _('verified contents of %d revisions of %d largefiles\n')
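
The basestore tweak above replaces ``len(set([...]))`` with a set
comprehension, skipping the throwaway intermediate list. For example::

   verified = [('a', 1), ('a', 2), ('b', 3)]
   assert len({fname for (fname, fnode) in verified}) == 2
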
--- a/hgext/largefiles/lfcommands.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/largefiles/lfcommands.py	Wed Apr 17 13:41:18 2019 -0400
@@ -207,12 +207,12 @@
             # the largefile-ness of its predecessor
             if f in ctx.manifest():
                 fctx = ctx.filectx(f)
-                renamed = fctx.renamed()
+                renamed = fctx.copysource()
                 if renamed is None:
                     # the code below assumes renamed to be a boolean or a path
                     # and won't quite work with the value None
                     renamed = False
-                renamedlfile = renamed and renamed[0] in lfiles
+                renamedlfile = renamed and renamed in lfiles
                 islfile |= renamedlfile
                 if 'l' in fctx.flags():
                     if renamedlfile:
@@ -232,8 +232,8 @@
             if f in ctx.manifest():
                 fctx = ctx.filectx(f)
                 if 'l' in fctx.flags():
-                    renamed = fctx.renamed()
-                    if renamed and renamed[0] in lfiles:
+                    renamed = fctx.copysource()
+                    if renamed and renamed in lfiles:
                         raise error.Abort(_('largefile %s becomes symlink') % f)
 
                 # largefile was modified, update standins
@@ -259,11 +259,11 @@
                 fctx = ctx.filectx(srcfname)
             except error.LookupError:
                 return None
-            renamed = fctx.renamed()
+            renamed = fctx.copysource()
             if renamed:
                 # standin is always a largefile because largefile-ness
                 # doesn't change after rename or copy
-                renamed = lfutil.standin(renamed[0])
+                renamed = lfutil.standin(renamed)
 
             return context.memfilectx(repo, memctx, f,
                                       lfiletohash[srcfname] + '\n',
@@ -288,12 +288,9 @@
     files = set(ctx.files())
     if node.nullid not in parents:
         mc = ctx.manifest()
-        mp1 = ctx.parents()[0].manifest()
-        mp2 = ctx.parents()[1].manifest()
-        files |= (set(mp1) | set(mp2)) - set(mc)
-        for f in mc:
-            if mc[f] != mp1.get(f, None) or mc[f] != mp2.get(f, None):
-                files.add(f)
+        for pctx in ctx.parents():
+            for fn in pctx.manifest().diff(mc):
+                files.add(fn)
     return files
 
 # Convert src parents to dst parents
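
``_getchangedfiles`` now asks each parent manifest for its ``diff()`` against
the child instead of unioning and re-comparing the two parent manifests by
hand. With plain dicts the equivalent computation looks like this (a sketch;
the real ``diff()`` returns a mapping keyed by the differing paths, which is
what the ``for fn in ...`` loop iterates)::

   def manifestdiff(m1, m2):
       """Paths whose entries differ between two {path: node} mappings."""
       return {f for f in set(m1) | set(m2) if m1.get(f) != m2.get(f)}

   mc = {'a': 1, 'b': 2}    # child manifest
   mp1 = {'a': 1, 'c': 3}   # one parent manifest
   assert manifestdiff(mp1, mc) == {'b', 'c'}
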
@@ -311,9 +308,7 @@
         fctx = ctx.filectx(f)
     except error.LookupError:
         return None
-    renamed = fctx.renamed()
-    if renamed:
-        renamed = renamed[0]
+    renamed = fctx.copysource()
 
     data = fctx.data()
     if f == '.hgtags':
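
Throughout these hunks ``fctx.renamed()``, which returned a
``(path, filenode)`` pair or None, gives way to ``fctx.copysource()``, which
returns just the source path or None, so the ``renamed[0]`` indexing
disappears at every call site. A stub illustrating the calling-convention
change (class and values are invented)::

   class FakeFilectx(object):
       def __init__(self, source=None, filenode=b'\x00' * 20):
           self._source, self._filenode = source, filenode

       def renamed(self):     # old API: tuple or None
           if self._source:
               return (self._source, self._filenode)
           return None

       def copysource(self):  # new API: path or None
           return self._source

   fctx = FakeFilectx(source='old/name')
   assert fctx.renamed()[0] == fctx.copysource() == 'old/name'
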
@@ -467,27 +462,26 @@
         wvfs = repo.wvfs
         wctx = repo[None]
         for lfile in lfiles:
-            rellfile = lfile
-            rellfileorig = os.path.relpath(
-                scmutil.origpath(ui, repo, wvfs.join(rellfile)),
+            lfileorig = os.path.relpath(
+                scmutil.backuppath(ui, repo, lfile),
                 start=repo.root)
-            relstandin = lfutil.standin(lfile)
-            relstandinorig = os.path.relpath(
-                scmutil.origpath(ui, repo, wvfs.join(relstandin)),
+            standin = lfutil.standin(lfile)
+            standinorig = os.path.relpath(
+                scmutil.backuppath(ui, repo, standin),
                 start=repo.root)
-            if wvfs.exists(relstandin):
-                if (wvfs.exists(relstandinorig) and
-                    wvfs.exists(rellfile)):
-                    shutil.copyfile(wvfs.join(rellfile),
-                                    wvfs.join(rellfileorig))
-                    wvfs.unlinkpath(relstandinorig)
-                expecthash = lfutil.readasstandin(wctx[relstandin])
+            if wvfs.exists(standin):
+                if (wvfs.exists(standinorig) and
+                    wvfs.exists(lfile)):
+                    shutil.copyfile(wvfs.join(lfile),
+                                    wvfs.join(lfileorig))
+                    wvfs.unlinkpath(standinorig)
+                expecthash = lfutil.readasstandin(wctx[standin])
                 if expecthash != '':
                     if lfile not in wctx: # not switched to normal file
-                        if repo.dirstate[relstandin] != '?':
-                            wvfs.unlinkpath(rellfile, ignoremissing=True)
+                        if repo.dirstate[standin] != '?':
+                            wvfs.unlinkpath(lfile, ignoremissing=True)
                         else:
-                            dropped.add(rellfile)
+                            dropped.add(lfile)
 
                     # use normallookup() to allocate an entry in largefiles
                     # dirstate to prevent lfilesrepo.status() from reporting
@@ -499,9 +493,9 @@
                 # lfile is added to the repository again. This happens when a
                 # largefile is converted back to a normal file: the standin
                 # disappears, but a new (normal) file appears as the lfile.
-                if (wvfs.exists(rellfile) and
+                if (wvfs.exists(lfile) and
                     repo.dirstate.normalize(lfile) not in wctx):
-                    wvfs.unlinkpath(rellfile)
+                    wvfs.unlinkpath(lfile)
                     removed += 1
 
         # largefile processing might be slow and be interrupted - be prepared
@@ -535,19 +529,18 @@
 
             # copy the exec mode of largefile standin from the repository's
             # dirstate to its state in the lfdirstate.
-            rellfile = lfile
-            relstandin = lfutil.standin(lfile)
-            if wvfs.exists(relstandin):
+            standin = lfutil.standin(lfile)
+            if wvfs.exists(standin):
                 # exec is decided by the user's permissions using mask 0o100
-                standinexec = wvfs.stat(relstandin).st_mode & 0o100
-                st = wvfs.stat(rellfile)
+                standinexec = wvfs.stat(standin).st_mode & 0o100
+                st = wvfs.stat(lfile)
                 mode = st.st_mode
                 if standinexec != mode & 0o100:
                     # first remove all X bits, then shift all R bits to X
                     mode &= ~0o111
                     if standinexec:
                         mode |= (mode >> 2) & 0o111 & ~util.umask
-                    wvfs.chmod(rellfile, mode)
+                    wvfs.chmod(lfile, mode)
                     update1 = 1
 
             updated += update1
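
The exec-bit propagation above is only renamed (the ``rel`` prefixes were
misleading: the paths are repo-relative either way); the arithmetic is
untouched: clear all x bits, then copy each readable bit down into the
matching executable bit, honouring the umask. As standalone code::

   def copyexecbit(mode, standinexec, umask=0o022):
       """Sketch of the same bit math: give mode the standin's exec-ness."""
       if standinexec != mode & 0o100:
           mode &= ~0o111  # first remove all X bits
           if standinexec:
               # shift R bits (0o444) into X position (0o111), minus umask
               mode |= (mode >> 2) & 0o111 & ~umask
       return mode

   assert copyexecbit(0o100644, standinexec=0o100) == 0o100755
   assert copyexecbit(0o100755, standinexec=0) == 0o100644
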
--- a/hgext/largefiles/lfutil.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/largefiles/lfutil.py	Wed Apr 17 13:41:18 2019 -0400
@@ -76,8 +76,8 @@
     if path:
         return path
     if pycompat.iswindows:
-        appdata = encoding.environ.get('LOCALAPPDATA',\
-                        encoding.environ.get('APPDATA'))
+        appdata = encoding.environ.get('LOCALAPPDATA',
+                                       encoding.environ.get('APPDATA'))
         if appdata:
             return os.path.join(appdata, name)
     elif pycompat.isdarwin:
@@ -168,7 +168,7 @@
 
 def lfdirstatestatus(lfdirstate, repo):
     pctx = repo['.']
-    match = matchmod.always(repo.root, repo.getcwd())
+    match = matchmod.always()
     unsure, s = lfdirstate.status(match, subrepos=[], ignored=False,
                                   clean=False, unknown=False)
     modified, clean = s.modified, s.clean
@@ -518,8 +518,8 @@
             files = set(ctx.files())
             if len(parents) == 2:
                 mc = ctx.manifest()
-                mp1 = ctx.parents()[0].manifest()
-                mp2 = ctx.parents()[1].manifest()
+                mp1 = ctx.p1().manifest()
+                mp2 = ctx.p2().manifest()
                 for f in mp1:
                     if f not in mc:
                         files.add(f)
@@ -552,7 +552,7 @@
         # otherwise to update all standins if the largefiles are
         # large.
         lfdirstate = openlfdirstate(ui, repo)
-        dirtymatch = matchmod.always(repo.root, repo.getcwd())
+        dirtymatch = matchmod.always()
         unsure, s = lfdirstate.status(dirtymatch, subrepos=[], ignored=False,
                                       clean=False, unknown=False)
         modifiedfiles = unsure + s.modified + s.added + s.removed
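
``matchmod.always()`` lost its ``root``/``cwd`` parameters in this release:
an always-matcher needs no filesystem context, so call sites shed the
``repo.root, repo.getcwd()`` boilerplate. Behaviourally it is just this (a
stub, not Mercurial's matcher class)::

   class AlwaysMatcher(object):
       """Matches every path; what matchmod.always() hands back, in spirit."""
       def __call__(self, path):
           return True

       def always(self):
           return True

   match = AlwaysMatcher()
   assert match('any/path') and match.always()
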
--- a/hgext/largefiles/overrides.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/largefiles/overrides.py	Wed Apr 17 13:41:18 2019 -0400
@@ -24,6 +24,7 @@
     copies as copiesmod,
     error,
     exchange,
+    extensions,
     exthelper,
     filemerge,
     hg,
@@ -77,49 +78,7 @@
     m.matchfn = lambda f: notlfile(f) and origmatchfn(f)
     return m
 
-def installnormalfilesmatchfn(manifest):
-    '''installmatchfn with a matchfn that ignores all largefiles'''
-    def overridematch(ctx, pats=(), opts=None, globbed=False,
-            default='relpath', badfn=None):
-        if opts is None:
-            opts = {}
-        match = oldmatch(ctx, pats, opts, globbed, default, badfn=badfn)
-        return composenormalfilematcher(match, manifest)
-    oldmatch = installmatchfn(overridematch)
-
-def installmatchfn(f):
-    '''monkey patch the scmutil module with a custom match function.
-    Warning: it is monkey patching the _module_ on runtime! Not thread safe!'''
-    oldmatch = scmutil.match
-    setattr(f, 'oldmatch', oldmatch)
-    scmutil.match = f
-    return oldmatch
-
-def restorematchfn():
-    '''restores scmutil.match to what it was before installmatchfn
-    was called.  no-op if scmutil.match is its original function.
-
-    Note that n calls to installmatchfn will require n calls to
-    restore the original matchfn.'''
-    scmutil.match = getattr(scmutil.match, 'oldmatch')
-
-def installmatchandpatsfn(f):
-    oldmatchandpats = scmutil.matchandpats
-    setattr(f, 'oldmatchandpats', oldmatchandpats)
-    scmutil.matchandpats = f
-    return oldmatchandpats
-
-def restorematchandpatsfn():
-    '''restores scmutil.matchandpats to what it was before
-    installmatchandpatsfn was called. No-op if scmutil.matchandpats
-    is its original function.
-
-    Note that n calls to installmatchandpatsfn will require n calls
-    to restore the original matchfn.'''
-    scmutil.matchandpats = getattr(scmutil.matchandpats, 'oldmatchandpats',
-            scmutil.matchandpats)
-
-def addlargefiles(ui, repo, isaddremove, matcher, **opts):
+def addlargefiles(ui, repo, isaddremove, matcher, uipathfn, **opts):
     large = opts.get(r'large')
     lfsize = lfutil.getminsize(
         ui, lfutil.islfilesrepo(repo), opts.get(r'lfsize'))
@@ -140,17 +99,11 @@
         nfile = f in wctx
         exists = lfile or nfile
 
-        # addremove in core gets fancy with the name, add doesn't
-        if isaddremove:
-            name = m.uipath(f)
-        else:
-            name = m.rel(f)
-
         # Don't warn the user when they attempt to add a normal tracked file.
         # The normal add code will do that for us.
         if exact and exists:
             if lfile:
-                ui.warn(_('%s already a largefile\n') % name)
+                ui.warn(_('%s already a largefile\n') % uipathfn(f))
             continue
 
         if (exact or not exists) and not lfutil.isstandin(f):
@@ -164,7 +117,7 @@
             if large or abovemin or (lfmatcher and lfmatcher(f)):
                 lfnames.append(f)
                 if ui.verbose or not exact:
-                    ui.status(_('adding %s as a largefile\n') % name)
+                    ui.status(_('adding %s as a largefile\n') % uipathfn(f))
 
     bad = []
 
@@ -191,7 +144,7 @@
         added = [f for f in lfnames if f not in bad]
     return added, bad
 
-def removelargefiles(ui, repo, isaddremove, matcher, dryrun, **opts):
+def removelargefiles(ui, repo, isaddremove, matcher, uipathfn, dryrun, **opts):
     after = opts.get(r'after')
     m = composelargefilematcher(matcher, repo[None].manifest())
     try:
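
A recurring theme in overrides.py: ``cmdutil.add``/``remove``/``forget`` and
``scmutil.addremove`` now receive a ``uipathfn`` that maps a repo-relative
path to its display form, instead of each callee choosing between
``m.uipath(f)`` and ``m.rel(f)``. A toy factory showing the idea (the factory
and its arguments are illustrative, not Mercurial's API)::

   import os

   def makeuipathfn(root, cwd, relative=True):
       """Return a callable turning repo-relative paths into display paths."""
       if relative:
           return lambda f: os.path.relpath(os.path.join(root, f), start=cwd)
       return lambda f: f

   uipathfn = makeuipathfn('/repo', '/repo/sub')
   assert uipathfn('sub/file.txt') == 'file.txt'
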
@@ -207,11 +160,9 @@
 
     def warn(files, msg):
         for f in files:
-            ui.warn(msg % m.rel(f))
+            ui.warn(msg % uipathfn(f))
         return int(len(files) > 0)
 
-    result = 0
-
     if after:
         remove = deleted
         result = warn(modified + added + clean,
@@ -229,12 +180,7 @@
         lfdirstate = lfutil.openlfdirstate(ui, repo)
         for f in sorted(remove):
             if ui.verbose or not m.exact(f):
-                # addremove in core gets fancy with the name, remove doesn't
-                if isaddremove:
-                    name = m.uipath(f)
-                else:
-                    name = m.rel(f)
-                ui.status(_('removing %s\n') % name)
+                ui.status(_('removing %s\n') % uipathfn(f))
 
             if not dryrun:
                 if not after:
@@ -278,27 +224,27 @@
     return orig(ui, repo, *pats, **opts)
 
 @eh.wrapfunction(cmdutil, 'add')
-def cmdutiladd(orig, ui, repo, matcher, prefix, explicitonly, **opts):
+def cmdutiladd(orig, ui, repo, matcher, prefix, uipathfn, explicitonly, **opts):
     # The --normal flag short circuits this override
     if opts.get(r'normal'):
-        return orig(ui, repo, matcher, prefix, explicitonly, **opts)
+        return orig(ui, repo, matcher, prefix, uipathfn, explicitonly, **opts)
 
-    ladded, lbad = addlargefiles(ui, repo, False, matcher, **opts)
+    ladded, lbad = addlargefiles(ui, repo, False, matcher, uipathfn, **opts)
     normalmatcher = composenormalfilematcher(matcher, repo[None].manifest(),
                                              ladded)
-    bad = orig(ui, repo, normalmatcher, prefix, explicitonly, **opts)
+    bad = orig(ui, repo, normalmatcher, prefix, uipathfn, explicitonly, **opts)
 
     bad.extend(f for f in lbad)
     return bad
 
 @eh.wrapfunction(cmdutil, 'remove')
-def cmdutilremove(orig, ui, repo, matcher, prefix, after, force, subrepos,
-                  dryrun):
+def cmdutilremove(orig, ui, repo, matcher, prefix, uipathfn, after, force,
+                  subrepos, dryrun):
     normalmatcher = composenormalfilematcher(matcher, repo[None].manifest())
-    result = orig(ui, repo, normalmatcher, prefix, after, force, subrepos,
-                  dryrun)
-    return removelargefiles(ui, repo, False, matcher, dryrun, after=after,
-                            force=force) or result
+    result = orig(ui, repo, normalmatcher, prefix, uipathfn, after, force,
+                  subrepos, dryrun)
+    return removelargefiles(ui, repo, False, matcher, uipathfn, dryrun,
+                            after=after, force=force) or result
 
 @eh.wrapfunction(subrepo.hgsubrepo, 'status')
 def overridestatusfn(orig, repo, rev2, **opts):
@@ -326,7 +272,7 @@
 
 @eh.wrapcommand('log')
 def overridelog(orig, ui, repo, *pats, **opts):
-    def overridematchandpats(ctx, pats=(), opts=None, globbed=False,
+    def overridematchandpats(orig, ctx, pats=(), opts=None, globbed=False,
             default='relpath', badfn=None):
         """Matcher that merges root directory with .hglf, suitable for log.
         It is still possible to match .hglf directly.
@@ -335,8 +281,7 @@
         """
         if opts is None:
             opts = {}
-        matchandpats = oldmatchandpats(ctx, pats, opts, globbed, default,
-                                       badfn=badfn)
+        matchandpats = orig(ctx, pats, opts, globbed, default, badfn=badfn)
         m, p = copy.copy(matchandpats)
 
         if m.always():
@@ -356,9 +301,10 @@
                 return kindpat[0] + ':' + tostandin(kindpat[1])
             return tostandin(kindpat[1])
 
-        if m._cwd:
+        cwd = repo.getcwd()
+        if cwd:
             hglf = lfutil.shortname
-            back = util.pconvert(m.rel(hglf)[:-len(hglf)])
+            back = util.pconvert(repo.pathto(hglf)[:-len(hglf)])
 
             def tostandin(f):
                 # The file may already be a standin, so truncate the back
@@ -371,10 +317,10 @@
                 # path to the root before building the standin.  Otherwise cwd
                 # is somewhere in the repo, relative to root, and needs to be
                 # prepended before building the standin.
-                if os.path.isabs(m._cwd):
+                if os.path.isabs(cwd):
                     f = f[len(back):]
                 else:
-                    f = m._cwd + '/' + f
+                    f = cwd + '/' + f
                 return back + lfutil.standin(f)
         else:
             def tostandin(f):
@@ -416,20 +362,18 @@
     # (2) to determine what files to print out diffs for.
     # The magic matchandpats override should be used for case (1) but not for
     # case (2).
-    def overridemakefilematcher(repo, pats, opts, badfn=None):
+    oldmatchandpats = scmutil.matchandpats
+    def overridemakefilematcher(orig, repo, pats, opts, badfn=None):
         wctx = repo[None]
         match, pats = oldmatchandpats(wctx, pats, opts, badfn=badfn)
         return lambda ctx: match
 
-    oldmatchandpats = installmatchandpatsfn(overridematchandpats)
-    oldmakefilematcher = logcmdutil._makenofollowfilematcher
-    setattr(logcmdutil, '_makenofollowfilematcher', overridemakefilematcher)
-
-    try:
+    wrappedmatchandpats = extensions.wrappedfunction(scmutil, 'matchandpats',
+                                                     overridematchandpats)
+    wrappedmakefilematcher = extensions.wrappedfunction(
+        logcmdutil, '_makenofollowfilematcher', overridemakefilematcher)
+    with wrappedmatchandpats, wrappedmakefilematcher:
         return orig(ui, repo, *pats, **opts)
-    finally:
-        restorematchandpatsfn()
-        setattr(logcmdutil, '_makenofollowfilematcher', oldmakefilematcher)
 
 @eh.wrapcommand('verify',
     opts=[('', 'large', None,
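
The largest cleanup in this file replaces the hand-rolled
``installmatchfn``/``restorematchfn`` pairs with
``extensions.wrappedfunction``, a context manager that installs a wrapper
around ``module.attr`` on entry and restores the original on exit, even when
an exception escapes. A self-contained equivalent (a sketch under that
assumption, not Mercurial's implementation)::

   import contextlib

   @contextlib.contextmanager
   def wrappedfunction(container, name, wrapper):
       """Temporarily replace container.name with wrapper(orig, *args, **kw)."""
       orig = getattr(container, name)
       def bound(*args, **kwargs):
           return wrapper(orig, *args, **kwargs)
       setattr(container, name, bound)
       try:
           yield
       finally:
           setattr(container, name, orig)

   import math

   def loudsqrt(orig, x):
       print('sqrt(%r)' % x)
       return orig(x)

   with wrappedfunction(math, 'sqrt', loudsqrt):
       assert math.sqrt(4.0) == 2.0  # wrapped: prints, then delegates
   assert math.sqrt(9.0) == 3.0      # restored: no print

Note how the wrappers in the hunks (``overridematchandpats``,
``overridemakefilematcher``, and the ones below) take the original function
as their first argument, matching this protocol.
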
@@ -636,17 +580,22 @@
     # match largefiles and run it again.
     nonormalfiles = False
     nolfiles = False
-    installnormalfilesmatchfn(repo[None].manifest())
-    try:
-        result = orig(ui, repo, pats, opts, rename)
-    except error.Abort as e:
-        if pycompat.bytestr(e) != _('no files to copy'):
-            raise e
-        else:
-            nonormalfiles = True
-        result = 0
-    finally:
-        restorematchfn()
+    manifest = repo[None].manifest()
+    def normalfilesmatchfn(orig, ctx, pats=(), opts=None, globbed=False,
+        default='relpath', badfn=None):
+        if opts is None:
+            opts = {}
+        match = orig(ctx, pats, opts, globbed, default, badfn=badfn)
+        return composenormalfilematcher(match, manifest)
+    with extensions.wrappedfunction(scmutil, 'match', normalfilesmatchfn):
+        try:
+            result = orig(ui, repo, pats, opts, rename)
+        except error.Abort as e:
+            if pycompat.bytestr(e) != _('no files to copy'):
+                raise e
+            else:
+                nonormalfiles = True
+            result = 0
 
     # The first rename can cause our current working directory to be removed.
     # In that case there is nothing left to copy/rename so just quit.
@@ -672,7 +621,7 @@
         wlock = repo.wlock()
 
         manifest = repo[None].manifest()
-        def overridematch(ctx, pats=(), opts=None, globbed=False,
+        def overridematch(orig, ctx, pats=(), opts=None, globbed=False,
                 default='relpath', badfn=None):
             if opts is None:
                 opts = {}
@@ -684,7 +633,7 @@
                     newpats.append(pat.replace(lfutil.shortname, ''))
                 else:
                     newpats.append(pat)
-            match = oldmatch(ctx, newpats, opts, globbed, default, badfn=badfn)
+            match = orig(ctx, newpats, opts, globbed, default, badfn=badfn)
             m = copy.copy(match)
             lfile = lambda f: lfutil.standin(f) in manifest
             m._files = [lfutil.standin(f) for f in m._files if lfile(f)]
@@ -698,7 +647,6 @@
                         None)
             m.matchfn = matchfn
             return m
-        oldmatch = installmatchfn(overridematch)
         listpats = []
         for pat in pats:
             if matchmod.patkind(pat) is not None:
@@ -706,23 +654,19 @@
             else:
                 listpats.append(makestandin(pat))
 
-        try:
-            origcopyfile = util.copyfile
-            copiedfiles = []
-            def overridecopyfile(src, dest, *args, **kwargs):
-                if (lfutil.shortname in src and
-                    dest.startswith(repo.wjoin(lfutil.shortname))):
-                    destlfile = dest.replace(lfutil.shortname, '')
-                    if not opts['force'] and os.path.exists(destlfile):
-                        raise IOError('',
-                            _('destination largefile already exists'))
-                copiedfiles.append((src, dest))
-                origcopyfile(src, dest, *args, **kwargs)
-
-            util.copyfile = overridecopyfile
-            result += orig(ui, repo, listpats, opts, rename)
-        finally:
-            util.copyfile = origcopyfile
+        copiedfiles = []
+        def overridecopyfile(orig, src, dest, *args, **kwargs):
+            if (lfutil.shortname in src and
+                dest.startswith(repo.wjoin(lfutil.shortname))):
+                destlfile = dest.replace(lfutil.shortname, '')
+                if not opts['force'] and os.path.exists(destlfile):
+                    raise IOError('',
+                                  _('destination largefile already exists'))
+            copiedfiles.append((src, dest))
+            orig(src, dest, *args, **kwargs)
+        with extensions.wrappedfunction(util, 'copyfile', overridecopyfile):
+            with extensions.wrappedfunction(scmutil, 'match', overridematch):
+                result += orig(ui, repo, listpats, opts, rename)
 
         lfdirstate = lfutil.openlfdirstate(ui, repo)
         for (src, dest) in copiedfiles:
@@ -752,7 +696,6 @@
         else:
             nolfiles = True
     finally:
-        restorematchfn()
         wlock.release()
 
     if nolfiles and nonormalfiles:
@@ -787,11 +730,11 @@
 
         oldstandins = lfutil.getstandinsstate(repo)
 
-        def overridematch(mctx, pats=(), opts=None, globbed=False,
+        def overridematch(orig, mctx, pats=(), opts=None, globbed=False,
                 default='relpath', badfn=None):
             if opts is None:
                 opts = {}
-            match = oldmatch(mctx, pats, opts, globbed, default, badfn=badfn)
+            match = orig(mctx, pats, opts, globbed, default, badfn=badfn)
             m = copy.copy(match)
 
             # revert supports recursing into subrepos, and though largefiles
@@ -822,11 +765,8 @@
                 return origmatchfn(f)
             m.matchfn = matchfn
             return m
-        oldmatch = installmatchfn(overridematch)
-        try:
+        with extensions.wrappedfunction(scmutil, 'match', overridematch):
             orig(ui, repo, ctx, parents, *pats, **opts)
-        finally:
-            restorematchfn()
 
         newstandins = lfutil.getstandinsstate(repo)
         filelist = lfutil.getlfilestoupdate(oldstandins, newstandins)
@@ -1048,8 +988,9 @@
         for subpath in sorted(ctx.substate):
             sub = ctx.workingsub(subpath)
             submatch = matchmod.subdirmatcher(subpath, match)
+            subprefix = prefix + subpath + '/'
             sub._repo.lfstatus = True
-            sub.archive(archiver, prefix, submatch)
+            sub.archive(archiver, subprefix, submatch)
 
     archiver.done()
 
@@ -1075,7 +1016,7 @@
         if decode:
             data = repo._repo.wwritedata(name, data)
 
-        archiver.addfile(prefix + repo._path + '/' + name, mode, islink, data)
+        archiver.addfile(prefix + name, mode, islink, data)
 
     for f in ctx:
         ff = ctx.flags(f)
@@ -1101,8 +1042,9 @@
     for subpath in sorted(ctx.substate):
         sub = ctx.workingsub(subpath)
         submatch = matchmod.subdirmatcher(subpath, match)
+        subprefix = prefix + subpath + '/'
         sub._repo.lfstatus = True
-        sub.archive(archiver, prefix + repo._path + '/', submatch, decode)
+        sub.archive(archiver, subprefix, submatch, decode)
 
 # If a largefile is modified, the change is not reflected in its
 # standin until a commit. cmdutil.bailifchanged() raises an exception
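
Both archive overrides now extend the prefix with the subrepo path before
recursing (``subprefix = prefix + subpath + '/'``) instead of re-deriving it
from ``repo._path``, so nested subrepositories land under the correct
directory of the archive. The composition is simply::

   def subprefix(prefix, subpath):
       return prefix + subpath + '/'

   assert subprefix('proj/', 'vendor/lib') == 'proj/vendor/lib/'
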
@@ -1126,11 +1068,11 @@
         repo.lfstatus = False
 
 @eh.wrapfunction(cmdutil, 'forget')
-def cmdutilforget(orig, ui, repo, match, prefix, explicitonly, dryrun,
+def cmdutilforget(orig, ui, repo, match, prefix, uipathfn, explicitonly, dryrun,
                   interactive):
     normalmatcher = composenormalfilematcher(match, repo[None].manifest())
-    bad, forgot = orig(ui, repo, normalmatcher, prefix, explicitonly, dryrun,
-                       interactive)
+    bad, forgot = orig(ui, repo, normalmatcher, prefix, uipathfn, explicitonly,
+                       dryrun, interactive)
     m = composelargefilematcher(match, repo[None].manifest())
 
     try:
@@ -1146,12 +1088,12 @@
         fstandin = lfutil.standin(f)
         if fstandin not in repo.dirstate and not repo.wvfs.isdir(fstandin):
             ui.warn(_('not removing %s: file is already untracked\n')
-                    % m.rel(f))
+                    % uipathfn(f))
             bad.append(f)
 
     for f in forget:
         if ui.verbose or not m.exact(f):
-            ui.status(_('removing %s\n') % m.rel(f))
+            ui.status(_('removing %s\n') % uipathfn(f))
 
     # Need to lock because standin files are deleted then removed from the
     # repository and we could race in-between.
@@ -1273,16 +1215,15 @@
         repo.lfstatus = False
 
 @eh.wrapfunction(scmutil, 'addremove')
-def scmutiladdremove(orig, repo, matcher, prefix, opts=None):
+def scmutiladdremove(orig, repo, matcher, prefix, uipathfn, opts=None):
     if opts is None:
         opts = {}
     if not lfutil.islfilesrepo(repo):
-        return orig(repo, matcher, prefix, opts)
+        return orig(repo, matcher, prefix, uipathfn, opts)
     # Get the list of missing largefiles so we can remove them
     lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
-    unsure, s = lfdirstate.status(matchmod.always(repo.root, repo.getcwd()),
-                                  subrepos=[], ignored=False, clean=False,
-                                  unknown=False)
+    unsure, s = lfdirstate.status(matchmod.always(), subrepos=[],
+                                  ignored=False, clean=False, unknown=False)
 
     # Call into the normal remove code, but let the original addremove handle
     # the removal of the standin.  Monkey patching here makes sure
@@ -1298,17 +1239,17 @@
         matchfn = m.matchfn
         m.matchfn = lambda f: f in s.deleted and matchfn(f)
 
-        removelargefiles(repo.ui, repo, True, m, opts.get('dry_run'),
+        removelargefiles(repo.ui, repo, True, m, uipathfn, opts.get('dry_run'),
                          **pycompat.strkwargs(opts))
     # Call into the normal add code, and any files that *should* be added as
     # largefiles will be
-    added, bad = addlargefiles(repo.ui, repo, True, matcher,
+    added, bad = addlargefiles(repo.ui, repo, True, matcher, uipathfn,
                                **pycompat.strkwargs(opts))
     # Now that we've handled largefiles, hand off to the original addremove
     # function to take care of the rest.  Make sure it doesn't do anything with
     # largefiles by passing a matcher that will ignore them.
     matcher = composenormalfilematcher(matcher, repo[None].manifest(), added)
-    return orig(repo, matcher, prefix, opts)
+    return orig(repo, matcher, prefix, uipathfn, opts)
 
 # Calling purge with --all will cause the largefiles to be deleted.
 # Override repo.status to prevent this from happening.
@@ -1472,10 +1413,8 @@
         # (*1) deprecated, but used internally (e.g: "rebase --collapse")
 
         lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
-        unsure, s = lfdirstate.status(matchmod.always(repo.root,
-                                                    repo.getcwd()),
-                                      subrepos=[], ignored=False,
-                                      clean=True, unknown=False)
+        unsure, s = lfdirstate.status(matchmod.always(), subrepos=[],
+                                      ignored=False, clean=True, unknown=False)
         oldclean = set(s.clean)
         pctx = repo['.']
         dctx = repo[node]
--- a/hgext/largefiles/reposetup.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/largefiles/reposetup.py	Wed Apr 17 13:41:18 2019 -0400
@@ -103,7 +103,7 @@
             parentworking = working and ctx1 == self['.']
 
             if match is None:
-                match = matchmod.always(self.root, self.getcwd())
+                match = matchmod.always()
 
             wlock = None
             try:
@@ -174,8 +174,8 @@
                             if standin not in ctx1:
                                 # from second parent
                                 modified.append(lfile)
-                            elif lfutil.readasstandin(ctx1[standin]) \
-                                    != lfutil.hashfile(self.wjoin(lfile)):
+                            elif (lfutil.readasstandin(ctx1[standin])
+                                  != lfutil.hashfile(self.wjoin(lfile))):
                                 modified.append(lfile)
                             else:
                                 if listclean:
--- a/hgext/largefiles/storefactory.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/largefiles/storefactory.py	Wed Apr 17 13:41:18 2019 -0400
@@ -43,7 +43,6 @@
             path, _branches = hg.parseurl(path)
             remote = hg.peer(repo or ui, {}, path)
         elif path == 'default-push' or path == 'default':
-            path = ''
             remote = repo
         else:
             path, _branches = hg.parseurl(path)
--- a/hgext/lfs/blobstore.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/lfs/blobstore.py	Wed Apr 17 13:41:18 2019 -0400
@@ -42,7 +42,7 @@
     def join(self, path):
         """split the path at first two characters, like: XX/XXXXX..."""
         if not _lfsre.match(path):
-            raise error.ProgrammingError('unexpected lfs path: %s' % path)
+            raise error.ProgrammingError(b'unexpected lfs path: %s' % path)
         return super(lfsvfs, self).join(path[0:2], path[2:])
 
     def walk(self, path=None, onerror=None):
@@ -56,7 +56,8 @@
         prefixlen = len(pathutil.normasprefix(root))
         oids = []
 
-        for dirpath, dirs, files in os.walk(self.reljoin(self.base, path or ''),
+        for dirpath, dirs, files in os.walk(self.reljoin(self.base, path
+                                                         or b''),
                                             onerror=onerror):
             dirpath = dirpath[prefixlen:]
 
@@ -79,10 +80,11 @@
         # self.vfs.  Raise the same error as a normal vfs when asked to read a
         # file that doesn't exist.  The only difference is the full file path
         # isn't available in the error.
-        raise IOError(errno.ENOENT, '%s: No such file or directory' % oid)
+        raise IOError(errno.ENOENT,
+                      pycompat.sysstr(b'%s: No such file or directory' % oid))
 
     def walk(self, path=None, onerror=None):
-        return ('', [], [])
+        return (b'', [], [])
 
     def write(self, oid, data):
         pass
@@ -123,13 +125,13 @@
     """
 
     def __init__(self, repo):
-        fullpath = repo.svfs.join('lfs/objects')
+        fullpath = repo.svfs.join(b'lfs/objects')
         self.vfs = lfsvfs(fullpath)
 
-        if repo.ui.configbool('experimental', 'lfs.disableusercache'):
+        if repo.ui.configbool(b'experimental', b'lfs.disableusercache'):
             self.cachevfs = nullvfs()
         else:
-            usercache = lfutil._usercachedir(repo.ui, 'lfs')
+            usercache = lfutil._usercachedir(repo.ui, b'lfs')
             self.cachevfs = lfsvfs(usercache)
         self.ui = repo.ui
 
@@ -143,23 +145,23 @@
         # the usercache is the only place it _could_ be.  If not present, the
         # missing file msg here will indicate the local repo, not the usercache.
         if self.cachevfs.exists(oid):
-            return self.cachevfs(oid, 'rb')
+            return self.cachevfs(oid, b'rb')
 
-        return self.vfs(oid, 'rb')
+        return self.vfs(oid, b'rb')
 
     def download(self, oid, src):
         """Read the blob from the remote source in chunks, verify the content,
         and write to this local blobstore."""
         sha256 = hashlib.sha256()
 
-        with self.vfs(oid, 'wb', atomictemp=True) as fp:
+        with self.vfs(oid, b'wb', atomictemp=True) as fp:
             for chunk in util.filechunkiter(src, size=1048576):
                 fp.write(chunk)
                 sha256.update(chunk)
 
             realoid = node.hex(sha256.digest())
             if realoid != oid:
-                raise LfsCorruptionError(_('corrupt remote lfs object: %s')
+                raise LfsCorruptionError(_(b'corrupt remote lfs object: %s')
                                          % oid)
 
         self._linktousercache(oid)
@@ -170,7 +172,7 @@
         This should only be called from the filelog during a commit or similar.
         As such, there is no need to verify the data.  Imports from a remote
         store must use ``download()`` instead."""
-        with self.vfs(oid, 'wb', atomictemp=True) as fp:
+        with self.vfs(oid, b'wb', atomictemp=True) as fp:
             fp.write(data)
 
         self._linktousercache(oid)
@@ -186,7 +188,7 @@
         """
         if (not isinstance(self.cachevfs, nullvfs)
             and not self.vfs.exists(oid)):
-            self.ui.note(_('lfs: found %s in the usercache\n') % oid)
+            self.ui.note(_(b'lfs: found %s in the usercache\n') % oid)
             lfutil.link(self.cachevfs.join(oid), self.vfs.join(oid))
 
     def _linktousercache(self, oid):
@@ -194,7 +196,7 @@
         # the local store on success, but truncate, write and link on failure?
         if (not self.cachevfs.exists(oid)
             and not isinstance(self.cachevfs, nullvfs)):
-            self.ui.note(_('lfs: adding %s to the usercache\n') % oid)
+            self.ui.note(_(b'lfs: adding %s to the usercache\n') % oid)
             lfutil.link(self.vfs.join(oid), self.cachevfs.join(oid))
 
     def read(self, oid, verify=True):
@@ -208,10 +210,10 @@
             # give more useful info about the corruption- simply don't add the
             # hardlink.
             if verify or node.hex(hashlib.sha256(blob).digest()) == oid:
-                self.ui.note(_('lfs: found %s in the usercache\n') % oid)
+                self.ui.note(_(b'lfs: found %s in the usercache\n') % oid)
                 lfutil.link(self.cachevfs.join(oid), self.vfs.join(oid))
         else:
-            self.ui.note(_('lfs: found %s in the local lfs store\n') % oid)
+            self.ui.note(_(b'lfs: found %s in the local lfs store\n') % oid)
             blob = self._read(self.vfs, oid, verify)
         return blob
 
@@ -262,26 +264,45 @@
     else:
         return stringutil.forcebytestr(urlerror)
 
+class lfsauthhandler(util.urlreq.basehandler):
+    handler_order = 480  # Before HTTPDigestAuthHandler (== 490)
+
+    def http_error_401(self, req, fp, code, msg, headers):
+        """Enforces that any authentication performed is HTTP Basic
+        Authentication.  No authentication is also acceptable.
+        """
+        authreq = headers.get(r'www-authenticate', None)
+        if authreq:
+            scheme = authreq.split()[0]
+
+            if scheme.lower() != r'basic':
+                msg = _(b'the server must support Basic Authentication')
+                raise util.urlerr.httperror(req.get_full_url(), code,
+                                            encoding.strfromlocal(msg), headers,
+                                            fp)
+        return None
+
 class _gitlfsremote(object):
 
     def __init__(self, repo, url):
         ui = repo.ui
         self.ui = ui
         baseurl, authinfo = url.authinfo()
-        self.baseurl = baseurl.rstrip('/')
-        useragent = repo.ui.config('experimental', 'lfs.user-agent')
+        self.baseurl = baseurl.rstrip(b'/')
+        useragent = repo.ui.config(b'experimental', b'lfs.user-agent')
         if not useragent:
-            useragent = 'git-lfs/2.3.4 (Mercurial %s)' % util.version()
+            useragent = b'git-lfs/2.3.4 (Mercurial %s)' % util.version()
         self.urlopener = urlmod.opener(ui, authinfo, useragent)
-        self.retry = ui.configint('lfs', 'retry')
+        self.urlopener.add_handler(lfsauthhandler())
+        self.retry = ui.configint(b'lfs', b'retry')
 
     def writebatch(self, pointers, fromstore):
         """Batch upload from local to remote blobstore."""
-        self._batch(_deduplicate(pointers), fromstore, 'upload')
+        self._batch(_deduplicate(pointers), fromstore, b'upload')
 
     def readbatch(self, pointers, tostore):
         """Batch download from remote to local blostore."""
-        self._batch(_deduplicate(pointers), tostore, 'download')
+        self._batch(_deduplicate(pointers), tostore, b'download')
 
     def _batchrequest(self, pointers, action):
         """Get metadata about objects pointed by pointers for given action
@@ -289,52 +310,63 @@
         Return decoded JSON object like {'objects': [{'oid': '', 'size': 1}]}
         See https://github.com/git-lfs/git-lfs/blob/master/docs/api/batch.md
         """
-        objects = [{'oid': p.oid(), 'size': p.size()} for p in pointers]
-        requestdata = json.dumps({
-            'objects': objects,
-            'operation': action,
-        })
-        url = '%s/objects/batch' % self.baseurl
-        batchreq = util.urlreq.request(url, data=requestdata)
-        batchreq.add_header('Accept', 'application/vnd.git-lfs+json')
-        batchreq.add_header('Content-Type', 'application/vnd.git-lfs+json')
+        objects = [{r'oid': pycompat.strurl(p.oid()),
+                    r'size': p.size()} for p in pointers]
+        requestdata = pycompat.bytesurl(json.dumps({
+            r'objects': objects,
+            r'operation': pycompat.strurl(action),
+        }))
+        url = b'%s/objects/batch' % self.baseurl
+        batchreq = util.urlreq.request(pycompat.strurl(url), data=requestdata)
+        batchreq.add_header(r'Accept', r'application/vnd.git-lfs+json')
+        batchreq.add_header(r'Content-Type', r'application/vnd.git-lfs+json')
         try:
             with contextlib.closing(self.urlopener.open(batchreq)) as rsp:
                 rawjson = rsp.read()
         except util.urlerr.httperror as ex:
             hints = {
-                400: _('check that lfs serving is enabled on %s and "%s" is '
-                       'supported') % (self.baseurl, action),
-                404: _('the "lfs.url" config may be used to override %s')
+                400: _(b'check that lfs serving is enabled on %s and "%s" is '
+                       b'supported') % (self.baseurl, action),
+                404: _(b'the "lfs.url" config may be used to override %s')
                        % self.baseurl,
             }
-            hint = hints.get(ex.code, _('api=%s, action=%s') % (url, action))
-            raise LfsRemoteError(_('LFS HTTP error: %s') % ex, hint=hint)
+            hint = hints.get(ex.code, _(b'api=%s, action=%s') % (url, action))
+            raise LfsRemoteError(
+                _(b'LFS HTTP error: %s') % stringutil.forcebytestr(ex),
+                hint=hint)
         except util.urlerr.urlerror as ex:
-            hint = (_('the "lfs.url" config may be used to override %s')
+            hint = (_(b'the "lfs.url" config may be used to override %s')
                     % self.baseurl)
-            raise LfsRemoteError(_('LFS error: %s') % _urlerrorreason(ex),
+            raise LfsRemoteError(_(b'LFS error: %s') % _urlerrorreason(ex),
                                  hint=hint)
         try:
             response = json.loads(rawjson)
         except ValueError:
-            raise LfsRemoteError(_('LFS server returns invalid JSON: %s')
-                                 % rawjson)
+            raise LfsRemoteError(_(b'LFS server returns invalid JSON: %s')
+                                 % rawjson.encode("utf-8"))
 
         if self.ui.debugflag:
-            self.ui.debug('Status: %d\n' % rsp.status)
+            self.ui.debug(b'Status: %d\n' % rsp.status)
             # lfs-test-server and hg serve return headers in different order
-            self.ui.debug('%s\n'
-                          % '\n'.join(sorted(str(rsp.info()).splitlines())))
+            headers = pycompat.bytestr(rsp.info()).strip()
+            self.ui.debug(b'%s\n'
+                          % b'\n'.join(sorted(headers.splitlines())))
 
-            if 'objects' in response:
-                response['objects'] = sorted(response['objects'],
-                                             key=lambda p: p['oid'])
-            self.ui.debug('%s\n'
-                          % json.dumps(response, indent=2,
-                                       separators=('', ': '), sort_keys=True))
+            if r'objects' in response:
+                response[r'objects'] = sorted(response[r'objects'],
+                                              key=lambda p: p[r'oid'])
+            self.ui.debug(b'%s\n'
+                          % pycompat.bytesurl(
+                              json.dumps(response, indent=2,
+                                         separators=(r'', r': '),
+                                         sort_keys=True)))
 
-        return response
+        def encodestr(x):
+            if isinstance(x, pycompat.unicode):
+                return x.encode(u'utf-8')
+            return x
+
+        return pycompat.rapply(encodestr, response)
 
     def _checkforservererror(self, pointers, responses, action):
         """Scans errors from objects
@@ -345,34 +377,34 @@
             # server implementation (ex. lfs-test-server)  does not set "error"
             # but just removes "download" from "actions". Treat that case
             # as the same as 404 error.
-            if 'error' not in response:
-                if (action == 'download'
-                    and action not in response.get('actions', [])):
+            if b'error' not in response:
+                if (action == b'download'
+                    and action not in response.get(b'actions', [])):
                     code = 404
                 else:
                     continue
             else:
                 # An error dict without a code doesn't make much sense, so
                 # treat as a server error.
-                code = response.get('error').get('code', 500)
+                code = response.get(b'error').get(b'code', 500)
 
             ptrmap = {p.oid(): p for p in pointers}
-            p = ptrmap.get(response['oid'], None)
+            p = ptrmap.get(response[b'oid'], None)
             if p:
-                filename = getattr(p, 'filename', 'unknown')
+                filename = getattr(p, 'filename', b'unknown')
                 errors = {
-                    404: 'The object does not exist',
-                    410: 'The object was removed by the owner',
-                    422: 'Validation error',
-                    500: 'Internal server error',
+                    404: b'The object does not exist',
+                    410: b'The object was removed by the owner',
+                    422: b'Validation error',
+                    500: b'Internal server error',
                 }
-                msg = errors.get(code, 'status code %d' % code)
-                raise LfsRemoteError(_('LFS server error for "%s": %s')
+                msg = errors.get(code, b'status code %d' % code)
+                raise LfsRemoteError(_(b'LFS server error for "%s": %s')
                                      % (filename, msg))
             else:
                 raise LfsRemoteError(
-                    _('LFS server error. Unsolicited response for oid %s')
-                    % response['oid'])
+                    _(b'LFS server error. Unsolicited response for oid %s')
+                    % response[b'oid'])
 
     def _extractobjects(self, response, pointers, action):
         """extract objects from response of the batch API
@@ -382,12 +414,13 @@
         raise if any object has an error
         """
         # Scan errors from objects - fail early
-        objects = response.get('objects', [])
+        objects = response.get(b'objects', [])
         self._checkforservererror(pointers, objects, action)
 
         # Filter objects with given action. Practically, this skips uploading
         # objects which exist in the server.
-        filteredobjects = [o for o in objects if action in o.get('actions', [])]
+        filteredobjects = [o for o in objects
+                           if action in o.get(b'actions', [])]
 
         return filteredobjects
 
@@ -401,36 +434,37 @@
         See https://github.com/git-lfs/git-lfs/blob/master/docs/api/\
         basic-transfers.md
         """
-        oid = pycompat.bytestr(obj['oid'])
+        oid = obj[b'oid']
+        href = obj[b'actions'][action].get(b'href')
+        headers = obj[b'actions'][action].get(b'header', {}).items()
 
-        href = pycompat.bytestr(obj['actions'][action].get('href'))
-        headers = obj['actions'][action].get('header', {}).items()
-
-        request = util.urlreq.request(href)
-        if action == 'upload':
+        request = util.urlreq.request(pycompat.strurl(href))
+        if action == b'upload':
             # If uploading blobs, read data from local blobstore.
             if not localstore.verify(oid):
-                raise error.Abort(_('detected corrupt lfs object: %s') % oid,
-                                  hint=_('run hg verify'))
+                raise error.Abort(_(b'detected corrupt lfs object: %s') % oid,
+                                  hint=_(b'run hg verify'))
             request.data = filewithprogress(localstore.open(oid), None)
-            request.get_method = lambda: 'PUT'
-            request.add_header('Content-Type', 'application/octet-stream')
+            request.get_method = lambda: r'PUT'
+            request.add_header(r'Content-Type', r'application/octet-stream')
+            request.add_header(r'Content-Length', len(request.data))
 
         for k, v in headers:
-            request.add_header(k, v)
+            request.add_header(pycompat.strurl(k), pycompat.strurl(v))
 
         response = b''
         try:
             with contextlib.closing(self.urlopener.open(request)) as req:
                 ui = self.ui  # Shorten debug lines
                 if self.ui.debugflag:
-                    ui.debug('Status: %d\n' % req.status)
+                    ui.debug(b'Status: %d\n' % req.status)
                     # lfs-test-server and hg serve return headers in different
                     # order
-                    ui.debug('%s\n'
-                             % '\n'.join(sorted(str(req.info()).splitlines())))
+                    headers = pycompat.bytestr(req.info()).strip()
+                    ui.debug(b'%s\n'
+                             % b'\n'.join(sorted(headers.splitlines())))
 
-                if action == 'download':
+                if action == b'download':
                     # If downloading blobs, store downloaded data to local
                     # blobstore
                     localstore.download(oid, req)
@@ -441,65 +475,65 @@
                             break
                         response += data
                     if response:
-                        ui.debug('lfs %s response: %s' % (action, response))
+                        ui.debug(b'lfs %s response: %s' % (action, response))
         except util.urlerr.httperror as ex:
             if self.ui.debugflag:
-                self.ui.debug('%s: %s\n' % (oid, ex.read()))
-            raise LfsRemoteError(_('LFS HTTP error: %s (oid=%s, action=%s)')
-                                 % (ex, oid, action))
+                self.ui.debug(b'%s: %s\n' % (oid, ex.read())) # XXX: also bytes?
+            raise LfsRemoteError(_(b'LFS HTTP error: %s (oid=%s, action=%s)')
+                                 % (stringutil.forcebytestr(ex), oid, action))
         except util.urlerr.urlerror as ex:
-            hint = (_('attempted connection to %s')
-                    % util.urllibcompat.getfullurl(request))
-            raise LfsRemoteError(_('LFS error: %s') % _urlerrorreason(ex),
+            hint = (_(b'attempted connection to %s')
+                    % pycompat.bytesurl(util.urllibcompat.getfullurl(request)))
+            raise LfsRemoteError(_(b'LFS error: %s') % _urlerrorreason(ex),
                                  hint=hint)
 
     def _batch(self, pointers, localstore, action):
-        if action not in ['upload', 'download']:
-            raise error.ProgrammingError('invalid Git-LFS action: %s' % action)
+        if action not in [b'upload', b'download']:
+            raise error.ProgrammingError(b'invalid Git-LFS action: %s' % action)
 
         response = self._batchrequest(pointers, action)
         objects = self._extractobjects(response, pointers, action)
-        total = sum(x.get('size', 0) for x in objects)
+        total = sum(x.get(b'size', 0) for x in objects)
         sizes = {}
         for obj in objects:
-            sizes[obj.get('oid')] = obj.get('size', 0)
-        topic = {'upload': _('lfs uploading'),
-                 'download': _('lfs downloading')}[action]
+            sizes[obj.get(b'oid')] = obj.get(b'size', 0)
+        topic = {b'upload': _(b'lfs uploading'),
+                 b'download': _(b'lfs downloading')}[action]
         if len(objects) > 1:
-            self.ui.note(_('lfs: need to transfer %d objects (%s)\n')
+            self.ui.note(_(b'lfs: need to transfer %d objects (%s)\n')
                          % (len(objects), util.bytecount(total)))
 
         def transfer(chunk):
             for obj in chunk:
-                objsize = obj.get('size', 0)
+                objsize = obj.get(b'size', 0)
                 if self.ui.verbose:
-                    if action == 'download':
-                        msg = _('lfs: downloading %s (%s)\n')
-                    elif action == 'upload':
-                        msg = _('lfs: uploading %s (%s)\n')
-                    self.ui.note(msg % (obj.get('oid'),
+                    if action == b'download':
+                        msg = _(b'lfs: downloading %s (%s)\n')
+                    elif action == b'upload':
+                        msg = _(b'lfs: uploading %s (%s)\n')
+                    self.ui.note(msg % (obj.get(b'oid'),
                                  util.bytecount(objsize)))
                 retry = self.retry
                 while True:
                     try:
                         self._basictransfer(obj, action, localstore)
-                        yield 1, obj.get('oid')
+                        yield 1, obj.get(b'oid')
                         break
                     except socket.error as ex:
                         if retry > 0:
                             self.ui.note(
-                                _('lfs: failed: %r (remaining retry %d)\n')
-                                % (ex, retry))
+                                _(b'lfs: failed: %r (remaining retry %d)\n')
+                                % (stringutil.forcebytestr(ex), retry))
                             retry -= 1
                             continue
                         raise
 
         # Until https multiplexing gets sorted out
-        if self.ui.configbool('experimental', 'lfs.worker-enable'):
+        if self.ui.configbool(b'experimental', b'lfs.worker-enable'):
             oids = worker.worker(self.ui, 0.1, transfer, (),
-                                 sorted(objects, key=lambda o: o.get('oid')))
+                                 sorted(objects, key=lambda o: o.get(b'oid')))
         else:
-            oids = transfer(sorted(objects, key=lambda o: o.get('oid')))
+            oids = transfer(sorted(objects, key=lambda o: o.get(b'oid')))
 
         with self.ui.makeprogress(topic, total=total) as progress:
             progress.update(0)
@@ -509,14 +543,14 @@
                 processed += sizes[oid]
                 blobs += 1
                 progress.update(processed)
-                self.ui.note(_('lfs: processed: %s\n') % oid)
+                self.ui.note(_(b'lfs: processed: %s\n') % oid)
 
         if blobs > 0:
-            if action == 'upload':
-                self.ui.status(_('lfs: uploaded %d files (%s)\n')
+            if action == b'upload':
+                self.ui.status(_(b'lfs: uploaded %d files (%s)\n')
                                % (blobs, util.bytecount(processed)))
-            elif action == 'download':
-                self.ui.status(_('lfs: downloaded %d files (%s)\n')
+            elif action == b'download':
+                self.ui.status(_(b'lfs: downloaded %d files (%s)\n')
                                % (blobs, util.bytecount(processed)))
 
     def __del__(self):
@@ -531,18 +565,18 @@
     """Dummy store storing blobs to temp directory."""
 
     def __init__(self, repo, url):
-        fullpath = repo.vfs.join('lfs', url.path)
+        fullpath = repo.vfs.join(b'lfs', url.path)
         self.vfs = lfsvfs(fullpath)
 
     def writebatch(self, pointers, fromstore):
         for p in _deduplicate(pointers):
             content = fromstore.read(p.oid(), verify=True)
-            with self.vfs(p.oid(), 'wb', atomictemp=True) as fp:
+            with self.vfs(p.oid(), b'wb', atomictemp=True) as fp:
                 fp.write(content)
 
     def readbatch(self, pointers, tostore):
         for p in _deduplicate(pointers):
-            with self.vfs(p.oid(), 'rb') as fp:
+            with self.vfs(p.oid(), b'rb') as fp:
                 tostore.download(p.oid(), fp)
 
 class _nullremote(object):
@@ -570,13 +604,13 @@
         self._prompt()
 
     def _prompt(self):
-        raise error.Abort(_('lfs.url needs to be configured'))
+        raise error.Abort(_(b'lfs.url needs to be configured'))
 
 _storemap = {
-    'https': _gitlfsremote,
-    'http': _gitlfsremote,
-    'file': _dummyremote,
-    'null': _nullremote,
+    b'https': _gitlfsremote,
+    b'http': _gitlfsremote,
+    b'file': _dummyremote,
+    b'null': _nullremote,
     None: _promptremote,
 }
 
@@ -590,8 +624,8 @@
 def _verify(oid, content):
     realoid = node.hex(hashlib.sha256(content).digest())
     if realoid != oid:
-        raise LfsCorruptionError(_('detected corrupt lfs object: %s') % oid,
-                                 hint=_('run hg verify'))
+        raise LfsCorruptionError(_(b'detected corrupt lfs object: %s') % oid,
+                                 hint=_(b'run hg verify'))
 
 def remote(repo, remote=None):
     """remotestore factory. return a store in _storemap depending on config
@@ -603,7 +637,7 @@
 
     https://github.com/git-lfs/git-lfs/blob/master/docs/api/server-discovery.md
     """
-    lfsurl = repo.ui.config('lfs', 'url')
+    lfsurl = repo.ui.config(b'lfs', b'url')
     url = util.url(lfsurl or '')
     if lfsurl is None:
         if remote:
@@ -616,7 +650,7 @@
         else:
             # TODO: investigate 'paths.remote:lfsurl' style path customization,
             # and fall back to inferring from 'paths.remote' if unspecified.
-            path = repo.ui.config('paths', 'default') or ''
+            path = repo.ui.config(b'paths', b'default') or b''
 
         defaulturl = util.url(path)
 
@@ -628,11 +662,11 @@
             defaulturl.path = (defaulturl.path or b'') + b'.git/info/lfs'
 
             url = util.url(bytes(defaulturl))
-            repo.ui.note(_('lfs: assuming remote store: %s\n') % url)
+            repo.ui.note(_(b'lfs: assuming remote store: %s\n') % url)
 
     scheme = url.scheme
     if scheme not in _storemap:
-        raise error.Abort(_('lfs: unknown url scheme: %s') % scheme)
+        raise error.Abort(_(b'lfs: unknown url scheme: %s') % scheme)
     return _storemap[scheme](repo, url)
 
 class LfsRemoteError(error.StorageError):
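
The blobstore hunks above complete the byte-string migration for the
transfer loop and run the caught ``socket.error`` through
``stringutil.forcebytestr`` so the exception text interpolates safely
into a bytes message. A minimal sketch of the bounded-retry shape used
by ``transfer``, with hypothetical names (``send`` is a stand-in, not a
Mercurial API)::

    import socket

    def transfer_one(send, payload, retry=5):
        while True:
            try:
                return send(payload)    # success: stop retrying
            except socket.error:
                if retry > 0:
                    retry -= 1          # note the failure and try again
                    continue
                raise                   # retries exhausted: propagate
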
--- a/hgext/lfs/wireprotolfsserver.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/lfs/wireprotolfsserver.py	Wed Apr 17 13:41:18 2019 -0400
@@ -43,7 +43,7 @@
     if orig(rctx, req, res, checkperm):
         return True
 
-    if not rctx.repo.ui.configbool('experimental', 'lfs.serve'):
+    if not rctx.repo.ui.configbool(b'experimental', b'lfs.serve'):
         return False
 
     if not util.safehasattr(rctx.repo.svfs, 'lfslocalblobstore'):
@@ -54,7 +54,7 @@
 
     try:
         if req.dispatchpath == b'.git/info/lfs/objects/batch':
-            checkperm(rctx, req, 'pull')
+            checkperm(rctx, req, b'pull')
             return _processbatchrequest(rctx.repo, req, res)
         # TODO: reserve and use a path in the proposed http wireprotocol /api/
         #       namespace?
@@ -81,7 +81,7 @@
 def _logexception(req):
     """Write information about the current exception to wsgi.errors."""
     tb = pycompat.sysbytes(traceback.format_exc())
-    errorlog = req.rawenv[r'wsgi.errors']
+    errorlog = req.rawenv[b'wsgi.errors']
 
     uri = b''
     if req.apppath:
@@ -133,25 +133,27 @@
     lfsreq = json.loads(req.bodyfh.read())
 
     # If no transfer handlers are explicitly requested, 'basic' is assumed.
-    if 'basic' not in lfsreq.get('transfers', ['basic']):
+    if r'basic' not in lfsreq.get(r'transfers', [r'basic']):
         _sethttperror(res, HTTP_BAD_REQUEST,
                       b'Only the basic LFS transfer handler is supported')
         return True
 
-    operation = lfsreq.get('operation')
-    if operation not in ('upload', 'download'):
+    operation = lfsreq.get(r'operation')
+    operation = pycompat.bytestr(operation)
+
+    if operation not in (b'upload', b'download'):
         _sethttperror(res, HTTP_BAD_REQUEST,
                       b'Unsupported LFS transfer operation: %s' % operation)
         return True
 
     localstore = repo.svfs.lfslocalblobstore
 
-    objects = [p for p in _batchresponseobjects(req, lfsreq.get('objects', []),
+    objects = [p for p in _batchresponseobjects(req, lfsreq.get(r'objects', []),
                                                 operation, localstore)]
 
     rsp = {
-        'transfer': 'basic',
-        'objects': objects,
+        r'transfer': r'basic',
+        r'objects': objects,
     }
 
     res.status = hgwebcommon.statusmessage(HTTP_OK)
@@ -190,11 +192,12 @@
 
     for obj in objects:
         # Convert unicode to ASCII to create a filesystem path
-        oid = obj.get('oid').encode('ascii')
+        soid = obj.get(r'oid')
+        oid = soid.encode(r'ascii')
         rsp = {
-            'oid': oid,
-            'size': obj.get('size'),  # XXX: should this check the local size?
-            #'authenticated': True,
+            r'oid': soid,
+            r'size': obj.get(r'size'),  # XXX: should this check the local size?
+            #r'authenticated': True,
         }
 
         exists = True
@@ -209,7 +212,7 @@
         # verified as the file is streamed to the caller.
         try:
             verifies = store.verify(oid)
-            if verifies and action == 'upload':
+            if verifies and action == b'upload':
                 # The client will skip this upload, but make sure it remains
                 # available locally.
                 store.linkfromusercache(oid)
@@ -217,9 +220,9 @@
             if inst.errno != errno.ENOENT:
                 _logexception(req)
 
-                rsp['error'] = {
-                    'code': 500,
-                    'message': inst.strerror or 'Internal Server Server'
+                rsp[r'error'] = {
+                    r'code': 500,
+                    r'message': inst.strerror or r'Internal Server Error'
                 }
                 yield rsp
                 continue
@@ -228,19 +231,19 @@
 
         # Items are always listed for downloads.  They are dropped for uploads
         # IFF they already exist locally.
-        if action == 'download':
+        if action == b'download':
             if not exists:
-                rsp['error'] = {
-                    'code': 404,
-                    'message': "The object does not exist"
+                rsp[r'error'] = {
+                    r'code': 404,
+                    r'message': r"The object does not exist"
                 }
                 yield rsp
                 continue
 
             elif not verifies:
-                rsp['error'] = {
-                    'code': 422,   # XXX: is this the right code?
-                    'message': "The object is corrupt"
+                rsp[r'error'] = {
+                    r'code': 422,   # XXX: is this the right code?
+                    r'message': r"The object is corrupt"
                 }
                 yield rsp
                 continue
@@ -256,22 +259,22 @@
             # a gratuitous deviation from lfs-test-server in the test
             # output.
             hdr = {
-                'Accept': 'application/vnd.git-lfs'
+                r'Accept': r'application/vnd.git-lfs'
             }
 
-            auth = req.headers.get('Authorization', '')
-            if auth.startswith('Basic '):
-                hdr['Authorization'] = auth
+            auth = req.headers.get(b'Authorization', b'')
+            if auth.startswith(b'Basic '):
+                hdr[r'Authorization'] = pycompat.strurl(auth)
 
             return hdr
 
-        rsp['actions'] = {
-            '%s' % action: {
-                'href': '%s%s/.hg/lfs/objects/%s'
-                    % (req.baseurl, req.apppath, oid),
+        rsp[r'actions'] = {
+            r'%s' % pycompat.strurl(action): {
+                r'href': pycompat.strurl(b'%s%s/.hg/lfs/objects/%s'
+                    % (req.baseurl, req.apppath, oid)),
                 # datetime.isoformat() doesn't include the 'Z' suffix
-                "expires_at": expiresat.strftime('%Y-%m-%dT%H:%M:%SZ'),
-                'header': _buildheader(),
+                r"expires_at": expiresat.strftime(r'%Y-%m-%dT%H:%M:%SZ'),
+                r'header': _buildheader(),
             }
         }
 
@@ -297,7 +300,7 @@
         return True
 
     if method == b'PUT':
-        checkperm('upload')
+        checkperm(b'upload')
 
         # TODO: verify Content-Type?
 
@@ -324,7 +327,7 @@
 
         return True
     elif method == b'GET':
-        checkperm('pull')
+        checkperm(b'pull')
 
         res.status = hgwebcommon.statusmessage(HTTP_OK)
         res.headers[b'Content-Type'] = b'application/octet-stream'
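
Throughout the server hunks above, dictionaries headed for
``json.dumps`` use native-string (``r''``) keys and values while
everything touching the hgweb request object stays bytes: on Python 3,
``json.dumps`` rejects bytes keys and ``json.loads`` returns unicode,
so the JSON layer has to be native ``str`` on both Python versions. A
stdlib-only sketch of that boundary::

    import json

    oid = b'deadbeef' * 8                   # bytes on the Mercurial side
    rsp = {
        r'oid': oid.decode('ascii'),        # native str for the JSON layer
        r'size': 42,
    }
    body = json.dumps(rsp).encode('utf-8')  # bytes again for the HTTP body
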
--- a/hgext/mq.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/mq.py	Wed Apr 17 13:41:18 2019 -0400
@@ -738,10 +738,10 @@
         for f in sorted(files):
             absf = repo.wjoin(f)
             if os.path.lexists(absf):
+                absorig = scmutil.backuppath(self.ui, repo, f)
                 self.ui.note(_('saving current version of %s as %s\n') %
-                             (f, scmutil.origpath(self.ui, repo, f)))
-
-                absorig = scmutil.origpath(self.ui, repo, absf)
+                             (f, os.path.relpath(absorig)))
+
                 if copy:
                     util.copyfile(absf, absorig)
                 else:
@@ -970,7 +970,7 @@
                         repo.dirstate.remove(f)
                     for f in merged:
                         repo.dirstate.merge(f)
-                    p1, p2 = repo.dirstate.parents()
+                    p1 = repo.dirstate.p1()
                     repo.setparents(p1, merge)
 
             if all_files and '.hgsubstate' in all_files:
@@ -1181,7 +1181,7 @@
     def makepatchname(self, title, fallbackname):
         """Return a suitable filename for title, adding a suffix to make
         it unique in the existing list"""
-        namebase = re.sub('[\s\W_]+', '_', title.lower()).strip('_')
+        namebase = re.sub(br'[\s\W_]+', b'_', title.lower()).strip(b'_')
         namebase = namebase[:75] # avoid too long name (issue5117)
         if namebase:
             try:
@@ -1394,7 +1394,7 @@
         diffopts = self.diffopts()
         with repo.wlock():
             heads = []
-            for hs in repo.branchmap().itervalues():
+            for hs in repo.branchmap().iterheads():
                 heads.extend(hs)
             if not heads:
                 heads = [nullid]
@@ -1700,8 +1700,7 @@
             # but we do it backwards to take advantage of manifest/changelog
             # caching against the next repo.status call
             mm, aa, dd = repo.status(patchparent, top)[:3]
-            changes = repo.changelog.read(top)
-            man = repo.manifestlog[changes[0]].read()
+            ctx = repo[top]
             aaa = aa[:]
             match1 = scmutil.match(repo[None], pats, opts)
             # in short mode, we only diff the files included in the
@@ -1778,13 +1777,12 @@
                         repo.dirstate.add(dst)
                     # remember the copies between patchparent and qtip
                     for dst in aaa:
-                        f = repo.file(dst)
-                        src = f.renamed(man[dst])
+                        src = ctx[dst].copysource()
                         if src:
-                            copies.setdefault(src[0], []).extend(
+                            copies.setdefault(src, []).extend(
                                 copies.get(dst, []))
                             if dst in a:
-                                copies[src[0]].append(dst)
+                                copies[src].append(dst)
                         # we can't copy a file created by the patch itself
                         if dst in copies:
                             del copies[dst]
@@ -1813,7 +1811,7 @@
                 for f in forget:
                     repo.dirstate.drop(f)
 
-                user = ph.user or changes[1]
+                user = ph.user or ctx.user()
 
                 oldphase = repo[top].phase()
 
@@ -1942,7 +1940,7 @@
                 self.ui.write(patchname, label='qseries.' + state)
             self.ui.write('\n')
 
-        applied = set([p.name for p in self.applied])
+        applied = {p.name for p in self.applied}
         if length is None:
             length = len(self.series) - start
         if not missing:
@@ -3521,7 +3519,7 @@
             if self.mq.applied and self.mq.checkapplied and not force:
                 parents = self.dirstate.parents()
                 patches = [s.node for s in self.mq.applied]
-                if parents[0] in patches or parents[1] in patches:
+                if any(p in patches for p in parents):
                     raise error.Abort(errmsg)
 
         def commit(self, text="", user=None, date=None, match=None,
@@ -3660,7 +3658,7 @@
     """Changesets managed by MQ.
     """
     revsetlang.getargs(x, 0, 0, _("mq takes no arguments"))
-    applied = set([repo[r.node].rev() for r in repo.mq.applied])
+    applied = {repo[r.node].rev() for r in repo.mq.applied}
     return smartset.baseset([r for r in subset if r in applied])
 
 # tell hggettext to extract docstrings from these functions:
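
The ``makepatchname`` hunk above is a representative Python 3 regex
fix: a bytes subject needs a bytes pattern, and the raw ``br''`` prefix
also stops ``\s``/``\W`` from raising invalid-escape warnings. For
example::

    import re

    title = b'My Patch: v2!'
    namebase = re.sub(br'[\s\W_]+', b'_', title.lower()).strip(b'_')
    # namebase == b'my_patch_v2'
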
--- a/hgext/narrow/narrowcommands.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/narrow/narrowcommands.py	Wed Apr 17 13:41:18 2019 -0400
@@ -278,9 +278,9 @@
             p1, p2 = ds.p1(), ds.p2()
             with ds.parentchange():
                 ds.setparents(node.nullid, node.nullid)
-            with wrappedextraprepare,\
-                 repo.ui.configoverride(overrides, 'widen'):
-                exchange.pull(repo, remote, heads=common)
+            with wrappedextraprepare:
+                with repo.ui.configoverride(overrides, 'widen'):
+                    exchange.pull(repo, remote, heads=common)
             with ds.parentchange():
                 ds.setparents(p1, p2)
         else:
@@ -296,11 +296,11 @@
                     'ellipses': False,
                 }).result()
 
-            with repo.transaction('widening') as tr,\
-                 repo.ui.configoverride(overrides, 'widen'):
-                tgetter = lambda: tr
-                bundle2.processbundle(repo, bundle,
-                        transactiongetter=tgetter)
+            with repo.transaction('widening') as tr:
+                with repo.ui.configoverride(overrides, 'widen'):
+                    tgetter = lambda: tr
+                    bundle2.processbundle(repo, bundle,
+                            transactiongetter=tgetter)
 
         with repo.transaction('widening'):
             repo.setnewnarrowpats()
@@ -345,10 +345,14 @@
     and replaced by the new ones specified to --addinclude and --addexclude.
     If --clear is specified without any further options, the narrowspec will be
     empty and will not match any files.
+
+    --import-rules accepts a path to a file containing rules, allowing you to
+    add --addinclude and --addexclude rules in bulk. Like the other include and
+    exclude switches, the changes are applied immediately.
     """
     opts = pycompat.byteskwargs(opts)
     if repository.NARROW_REQUIREMENT not in repo.requirements:
-        raise error.Abort(_('the narrow command is only supported on '
+        raise error.Abort(_('the tracked command is only supported on '
                             'repositories cloned with --narrow'))
 
     # Before supporting, decide whether "hg tracked --clear" should mean
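
The new docstring above documents ``--import-rules`` but does not show
the file syntax. Assuming the file follows Mercurial's narrow/sparse
config style (an assumption, not confirmed by this hunk), a rules file
might look like::

    [include]
    path:src
    path:docs
    [exclude]
    path:src/generated

Each pattern would then take effect as if passed via --addinclude or
--addexclude on the command line.
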
--- a/hgext/notify.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/notify.py	Wed Apr 17 13:41:18 2019 -0400
@@ -367,8 +367,12 @@
             raise error.Abort(inst)
 
         # store sender and subject
-        sender = encoding.strtolocal(msg[r'From'])
-        subject = encoding.strtolocal(msg[r'Subject'])
+        sender = msg[r'From']
+        subject = msg[r'Subject']
+        if sender is not None:
+            sender = encoding.strtolocal(sender)
+        if subject is not None:
+            subject = encoding.strtolocal(subject)
         del msg[r'From'], msg[r'Subject']
 
         if not msg.is_multipart():
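
The notify hunk guards the header lookups because the email API returns
``None`` for an absent header, and ``encoding.strtolocal`` cannot take
``None``. A stdlib-only illustration (``encode`` stands in for
``strtolocal``)::

    import email

    msg = email.message_from_string('Subject: hi\n\nbody')
    sender = msg['From']            # absent header -> None, not ''
    if sender is not None:
        sender = sender.encode('utf-8')
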
--- a/hgext/phabricator.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/phabricator.py	Wed Apr 17 13:41:18 2019 -0400
@@ -60,6 +60,7 @@
     parser,
     patch,
     phases,
+    pycompat,
     registrar,
     scmutil,
     smartset,
@@ -127,7 +128,7 @@
     fullflags = flags + _VCR_FLAGS
     def decorate(fn):
         def inner(*args, **kwargs):
-            cassette = kwargs.pop(r'test_vcr', None)
+            cassette = pycompat.fsdecode(kwargs.pop(r'test_vcr', None))
             if cassette:
                 import hgdemandimport
                 with hgdemandimport.deactivated():
@@ -136,8 +137,9 @@
                     vcr = vcrmod.VCR(
                         serializer=r'json',
                         custom_patches=[
-                            (urlmod, 'httpconnection', stubs.VCRHTTPConnection),
-                            (urlmod, 'httpsconnection',
+                            (urlmod, r'httpconnection',
+                             stubs.VCRHTTPConnection),
+                            (urlmod, r'httpsconnection',
                              stubs.VCRHTTPSConnection),
                         ])
                     with vcr.use_cassette(cassette):
@@ -159,7 +161,8 @@
     def process(prefix, obj):
         if isinstance(obj, bool):
             obj = {True: b'true', False: b'false'}[obj]  # Python -> PHP form
-        items = {list: enumerate, dict: lambda x: x.items()}.get(type(obj))
+        lister = lambda l: [(b'%d' % k, v) for k, v in enumerate(l)]
+        items = {list: lister, dict: lambda x: x.items()}.get(type(obj))
         if items is None:
             flatparams[prefix] = obj
         else:
@@ -202,7 +205,7 @@
     """call Conduit API, params is a dict. return json.loads result, or None"""
     host, token = readurltoken(repo)
     url, authinfo = util.url(b'/'.join([host, b'api', name])).authinfo()
-    repo.ui.debug(b'Conduit Call: %s %s\n' % (url, params))
+    repo.ui.debug(b'Conduit Call: %s %s\n' % (url, pycompat.byterepr(params)))
     params = params.copy()
     params[b'api.token'] = token
     data = urlencodenested(params)
@@ -215,16 +218,20 @@
         body = sout.read()
     else:
         urlopener = urlmod.opener(repo.ui, authinfo)
-        request = util.urlreq.request(url, data=data)
+        request = util.urlreq.request(pycompat.strurl(url), data=data)
         with contextlib.closing(urlopener.open(request)) as rsp:
             body = rsp.read()
     repo.ui.debug(b'Conduit Response: %s\n' % body)
-    parsed = json.loads(body)
-    if parsed.get(r'error_code'):
+    parsed = pycompat.rapply(
+        lambda x: encoding.unitolocal(x) if isinstance(x, pycompat.unicode)
+        else x,
+        json.loads(body)
+    )
+    if parsed.get(b'error_code'):
         msg = (_(b'Conduit Error (%s): %s')
-               % (parsed[r'error_code'], parsed[r'error_info']))
+               % (parsed[b'error_code'], parsed[b'error_info']))
         raise error.Abort(msg)
-    return parsed[r'result']
+    return parsed[b'result']
 
 @vcrcommand(b'debugcallconduit', [], _(b'METHOD'))
 def debugcallconduit(ui, repo, name):
@@ -233,10 +240,20 @@
     Call parameters are read from stdin as a JSON blob. Result will be written
     to stdout as a JSON blob.
     """
-    params = json.loads(ui.fin.read())
-    result = callconduit(repo, name, params)
-    s = json.dumps(result, sort_keys=True, indent=2, separators=(b',', b': '))
-    ui.write(b'%s\n' % s)
+    # json.loads only accepts bytes from 3.6+
+    rawparams = encoding.unifromlocal(ui.fin.read())
+    # json.loads only returns unicode strings
+    params = pycompat.rapply(lambda x:
+        encoding.unitolocal(x) if isinstance(x, pycompat.unicode) else x,
+        json.loads(rawparams)
+    )
+    # json.dumps only accepts unicode strings
+    result = pycompat.rapply(lambda x:
+        encoding.unifromlocal(x) if isinstance(x, bytes) else x,
+        callconduit(repo, name, params)
+    )
+    s = json.dumps(result, sort_keys=True, indent=2, separators=(u',', u': '))
+    ui.write(b'%s\n' % encoding.unitolocal(s))
 
 def getrepophid(repo):
     """given callsign, return repository PHID or None"""
@@ -249,15 +266,15 @@
         return None
     query = callconduit(repo, b'diffusion.repository.search',
                         {b'constraints': {b'callsigns': [callsign]}})
-    if len(query[r'data']) == 0:
+    if len(query[b'data']) == 0:
         return None
-    repophid = encoding.strtolocal(query[r'data'][0][r'phid'])
+    repophid = query[b'data'][0][b'phid']
     repo.ui.setconfig(b'phabricator', b'repophid', repophid)
     return repophid
 
-_differentialrevisiontagre = re.compile(b'\AD([1-9][0-9]*)\Z')
+_differentialrevisiontagre = re.compile(br'\AD([1-9][0-9]*)\Z')
 _differentialrevisiondescre = re.compile(
-    b'^Differential Revision:\s*(?P<url>(?:.*)D(?P<id>[1-9][0-9]*))$', re.M)
+    br'^Differential Revision:\s*(?P<url>(?:.*)D(?P<id>[1-9][0-9]*))$', re.M)
 
 def getoldnodedrevmap(repo, nodelist):
     """find previous nodes that have been sent to Phabricator
@@ -277,7 +294,6 @@
     The ``old node``, if not None, is guaranteed to be the last diff of the
     corresponding Differential Revision, and to exist in the repo.
     """
-    url, token = readurltoken(repo)
     unfi = repo.unfiltered()
     nodemap = unfi.changelog.nodemap
 
@@ -298,7 +314,7 @@
         # Check commit message
         m = _differentialrevisiondescre.search(ctx.description())
         if m:
-            toconfirm[node] = (1, set(precnodes), int(m.group(b'id')))
+            toconfirm[node] = (1, set(precnodes), int(m.group(r'id')))
 
     # Double check that the tags are genuine by collecting all old nodes from
     # Phabricator, and expecting the precursors to overlap with them.
@@ -306,11 +322,11 @@
         drevs = [drev for force, precs, drev in toconfirm.values()]
         alldiffs = callconduit(unfi, b'differential.querydiffs',
                                {b'revisionIDs': drevs})
-        getnode = lambda d: bin(encoding.unitolocal(
-            getdiffmeta(d).get(r'node', b''))) or None
+        getnode = lambda d: bin(
+            getdiffmeta(d).get(b'node', b'')) or None
         for newnode, (force, precset, drev) in toconfirm.items():
             diffs = [d for d in alldiffs.values()
-                     if int(d[r'revisionID']) == drev]
+                     if int(d[b'revisionID']) == drev]
 
             # "precursors" as known by Phabricator
             phprecset = set(getnode(d) for d in diffs)
@@ -329,7 +345,7 @@
             # exists in the repo
             oldnode = lastdiff = None
             if diffs:
-                lastdiff = max(diffs, key=lambda d: int(d[r'id']))
+                lastdiff = max(diffs, key=lambda d: int(d[b'id']))
                 oldnode = getnode(lastdiff)
                 if oldnode and oldnode not in nodemap:
                     oldnode = None
@@ -362,25 +378,26 @@
 def writediffproperties(ctx, diff):
     """write metadata to diff so patches could be applied losslessly"""
     params = {
-        b'diff_id': diff[r'id'],
+        b'diff_id': diff[b'id'],
         b'name': b'hg:meta',
         b'data': json.dumps({
-            b'user': ctx.user(),
-            b'date': b'%d %d' % ctx.date(),
-            b'node': ctx.hex(),
-            b'parent': ctx.p1().hex(),
+            u'user': encoding.unifromlocal(ctx.user()),
+            u'date': u'{:.0f} {}'.format(*ctx.date()),
+            u'node': encoding.unifromlocal(ctx.hex()),
+            u'parent': encoding.unifromlocal(ctx.p1().hex()),
         }),
     }
     callconduit(ctx.repo(), b'differential.setdiffproperty', params)
 
     params = {
-        b'diff_id': diff[r'id'],
+        b'diff_id': diff[b'id'],
         b'name': b'local:commits',
         b'data': json.dumps({
-            ctx.hex(): {
-                b'author': stringutil.person(ctx.user()),
-                b'authorEmail': stringutil.email(ctx.user()),
-                b'time': ctx.date()[0],
+            encoding.unifromlocal(ctx.hex()): {
+                u'author': encoding.unifromlocal(stringutil.person(ctx.user())),
+                u'authorEmail': encoding.unifromlocal(
+                    stringutil.email(ctx.user())),
+                u'time': u'{:.0f}'.format(ctx.date()[0]),
             },
         }),
     }
@@ -409,7 +426,7 @@
     transactions = []
     if neednewdiff:
         diff = creatediff(ctx)
-        transactions.append({b'type': b'update', b'value': diff[r'phid']})
+        transactions.append({b'type': b'update', b'value': diff[b'phid']})
     else:
         # Even if we don't need to upload a new diff because the patch content
         # does not change, we might still need to update its metadata so
@@ -423,7 +440,7 @@
     # existing revision (revid is not None) since that introduces visible
     # churn (someone edited "Summary" twice) on the web page.
     if parentrevid and revid is None:
-        summary = b'Depends on D%s' % parentrevid
+        summary = b'Depends on D%d' % parentrevid
         transactions += [{b'type': b'summary', b'value': summary},
                          {b'type': b'summary', b'value': b' '}]
 
@@ -434,7 +451,7 @@
     desc = ctx.description()
     info = callconduit(repo, b'differential.parsecommitmessage',
                        {b'corpus': desc})
-    for k, v in info[r'fields'].items():
+    for k, v in info[b'fields'].items():
         if k in [b'title', b'summary', b'testPlan']:
             transactions.append({b'type': k, b'value': v})
 
@@ -451,17 +468,18 @@
 
 def userphids(repo, names):
     """convert user names to PHIDs"""
+    names = [name.lower() for name in names]
     query = {b'constraints': {b'usernames': names}}
     result = callconduit(repo, b'user.search', query)
     # A username that is not found is not an error of the API, so check
     # whether we have missed any names here.
-    data = result[r'data']
-    resolved = set(entry[r'fields'][r'username'] for entry in data)
+    data = result[b'data']
+    resolved = set(entry[b'fields'][b'username'].lower() for entry in data)
     unresolved = set(names) - resolved
     if unresolved:
         raise error.Abort(_(b'unknown username: %s')
                           % b' '.join(sorted(unresolved)))
-    return [entry[r'phid'] for entry in data]
+    return [entry[b'phid'] for entry in data]
 
 @vcrcommand(b'phabsend',
          [(b'r', b'rev', [], _(b'revisions to send'), _(b'REV')),
@@ -497,6 +515,7 @@
     phabsend will check obsstore and the above association to decide whether to
     update an existing Differential Revision, or create a new one.
     """
+    opts = pycompat.byteskwargs(opts)
     revs = list(revs) + opts.get(b'rev', [])
     revs = scmutil.revrange(repo, revs)
 
@@ -538,7 +557,7 @@
             revision, diff = createdifferentialrevision(
                 ctx, revid, lastrevid, oldnode, olddiff, actions)
             diffmap[ctx.node()] = diff
-            newrevid = int(revision[r'object'][r'id'])
+            newrevid = int(revision[b'object'][b'id'])
             if revid:
                 action = b'updated'
             else:
@@ -547,7 +566,7 @@
             # Create a local tag to note the association, if commit message
             # does not have it already
             m = _differentialrevisiondescre.search(ctx.description())
-            if not m or int(m.group(b'id')) != newrevid:
+            if not m or int(m.group(r'id')) != newrevid:
                 tagname = b'D%d' % newrevid
                 tags.tag(repo, tagname, ctx.node(), message=None, user=None,
                          date=None, local=True)
@@ -562,7 +581,7 @@
              b'skipped': _(b'skipped'),
              b'updated': _(b'updated')}[action],
             b'phabricator.action.%s' % action)
-        drevdesc = ui.label(b'D%s' % newrevid, b'phabricator.drev')
+        drevdesc = ui.label(b'D%d' % newrevid, b'phabricator.drev')
         nodedesc = ui.label(bytes(ctx), b'phabricator.node')
         desc = ui.label(ctx.description().split(b'\n')[0], b'phabricator.desc')
         ui.write(_(b'%s - %s - %s: %s\n') % (drevdesc, actiondesc, nodedesc,
@@ -580,9 +599,8 @@
             for i, rev in enumerate(revs):
                 old = unfi[rev]
                 drevid = drevids[i]
-                drev = [d for d in drevs if int(d[r'id']) == drevid][0]
+                drev = [d for d in drevs if int(d[b'id']) == drevid][0]
                 newdesc = getdescfromdrev(drev)
-                newdesc = encoding.unitolocal(newdesc)
                 # Make sure commit message contain "Differential Revision"
                 if old.description() != newdesc:
                     if old.phase() == phases.public:
@@ -613,8 +631,8 @@
 
 # Map from "hg:meta" keys to header understood by "hg import". The order is
 # consistent with "hg export" output.
-_metanamemap = util.sortdict([(r'user', b'User'), (r'date', b'Date'),
-                              (r'node', b'Node ID'), (r'parent', b'Parent ')])
+_metanamemap = util.sortdict([(b'user', b'User'), (b'date', b'Date'),
+                              (b'node', b'Node ID'), (b'parent', b'Parent ')])
 
 def _confirmbeforesend(repo, revs, oldmap):
     url, token = readurltoken(repo)
@@ -644,7 +662,7 @@
 
 def _getstatusname(drev):
     """get normalized status name from a Differential Revision"""
-    return drev[r'statusName'].replace(b' ', b'').lower()
+    return drev[b'statusName'].replace(b' ', b'').lower()
 
 # Small language to specify differential revisions. Support symbols: (), :X,
 # +, and -.
@@ -668,7 +686,7 @@
     length = len(text)
     while pos < length:
         symbol = b''.join(itertools.takewhile(lambda ch: ch not in special,
-                                              view[pos:]))
+                                              pycompat.iterbytestr(view[pos:])))
         if symbol:
             yield (b'symbol', symbol, pos)
             pos += len(symbol)
@@ -756,14 +774,14 @@
     """
     def fetch(params):
         """params -> single drev or None"""
-        key = (params.get(r'ids') or params.get(r'phids') or [None])[0]
+        key = (params.get(b'ids') or params.get(b'phids') or [None])[0]
         if key in prefetched:
             return prefetched[key]
         drevs = callconduit(repo, b'differential.query', params)
         # Fill prefetched with the result
         for drev in drevs:
-            prefetched[drev[r'phid']] = drev
-            prefetched[int(drev[r'id'])] = drev
+            prefetched[drev[b'phid']] = drev
+            prefetched[int(drev[b'id'])] = drev
         if key not in prefetched:
             raise error.Abort(_(b'cannot get Differential Revision %r')
                               % params)
@@ -773,16 +791,16 @@
         """given a top, get a stack from the bottom, [id] -> [id]"""
         visited = set()
         result = []
-        queue = [{r'ids': [i]} for i in topdrevids]
+        queue = [{b'ids': [i]} for i in topdrevids]
         while queue:
             params = queue.pop()
             drev = fetch(params)
-            if drev[r'id'] in visited:
+            if drev[b'id'] in visited:
                 continue
-            visited.add(drev[r'id'])
-            result.append(int(drev[r'id']))
-            auxiliary = drev.get(r'auxiliary', {})
-            depends = auxiliary.get(r'phabricator:depends-on', [])
+            visited.add(drev[b'id'])
+            result.append(int(drev[b'id']))
+            auxiliary = drev.get(b'auxiliary', {})
+            depends = auxiliary.get(b'phabricator:depends-on', [])
             for phid in depends:
                 queue.append({b'phids': [phid]})
         result.reverse()
@@ -802,7 +820,7 @@
     for r in ancestordrevs:
         tofetch.update(range(max(1, r - batchsize), r + 1))
     if drevs:
-        fetch({r'ids': list(tofetch)})
+        fetch({b'ids': list(tofetch)})
     validids = sorted(set(getstack(list(ancestordrevs))) | set(drevs))
 
     # Walk through the tree, return smartsets
@@ -836,12 +854,12 @@
     This is similar to the differential.getcommitmessage API, but we only
     care about a limited set of fields: title, summary, test plan, and URL.
     """
-    title = drev[r'title']
-    summary = drev[r'summary'].rstrip()
-    testplan = drev[r'testPlan'].rstrip()
+    title = drev[b'title']
+    summary = drev[b'summary'].rstrip()
+    testplan = drev[b'testPlan'].rstrip()
     if testplan:
         testplan = b'Test Plan:\n%s' % testplan
-    uri = b'Differential Revision: %s' % drev[r'uri']
+    uri = b'Differential Revision: %s' % drev[b'uri']
     return b'\n\n'.join(filter(None, [title, summary, testplan, uri]))
 
 def getdiffmeta(diff):
@@ -881,17 +899,17 @@
     Note: metadata extracted from "local:commits" will lose time zone
     information.
     """
-    props = diff.get(r'properties') or {}
-    meta = props.get(r'hg:meta')
-    if not meta and props.get(r'local:commits'):
-        commit = sorted(props[r'local:commits'].values())[0]
+    props = diff.get(b'properties') or {}
+    meta = props.get(b'hg:meta')
+    if not meta and props.get(b'local:commits'):
+        commit = sorted(props[b'local:commits'].values())[0]
         meta = {
-            r'date': r'%d 0' % commit[r'time'],
-            r'node': commit[r'rev'],
-            r'user': r'%s <%s>' % (commit[r'author'], commit[r'authorEmail']),
+            b'date': b'%d 0' % commit[b'time'],
+            b'node': commit[b'rev'],
+            b'user': b'%s <%s>' % (commit[b'author'], commit[b'authorEmail']),
         }
-        if len(commit.get(r'parents', ())) >= 1:
-            meta[r'parent'] = commit[r'parents'][0]
+        if len(commit.get(b'parents', ())) >= 1:
+            meta[b'parent'] = commit[b'parents'][0]
     return meta or {}
 
 def readpatch(repo, drevs, write):
@@ -901,14 +919,14 @@
     "differential.query".
     """
     # Prefetch hg:meta property for all diffs
-    diffids = sorted(set(max(int(v) for v in drev[r'diffs']) for drev in drevs))
+    diffids = sorted(set(max(int(v) for v in drev[b'diffs']) for drev in drevs))
     diffs = callconduit(repo, b'differential.querydiffs', {b'ids': diffids})
 
     # Generate patch for each drev
     for drev in drevs:
-        repo.ui.note(_(b'reading D%s\n') % drev[r'id'])
+        repo.ui.note(_(b'reading D%s\n') % drev[b'id'])
 
-        diffid = max(int(v) for v in drev[r'diffs'])
+        diffid = max(int(v) for v in drev[b'diffs'])
         body = callconduit(repo, b'differential.getrawdiff',
                            {b'diffID': diffid})
         desc = getdescfromdrev(drev)
@@ -917,13 +935,13 @@
         # Try to preserve metadata from hg:meta property. Write hg patch
         # headers that can be read by the "import" command. See patchheadermap
         # and extract in mercurial/patch.py for supported headers.
-        meta = getdiffmeta(diffs[str(diffid)])
+        meta = getdiffmeta(diffs[b'%d' % diffid])
         for k in _metanamemap.keys():
             if k in meta:
                 header += b'# %s %s\n' % (_metanamemap[k], meta[k])
 
         content = b'%s%s\n%s' % (header, desc, body)
-        write(encoding.unitolocal(content))
+        write(content)
 
 @vcrcommand(b'phabread',
          [(b'', b'stack', False, _(b'read dependencies'))],
@@ -948,6 +966,7 @@
     If --stack is given, follow dependencies information and read all patches.
     It is equivalent to the ``:`` operator.
     """
+    opts = pycompat.byteskwargs(opts)
     if opts.get(b'stack'):
         spec = b':(%s)' % spec
     drevs = querydrev(repo, spec)
@@ -966,6 +985,7 @@
 
     DREVSPEC selects revisions. See :hg:`help phabread` for its usage.
     """
+    opts = pycompat.byteskwargs(opts)
     flags = [n for n in b'accept reject abandon reclaim'.split() if opts.get(n)]
     if len(flags) > 1:
         raise error.Abort(_(b'%s cannot be used together') % b', '.join(flags))
@@ -979,7 +999,7 @@
         if i + 1 == len(drevs) and opts.get(b'comment'):
             actions.append({b'type': b'comment', b'value': opts[b'comment']})
         if actions:
-            params = {b'objectIdentifier': drev[r'phid'],
+            params = {b'objectIdentifier': drev[b'phid'],
                       b'transactions': actions}
             callconduit(repo, b'differential.revision.edit', params)
 
@@ -994,8 +1014,8 @@
     m = _differentialrevisiondescre.search(ctx.description())
     if m:
         return templateutil.hybriddict({
-            b'url': m.group(b'url'),
-            b'id': b"D{}".format(m.group(b'id')),
+            b'url': m.group(r'url'),
+            b'id': b"D%s" % m.group(r'id'),
         })
     else:
         tags = ctx.repo().nodetags(ctx.node())
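
Much of the phabricator.py churn above comes from decoding Conduit's
JSON responses to local byte strings in one place with
``pycompat.rapply``, so the rest of the extension can index responses
with ``b''`` keys. A plain-Python sketch of that recursive transform
(UTF-8 here is a simplification; Mercurial converts to the local
encoding)::

    import json

    def unitobytes(obj):
        # recursively encode unicode strings inside dicts/lists/tuples
        if isinstance(obj, str):
            return obj.encode('utf-8')
        if isinstance(obj, dict):
            return {unitobytes(k): unitobytes(v) for k, v in obj.items()}
        if isinstance(obj, (list, tuple)):
            return type(obj)(unitobytes(x) for x in obj)
        return obj

    parsed = unitobytes(json.loads('{"result": "ok", "error_code": null}'))
    # parsed == {b'result': b'ok', b'error_code': None}
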
--- a/hgext/rebase.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/rebase.py	Wed Apr 17 13:41:18 2019 -0400
@@ -949,6 +949,9 @@
         except error.InMemoryMergeConflictsError:
             ui.status(_('hit a merge conflict\n'))
             return 1
+        except error.Abort:
+            needsabort = False
+            raise
         else:
             if confirm:
                 ui.status(_('rebase completed successfully\n'))
@@ -1278,7 +1281,7 @@
     return stats
 
 def adjustdest(repo, rev, destmap, state, skipped):
-    """adjust rebase destination given the current rebase state
+    r"""adjust rebase destination given the current rebase state
 
     rev is what is being rebased. Return a list of two revs, which are the
     adjusted destinations for rev's p1 and p2, respectively. If a parent is
@@ -1804,7 +1807,6 @@
 
 def pullrebase(orig, ui, repo, *args, **opts):
     'Call rebase after pull if the latter has been invoked with --rebase'
-    ret = None
     if opts.get(r'rebase'):
         if ui.configbool('commands', 'rebase.requiredest'):
             msg = _('rebase destination required by configuration')
@@ -1879,8 +1881,8 @@
     obsolete successors.
     """
     obsoletenotrebased = {}
-    obsoletewithoutsuccessorindestination = set([])
-    obsoleteextinctsuccessors = set([])
+    obsoletewithoutsuccessorindestination = set()
+    obsoleteextinctsuccessors = set()
 
     assert repo.filtername is None
     cl = repo.changelog
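
The new ``except error.Abort`` arm in rebase clears the automatic-abort
flag before re-raising, so an explicit user abort is not followed by
rebase's own cleanup-abort as well. A sketch of the pattern, with
hypothetical names::

    class UserAbort(Exception):
        pass

    def runguarded(body, autoabort):
        needsabort = True
        try:
            result = body()
            needsabort = False    # finished normally: nothing to undo
            return result
        except UserAbort:
            needsabort = False    # the user already chose to stop
            raise
        finally:
            if needsabort:
                autoabort()       # unexpected failure: roll back
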
--- a/hgext/record.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/record.py	Wed Apr 17 13:41:18 2019 -0400
@@ -119,6 +119,7 @@
 
     overrides = {('experimental', 'crecord'): False}
     with ui.configoverride(overrides, 'record'):
+        cmdutil.checkunfinished(repo)
         cmdutil.dorecord(ui, repo, committomq, cmdsuggest, False,
                          cmdutil.recordfilter, *pats, **opts)
 
@@ -134,12 +135,12 @@
     except KeyError:
         return
 
-    cmdtable["qrecord"] = \
-        (qrecord,
-         # same options as qnew, but copy them so we don't get
-         # -i/--interactive for qrecord and add white space diff options
-         mq.cmdtable['qnew'][1][:] + cmdutil.diffwsopts,
-         _('hg qrecord [OPTION]... PATCH [FILE]...'))
+    cmdtable["qrecord"] = (
+        qrecord,
+        # same options as qnew, but copy them so we don't get
+        # -i/--interactive for qrecord and add white space diff options
+        mq.cmdtable['qnew'][1][:] + cmdutil.diffwsopts,
+        _('hg qrecord [OPTION]... PATCH [FILE]...'))
 
     _wrapcmd('qnew', mq.cmdtable, qnew, _("interactively record a new patch"))
     _wrapcmd('qrefresh', mq.cmdtable, qrefresh,
--- a/hgext/releasenotes.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/releasenotes.py	Wed Apr 17 13:41:18 2019 -0400
@@ -55,7 +55,7 @@
     ('api', _('API Changes')),
 ]
 
-RE_DIRECTIVE = re.compile('^\.\. ([a-zA-Z0-9_]+)::\s*([^$]+)?$')
+RE_DIRECTIVE = re.compile(br'^\.\. ([a-zA-Z0-9_]+)::\s*([^$]+)?$')
 RE_ISSUE = br'\bissue ?[0-9]{4,6}(?![0-9])\b'
 
 BULLET_SECTION = _('Other Changes')
@@ -107,8 +107,9 @@
                       "releasenotes is disabled\n"))
 
         for section in other:
-            existingnotes = converttitled(self.titledforsection(section)) + \
-                convertnontitled(self.nontitledforsection(section))
+            existingnotes = (
+                converttitled(self.titledforsection(section)) +
+                convertnontitled(self.nontitledforsection(section)))
             for title, paragraphs in other.titledforsection(section):
                 if self.hastitledinsection(section, title):
                     # TODO prompt for resolution if different and running in
--- a/hgext/remotefilelog/__init__.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/remotefilelog/__init__.py	Wed Apr 17 13:41:18 2019 -0400
@@ -159,7 +159,6 @@
     scmutil,
     smartset,
     streamclone,
-    templatekw,
     util,
 )
 from . import (
@@ -479,9 +478,10 @@
     def findrenames(orig, repo, matcher, added, removed, *args, **kwargs):
         if isenabled(repo):
             files = []
-            parentctx = repo['.']
+            pmf = repo['.'].manifest()
             for f in removed:
-                files.append((f, hex(parentctx.filenode(f))))
+                if f in pmf:
+                    files.append((f, hex(pmf[f])))
             # batch fetch the needed files from the server
             repo.fileservice.prefetch(files)
         return orig(repo, matcher, added, removed, *args, **kwargs)
@@ -497,20 +497,20 @@
 
             sparsematch1 = repo.maybesparsematch(c1.rev())
             if sparsematch1:
-                sparseu1 = []
+                sparseu1 = set()
                 for f in u1:
                     if sparsematch1(f):
                         files.append((f, hex(m1[f])))
-                        sparseu1.append(f)
+                        sparseu1.add(f)
                 u1 = sparseu1
 
             sparsematch2 = repo.maybesparsematch(c2.rev())
             if sparsematch2:
-                sparseu2 = []
+                sparseu2 = set()
                 for f in u2:
                     if sparsematch2(f):
                         files.append((f, hex(m2[f])))
-                        sparseu2.append(f)
+                        sparseu2.add(f)
                 u2 = sparseu2
 
             # batch fetch the needed files from the server
@@ -520,7 +520,7 @@
 
     # prefetch files before pathcopies check
     def computeforwardmissing(orig, a, b, match=None):
-        missing = list(orig(a, b, match=match))
+        missing = orig(a, b, match=match)
         repo = a._repo
         if isenabled(repo):
             mb = b.manifest()
@@ -528,11 +528,11 @@
             files = []
             sparsematch = repo.maybesparsematch(b.rev())
             if sparsematch:
-                sparsemissing = []
+                sparsemissing = set()
                 for f in missing:
                     if sparsematch(f):
                         files.append((f, hex(mb[f])))
-                        sparsemissing.append(f)
+                        sparsemissing.add(f)
                 missing = sparsemissing
 
             # batch fetch the needed files from the server
@@ -557,7 +557,7 @@
     extensions.wrapfunction(dispatch, 'runcommand', runcommand)
 
     # disappointing hacks below
-    templatekw.getrenamedfn = getrenamedfn
+    scmutil.getrenamedfn = getrenamedfn
     extensions.wrapfunction(revset, 'filelog', filelogrevset)
     revset.symbols['filelog'] = revset.filelog
     extensions.wrapfunction(cmdutil, 'walkfilerevs', walkfilerevs)
@@ -805,7 +805,7 @@
         return
 
     reposfile = open(repospath, 'rb')
-    repos = set([r[:-1] for r in reposfile.readlines()])
+    repos = {r[:-1] for r in reposfile.readlines()}
     reposfile.close()
 
     # build list of useful files
@@ -902,8 +902,7 @@
         # If this is a non-follow log without any revs specified, recommend that
         # the user add -f to speed it up.
         if not follow and not revs:
-            match, pats = scmutil.matchandpats(repo['.'], pats,
-                                               pycompat.byteskwargs(opts))
+            match = scmutil.match(repo['.'], pats, pycompat.byteskwargs(opts))
             isfile = not match.anypats()
             if isfile:
                 for file in match.files():
--- a/hgext/remotefilelog/basepack.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/remotefilelog/basepack.py	Wed Apr 17 13:41:18 2019 -0400
@@ -270,9 +270,9 @@
                 # only affect this instance
                 self.VERSION = version
             elif self.VERSION != version:
-                raise RuntimeError('inconsistent version: %s' % version)
+                raise RuntimeError('inconsistent version: %d' % version)
         else:
-            raise RuntimeError('unsupported version: %s' % version)
+            raise RuntimeError('unsupported version: %d' % version)
 
 class basepack(versionmixin):
     # The maximum amount we should read via mmap before remmaping so the old
@@ -457,8 +457,6 @@
             pass
 
     def writeindex(self):
-        rawindex = ''
-
         largefanout = len(self.entries) > SMALLFANOUTCUTOFF
         if largefanout:
             params = indexparams(LARGEFANOUTPREFIX, self.VERSION)
--- a/hgext/remotefilelog/basestore.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/remotefilelog/basestore.py	Wed Apr 17 13:41:18 2019 -0400
@@ -410,16 +410,18 @@
         def wrapped(self, *args, **kwargs):
             retrylog = self.retrylog or noop
             funcname = fn.__name__
-            for i in pycompat.xrange(self.numattempts):
+            i = 0
+            while i < self.numattempts:
                 if i > 0:
                     retrylog('re-attempting (n=%d) %s\n' % (i, funcname))
                     self.markforrefresh()
+                i += 1
                 try:
                     return fn(self, *args, **kwargs)
                 except KeyError:
-                    pass
-            # retries exhausted
-            retrylog('retries exhausted in %s, raising KeyError\n' %
-                     pycompat.sysbytes(funcname))
-            raise
+                    if i == self.numattempts:
+                        # retries exhausted
+                        retrylog('retries exhausted in %s, raising KeyError\n' %
+                                 pycompat.sysbytes(funcname))
+                        raise
         return wrapped
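
The basestore rewrite fixes a latent bug: the old bare ``raise`` sat
after the ``for`` loop, outside any ``except`` clause, so exhausting
the retries would fail with "no active exception to re-raise" rather
than surfacing the ``KeyError``. The corrected shape, as a stand-alone
decorator sketch (a hypothetical helper, not a Mercurial API)::

    def retryable(numattempts):
        def decorate(fn):
            def wrapped(*args, **kwargs):
                i = 0
                while i < numattempts:
                    i += 1
                    try:
                        return fn(*args, **kwargs)
                    except KeyError:
                        if i == numattempts:
                            raise    # re-raise inside the handler
            return wrapped
        return decorate
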
--- a/hgext/remotefilelog/datapack.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/remotefilelog/datapack.py	Wed Apr 17 13:41:18 2019 -0400
@@ -242,7 +242,7 @@
             entry = index[end:end + entrylen]
         else:
             while start < end - entrylen:
-                mid = start  + (end - start) / 2
+                mid = start + (end - start) // 2
                 mid = mid - ((mid - params.indexstart) % entrylen)
                 midnode = index[mid:mid + NODELENGTH]
                 if midnode == node:
@@ -250,10 +250,8 @@
                     break
                 if node > midnode:
                     start = mid
-                    startnode = midnode
                 elif node < midnode:
                     end = mid
-                    endnode = midnode
             else:
                 return None
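
The midpoint fix above matters on Python 3, where ``/`` is true
division: a float ``mid`` would break the index slicing that follows.
``//`` restores integer arithmetic, and the next line snaps ``mid``
back onto an entry boundary. Sketch with hypothetical sizes::

    ENTRYLEN = 40      # assumed size of one index entry
    INDEXSTART = 8     # assumed offset of the first entry

    def alignedmid(start, end):
        mid = start + (end - start) // 2              # integer midpoint
        return mid - ((mid - INDEXSTART) % ENTRYLEN)  # entry-aligned
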
 
--- a/hgext/remotefilelog/debugcommands.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/remotefilelog/debugcommands.py	Wed Apr 17 13:41:18 2019 -0400
@@ -16,6 +16,7 @@
     error,
     filelog,
     node as nodemod,
+    pycompat,
     revlog,
 )
 from . import (
@@ -175,7 +176,6 @@
     return zlib.decompress(raw)
 
 def parsefileblob(path, decompress):
-    raw = None
     f = open(path, "rb")
     try:
         raw = f.read()
@@ -277,11 +277,11 @@
                 totalblobsize += blobsize
             else:
                 blobsize = "(missing)"
-            ui.write("%s  %s  %s%d\n" % (
+            ui.write("%s  %s  %s%s\n" % (
                 hashformatter(node),
                 hashformatter(deltabase),
                 ('%d' % deltalen).ljust(14),
-                blobsize))
+                pycompat.bytestr(blobsize)))
 
         if filename is not None:
             printtotals()
--- a/hgext/remotefilelog/fileserverclient.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/remotefilelog/fileserverclient.py	Wed Apr 17 13:41:18 2019 -0400
@@ -138,8 +138,8 @@
     def connect(self, cachecommand):
         if self.pipeo:
             raise error.Abort(_("cache connection already open"))
-        self.pipei, self.pipeo, self.pipee, self.subprocess = \
-            procutil.popen4(cachecommand)
+        self.pipei, self.pipeo, self.pipee, self.subprocess = (
+            procutil.popen4(cachecommand))
         self.connected = True
 
     def close(self):
@@ -544,7 +544,7 @@
                     if fetchwarning:
                         self.ui.warn(fetchwarning + '\n')
                 self.logstacktrace()
-            missingids = [(file, hex(id)) for file, id in missingids]
+            missingids = [(file, hex(id)) for file, id in sorted(missingids)]
             fetched += len(missingids)
             start = time.time()
             missingids = self.request(missingids)
--- a/hgext/remotefilelog/historypack.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/remotefilelog/historypack.py	Wed Apr 17 13:41:18 2019 -0400
@@ -259,10 +259,8 @@
                     return self._index[mid:mid + entrylen]
                 if node > midnode:
                     start = mid
-                    startnode = midnode
                 elif node < midnode:
                     end = mid
-                    endnode = midnode
         return None
 
     def markledger(self, ledger, options=None):
@@ -514,7 +512,6 @@
 
             fileindexentries.append(rawentry)
 
-        nodecountraw = ''
         nodecountraw = struct.pack('!Q', nodecount)
         return (''.join(fileindexentries) + nodecountraw +
                 ''.join(nodeindexentries))
--- a/hgext/remotefilelog/remotefilectx.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/remotefilelog/remotefilectx.py	Wed Apr 17 13:41:18 2019 -0400
@@ -15,7 +15,6 @@
     context,
     error,
     phases,
-    pycompat,
     util,
 )
 from . import shallowutil
@@ -39,11 +38,11 @@
 
     @propertycache
     def _changeid(self):
-        if '_changeid' in self.__dict__:
+        if r'_changeid' in self.__dict__:
             return self._changeid
-        elif '_changectx' in self.__dict__:
+        elif r'_changectx' in self.__dict__:
             return self._changectx.rev()
-        elif '_descendantrev' in self.__dict__:
+        elif r'_descendantrev' in self.__dict__:
             # this file context was created from a revision with a known
             # descendant, we can (lazily) correct for linkrev aliases
             linknode = self._adjustlinknode(self._path, self._filelog,
@@ -102,7 +101,7 @@
         """
         lkr = self.linkrev()
         attrs = vars(self)
-        noctx = not ('_changeid' in attrs or '_changectx' in attrs)
+        noctx = not (r'_changeid' in attrs or r'_changectx' in attrs)
         if noctx or self.rev() == lkr:
             return lkr
         linknode = self._adjustlinknode(self._path, self._filelog,
@@ -137,6 +136,10 @@
                 pass
         return renamed
 
+    def copysource(self):
+        copy = self.renamed()
+        return copy and copy[0]
+
     def ancestormap(self):
         if not self._ancestormap:
             self._ancestormap = self.filelog().ancestormap(self._filenode)
@@ -316,7 +319,7 @@
         finally:
             elapsed = time.time() - start
             repo.ui.log('linkrevfixup', logmsg + '\n', elapsed=elapsed * 1000,
-                        **pycompat.strkwargs(commonlogkwargs))
+                        **commonlogkwargs)
 
     def _verifylinknode(self, revs, linknode):
         """
@@ -452,8 +455,8 @@
 class remoteworkingfilectx(context.workingfilectx, remotefilectx):
     def __init__(self, repo, path, filelog=None, workingctx=None):
         self._ancestormap = None
-        return super(remoteworkingfilectx, self).__init__(repo, path,
-            filelog, workingctx)
+        super(remoteworkingfilectx, self).__init__(repo, path, filelog,
+                                                   workingctx)
 
     def parents(self):
         return remotefilectx.parents(self)
--- a/hgext/remotefilelog/remotefilelog.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/remotefilelog/remotefilelog.py	Wed Apr 17 13:41:18 2019 -0400
@@ -10,7 +10,12 @@
 import collections
 import os
 
-from mercurial.node import bin, nullid
+from mercurial.node import (
+    bin,
+    nullid,
+    wdirfilenodeids,
+    wdirid,
+)
 from mercurial.i18n import _
 from mercurial import (
     ancestor,
@@ -61,8 +66,6 @@
         return t[s + 2:]
 
     def add(self, text, meta, transaction, linknode, p1=None, p2=None):
-        hashtext = text
-
         # hash with the metadata, like in vanilla filelogs
         hashtext = shallowutil.createrevlogtext(text, meta.get('copy'),
                                                 meta.get('copyrev'))
@@ -308,6 +311,8 @@
         if len(node) != 20:
             raise error.LookupError(node, self.filename,
                                     _('invalid revision input'))
+        if node == wdirid or node in wdirfilenodeids:
+            raise error.WdirUnsupported
 
         store = self.repo.contentstore
         rawtext = store.get(self.filename, node)
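
The new guard mirrors what core filelogs do for the synthetic working
directory node: such ids can never exist in a content store, so the
lookup fails fast with ``error.WdirUnsupported`` instead of a confusing
store miss. The guard in isolation, reusing the names from the hunk::

    from mercurial import error
    from mercurial.node import wdirfilenodeids, wdirid

    def checkwdir(node):
        if node == wdirid or node in wdirfilenodeids:
            raise error.WdirUnsupported
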
--- a/hgext/remotefilelog/remotefilelogserver.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/remotefilelog/remotefilelogserver.py	Wed Apr 17 13:41:18 2019 -0400
@@ -54,7 +54,7 @@
                 elif cap.startswith("excludepattern="):
                     excludepattern = cap[len("excludepattern="):].split('\0')
 
-            m = match.always(repo.root, '')
+            m = match.always()
             if includepattern or excludepattern:
                 m = match.match(repo.root, '', None,
                     includepattern, excludepattern)
@@ -104,7 +104,7 @@
         oldnoflatmf = state.noflatmf
         try:
             state.shallowremote = True
-            state.match = match.always(repo.root, '')
+            state.match = match.always()
             state.noflatmf = other.get('noflatmanifest') == 'True'
             if includepattern or excludepattern:
                 state.match = match.match(repo.root, '', None,
--- a/hgext/remotefilelog/repack.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/remotefilelog/repack.py	Wed Apr 17 13:41:18 2019 -0400
@@ -154,9 +154,9 @@
 
     # Either an oversize index or datapack will trigger cleanup of the whole
     # pack:
-    oversized = set([os.path.splitext(path)[0] for path, ftype, stat in files
+    oversized = {os.path.splitext(path)[0] for path, ftype, stat in files
         if (stat.st_size > maxsize and (os.path.splitext(path)[1]
-                                        in VALIDEXTS))])
+                                        in VALIDEXTS))}
 
     for rootfname in oversized:
         rootpath = os.path.join(folder, rootfname)
@@ -338,7 +338,7 @@
     packer = repacker(repo, data, history, fullhistory, category,
                       gc=garbagecollect, isold=isold, options=options)
 
-    with datapack.mutabledatapack(repo.ui, packpath, version=2) as dpack:
+    with datapack.mutabledatapack(repo.ui, packpath) as dpack:
         with historypack.mutablehistorypack(repo.ui, packpath) as hpack:
             try:
                 packer.run(dpack, hpack)
@@ -601,7 +601,6 @@
                 # TODO: Optimize the deltachain fetching. Since we're
                 # iterating over the different versions of the file, we may
                 # be fetching the same deltachain over and over again.
-                meta = None
                 if deltabase != nullid:
                     deltaentry = self.data.getdelta(filename, node)
                     delta, deltabasename, origdeltabase, meta = deltaentry
--- a/hgext/remotefilelog/shallowbundle.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/remotefilelog/shallowbundle.py	Wed Apr 17 13:41:18 2019 -0400
@@ -162,7 +162,7 @@
                 repo.shallowmatch = match.match(repo.root, '', None,
                     includepattern, excludepattern)
             else:
-                repo.shallowmatch = match.always(repo.root, '')
+                repo.shallowmatch = match.always()
         return orig(repo, outgoing, version, source, *args, **kwargs)
     finally:
         repo.shallowmatch = original
--- a/hgext/remotefilelog/shallowrepo.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/remotefilelog/shallowrepo.py	Wed Apr 17 13:41:18 2019 -0400
@@ -289,7 +289,7 @@
 
     repo.__class__ = shallowrepository
 
-    repo.shallowmatch = match.always(repo.root, '')
+    repo.shallowmatch = match.always()
 
     makeunionstores(repo)
 
--- a/hgext/remotefilelog/shallowutil.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/remotefilelog/shallowutil.py	Wed Apr 17 13:41:18 2019 -0400
@@ -237,9 +237,9 @@
             # v0, str(int(size)) is the header
             size = int(header)
     except ValueError:
-        raise RuntimeError("unexpected remotefilelog header: illegal format")
+        raise RuntimeError(r"unexpected remotefilelog header: illegal format")
     if size is None:
-        raise RuntimeError("unexpected remotefilelog header: no size found")
+        raise RuntimeError(r"unexpected remotefilelog header: no size found")
     return index + 1, size, flags
 
 def buildfileblobheader(size, flags, version=None):
--- a/hgext/shelve.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/shelve.py	Wed Apr 17 13:41:18 2019 -0400
@@ -248,8 +248,8 @@
         if version < cls._version:
             d = cls._readold(repo)
         elif version == cls._version:
-            d = scmutil.simplekeyvaluefile(repo.vfs, cls._filename)\
-                       .read(firstlinenonkeyval=True)
+            d = scmutil.simplekeyvaluefile(
+                repo.vfs, cls._filename).read(firstlinenonkeyval=True)
         else:
             raise error.Abort(_('this version of shelve is incompatible '
                                 'with the version used in this repo'))
@@ -287,8 +287,9 @@
             "keep": cls._keep if keep else cls._nokeep,
             "activebook": activebook or cls._noactivebook
         }
-        scmutil.simplekeyvaluefile(repo.vfs, cls._filename)\
-               .write(info, firstline=("%d" % cls._version))
+        scmutil.simplekeyvaluefile(
+            repo.vfs, cls._filename).write(info,
+                                           firstline=("%d" % cls._version))
 
     @classmethod
     def clear(cls, repo):
@@ -419,14 +420,11 @@
     else:
         ui.status(_("nothing changed\n"))
 
-def _shelvecreatedcommit(repo, node, name):
+def _shelvecreatedcommit(repo, node, name, match):
     info = {'node': nodemod.hex(node)}
     shelvedfile(repo, name, 'shelve').writeinfo(info)
     bases = list(mutableancestors(repo[node]))
     shelvedfile(repo, name, 'hg').writebundle(bases, node)
-    # Create a matcher so that prefetch doesn't attempt to fetch the entire
-    # repository pointlessly.
-    match = scmutil.matchfiles(repo, repo[node].files())
     with shelvedfile(repo, name, patchextension).opener('wb') as fp:
         cmdutil.exportfile(repo, [node], fp, opts=mdiff.diffopts(git=True),
                            match=match)
@@ -500,12 +498,20 @@
             _nothingtoshelvemessaging(ui, repo, pats, opts)
             return 1
 
-        _shelvecreatedcommit(repo, node, name)
+        # Create a matcher so that prefetch doesn't attempt to fetch
+        # the entire repository pointlessly, and as an optimisation
+        # for movedirstate, if needed.
+        match = scmutil.matchfiles(repo, repo[node].files())
+        _shelvecreatedcommit(repo, node, name, match)
 
         if ui.formatted():
             desc = stringutil.ellipsis(desc, ui.termwidth())
         ui.status(_('shelved as %s\n') % name)
-        hg.update(repo, parent.node())
+        if opts['keep']:
+            with repo.dirstate.parentchange():
+                scmutil.movedirstate(repo, parent, match)
+        else:
+            hg.update(repo, parent.node())
         if origbranch != repo['.'].branch() and not _isbareshelve(pats, opts):
             repo.dirstate.setbranch(origbranch)
 
@@ -640,10 +646,6 @@
         raise error.Abort(_('working directory parents do not match unshelve '
                            'state'))
 
-def pathtofiles(repo, files):
-    cwd = repo.getcwd()
-    return [repo.pathto(f, cwd) for f in files]
-
 def unshelveabort(ui, repo, state, opts):
     """subcommand that abort an in-progress unshelve"""
     with repo.lock():
@@ -672,18 +674,8 @@
     dirstate."""
     with ui.configoverride({('ui', 'quiet'): True}):
         hg.update(repo, wctx.node())
-        files = []
-        files.extend(shelvectx.files())
-        files.extend(shelvectx.parents()[0].files())
-
-        # revert will overwrite unknown files, so move them out of the way
-        for file in repo.status(unknown=True).unknown:
-            if file in files:
-                util.rename(file, scmutil.origpath(ui, repo, file))
         ui.pushbuffer(True)
-        cmdutil.revert(ui, repo, shelvectx, repo.dirstate.parents(),
-                       *pathtofiles(repo, files),
-                       **{r'no_backup': True})
+        cmdutil.revert(ui, repo, shelvectx, repo.dirstate.parents())
         ui.popbuffer()
 
 def restorebranch(ui, repo, branchtorestore):
@@ -809,7 +801,7 @@
     """Rebase restored commit from its original location to a destination"""
     # If the shelve is not immediately on top of the commit
     # we'll be merging with, rebase it to be on top.
-    if tmpwctx.node() == shelvectx.parents()[0].node():
+    if tmpwctx.node() == shelvectx.p1().node():
         return shelvectx
 
     overrides = {
@@ -986,6 +978,12 @@
             return unshelvecontinue(ui, repo, state, opts)
     elif len(shelved) > 1:
         raise error.Abort(_('can only unshelve one change at a time'))
-    elif not shelved:
+
+    # abort unshelve while merging (issue5123)
+    parents = repo[None].parents()
+    if len(parents) > 1:
+        raise error.Abort(_('cannot unshelve while merging'))
+
+    if not shelved:
         shelved = listshelves(repo)
         if not shelved:
@@ -1053,6 +1051,8 @@
            _('delete the named shelved change(s)')),
           ('e', 'edit', False,
            _('invoke editor on commit messages')),
+          ('k', 'keep', False,
+           _('shelve, but keep changes in the working directory')),
           ('l', 'list', None,
            _('list current shelves')),
           ('m', 'message', '',
@@ -1111,6 +1111,7 @@
 #       ('date', {'create'}), # ignored for passing '--date "0 0"' in tests
         ('delete', {'delete'}),
         ('edit', {'create'}),
+        ('keep', {'create'}),
         ('list', {'list'}),
         ('message', {'create'}),
         ('name', {'create'}),
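
The new ``--keep`` path leans on ``scmutil.movedirstate()``, introduced in
this release: it rewrites dirstate entries so that the on-disk working copy
is reinterpreted as pending changes on top of a different parent, without
touching any files. A hedged sketch, with ``newparent`` standing in for the
target changectx::

   with repo.dirstate.parentchange():
       scmutil.movedirstate(repo, newparent, match)  # files stay untouched
   # 'hg status' now reports the shelved files as modified again
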
--- a/hgext/show.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/show.py	Wed Apr 17 13:41:18 2019 -0400
@@ -243,7 +243,7 @@
     else:
         newheads = set()
 
-    allrevs = set(stackrevs) | newheads | set([baserev])
+    allrevs = set(stackrevs) | newheads | {baserev}
     nodelen = longestshortest(repo, allrevs)
 
     try:
--- a/hgext/sparse.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/sparse.py	Wed Apr 17 13:41:18 2019 -0400
@@ -199,7 +199,7 @@
     def walk(orig, self, match, subrepos, unknown, ignored, full=True):
         # hack to not exclude explicitly-specified paths so that they can
         # be warned later on e.g. dirstate.add()
-        em = matchmod.exact(match._root, match._cwd, match.files())
+        em = matchmod.exact(match.files())
         sm = matchmod.unionmatcher([self._sparsematcher, em])
         match = matchmod.intersectmatchers(match, sm)
         return orig(self, match, subrepos, unknown, ignored, full)
@@ -318,9 +318,10 @@
             if temporaryincludes:
                 ui.status(_("Temporarily Included Files (for merge/rebase):\n"))
                 ui.status(("\n".join(temporaryincludes) + "\n"))
+            return
         else:
-            ui.status(_('repo is not sparse\n'))
-        return
+            raise error.Abort(_('the debugsparse command is only supported on'
+                                ' sparse repositories'))
 
     if include or exclude or delete or reset or enableprofile or disableprofile:
         sparse.updateconfig(repo, pats, opts, include=include, exclude=exclude,
--- a/hgext/split.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/split.py	Wed Apr 17 13:41:18 2019 -0400
@@ -134,13 +134,10 @@
     committed = [] # [ctx]
 
     # Set working parent to ctx.p1(), and keep working copy as ctx's content
-    # NOTE: if we can have "update without touching working copy" API, the
-    # revert step could be cheaper.
-    hg.clean(repo, ctx.p1().node(), show_stats=False)
-    parents = repo.changelog.parents(ctx.node())
-    ui.pushbuffer()
-    cmdutil.revert(ui, repo, ctx, parents)
-    ui.popbuffer() # discard "reverting ..." messages
+    if ctx.node() != repo.dirstate.p1():
+        hg.clean(repo, ctx.node(), show_stats=False)
+    with repo.dirstate.parentchange():
+        scmutil.movedirstate(repo, ctx.p1())
 
     # Any modified, added, removed, deleted result means split is incomplete
     incomplete = lambda repo: any(repo.status()[:4])
--- a/hgext/strip.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/strip.py	Wed Apr 17 13:41:18 2019 -0400
@@ -39,7 +39,7 @@
     if baserev:
         bctx = repo[baserev]
     else:
-        bctx = wctx.parents()[0]
+        bctx = wctx.p1()
     for s in sorted(wctx.substate):
         wctx.sub(s).bailifchanged(True)
         if s not in bctx.substate or bctx.sub(s).dirty():
@@ -76,7 +76,8 @@
 
     return unode
 
-def strip(ui, repo, revs, update=True, backup=True, force=None, bookmarks=None):
+def strip(ui, repo, revs, update=True, backup=True, force=None, bookmarks=None,
+          soft=False):
     with repo.wlock(), repo.lock():
 
         if update:
@@ -85,7 +86,10 @@
             hg.clean(repo, urev)
             repo.dirstate.write(repo.currenttransaction())
 
-        repair.strip(ui, repo, revs, backup)
+        if soft:
+            repair.softstrip(ui, repo, revs, backup)
+        else:
+            repair.strip(ui, repo, revs, backup)
 
         repomarks = repo._bookmarks
         if bookmarks:
@@ -110,7 +114,10 @@
           ('k', 'keep', None, _("do not modify working directory during "
                                 "strip")),
           ('B', 'bookmark', [], _("remove revs only reachable from given"
-                                  " bookmark"), _('BOOKMARK'))],
+                                  " bookmark"), _('BOOKMARK')),
+          ('', 'soft', None,
+          _("simply drop changesets from visible history (EXPERIMENTAL)")),
+         ],
           _('hg strip [-k] [-f] [-B bookmark] [-r] REV...'),
           helpcategory=command.CATEGORY_MAINTENANCE)
 def stripcmd(ui, repo, *revs, **opts):
@@ -235,6 +242,7 @@
 
 
         strip(ui, repo, revs, backup=backup, update=update,
-              force=opts.get('force'), bookmarks=bookmarks)
+              force=opts.get('force'), bookmarks=bookmarks,
+              soft=opts['soft'])
 
     return 0
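
``--soft`` routes into ``repair.softstrip()`` rather than ``repair.strip()``,
hiding the revisions from visible history instead of deleting them from the
store. A usage sketch of the wrapper defined above (``ui``, ``repo`` and
``revs`` assumed bound)::

   from hgext.strip import strip

   strip(ui, repo, revs, backup=True, soft=True)  # i.e. hg strip --soft REV
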
--- a/hgext/transplant.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/transplant.py	Wed Apr 17 13:41:18 2019 -0400
@@ -155,7 +155,7 @@
         if opts is None:
             opts = {}
         revs = sorted(revmap)
-        p1, p2 = repo.dirstate.parents()
+        p1 = repo.dirstate.p1()
         pulls = []
         diffopts = patch.difffeatureopts(self.ui, opts)
         diffopts.git = True
@@ -186,7 +186,7 @@
                             exchange.pull(repo, source.peer(), heads=pulls)
                         merge.update(repo, pulls[-1], branchmerge=False,
                                      force=False)
-                        p1, p2 = repo.dirstate.parents()
+                        p1 = repo.dirstate.p1()
                         pulls = []
 
                 domerge = False
@@ -323,11 +323,11 @@
         else:
             files = None
         if merge:
-            p1, p2 = repo.dirstate.parents()
+            p1 = repo.dirstate.p1()
             repo.setparents(p1, node)
-            m = match.always(repo.root, '')
+            m = match.always()
         else:
-            m = match.exact(repo.root, '', files)
+            m = match.exact(files)
 
         n = repo.commit(message, user, date, extra=extra, match=m,
                         editor=self.getcommiteditor())
@@ -387,7 +387,7 @@
 
         extra = {'transplant_source': node}
         try:
-            p1, p2 = repo.dirstate.parents()
+            p1 = repo.dirstate.p1()
             if p1 != parent:
                 raise error.Abort(_('working directory not at transplant '
                                    'parent %s') % nodemod.hex(parent))
@@ -668,7 +668,7 @@
 
     tp = transplanter(ui, repo, opts)
 
-    p1, p2 = repo.dirstate.parents()
+    p1 = repo.dirstate.p1()
     if len(repo) > 0 and p1 == revlog.nullid:
         raise error.Abort(_('no revision checked out'))
     if opts.get('continue'):
@@ -676,11 +676,7 @@
             raise error.Abort(_('no transplant to continue'))
     else:
         cmdutil.checkunfinished(repo)
-        if p2 != revlog.nullid:
-            raise error.Abort(_('outstanding uncommitted merges'))
-        m, a, r, d = repo.status()[:4]
-        if m or a or r or d:
-            raise error.Abort(_('outstanding local changes'))
+        cmdutil.bailifchanged(repo)
 
     sourcerepo = opts.get('source')
     if sourcerepo:
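
A small recurring cleanup in this file (and in ``strip.py`` above): callers
that only need the first parent now ask for it directly instead of unpacking
both. Sketch::

   p1 = repo.dirstate.p1()  # was: p1, p2 = repo.dirstate.parents()
   bctx = wctx.p1()         # was: bctx = wctx.parents()[0]
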
--- a/hgext/uncommit.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/uncommit.py	Wed Apr 17 13:41:18 2019 -0400
@@ -25,7 +25,7 @@
     cmdutil,
     commands,
     context,
-    copies,
+    copies as copiesmod,
     error,
     node,
     obsutil,
@@ -33,6 +33,7 @@
     registrar,
     rewriteutil,
     scmutil,
+    util,
 )
 
 cmdtable = {}
@@ -44,6 +45,9 @@
 configitem('experimental', 'uncommitondirtywdir',
     default=False,
 )
+configitem('experimental', 'uncommit.keep',
+    default=False,
+)
 
 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
@@ -64,13 +68,13 @@
     if not exclude:
         return None
 
-    files = (initialfiles - exclude)
     # return the p1 so that we don't create an obsmarker later
     if not keepcommit:
-        return ctx.parents()[0].node()
+        return ctx.p1().node()
 
+    files = (initialfiles - exclude)
     # Filter copies
-    copied = copies.pathcopies(base, ctx)
+    copied = copiesmod.pathcopies(base, ctx)
     copied = dict((dst, src) for dst, src in copied.iteritems()
                   if dst in files)
     def filectxfn(repo, memctx, path, contentctx=ctx, redirect=()):
@@ -80,9 +84,12 @@
         mctx = context.memfilectx(repo, memctx, fctx.path(), fctx.data(),
                                   fctx.islink(),
                                   fctx.isexec(),
-                                  copied=copied.get(path))
+                                  copysource=copied.get(path))
         return mctx
 
+    if not files:
+        repo.ui.status(_("note: keeping empty commit\n"))
+
     new = context.memctx(repo,
                          parents=[base.node(), node.nullid],
                          text=ctx.description(),
@@ -93,50 +100,10 @@
                          extra=ctx.extra())
     return repo.commitctx(new)
 
-def _fixdirstate(repo, oldctx, newctx, status):
-    """ fix the dirstate after switching the working directory from oldctx to
-    newctx which can be result of either unamend or uncommit.
-    """
-    ds = repo.dirstate
-    copies = dict(ds.copies())
-    s = status
-    for f in s.modified:
-        if ds[f] == 'r':
-            # modified + removed -> removed
-            continue
-        ds.normallookup(f)
-
-    for f in s.added:
-        if ds[f] == 'r':
-            # added + removed -> unknown
-            ds.drop(f)
-        elif ds[f] != 'a':
-            ds.add(f)
-
-    for f in s.removed:
-        if ds[f] == 'a':
-            # removed + added -> normal
-            ds.normallookup(f)
-        elif ds[f] != 'r':
-            ds.remove(f)
-
-    # Merge old parent and old working dir copies
-    oldcopies = {}
-    for f in (s.modified + s.added):
-        src = oldctx[f].renamed()
-        if src:
-            oldcopies[f] = src[0]
-    oldcopies.update(copies)
-    copies = dict((dst, oldcopies.get(src, src))
-                  for dst, src in oldcopies.iteritems())
-    # Adjust the dirstate copies
-    for dst, src in copies.iteritems():
-        if (src not in newctx or dst in newctx or ds[dst] != 'a'):
-            src = None
-        ds.copy(src, dst)
-
 @command('uncommit',
-    [('', 'keep', False, _('allow an empty commit after uncommiting')),
+    [('', 'keep', None, _('allow an empty commit after uncommiting')),
+     ('', 'allow-dirty-working-copy', False,
+    _('allow uncommit with outstanding changes'))
     ] + commands.walkopts,
     _('[OPTION]... [FILE]...'),
     helpcategory=command.CATEGORY_CHANGE_MANAGEMENT)
@@ -155,17 +122,52 @@
 
     with repo.wlock(), repo.lock():
 
-        if not pats and not repo.ui.configbool('experimental',
-                                               'uncommitondirtywdir'):
-            cmdutil.bailifchanged(repo)
+        m, a, r, d = repo.status()[:4]
+        isdirtypath = any(set(m + a + r + d) & set(pats))
+        allowdirtywcopy = (opts['allow_dirty_working_copy'] or
+                    repo.ui.configbool('experimental', 'uncommitondirtywdir'))
+        if not allowdirtywcopy and (not pats or isdirtypath):
+            cmdutil.bailifchanged(repo, hint=_('requires '
+                                '--allow-dirty-working-copy to uncommit'))
         old = repo['.']
         rewriteutil.precheck(repo, [old.rev()], 'uncommit')
         if len(old.parents()) > 1:
             raise error.Abort(_("cannot uncommit merge changeset"))
 
+        match = scmutil.match(old, pats, opts)
+
+        # Check all explicitly given files; abort if there's a problem.
+        if match.files():
+            s = old.status(old.p1(), match, listclean=True)
+            eligible = set(s.added) | set(s.modified) | set(s.removed)
+
+            badfiles = set(match.files()) - eligible
+
+            # Naming a parent directory of an eligible file is OK, even
+            # if not everything tracked in that directory can be
+            # uncommitted.
+            if badfiles:
+                badfiles -= {f for f in util.dirs(eligible)}
+
+            for f in sorted(badfiles):
+                if f in s.clean:
+                    hint = _(b"file was not changed in working directory "
+                             b"parent")
+                elif repo.wvfs.exists(f):
+                    hint = _(b"file was untracked in working directory parent")
+                else:
+                    hint = _(b"file does not exist")
+
+                raise error.Abort(_(b'cannot uncommit "%s"')
+                                  % scmutil.getuipathfn(repo)(f), hint=hint)
+
         with repo.transaction('uncommit'):
-            match = scmutil.match(old, pats, opts)
-            keepcommit = opts.get('keep') or pats
+            keepcommit = pats
+            if not keepcommit:
+                if opts.get('keep') is not None:
+                    keepcommit = opts.get('keep')
+                else:
+                    keepcommit = ui.configbool('experimental', 'uncommit.keep')
             newid = _commitfiltered(repo, old, match, keepcommit)
             if newid is None:
                 ui.status(_("nothing to uncommit\n"))
@@ -179,12 +181,10 @@
                 # Fully removed the old commit
                 mapping[old.node()] = ()
 
-            scmutil.cleanupnodes(repo, mapping, 'uncommit', fixphase=True)
+            with repo.dirstate.parentchange():
+                scmutil.movedirstate(repo, repo[newid], match)
 
-            with repo.dirstate.parentchange():
-                repo.dirstate.setparents(newid, node.nullid)
-                s = old.p1().status(old, match=match)
-                _fixdirstate(repo, old, repo[newid], s)
+            scmutil.cleanupnodes(repo, mapping, 'uncommit', fixphase=True)
 
 def predecessormarkers(ctx):
     """yields the obsolete markers marking the given changeset as a successor"""
@@ -244,9 +244,7 @@
         dirstate = repo.dirstate
 
         with dirstate.parentchange():
-            dirstate.setparents(newprednode, node.nullid)
-            s = repo.status(predctx, curctx)
-            _fixdirstate(repo, curctx, newpredctx, s)
+            scmutil.movedirstate(repo, newpredctx)
 
         mapping = {curctx.node(): (newprednode,)}
         scmutil.cleanupnodes(repo, mapping, 'unamend', fixphase=True)
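
The explicit-file validation above deliberately exempts directories: naming a
parent of an eligible file is fine even if not everything under it can be
uncommitted. A worked sketch, assuming ``util.dirs()`` iterates every
ancestor directory of the paths it is given::

   eligible = {b'src/a.py', b'src/sub/b.py'}
   badfiles = {b'src', b'docs/readme.txt'}
   badfiles -= {d for d in util.dirs(eligible)}
   # b'src' is an ancestor of an eligible file, so it is dropped;
   # b'docs/readme.txt' remains and triggers the abort with a hint
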
--- a/hgext/zeroconf/Zeroconf.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/hgext/zeroconf/Zeroconf.py	Wed Apr 17 13:41:18 2019 -0400
@@ -84,7 +84,6 @@
 import itertools
 import select
 import socket
-import string
 import struct
 import threading
 import time
@@ -106,7 +105,7 @@
 
 # Some DNS constants
 
-_MDNS_ADDR = '224.0.0.251'
+_MDNS_ADDR = r'224.0.0.251'
 _MDNS_PORT = 5353
 _DNS_PORT = 53
 _DNS_TTL = 60 * 60 # one hour default TTL
@@ -221,7 +220,7 @@
     """A DNS entry"""
 
     def __init__(self, name, type, clazz):
-        self.key = string.lower(name)
+        self.key = name.lower()
         self.name = name
         self.type = type
         self.clazz = clazz & _CLASS_MASK
@@ -620,7 +619,7 @@
         first = off
 
         while True:
-            len = ord(self.data[off])
+            len = ord(self.data[off:off + 1])
             off += 1
             if len == 0:
                 break
@@ -631,7 +630,7 @@
             elif t == 0xC0:
                 if next < 0:
                     next = off + 1
-                off = ((len & 0x3F) << 8) | ord(self.data[off])
+                off = ((len & 0x3F) << 8) | ord(self.data[off:off + 1])
                 if off >= first:
                     raise BadDomainNameCircular(off)
                 first = off
@@ -938,7 +937,6 @@
         self.zeroconf.engine.addReader(self, self.zeroconf.socket)
 
     def handle_read(self):
-        data = addr = port = None
         sock = self.zeroconf.socket
         try:
             data, (addr, port) = sock.recvfrom(_MAX_MSG_ABSOLUTE)
@@ -1230,7 +1228,6 @@
         delay = _LISTENER_TIME
         next = now + delay
         last = now + timeout
-        result = 0
         try:
             zeroconf.addListener(self, DNSQuestion(self.name, _TYPE_ANY,
                                                    _CLASS_IN))
@@ -1335,7 +1332,7 @@
             # SO_REUSEADDR and SO_REUSEPORT have been set, so ignore it
             pass
         self.socket.setsockopt(socket.SOL_IP, socket.IP_ADD_MEMBERSHIP,
-            socket.inet_aton(_MDNS_ADDR) + socket.inet_aton('0.0.0.0'))
+            socket.inet_aton(_MDNS_ADDR) + socket.inet_aton(r'0.0.0.0'))
 
         self.listeners = []
         self.browsers = []
@@ -1659,7 +1656,7 @@
             self.engine.notify()
             self.unregisterAllServices()
             self.socket.setsockopt(socket.SOL_IP, socket.IP_DROP_MEMBERSHIP,
-                socket.inet_aton(_MDNS_ADDR) + socket.inet_aton('0.0.0.0'))
+                socket.inet_aton(_MDNS_ADDR) + socket.inet_aton(r'0.0.0.0'))
             self.socket.close()
 
 # Test a few module features, including service registration, service
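
The ``ord(self.data[off:off + 1])`` changes are Python 3 portability fixes:
indexing ``bytes`` yields an ``int`` on Python 3, so ``ord()`` would raise a
``TypeError``, while a one-byte slice yields ``bytes`` on both major
versions::

   data = b'\x05abc'
   assert ord(data[0:1]) == 5  # works on Python 2 and on Python 3
   # ord(data[0]) fails on Python 3, where data[0] is already the int 5
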
--- a/i18n/posplit	Tue Mar 19 09:23:35 2019 -0400
+++ b/i18n/posplit	Wed Apr 17 13:41:18 2019 -0400
@@ -77,7 +77,7 @@
                             continue
                         else:
                             # lines following directly, unexpected
-                            print('Warning: text follows line with directive' \
+                            print('Warning: text follows line with directive'
                                   ' %s' % directive)
                     comment = 'do not translate: .. %s::' % directive
                     if not newentry.comment:
--- a/mercurial/archival.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/archival.py	Wed Apr 17 13:41:18 2019 -0400
@@ -340,7 +340,8 @@
         for subpath in sorted(ctx.substate):
             sub = ctx.workingsub(subpath)
             submatch = matchmod.subdirmatcher(subpath, match)
-            total += sub.archive(archiver, prefix, submatch, decode)
+            subprefix = prefix + subpath + '/'
+            total += sub.archive(archiver, subprefix, submatch, decode)
 
     if total == 0:
         raise error.Abort(_('no files match the archive pattern'))
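
The caller now builds the nested archive prefix itself, adding one path
component per subrepo level (values below are hypothetical)::

   prefix = 'release/'
   subpath = 'vendor'
   subprefix = prefix + subpath + '/'  # 'release/vendor/' for sub.archive()
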
--- a/mercurial/bdiff.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/bdiff.c	Wed Apr 17 13:41:18 2019 -0400
@@ -35,15 +35,19 @@
 
 	/* count the lines */
 	i = 1; /* extra line for sentinel */
-	for (p = a; p < plast; p++)
-		if (*p == '\n')
+	for (p = a; p < plast; p++) {
+		if (*p == '\n') {
 			i++;
-	if (p == plast)
+		}
+	}
+	if (p == plast) {
 		i++;
+	}
 
 	*lr = l = (struct bdiff_line *)calloc(i, sizeof(struct bdiff_line));
-	if (!l)
+	if (!l) {
 		return -1;
+	}
 
 	/* build the line array and calculate hashes */
 	hash = 0;
@@ -90,18 +94,21 @@
 	struct pos *h = NULL;
 
 	/* build a hash table of the next highest power of 2 */
-	while (buckets < bn + 1)
+	while (buckets < bn + 1) {
 		buckets *= 2;
+	}
 
 	/* try to allocate a large hash table to avoid collisions */
 	for (scale = 4; scale; scale /= 2) {
 		h = (struct pos *)calloc(buckets, scale * sizeof(struct pos));
-		if (h)
+		if (h) {
 			break;
+		}
 	}
 
-	if (!h)
+	if (!h) {
 		return 0;
+	}
 
 	buckets = buckets * scale - 1;
 
@@ -115,9 +122,11 @@
 	for (i = 0; i < bn; i++) {
 		/* find the equivalence class */
 		for (j = b[i].hash & buckets; h[j].pos != -1;
-		     j = (j + 1) & buckets)
-			if (!cmp(b + i, b + h[j].pos))
+		     j = (j + 1) & buckets) {
+			if (!cmp(b + i, b + h[j].pos)) {
 				break;
+			}
+		}
 
 		/* add to the head of the equivalence class */
 		b[i].n = h[j].pos;
@@ -133,15 +142,18 @@
 	for (i = 0; i < an; i++) {
 		/* find the equivalence class */
 		for (j = a[i].hash & buckets; h[j].pos != -1;
-		     j = (j + 1) & buckets)
-			if (!cmp(a + i, b + h[j].pos))
+		     j = (j + 1) & buckets) {
+			if (!cmp(a + i, b + h[j].pos)) {
 				break;
+			}
+		}
 
 		a[i].e = j; /* use equivalence class for quick compare */
-		if (h[j].len <= t)
+		if (h[j].len <= t) {
 			a[i].n = h[j].pos; /* point to head of match list */
-		else
+		} else {
 			a[i].n = -1; /* too popular */
+		}
 	}
 
 	/* discard hash tables */
@@ -158,16 +170,18 @@
 	/* window our search on large regions to better bound
 	   worst-case performance. by choosing a window at the end, we
 	   reduce skipping overhead on the b chains. */
-	if (a2 - a1 > 30000)
+	if (a2 - a1 > 30000) {
 		a1 = a2 - 30000;
+	}
 
 	half = (a1 + a2 - 1) / 2;
 	bhalf = (b1 + b2 - 1) / 2;
 
 	for (i = a1; i < a2; i++) {
 		/* skip all lines in b after the current block */
-		for (j = a[i].n; j >= b2; j = b[j].n)
+		for (j = a[i].n; j >= b2; j = b[j].n) {
 			;
+		}
 
 		/* loop through all lines match a[i] in b */
 		for (; j >= b1; j = b[j].n) {
@@ -179,8 +193,9 @@
 					break;
 				}
 				/* previous line mismatch? */
-				if (a[i - k].e != b[j - k].e)
+				if (a[i - k].e != b[j - k].e) {
 					break;
+				}
 			}
 
 			pos[j].pos = i;
@@ -212,8 +227,9 @@
 	}
 
 	/* expand match to include subsequent popular lines */
-	while (mi + mk < a2 && mj + mk < b2 && a[mi + mk].e == b[mj + mk].e)
+	while (mi + mk < a2 && mj + mk < b2 && a[mi + mk].e == b[mj + mk].e) {
 		mk++;
+	}
 
 	*omi = mi;
 	*omj = mj;
@@ -230,18 +246,21 @@
 	while (1) {
 		/* find the longest match in this chunk */
 		k = longest_match(a, b, pos, a1, a2, b1, b2, &i, &j);
-		if (!k)
+		if (!k) {
 			return l;
+		}
 
 		/* and recurse on the remaining chunks on either side */
 		l = recurse(a, b, pos, a1, i, b1, j, l);
-		if (!l)
+		if (!l) {
 			return NULL;
+		}
 
 		l->next =
 		    (struct bdiff_hunk *)malloc(sizeof(struct bdiff_hunk));
-		if (!l->next)
+		if (!l->next) {
 			return NULL;
+		}
 
 		l = l->next;
 		l->a1 = i;
@@ -271,14 +290,16 @@
 		/* generate the matching block list */
 
 		curr = recurse(a, b, pos, 0, an, 0, bn, base);
-		if (!curr)
+		if (!curr) {
 			return -1;
+		}
 
 		/* sentinel end hunk */
 		curr->next =
 		    (struct bdiff_hunk *)malloc(sizeof(struct bdiff_hunk));
-		if (!curr->next)
+		if (!curr->next) {
 			return -1;
+		}
 		curr = curr->next;
 		curr->a1 = curr->a2 = an;
 		curr->b1 = curr->b2 = bn;
@@ -291,10 +312,11 @@
 	for (curr = base->next; curr; curr = curr->next) {
 		struct bdiff_hunk *next = curr->next;
 
-		if (!next)
+		if (!next) {
 			break;
+		}
 
-		if (curr->a2 == next->a1 || curr->b2 == next->b1)
+		if (curr->a2 == next->a1 || curr->b2 == next->b1) {
 			while (curr->a2 < an && curr->b2 < bn &&
 			       next->a1 < next->a2 && next->b1 < next->b2 &&
 			       !cmp(a + curr->a2, b + curr->b2)) {
@@ -303,10 +325,12 @@
 				curr->b2++;
 				next->b1++;
 			}
+		}
 	}
 
-	for (curr = base->next; curr; curr = curr->next)
+	for (curr = base->next; curr; curr = curr->next) {
 		count++;
+	}
 	return count;
 }
 
--- a/mercurial/bookmarks.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/bookmarks.py	Wed Apr 17 13:41:18 2019 -0400
@@ -44,7 +44,7 @@
     return fp
 
 class bmstore(object):
-    """Storage for bookmarks.
+    r"""Storage for bookmarks.
 
     This object should do all bookmark-related reads and writes, so
     that it's fairly simple to replace the storage underlying
@@ -306,7 +306,6 @@
     itself as we commit. This function returns the name of that bookmark.
     It is stored in .hg/bookmarks.current
     """
-    mark = None
     try:
         file = repo.vfs('bookmarks.current')
     except IOError as inst:
--- a/mercurial/branchmap.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/branchmap.py	Wed Apr 17 13:41:18 2019 -0400
@@ -23,144 +23,105 @@
     util,
 )
 from .utils import (
+    repoviewutil,
     stringutil,
 )
 
+subsettable = repoviewutil.subsettable
+
 calcsize = struct.calcsize
 pack_into = struct.pack_into
 unpack_from = struct.unpack_from
 
-def _filename(repo):
-    """name of a branchcache file for a given repo or repoview"""
-    filename = "branch2"
-    if repo.filtername:
-        filename = '%s-%s' % (filename, repo.filtername)
-    return filename
+
+class BranchMapCache(object):
+    """mapping of filtered views of repo with their branchcache"""
+    def __init__(self):
+        self._per_filter = {}
 
-def read(repo):
-    f = None
-    try:
-        f = repo.cachevfs(_filename(repo))
-        lineiter = iter(f)
-        cachekey = next(lineiter).rstrip('\n').split(" ", 2)
-        last, lrev = cachekey[:2]
-        last, lrev = bin(last), int(lrev)
-        filteredhash = None
-        if len(cachekey) > 2:
-            filteredhash = bin(cachekey[2])
-        partial = branchcache(tipnode=last, tiprev=lrev,
-                              filteredhash=filteredhash)
-        if not partial.validfor(repo):
-            # invalidate the cache
-            raise ValueError(r'tip differs')
+    def __getitem__(self, repo):
+        self.updatecache(repo)
+        return self._per_filter[repo.filtername]
+
+    def updatecache(self, repo):
+        """Update the cache for the given filtered view on a repository"""
+        # This can trigger updates for the caches for subsets of the filtered
+        # view, e.g. when there is no cache for this filtered view or the cache
+        # is stale.
+
         cl = repo.changelog
-        for l in lineiter:
-            l = l.rstrip('\n')
-            if not l:
-                continue
-            node, state, label = l.split(" ", 2)
-            if state not in 'oc':
-                raise ValueError(r'invalid branch state')
-            label = encoding.tolocal(label.strip())
-            node = bin(node)
-            if not cl.hasnode(node):
-                raise ValueError(
-                    r'node %s does not exist' % pycompat.sysstr(hex(node)))
-            partial.setdefault(label, []).append(node)
-            if state == 'c':
-                partial._closednodes.add(node)
+        filtername = repo.filtername
+        bcache = self._per_filter.get(filtername)
+        if bcache is None or not bcache.validfor(repo):
+            # cache object missing or cache object stale? Read from disk
+            bcache = branchcache.fromfile(repo)
 
-    except (IOError, OSError):
-        return None
+        revs = []
+        if bcache is None:
+            # no (fresh) cache available anymore, perhaps we can re-use
+            # the cache for a subset, then extend that to add info on missing
+            # revisions.
+            subsetname = subsettable.get(filtername)
+            if subsetname is not None:
+                subset = repo.filtered(subsetname)
+                bcache = self[subset].copy()
+                extrarevs = subset.changelog.filteredrevs - cl.filteredrevs
+                revs.extend(r for r in extrarevs if r <= bcache.tiprev)
+            else:
+                # nothing to fall back on, start empty.
+                bcache = branchcache()
 
-    except Exception as inst:
-        if repo.ui.debugflag:
-            msg = 'invalid branchheads cache'
-            if repo.filtername is not None:
-                msg += ' (%s)' % repo.filtername
-            msg += ': %s\n'
-            repo.ui.debug(msg % pycompat.bytestr(inst))
-        partial = None
+        revs.extend(cl.revs(start=bcache.tiprev + 1))
+        if revs:
+            bcache.update(repo, revs)
 
-    finally:
-        if f:
-            f.close()
-
-    return partial
+        assert bcache.validfor(repo), filtername
+        self._per_filter[repo.filtername] = bcache
 
-### Nearest subset relation
-# Nearest subset of filter X is a filter Y so that:
-# * Y is included in X,
-# * X - Y is as small as possible.
-# This create and ordering used for branchmap purpose.
-# the ordering may be partial
-subsettable = {None: 'visible',
-               'visible-hidden': 'visible',
-               'visible': 'served',
-               'served': 'immutable',
-               'immutable': 'base'}
+    def replace(self, repo, remotebranchmap):
+        """Replace the branchmap cache for a repo with a branch mapping.
 
-def updatecache(repo):
-    cl = repo.changelog
-    filtername = repo.filtername
-    partial = repo._branchcaches.get(filtername)
+        This is likely only called during clone with a branch map from a
+        remote.
 
-    revs = []
-    if partial is None or not partial.validfor(repo):
-        partial = read(repo)
-        if partial is None:
-            subsetname = subsettable.get(filtername)
-            if subsetname is None:
-                partial = branchcache()
-            else:
-                subset = repo.filtered(subsetname)
-                partial = subset.branchmap().copy()
-                extrarevs = subset.changelog.filteredrevs - cl.filteredrevs
-                revs.extend(r for  r in extrarevs if r <= partial.tiprev)
-    revs.extend(cl.revs(start=partial.tiprev + 1))
-    if revs:
-        partial.update(repo, revs)
-        partial.write(repo)
-
-    assert partial.validfor(repo), filtername
-    repo._branchcaches[repo.filtername] = partial
+        """
+        cl = repo.changelog
+        clrev = cl.rev
+        clbranchinfo = cl.branchinfo
+        rbheads = []
+        closed = []
+        for bheads in remotebranchmap.itervalues():
+            rbheads += bheads
+            for h in bheads:
+                r = clrev(h)
+                b, c = clbranchinfo(r)
+                if c:
+                    closed.append(h)
 
-def replacecache(repo, bm):
-    """Replace the branchmap cache for a repo with a branch mapping.
-
-    This is likely only called during clone with a branch map from a remote.
-    """
-    cl = repo.changelog
-    clrev = cl.rev
-    clbranchinfo = cl.branchinfo
-    rbheads = []
-    closed = []
-    for bheads in bm.itervalues():
-        rbheads.extend(bheads)
-        for h in bheads:
-            r = clrev(h)
-            b, c = clbranchinfo(r)
-            if c:
-                closed.append(h)
+        if rbheads:
+            rtiprev = max((int(clrev(node)) for node in rbheads))
+            cache = branchcache(
+                remotebranchmap, repo[rtiprev].node(), rtiprev,
+                closednodes=closed)
 
-    if rbheads:
-        rtiprev = max((int(clrev(node))
-                for node in rbheads))
-        cache = branchcache(bm,
-                            repo[rtiprev].node(),
-                            rtiprev,
-                            closednodes=closed)
+            # Try to stick it as low as possible
+            # filters above served are unlikely to be fetched from a clone
+            for candidate in ('base', 'immutable', 'served'):
+                rview = repo.filtered(candidate)
+                if cache.validfor(rview):
+                    self._per_filter[candidate] = cache
+                    cache.write(rview)
+                    return
 
-        # Try to stick it as low as possible
-        # filter above served are unlikely to be fetch from a clone
-        for candidate in ('base', 'immutable', 'served'):
-            rview = repo.filtered(candidate)
-            if cache.validfor(rview):
-                repo._branchcaches[candidate] = cache
-                cache.write(rview)
-                break
+    def clear(self):
+        self._per_filter.clear()
 
-class branchcache(dict):
+def _unknownnode(node):
+    """ raises ValueError when branchcache found a node which does not exists
+    """
+    raise ValueError(r'node %s does not exist' % pycompat.sysstr(hex(node)))
+
+class branchcache(object):
     """A dict like object that hold branches heads cache.
 
     This cache is used to avoid costly computations to determine all the
@@ -183,8 +144,10 @@
     """
 
     def __init__(self, entries=(), tipnode=nullid, tiprev=nullrev,
-                 filteredhash=None, closednodes=None):
-        super(branchcache, self).__init__(entries)
+                 filteredhash=None, closednodes=None, hasnode=None):
+        """ hasnode is a function which can be used to verify whether changelog
+        has a given node or not. If it's not provided, we assume that every node
+        we have exists in changelog """
         self.tipnode = tipnode
         self.tiprev = tiprev
         self.filteredhash = filteredhash
@@ -195,6 +158,125 @@
             self._closednodes = set()
         else:
             self._closednodes = closednodes
+        self._entries = dict(entries)
+        # whether closed nodes are verified or not
+        self._closedverified = False
+        # branches for which nodes are verified
+        self._verifiedbranches = set()
+        self._hasnode = hasnode
+        if self._hasnode is None:
+            self._hasnode = lambda x: True
+
+    def _verifyclosed(self):
+        """ verify the closed nodes we have """
+        if self._closedverified:
+            return
+        for node in self._closednodes:
+            if not self._hasnode(node):
+                _unknownnode(node)
+
+        self._closedverified = True
+
+    def _verifybranch(self, branch):
+        """ verify head nodes for the given branch. """
+        if branch not in self._entries or branch in self._verifiedbranches:
+            return
+        for n in self._entries[branch]:
+            if not self._hasnode(n):
+                _unknownnode(n)
+
+        self._verifiedbranches.add(branch)
+
+    def _verifyall(self):
+        """ verifies nodes of all the branches """
+        needverification = set(self._entries.keys()) - self._verifiedbranches
+        for b in needverification:
+            self._verifybranch(b)
+
+    def __iter__(self):
+        return iter(self._entries)
+
+    def __setitem__(self, key, value):
+        self._entries[key] = value
+
+    def __getitem__(self, key):
+        self._verifybranch(key)
+        return self._entries[key]
+
+    def __contains__(self, key):
+        self._verifybranch(key)
+        return key in self._entries
+
+    def iteritems(self):
+        for k, v in self._entries.iteritems():
+            self._verifybranch(k)
+            yield k, v
+
+    def hasbranch(self, label):
+        """ checks whether a branch of this name exists or not """
+        self._verifybranch(label)
+        return label in self._entries
+
+    @classmethod
+    def fromfile(cls, repo):
+        f = None
+        try:
+            f = repo.cachevfs(cls._filename(repo))
+            lineiter = iter(f)
+            cachekey = next(lineiter).rstrip('\n').split(" ", 2)
+            last, lrev = cachekey[:2]
+            last, lrev = bin(last), int(lrev)
+            filteredhash = None
+            hasnode = repo.changelog.hasnode
+            if len(cachekey) > 2:
+                filteredhash = bin(cachekey[2])
+            bcache = cls(tipnode=last, tiprev=lrev, filteredhash=filteredhash,
+                         hasnode=hasnode)
+            if not bcache.validfor(repo):
+                # invalidate the cache
+                raise ValueError(r'tip differs')
+            bcache.load(repo, lineiter)
+        except (IOError, OSError):
+            return None
+
+        except Exception as inst:
+            if repo.ui.debugflag:
+                msg = 'invalid branchheads cache'
+                if repo.filtername is not None:
+                    msg += ' (%s)' % repo.filtername
+                msg += ': %s\n'
+                repo.ui.debug(msg % pycompat.bytestr(inst))
+            bcache = None
+
+        finally:
+            if f:
+                f.close()
+
+        return bcache
+
+    def load(self, repo, lineiter):
+        """ fully loads the branchcache by reading from the file using the line
+        iterator passed"""
+        for line in lineiter:
+            line = line.rstrip('\n')
+            if not line:
+                continue
+            node, state, label = line.split(" ", 2)
+            if state not in 'oc':
+                raise ValueError(r'invalid branch state')
+            label = encoding.tolocal(label.strip())
+            node = bin(node)
+            self._entries.setdefault(label, []).append(node)
+            if state == 'c':
+                self._closednodes.add(node)
+
+    @staticmethod
+    def _filename(repo):
+        """name of a branchcache file for a given repo or repoview"""
+        filename = "branch2"
+        if repo.filtername:
+            filename = '%s-%s' % (filename, repo.filtername)
+        return filename
 
     def validfor(self, repo):
         """Is the cache content valid regarding a repo
@@ -203,7 +285,7 @@
         - True when cache is up to date or a subset of current repo."""
         try:
             return ((self.tipnode == repo.changelog.node(self.tiprev))
-                    and (self.filteredhash == \
+                    and (self.filteredhash ==
                          scmutil.filteredhash(repo, self.tiprev)))
         except IndexError:
             return False
@@ -230,7 +312,8 @@
         return (n for n in nodes if n not in self._closednodes)
 
     def branchheads(self, branch, closed=False):
-        heads = self[branch]
+        self._verifybranch(branch)
+        heads = self._entries[branch]
         if not closed:
             heads = list(self.iteropen(heads))
         return heads
@@ -239,32 +322,38 @@
         for bn, heads in self.iteritems():
             yield (bn, heads) + self._branchtip(heads)
 
+    def iterheads(self):
+        """ returns all the heads """
+        self._verifyall()
+        return self._entries.itervalues()
+
     def copy(self):
         """return an deep copy of the branchcache object"""
-        return branchcache(self, self.tipnode, self.tiprev, self.filteredhash,
-                           self._closednodes)
+        return type(self)(
+            self._entries, self.tipnode, self.tiprev, self.filteredhash,
+            self._closednodes)
 
     def write(self, repo):
         try:
-            f = repo.cachevfs(_filename(repo), "w", atomictemp=True)
+            f = repo.cachevfs(self._filename(repo), "w", atomictemp=True)
             cachekey = [hex(self.tipnode), '%d' % self.tiprev]
             if self.filteredhash is not None:
                 cachekey.append(hex(self.filteredhash))
             f.write(" ".join(cachekey) + '\n')
             nodecount = 0
             for label, nodes in sorted(self.iteritems()):
+                label = encoding.fromlocal(label)
                 for node in nodes:
                     nodecount += 1
                     if node in self._closednodes:
                         state = 'c'
                     else:
                         state = 'o'
-                    f.write("%s %s %s\n" % (hex(node), state,
-                                            encoding.fromlocal(label)))
+                    f.write("%s %s %s\n" % (hex(node), state, label))
             f.close()
             repo.ui.log('branchcache',
                         'wrote %s branch cache with %d labels and %d nodes\n',
-                        repo.filtername, len(self), nodecount)
+                        repo.filtername, len(self._entries), nodecount)
         except (IOError, OSError, error.Abort) as inst:
             # Abort may be raised by read only opener, so log and continue
             repo.ui.debug("couldn't write branch cache: %s\n" %
@@ -293,7 +382,7 @@
         # really branchheads. Note checking parents is insufficient:
         # 1 (branch a) -> 2 (branch b) -> 3 (branch a)
         for branch, newheadrevs in newbranches.iteritems():
-            bheads = self.setdefault(branch, [])
+            bheads = self._entries.setdefault(branch, [])
             bheadset = set(cl.rev(node) for node in bheads)
 
             # This has been tested True on all internal usage of this function.
@@ -320,7 +409,7 @@
             # cache key are not valid anymore
             self.tipnode = nullid
             self.tiprev = nullrev
-            for heads in self.values():
+            for heads in self.iterheads():
                 tiprev = max(cl.rev(node) for node in heads)
                 if tiprev > self.tiprev:
                     self.tipnode = cl.node(tiprev)
@@ -329,7 +418,16 @@
 
         duration = util.timer() - starttime
         repo.ui.log('branchcache', 'updated %s branch cache in %.4f seconds\n',
-                    repo.filtername, duration)
+                    repo.filtername or b'None', duration)
+
+        self.write(repo)
+
+
+class remotebranchcache(branchcache):
+    """Branchmap info for a remote connection, should not write locally"""
+    def write(self, repo):
+        pass
+
 
 # Revision branch info cache
 
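
The rewrite above replaces the module-level ``read``/``updatecache``/
``replacecache`` helpers with a ``BranchMapCache`` keyed by repository
filter, and ``branchcache`` stops subclassing ``dict`` so that node
verification can happen lazily, branch by branch. A hedged usage sketch
(in-tree callers reach this through ``repo.branchmap()``; standalone wiring
is assumed)::

   cache = BranchMapCache()
   bc = cache[repo]             # runs updatecache() for this filtered view
   for branch, heads in bc.iteritems():
       pass                     # heads are verified lazily on first access
   cache.clear()                # drop every per-filter cache object
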
--- a/mercurial/bundle2.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/bundle2.py	Wed Apr 17 13:41:18 2019 -0400
@@ -834,12 +834,21 @@
         if paramssize < 0:
             raise error.BundleValueError('negative bundle param size: %i'
                                          % paramssize)
-        yield _pack(_fstreamparamsize, paramssize)
         if paramssize:
             params = self._readexact(paramssize)
             self._processallparams(params)
-            yield params
-            assert self._compengine.bundletype()[1] == 'UN'
+            # The payload itself is decompressed below, so drop
+            # the compression parameter passed down to compensate.
+            outparams = []
+            for p in params.split(' '):
+                k, v = p.split('=', 1)
+                if k.lower() != 'compression':
+                    outparams.append(p)
+            outparams = ' '.join(outparams)
+            yield _pack(_fstreamparamsize, len(outparams))
+            yield outparams
+        else:
+            yield _pack(_fstreamparamsize, paramssize)
         # From there, payload might need to be decompressed
         self._fp = self._compengine.decompressorreader(self._fp)
         emptycount = 0
@@ -1397,8 +1406,8 @@
             assert chunknum == 0, 'Must start with chunk 0'
             self._chunkindex.append((0, self._tellfp()))
         else:
-            assert chunknum < len(self._chunkindex), \
-                   'Unknown chunk %d' % chunknum
+            assert chunknum < len(self._chunkindex), (
+                   'Unknown chunk %d' % chunknum)
             self._seekfp(self._chunkindex[chunknum][1])
 
         pos = self._chunkindex[chunknum][0]
@@ -1664,6 +1673,7 @@
                     mandatory=False)
 
 def _formatrequirementsspec(requirements):
+    requirements = [req for req in requirements if req != "shared"]
     return urlreq.quote(','.join(sorted(requirements)))
 
 def _formatrequirementsparams(requirements):
@@ -1979,7 +1989,7 @@
         op.gettransaction()
 
     currentheads = set()
-    for ls in op.repo.branchmap().itervalues():
+    for ls in op.repo.branchmap().iterheads():
         currentheads.update(ls)
 
     for h in heads:
@@ -2314,7 +2324,7 @@
                                         oldmatcher=oldmatcher,
                                         matcher=newmatcher,
                                         fullnodes=commonnodes)
-        cgdata = packer.generate(set([nodemod.nullid]), list(commonnodes),
+        cgdata = packer.generate({nodemod.nullid}, list(commonnodes),
                                  False, 'narrow_widen', changelog=False)
 
         part = bundler.newpart('changegroup', data=cgdata)
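
Because the forwarded payload is now re-emitted decompressed, the
``Compression`` stream parameter has to be stripped and the size prefix
recomputed for whatever parameters remain. Sketch of the filtering step
(parameter values hypothetical)::

   params = b'Compression=BZ e=1'
   kept = b' '.join(p for p in params.split(b' ')
                    if p.split(b'=', 1)[0].lower() != b'compression')
   # kept == b'e=1'; its length, not the original size, goes in the prefix
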
--- a/mercurial/cext/base85.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/cext/base85.c	Wed Apr 17 13:41:18 2019 -0400
@@ -24,8 +24,9 @@
 	unsigned i;
 
 	memset(b85dec, 0, sizeof(b85dec));
-	for (i = 0; i < sizeof(b85chars); i++)
+	for (i = 0; i < sizeof(b85chars); i++) {
 		b85dec[(int)(b85chars[i])] = i + 1;
+	}
 }
 
 static PyObject *b85encode(PyObject *self, PyObject *args)
@@ -37,19 +38,22 @@
 	unsigned int acc, val, ch;
 	int pad = 0;
 
-	if (!PyArg_ParseTuple(args, PY23("s#|i", "y#|i"), &text, &len, &pad))
+	if (!PyArg_ParseTuple(args, PY23("s#|i", "y#|i"), &text, &len, &pad)) {
 		return NULL;
+	}
 
-	if (pad)
+	if (pad) {
 		olen = ((len + 3) / 4 * 5) - 3;
-	else {
+	} else {
 		olen = len % 4;
-		if (olen)
+		if (olen) {
 			olen++;
+		}
 		olen += len / 4 * 5;
 	}
-	if (!(out = PyBytes_FromStringAndSize(NULL, olen + 3)))
+	if (!(out = PyBytes_FromStringAndSize(NULL, olen + 3))) {
 		return NULL;
+	}
 
 	dst = PyBytes_AsString(out);
 
@@ -58,8 +62,9 @@
 		for (i = 24; i >= 0; i -= 8) {
 			ch = *text++;
 			acc |= ch << i;
-			if (--len == 0)
+			if (--len == 0) {
 				break;
+			}
 		}
 		for (i = 4; i >= 0; i--) {
 			val = acc % 85;
@@ -69,8 +74,9 @@
 		dst += 5;
 	}
 
-	if (!pad)
+	if (!pad) {
 		_PyBytes_Resize(&out, olen);
+	}
 
 	return out;
 }
@@ -84,15 +90,18 @@
 	int c;
 	unsigned int acc;
 
-	if (!PyArg_ParseTuple(args, PY23("s#", "y#"), &text, &len))
+	if (!PyArg_ParseTuple(args, PY23("s#", "y#"), &text, &len)) {
 		return NULL;
+	}
 
 	olen = len / 5 * 4;
 	i = len % 5;
-	if (i)
+	if (i) {
 		olen += i - 1;
-	if (!(out = PyBytes_FromStringAndSize(NULL, olen)))
+	}
+	if (!(out = PyBytes_FromStringAndSize(NULL, olen))) {
 		return NULL;
+	}
 
 	dst = PyBytes_AsString(out);
 
@@ -100,8 +109,9 @@
 	while (i < len) {
 		acc = 0;
 		cap = len - i - 1;
-		if (cap > 4)
+		if (cap > 4) {
 			cap = 4;
+		}
 		for (j = 0; j < cap; i++, j++) {
 			c = b85dec[(int)*text++] - 1;
 			if (c < 0) {
@@ -136,10 +146,12 @@
 
 		cap = olen < 4 ? olen : 4;
 		olen -= cap;
-		for (j = 0; j < 4 - cap; j++)
+		for (j = 0; j < 4 - cap; j++) {
 			acc *= 85;
-		if (cap && cap < 4)
+		}
+		if (cap && cap < 4) {
 			acc += 0xffffff >> (cap - 1) * 8;
+		}
 		for (j = 0; j < cap; j++) {
 			acc = (acc << 8) | (acc >> 24);
 			*dst++ = acc;
--- a/mercurial/cext/bdiff.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/cext/bdiff.c	Wed Apr 17 13:41:18 2019 -0400
@@ -29,22 +29,26 @@
 
 	l.next = NULL;
 
-	if (!PyArg_ParseTuple(args, "SS:bdiff", &sa, &sb))
+	if (!PyArg_ParseTuple(args, "SS:bdiff", &sa, &sb)) {
 		return NULL;
+	}
 
 	an = bdiff_splitlines(PyBytes_AsString(sa), PyBytes_Size(sa), &a);
 	bn = bdiff_splitlines(PyBytes_AsString(sb), PyBytes_Size(sb), &b);
 
-	if (!a || !b)
+	if (!a || !b) {
 		goto nomem;
+	}
 
 	count = bdiff_diff(a, an, b, bn, &l);
-	if (count < 0)
+	if (count < 0) {
 		goto nomem;
+	}
 
 	rl = PyList_New(count);
-	if (!rl)
+	if (!rl) {
 		goto nomem;
+	}
 
 	for (h = l.next; h; h = h->next) {
 		m = Py_BuildValue("iiii", h->a1, h->a2, h->b1, h->b2);
@@ -72,8 +76,10 @@
 
 	l.next = NULL;
 
-	if (!PyArg_ParseTuple(args, PY23("s*s*:bdiff", "y*y*:bdiff"), &ba, &bb))
+	if (!PyArg_ParseTuple(args, PY23("s*s*:bdiff", "y*y*:bdiff"), &ba,
+	                      &bb)) {
 		return NULL;
+	}
 
 	if (!PyBuffer_IsContiguous(&ba, 'C') || ba.ndim > 1) {
 		PyErr_SetString(PyExc_ValueError, "bdiff input not contiguous");
@@ -98,8 +104,9 @@
 	lmax = la > lb ? lb : la;
 	for (ia = ba.buf, ib = bb.buf; li < lmax && *ia == *ib;
 	     ++li, ++ia, ++ib) {
-		if (*ia == '\n')
+		if (*ia == '\n') {
 			lcommon = li + 1;
+		}
 	}
 	/* we can almost add: if (li == lmax) lcommon = li; */
 
@@ -119,8 +126,9 @@
 	/* calculate length of output */
 	la = lb = 0;
 	for (h = l.next; h; h = h->next) {
-		if (h->a1 != la || h->b1 != lb)
+		if (h->a1 != la || h->b1 != lb) {
 			len += 12 + bl[h->b1].l - bl[lb].l;
+		}
 		la = h->a2;
 		lb = h->b2;
 	}
@@ -129,8 +137,9 @@
 
 	result = PyBytes_FromStringAndSize(NULL, len);
 
-	if (!result)
+	if (!result) {
 		goto cleanup;
+	}
 
 	/* build binary patch */
 	rb = PyBytes_AsString(result);
@@ -151,8 +160,9 @@
 	}
 
 cleanup:
-	if (_save)
+	if (_save) {
 		PyEval_RestoreThread(_save);
+	}
 	PyBuffer_Release(&ba);
 	PyBuffer_Release(&bb);
 	free(al);
@@ -174,20 +184,23 @@
 	Py_ssize_t i, rlen, wlen = 0;
 	char *w;
 
-	if (!PyArg_ParseTuple(args, "Sb:fixws", &s, &allws))
+	if (!PyArg_ParseTuple(args, "Sb:fixws", &s, &allws)) {
 		return NULL;
+	}
 	r = PyBytes_AsString(s);
 	rlen = PyBytes_Size(s);
 
 	w = (char *)PyMem_Malloc(rlen ? rlen : 1);
-	if (!w)
+	if (!w) {
 		goto nomem;
+	}
 
 	for (i = 0; i != rlen; i++) {
 		c = r[i];
 		if (c == ' ' || c == '\t' || c == '\r') {
-			if (!allws && (wlen == 0 || w[wlen - 1] != ' '))
+			if (!allws && (wlen == 0 || w[wlen - 1] != ' ')) {
 				w[wlen++] = ' ';
+			}
 		} else if (c == '\n' && !allws && wlen > 0 &&
 		           w[wlen - 1] == ' ') {
 			w[wlen - 1] = '\n';
@@ -207,8 +220,9 @@
                           const char *source, Py_ssize_t len)
 {
 	PyObject *sliced = PyBytes_FromStringAndSize(source, len);
-	if (sliced == NULL)
+	if (sliced == NULL) {
 		return false;
+	}
 	PyList_SET_ITEM(list, destidx, sliced);
 	return true;
 }
@@ -232,19 +246,22 @@
 			++nelts;
 		}
 	}
-	if ((result = PyList_New(nelts + 1)) == NULL)
+	if ((result = PyList_New(nelts + 1)) == NULL) {
 		goto abort;
+	}
 	nelts = 0;
 	for (i = 0; i < size - 1; ++i) {
 		if (text[i] == '\n') {
 			if (!sliceintolist(result, nelts++, text + start,
-			                   i - start + 1))
+			                   i - start + 1)) {
 				goto abort;
+			}
 			start = i + 1;
 		}
 	}
-	if (!sliceintolist(result, nelts++, text + start, size - start))
+	if (!sliceintolist(result, nelts++, text + start, size - start)) {
 		goto abort;
+	}
 	return result;
 abort:
 	Py_XDECREF(result);
@@ -257,8 +274,9 @@
 	PyObject *rl = (PyObject *)priv;
 	PyObject *m = Py_BuildValue("LLLL", a1, a2, b1, b2);
 	int r;
-	if (!m)
+	if (!m) {
 		return -1;
+	}
 	r = PyList_Append(rl, m);
 	Py_DECREF(m);
 	return r;
@@ -282,15 +300,17 @@
 	};
 
 	if (!PyArg_ParseTuple(args, PY23("s#s#", "y#y#"), &a.ptr, &la, &b.ptr,
-	                      &lb))
+	                      &lb)) {
 		return NULL;
+	}
 
 	a.size = la;
 	b.size = lb;
 
 	rl = PyList_New(0);
-	if (!rl)
+	if (!rl) {
 		return PyErr_NoMemory();
+	}
 
 	ecb.priv = rl;
 
--- a/mercurial/cext/charencode.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/cext/charencode.c	Wed Apr 17 13:41:18 2019 -0400
@@ -114,8 +114,9 @@
 
 	ret = PyBytes_FromStringAndSize(NULL, len / 2);
 
-	if (!ret)
+	if (!ret) {
 		return NULL;
+	}
 
 	d = PyBytes_AsString(ret);
 
@@ -133,21 +134,24 @@
 	const char *buf;
 	Py_ssize_t i, len;
 	if (!PyArg_ParseTuple(args, PY23("s#:isasciistr", "y#:isasciistr"),
-	                      &buf, &len))
+	                      &buf, &len)) {
 		return NULL;
+	}
 	i = 0;
 	/* char array in PyStringObject should be at least 4-byte aligned */
 	if (((uintptr_t)buf & 3) == 0) {
 		const uint32_t *p = (const uint32_t *)buf;
 		for (; i < len / 4; i++) {
-			if (p[i] & 0x80808080U)
+			if (p[i] & 0x80808080U) {
 				Py_RETURN_FALSE;
+			}
 		}
 		i *= 4;
 	}
 	for (; i < len; i++) {
-		if (buf[i] & 0x80)
+		if (buf[i] & 0x80) {
 			Py_RETURN_FALSE;
+		}
 	}
 	Py_RETURN_TRUE;
 }
@@ -164,8 +168,9 @@
 	len = PyBytes_GET_SIZE(str_obj);
 
 	newobj = PyBytes_FromStringAndSize(NULL, len);
-	if (!newobj)
+	if (!newobj) {
 		goto quit;
+	}
 
 	newstr = PyBytes_AS_STRING(newobj);
 
@@ -197,16 +202,18 @@
 PyObject *asciilower(PyObject *self, PyObject *args)
 {
 	PyObject *str_obj;
-	if (!PyArg_ParseTuple(args, "O!:asciilower", &PyBytes_Type, &str_obj))
+	if (!PyArg_ParseTuple(args, "O!:asciilower", &PyBytes_Type, &str_obj)) {
 		return NULL;
+	}
 	return _asciitransform(str_obj, lowertable, NULL);
 }
 
 PyObject *asciiupper(PyObject *self, PyObject *args)
 {
 	PyObject *str_obj;
-	if (!PyArg_ParseTuple(args, "O!:asciiupper", &PyBytes_Type, &str_obj))
+	if (!PyArg_ParseTuple(args, "O!:asciiupper", &PyBytes_Type, &str_obj)) {
 		return NULL;
+	}
 	return _asciitransform(str_obj, uppertable, NULL);
 }
 
@@ -222,8 +229,9 @@
 
 	if (!PyArg_ParseTuple(args, "O!O!O!:make_file_foldmap", &PyDict_Type,
 	                      &dmap, &PyInt_Type, &spec_obj, &PyFunction_Type,
-	                      &normcase_fallback))
+	                      &normcase_fallback)) {
 		goto quit;
+	}
 
 	spec = (int)PyInt_AS_LONG(spec_obj);
 	switch (spec) {
@@ -244,8 +252,9 @@
 	/* Add some more entries to deal with additions outside this
 	   function. */
 	file_foldmap = _dict_new_presized((PyDict_Size(dmap) / 10) * 11);
-	if (file_foldmap == NULL)
+	if (file_foldmap == NULL) {
 		goto quit;
+	}
 
 	while (PyDict_Next(dmap, &pos, &k, &v)) {
 		if (!dirstate_tuple_check(v)) {
@@ -265,8 +274,9 @@
 				    normcase_fallback, k, NULL);
 			}
 
-			if (normed == NULL)
+			if (normed == NULL) {
 				goto quit;
+			}
 			if (PyDict_SetItem(file_foldmap, normed, k) == -1) {
 				Py_DECREF(normed);
 				goto quit;
@@ -377,22 +387,25 @@
 	Py_ssize_t origlen, esclen;
 	int paranoid;
 	if (!PyArg_ParseTuple(args, "O!i:jsonescapeu8fast", &PyBytes_Type,
-	                      &origstr, &paranoid))
+	                      &origstr, &paranoid)) {
 		return NULL;
+	}
 
 	origbuf = PyBytes_AS_STRING(origstr);
 	origlen = PyBytes_GET_SIZE(origstr);
 	esclen = jsonescapelen(origbuf, origlen, paranoid);
-	if (esclen < 0)
+	if (esclen < 0) {
 		return NULL; /* unsupported char found or overflow */
+	}
 	if (origlen == esclen) {
 		Py_INCREF(origstr);
 		return origstr;
 	}
 
 	escstr = PyBytes_FromStringAndSize(NULL, esclen);
-	if (!escstr)
+	if (!escstr) {
 		return NULL;
+	}
 	encodejsonescape(PyBytes_AS_STRING(escstr), esclen, origbuf, origlen,
 	                 paranoid);
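
Aside: isasciistr() above tests four bytes per iteration when the buffer
is 4-byte aligned: a single AND against 0x80808080 checks the high bit of
all four bytes at once. A Python 3 sketch of the same two-phase check
(struct stands in for the aligned uint32 loads; illustrative only)::

   import struct

   def isasciistr(buf):
       nwords = len(buf) // 4
       # fast path: one test per 32-bit word, as in the aligned C loop
       for (word,) in struct.iter_unpack('<I', buf[:nwords * 4]):
           if word & 0x80808080:
               return False
       # tail: byte at a time, as in the trailing C loop
       return all(b < 0x80 for b in bytearray(buf[nwords * 4:]))

   assert isasciistr(b'plain ascii') and not isasciistr(b'caf\xc3\xa9')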
 
--- a/mercurial/cext/mpatch.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/cext/mpatch.c	Wed Apr 17 13:41:18 2019 -0400
@@ -55,13 +55,16 @@
 	int r;
 
 	PyObject *tmp = PyList_GetItem((PyObject *)bins, pos);
-	if (!tmp)
+	if (!tmp) {
 		return NULL;
-	if (PyObject_GetBuffer(tmp, &buffer, PyBUF_CONTIG_RO))
+	}
+	if (PyObject_GetBuffer(tmp, &buffer, PyBUF_CONTIG_RO)) {
 		return NULL;
+	}
 	if ((r = mpatch_decode(buffer.buf, buffer.len, &res)) < 0) {
-		if (!PyErr_Occurred())
+		if (!PyErr_Occurred()) {
 			setpyerr(r);
+		}
 		res = NULL;
 	}
 
@@ -78,8 +81,9 @@
 	char *out;
 	Py_ssize_t len, outlen;
 
-	if (!PyArg_ParseTuple(args, "OO:mpatch", &text, &bins))
+	if (!PyArg_ParseTuple(args, "OO:mpatch", &text, &bins)) {
 		return NULL;
+	}
 
 	len = PyList_Size(bins);
 	if (!len) {
@@ -94,8 +98,9 @@
 
 	patch = mpatch_fold(bins, cpygetitem, 0, len);
 	if (!patch) { /* error already set or memory error */
-		if (!PyErr_Occurred())
+		if (!PyErr_Occurred()) {
 			PyErr_NoMemory();
+		}
 		result = NULL;
 		goto cleanup;
 	}
@@ -126,8 +131,9 @@
 cleanup:
 	mpatch_lfree(patch);
 	PyBuffer_Release(&buffer);
-	if (!result && !PyErr_Occurred())
+	if (!result && !PyErr_Occurred()) {
 		setpyerr(r);
+	}
 	return result;
 }
 
@@ -138,15 +144,18 @@
 	Py_ssize_t patchlen;
 	char *bin;
 
-	if (!PyArg_ParseTuple(args, PY23("ls#", "ly#"), &orig, &bin, &patchlen))
+	if (!PyArg_ParseTuple(args, PY23("ls#", "ly#"), &orig, &bin,
+	                      &patchlen)) {
 		return NULL;
+	}
 
 	while (pos >= 0 && pos < patchlen) {
 		start = getbe32(bin + pos);
 		end = getbe32(bin + pos + 4);
 		len = getbe32(bin + pos + 8);
-		if (start > end)
+		if (start > end) {
 			break; /* sanity check */
+		}
 		pos += 12 + len;
 		outlen += start - last;
 		last = end;
@@ -154,9 +163,10 @@
 	}
 
 	if (pos != patchlen) {
-		if (!PyErr_Occurred())
+		if (!PyErr_Occurred()) {
 			PyErr_SetString(mpatch_Error,
 			                "patch cannot be decoded");
+		}
 		return NULL;
 	}
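
Aside: patchedsize() above walks the binary delta format directly. Each
record is a 12-byte big-endian header (start, end, data length) followed
by that many bytes of replacement data. A hedged Python restatement,
assuming a well-formed delta::

   import struct

   def patchedsize(orig, delta):
       pos = last = outlen = 0
       while pos < len(delta):
           start, end, datalen = struct.unpack('>III', delta[pos:pos + 12])
           if start > end:
               break                    # same sanity check as the C loop
           pos += 12 + datalen
           outlen += (start - last) + datalen
           last = end
       # whatever follows the last replaced range survives unchanged
       return outlen + (orig - last)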
 
--- a/mercurial/cext/osutil.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/cext/osutil.c	Wed Apr 17 13:41:18 2019 -0400
@@ -8,6 +8,7 @@
 */
 
 #define _ATFILE_SOURCE
+#define PY_SSIZE_T_CLEAN
 #include <Python.h>
 #include <errno.h>
 #include <fcntl.h>
@@ -227,7 +228,7 @@
 		kind, py_st);
 }
 
-static PyObject *_listdir(char *path, int plen, int wantstat, char *skip)
+static PyObject *_listdir(char *path, Py_ssize_t plen, int wantstat, char *skip)
 {
 	PyObject *rval = NULL; /* initialize - return value */
 	PyObject *list;
@@ -1181,7 +1182,8 @@
 	PyObject *statobj = NULL; /* initialize - optional arg */
 	PyObject *skipobj = NULL; /* initialize - optional arg */
 	char *path, *skip = NULL;
-	int wantstat, plen;
+	Py_ssize_t plen;
+	int wantstat;
 
 	static char *kwlist[] = {"path", "stat", "skip", NULL};
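
Aside: this is the first of several files in the merge that define
PY_SSIZE_T_CLEAN. With that macro set before including Python.h, every
'#' length in PyArg_ParseTuple() is written through a Py_ssize_t* rather
than an int*, which is why plen changes type in the same hunk. The
difference only bites past C int range, as ctypes can illustrate::

   import ctypes

   length = 2 ** 31                       # one past INT_MAX
   print(ctypes.c_int(length).value)      # -2147483648 (wrapped)
   print(ctypes.c_ssize_t(length).value)  # 2147483648 on 64-bit builds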
 
--- a/mercurial/cext/parsers.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/cext/parsers.c	Wed Apr 17 13:41:18 2019 -0400
@@ -7,6 +7,7 @@
  the GNU General Public License, incorporated herein by reference.
 */
 
+#define PY_SSIZE_T_CLEAN
 #include <Python.h>
 #include <ctype.h>
 #include <stddef.h>
@@ -32,8 +33,9 @@
 {
 	Py_ssize_t expected_size;
 
-	if (!PyArg_ParseTuple(args, "n:make_presized_dict", &expected_size))
+	if (!PyArg_ParseTuple(args, "n:make_presized_dict", &expected_size)) {
 		return NULL;
+	}
 
 	return _dict_new_presized(expected_size);
 }
@@ -43,8 +45,9 @@
 {
 	dirstateTupleObject *t =
 	    PyObject_New(dirstateTupleObject, &dirstateTupleType);
-	if (!t)
+	if (!t) {
 		return NULL;
+	}
 	t->state = state;
 	t->mode = mode;
 	t->size = size;
@@ -60,12 +63,14 @@
 	dirstateTupleObject *t;
 	char state;
 	int size, mode, mtime;
-	if (!PyArg_ParseTuple(args, "ciii", &state, &mode, &size, &mtime))
+	if (!PyArg_ParseTuple(args, "ciii", &state, &mode, &size, &mtime)) {
 		return NULL;
+	}
 
 	t = (dirstateTupleObject *)subtype->tp_alloc(subtype, 1);
-	if (!t)
+	if (!t) {
 		return NULL;
+	}
 	t->state = state;
 	t->mode = mode;
 	t->size = size;
@@ -160,13 +165,15 @@
 	PyObject *fname = NULL, *cname = NULL, *entry = NULL;
 	char state, *cur, *str, *cpos;
 	int mode, size, mtime;
-	unsigned int flen, len, pos = 40;
-	int readlen;
+	unsigned int flen, pos = 40;
+	Py_ssize_t len = 40;
+	Py_ssize_t readlen;
 
 	if (!PyArg_ParseTuple(
 	        args, PY23("O!O!s#:parse_dirstate", "O!O!y#:parse_dirstate"),
-	        &PyDict_Type, &dmap, &PyDict_Type, &cmap, &str, &readlen))
+	        &PyDict_Type, &dmap, &PyDict_Type, &cmap, &str, &readlen)) {
 		goto quit;
+	}
 
 	len = readlen;
 
@@ -177,9 +184,11 @@
 		goto quit;
 	}
 
-	parents = Py_BuildValue(PY23("s#s#", "y#y#"), str, 20, str + 20, 20);
-	if (!parents)
+	parents = Py_BuildValue(PY23("s#s#", "y#y#"), str, (Py_ssize_t)20,
+	                        str + 20, (Py_ssize_t)20);
+	if (!parents) {
 		goto quit;
+	}
 
 	/* read filenames */
 	while (pos >= 40 && pos < len) {
@@ -212,13 +221,16 @@
 			    cpos + 1, flen - (cpos - cur) - 1);
 			if (!fname || !cname ||
 			    PyDict_SetItem(cmap, fname, cname) == -1 ||
-			    PyDict_SetItem(dmap, fname, entry) == -1)
+			    PyDict_SetItem(dmap, fname, entry) == -1) {
 				goto quit;
+			}
 			Py_DECREF(cname);
 		} else {
 			fname = PyBytes_FromStringAndSize(cur, flen);
-			if (!fname || PyDict_SetItem(dmap, fname, entry) == -1)
+			if (!fname ||
+			    PyDict_SetItem(dmap, fname, entry) == -1) {
 				goto quit;
+			}
 		}
 		Py_DECREF(fname);
 		Py_DECREF(entry);
@@ -245,16 +257,20 @@
 	PyObject *nonnset = NULL, *otherpset = NULL, *result = NULL;
 	Py_ssize_t pos;
 
-	if (!PyArg_ParseTuple(args, "O!:nonnormalentries", &PyDict_Type, &dmap))
+	if (!PyArg_ParseTuple(args, "O!:nonnormalentries", &PyDict_Type,
+	                      &dmap)) {
 		goto bail;
+	}
 
 	nonnset = PySet_New(NULL);
-	if (nonnset == NULL)
+	if (nonnset == NULL) {
 		goto bail;
+	}
 
 	otherpset = PySet_New(NULL);
-	if (otherpset == NULL)
+	if (otherpset == NULL) {
 		goto bail;
+	}
 
 	pos = 0;
 	while (PyDict_Next(dmap, &pos, &fname, &v)) {
@@ -272,15 +288,18 @@
 			}
 		}
 
-		if (t->state == 'n' && t->mtime != -1)
+		if (t->state == 'n' && t->mtime != -1) {
 			continue;
-		if (PySet_Add(nonnset, fname) == -1)
+		}
+		if (PySet_Add(nonnset, fname) == -1) {
 			goto bail;
+		}
 	}
 
 	result = Py_BuildValue("(OO)", nonnset, otherpset);
-	if (result == NULL)
+	if (result == NULL) {
 		goto bail;
+	}
 	Py_DECREF(nonnset);
 	Py_DECREF(otherpset);
 	return result;
@@ -304,8 +323,10 @@
 	int now;
 
 	if (!PyArg_ParseTuple(args, "O!O!O!i:pack_dirstate", &PyDict_Type, &map,
-	                      &PyDict_Type, &copymap, &PyTuple_Type, &pl, &now))
+	                      &PyDict_Type, &copymap, &PyTuple_Type, &pl,
+	                      &now)) {
 		return NULL;
+	}
 
 	if (PyTuple_Size(pl) != 2) {
 		PyErr_SetString(PyExc_TypeError, "expected 2-element tuple");
@@ -332,8 +353,9 @@
 	}
 
 	packobj = PyBytes_FromStringAndSize(NULL, nbytes);
-	if (packobj == NULL)
+	if (packobj == NULL) {
 		goto bail;
+	}
 
 	p = PyBytes_AS_STRING(packobj);
 
@@ -377,10 +399,12 @@
 			mtime = -1;
 			mtime_unset = (PyObject *)make_dirstate_tuple(
 			    state, mode, size, mtime);
-			if (!mtime_unset)
+			if (!mtime_unset) {
 				goto bail;
-			if (PyDict_SetItem(map, k, mtime_unset) == -1)
+			}
+			if (PyDict_SetItem(map, k, mtime_unset) == -1) {
 				goto bail;
+			}
 			Py_DECREF(mtime_unset);
 			mtime_unset = NULL;
 		}
@@ -564,8 +588,7 @@
 static PyObject *fm1readmarkers(PyObject *self, PyObject *args)
 {
 	const char *data, *dataend;
-	int datalen;
-	Py_ssize_t offset, stop;
+	Py_ssize_t datalen, offset, stop;
 	PyObject *markers = NULL;
 
 	if (!PyArg_ParseTuple(args, PY23("s#nn", "y#nn"), &data, &datalen,
@@ -664,8 +687,9 @@
 	manifest_module_init(mod);
 	revlog_module_init(mod);
 
-	if (PyType_Ready(&dirstateTupleType) < 0)
+	if (PyType_Ready(&dirstateTupleType) < 0) {
 		return;
+	}
 	Py_INCREF(&dirstateTupleType);
 	PyModule_AddObject(mod, "dirstatetuple",
 	                   (PyObject *)&dirstateTupleType);
@@ -675,12 +699,14 @@
 {
 	PyObject *sys = PyImport_ImportModule("sys"), *ver;
 	long hexversion;
-	if (!sys)
+	if (!sys) {
 		return -1;
+	}
 	ver = PyObject_GetAttrString(sys, "hexversion");
 	Py_DECREF(sys);
-	if (!ver)
+	if (!ver) {
 		return -1;
+	}
 	hexversion = PyInt_AsLong(ver);
 	Py_DECREF(ver);
 	/* sys.hexversion is a 32-bit number by default, so the -1 case
@@ -720,8 +746,9 @@
 {
 	PyObject *mod;
 
-	if (check_python_version() == -1)
+	if (check_python_version() == -1) {
 		return;
+	}
 	mod = Py_InitModule3("parsers", methods, parsers_doc);
 	module_init(mod);
 }
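
Aside: for orientation, the record layout parse_dirstate() walks is 40
bytes of parent nodes, then repeated entries of a state byte, three
big-endian 32-bit ints (mode, size, mtime), a 32-bit filename length, and
the filename itself, optionally containing a NUL that separates a copy
source. A Python sketch assumed to mirror the pure-Python fallback::

   import struct

   def parse_dirstate(dmap, copymap, st):
       parents = st[:20], st[20:40]
       e_size = struct.calcsize('>cllll')   # state, mode, size, mtime, flen
       pos = 40
       while pos < len(st):
           state, mode, size, mtime, flen = struct.unpack(
               '>cllll', st[pos:pos + e_size])
           pos += e_size
           f = st[pos:pos + flen]
           pos += flen
           if b'\0' in f:
               f, c = f.split(b'\0')
               copymap[f] = c               # "filename\0copysource"
           dmap[f] = (state, mode, size, mtime)
       return parents
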
--- a/mercurial/cext/pathencode.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/cext/pathencode.c	Wed Apr 17 13:41:18 2019 -0400
@@ -126,8 +126,9 @@
 			if (src[i] == 'g') {
 				state = DHGDI;
 				charcopy(dest, &destlen, destsize, src[i++]);
-			} else
+			} else {
 				state = DDEFAULT;
+			}
 			break;
 		case DHGDI:
 			if (src[i] == '/') {
@@ -137,8 +138,9 @@
 			state = DDEFAULT;
 			break;
 		case DDEFAULT:
-			if (src[i] == '.')
+			if (src[i] == '.') {
 				state = DDOT;
+			}
 			charcopy(dest, &destlen, destsize, src[i++]);
 			break;
 		}
@@ -153,8 +155,9 @@
 	PyObject *pathobj, *newobj;
 	char *path;
 
-	if (!PyArg_ParseTuple(args, "O:encodedir", &pathobj))
+	if (!PyArg_ParseTuple(args, "O:encodedir", &pathobj)) {
 		return NULL;
+	}
 
 	if (PyBytes_AsStringAndSize(pathobj, &path, &len) == -1) {
 		PyErr_SetString(PyExc_TypeError, "expected a string");
@@ -235,15 +238,17 @@
 			if (src[i] == 'u') {
 				state = AU;
 				charcopy(dest, &destlen, destsize, src[i++]);
-			} else
+			} else {
 				state = DEFAULT;
+			}
 			break;
 		case AU:
 			if (src[i] == 'x') {
 				state = THIRD;
 				i++;
-			} else
+			} else {
 				state = DEFAULT;
+			}
 			break;
 		case THIRD:
 			state = DEFAULT;
@@ -262,8 +267,9 @@
 			if (src[i] == 'o') {
 				state = CO;
 				charcopy(dest, &destlen, destsize, src[i++]);
-			} else
+			} else {
 				state = DEFAULT;
+			}
 			break;
 		case CO:
 			if (src[i] == 'm') {
@@ -272,8 +278,9 @@
 			} else if (src[i] == 'n') {
 				state = THIRD;
 				i++;
-			} else
+			} else {
 				state = DEFAULT;
+			}
 			break;
 		case COMLPT:
 			switch (src[i]) {
@@ -314,43 +321,49 @@
 			if (src[i] == 'p') {
 				state = LP;
 				charcopy(dest, &destlen, destsize, src[i++]);
-			} else
+			} else {
 				state = DEFAULT;
+			}
 			break;
 		case LP:
 			if (src[i] == 't') {
 				state = COMLPT;
 				i++;
-			} else
+			} else {
 				state = DEFAULT;
+			}
 			break;
 		case N:
 			if (src[i] == 'u') {
 				state = NU;
 				charcopy(dest, &destlen, destsize, src[i++]);
-			} else
+			} else {
 				state = DEFAULT;
+			}
 			break;
 		case NU:
 			if (src[i] == 'l') {
 				state = THIRD;
 				i++;
-			} else
+			} else {
 				state = DEFAULT;
+			}
 			break;
 		case P:
 			if (src[i] == 'r') {
 				state = PR;
 				charcopy(dest, &destlen, destsize, src[i++]);
-			} else
+			} else {
 				state = DEFAULT;
+			}
 			break;
 		case PR:
 			if (src[i] == 'n') {
 				state = THIRD;
 				i++;
-			} else
+			} else {
 				state = DEFAULT;
+			}
 			break;
 		case LDOT:
 			switch (src[i]) {
@@ -397,18 +410,21 @@
 			if (src[i] == 'g') {
 				state = HGDI;
 				charcopy(dest, &destlen, destsize, src[i++]);
-			} else
+			} else {
 				state = DEFAULT;
+			}
 			break;
 		case HGDI:
 			if (src[i] == '/') {
 				state = START;
-				if (encodedir)
+				if (encodedir) {
 					memcopy(dest, &destlen, destsize, ".hg",
 					        3);
+				}
 				charcopy(dest, &destlen, destsize, src[i++]);
-			} else
+			} else {
 				state = DEFAULT;
+			}
 			break;
 		case SPACE:
 			switch (src[i]) {
@@ -427,8 +443,9 @@
 		case DEFAULT:
 			while (inset(onebyte, src[i])) {
 				charcopy(dest, &destlen, destsize, src[i++]);
-				if (i == len)
+				if (i == len) {
 					goto done;
+				}
 			}
 			switch (src[i]) {
 			case '.':
@@ -456,9 +473,10 @@
 					charcopy(dest, &destlen, destsize, '_');
 					charcopy(dest, &destlen, destsize,
 					         c == '_' ? '_' : c + 32);
-				} else
+				} else {
 					escape3(dest, &destlen, destsize,
 					        src[i++]);
+				}
 				break;
 			}
 			break;
@@ -498,12 +516,13 @@
 	Py_ssize_t i, destlen = 0;
 
 	for (i = 0; i < len; i++) {
-		if (inset(onebyte, src[i]))
+		if (inset(onebyte, src[i])) {
 			charcopy(dest, &destlen, destsize, src[i]);
-		else if (inset(lower, src[i]))
+		} else if (inset(lower, src[i])) {
 			charcopy(dest, &destlen, destsize, src[i] + 32);
-		else
+		} else {
 			escape3(dest, &destlen, destsize, src[i]);
+		}
 	}
 
 	return destlen;
@@ -516,13 +535,15 @@
 	PyObject *ret;
 
 	if (!PyArg_ParseTuple(args, PY23("s#:lowerencode", "y#:lowerencode"),
-	                      &path, &len))
+	                      &path, &len)) {
 		return NULL;
+	}
 
 	newlen = _lowerencode(NULL, 0, path, len);
 	ret = PyBytes_FromStringAndSize(NULL, newlen);
-	if (ret)
+	if (ret) {
 		_lowerencode(PyBytes_AS_STRING(ret), newlen, path, len);
+	}
 
 	return ret;
 }
@@ -551,8 +572,9 @@
 	Py_ssize_t destsize, destlen = 0, slop, used;
 
 	while (lastslash >= 0 && src[lastslash] != '/') {
-		if (src[lastslash] == '.' && lastdot == -1)
+		if (src[lastslash] == '.' && lastdot == -1) {
 			lastdot = lastslash;
+		}
 		lastslash--;
 	}
 
@@ -570,12 +592,14 @@
 	/* If src contains a suffix, we will append it to the end of
 	   the new string, so make room. */
 	destsize = 120;
-	if (lastdot >= 0)
+	if (lastdot >= 0) {
 		destsize += len - lastdot - 1;
+	}
 
 	ret = PyBytes_FromStringAndSize(NULL, destsize);
-	if (ret == NULL)
+	if (ret == NULL) {
 		return NULL;
+	}
 
 	dest = PyBytes_AS_STRING(ret);
 	memcopy(dest, &destlen, destsize, "dh/", 3);
@@ -587,30 +611,36 @@
 			char d = dest[destlen - 1];
 			/* After truncation, a directory name may end
 			   in a space or dot, which are unportable. */
-			if (d == '.' || d == ' ')
+			if (d == '.' || d == ' ') {
 				dest[destlen - 1] = '_';
-			/* The + 3 is to account for "dh/" in the beginning */
-			if (destlen > maxshortdirslen + 3)
+				/* The + 3 is to account for "dh/" in the
+				 * beginning */
+			}
+			if (destlen > maxshortdirslen + 3) {
 				break;
+			}
 			charcopy(dest, &destlen, destsize, src[i]);
 			p = -1;
-		} else if (p < dirprefixlen)
+		} else if (p < dirprefixlen) {
 			charcopy(dest, &destlen, destsize, src[i]);
+		}
 	}
 
 	/* Rewind to just before the last slash copied. */
-	if (destlen > maxshortdirslen + 3)
+	if (destlen > maxshortdirslen + 3) {
 		do {
 			destlen--;
 		} while (destlen > 0 && dest[destlen] != '/');
+	}
 
 	if (destlen > 3) {
 		if (lastslash > 0) {
 			char d = dest[destlen - 1];
 			/* The last directory component may be
 			   truncated, so make it safe. */
-			if (d == '.' || d == ' ')
+			if (d == '.' || d == ' ') {
 				dest[destlen - 1] = '_';
+			}
 		}
 
 		charcopy(dest, &destlen, destsize, '/');
@@ -620,27 +650,32 @@
 	   depends on the number of bytes left after accounting for
 	   hash and suffix. */
 	used = destlen + 40;
-	if (lastdot >= 0)
+	if (lastdot >= 0) {
 		used += len - lastdot - 1;
+	}
 	slop = maxstorepathlen - used;
 	if (slop > 0) {
 		Py_ssize_t basenamelen =
 		    lastslash >= 0 ? len - lastslash - 2 : len - 1;
 
-		if (basenamelen > slop)
+		if (basenamelen > slop) {
 			basenamelen = slop;
-		if (basenamelen > 0)
+		}
+		if (basenamelen > 0) {
 			memcopy(dest, &destlen, destsize, &src[lastslash + 1],
 			        basenamelen);
+		}
 	}
 
 	/* Add hash and suffix. */
-	for (i = 0; i < 20; i++)
+	for (i = 0; i < 20; i++) {
 		hexencode(dest, &destlen, destsize, sha[i]);
+	}
 
-	if (lastdot >= 0)
+	if (lastdot >= 0) {
 		memcopy(dest, &destlen, destsize, &src[lastdot],
 		        len - lastdot - 1);
+	}
 
 	assert(PyBytes_Check(ret));
 	Py_SIZE(ret) = destlen;
@@ -677,13 +712,15 @@
 
 	shaobj = PyObject_CallFunction(shafunc, PY23("s#", "y#"), str, len);
 
-	if (shaobj == NULL)
+	if (shaobj == NULL) {
 		return -1;
+	}
 
 	hashobj = PyObject_CallMethod(shaobj, "digest", "");
 	Py_DECREF(shaobj);
-	if (hashobj == NULL)
+	if (hashobj == NULL) {
 		return -1;
+	}
 
 	if (!PyBytes_Check(hashobj) || PyBytes_GET_SIZE(hashobj) != 20) {
 		PyErr_SetString(PyExc_TypeError,
@@ -714,8 +751,9 @@
 	}
 
 	dirlen = _encodedir(dired, baselen, src, len);
-	if (sha1hash(sha, dired, dirlen - 1) == -1)
+	if (sha1hash(sha, dired, dirlen - 1) == -1) {
 		return NULL;
+	}
 	lowerlen = _lowerencode(lowered, baselen, dired + 5, dirlen - 5);
 	auxlen = auxencode(auxed, baselen, lowered, lowerlen);
 	return hashmangle(auxed, auxlen, sha);
@@ -727,18 +765,20 @@
 	PyObject *pathobj, *newobj;
 	char *path;
 
-	if (!PyArg_ParseTuple(args, "O:pathencode", &pathobj))
+	if (!PyArg_ParseTuple(args, "O:pathencode", &pathobj)) {
 		return NULL;
+	}
 
 	if (PyBytes_AsStringAndSize(pathobj, &path, &len) == -1) {
 		PyErr_SetString(PyExc_TypeError, "expected a string");
 		return NULL;
 	}
 
-	if (len > maxstorepathlen)
+	if (len > maxstorepathlen) {
 		newlen = maxstorepathlen + 2;
-	else
+	} else {
 		newlen = len ? basicencode(NULL, 0, path, len + 1) : 1;
+	}
 
 	if (newlen <= maxstorepathlen + 1) {
 		if (newlen == len + 1) {
@@ -754,8 +794,9 @@
 			basicencode(PyBytes_AS_STRING(newobj), newlen, path,
 			            len + 1);
 		}
-	} else
+	} else {
 		newobj = hashencode(path, len + 1);
+	}
 
 	return newobj;
 }
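
Aside: much of the auxencode() state machine above exists to dodge
Windows-reserved file names: con, prn, aux, nul and com1-com9/lpt1-lpt9
(with or without an extension) cannot be created on Windows, so the
encoder escapes their third character (e.g. 'aux' becomes 'au~78'; a
hedged reading of the code, not stated in this diff). The predicate the
states spell out::

   import re

   _winreserved = re.compile(br'^(con|prn|aux|nul|(com|lpt)[1-9])(\..*)?$')

   def isreserved(component):
       """True if a lowercased path component is reserved on Windows."""
       return _winreserved.match(component) is not None

   assert isreserved(b'aux') and isreserved(b'com1.txt')
   assert not isreserved(b'com0') and not isreserved(b'auxx')
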
--- a/mercurial/cext/revlog.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/cext/revlog.c	Wed Apr 17 13:41:18 2019 -0400
@@ -7,6 +7,7 @@
  the GNU General Public License, incorporated herein by reference.
 */
 
+#define PY_SSIZE_T_CLEAN
 #include <Python.h>
 #include <assert.h>
 #include <ctype.h>
@@ -365,7 +366,7 @@
 
 	entry = Py_BuildValue(tuple_format, offset_flags, comp_len, uncomp_len,
 	                      base_rev, link_rev, parent_1, parent_2, c_node_id,
-	                      20);
+	                      (Py_ssize_t)20);
 
 	if (entry) {
 		PyObject_GC_UnTrack(entry);
@@ -1947,7 +1948,7 @@
 static PyObject *index_partialmatch(indexObject *self, PyObject *args)
 {
 	const char *fullnode;
-	int nodelen;
+	Py_ssize_t nodelen;
 	char *node;
 	int rev, i;
 
@@ -3016,8 +3017,9 @@
 	PyModule_AddObject(mod, "nodetree", (PyObject *)&nodetreeType);
 
 	if (!nullentry) {
-		nullentry = Py_BuildValue(PY23("iiiiiiis#", "iiiiiiiy#"), 0, 0,
-		                          0, -1, -1, -1, -1, nullid, 20);
+		nullentry =
+		    Py_BuildValue(PY23("iiiiiiis#", "iiiiiiiy#"), 0, 0, 0, -1,
+		                  -1, -1, -1, nullid, (Py_ssize_t)20);
 	}
 	if (nullentry)
 		PyObject_GC_UnTrack(nullentry);
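
Aside: the explicit (Py_ssize_t)20 casts above are the flip side of
PY_SSIZE_T_CLEAN: Py_BuildValue() is variadic, so the length paired with
s#/y# must really be a Py_ssize_t at the call site; an int argument would
be read at the wrong width on platforms where the types differ. The tuple
built for the null revision has this shape (hedged Python rendering)::

   nullid = b'\0' * 20
   # (offset_flags, comp_len, uncomp_len, base_rev, link_rev,
   #  parent_1, parent_2, node_id)
   nullentry = (0, 0, 0, -1, -1, -1, -1, nullid)
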
--- a/mercurial/changegroup.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/changegroup.py	Wed Apr 17 13:41:18 2019 -0400
@@ -275,7 +275,7 @@
             # because we need to use the top level value (if it exists)
             # in this function.
             srctype = tr.hookargs.setdefault('source', srctype)
-            url = tr.hookargs.setdefault('url', url)
+            tr.hookargs.setdefault('url', url)
             repo.hook('prechangegroup',
                       throw=True, **pycompat.strkwargs(tr.hookargs))
 
@@ -817,13 +817,13 @@
         self._verbosenote(_('uncompressed size of bundle content:\n'))
         size = 0
 
-        clstate, deltas = self._generatechangelog(cl, clnodes)
+        clstate, deltas = self._generatechangelog(cl, clnodes,
+                                                  generate=changelog)
         for delta in deltas:
-            if changelog:
-                for chunk in _revisiondeltatochunks(delta,
-                                                    self._builddeltaheader):
-                    size += len(chunk)
-                    yield chunk
+            for chunk in _revisiondeltatochunks(delta,
+                                                self._builddeltaheader):
+                size += len(chunk)
+                yield chunk
 
         close = closechunk()
         size += len(close)
@@ -917,12 +917,15 @@
         if clnodes:
             repo.hook('outgoing', node=hex(clnodes[0]), source=source)
 
-    def _generatechangelog(self, cl, nodes):
+    def _generatechangelog(self, cl, nodes, generate=True):
         """Generate data for changelog chunks.
 
         Returns a 2-tuple of a dict containing state and an iterable of
         byte chunks. The state will not be fully populated until the
         chunk stream has been fully consumed.
+
+        If generate is False, the state will be fully populated and no
+        chunk stream will be yielded.
         """
         clrevorder = {}
         manifests = {}
@@ -930,6 +933,27 @@
         changedfiles = set()
         clrevtomanifestrev = {}
 
+        state = {
+            'clrevorder': clrevorder,
+            'manifests': manifests,
+            'changedfiles': changedfiles,
+            'clrevtomanifestrev': clrevtomanifestrev,
+        }
+
+        if not (generate or self._ellipses):
+            # sort the nodes in storage order
+            nodes = sorted(nodes, key=cl.rev)
+            for node in nodes:
+                c = cl.changelogrevision(node)
+                clrevorder[node] = len(clrevorder)
+                # record the first changeset introducing this manifest version
+                manifests.setdefault(c.manifest, node)
+                # Record a complete list of potentially-changed files in
+                # this manifest.
+                changedfiles.update(c.files)
+
+            return state, ()
+
         # Callback for the changelog, used to collect changed files and
         # manifest nodes.
         # Returns the linkrev node (identity in the changelog case).
@@ -970,13 +994,6 @@
 
             return x
 
-        state = {
-            'clrevorder': clrevorder,
-            'manifests': manifests,
-            'changedfiles': changedfiles,
-            'clrevtomanifestrev': clrevtomanifestrev,
-        }
-
         gen = deltagroup(
             self._repo, cl, nodes, True, lookupcl,
             self._forcedeltaparentprev,
@@ -1088,6 +1105,11 @@
                     yield tree, []
 
     def _prunemanifests(self, store, nodes, commonrevs):
+        if not self._ellipses:
+            # In the non-ellipses case, and for large repositories, it is
+            # better to avoid calling store.rev and store.linkrev on a lot
+            # of nodes than to send some extra data.
+            return nodes.copy()
         # This is split out as a separate method to allow filtering
         # commonrevs in extension code.
         #
@@ -1296,9 +1318,9 @@
     assert version in supportedoutgoingversions(repo)
 
     if matcher is None:
-        matcher = matchmod.alwaysmatcher(repo.root, '')
+        matcher = matchmod.always()
     if oldmatcher is None:
-        oldmatcher = matchmod.nevermatcher(repo.root, '')
+        oldmatcher = matchmod.never()
 
     if version == '01' and not matcher.always():
         raise error.ProgrammingError('version 01 changegroups do not support '
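
Aside: when the changelog is excluded from a bundle, the old code still
generated every changelog delta only to drop the chunks. The new
generate=False path gathers the same bookkeeping by reading revisions
directly. A self-contained sketch of that loop (the _Rev stand-in is
illustrative, not from this diff)::

   class _Rev(object):
       def __init__(self, manifest, files):
           self.manifest, self.files = manifest, files

   def collect(revisions):
       clrevorder, manifests, changedfiles = {}, {}, set()
       for node, rev in revisions:
           clrevorder[node] = len(clrevorder)
           manifests.setdefault(rev.manifest, node)  # first intro wins
           changedfiles.update(rev.files)
       return clrevorder, manifests, changedfiles

   order, manifests, files = collect([
       (b'n1', _Rev(b'm1', [b'a'])),
       (b'n2', _Rev(b'm1', [b'b'])),   # m1 already seen: n1 keeps credit
   ])
   assert manifests == {b'm1': b'n1'} and files == {b'a', b'b'}
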
--- a/mercurial/changelog.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/changelog.py	Wed Apr 17 13:41:18 2019 -0400
@@ -22,6 +22,7 @@
     error,
     pycompat,
     revlog,
+    util,
 )
 from .utils import (
     dateutil,
@@ -34,17 +35,25 @@
     """
     >>> from .pycompat import bytechr as chr
     >>> d = {b'nl': chr(10), b'bs': chr(92), b'cr': chr(13), b'nul': chr(0)}
-    >>> s = b"ab%(nl)scd%(bs)s%(bs)sn%(nul)sab%(cr)scd%(bs)s%(nl)s" % d
+    >>> s = b"ab%(nl)scd%(bs)s%(bs)sn%(nul)s12ab%(cr)scd%(bs)s%(nl)s" % d
     >>> s
-    'ab\\ncd\\\\\\\\n\\x00ab\\rcd\\\\\\n'
+    'ab\\ncd\\\\\\\\n\\x0012ab\\rcd\\\\\\n'
     >>> res = _string_escape(s)
-    >>> s == stringutil.unescapestr(res)
+    >>> s == _string_unescape(res)
     True
     """
     # subset of the string_escape codec
     text = text.replace('\\', '\\\\').replace('\n', '\\n').replace('\r', '\\r')
     return text.replace('\0', '\\0')
 
+def _string_unescape(text):
+    if '\\0' in text:
+        # fix up \0 without getting into trouble with \\0
+        text = text.replace('\\\\', '\\\\\n')
+        text = text.replace('\\0', '\0')
+        text = text.replace('\n', '')
+    return stringutil.unescapestr(text)
+
 def decodeextra(text):
     """
     >>> from .pycompat import bytechr as chr
@@ -59,20 +68,37 @@
     extra = _defaultextra.copy()
     for l in text.split('\0'):
         if l:
-            if '\\0' in l:
-                # fix up \0 without getting into trouble with \\0
-                l = l.replace('\\\\', '\\\\\n')
-                l = l.replace('\\0', '\0')
-                l = l.replace('\n', '')
-            k, v = stringutil.unescapestr(l).split(':', 1)
+            k, v = _string_unescape(l).split(':', 1)
             extra[k] = v
     return extra
 
 def encodeextra(d):
     # keys must be sorted to produce a deterministic changelog entry
-    items = [_string_escape('%s:%s' % (k, d[k])) for k in sorted(d)]
+    items = [
+        _string_escape('%s:%s' % (k, pycompat.bytestr(d[k])))
+        for k in sorted(d)
+    ]
     return "\0".join(items)
 
+def encodecopies(copies):
+    items = [
+        '%s\0%s' % (k, copies[k])
+        for k in sorted(copies)
+    ]
+    return "\n".join(items)
+
+def decodecopies(data):
+    try:
+        copies = {}
+        for l in data.split('\n'):
+            k, v = l.split('\0')
+            copies[k] = v
+        return copies
+    except ValueError:
+        # Perhaps someone had chosen the same key name (e.g. "p1copies") and
+        # used different syntax for the value.
+        return None
+
 def stripdesc(desc):
     """strip trailing whitespace and leading and trailing empty lines"""
     return '\n'.join([l.rstrip() for l in desc.splitlines()]).strip('\n')
@@ -179,8 +205,8 @@
     """
 
     __slots__ = (
-        u'_offsets',
-        u'_text',
+        r'_offsets',
+        r'_text',
     )
 
     def __new__(cls, text):
@@ -272,6 +298,16 @@
         return self._text[off[2] + 1:off[3]].split('\n')
 
     @property
+    def p1copies(self):
+        rawcopies = self.extra.get('p1copies')
+        return rawcopies and decodecopies(rawcopies)
+
+    @property
+    def p2copies(self):
+        rawcopies = self.extra.get('p2copies')
+        return rawcopies and decodecopies(rawcopies)
+
+    @property
     def description(self):
         return encoding.tolocal(self._text[self._offsets[3] + 2:])
 
@@ -347,6 +383,27 @@
     def reachableroots(self, minroot, heads, roots, includepath=False):
         return self.index.reachableroots2(minroot, heads, roots, includepath)
 
+    def _checknofilteredinrevs(self, revs):
+        """raise the appropriate error if 'revs' contains a filtered revision
+
+        This returns a version of 'revs' to be used thereafter by the caller.
+        In particular, if revs is an iterator, it is converted into a set.
+        """
+        safehasattr = util.safehasattr
+        if safehasattr(revs, '__next__'):
+            # Note that inspect.isgenerator() is not true for iterators,
+            revs = set(revs)
+
+        filteredrevs = self.filteredrevs
+        if safehasattr(revs, 'first'):  # smartset
+            offenders = revs & filteredrevs
+        else:
+            offenders = filteredrevs.intersection(revs)
+
+        for rev in offenders:
+            raise error.FilteredIndexError(rev)
+        return revs
+
     def headrevs(self, revs=None):
         if revs is None and self.filteredrevs:
             try:
@@ -356,6 +413,8 @@
             except AttributeError:
                 return self._headrevs()
 
+        if self.filteredrevs:
+            revs = self._checknofilteredinrevs(revs)
         return super(changelog, self).headrevs(revs)
 
     def strip(self, *args, **kwargs):
@@ -503,7 +562,7 @@
         return l[3:]
 
     def add(self, manifest, files, desc, transaction, p1, p2,
-                  user, date=None, extra=None):
+                  user, date=None, extra=None, p1copies=None, p2copies=None):
         # Convert to UTF-8 encoded bytestrings as the very first
         # thing: calling any method on a localstr object will turn it
         # into a str object and the cached UTF-8 string is thus lost.
@@ -532,6 +591,13 @@
             elif branch in (".", "null", "tip"):
                 raise error.StorageError(_('the name \'%s\' is reserved')
                                          % branch)
+        if (p1copies or p2copies) and extra is None:
+            extra = {}
+        if p1copies:
+            extra['p1copies'] = encodecopies(p1copies)
+        if p2copies:
+            extra['p2copies'] = encodecopies(p2copies)
+
         if extra:
             extra = encodeextra(extra)
             parseddate = "%s %s" % (parseddate, extra)
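
Aside: encodecopies()/decodecopies() above store copy information in the
changeset extras as newline-separated "dest\0source" pairs under the
p1copies/p2copies keys. The round trip in plain Python::

   copies = {b'b.txt': b'a.txt', b'd/e.txt': b'd/f.txt'}
   encoded = b'\n'.join(
       b'%s\0%s' % (k, copies[k]) for k in sorted(copies))
   decoded = dict(l.split(b'\0') for l in encoded.split(b'\n'))
   assert decoded == copies
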
--- a/mercurial/chgserver.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/chgserver.py	Wed Apr 17 13:41:18 2019 -0400
@@ -64,11 +64,12 @@
 
 from .utils import (
     procutil,
+    stringutil,
 )
 
 def _hashlist(items):
     """return sha1 hexdigest for a list"""
-    return node.hex(hashlib.sha1(str(items)).digest())
+    return node.hex(hashlib.sha1(stringutil.pprint(items)).digest())
 
 # sensitive config sections affecting confighash
 _configsections = [
@@ -83,7 +84,7 @@
 ]
 
 # sensitive environment variables affecting confighash
-_envre = re.compile(r'''\A(?:
+_envre = re.compile(br'''\A(?:
                     CHGHG
                     |HG(?:DEMANDIMPORT|EMITWARNINGS|MODULEPOLICY|PROF|RCPATH)?
                     |HG(?:ENCODING|PLAIN).*
@@ -140,7 +141,7 @@
     files = [pycompat.sysexecutable]
     for m in modules:
         try:
-            files.append(inspect.getabsfile(m))
+            files.append(pycompat.fsencode(inspect.getabsfile(m)))
         except TypeError:
             pass
     return sorted(set(files))
@@ -449,7 +450,7 @@
         if newhash.confighash != self.hashstate.confighash:
             addr = _hashaddress(self.baseaddress, newhash.confighash)
             insts.append('redirect %s' % addr)
-        self.ui.log('chgserver', 'validate: %s\n', insts)
+        self.ui.log('chgserver', 'validate: %s\n', stringutil.pprint(insts))
         self.cresult.write('\0'.join(insts) or '\0')
 
     def chdir(self):
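
Aside: _hashlist() switches from str() to stringutil.pprint() because
hashlib wants bytes; on Python 3, str() of a list is text and sha1()
rejects it. Illustration::

   import hashlib

   try:
       hashlib.sha1(str([b'x']))                # TypeError on Python 3
   except TypeError:
       pass
   hashlib.sha1(repr([b'x']).encode('ascii'))   # a bytes repr hashes fine
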
--- a/mercurial/cmdutil.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/cmdutil.py	Wed Apr 17 13:41:18 2019 -0400
@@ -180,8 +180,8 @@
 def newandmodified(chunks, originalchunks):
     newlyaddedandmodifiedfiles = set()
     for chunk in chunks:
-        if ishunk(chunk) and chunk.header.isnewfile() and chunk not in \
-            originalchunks:
+        if (ishunk(chunk) and chunk.header.isnewfile() and chunk not in
+            originalchunks):
             newlyaddedandmodifiedfiles.add(chunk.header.filename())
     return newlyaddedandmodifiedfiles
 
@@ -201,7 +201,8 @@
     setattr(ui, 'write', wrap)
     return oldwrite
 
-def filterchunks(ui, originalhunks, usecurses, testfile, operation=None):
+def filterchunks(ui, originalhunks, usecurses, testfile, match,
+                 operation=None):
     try:
         if usecurses:
             if testfile:
@@ -216,9 +217,9 @@
         ui.warn('%s\n' % e.message)
         ui.warn(_('falling back to text mode\n'))
 
-    return patch.filterpatch(ui, originalhunks, operation)
-
-def recordfilter(ui, originalhunks, operation=None):
+    return patch.filterpatch(ui, originalhunks, match, operation)
+
+def recordfilter(ui, originalhunks, match, operation=None):
     """ Prompts the user to filter the originalhunks and return a list of
     selected hunks.
     *operation* is used to build ui messages to indicate to the user what
@@ -230,7 +231,7 @@
     oldwrite = setupwrapcolorwrite(ui)
     try:
         newchunks, newopts = filterchunks(ui, originalhunks, usecurses,
-                                          testfile, operation)
+                                          testfile, match, operation)
     finally:
         ui.write = oldwrite
     return newchunks, newopts
@@ -304,16 +305,19 @@
 
         if not force:
             repo.checkcommitpatterns(wctx, vdirs, match, status, fail)
-        diffopts = patch.difffeatureopts(ui, opts=opts, whitespace=True)
+        diffopts = patch.difffeatureopts(ui, opts=opts, whitespace=True,
+                                         section='commands',
+                                         configprefix='commit.interactive.')
         diffopts.nodates = True
         diffopts.git = True
         diffopts.showfunc = True
         originaldiff = patch.diff(repo, changes=status, opts=diffopts)
         originalchunks = patch.parsepatch(originaldiff)
+        match = scmutil.match(repo[None], pats)
 
         # 1. filter patch, since we are intending to apply subset of it
         try:
-            chunks, newopts = filterfn(ui, originalchunks)
+            chunks, newopts = filterfn(ui, originalchunks, match)
         except error.PatchError as err:
             raise error.Abort(_('error parsing patch: %s') % err)
         opts.update(newopts)
@@ -342,8 +346,8 @@
         if backupall:
             tobackup = changed
         else:
-            tobackup = [f for f in newfiles if f in modified or f in \
-                    newlyaddedandmodifiedfiles]
+            tobackup = [f for f in newfiles if f in modified or f in
+                        newlyaddedandmodifiedfiles]
         backups = {}
         if tobackup:
             backupdir = repo.vfs.join('record-backups')
@@ -456,7 +460,7 @@
 
     def __init__(self, dirpath):
         self.path = dirpath
-        self.statuses = set([])
+        self.statuses = set()
         self.files = []
         self.subdirs = {}
 
@@ -629,11 +633,9 @@
     return _helpmessage('hg unshelve --continue', 'hg unshelve --abort')
 
 def _graftmsg():
-    # tweakdefaults requires `update` to have a rev hence the `.`
     return _helpmessage('hg graft --continue', 'hg graft --abort')
 
 def _mergemsg():
-    # tweakdefaults requires `update` to have a rev hence the `.`
     return _helpmessage('hg commit', 'hg merge --abort')
 
 def _bisectmsg():
@@ -1157,6 +1159,7 @@
     dryrun = opts.get("dry_run")
     wctx = repo[None]
 
+    uipathfn = scmutil.getuipathfn(repo, legacyrelativevalue=True)
     def walkpat(pat):
         srcs = []
         if after:
@@ -1166,7 +1169,7 @@
         m = scmutil.match(wctx, [pat], opts, globbed=True)
         for abs in wctx.walk(m):
             state = repo.dirstate[abs]
-            rel = m.rel(abs)
+            rel = uipathfn(abs)
             exact = m.exact(abs)
             if state in badstates:
                 if exact and state == '?':
@@ -1273,10 +1276,6 @@
                 else:
                     ui.warn(_('%s: cannot copy - %s\n') %
                             (relsrc, encoding.strtolocal(inst.strerror)))
-                    if rename:
-                        hint = _("('hg rename --after' to record the rename)\n")
-                    else:
-                        hint = _("('hg copy --after' to record the copy)\n")
                     return True # report a failure
 
         if ui.verbose or not exact:
@@ -1787,7 +1786,7 @@
     wanted = set()
     copies = []
     minrev, maxrev = min(revs), max(revs)
-    def filerevgen(filelog, last):
+    def filerevs(filelog, last):
         """
         Only files, no patterns.  Check the history of each file.
 
@@ -1850,7 +1849,7 @@
         ancestors = {filelog.linkrev(last)}
 
         # iterate from latest to oldest revision
-        for rev, flparentlinkrevs, copied in filerevgen(filelog, last):
+        for rev, flparentlinkrevs, copied in filerevs(filelog, last):
             if not follow:
                 if rev > maxrev:
                     continue
@@ -1986,7 +1985,10 @@
                 else:
                     self.revs.discard(value)
                     ctx = change(value)
-                    matches = [f for f in ctx.files() if match(f)]
+                    if allfiles:
+                        matches = list(ctx.manifest().walk(match))
+                    else:
+                        matches = [f for f in ctx.files() if match(f)]
                     if matches:
                         fncache[value] = matches
                         self.set.add(value)
@@ -2053,8 +2055,7 @@
 
     return iterate()
 
-def add(ui, repo, match, prefix, explicitonly, **opts):
-    join = lambda f: os.path.join(prefix, f)
+def add(ui, repo, match, prefix, uipathfn, explicitonly, **opts):
     bad = []
 
     badfn = lambda x, y: bad.append(x) or match.bad(x, y)
@@ -2078,20 +2079,24 @@
                 cca(f)
             names.append(f)
             if ui.verbose or not exact:
-                ui.status(_('adding %s\n') % match.rel(f),
+                ui.status(_('adding %s\n') % uipathfn(f),
                           label='ui.addremove.added')
 
     for subpath in sorted(wctx.substate):
         sub = wctx.sub(subpath)
         try:
             submatch = matchmod.subdirmatcher(subpath, match)
+            subprefix = repo.wvfs.reljoin(prefix, subpath)
+            subuipathfn = scmutil.subdiruipathfn(subpath, uipathfn)
             if opts.get(r'subrepos'):
-                bad.extend(sub.add(ui, submatch, prefix, False, **opts))
+                bad.extend(sub.add(ui, submatch, subprefix, subuipathfn, False,
+                                   **opts))
             else:
-                bad.extend(sub.add(ui, submatch, prefix, True, **opts))
+                bad.extend(sub.add(ui, submatch, subprefix, subuipathfn, True,
+                                   **opts))
         except error.LookupError:
             ui.status(_("skipping missing subrepository: %s\n")
-                           % join(subpath))
+                           % uipathfn(subpath))
 
     if not opts.get(r'dry_run'):
         rejected = wctx.add(names, prefix)
@@ -2107,10 +2112,10 @@
         for subpath in ctx.substate:
             ctx.sub(subpath).addwebdirpath(serverpath, webconf)
 
-def forget(ui, repo, match, prefix, explicitonly, dryrun, interactive):
+def forget(ui, repo, match, prefix, uipathfn, explicitonly, dryrun,
+           interactive):
     if dryrun and interactive:
         raise error.Abort(_("cannot specify both --dry-run and --interactive"))
-    join = lambda f: os.path.join(prefix, f)
     bad = []
     badfn = lambda x, y: bad.append(x) or match.bad(x, y)
     wctx = repo[None]
@@ -2123,15 +2128,18 @@
 
     for subpath in sorted(wctx.substate):
         sub = wctx.sub(subpath)
+        submatch = matchmod.subdirmatcher(subpath, match)
+        subprefix = repo.wvfs.reljoin(prefix, subpath)
+        subuipathfn = scmutil.subdiruipathfn(subpath, uipathfn)
         try:
-            submatch = matchmod.subdirmatcher(subpath, match)
-            subbad, subforgot = sub.forget(submatch, prefix, dryrun=dryrun,
+            subbad, subforgot = sub.forget(submatch, subprefix, subuipathfn,
+                                           dryrun=dryrun,
                                            interactive=interactive)
             bad.extend([subpath + '/' + f for f in subbad])
             forgot.extend([subpath + '/' + f for f in subforgot])
         except error.LookupError:
             ui.status(_("skipping missing subrepository: %s\n")
-                           % join(subpath))
+                           % uipathfn(subpath))
 
     if not explicitonly:
         for f in match.files():
@@ -2146,7 +2154,7 @@
                             continue
                         ui.warn(_('not removing %s: '
                                   'file is already untracked\n')
-                                % match.rel(f))
+                                % uipathfn(f))
                     bad.append(f)
 
     if interactive:
@@ -2157,13 +2165,14 @@
                       '$$ Include &all remaining files'
                       '$$ &? (display help)')
         for filename in forget[:]:
-            r = ui.promptchoice(_('forget %s %s') % (filename, responses))
+            r = ui.promptchoice(_('forget %s %s') %
+                                (uipathfn(filename), responses))
             if r == 4: # ?
                 while r == 4:
                     for c, t in ui.extractchoices(responses)[1]:
                         ui.write('%s - %s\n' % (c, encoding.lower(t)))
-                    r = ui.promptchoice(_('forget %s %s') % (filename,
-                                                                 responses))
+                    r = ui.promptchoice(_('forget %s %s') %
+                                        (uipathfn(filename), responses))
             if r == 0: # yes
                 continue
             elif r == 1: # no
@@ -2177,7 +2186,7 @@
 
     for f in forget:
         if ui.verbose or not match.exact(f) or interactive:
-            ui.status(_('removing %s\n') % match.rel(f),
+            ui.status(_('removing %s\n') % uipathfn(f),
                       label='ui.addremove.removed')
 
     if not dryrun:
@@ -2186,7 +2195,7 @@
         forgot.extend(f for f in forget if f not in rejected)
     return bad, forgot
 
-def files(ui, ctx, m, fm, fmt, subrepos):
+def files(ui, ctx, m, uipathfn, fm, fmt, subrepos):
     ret = 1
 
     needsfctx = ui.verbose or {'size', 'flags'} & fm.datahint()
@@ -2197,25 +2206,27 @@
             fc = ctx[f]
             fm.write('size flags', '% 10d % 1s ', fc.size(), fc.flags())
         fm.data(path=f)
-        fm.plain(fmt % m.rel(f))
+        fm.plain(fmt % uipathfn(f))
         ret = 0
 
     for subpath in sorted(ctx.substate):
         submatch = matchmod.subdirmatcher(subpath, m)
+        subuipathfn = scmutil.subdiruipathfn(subpath, uipathfn)
         if (subrepos or m.exact(subpath) or any(submatch.files())):
             sub = ctx.sub(subpath)
             try:
                 recurse = m.exact(subpath) or subrepos
-                if sub.printfiles(ui, submatch, fm, fmt, recurse) == 0:
+                if sub.printfiles(ui, submatch, subuipathfn, fm, fmt,
+                                  recurse) == 0:
                     ret = 0
             except error.LookupError:
                 ui.status(_("skipping missing subrepository: %s\n")
-                               % m.abs(subpath))
+                               % uipathfn(subpath))
 
     return ret
 
-def remove(ui, repo, m, prefix, after, force, subrepos, dryrun, warnings=None):
-    join = lambda f: os.path.join(prefix, f)
+def remove(ui, repo, m, prefix, uipathfn, after, force, subrepos, dryrun,
+           warnings=None):
     ret = 0
     s = repo.status(match=m, clean=True)
     modified, added, deleted, clean = s[0], s[1], s[3], s[6]
@@ -2233,16 +2244,18 @@
                                unit=_('subrepos'))
     for subpath in subs:
         submatch = matchmod.subdirmatcher(subpath, m)
+        subprefix = repo.wvfs.reljoin(prefix, subpath)
+        subuipathfn = scmutil.subdiruipathfn(subpath, uipathfn)
         if subrepos or m.exact(subpath) or any(submatch.files()):
             progress.increment()
             sub = wctx.sub(subpath)
             try:
-                if sub.removefiles(submatch, prefix, after, force, subrepos,
-                                   dryrun, warnings):
+                if sub.removefiles(submatch, subprefix, subuipathfn, after,
+                                   force, subrepos, dryrun, warnings):
                     ret = 1
             except error.LookupError:
                 warnings.append(_("skipping missing subrepository: %s\n")
-                               % join(subpath))
+                               % uipathfn(subpath))
     progress.complete()
 
     # warn about failure to delete explicit files/dirs
@@ -2266,10 +2279,10 @@
         if repo.wvfs.exists(f):
             if repo.wvfs.isdir(f):
                 warnings.append(_('not removing %s: no tracked files\n')
-                        % m.rel(f))
+                        % uipathfn(f))
             else:
                 warnings.append(_('not removing %s: file is untracked\n')
-                        % m.rel(f))
+                        % uipathfn(f))
         # missing files will generate a warning elsewhere
         ret = 1
     progress.complete()
@@ -2285,7 +2298,7 @@
             progress.increment()
             if ui.verbose or (f in files):
                 warnings.append(_('not removing %s: file still exists\n')
-                                % m.rel(f))
+                                % uipathfn(f))
             ret = 1
         progress.complete()
     else:
@@ -2296,12 +2309,12 @@
         for f in modified:
             progress.increment()
             warnings.append(_('not removing %s: file is modified (use -f'
-                      ' to force removal)\n') % m.rel(f))
+                      ' to force removal)\n') % uipathfn(f))
             ret = 1
         for f in added:
             progress.increment()
             warnings.append(_("not removing %s: file has been marked for add"
-                      " (use 'hg forget' to undo add)\n") % m.rel(f))
+                      " (use 'hg forget' to undo add)\n") % uipathfn(f))
             ret = 1
         progress.complete()
 
@@ -2311,7 +2324,7 @@
     for f in list:
         if ui.verbose or not m.exact(f):
             progress.increment()
-            ui.status(_('removing %s\n') % m.rel(f),
+            ui.status(_('removing %s\n') % uipathfn(f),
                       label='ui.addremove.removed')
     progress.complete()
 
@@ -2382,18 +2395,18 @@
         write(abs)
         err = 0
 
+    uipathfn = scmutil.getuipathfn(repo, legacyrelativevalue=True)
     for subpath in sorted(ctx.substate):
         sub = ctx.sub(subpath)
         try:
             submatch = matchmod.subdirmatcher(subpath, matcher)
-
-            if not sub.cat(submatch, basefm, fntemplate,
-                           os.path.join(prefix, sub._path),
+            subprefix = os.path.join(prefix, subpath)
+            if not sub.cat(submatch, basefm, fntemplate, subprefix,
                            **pycompat.strkwargs(opts)):
                 err = 0
         except error.RepoLookupError:
-            ui.status(_("skipping missing subrepository: %s\n")
-                           % os.path.join(prefix, subpath))
+            ui.status(_("skipping missing subrepository: %s\n") %
+                      uipathfn(subpath))
 
     return err
 
@@ -2412,7 +2425,9 @@
         dsguard = dirstateguard.dirstateguard(repo, 'commit')
     with dsguard or util.nullcontextmanager():
         if dsguard:
-            if scmutil.addremove(repo, matcher, "", opts) != 0:
+            relative = scmutil.anypats(pats, opts)
+            uipathfn = scmutil.getuipathfn(repo, legacyrelativevalue=relative)
+            if scmutil.addremove(repo, matcher, "", uipathfn, opts) != 0:
                 raise error.Abort(
                     _("failed to mark all new/missing files as added/removed"))
 
@@ -2482,16 +2497,17 @@
         if len(old.parents()) > 1:
             # ctx.files() isn't reliable for merges, so fall back to the
             # slower repo.status() method
-            files = set([fn for st in base.status(old)[:3]
-                         for fn in st])
+            files = {fn for st in base.status(old)[:3] for fn in st}
         else:
             files = set(old.files())
 
         # add/remove the files to the working copy if the "addremove" option
         # was specified.
         matcher = scmutil.match(wctx, pats, opts)
+        relative = scmutil.anypats(pats, opts)
+        uipathfn = scmutil.getuipathfn(repo, legacyrelativevalue=relative)
         if (opts.get('addremove')
-            and scmutil.addremove(repo, matcher, "", opts)):
+            and scmutil.addremove(repo, matcher, "", uipathfn, opts)):
             raise error.Abort(
                 _("failed to mark all new/missing files as added/removed"))
 
@@ -2548,7 +2564,7 @@
                                               fctx.path(), fctx.data(),
                                               islink='l' in flags,
                                               isexec='x' in flags,
-                                              copied=copied.get(path))
+                                              copysource=copied.get(path))
                     return mctx
                 except KeyError:
                     return None
@@ -2807,6 +2823,7 @@
     # The mapping is in the form:
     #   <abs path in repo> -> (<path from CWD>, <exactly specified by matcher?>)
     names = {}
+    uipathfn = scmutil.getuipathfn(repo, legacyrelativevalue=True)
 
     with repo.wlock():
         ## filling of the `names` mapping
@@ -2822,7 +2839,7 @@
         if not m.always():
             matcher = matchmod.badmatch(m, lambda x, y: False)
             for abs in wctx.walk(matcher):
-                names[abs] = m.rel(abs), m.exact(abs)
+                names[abs] = m.exact(abs)
 
             # walk target manifest to fill `names`
 
@@ -2835,11 +2852,11 @@
                 for f in names:
                     if f.startswith(path_):
                         return
-                ui.warn("%s: %s\n" % (m.rel(path), msg))
+                ui.warn("%s: %s\n" % (uipathfn(path), msg))
 
             for abs in ctx.walk(matchmod.badmatch(m, badfn)):
                 if abs not in names:
-                    names[abs] = m.rel(abs), m.exact(abs)
+                    names[abs] = m.exact(abs)
 
             # Find status of all file in `names`.
             m = scmutil.matchfiles(repo, names)
@@ -2850,7 +2867,7 @@
             changes = repo.status(node1=node, match=m)
             for kind in changes:
                 for abs in kind:
-                    names[abs] = m.rel(abs), m.exact(abs)
+                    names[abs] = m.exact(abs)
 
             m = scmutil.matchfiles(repo, names)
 
@@ -2912,13 +2929,12 @@
             dsmodified -= mergeadd
 
         # if f is a rename, update `names` to also revert the source
-        cwd = repo.getcwd()
         for f in localchanges:
             src = repo.dirstate.copied(f)
             # XXX should we check for rename down to target node?
             if src and src not in names and repo.dirstate[src] == 'r':
                 dsremoved.add(src)
-                names[src] = (repo.pathto(src, cwd), True)
+                names[src] = True
 
         # determine the exact nature of the deleted changesets
         deladded = set(_deleted)
@@ -3025,7 +3041,7 @@
             (unknown,       actions['unknown'],  discard),
             )
 
-        for abs, (rel, exact) in sorted(names.items()):
+        for abs, exact in sorted(names.items()):
             # target file to be touch on disk (relative to cwd)
             target = repo.wjoin(abs)
             # search the entry in the dispatch table.
@@ -3042,19 +3058,21 @@
                         if dobackup == backupinteractive:
                             tobackup.add(abs)
                         elif (backup <= dobackup or wctx[abs].cmp(ctx[abs])):
-                            bakname = scmutil.origpath(ui, repo, rel)
+                            absbakname = scmutil.backuppath(ui, repo, abs)
+                            bakname = os.path.relpath(absbakname,
+                                                      start=repo.root)
                             ui.note(_('saving current version of %s as %s\n') %
-                                    (rel, bakname))
+                                    (uipathfn(abs), uipathfn(bakname)))
                             if not opts.get('dry_run'):
                                 if interactive:
-                                    util.copyfile(target, bakname)
+                                    util.copyfile(target, absbakname)
                                 else:
-                                    util.rename(target, bakname)
+                                    util.rename(target, absbakname)
                     if opts.get('dry_run'):
                         if ui.verbose or not exact:
-                            ui.status(msg % rel)
+                            ui.status(msg % uipathfn(abs))
                 elif exact:
-                    ui.warn(msg % rel)
+                    ui.warn(msg % uipathfn(abs))
                 break
 
         if not opts.get('dry_run'):
@@ -3065,8 +3083,9 @@
             prefetch(repo, [ctx.rev()],
                      matchfiles(repo,
                                 [f for sublist in oplist for f in sublist]))
-            _performrevert(repo, parents, ctx, names, actions, interactive,
-                           tobackup)
+            match = scmutil.match(repo[None], pats)
+            _performrevert(repo, parents, ctx, names, uipathfn, actions,
+                           match, interactive, tobackup)
 
         if targetsubs:
             # Revert the subrepos on the revert list
@@ -3078,8 +3097,8 @@
                     raise error.Abort("subrepository '%s' does not exist in %s!"
                                       % (sub, short(ctx.node())))
 
-def _performrevert(repo, parents, ctx, names, actions, interactive=False,
-                   tobackup=None):
+def _performrevert(repo, parents, ctx, names, uipathfn, actions,
+                   match, interactive=False, tobackup=None):
     """function that actually perform all the actions computed for revert
 
     This is an independent function to let extension to plug in and react to
@@ -3104,15 +3123,15 @@
         repo.dirstate.remove(f)
 
     def prntstatusmsg(action, f):
-        rel, exact = names[f]
+        exact = names[f]
         if repo.ui.verbose or not exact:
-            repo.ui.status(actions[action][1] % rel)
+            repo.ui.status(actions[action][1] % uipathfn(f))
 
     audit_path = pathutil.pathauditor(repo.root, cached=True)
     for f in actions['forget'][0]:
         if interactive:
             choice = repo.ui.promptchoice(
-                _("forget added file %s (Yn)?$$ &Yes $$ &No") % f)
+                _("forget added file %s (Yn)?$$ &Yes $$ &No") % uipathfn(f))
             if choice == 0:
                 prntstatusmsg('forget', f)
                 repo.dirstate.drop(f)
@@ -3125,7 +3144,7 @@
         audit_path(f)
         if interactive:
             choice = repo.ui.promptchoice(
-                _("remove added file %s (Yn)?$$ &Yes $$ &No") % f)
+                _("remove added file %s (Yn)?$$ &Yes $$ &No") % uipathfn(f))
             if choice == 0:
                 prntstatusmsg('remove', f)
                 doremove(f)
@@ -3154,25 +3173,30 @@
         # Prompt the user for changes to revert
         torevert = [f for f in actions['revert'][0] if f not in excluded_files]
         m = scmutil.matchfiles(repo, torevert)
-        diffopts = patch.difffeatureopts(repo.ui, whitespace=True)
+        diffopts = patch.difffeatureopts(repo.ui, whitespace=True,
+                                         section='commands',
+                                         configprefix='revert.interactive.')
         diffopts.nodates = True
         diffopts.git = True
-        operation = 'discard'
-        reversehunks = True
-        if node != parent:
-            operation = 'apply'
-            reversehunks = False
-        if reversehunks:
+        operation = 'apply'
+        if node == parent:
+            if repo.ui.configbool('experimental',
+                                  'revert.interactive.select-to-keep'):
+                operation = 'keep'
+            else:
+                operation = 'discard'
+
+        if operation == 'apply':
+            diff = patch.diff(repo, None, ctx.node(), m, opts=diffopts)
+        else:
             diff = patch.diff(repo, ctx.node(), None, m, opts=diffopts)
-        else:
-            diff = patch.diff(repo, None, ctx.node(), m, opts=diffopts)
         originalchunks = patch.parsepatch(diff)
 
         try:
 
-            chunks, opts = recordfilter(repo.ui, originalchunks,
+            chunks, opts = recordfilter(repo.ui, originalchunks, match,
                                         operation=operation)
-            if reversehunks:
+            if operation == 'discard':
                 chunks = patch.reversehunks(chunks)
 
         except error.PatchError as err:
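
The operation selection above reduces to a small decision: reverting to a
non-parent revision always *applies* hunks, reverting to the parent either
*discards* or (with the new experimental knob) *keeps* the selected hunks,
and only 'discard' needs the hunks reversed. A minimal standalone sketch of
that decision, with a hypothetical helper name and a plain boolean standing
in for the experimental.revert.interactive.select-to-keep config::

   def choose_revert_operation(reverting_to_parent, select_to_keep):
       # Mirrors the logic above: 'apply' for non-parent reverts;
       # otherwise honor select-to-keep, defaulting to 'discard'.
       if not reverting_to_parent:
           return 'apply'
       return 'keep' if select_to_keep else 'discard'

   assert choose_revert_operation(False, False) == 'apply'
   assert choose_revert_operation(True, True) == 'keep'
   assert choose_revert_operation(True, False) == 'discard'
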
@@ -3186,15 +3210,20 @@
         # chunks are serialized per file, but files aren't sorted
         for f in sorted(set(c.header.filename() for c in chunks if ishunk(c))):
             prntstatusmsg('revert', f)
+        files = set()
         for c in chunks:
             if ishunk(c):
                 abs = c.header.filename()
                 # Create a backup file only if this hunk should be backed up
                 if c.header.filename() in tobackup:
                     target = repo.wjoin(abs)
-                    bakname = scmutil.origpath(repo.ui, repo, m.rel(abs))
+                    bakname = scmutil.backuppath(repo.ui, repo, abs)
                     util.copyfile(target, bakname)
                     tobackup.remove(abs)
+                if abs not in files:
+                    files.add(abs)
+                    if operation == 'keep':
+                        checkout(abs)
             c.write(fp)
         dopatch = fp.tell()
         fp.seek(0)
@@ -3222,9 +3251,19 @@
     if node == parent and p2 == nullid:
         normal = repo.dirstate.normal
     for f in actions['undelete'][0]:
-        prntstatusmsg('undelete', f)
-        checkout(f)
-        normal(f)
+        if interactive:
+            choice = repo.ui.promptchoice(
+                _("add back removed file %s (Yn)?$$ &Yes $$ &No") % f)
+            if choice == 0:
+                prntstatusmsg('undelete', f)
+                checkout(f)
+                normal(f)
+            else:
+                excluded_files.append(f)
+        else:
+            prntstatusmsg('undelete', f)
+            checkout(f)
+            normal(f)
 
     copied = copies.pathcopies(repo[parent], ctx)
 
--- a/mercurial/color.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/color.py	Wed Apr 17 13:41:18 2019 -0400
@@ -77,12 +77,13 @@
 _defaultstyles = {
     'grep.match': 'red bold',
     'grep.linenumber': 'green',
-    'grep.rev': 'green',
-    'grep.change': 'green',
+    'grep.rev': 'blue',
     'grep.sep': 'cyan',
     'grep.filename': 'magenta',
     'grep.user': 'magenta',
     'grep.date': 'magenta',
+    'grep.inserted': 'green bold',
+    'grep.deleted': 'red bold',
     'bookmarks.active': 'green',
     'branches.active': 'none',
     'branches.closed': 'black bold',
@@ -169,7 +170,7 @@
             ui._terminfoparams[key[9:]] = newval
     try:
         curses.setupterm()
-    except curses.error as e:
+    except curses.error:
         ui._terminfoparams.clear()
         return
 
@@ -484,7 +485,7 @@
             w32effects = None
         else:
             origattr = csbi.wAttributes
-            ansire = re.compile(b'\033\[([^m]*)m([^\033]*)(.*)',
+            ansire = re.compile(br'\033\[([^m]*)m([^\033]*)(.*)',
                                 re.MULTILINE | re.DOTALL)
 
     def win32print(ui, writefunc, text, **opts):
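
The ``br'...'`` change above swaps a plain byte string for a raw byte string
(the same fix appears for the generic config patterns in configitems.py
below), so escapes such as ``\[`` reach the regex engine intact instead of
tripping Python 3's invalid-escape warning. The pattern in isolation, using
one of the config regexes from this changeset as a hypothetical example::

   import re

   # Raw byte strings keep '\.' as a regex escape; without the r prefix,
   # Python 3 warns about the unrecognized string escape '\.'.
   pat = br'hidden-command\..*'
   assert re.match(pat, b'hidden-command.foo')
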
--- a/mercurial/commands.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/commands.py	Wed Apr 17 13:41:18 2019 -0400
@@ -61,7 +61,6 @@
     state as statemod,
     streamclone,
     tags as tagsmod,
-    templatekw,
     ui as uimod,
     util,
     wireprotoserver,
@@ -180,7 +179,8 @@
     """
 
     m = scmutil.match(repo[None], pats, pycompat.byteskwargs(opts))
-    rejected = cmdutil.add(ui, repo, m, "", False, **opts)
+    uipathfn = scmutil.getuipathfn(repo, legacyrelativevalue=True)
+    rejected = cmdutil.add(ui, repo, m, "", uipathfn, False, **opts)
     return rejected and 1 or 0
 
 @command('addremove',
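
``scmutil.getuipathfn()`` (threaded through this command and many more
below) centralizes the choice between repo-relative and cwd-relative path
display, replacing scattered ``m.rel()``/``repo.pathto()`` calls. A loose
sketch of only the relative case, with hypothetical names and ignoring the
``ui.relative-paths``/``forcerelativevalue`` handling the real helper also
performs::

   import os

   def getuipathfn_sketch(repo_root, cwd, relative):
       # Return a function that renders a repo-relative path for display.
       if not relative:
           return lambda f: f
       return lambda f: os.path.relpath(os.path.join(repo_root, f), cwd)

   fn = getuipathfn_sketch('/repo', '/repo/sub', relative=True)
   assert fn('sub/file.txt') == 'file.txt'
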
@@ -254,7 +254,9 @@
     if not opts.get('similarity'):
         opts['similarity'] = '100'
     matcher = scmutil.match(repo[None], pats, opts)
-    return scmutil.addremove(repo, matcher, "", opts)
+    relative = scmutil.anypats(pats, opts)
+    uipathfn = scmutil.getuipathfn(repo, legacyrelativevalue=relative)
+    return scmutil.addremove(repo, matcher, "", uipathfn, opts)
 
 @command('annotate|blame',
     [('r', 'rev', '', _('annotate the specified revision'), _('REV')),
@@ -407,12 +409,13 @@
     if skiprevs:
         skiprevs = scmutil.revrange(repo, skiprevs)
 
+    uipathfn = scmutil.getuipathfn(repo, legacyrelativevalue=True)
     for abs in ctx.walk(m):
         fctx = ctx[abs]
         rootfm.startitem()
         rootfm.data(path=abs)
         if not opts.get('text') and fctx.isbinary():
-            rootfm.plain(_("%s: binary file\n") % m.rel(abs))
+            rootfm.plain(_("%s: binary file\n") % uipathfn(abs))
             continue
 
         fm = rootfm.nested('lines', tmpl='{rev}: {line}')
@@ -1102,7 +1105,7 @@
 
     with repo.wlock():
         if opts.get('clean'):
-            label = repo[None].p1().branch()
+            label = repo['.'].branch()
             repo.dirstate.setbranch(label)
             ui.status(_('reset working directory to branch %s\n') % label)
         elif label:
@@ -1122,11 +1125,11 @@
             ui.status(_('marked working directory as branch %s\n') % label)
 
             # find any open named branches aside from default
-            others = [n for n, h, t, c in repo.branchmap().iterbranches()
-                      if n != "default" and not c]
-            if not others:
-                ui.status(_('(branches are permanent and global, '
-                            'did you want a bookmark?)\n'))
+            for n, h, t, c in repo.branchmap().iterbranches():
+                if n != "default" and not c:
+                    return 0
+            ui.status(_('(branches are permanent and global, '
+                        'did you want a bookmark?)\n'))
 
 @command('branches',
     [('a', 'active', False,
@@ -1672,8 +1675,8 @@
         if not bheads:
             raise error.Abort(_('can only close branch heads'))
         elif opts.get('amend'):
-            if repo[None].parents()[0].p1().branch() != branch and \
-                    repo[None].parents()[0].p2().branch() != branch:
+            if (repo['.'].p1().branch() != branch and
+                repo['.'].p2().branch() != branch):
                 raise error.Abort(_('can only close branch heads'))
 
     if opts.get('amend'):
@@ -2209,8 +2212,10 @@
 
     m = scmutil.match(ctx, pats, opts)
     ui.pager('files')
+    uipathfn = scmutil.getuipathfn(ctx.repo(), legacyrelativevalue=True)
     with ui.formatter('files', opts) as fm:
-        return cmdutil.files(ui, ctx, m, fm, fmt, opts.get('subrepos'))
+        return cmdutil.files(ui, ctx, m, uipathfn, fm, fmt,
+                             opts.get('subrepos'))
 
 @command(
     'forget',
@@ -2254,7 +2259,8 @@
 
     m = scmutil.match(repo[None], pats, opts)
     dryrun, interactive = opts.get('dry_run'), opts.get('interactive')
-    rejected = cmdutil.forget(ui, repo, m, prefix="",
+    uipathfn = scmutil.getuipathfn(repo, legacyrelativevalue=True)
+    rejected = cmdutil.forget(ui, repo, m, prefix="", uipathfn=uipathfn,
                               explicitonly=False, dryrun=dryrun,
                               interactive=interactive)[0]
     return rejected and 1 or 0
@@ -2633,7 +2639,6 @@
         raise error.Abort(_("cannot abort using an old graftstate"))
 
     # changeset from which graft operation was started
-    startctx = None
     if len(newnodes) > 0:
         startctx = repo[newnodes[0]].p1()
     else:
@@ -2849,6 +2854,7 @@
                 for i in pycompat.xrange(blo, bhi):
                     yield ('+', b[i])
 
+    uipathfn = scmutil.getuipathfn(repo)
     def display(fm, fn, ctx, pstates, states):
         rev = scmutil.intrev(ctx)
         if fm.isplain():
@@ -2868,7 +2874,7 @@
             except error.WdirUnsupported:
                 return ctx[fn].isbinary()
 
-        fieldnamemap = {'filename': 'path', 'linenumber': 'lineno'}
+        fieldnamemap = {'linenumber': 'lineno'}
         if diff:
             iter = difflinestates(pstates, states)
         else:
@@ -2876,27 +2882,29 @@
         for change, l in iter:
             fm.startitem()
             fm.context(ctx=ctx)
-            fm.data(node=fm.hexfunc(scmutil.binnode(ctx)))
+            fm.data(node=fm.hexfunc(scmutil.binnode(ctx)), path=fn)
+            fm.plain(uipathfn(fn), label='grep.filename')
 
             cols = [
-                ('filename', '%s', fn, True),
-                ('rev', '%d', rev, not plaingrep),
-                ('linenumber', '%d', l.linenum, opts.get('line_number')),
+                ('rev', '%d', rev, not plaingrep, ''),
+                ('linenumber', '%d', l.linenum, opts.get('line_number'), ''),
             ]
             if diff:
-                cols.append(('change', '%s', change, True))
+                cols.append(
+                    ('change', '%s', change, True,
+                     'grep.inserted ' if change == '+' else 'grep.deleted ')
+                )
             cols.extend([
-                ('user', '%s', formatuser(ctx.user()), opts.get('user')),
+                ('user', '%s', formatuser(ctx.user()), opts.get('user'), ''),
                 ('date', '%s', fm.formatdate(ctx.date(), datefmt),
-                 opts.get('date')),
+                 opts.get('date'), ''),
             ])
-            lastcol = next(
-                name for name, fmt, data, cond in reversed(cols) if cond)
-            for name, fmt, data, cond in cols:
+            for name, fmt, data, cond, extra_label in cols:
+                if cond:
+                    fm.plain(sep, label='grep.sep')
                 field = fieldnamemap.get(name, name)
-                fm.condwrite(cond, field, fmt, data, label='grep.%s' % name)
-                if cond and name != lastcol:
-                    fm.plain(sep, label='grep.sep')
+                label = extra_label + ('grep.%s' % name)
+                fm.condwrite(cond, field, fmt, data, label=label)
             if not opts.get('files_with_matches'):
                 fm.plain(sep, label='grep.sep')
                 if not opts.get('text') and binary():
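
The rewritten column loop above emits the separator *before* each enabled
column instead of after every column but the last, which removes the need to
precompute ``lastcol``; each column tuple also gains an extra-label slot for
the new ``grep.inserted``/``grep.deleted`` colors. The separator placement in
isolation (hypothetical sketch; the filename has already been written)::

   def emit_columns(cols, write, sep=':'):
       # cols: (name, fmt, data, cond) tuples, in display order.
       for name, fmt, data, cond in cols:
           if cond:
               write(sep)
               write(fmt % data)
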
@@ -2926,12 +2934,13 @@
             fm.data(matched=False)
         fm.end()
 
-    skip = {}
+    skip = set()
     revfiles = {}
     match = scmutil.match(repo[None], pats, opts)
     found = False
     follow = opts.get('follow')
 
+    getrenamed = scmutil.getrenamedfn(repo)
     def prep(ctx, fns):
         rev = ctx.rev()
         pctx = ctx.p1()
@@ -2945,16 +2954,15 @@
                 fnode = ctx.filenode(fn)
             except error.LookupError:
                 continue
-            try:
-                copied = flog.renamed(fnode)
-            except error.WdirUnsupported:
-                copied = ctx[fn].renamed()
-            copy = follow and copied and copied[0]
-            if copy:
-                copies.setdefault(rev, {})[fn] = copy
+
+            copy = None
+            if follow:
+                copy = getrenamed(fn, rev)
+                if copy:
+                    copies.setdefault(rev, {})[fn] = copy
+                    if fn in skip:
+                        skip.add(copy)
             if fn in skip:
-                if copy:
-                    skip[copy] = True
                 continue
             files.append(fn)
 
@@ -2983,16 +2991,16 @@
             copy = copies.get(rev, {}).get(fn)
             if fn in skip:
                 if copy:
-                    skip[copy] = True
+                    skip.add(copy)
                 continue
             pstates = matches.get(parent, {}).get(copy or fn, [])
             if pstates or states:
                 r = display(fm, fn, ctx, pstates, states)
                 found = found or r
                 if r and not diff and not all_files:
-                    skip[fn] = True
+                    skip.add(fn)
                     if copy:
-                        skip[copy] = True
+                        skip.add(copy)
         del revfiles[rev]
         # We will keep the matches dict for the duration of the window
         # clear the matches dict once the window is over
@@ -3488,7 +3496,7 @@
                 else:
                     patchurl = os.path.join(base, patchurl)
                     ui.status(_('applying %s\n') % patchurl)
-                    patchfile = hg.openpath(ui, patchurl)
+                    patchfile = hg.openpath(ui, patchurl, sendaccept=False)
 
                 haspatch = False
                 for hunk in patch.split(patchfile):
@@ -3683,11 +3691,12 @@
         filesgen = sorted(repo.dirstate.matches(m))
     else:
         filesgen = ctx.matches(m)
+    uipathfn = scmutil.getuipathfn(repo, legacyrelativevalue=bool(pats))
     for abs in filesgen:
         if opts.get('fullpath'):
             ui.write(repo.wjoin(abs), end)
         else:
-            ui.write(((pats and m.rel(abs)) or abs), end)
+            ui.write(uipathfn(abs), end)
         ret = 0
 
     return ret
@@ -3872,7 +3881,7 @@
         endrev = None
         if revs:
             endrev = revs.max() + 1
-        getrenamed = templatekw.getrenamedfn(repo, endrev=endrev)
+        getrenamed = scmutil.getrenamedfn(repo, endrev=endrev)
 
     ui.pager('log')
     displayer = logcmdutil.changesetdisplayer(ui, repo, opts, differ,
@@ -4361,7 +4370,7 @@
             msg = _("not updating: %s") % stringutil.forcebytestr(inst)
             hint = inst.hint
             raise error.UpdateAbort(msg, hint=hint)
-    if modheads > 1:
+    if modheads is not None and modheads > 1:
         currentbranchheads = len(repo.branchheads())
         if currentbranchheads == modheads:
             ui.status(_("(run 'hg heads' to see heads, 'hg merge' to merge)\n"))
@@ -4479,7 +4488,7 @@
             brev = None
 
             if checkout:
-                checkout = repo.changelog.rev(checkout)
+                checkout = repo.unfiltered().changelog.rev(checkout)
 
                 # order below depends on implementation of
                 # hg.addbranchrevs(). opts['bookmark'] is ignored,
@@ -4494,7 +4503,10 @@
             try:
                 ret = postincoming(ui, repo, modheads, opts.get('update'),
                                    checkout, brev)
-
+            except error.FilteredRepoLookupError as exc:
+                msg = _('cannot update to target: %s') % exc.args[0]
+                exc.args = (msg,) + exc.args[1:]
+                raise
             finally:
                 del repo._subtoppath
 
@@ -4714,7 +4726,8 @@
 
     m = scmutil.match(repo[None], pats, opts)
     subrepos = opts.get('subrepos')
-    return cmdutil.remove(ui, repo, m, "", after, force, subrepos,
+    uipathfn = scmutil.getuipathfn(repo, legacyrelativevalue=True)
+    return cmdutil.remove(ui, repo, m, "", uipathfn, after, force, subrepos,
                           dryrun=dryrun)
 
 @command('rename|move|mv',
@@ -4809,8 +4822,8 @@
     opts = pycompat.byteskwargs(opts)
     confirm = ui.configbool('commands', 'resolve.confirm')
     flaglist = 'all mark unmark list no_status re_merge'.split()
-    all, mark, unmark, show, nostatus, remerge = \
-        [opts.get(o) for o in flaglist]
+    all, mark, unmark, show, nostatus, remerge = [
+        opts.get(o) for o in flaglist]
 
     actioncount = len(list(filter(None, [show, mark, unmark, remerge])))
     if actioncount > 1:
@@ -4839,6 +4852,8 @@
                                  b'$$ &Yes $$ &No')):
                 raise error.Abort(_('user quit'))
 
+    uipathfn = scmutil.getuipathfn(repo)
+
     if show:
         ui.pager('resolve')
         fm = ui.formatter('resolve', opts)
@@ -4866,7 +4881,8 @@
             fm.startitem()
             fm.context(ctx=wctx)
             fm.condwrite(not nostatus, 'mergestatus', '%s ', key, label=label)
-            fm.write('path', '%s\n', f, label=label)
+            fm.data(path=f)
+            fm.plain('%s\n' % uipathfn(f), label=label)
         fm.end()
         return 0
 
@@ -4912,11 +4928,11 @@
                 if mark:
                     if exact:
                         ui.warn(_('not marking %s as it is driver-resolved\n')
-                                % f)
+                                % uipathfn(f))
                 elif unmark:
                     if exact:
                         ui.warn(_('not unmarking %s as it is driver-resolved\n')
-                                % f)
+                                % uipathfn(f))
                 else:
                     runconclude = True
                 continue
@@ -4930,14 +4946,14 @@
                     ms.mark(f, mergemod.MERGE_RECORD_UNRESOLVED_PATH)
                 elif ms[f] == mergemod.MERGE_RECORD_UNRESOLVED_PATH:
                     ui.warn(_('%s: path conflict must be resolved manually\n')
-                            % f)
+                            % uipathfn(f))
                 continue
 
             if mark:
                 if markcheck:
                     fdata = repo.wvfs.tryread(f)
-                    if filemerge.hasconflictmarkers(fdata) and \
-                        ms[f] != mergemod.MERGE_RECORD_RESOLVED:
+                    if (filemerge.hasconflictmarkers(fdata) and
+                        ms[f] != mergemod.MERGE_RECORD_RESOLVED):
                         hasconflictmarkers.append(f)
                 ms.mark(f, mergemod.MERGE_RECORD_RESOLVED)
             elif unmark:
@@ -4968,14 +4984,15 @@
                 if complete:
                     try:
                         util.rename(a + ".resolve",
-                                    scmutil.origpath(ui, repo, a))
+                                    scmutil.backuppath(ui, repo, f))
                     except OSError as inst:
                         if inst.errno != errno.ENOENT:
                             raise
 
         if hasconflictmarkers:
             ui.warn(_('warning: the following files still have conflict '
-                      'markers:\n  ') + '\n  '.join(hasconflictmarkers) + '\n')
+                      'markers:\n') + ''.join('  ' + uipathfn(f) + '\n'
+                                              for f in hasconflictmarkers))
             if markcheck == 'abort' and not all and not pats:
                 raise error.Abort(_('conflict markers detected'),
                                   hint=_('use --all to mark anyway'))
@@ -4994,7 +5011,7 @@
             # replace filemerge's .orig file with our resolve file
             a = repo.wjoin(f)
             try:
-                util.rename(a + ".resolve", scmutil.origpath(ui, repo, a))
+                util.rename(a + ".resolve", scmutil.backuppath(ui, repo, f))
             except OSError as inst:
                 if inst.errno != errno.ENOENT:
                     raise
@@ -5413,10 +5430,11 @@
         repo = scmutil.unhidehashlikerevs(repo, revs, 'nowarn')
         ctx1, ctx2 = scmutil.revpair(repo, revs)
 
-    if pats or ui.configbool('commands', 'status.relative'):
-        cwd = repo.getcwd()
-    else:
-        cwd = ''
+    forcerelativevalue = None
+    if ui.hasconfig('commands', 'status.relative'):
+        forcerelativevalue = ui.configbool('commands', 'status.relative')
+    uipathfn = scmutil.getuipathfn(repo, legacyrelativevalue=bool(pats),
+                                   forcerelativevalue=forcerelativevalue)
 
     if opts.get('print0'):
         end = '\0'
@@ -5467,10 +5485,10 @@
                 fm.context(ctx=ctx2)
                 fm.data(path=f)
                 fm.condwrite(showchar, 'status', '%s ', char, label=label)
-                fm.plain(fmt % repo.pathto(f, cwd), label=label)
+                fm.plain(fmt % uipathfn(f), label=label)
                 if f in copy:
                     fm.data(source=copy[f])
-                    fm.plain(('  %s' + end) % repo.pathto(copy[f], cwd),
+                    fm.plain(('  %s' + end) % uipathfn(copy[f]),
                              label='status.copied')
 
     if ((ui.verbose or ui.configbool('commands', 'status.verbose'))
@@ -5503,7 +5521,6 @@
     pnode = parents[0].node()
     marks = []
 
-    ms = None
     try:
         ms = mergemod.mergestate.read(repo)
     except error.UnsupportedMergeRecords as e:
@@ -5830,6 +5847,10 @@
                 expectedtype = 'global'
 
             for n in names:
+                if repo.tagtype(n) == 'global':
+                    alltags = tagsmod.findglobaltags(ui, repo)
+                    if alltags[n][0] == nullid:
+                        raise error.Abort(_("tag '%s' is already removed") % n)
                 if not repo.tagtype(n):
                     raise error.Abort(_("tag '%s' does not exist") % n)
                 if repo.tagtype(n) != expectedtype:
@@ -5908,7 +5929,6 @@
     ui.pager('tags')
     fm = ui.formatter('tags', opts)
     hexfunc = fm.hexfunc
-    tagtype = ""
 
     for t, n in reversed(repo.tagslist()):
         hn = hexfunc(n)
--- a/mercurial/config.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/config.py	Wed Apr 17 13:41:18 2019 -0400
@@ -78,6 +78,10 @@
         return list(self._data.get(section, {}).iteritems())
     def set(self, section, item, value, source=""):
         if pycompat.ispy3:
+            assert not isinstance(section, str), (
+                'config section may not be a unicode string on Python 3')
+            assert not isinstance(item, str), (
+                'config item may not be a unicode string on Python 3')
             assert not isinstance(value, str), (
                 'config values may not be unicode strings on Python 3')
         if section not in self:
--- a/mercurial/configitems.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/configitems.py	Wed Apr 17 13:41:18 2019 -0400
@@ -113,46 +113,49 @@
 
 coreconfigitem = getitemregister(coreitems)
 
+def _registerdiffopts(section, configprefix=''):
+    coreconfigitem(section, configprefix + 'nodates',
+        default=False,
+    )
+    coreconfigitem(section, configprefix + 'showfunc',
+        default=False,
+    )
+    coreconfigitem(section, configprefix + 'unified',
+        default=None,
+    )
+    coreconfigitem(section, configprefix + 'git',
+        default=False,
+    )
+    coreconfigitem(section, configprefix + 'ignorews',
+        default=False,
+    )
+    coreconfigitem(section, configprefix + 'ignorewsamount',
+        default=False,
+    )
+    coreconfigitem(section, configprefix + 'ignoreblanklines',
+        default=False,
+    )
+    coreconfigitem(section, configprefix + 'ignorewseol',
+        default=False,
+    )
+    coreconfigitem(section, configprefix + 'nobinary',
+        default=False,
+    )
+    coreconfigitem(section, configprefix + 'noprefix',
+        default=False,
+    )
+    coreconfigitem(section, configprefix + 'word-diff',
+        default=False,
+    )
+
 coreconfigitem('alias', '.*',
     default=dynamicdefault,
     generic=True,
 )
-coreconfigitem('annotate', 'nodates',
-    default=False,
-)
-coreconfigitem('annotate', 'showfunc',
-    default=False,
-)
-coreconfigitem('annotate', 'unified',
-    default=None,
-)
-coreconfigitem('annotate', 'git',
-    default=False,
-)
-coreconfigitem('annotate', 'ignorews',
-    default=False,
-)
-coreconfigitem('annotate', 'ignorewsamount',
-    default=False,
-)
-coreconfigitem('annotate', 'ignoreblanklines',
-    default=False,
-)
-coreconfigitem('annotate', 'ignorewseol',
-    default=False,
-)
-coreconfigitem('annotate', 'nobinary',
-    default=False,
-)
-coreconfigitem('annotate', 'noprefix',
-    default=False,
-)
-coreconfigitem('annotate', 'word-diff',
-    default=False,
-)
 coreconfigitem('auth', 'cookiefile',
     default=None,
 )
+_registerdiffopts(section='annotate')
 # bookmarks.pushing: internal hack for discovery
 coreconfigitem('bookmarks', 'pushing',
     default=list,
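
``_registerdiffopts()`` collapses eleven hand-written registrations per
section into one helper and lets the new ``commands.commit.interactive.*``
and ``commands.revert.interactive.*`` families reuse the same defaults. The
factoring in miniature, against a hypothetical registry::

   REGISTRY = {}

   def register(section, name, default):
       REGISTRY[(section, name)] = default

   def register_diffopts(section, prefix=''):
       # One entry per diff option, shared by every section that calls us.
       for name, default in [('nodates', False), ('showfunc', False),
                             ('unified', None), ('git', False),
                             ('word-diff', False)]:
           register(section, prefix + name, default)

   register_diffopts('diff')
   register_diffopts('commands', prefix='revert.interactive.')
   assert REGISTRY[('commands', 'revert.interactive.git')] is False
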
@@ -198,6 +201,7 @@
 coreconfigitem('color', 'pagermode',
     default=dynamicdefault,
 )
+_registerdiffopts(section='commands', configprefix='commit.interactive.')
 coreconfigitem('commands', 'grep.all-files',
     default=False,
 )
@@ -210,6 +214,7 @@
 coreconfigitem('commands', 'resolve.mark-check',
     default='none',
 )
+_registerdiffopts(section='commands', configprefix='revert.interactive.')
 coreconfigitem('commands', 'show.aliasprefix',
     default=list,
 )
@@ -404,39 +409,7 @@
 coreconfigitem('devel', 'debug.peer-request',
     default=False,
 )
-coreconfigitem('diff', 'nodates',
-    default=False,
-)
-coreconfigitem('diff', 'showfunc',
-    default=False,
-)
-coreconfigitem('diff', 'unified',
-    default=None,
-)
-coreconfigitem('diff', 'git',
-    default=False,
-)
-coreconfigitem('diff', 'ignorews',
-    default=False,
-)
-coreconfigitem('diff', 'ignorewsamount',
-    default=False,
-)
-coreconfigitem('diff', 'ignoreblanklines',
-    default=False,
-)
-coreconfigitem('diff', 'ignorewseol',
-    default=False,
-)
-coreconfigitem('diff', 'nobinary',
-    default=False,
-)
-coreconfigitem('diff', 'noprefix',
-    default=False,
-)
-coreconfigitem('diff', 'word-diff',
-    default=False,
-)
+_registerdiffopts(section='diff')
 coreconfigitem('email', 'bcc',
     default=None,
 )
@@ -497,6 +470,9 @@
 coreconfigitem('experimental', 'changegroup3',
     default=False,
 )
+coreconfigitem('experimental', 'cleanup-as-archived',
+    default=False,
+)
 coreconfigitem('experimental', 'clientcompressionengines',
     default=list,
 )
@@ -509,6 +485,12 @@
 coreconfigitem('experimental', 'copytrace.sourcecommitlimit',
     default=100,
 )
+coreconfigitem('experimental', 'copies.read-from',
+    default="filelog-only",
+)
+coreconfigitem('experimental', 'copies.write-to',
+    default='filelog-only',
+)
 coreconfigitem('experimental', 'crecordtest',
     default=None,
 )
@@ -574,9 +556,6 @@
 coreconfigitem('experimental', 'extendedheader.similarity',
     default=False,
 )
-coreconfigitem('experimental', 'format.compression',
-    default='zlib',
-)
 coreconfigitem('experimental', 'graphshorten',
     default=False,
 )
@@ -616,6 +595,9 @@
 coreconfigitem('experimental', 'removeemptydirs',
     default=True,
 )
+coreconfigitem('experimental', 'revert.interactive.select-to-keep',
+    default=False,
+)
 coreconfigitem('experimental', 'revisions.prefixhexnode',
     default=False,
 )
@@ -702,6 +684,10 @@
 coreconfigitem('format', 'sparse-revlog',
     default=True,
 )
+coreconfigitem('format', 'revlog-compression',
+    default='zlib',
+    alias=[('experimental', 'format.compression')]
+)
 coreconfigitem('format', 'usefncache',
     default=True,
 )
@@ -720,11 +706,11 @@
 coreconfigitem('fsmonitor', 'warn_update_file_count',
     default=50000,
 )
-coreconfigitem('help', 'hidden-command\..*',
+coreconfigitem('help', br'hidden-command\..*',
     default=False,
     generic=True,
 )
-coreconfigitem('help', 'hidden-topic\..*',
+coreconfigitem('help', br'hidden-topic\..*',
     default=False,
     generic=True,
 )
@@ -1004,6 +990,18 @@
     default=True,
     alias=[('format', 'aggressivemergedeltas')],
 )
+coreconfigitem('storage', 'revlog.reuse-external-delta',
+    default=True,
+)
+coreconfigitem('storage', 'revlog.reuse-external-delta-parent',
+    default=None,
+)
+coreconfigitem('storage', 'revlog.zlib.level',
+    default=None,
+)
+coreconfigitem('storage', 'revlog.zstd.level',
+    default=None,
+)
 coreconfigitem('server', 'bookmarks-pushkey-compat',
     default=True,
 )
@@ -1056,6 +1054,9 @@
 coreconfigitem('server', 'uncompressedallowsecret',
     default=False,
 )
+coreconfigitem('server', 'view',
+    default='served',
+)
 coreconfigitem('server', 'validate',
     default=False,
 )
@@ -1108,6 +1109,10 @@
     default=None,
     generic=True,
 )
+coreconfigitem('templateconfig', '.*',
+    default=dynamicdefault,
+    generic=True,
+)
 coreconfigitem('trusted', 'groups',
     default=list,
 )
@@ -1233,6 +1238,9 @@
 coreconfigitem('ui', 'quietbookmarkmove',
     default=False,
 )
+coreconfigitem('ui', 'relative-paths',
+    default='legacy',
+)
 coreconfigitem('ui', 'remotecmd',
     default='hg',
 )
--- a/mercurial/context.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/context.py	Wed Apr 17 13:41:18 2019 -0400
@@ -21,7 +21,7 @@
     nullrev,
     short,
     wdirfilenodeids,
-    wdirid,
+    wdirhex,
 )
 from . import (
     dagop,
@@ -294,16 +294,16 @@
                               listsubrepos=listsubrepos, badfn=badfn)
 
     def diff(self, ctx2=None, match=None, changes=None, opts=None,
-             losedatafn=None, prefix='', relroot='', copy=None,
-             hunksfilterfn=None):
+             losedatafn=None, pathfn=None, copy=None,
+             copysourcematch=None, hunksfilterfn=None):
         """Returns a diff generator for the given contexts and matcher"""
         if ctx2 is None:
             ctx2 = self.p1()
         if ctx2 is not None:
             ctx2 = self._repo[ctx2]
         return patch.diff(self._repo, ctx2, self, match=match, changes=changes,
-                          opts=opts, losedatafn=losedatafn, prefix=prefix,
-                          relroot=relroot, copy=copy,
+                          opts=opts, losedatafn=losedatafn, pathfn=pathfn,
+                          copy=copy, copysourcematch=copysourcematch,
                           hunksfilterfn=hunksfilterfn)
 
     def dirs(self):
@@ -439,6 +439,44 @@
         return self._changeset.date
     def files(self):
         return self._changeset.files
+    @propertycache
+    def _copies(self):
+        source = self._repo.ui.config('experimental', 'copies.read-from')
+        p1copies = self._changeset.p1copies
+        p2copies = self._changeset.p2copies
+        # If config says to get copy metadata only from changeset, then return
+        # that, defaulting to {} if there was no copy metadata.
+        # In compatibility mode, we return copy data from the changeset if
+        # it was recorded there, and otherwise we fall back to getting it from
+        # the filelogs (below).
+        if (source == 'changeset-only' or
+            (source == 'compatibility' and p1copies is not None)):
+            return p1copies or {}, p2copies or {}
+
+        # Otherwise (config said to read only from filelog, or we are in
+        # compatibility mode and there is no data in the changeset), we get
+        # the copy metadata from the filelogs.
+        p1copies = {}
+        p2copies = {}
+        p1 = self.p1()
+        p2 = self.p2()
+        narrowmatch = self._repo.narrowmatch()
+        for dst in self.files():
+            if not narrowmatch(dst) or dst not in self:
+                continue
+            copied = self[dst].renamed()
+            if not copied:
+                continue
+            src, srcnode = copied
+            if src in p1 and p1[src].filenode() == srcnode:
+                p1copies[dst] = src
+            elif src in p2 and p2[src].filenode() == srcnode:
+                p2copies[dst] = src
+        return p1copies, p2copies
+    def p1copies(self):
+        return self._copies[0]
+    def p2copies(self):
+        return self._copies[1]
     def description(self):
         return self._changeset.description
     def branch(self):
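
The ``_copies`` property above implements the tri-state
``experimental.copies.read-from`` policy: ``changeset-only`` trusts the
changeset (defaulting to empty), ``compatibility`` prefers changeset data
when present and otherwise falls back to the filelogs, and ``filelog-only``
ignores changeset data entirely. The fallback rule in isolation
(hypothetical sketch, a single parent for brevity)::

   def read_copies(source, changeset_copies, filelog_copies):
       # changeset_copies is None when nothing was recorded there.
       if source == 'changeset-only':
           return changeset_copies or {}
       if source == 'compatibility' and changeset_copies is not None:
           return changeset_copies
       return filelog_copies
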
@@ -668,6 +706,8 @@
         return self._changectx
     def renamed(self):
         return self._copied
+    def copysource(self):
+        return self._copied and self._copied[0]
     def repo(self):
         return self._repo
     def size(self):
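
``copysource()`` is the lightweight companion to ``renamed()``: callers that
only need the source *path* no longer unpack the ``(path, filenode)`` tuple,
and several call sites are converted below. The accessor pair in miniature
(hypothetical class)::

   class fctxsketch(object):
       def __init__(self, copied):
           self._copied = copied    # None or (path, filenode)
       def renamed(self):
           return self._copied      # historical tuple interface
       def copysource(self):
           return self._copied and self._copied[0]  # path only
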
@@ -960,9 +1000,9 @@
 
         assert (changeid is not None
                 or fileid is not None
-                or changectx is not None), \
-                ("bad args: changeid=%r, fileid=%r, changectx=%r"
-                 % (changeid, fileid, changectx))
+                or changectx is not None), (
+                    "bad args: changeid=%r, fileid=%r, changectx=%r"
+                    % (changeid, fileid, changectx))
 
         if filelog is not None:
             self._filelog = filelog
@@ -1158,7 +1198,6 @@
     def files(self):
         return sorted(self._status.modified + self._status.added +
                       self._status.removed)
-
     def modified(self):
         return self._status.modified
     def added(self):
@@ -1167,6 +1206,26 @@
         return self._status.removed
     def deleted(self):
         return self._status.deleted
+    @propertycache
+    def _copies(self):
+        p1copies = {}
+        p2copies = {}
+        parents = self._repo.dirstate.parents()
+        p1manifest = self._repo[parents[0]].manifest()
+        p2manifest = self._repo[parents[1]].manifest()
+        narrowmatch = self._repo.narrowmatch()
+        for dst, src in self._repo.dirstate.copies().items():
+            if not narrowmatch(dst):
+                continue
+            if src in p1manifest:
+                p1copies[dst] = src
+            elif src in p2manifest:
+                p2copies[dst] = src
+        return p1copies, p2copies
+    def p1copies(self):
+        return self._copies[0]
+    def p2copies(self):
+        return self._copies[1]
     def branch(self):
         return encoding.tolocal(self._extra['branch'])
     def closesbranch(self):
@@ -1280,7 +1339,7 @@
         return self._repo.dirstate[key] not in "?r"
 
     def hex(self):
-        return hex(wdirid)
+        return wdirhex
 
     @propertycache
     def _parents(self):
@@ -1355,28 +1414,15 @@
             uipath = lambda f: ds.pathto(pathutil.join(prefix, f))
             rejected = []
             for f in files:
-                if f not in self._repo.dirstate:
+                if f not in ds:
                     self._repo.ui.warn(_("%s not tracked!\n") % uipath(f))
                     rejected.append(f)
-                elif self._repo.dirstate[f] != 'a':
-                    self._repo.dirstate.remove(f)
+                elif ds[f] != 'a':
+                    ds.remove(f)
                 else:
-                    self._repo.dirstate.drop(f)
+                    ds.drop(f)
             return rejected
 
-    def undelete(self, list):
-        pctxs = self.parents()
-        with self._repo.wlock():
-            ds = self._repo.dirstate
-            for f in list:
-                if self._repo.dirstate[f] != 'r':
-                    self._repo.ui.warn(_("%s not removed!\n") % ds.pathto(f))
-                else:
-                    fctx = f in pctxs[0] and pctxs[0][f] or pctxs[1][f]
-                    t = fctx.data()
-                    self._repo.wwrite(f, t, fctx.flags())
-                    self._repo.dirstate.normal(f)
-
     def copy(self, source, dest):
         try:
             st = self._repo.wvfs.lstat(dest)
@@ -1392,11 +1438,12 @@
                                % self._repo.dirstate.pathto(dest))
         else:
             with self._repo.wlock():
-                if self._repo.dirstate[dest] in '?':
-                    self._repo.dirstate.add(dest)
-                elif self._repo.dirstate[dest] in 'r':
-                    self._repo.dirstate.normallookup(dest)
-                self._repo.dirstate.copy(source, dest)
+                ds = self._repo.dirstate
+                if ds[dest] in '?':
+                    ds.add(dest)
+                elif ds[dest] in 'r':
+                    ds.normallookup(dest)
+                ds.copy(source, dest)
 
     def match(self, pats=None, include=None, exclude=None, default='glob',
               listsubrepos=False, badfn=None):
@@ -1632,6 +1679,12 @@
         # linked to self._changectx no matter if file is modified or not
         return self.rev()
 
+    def renamed(self):
+        path = self.copysource()
+        if not path:
+            return None
+        return path, self._changectx._parents[0]._manifest.get(path, nullid)
+
     def parents(self):
         '''return parent filectxs, following copies if necessary'''
         def filenode(ctx, path):
@@ -1668,11 +1721,8 @@
 
     def data(self):
         return self._repo.wread(self._path)
-    def renamed(self):
-        rp = self._repo.dirstate.copied(self._path)
-        if not rp:
-            return None
-        return rp, self._changectx._parents[0]._manifest.get(rp, nullid)
+    def copysource(self):
+        return self._repo.dirstate.copied(self._path)
 
     def size(self):
         return self._repo.wvfs.lstat(self._path).st_size
@@ -1822,6 +1872,30 @@
         return [f for f in self._cache.keys() if
                 not self._cache[f]['exists'] and self._existsinparent(f)]
 
+    def p1copies(self):
+        copies = self._repo._wrappedctx.p1copies().copy()
+        narrowmatch = self._repo.narrowmatch()
+        for f in self._cache.keys():
+            if not narrowmatch(f):
+                continue
+            copies.pop(f, None) # delete if it exists
+            source = self._cache[f]['copied']
+            if source:
+                copies[f] = source
+        return copies
+
+    def p2copies(self):
+        copies = self._repo._wrappedctx.p2copies().copy()
+        narrowmatch = self._repo.narrowmatch()
+        for f in self._cache.keys():
+            if not narrowmatch(f):
+                continue
+            copies.pop(f, None) # delete if it exists
+            source = self._cache[f]['copied']
+            if source:
+                copies[f] = source
+        return copies
+
     def isinmemory(self):
         return True
 
@@ -1832,10 +1906,8 @@
             return self._wrappedctx[path].date()
 
     def markcopied(self, path, origin):
-        if self.isdirty(path):
-            self._cache[path]['copied'] = origin
-        else:
-            raise error.ProgrammingError('markcopied() called on clean context')
+        self._markdirty(path, exists=True, date=self.filedate(path),
+                        flags=self.flags(path), copied=origin)
 
     def copydata(self, path):
         if self.isdirty(path):
@@ -1897,7 +1969,7 @@
 
         # Test the other direction -- that this path from p2 isn't a directory
         # in p1 (test that p1 doesn't have any paths matching `path/*`).
-        match = self.match(pats=[path + '/'], default=b'path')
+        match = self.match([path], default=b'path')
         matches = self.p1().manifest().matches(match)
         mfiles = matches.keys()
         if len(mfiles) > 0:
@@ -1908,7 +1980,7 @@
             if not mfiles:
                 return
             raise error.Abort("error: file '%s' cannot be written because "
-                              " '%s/' is a folder in %s (containing %d "
+                              " '%s/' is a directory in %s (containing %d "
                               "entries: %s)"
                               % (path, path, self.p1(), len(mfiles),
                                  ', '.join(mfiles)))
@@ -2039,7 +2111,8 @@
             del self._cache[path]
         return keys
 
-    def _markdirty(self, path, exists, data=None, date=None, flags=''):
+    def _markdirty(self, path, exists, data=None, date=None, flags='',
+        copied=None):
         # data not provided, let's see if we already have some; if not, let's
         # grab it from our underlying context, so that we always have data if
         # the file is marked as existing.
@@ -2052,7 +2125,7 @@
             'data': data,
             'date': date,
             'flags': flags,
-            'copied': None,
+            'copied': copied,
         }
 
     def filectx(self, path, filelog=None):
@@ -2088,11 +2161,8 @@
     def lexists(self):
         return self._parent.exists(self._path)
 
-    def renamed(self):
-        path = self._parent.copydata(self._path)
-        if not path:
-            return None
-        return path, self._changectx._parents[0]._manifest.get(path, nullid)
+    def copysource(self):
+        return self._parent.copydata(self._path)
 
     def size(self):
         return self._parent.size(self._path)
@@ -2178,14 +2248,10 @@
     """
     def getfilectx(repo, memctx, path):
         fctx = ctx[path]
-        # this is weird but apparently we only keep track of one parent
-        # (why not only store that instead of a tuple?)
-        copied = fctx.renamed()
-        if copied:
-            copied = copied[0]
+        copysource = fctx.copysource()
         return memfilectx(repo, memctx, path, fctx.data(),
                           islink=fctx.islink(), isexec=fctx.isexec(),
-                          copied=copied)
+                          copysource=copysource)
 
     return getfilectx
 
@@ -2195,12 +2261,12 @@
     This is a convenience method for building a memctx based on a patchstore.
     """
     def getfilectx(repo, memctx, path):
-        data, mode, copied = patchstore.getfile(path)
+        data, mode, copysource = patchstore.getfile(path)
         if data is None:
             return None
         islink, isexec = mode
         return memfilectx(repo, memctx, path, data, islink=islink,
-                          isexec=isexec, copied=copied)
+                          isexec=isexec, copysource=copysource)
 
     return getfilectx
 
@@ -2326,7 +2392,7 @@
     See memctx and committablefilectx for more details.
     """
     def __init__(self, repo, changectx, path, data, islink=False,
-                 isexec=False, copied=None):
+                 isexec=False, copysource=None):
         """
         path is the normalized file path relative to repository root.
         data is the file content as a string.
@@ -2342,9 +2408,10 @@
             self._flags = 'x'
         else:
             self._flags = ''
-        self._copied = None
-        if copied:
-            self._copied = (copied, nullid)
+        self._copysource = copysource
+
+    def copysource(self):
+        return self._copysource
 
     def cmp(self, fctx):
         return self.data() != fctx.data()
--- a/mercurial/copies.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/copies.py	Wed Apr 17 13:41:18 2019 -0400
@@ -17,21 +17,19 @@
     match as matchmod,
     node,
     pathutil,
-    scmutil,
     util,
 )
 from .utils import (
     stringutil,
 )
 
-def _findlimit(repo, a, b):
+def _findlimit(repo, ctxa, ctxb):
     """
     Find the last revision that needs to be checked to ensure that a full
     transitive closure for file copies can be properly calculated.
     Generally, this means finding the earliest revision number that's an
     ancestor of a or b but not both, except when a or b is a direct descendant
     of the other, in which case we can return the minimum revnum of a and b.
-    None if no such revision exists.
     """
 
     # basic idea:
@@ -46,27 +44,32 @@
     #   - quit when interesting revs is zero
 
     cl = repo.changelog
+    wdirparents = None
+    a = ctxa.rev()
+    b = ctxb.rev()
     if a is None:
+        wdirparents = (ctxa.p1(), ctxa.p2())
         a = node.wdirrev
     if b is None:
+        assert not wdirparents
+        wdirparents = (ctxb.p1(), ctxb.p2())
         b = node.wdirrev
 
     side = {a: -1, b: 1}
     visit = [-a, -b]
     heapq.heapify(visit)
     interesting = len(visit)
-    hascommonancestor = False
     limit = node.wdirrev
 
     while interesting:
         r = -heapq.heappop(visit)
         if r == node.wdirrev:
-            parents = [cl.rev(p) for p in repo.dirstate.parents()]
+            parents = [pctx.rev() for pctx in wdirparents]
         else:
             parents = cl.parentrevs(r)
+        if parents[1] == node.nullrev:
+            parents = parents[:1]
         for p in parents:
-            if p < 0:
-                continue
             if p not in side:
                 # first time we see p; add it to visit
                 side[p] = side[r]
@@ -77,14 +80,10 @@
                 # p was interesting but now we know better
                 side[p] = 0
                 interesting -= 1
-                hascommonancestor = True
         if side[r]:
             limit = r # lowest rev visited
             interesting -= 1
 
-    if not hascommonancestor:
-        return None
-
     # Consider the following flow (see test-commit-amend.t under issue4405):
     # 1/ File 'a0' committed
     # 2/ File renamed from 'a0' to 'a1' in a new commit (call it 'a1')
@@ -124,10 +123,13 @@
             # file is a copy of an existing file
             t[k] = v
 
-    # remove criss-crossed copies
     for k, v in list(t.items()):
+        # remove criss-crossed copies
         if k in src and v in dst:
             del t[k]
+        # remove copies to files that were then removed
+        elif k not in dst:
+            del t[k]
 
     return t
 
@@ -141,8 +143,8 @@
         if limit >= 0 and not f.isintroducedafter(limit):
             return None
 
-def _dirstatecopies(d, match=None):
-    ds = d._repo.dirstate
+def _dirstatecopies(repo, match=None):
+    ds = repo.dirstate
     c = ds.copies().copy()
     for k in list(c):
         if ds[k] not in 'anm' or (match and not match(k)):
@@ -158,19 +160,26 @@
     mb = b.manifest()
     return mb.filesnotin(ma, match=match)
 
+def usechangesetcentricalgo(repo):
+    """Checks if we should use changeset-centric copy algorithms"""
+    return (repo.ui.config('experimental', 'copies.read-from') in
+            ('changeset-only', 'compatibility'))
+
 def _committedforwardcopies(a, b, match):
     """Like _forwardcopies(), but b.rev() cannot be None (working copy)"""
     # files might have to be traced back to the fctx parent of the last
     # one-side-only changeset, but not further back than that
     repo = a._repo
+
+    if usechangesetcentricalgo(repo):
+        return _changesetforwardcopies(a, b, match)
+
     debug = repo.ui.debugflag and repo.ui.configbool('devel', 'debug.copies')
     dbg = repo.ui.debug
     if debug:
         dbg('debug.copies:    looking into rename from %s to %s\n'
             % (a, b))
-    limit = _findlimit(repo, a.rev(), b.rev())
-    if limit is None:
-        limit = node.nullrev
+    limit = _findlimit(repo, a, b)
     if debug:
         dbg('debug.copies:      search limit: %d\n' % limit)
     am = a.manifest()
@@ -188,7 +197,7 @@
     # this comparison.
     forwardmissingmatch = match
     if b.p1() == a and b.p2().node() == node.nullid:
-        filesmatcher = scmutil.matchfiles(a._repo, b.files())
+        filesmatcher = matchmod.exact(b.files())
         forwardmissingmatch = matchmod.intersectmatchers(match, filesmatcher)
     missing = _computeforwardmissing(a, b, match=forwardmissingmatch)
 
@@ -215,6 +224,76 @@
                 % (util.timer() - start))
     return cm
 
+def _changesetforwardcopies(a, b, match):
+    if a.rev() == node.nullrev:
+        return {}
+
+    repo = a.repo()
+    children = {}
+    cl = repo.changelog
+    missingrevs = cl.findmissingrevs(common=[a.rev()], heads=[b.rev()])
+    for r in missingrevs:
+        for p in cl.parentrevs(r):
+            if p == node.nullrev:
+                continue
+            if p not in children:
+                children[p] = [r]
+            else:
+                children[p].append(r)
+
+    roots = set(children) - set(missingrevs)
+    # 'work' contains 3-tuples of a (revision number, parent number, copies).
+    # The parent number is only used for knowing which parent the copies dict
+    # came from.
+    work = [(r, 1, {}) for r in roots]
+    heapq.heapify(work)
+    while work:
+        r, i1, copies1 = heapq.heappop(work)
+        if work and work[0][0] == r:
+            # We are tracing copies from both parents
+            r, i2, copies2 = heapq.heappop(work)
+            copies = {}
+            ctx = repo[r]
+            p1man, p2man = ctx.p1().manifest(), ctx.p2().manifest()
+            allcopies = set(copies1) | set(copies2)
+            # TODO: perhaps this filtering should be done as long as ctx
+            # is a merge, whether or not we're tracing from both parents.
+            for dst in allcopies:
+                if not match(dst):
+                    continue
+                if dst not in copies2:
+                    # Copied on p1 side: mark as copy from p1 side if it didn't
+                    # already exist on p2 side
+                    if dst not in p2man:
+                        copies[dst] = copies1[dst]
+                elif dst not in copies1:
+                    # Copied on p2 side: mark as copy from p2 side if it didn't
+                    # already exist on p1 side
+                    if dst not in p1man:
+                        copies[dst] = copies2[dst]
+                else:
+                    # Copied on both sides: mark as copy from p1 side
+                    copies[dst] = copies1[dst]
+        else:
+            copies = copies1
+        if r == b.rev():
+            return copies
+        for c in children[r]:
+            childctx = repo[c]
+            if r == childctx.p1().rev():
+                parent = 1
+                childcopies = childctx.p1copies()
+            else:
+                assert r == childctx.p2().rev()
+                parent = 2
+                childcopies = childctx.p2copies()
+            if not match.always():
+                childcopies = {dst: src for dst, src in childcopies.items()
+                               if match(dst)}
+            childcopies = _chain(a, childctx, copies, childcopies)
+            heapq.heappush(work, (c, parent, childcopies))
+    assert False
+
 def _forwardcopies(a, b, match=None):
     """find {dst@b: src@a} copy mapping where a is an ancestor of b"""
 
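
``_changesetforwardcopies()`` walks the revisions between ``a`` and ``b``
parent-first: it inverts the parent links of the missing revisions into a
children map, seeds a heap with the roots, and propagates each revision's
copies dict to its children. A min-heap keeps the walk topological because
Mercurial revision numbers always increase from parent to child. The setup
step in isolation (hypothetical sketch)::

   def build_children(parentrevs, missingrevs):
       # parentrevs(r) -> iterable of parent rev numbers (-1 for null).
       children = {}
       for r in missingrevs:
           for p in parentrevs(r):
               if p == -1:
                   continue
               children.setdefault(p, []).append(r)
       roots = set(children) - set(missingrevs)
       return children, roots
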
@@ -223,23 +302,28 @@
     if b.rev() is None:
         if a == b.p1():
             # short-circuit to avoid issues with merge states
-            return _dirstatecopies(b, match)
+            return _dirstatecopies(b._repo, match)
 
         cm = _committedforwardcopies(a, b.p1(), match)
         # combine copies from dirstate if necessary
-        return _chain(a, b, cm, _dirstatecopies(b, match))
+        return _chain(a, b, cm, _dirstatecopies(b._repo, match))
     return _committedforwardcopies(a, b, match)
 
-def _backwardrenames(a, b):
+def _backwardrenames(a, b, match):
     if a._repo.ui.config('experimental', 'copytrace') == 'off':
         return {}
 
     # Even though we're not taking copies into account, 1:n rename situations
     # can still exist (e.g. hg cp a b; hg mv a c). In those cases we
     # arbitrarily pick one of the renames.
+    # We don't want to pass in "match" here, since that would filter
+    # the destination by it. Since we're reversing the copies, we want
+    # to filter the source instead.
     f = _forwardcopies(b, a)
     r = {}
     for k, v in sorted(f.iteritems()):
+        if match and not match(v):
+            continue
         # remove copies
         if v in a:
             continue
@@ -263,10 +347,10 @@
     if a == y:
         if debug:
             repo.ui.debug('debug.copies: search mode: backward\n')
-        return _backwardrenames(x, y)
+        return _backwardrenames(x, y, match=match)
     if debug:
         repo.ui.debug('debug.copies: search mode: combined\n')
-    return _chain(x, y, _backwardrenames(x, a),
+    return _chain(x, y, _backwardrenames(x, a, match=match),
                   _forwardcopies(a, y, match=match))
 
 def _computenonoverlap(repo, c1, c2, addedinm1, addedinm2, baselabel=''):
@@ -346,8 +430,7 @@
 
 def mergecopies(repo, c1, c2, base):
     """
-    The function calling different copytracing algorithms on the basis of config
-    which find moves and copies between context c1 and c2 that are relevant for
+    Finds moves and copies between context c1 and c2 that are relevant for
     merging. 'base' will be used as the merge base.
 
     Copytracing is used in commands like rebase, merge, unshelve, etc to merge
@@ -388,14 +471,18 @@
     "dirmove" is a mapping of detected source dir -> destination dir renames.
     This is needed for handling changes to new files previously grafted into
     renamed directories.
+
+    This function calls different copytracing algorithms based on config.
     """
     # avoid silly behavior for update from empty dir
     if not c1 or not c2 or c1 == c2:
         return {}, {}, {}, {}, {}
 
+    narrowmatch = c1.repo().narrowmatch()
+
     # avoid silly behavior for parent -> working dir
     if c2.node() is None and c1.node() == repo.dirstate.p1():
-        return repo.dirstate.copies(), {}, {}, {}, {}
+        return _dirstatecopies(repo, narrowmatch), {}, {}, {}, {}
 
     copytracing = repo.ui.config('experimental', 'copytrace')
     boolctrace = stringutil.parsebool(copytracing)
@@ -464,10 +551,7 @@
     if graft:
         tca = _c1.ancestor(_c2)
 
-    limit = _findlimit(repo, c1.rev(), c2.rev())
-    if limit is None:
-        # no common ancestor, no copies
-        return {}, {}, {}, {}, {}
+    limit = _findlimit(repo, c1, c2)
     repo.ui.debug("  searching for copies back to rev %d\n" % limit)
 
     m1 = c1.manifest()
@@ -529,7 +613,7 @@
     if dirtyc1:
         _combinecopies(data2['incomplete'], data1['incomplete'], copy, diverge,
                        incompletediverge)
-    else:
+    if dirtyc2:
         _combinecopies(data1['incomplete'], data2['incomplete'], copy, diverge,
                        incompletediverge)
 
@@ -568,7 +652,13 @@
     for f in bothnew:
         _checkcopies(c1, c2, f, base, tca, dirtyc1, limit, both1)
         _checkcopies(c2, c1, f, base, tca, dirtyc2, limit, both2)
-    if dirtyc1:
+    if dirtyc1 and dirtyc2:
+        remainder = _combinecopies(both2['incomplete'], both1['incomplete'],
+                                   copy, bothdiverge, bothincompletediverge)
+        remainder1 = _combinecopies(both1['incomplete'], both2['incomplete'],
+                                   copy, bothdiverge, bothincompletediverge)
+        remainder.update(remainder1)
+    elif dirtyc1:
         # incomplete copies may only be found on the "dirty" side for bothnew
         assert not both2['incomplete']
         remainder = _combinecopies({}, both1['incomplete'], copy, bothdiverge,
@@ -781,7 +871,7 @@
     """
 
     if f1 == f2:
-        return f1 # a match
+        return True # a match
 
     g1, g2 = f1.ancestors(), f2.ancestors()
     try:
--- a/mercurial/crecord.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/crecord.py	Wed Apr 17 13:41:18 2019 -0400
@@ -20,6 +20,7 @@
     encoding,
     error,
     patch as patchmod,
+    pycompat,
     scmutil,
     util,
 )
@@ -30,7 +31,7 @@
 
 # This is required for ncurses to display non-ASCII characters in default user
 # locale encoding correctly.  --immerrr
-locale.setlocale(locale.LC_ALL, u'')
+locale.setlocale(locale.LC_ALL, r'')
 
 # patch comments based on the git one
 diffhelptext = _("""# To remove '-' lines, make them ' ' lines (context).
@@ -377,9 +378,9 @@
     def countchanges(self):
         """changedlines -> (n+,n-)"""
         add = len([l for l in self.changedlines if l.applied
-                   and l.prettystr()[0] == '+'])
+                    and l.prettystr().startswith('+')])
         rem = len([l for l in self.changedlines if l.applied
-                   and l.prettystr()[0] == '-'])
+                    and l.prettystr().startswith('-')])
         return add, rem
 
     def getfromtoline(self):
@@ -423,7 +424,7 @@
             changedlinestr = changedline.prettystr()
             if changedline.applied:
                 hunklinelist.append(changedlinestr)
-            elif changedlinestr[0] == "-":
+            elif changedlinestr.startswith("-"):
                 hunklinelist.append(" " + changedlinestr[1:])
 
         fp.write(''.join(self.before + hunklinelist + self.after))
@@ -471,11 +472,11 @@
         for line in self.changedlines:
             text = line.linetext
             if line.applied:
-                if text[0] == '+':
+                if text.startswith('+'):
                     dels.append(text[1:])
-                elif text[0] == '-':
+                elif text.startswith('-'):
                     adds.append(text[1:])
-            elif text[0] == '+':
+            elif text.startswith('+'):
                 dels.append(text[1:])
                 adds.append(text[1:])
         hunk = ['-%s' % l for l in dels] + ['+%s' % l for l in adds]
@@ -487,7 +488,7 @@
         return getattr(self._hunk, name)
 
     def __repr__(self):
-        return '<hunk %r@%d>' % (self.filename(), self.fromline)
+        return r'<hunk %r@%d>' % (self.filename(), self.fromline)
 
 def filterpatch(ui, chunks, chunkselector, operation=None):
     """interactively filter patch chunks into applied-only chunks"""
@@ -553,6 +554,14 @@
     of the chosen chunks.
     """
     chunkselector = curseschunkselector(headerlist, ui, operation)
+
+    class dummystdscr(object):
+        def clear(self):
+            pass
+        def refresh(self):
+            pass
+
+    chunkselector.stdscr = dummystdscr()
     if testfn and os.path.exists(testfn):
         testf = open(testfn, 'rb')
         testcommands = [x.rstrip('\n') for x in testf.readlines()]
@@ -565,6 +574,7 @@
 _headermessages = { # {operation: text}
     'apply': _('Select hunks to apply'),
     'discard': _('Select hunks to discard'),
+    'keep': _('Select hunks to keep'),
     None: _('Select hunks to record'),
 }
 
@@ -952,8 +962,8 @@
         # turn tabs into spaces
         instr = instr.expandtabs(4)
         strwidth = encoding.colwidth(instr)
-        numspaces = (width - ((strwidth + xstart) % width) - 1)
-        return instr + " " * numspaces + "\n"
+        numspaces = (width - ((strwidth + xstart) % width))
+        return instr + " " * numspaces
 
     def printstring(self, window, text, fgcolor=None, bgcolor=None, pair=None,
         pairname=None, attrlist=None, towin=True, align=True, showwhtspc=False):
@@ -1457,6 +1467,8 @@
         pgup/pgdn [K/J] : go to previous/next item of same type
  right/left-arrow [l/h] : go to child item / parent item
  shift-left-arrow   [H] : go to parent header / fold selected header
+                      g : go to the top
+                      G : go to the bottom
                       f : fold / unfold item, hiding/revealing its children
                       F : fold / unfold parent item and all of its ancestors
                  ctrl-l : scroll the selected line to the top of the screen
@@ -1495,6 +1507,45 @@
         self.stdscr.refresh()
         self.stdscr.keypad(1) # allow arrow-keys to continue to function
 
+    def handlefirstlineevent(self):
+        """
+        Handle 'g' to navigate to the topmost file in the ncurses window.
+        """
+        self.currentselecteditem = self.headerlist[0]
+        currentitem = self.currentselecteditem
+        # select the parent item recursively until we're at a header
+        while True:
+            nextitem = currentitem.parentitem()
+            if nextitem is None:
+                break
+            else:
+                currentitem = nextitem
+
+        self.currentselecteditem = currentitem
+
+    def handlelastlineevent(self):
+        """
+        Handle 'G' to navigate to the bottommost file/hunk/line depending
+        on whether the fold is active or not.
+
+        If the bottommost file is folded, it navigates to that file and
+        stops there. If the bottommost file is unfolded, it navigates to
+        the bottommost hunk in that file and stops there. If the bottommost
+        hunk is unfolded, it navigates to the bottommost line in that hunk.
+        """
+        currentitem = self.currentselecteditem
+        nextitem = currentitem.nextitem()
+        # select the next item recursively until we're at the bottom
+        while nextitem is not None:
+            nextitem = currentitem.nextitem()
+            if nextitem is None:
+                break
+            else:
+                currentitem = nextitem
+
+        self.currentselecteditem = currentitem
+        self.recenterdisplayedarea()
+
     def confirmationwindow(self, windowtext):
         "display an informational window, then wait for and return a keypress."
 
@@ -1519,10 +1570,10 @@
         """ask for 'y' to be pressed to confirm selected. return True if
         confirmed."""
         confirmtext = _(
-"""if you answer yes to the following, the your currently chosen patch chunks
-will be loaded into an editor.  you may modify the patch from the editor, and
-save the changes if you wish to change the patch.  otherwise, you can just
-close the editor without saving to accept the current patch as-is.
+"""If you answer yes to the following, your currently chosen patch chunks
+will be loaded into an editor. To modify the patch, make the changes in your
+editor and save. To accept the current patch as-is, close the editor without
+saving.
 
 note: don't add/remove lines unless you also modify the range information.
       failing to follow this rule will result in the commit aborting.
@@ -1546,14 +1597,7 @@
         new changeset will be created (the normal commit behavior).
         """
 
-        try:
-            ver = float(util.version()[:3])
-        except ValueError:
-            ver = 1
-        if ver < 2.19:
-            msg = _("The amend option is unavailable with hg versions < 2.2\n\n"
-                    "Press any key to continue.")
-        elif opts.get('amend') is None:
+        if opts.get('amend') is None:
             opts['amend'] = True
             msg = _("Amend option is turned on -- committing the currently "
                     "selected changes will not create a new changeset, but "
@@ -1611,6 +1655,9 @@
             except error.Abort as exc:
                 self.errorstr = str(exc)
                 return None
+            finally:
+                self.stdscr.clear()
+                self.stdscr.refresh()
 
             # remove comment lines
             patch = [line + '\n' for line in patch.splitlines()
@@ -1674,6 +1721,7 @@
 
         Return true to exit the main loop.
         """
+        keypressed = pycompat.bytestr(keypressed)
         if keypressed in ["k", "KEY_UP"]:
             self.uparrowevent()
         if keypressed in ["K", "KEY_PPAGE"]:
@@ -1718,13 +1766,20 @@
             self.togglefolded(foldparent=True)
         elif keypressed in ["m"]:
             self.commitMessageWindow()
+        elif keypressed in ["g", "KEY_HOME"]:
+            self.handlefirstlineevent()
+        elif keypressed in ["G", "KEY_END"]:
+            self.handlelastlineevent()
         elif keypressed in ["?"]:
             self.helpwindow()
             self.stdscr.clear()
             self.stdscr.refresh()
         elif curses.unctrl(keypressed) in ["^L"]:
-            # scroll the current line to the top of the screen
+            # scroll the current line to the top of the screen, and redraw
+            # everything
             self.scrolllines(self.selecteditemstartline)
+            self.stdscr.clear()
+            self.stdscr.refresh()
 
     def main(self, stdscr):
         """
@@ -1754,6 +1809,18 @@
         except curses.error:
             self.usecolor = False
 
+        # In some situations we may have some cruft left on the "alternate
+        # screen" from another program (or previous iterations of ourself), and
+        # we won't clear it if the scroll region is small enough to comfortably
+        # fit on the terminal.
+        self.stdscr.clear()
+
+        # don't display the cursor
+        try:
+            curses.curs_set(0)
+        except curses.error:
+            pass
+
         # available colors: black, blue, cyan, green, magenta, white, yellow
         # init_pair(color_id, foreground_color, background_color)
         self.initcolorpair(None, None, name="normal")
@@ -1799,6 +1866,7 @@
                 break
 
         if self.commenttext != "":
-            whitespaceremoved = re.sub("(?m)^\s.*(\n|$)", "", self.commenttext)
+            whitespaceremoved = re.sub(br"(?m)^\s.*(\n|$)", b"",
+                                       self.commenttext)
             if whitespaceremoved != "":
                 self.opts['message'] = self.commenttext
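
The new ``g``/``G`` handlers above walk the item tree through
``parentitem()``/``nextitem()`` until the walk runs out of items; in the real
code ``nextitem()`` already skips folded children, which is why 'G' stops at a
folded file. A minimal sketch of that traversal pattern, using a toy ``Item``
class rather than crecord's real header/hunk/line objects::

    class Item(object):
        """Toy stand-in for crecord's patch-tree items."""
        def __init__(self, parent=None, nxt=None):
            self._parent, self._next = parent, nxt
        def parentitem(self):
            return self._parent
        def nextitem(self):
            return self._next

    def topmost(item):
        # climb parents until we reach a header (an item with no parent)
        while item.parentitem() is not None:
            item = item.parentitem()
        return item

    def bottommost(item):
        # follow nextitem() until there is nothing below us
        while item.nextitem() is not None:
            item = item.nextitem()
        return item

    header = Item()
    hunk = Item(parent=header)
    header._next = hunk
    assert topmost(hunk) is header and bottommost(header) is hunk
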
--- a/mercurial/dagop.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/dagop.py	Wed Apr 17 13:41:18 2019 -0400
@@ -28,7 +28,7 @@
 generatorset = smartset.generatorset
 
 # possible maximum depth between null and wdir()
-_maxlogdepth = 0x80000000
+maxlogdepth = 0x80000000
 
 def _walkrevtree(pfunc, revs, startdepth, stopdepth, reverse):
     """Walk DAG using 'pfunc' from the given 'revs' nodes
@@ -42,7 +42,7 @@
     if startdepth is None:
         startdepth = 0
     if stopdepth is None:
-        stopdepth = _maxlogdepth
+        stopdepth = maxlogdepth
     if stopdepth == 0:
         return
     if stopdepth < 0:
@@ -142,7 +142,7 @@
 
 def revancestors(repo, revs, followfirst=False, startdepth=None,
                  stopdepth=None, cutfunc=None):
-    """Like revlog.ancestors(), but supports additional options, includes
+    r"""Like revlog.ancestors(), but supports additional options, includes
     the given revs themselves, and returns a smartset
 
     Scan ends at the stopdepth (exclusive) if specified. Revisions found
@@ -221,7 +221,7 @@
     Scan ends at the stopdepth (exclusive) if specified. Revisions found
     earlier than the startdepth are omitted.
     """
-    if startdepth is None and stopdepth is None:
+    if startdepth is None and (stopdepth is None or stopdepth >= maxlogdepth):
         gen = _genrevdescendants(repo, revs, followfirst)
     else:
         gen = _genrevdescendantsofdepth(repo, revs, followfirst,
@@ -764,7 +764,7 @@
     the input set.
     """
     headrevs = set(revs)
-    parents = set([node.nullrev])
+    parents = {node.nullrev}
     up = parents.update
 
     for rev in revs:
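
Renaming ``_maxlogdepth`` to ``maxlogdepth`` lets ``revdescendants()`` treat
any ``stopdepth >= maxlogdepth`` as unbounded and take the cheaper generator.
A hedged sketch of depth-bounded walking with that sentinel (a simplified
breadth-first walk, not dagop's actual revision-ordered algorithm)::

    import collections

    MAXLOGDEPTH = 0x80000000  # mirrors dagop.maxlogdepth

    def walk(parents, revs, startdepth=0, stopdepth=None):
        """Yield revs between startdepth and stopdepth (exclusive)."""
        if stopdepth is None or stopdepth >= MAXLOGDEPTH:
            stopdepth = MAXLOGDEPTH  # effectively unlimited
        seen = set()
        queue = collections.deque((r, 0) for r in revs)
        while queue:
            rev, depth = queue.popleft()
            if rev in seen or depth >= stopdepth:
                continue
            seen.add(rev)
            if depth >= startdepth:
                yield rev
            queue.extend((p, depth + 1) for p in parents(rev))

    # parents as a simple dict-backed function: 3 -> 2 -> 1 -> 0
    parents = {3: [2], 2: [1], 1: [0], 0: []}.get
    assert list(walk(parents, [3], stopdepth=2)) == [3, 2]
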
--- a/mercurial/debugcommands.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/debugcommands.py	Wed Apr 17 13:41:18 2019 -0400
@@ -38,6 +38,7 @@
     cmdutil,
     color,
     context,
+    copies,
     dagparser,
     encoding,
     error,
@@ -81,6 +82,7 @@
 )
 from .utils import (
     cborutil,
+    compression,
     dateutil,
     procutil,
     stringutil,
@@ -745,7 +747,6 @@
         nodates = True
     datesort = opts.get(r'datesort')
 
-    timestr = ""
     if datesort:
         keyfunc = lambda x: (x[1][3], x[0]) # sort by mtime, then by filename
     else:
@@ -772,6 +773,7 @@
     ('', 'nonheads', None,
      _('use old-style discovery with non-heads included')),
     ('', 'rev', [], 'restrict discovery to this set of revs'),
+    ('', 'seed', '12323', 'specify the random seed used for discovery'),
     ] + cmdutil.remoteopts,
     _('[--rev REV] [OTHER]'))
 def debugdiscovery(ui, repo, remoteurl="default", **opts):
@@ -782,10 +784,12 @@
     ui.status(_('comparing with %s\n') % util.hidepassword(remoteurl))
 
     # make sure tests are repeatable
-    random.seed(12323)
-
-    def doit(pushedrevs, remoteheads, remote=remote):
-        if opts.get('old'):
+    random.seed(int(opts['seed']))
+
+    if opts.get('old'):
+        def doit(pushedrevs, remoteheads, remote=remote):
             if not util.safehasattr(remote, 'branches'):
                 # enable in-client legacy support
                 remote = localrepo.locallegacypeer(remote.local())
@@ -799,26 +803,59 @@
                 clnode = repo.changelog.node
                 common = repo.revs('heads(::%ln)', common)
                 common = {clnode(r) for r in common}
-        else:
+            return common, hds
+    else:
+        def doit(pushedrevs, remoteheads, remote=remote):
             nodes = None
             if pushedrevs:
                 revs = scmutil.revrange(repo, pushedrevs)
                 nodes = [repo[r].node() for r in revs]
             common, any, hds = setdiscovery.findcommonheads(ui, repo, remote,
                                                             ancestorsof=nodes)
-        common = set(common)
-        rheads = set(hds)
-        lheads = set(repo.heads())
-        ui.write(("common heads: %s\n") %
-                 " ".join(sorted(short(n) for n in common)))
-        if lheads <= common:
-            ui.write(("local is subset\n"))
-        elif rheads <= common:
-            ui.write(("remote is subset\n"))
+            return common, hds
 
     remoterevs, _checkout = hg.addbranchrevs(repo, remote, branches, revs=None)
     localrevs = opts['rev']
-    doit(localrevs, remoterevs)
+    with util.timedcm('debug-discovery') as t:
+        common, hds = doit(localrevs, remoterevs)
+
+    # compute all statistics
+    common = set(common)
+    rheads = set(hds)
+    lheads = set(repo.heads())
+
+    data = {}
+    data['elapsed'] = t.elapsed
+    data['nb-common'] = len(common)
+    data['nb-common-local'] = len(common & lheads)
+    data['nb-common-remote'] = len(common & rheads)
+    data['nb-local'] = len(lheads)
+    data['nb-local-missing'] = data['nb-local'] - data['nb-common-local']
+    data['nb-remote'] = len(rheads)
+    data['nb-remote-unknown'] = data['nb-remote'] - data['nb-common-remote']
+    data['nb-revs'] = len(repo.revs('all()'))
+    data['nb-revs-common'] = len(repo.revs('::%ln', common))
+    data['nb-revs-missing'] = data['nb-revs'] - data['nb-revs-common']
+
+    # display discovery summary
+    ui.write(("elapsed time:  %(elapsed)f seconds\n") % data)
+    ui.write(("heads summary:\n"))
+    ui.write(("  total common heads:  %(nb-common)9d\n") % data)
+    ui.write(("    also local heads:  %(nb-common-local)9d\n") % data)
+    ui.write(("    also remote heads: %(nb-common-remote)9d\n") % data)
+    ui.write(("  local heads:         %(nb-local)9d\n") % data)
+    ui.write(("    common:            %(nb-common-local)9d\n") % data)
+    ui.write(("    missing:           %(nb-local-missing)9d\n") % data)
+    ui.write(("  remote heads:        %(nb-remote)9d\n") % data)
+    ui.write(("    common:            %(nb-common-remote)9d\n") % data)
+    ui.write(("    unknown:           %(nb-remote-unknown)9d\n") % data)
+    ui.write(("local changesets:      %(nb-revs)9d\n") % data)
+    ui.write(("  common:              %(nb-revs-common)9d\n") % data)
+    ui.write(("  missing:             %(nb-revs-missing)9d\n") % data)
+
+    if ui.verbose:
+        ui.write(("common heads: %s\n") %
+                 " ".join(sorted(short(n) for n in common)))
 
 _chunksize = 4 << 10
 
@@ -1086,6 +1123,7 @@
         ui.write("%s\n" % pycompat.byterepr(ignore))
     else:
         m = scmutil.match(repo[None], pats=files)
+        uipathfn = scmutil.getuipathfn(repo, legacyrelativevalue=True)
         for f in m.files():
             nf = util.normpath(f)
             ignored = None
@@ -1102,16 +1140,16 @@
                             break
             if ignored:
                 if ignored == nf:
-                    ui.write(_("%s is ignored\n") % m.uipath(f))
+                    ui.write(_("%s is ignored\n") % uipathfn(f))
                 else:
                     ui.write(_("%s is ignored because of "
-                               "containing folder %s\n")
-                             % (m.uipath(f), ignored))
+                               "containing directory %s\n")
+                             % (uipathfn(f), ignored))
                 ignorefile, lineno, line = ignoredata
                 ui.write(_("(ignore rule in %s, line %d: '%s')\n")
                          % (ignorefile, lineno, line))
             else:
-                ui.write(_("%s is not ignored\n") % m.uipath(f))
+                ui.write(_("%s is not ignored\n") % uipathfn(f))
 
 @command('debugindex', cmdutil.debugrevlogopts + cmdutil.formatteropts,
          _('-c|-m|FILE'))
@@ -1182,13 +1220,6 @@
     '''
     opts = pycompat.byteskwargs(opts)
 
-    def writetemp(contents):
-        (fd, name) = pycompat.mkstemp(prefix="hg-debuginstall-")
-        f = os.fdopen(fd, r"wb")
-        f.write(contents)
-        f.close()
-        return name
-
     problems = 0
 
     fm = ui.formatter('debuginstall', opts)
@@ -1269,7 +1300,8 @@
              fm.formatlist(sorted(e.name() for e in compengines
                                   if e.available()),
                            name='compengine', fmt='%s', sep=', '))
-    wirecompengines = util.compengines.supportedwireengines(util.SERVERROLE)
+    wirecompengines = compression.compengines.supportedwireengines(
+        compression.SERVERROLE)
     fm.write('compenginesserver', _('checking available compression engines '
                                     'for wire protocol (%s)\n'),
              fm.formatlist([e.name() for e in wirecompengines
@@ -1448,8 +1480,8 @@
                     if host == socket.gethostname():
                         locker = 'user %s, process %s' % (user or b'None', pid)
                     else:
-                        locker = 'user %s, process %s, host %s' \
-                                 % (user or b'None', pid, host)
+                        locker = ('user %s, process %s, host %s'
+                                  % (user or b'None', pid, host))
                 ui.write(("%-6s %s (%ds)\n") % (name + ":", locker, age))
                 return 1
             except OSError as e:
@@ -1466,50 +1498,59 @@
 
 @command('debugmanifestfulltextcache', [
         ('', 'clear', False, _('clear the cache')),
-        ('a', 'add', '', _('add the given manifest node to the cache'),
+        ('a', 'add', [], _('add the given manifest nodes to the cache'),
          _('NODE'))
     ], '')
-def debugmanifestfulltextcache(ui, repo, add=None, **opts):
+def debugmanifestfulltextcache(ui, repo, add=(), **opts):
     """show, clear or amend the contents of the manifest fulltext cache"""
-    with repo.lock():
+
+    def getcache():
         r = repo.manifestlog.getstorage(b'')
         try:
-            cache = r._fulltextcache
+            return r._fulltextcache
         except AttributeError:
-            ui.warn(_(
-                "Current revlog implementation doesn't appear to have a "
-                'manifest fulltext cache\n'))
+            msg = _("Current revlog implementation doesn't appear to have a "
+                    "manifest fulltext cache\n")
+            raise error.Abort(msg)
+
+    if opts.get(r'clear'):
+        with repo.wlock():
+            cache = getcache()
+            cache.clear(clear_persisted_data=True)
             return
 
-        if opts.get(r'clear'):
-            cache.clear()
-
-        if add:
-            try:
-                manifest = repo.manifestlog[r.lookup(add)]
-            except error.LookupError as e:
-                raise error.Abort(e, hint="Check your manifest node id")
-            manifest.read()  # stores revisision in cache too
-
-        if not len(cache):
-            ui.write(_('Cache empty'))
-        else:
-            ui.write(
-                _('Cache contains %d manifest entries, in order of most to '
-                  'least recent:\n') % (len(cache),))
-            totalsize = 0
-            for nodeid in cache:
-                # Use cache.get to not update the LRU order
-                data = cache.get(nodeid)
-                size = len(data)
-                totalsize += size + 24   # 20 bytes nodeid, 4 bytes size
-                ui.write(_('id: %s, size %s\n') % (
-                    hex(nodeid), util.bytecount(size)))
-            ondisk = cache._opener.stat('manifestfulltextcache').st_size
-            ui.write(
-                _('Total cache data size %s, on-disk %s\n') % (
-                    util.bytecount(totalsize), util.bytecount(ondisk))
-            )
+    if add:
+        with repo.wlock():
+            m = repo.manifestlog
+            store = m.getstorage(b'')
+            for n in add:
+                try:
+                    manifest = m[store.lookup(n)]
+                except error.LookupError as e:
+                    raise error.Abort(e, hint="Check your manifest node id")
+                manifest.read()  # stores revision in cache too
+            return
+
+    cache = getcache()
+    if not len(cache):
+        ui.write(_('cache empty\n'))
+    else:
+        ui.write(
+            _('cache contains %d manifest entries, in order of most to '
+              'least recent:\n') % (len(cache),))
+        totalsize = 0
+        for nodeid in cache:
+            # Use cache.peek to not update the LRU order
+            data = cache.peek(nodeid)
+            size = len(data)
+            totalsize += size + 24   # 20 bytes nodeid, 4 bytes size
+            ui.write(_('id: %s, size %s\n') % (
+                hex(nodeid), util.bytecount(size)))
+        ondisk = cache._opener.stat('manifestfulltextcache').st_size
+        ui.write(
+            _('total cache data size %s, on-disk %s\n') % (
+                util.bytecount(totalsize), util.bytecount(ondisk))
+        )
 
 @command('debugmergestate', [], '')
 def debugmergestate(ui, repo, *args):
@@ -1747,6 +1788,28 @@
             cmdutil.showmarker(fm, m, index=ind)
         fm.end()
 
+@command('debugp1copies',
+         [('r', 'rev', '', _('revision to debug'), _('REV'))],
+         _('[-r REV]'))
+def debugp1copies(ui, repo, **opts):
+    """dump copy information compared to p1"""
+
+    opts = pycompat.byteskwargs(opts)
+    ctx = scmutil.revsingle(repo, opts.get('rev'), default=None)
+    for dst, src in ctx.p1copies().items():
+        ui.write('%s -> %s\n' % (src, dst))
+
+@command('debugp2copies',
+         [('r', 'rev', '', _('revision to debug'), _('REV'))],
+         _('[-r REV]'))
+def debugp2copies(ui, repo, **opts):
+    """dump copy information compared to p2"""
+
+    opts = pycompat.byteskwargs(opts)
+    ctx = scmutil.revsingle(repo, opts.get('rev'), default=None)
+    for dst, src in ctx.p2copies().items():
+        ui.write('%s -> %s\n' % (src, dst))
+
 @command('debugpathcomplete',
          [('f', 'full', None, _('complete an entire path')),
           ('n', 'normal', None, _('show only normal files')),
@@ -1812,6 +1875,18 @@
     ui.write('\n'.join(repo.pathto(p, cwd) for p in sorted(files)))
     ui.write('\n')
 
+@command('debugpathcopies',
+         cmdutil.walkopts,
+         'hg debugpathcopies REV1 REV2 [FILE]',
+         inferrepo=True)
+def debugpathcopies(ui, repo, rev1, rev2, *pats, **opts):
+    """show copies between two revisions"""
+    ctx1 = scmutil.revsingle(repo, rev1)
+    ctx2 = scmutil.revsingle(repo, rev2)
+    m = scmutil.match(ctx1, pats, opts)
+    for dst, src in sorted(copies.pathcopies(ctx1, ctx2, m).items()):
+        ui.write('%s -> %s\n' % (src, dst))
+
 @command('debugpeer', [], _('PATH'), norepo=True)
 def debugpeer(ui, path):
     """establish a connection to a peer repository"""
@@ -2004,17 +2079,17 @@
 
 @command('debugrename',
     [('r', 'rev', '', _('revision to debug'), _('REV'))],
-    _('[-r REV] FILE'))
-def debugrename(ui, repo, file1, *pats, **opts):
+    _('[-r REV] [FILE]...'))
+def debugrename(ui, repo, *pats, **opts):
     """dump rename information"""
 
     opts = pycompat.byteskwargs(opts)
     ctx = scmutil.revsingle(repo, opts.get('rev'))
-    m = scmutil.match(ctx, (file1,) + pats, opts)
+    m = scmutil.match(ctx, pats, opts)
     for abs in ctx.walk(m):
         fctx = ctx[abs]
         o = fctx.filelog().renamed(fctx.filenode())
-        rel = m.rel(abs)
+        rel = repo.pathto(abs)
         if o:
             ui.write(_("%s renamed from %s:%s\n") % (rel, o[0], hex(o[1])))
         else:
@@ -2468,15 +2543,15 @@
         ui.write(('+++ optimized\n'), label='diff.file_b')
         sm = difflib.SequenceMatcher(None, arevs, brevs)
         for tag, alo, ahi, blo, bhi in sm.get_opcodes():
-            if tag in ('delete', 'replace'):
+            if tag in (r'delete', r'replace'):
                 for c in arevs[alo:ahi]:
-                    ui.write('-%s\n' % c, label='diff.deleted')
-            if tag in ('insert', 'replace'):
+                    ui.write('-%d\n' % c, label='diff.deleted')
+            if tag in (r'insert', r'replace'):
                 for c in brevs[blo:bhi]:
-                    ui.write('+%s\n' % c, label='diff.inserted')
-            if tag == 'equal':
+                    ui.write('+%d\n' % c, label='diff.inserted')
+            if tag == r'equal':
                 for c in arevs[alo:ahi]:
-                    ui.write(' %s\n' % c)
+                    ui.write(' %d\n' % c)
         return 1
 
     func = revset.makematcher(tree)
@@ -2569,7 +2644,6 @@
 
     source, branches = hg.parseurl(ui.expandpath(source))
     url = util.url(source)
-    addr = None
 
     defaultport = {'https': 443, 'ssh': 22}
     if url.scheme in defaultport:
@@ -2791,9 +2865,9 @@
         f = lambda fn: util.normpath(fn)
     fmt = 'f  %%-%ds  %%-%ds  %%s' % (
         max([len(abs) for abs in items]),
-        max([len(m.rel(abs)) for abs in items]))
+        max([len(repo.pathto(abs)) for abs in items]))
     for abs in items:
-        line = fmt % (abs, f(m.rel(abs)), m.exact(abs) and 'exact' or '')
+        line = fmt % (abs, f(repo.pathto(abs)), m.exact(abs) and 'exact' or '')
         ui.write("%s\n" % line.rstrip())
 
 @command('debugwhyunstable', [], _('REV'))
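
The rewritten ``debugdiscovery`` above derives its whole summary from set
arithmetic over the common, local, and remote head sets. A condensed sketch of
those statistics (node ids abbreviated to single letters for illustration)::

    def discoverystats(common, localheads, remoteheads):
        common = set(common)
        lheads, rheads = set(localheads), set(remoteheads)
        return {
            'nb-common': len(common),
            'nb-common-local': len(common & lheads),
            'nb-common-remote': len(common & rheads),
            'nb-local-missing': len(lheads - common),
            'nb-remote-unknown': len(rheads - common),
        }

    stats = discoverystats({'a'}, {'a', 'b'}, {'a', 'c'})
    assert stats['nb-local-missing'] == 1   # 'b' exists only locally
    assert stats['nb-remote-unknown'] == 1  # 'c' exists only remotely
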
--- a/mercurial/diffutil.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/diffutil.py	Wed Apr 17 13:41:18 2019 -0400
@@ -16,13 +16,15 @@
     pycompat,
 )
 
-def diffallopts(ui, opts=None, untrusted=False, section='diff'):
+def diffallopts(ui, opts=None, untrusted=False, section='diff',
+                configprefix=''):
     '''return diffopts with all features supported and parsed'''
     return difffeatureopts(ui, opts=opts, untrusted=untrusted, section=section,
-                           git=True, whitespace=True, formatchanging=True)
+                           git=True, whitespace=True, formatchanging=True,
+                           configprefix=configprefix)
 
 def difffeatureopts(ui, opts=None, untrusted=False, section='diff', git=False,
-                    whitespace=False, formatchanging=False):
+                    whitespace=False, formatchanging=False, configprefix=''):
     '''return diffopts with only opted-in features parsed
 
     Features:
@@ -45,7 +47,8 @@
                 return v
         if forceplain is not None and ui.plain():
             return forceplain
-        return getter(section, name or key, untrusted=untrusted)
+        return getter(section, configprefix + (name or key),
+                      untrusted=untrusted)
 
     # core options, expected to be understood by every diff parser
     buildopts = {
--- a/mercurial/dirstate.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/dirstate.py	Wed Apr 17 13:41:18 2019 -0400
@@ -81,6 +81,10 @@
         self._origpl = None
         self._updatedfiles = set()
         self._mapcls = dirstatemap
+        # Access and cache cwd early, so we don't access it for the first time
+        # after a working-copy update caused it to not exist (accessing it then
+        # raises an exception).
+        self._cwd
 
     @contextlib.contextmanager
     def parentchange(self):
@@ -144,7 +148,7 @@
     def _ignore(self):
         files = self._ignorefiles()
         if not files:
-            return matchmod.never(self._root, '')
+            return matchmod.never()
 
         pats = ['include:%s' % f for f in files]
         return matchmod.match(self._root, '', [], pats, warn=self._ui.warn)
@@ -285,8 +289,8 @@
         See localrepo.setparents()
         """
         if self._parentwriters == 0:
-            raise ValueError("cannot set dirstate parent without "
-                             "calling dirstate.beginparentchange")
+            raise ValueError("cannot set dirstate parent outside of "
+                             "dirstate.parentchange context manager")
 
         self._dirty = True
         oldp2 = self._pl[1]
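
The reworded error above points users at the ``parentchange`` context manager;
the guard is a simple nesting counter. A hedged sketch of the pattern
(simplified from dirstate, not the full class)::

    import contextlib

    class toydirstate(object):
        def __init__(self):
            self._parentwriters = 0

        @contextlib.contextmanager
        def parentchange(self):
            self._parentwriters += 1
            try:
                yield
            finally:
                self._parentwriters -= 1

        def setparents(self, p1, p2):
            if self._parentwriters == 0:
                raise ValueError("cannot set dirstate parent outside of "
                                 "dirstate.parentchange context manager")
            self._pl = (p1, p2)

    ds = toydirstate()
    with ds.parentchange():
        ds.setparents('p1-node', 'p2-node')  # legal only inside the block
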
--- a/mercurial/discovery.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/discovery.py	Wed Apr 17 13:41:18 2019 -0400
@@ -201,44 +201,40 @@
     outgoing = pushop.outgoing
     cl = repo.changelog
     headssum = {}
+    missingctx = set()
     # A. Create set of branches involved in the push.
-    branches = set(repo[n].branch() for n in outgoing.missing)
+    branches = set()
+    for n in outgoing.missing:
+        ctx = repo[n]
+        missingctx.add(ctx)
+        branches.add(ctx.branch())
 
     with remote.commandexecutor() as e:
         remotemap = e.callcommand('branchmap', {}).result()
 
-    newbranches = branches - set(remotemap)
-    branches.difference_update(newbranches)
-
-    # A. register remote heads
-    remotebranches = set()
+    knownnode = cl.hasnode # do not use nodemap until it is filtered
+    # A. register remote heads of branches which are in outgoing set
     for branch, heads in remotemap.iteritems():
-        remotebranches.add(branch)
+        # don't add head info about branches which we don't have locally
+        if branch not in branches:
+            continue
         known = []
         unsynced = []
-        knownnode = cl.hasnode # do not use nodemap until it is filtered
         for h in heads:
             if knownnode(h):
                 known.append(h)
             else:
                 unsynced.append(h)
         headssum[branch] = (known, list(known), unsynced)
+
     # B. add new branch data
-    missingctx = list(repo[n] for n in outgoing.missing)
-    touchedbranches = set()
-    for ctx in missingctx:
-        branch = ctx.branch()
-        touchedbranches.add(branch)
+    for branch in branches:
         if branch not in headssum:
             headssum[branch] = (None, [], [])
 
-    # C drop data about untouched branches:
-    for branch in remotebranches - touchedbranches:
-        del headssum[branch]
-
-    # D. Update newmap with outgoing changes.
+    # C. Update newmap with outgoing changes.
     # This will possibly add new heads and remove existing ones.
-    newmap = branchmap.branchcache((branch, heads[1])
+    newmap = branchmap.remotebranchcache((branch, heads[1])
                                  for branch, heads in headssum.iteritems()
                                  if heads[0] is not None)
     newmap.update(repo, (ctx.rev() for ctx in missingctx))
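
The restructured ``_headssummary`` now registers remote heads only for
branches that actually occur in the outgoing set, which removes the later
"drop untouched branches" pass. A hedged sketch of the filtering (simplified
signatures, not discovery.py's real ones)::

    def summarizeheads(remotemap, outgoingbranches, knownnode):
        """Build {branch: (known, future, unsynced)} for pushed branches."""
        headssum = {}
        for branch, heads in remotemap.items():
            if branch not in outgoingbranches:
                continue  # no local info about this branch; skip it
            known = [h for h in heads if knownnode(h)]
            unsynced = [h for h in heads if not knownnode(h)]
            headssum[branch] = (known, list(known), unsynced)
        for branch in outgoingbranches:
            if branch not in headssum:
                headssum[branch] = (None, [], [])  # new branch on remote
        return headssum

    summary = summarizeheads({'default': ['n1', 'n2']},
                             {'default', 'feature'},
                             knownnode={'n1'}.__contains__)
    assert summary['default'] == (['n1'], ['n1'], ['n2'])
    assert summary['feature'] == (None, [], [])
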
--- a/mercurial/encoding.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/encoding.py	Wed Apr 17 13:41:18 2019 -0400
@@ -65,7 +65,7 @@
 else:
     # preferred encoding isn't known yet; use utf-8 to avoid unicode error
     # and recreate it once encoding is settled
-    environ = dict((k.encode(u'utf-8'), v.encode(u'utf-8'))
+    environ = dict((k.encode(r'utf-8'), v.encode(r'utf-8'))
                    for k, v in os.environ.items())  # re-exports
 
 _encodingrewrites = {
@@ -152,7 +152,7 @@
             if encoding == 'UTF-8':
                 # fast path
                 return s
-            r = u.encode(_sysstr(encoding), u"replace")
+            r = u.encode(_sysstr(encoding), r"replace")
             if u == r.decode(_sysstr(encoding)):
                 # r is a safe, non-lossy encoding of s
                 return safelocalstr(r)
@@ -161,7 +161,7 @@
             # we should only get here if we're looking at an ancient changeset
             try:
                 u = s.decode(_sysstr(fallbackencoding))
-                r = u.encode(_sysstr(encoding), u"replace")
+                r = u.encode(_sysstr(encoding), r"replace")
                 if u == r.decode(_sysstr(encoding)):
                     # r is a safe, non-lossy encoding of s
                     return safelocalstr(r)
@@ -169,7 +169,7 @@
             except UnicodeDecodeError:
                 u = s.decode("utf-8", "replace") # last ditch
                 # can't round-trip
-                return u.encode(_sysstr(encoding), u"replace")
+                return u.encode(_sysstr(encoding), r"replace")
     except LookupError as k:
         raise error.Abort(k, hint="please check your locale settings")
 
@@ -230,7 +230,7 @@
 if not _nativeenviron:
     # now encoding and helper functions are available, recreate the environ
     # dict to be exported to other modules
-    environ = dict((tolocal(k.encode(u'utf-8')), tolocal(v.encode(u'utf-8')))
+    environ = dict((tolocal(k.encode(r'utf-8')), tolocal(v.encode(r'utf-8')))
                    for k, v in os.environ.items())  # re-exports
 
 if pycompat.ispy3:
@@ -251,7 +251,7 @@
 
 def colwidth(s):
     "Find the column width of a string for display in the local encoding"
-    return ucolwidth(s.decode(_sysstr(encoding), u'replace'))
+    return ucolwidth(s.decode(_sysstr(encoding), r'replace'))
 
 def ucolwidth(d):
     "Find the column width of a Unicode string for display"
--- a/mercurial/exchange.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/exchange.py	Wed Apr 17 13:41:18 2019 -0400
@@ -297,7 +297,6 @@
                                               'client'))
             elif part.type == 'stream2' and version is None:
                 # A stream2 part requires to be part of a v2 bundle
-                version = "v2"
                 requirements = urlreq.unquote(part.params['requirements'])
                 splitted = requirements.split()
                 params = bundle2._formatrequirementsparams(splitted)
@@ -557,18 +556,18 @@
                % stringutil.forcebytestr(err))
         pushop.ui.debug(msg)
 
-    with wlock or util.nullcontextmanager(), \
-            lock or util.nullcontextmanager(), \
-            pushop.trmanager or util.nullcontextmanager():
-        pushop.repo.checkpush(pushop)
-        _checkpublish(pushop)
-        _pushdiscovery(pushop)
-        if not _forcebundle1(pushop):
-            _pushbundle2(pushop)
-        _pushchangeset(pushop)
-        _pushsyncphase(pushop)
-        _pushobsolete(pushop)
-        _pushbookmark(pushop)
+    with wlock or util.nullcontextmanager():
+        with lock or util.nullcontextmanager():
+            with pushop.trmanager or util.nullcontextmanager():
+                pushop.repo.checkpush(pushop)
+                _checkpublish(pushop)
+                _pushdiscovery(pushop)
+                if not _forcebundle1(pushop):
+                    _pushbundle2(pushop)
+                _pushchangeset(pushop)
+                _pushsyncphase(pushop)
+                _pushobsolete(pushop)
+                _pushbookmark(pushop)
 
     if repo.ui.configbool('experimental', 'remotenames'):
         logexchange.pullremotenames(repo, remote)
@@ -708,8 +707,8 @@
 
     remotebookmark = listkeys(remote, 'bookmarks')
 
-    explicit = set([repo._bookmarks.expandname(bookmark)
-                    for bookmark in pushop.bookmarks])
+    explicit = {repo._bookmarks.expandname(bookmark)
+                for bookmark in pushop.bookmarks}
 
     remotebookmark = bookmod.unhexlifybookmarks(remotebookmark)
     comp = bookmod.comparebookmarks(repo, repo._bookmarks, remotebookmark)
@@ -921,7 +920,7 @@
                       if v in changegroup.supportedoutgoingversions(
                           pushop.repo)]
         if not cgversions:
-            raise ValueError(_('no common changegroup version'))
+            raise error.Abort(_('no common changegroup version'))
         version = max(cgversions)
     cgstream = changegroup.makestream(pushop.repo, pushop.outgoing, version,
                                       'push')
@@ -2185,7 +2184,7 @@
         cgversions = [v for v in cgversions
                       if v in changegroup.supportedoutgoingversions(repo)]
         if not cgversions:
-            raise ValueError(_('no common changegroup version'))
+            raise error.Abort(_('no common changegroup version'))
         version = max(cgversions)
 
     outgoing = _computeoutgoing(repo, heads, common)
@@ -2229,7 +2228,7 @@
     if not kwargs.get(r'bookmarks', False):
         return
     if 'bookmarks' not in b2caps:
-        raise ValueError(_('no common bookmarks exchange method'))
+        raise error.Abort(_('no common bookmarks exchange method'))
     books  = bookmod.listbinbookmarks(repo)
     data = bookmod.binaryencode(books)
     if data:
@@ -2264,7 +2263,7 @@
     """add phase heads part to the requested bundle"""
     if kwargs.get(r'phases', False):
         if not 'heads' in b2caps.get('phases'):
-            raise ValueError(_('no common phases exchange method'))
+            raise error.Abort(_('no common phases exchange method'))
         if heads is None:
             heads = repo.heads()
 
@@ -2548,8 +2547,8 @@
         return True
 
     # Stream clone v2
-    if (bundlespec.wirecompression == 'UN' and \
-        bundlespec.wireversion == '02' and \
+    if (bundlespec.wirecompression == 'UN' and
+        bundlespec.wireversion == '02' and
         bundlespec.contentopts.get('streamv2')):
         return True
 
--- a/mercurial/filemerge.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/filemerge.py	Wed Apr 17 13:41:18 2019 -0400
@@ -279,6 +279,7 @@
     keep as the merged version."""
     ui = repo.ui
     fd = fcd.path()
+    uipathfn = scmutil.getuipathfn(repo)
 
     # Avoid prompting during an in-memory merge since it doesn't support merge
     # conflicts.
@@ -287,7 +288,7 @@
                                                 'support file conflicts')
 
     prompts = partextras(labels)
-    prompts['fd'] = fd
+    prompts['fd'] = uipathfn(fd)
     try:
         if fco.isabsent():
             index = ui.promptchoice(
@@ -394,13 +395,14 @@
 
 def _mergecheck(repo, mynode, orig, fcd, fco, fca, toolconf):
     tool, toolpath, binary, symlink, scriptfn = toolconf
+    uipathfn = scmutil.getuipathfn(repo)
     if symlink:
         repo.ui.warn(_('warning: internal %s cannot merge symlinks '
-                       'for %s\n') % (tool, fcd.path()))
+                       'for %s\n') % (tool, uipathfn(fcd.path())))
         return False
     if fcd.isabsent() or fco.isabsent():
         repo.ui.warn(_('warning: internal %s cannot merge change/delete '
-                       'conflict for %s\n') % (tool, fcd.path()))
+                       'conflict for %s\n') % (tool, uipathfn(fcd.path())))
         return False
     return True
 
@@ -462,7 +464,6 @@
     Generic driver for _imergelocal and _imergeother
     """
     assert localorother is not None
-    tool, toolpath, binary, symlink, scriptfn = toolconf
     r = simplemerge.simplemerge(repo.ui, fcd, fca, fco, label=labels,
                                 localorother=localorother)
     return True, r
@@ -581,9 +582,10 @@
 
 def _xmerge(repo, mynode, orig, fcd, fco, fca, toolconf, files, labels=None):
     tool, toolpath, binary, symlink, scriptfn = toolconf
+    uipathfn = scmutil.getuipathfn(repo)
     if fcd.isabsent() or fco.isabsent():
         repo.ui.warn(_('warning: %s cannot merge change/delete conflict '
-                       'for %s\n') % (tool, fcd.path()))
+                       'for %s\n') % (tool, uipathfn(fcd.path())))
         return False, 1, None
     unused, unused, unused, back = files
     localpath = _workingpath(repo, fcd)
@@ -623,7 +625,7 @@
             lambda s: procutil.shellquote(util.localpath(s)))
         if _toolbool(ui, tool, "gui"):
             repo.ui.status(_('running merge tool %s for file %s\n') %
-                           (tool, fcd.path()))
+                           (tool, uipathfn(fcd.path())))
         if scriptfn is None:
             cmd = toolpath + ' ' + args
             repo.ui.debug('launching merge tool: %s\n' % cmd)
@@ -741,8 +743,7 @@
     # TODO: Break this import cycle somehow. (filectx -> ctx -> fileset ->
     # merge -> filemerge). (I suspect the fileset import is the weakest link)
     from . import context
-    a = _workingpath(repo, fcd)
-    back = scmutil.origpath(ui, repo, a)
+    back = scmutil.backuppath(ui, repo, fcd.path())
     inworkingdir = (back.startswith(repo.wvfs.base) and not
         back.startswith(repo.vfs.base))
     if isinstance(fcd, context.overlayworkingfilectx) and inworkingdir:
@@ -762,6 +763,7 @@
             if isinstance(fcd, context.overlayworkingfilectx):
                 util.writefile(back, fcd.data())
             else:
+                a = _workingpath(repo, fcd)
                 util.copyfile(a, back)
         # A arbitraryfilectx is returned, so we can run the same functions on
         # the backup context regardless of where it lives.
@@ -842,6 +844,8 @@
 
     ui = repo.ui
     fd = fcd.path()
+    uipathfn = scmutil.getuipathfn(repo)
+    fduipath = uipathfn(fd)
     binary = fcd.isbinary() or fco.isbinary() or fca.isbinary()
     symlink = 'l' in fcd.flags() + fco.flags()
     changedelete = fcd.isabsent() or fco.isabsent()
@@ -865,8 +869,8 @@
             raise error.Abort(_("invalid 'python:' syntax: %s") % toolpath)
         toolpath = script
     ui.debug("picked tool '%s' for %s (binary %s symlink %s changedelete %s)\n"
-             % (tool, fd, pycompat.bytestr(binary), pycompat.bytestr(symlink),
-                    pycompat.bytestr(changedelete)))
+             % (tool, fduipath, pycompat.bytestr(binary),
+                pycompat.bytestr(symlink), pycompat.bytestr(changedelete)))
 
     if tool in internals:
         func = internals[tool]
@@ -892,9 +896,10 @@
 
     if premerge:
         if orig != fco.path():
-            ui.status(_("merging %s and %s to %s\n") % (orig, fco.path(), fd))
+            ui.status(_("merging %s and %s to %s\n") %
+                      (uipathfn(orig), uipathfn(fco.path()), fduipath))
         else:
-            ui.status(_("merging %s\n") % fd)
+            ui.status(_("merging %s\n") % fduipath)
 
     ui.debug("my %s other %s ancestor %s\n" % (fcd, fco, fca))
 
@@ -905,7 +910,7 @@
                 raise error.InMemoryMergeConflictsError('in-memory merge does '
                                                         'not support merge '
                                                         'conflicts')
-            ui.warn(onfailure % fd)
+            ui.warn(onfailure % fduipath)
         return True, 1, False
 
     back = _makebackup(repo, ui, wctx, fcd, premerge)
@@ -958,7 +963,7 @@
                     raise error.InMemoryMergeConflictsError('in-memory merge '
                                                             'does not support '
                                                             'merge conflicts')
-                ui.warn(onfailure % fd)
+                ui.warn(onfailure % fduipath)
             _onfilemergefailure(ui)
 
         return True, r, deleted
@@ -986,6 +991,7 @@
 
 def _check(repo, r, ui, tool, fcd, files):
     fd = fcd.path()
+    uipathfn = scmutil.getuipathfn(repo)
     unused, unused, unused, back = files
 
     if not r and (_toolbool(ui, tool, "checkconflicts") or
@@ -997,7 +1003,7 @@
     if 'prompt' in _toollist(ui, tool, "check"):
         checked = True
         if ui.promptchoice(_("was merge of '%s' successful (yn)?"
-                             "$$ &Yes $$ &No") % fd, 1):
+                             "$$ &Yes $$ &No") % uipathfn(fd), 1):
             r = 1
 
     if not r and not checked and (_toolbool(ui, tool, "checkchanged") or
@@ -1006,7 +1012,7 @@
         if back is not None and not fcd.cmp(back):
             if ui.promptchoice(_(" output file %s appears unchanged\n"
                                  "was merge successful (yn)?"
-                                 "$$ &Yes $$ &No") % fd, 1):
+                                 "$$ &Yes $$ &No") % uipathfn(fd), 1):
                 r = 1
 
     if back is not None and _toolbool(ui, tool, "fixeol"):
--- a/mercurial/fileset.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/fileset.py	Wed Apr 17 13:41:18 2019 -0400
@@ -499,9 +499,8 @@
         """Create a matcher to select files by predfn(filename)"""
         if cache:
             predfn = util.cachefunc(predfn)
-        repo = self.ctx.repo()
-        return matchmod.predicatematcher(repo.root, repo.getcwd(), predfn,
-                                         predrepr=predrepr, badfn=self._badfn)
+        return matchmod.predicatematcher(predfn, predrepr=predrepr,
+                                         badfn=self._badfn)
 
     def fpredicate(self, predfn, predrepr=None, cache=False):
         """Create a matcher to select files by predfn(fctx) at the current
@@ -539,9 +538,7 @@
 
     def never(self):
         """Create a matcher to select nothing"""
-        repo = self.ctx.repo()
-        return matchmod.nevermatcher(repo.root, repo.getcwd(),
-                                     badfn=self._badfn)
+        return matchmod.never(badfn=self._badfn)
 
 def match(ctx, expr, badfn=None):
     """Create a matcher for a single fileset expression"""
--- a/mercurial/formatter.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/formatter.py	Wed Apr 17 13:41:18 2019 -0400
@@ -130,6 +130,7 @@
     util,
 )
 from .utils import (
+    cborutil,
     dateutil,
     stringutil,
 )
@@ -341,6 +342,18 @@
         baseformatter.end(self)
         self._out.write(pickle.dumps(self._data))
 
+class cborformatter(baseformatter):
+    '''serialize items as an indefinite-length CBOR array'''
+    def __init__(self, ui, out, topic, opts):
+        baseformatter.__init__(self, ui, topic, opts, _nullconverter)
+        self._out = out
+        self._out.write(cborutil.BEGIN_INDEFINITE_ARRAY)
+    def _showitem(self):
+        self._out.write(b''.join(cborutil.streamencode(self._item)))
+    def end(self):
+        baseformatter.end(self)
+        self._out.write(cborutil.BREAK)
+
 class jsonformatter(baseformatter):
     def __init__(self, ui, out, topic, opts):
         baseformatter.__init__(self, ui, topic, opts, _nullconverter)
@@ -617,7 +630,9 @@
 
 def formatter(ui, out, topic, opts):
     template = opts.get("template", "")
-    if template == "json":
+    if template == "cbor":
+        return cborformatter(ui, out, topic, opts)
+    elif template == "json":
         return jsonformatter(ui, out, topic, opts)
     elif template == "pickle":
         return pickleformatter(ui, out, topic, opts)
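
The new ``cborformatter`` wraps its items between CBOR's indefinite-length
array markers (``0x9f`` opens, ``0xff`` closes), streaming each item as it is
produced. A self-contained sketch of that framing, with a toy encoder covering
small integers only::

    import io

    BEGIN_INDEFINITE_ARRAY = b'\x9f'
    BREAK = b'\xff'

    def encodesmallint(n):
        # CBOR major type 0: unsigned ints 0..23 fit in a single byte
        assert 0 <= n <= 23
        return bytes(bytearray([n]))

    out = io.BytesIO()
    out.write(BEGIN_INDEFINITE_ARRAY)
    for item in [1, 2, 3]:  # formatter items would be encoded here
        out.write(encodesmallint(item))
    out.write(BREAK)
    assert out.getvalue() == b'\x9f\x01\x02\x03\xff'
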
--- a/mercurial/graphmod.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/graphmod.py	Wed Apr 17 13:41:18 2019 -0400
@@ -451,7 +451,7 @@
     # If 'graphshorten' config, only draw shift_interline
     # when there is any non vertical flow in graph.
     if state['graphshorten']:
-        if any(c in '\/' for c in shift_interline if c):
+        if any(c in br'\/' for c in shift_interline if c):
             lines.append(shift_interline)
     # Else, no 'graphshorten' config so draw shift_interline.
     else:
--- a/mercurial/hbisect.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/hbisect.py	Wed Apr 17 13:41:18 2019 -0400
@@ -34,7 +34,7 @@
 
     changelog = repo.changelog
     clparents = changelog.parentrevs
-    skip = set([changelog.rev(n) for n in state['skip']])
+    skip = {changelog.rev(n) for n in state['skip']}
 
     def buildancestors(bad, good):
         badrev = min([changelog.rev(n) for n in bad])
--- a/mercurial/help.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/help.py	Wed Apr 17 13:41:18 2019 -0400
@@ -37,6 +37,9 @@
 from .hgweb import (
     webcommands,
 )
+from .utils import (
+    compression,
+)
 
 _exclkeywords = {
     "(ADVANCED)",
@@ -428,7 +431,7 @@
     addtopichook(topic, add)
 
 addtopicsymbols('bundlespec', '.. bundlecompressionmarker',
-                util.bundlecompressiontopics())
+                compression.bundlecompressiontopics())
 addtopicsymbols('filesets', '.. predicatesmarker', fileset.symbols)
 addtopicsymbols('merge-tools', '.. internaltoolsmarker',
                 filemerge.internalsdoc)
@@ -745,7 +748,7 @@
                 ct = mod.cmdtable
             except AttributeError:
                 ct = {}
-            modcmds = set([c.partition('|')[0] for c in ct])
+            modcmds = {c.partition('|')[0] for c in ct}
             rst.extend(helplist(modcmds.__contains__))
         else:
             rst.append(_("(use 'hg help extensions' for information on enabling"
--- a/mercurial/help/config.txt	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/help/config.txt	Wed Apr 17 13:41:18 2019 -0400
@@ -866,6 +866,13 @@
     Repositories with this on-disk format require Mercurial version 4.7
 
     Enabled by default.
+``revlog-compression``
+    Compression algorithm used by revlog. Supported values are `zlib` and
+    `zstd`. The `zlib` engine is the historical default of Mercurial. `zstd`
+    is a newer format that is usually a net win over `zlib`, operating faster
+    at a better compression rate. Use `zstd` to reduce CPU usage.
+
+    On some systems, the Mercurial installation may lack `zstd` support.
+    Default is `zlib`.
 
 ``graph``
 ---------
@@ -1843,6 +1850,55 @@
     Turning this option off can result in large increase of repository size for
     repository with many merges.
 
+``revlog.reuse-external-delta-parent``
+    Control the order in which delta parents are considered when adding new
+    revisions from an external source
+    (typically: applying a bundle from `hg pull` or `hg push`).
+
+    New revisions are usually provided as a delta against other revisions. By
+    default, Mercurial will try to reuse this delta first, therefore using the
+    same "delta parent" as the source. Directly using deltas from the source
+    reduces CPU usage and usually speeds up operation. However, in some cases,
+    the source might have sub-optimal delta bases and forcing their
+    reevaluation is useful. For example, pushes from an old client could have
+    sub-optimal delta parents that the server wants to optimize (lack of
+    general delta, bad parent choice, lack of sparse-revlog, etc.).
+
+    This option is enabled by default. Turning it off will ensure bad delta
+    parent choices from older clients do not propagate to this repository, at
+    the cost of a small increase in CPU consumption.
+
+    Note: this option only controls the order in which delta parents are
+    considered. Even when disabled, the existing delta from the source will be
+    reused if the same delta parent is selected.
+
+``revlog.reuse-external-delta``
+    Control the reuse of deltas from external sources
+    (typically: applying a bundle from `hg pull` or `hg push`).
+
+    New revisions are usually provided as a delta against another revision. By
+    default, Mercurial will not recompute the same delta again, trusting
+    externally provided deltas. There have been rare cases of small
+    adjustments to the diffing algorithm in the past, so in some rare cases
+    recomputing deltas provided by ancient clients can give better results.
+    Disabling this option means going through a full delta recomputation for
+    all incoming revisions. It means a large increase in CPU usage and will
+    slow operations down.
+
+    This option is enabled by default. When disabled, it also disables the
+    related ``storage.revlog.reuse-external-delta-parent`` option.
+
+``revlog.zlib.level``
+    Zlib compression level used when storing data into the repository.
+    Accepted values range from 1 (lowest compression) to 9 (highest
+    compression). The zlib default value is 6.
+
+``revlog.zstd.level``
+    zstd compression level used when storing data into the repository.
+    Accepted values range from 1 (lowest compression) to 22 (highest
+    compression). (default 3)
+
 ``server``
 ----------
 
@@ -1990,6 +2046,13 @@
 
     See also ``server.zliblevel``.
 
+``view``
+    Repository filter used when exchanging revisions with the peer.
+
+    The default view (``served``) excludes secret and hidden changesets.
+    Another useful value is ``immutable`` (no draft, secret or hidden
+    changesets). (EXPERIMENTAL)
+
 ``smtp``
 --------
 
@@ -2341,6 +2404,9 @@
     Reduce the amount of output printed.
     (default: False)
 
+``relative-paths``
+    Prefer relative paths in the UI.
+
 ``remotecmd``
     Remote command to use for clone/push/pull operations.
     (default: ``hg``)
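
For the new ``revlog-compression`` option documented above, a hedged sketch of
how a consumer could honor it, assuming the option lives in the ``format``
config section and falling back to ``zlib`` when the ``zstandard`` bindings
are missing::

    import zlib

    def getcompressor(ui):
        name = ui.config('format', 'revlog-compression') or 'zlib'
        if name == 'zstd':
            try:
                import zstandard  # absent on some installations
                return zstandard.ZstdCompressor(level=3).compress
            except ImportError:
                name = 'zlib'  # graceful fallback
        return lambda data: zlib.compress(data, 6)
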
--- a/mercurial/help/scripting.txt	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/help/scripting.txt	Wed Apr 17 13:41:18 2019 -0400
@@ -142,9 +142,11 @@
    using templates to make your life easier.
 
 The ``-T/--template`` argument allows specifying pre-defined styles.
-Mercurial ships with the machine-readable styles ``json`` and ``xml``,
-which provide JSON and XML output, respectively. These are useful for
-producing output that is machine readable as-is.
+Mercurial ships with the machine-readable styles ``cbor``, ``json``,
+and ``xml``, which provide CBOR, JSON, and XML output, respectively.
+These are useful for producing output that is machine readable as-is.
+
+(Mercurial 5.0 is required for the CBOR style.)
 
 .. important::
 
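
Since the CBOR style emits one indefinite-length array, external tooling can
decode a whole command's output in a single call. A hedged sketch using the
third-party ``cbor2`` package (an assumption; any CBOR decoder that
understands indefinite-length arrays will do)::

    import subprocess

    import cbor2  # third-party decoder, assumed installed

    raw = subprocess.check_output(['hg', 'log', '-T', 'cbor', '-l', '5'])
    for entry in cbor2.loads(raw):  # one mapping per changeset
        print(entry)
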
--- a/mercurial/help/subrepos.txt	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/help/subrepos.txt	Wed Apr 17 13:41:18 2019 -0400
@@ -105,8 +105,10 @@
     Subversion subrepositories will print a warning and abort.
 
 :diff: diff does not recurse in subrepos unless -S/--subrepos is
-    specified. Changes are displayed as usual, on the subrepositories
-    elements. Subversion subrepositories are currently silently ignored.
+    specified.  However, if you specify the full path of a file or
+    directory in a subrepo, it will be diffed even without
+    -S/--subrepos being specified.  Subversion subrepositories are
+    currently silently ignored.
 
 :files: files does not recurse into subrepos unless -S/--subrepos is
     specified.  However, if you specify the full path of a file or
--- a/mercurial/hg.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/hg.py	Wed Apr 17 13:41:18 2019 -0400
@@ -38,6 +38,7 @@
     narrowspec,
     node,
     phases,
+    pycompat,
     repository as repositorymod,
     scmutil,
     sshpeer,
@@ -57,7 +58,15 @@
 
 def _local(path):
     path = util.expandpath(util.urllocalpath(path))
-    return (os.path.isfile(path) and bundlerepo or localrepo)
+
+    try:
+        isfile = os.path.isfile(path)
+    # Python 2 raises TypeError, Python 3 ValueError.
+    except (TypeError, ValueError) as e:
+        raise error.Abort(_('invalid path %s: %s') % (
+            path, pycompat.bytestr(e)))
+
+    return isfile and bundlerepo or localrepo
 
 def addbranchrevs(lrepo, other, branches, revs):
     peer = other.peer() # a courtesy to callers using a localrepo for other
@@ -144,13 +153,13 @@
             return False
     return repo.local()
 
-def openpath(ui, path):
+def openpath(ui, path, sendaccept=True):
     '''open path with open if local, url.open if remote'''
     pathurl = util.url(path, parsequery=False, parsefragment=False)
     if pathurl.islocal():
         return util.posixfile(pathurl.localpath(), 'rb')
     else:
-        return url.open(ui, path)
+        return url.open(ui, path, sendaccept=sendaccept)
 
 # a list of (ui, repo) functions called for wire peer initialization
 wirepeersetupfuncs = []
@@ -282,25 +291,20 @@
     called.
     """
 
-    destlock = lock = None
-    lock = repo.lock()
-    try:
+    with repo.lock():
         # we use locks here because if we race with commit, we
         # can end up with extra data in the cloned revlogs that's
         # not pointed to by changesets, thus causing verify to
         # fail
-
         destlock = copystore(ui, repo, repo.path)
-
-        sharefile = repo.vfs.join('sharedpath')
-        util.rename(sharefile, sharefile + '.old')
+        with destlock or util.nullcontextmanager():
 
-        repo.requirements.discard('shared')
-        repo.requirements.discard('relshared')
-        repo._writerequirements()
-    finally:
-        destlock and destlock.release()
-        lock and lock.release()
+            sharefile = repo.vfs.join('sharedpath')
+            util.rename(sharefile, sharefile + '.old')
+
+            repo.requirements.discard('shared')
+            repo.requirements.discard('relshared')
+            repo._writerequirements()
 
     # Removing share changes some fundamental properties of the repo instance.
     # So we instantiate a new repo object and operate on it rather than
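
The new guard in ``_local`` exists because ``os.path.isfile`` itself can blow
up on malformed paths. A small sketch of the failure mode it covers (note that
Python 3.8+ swallows the ValueError inside ``os.path``, so this mattered most
on the interpreters of this era)::

    import os

    def isfilesafe(path):
        try:
            return os.path.isfile(path)
        # Python 2 raised TypeError for bad path types; Python 3 (before
        # 3.8) raises ValueError, e.g. for an embedded NUL byte as in
        # b'repo\x00evil'.
        except (TypeError, ValueError):
            return False
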
--- a/mercurial/hgweb/hgwebdir_mod.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/hgweb/hgwebdir_mod.py	Wed Apr 17 13:41:18 2019 -0400
@@ -143,7 +143,7 @@
                 path = path[:-len(discarded) - 1]
 
                 try:
-                    r = hg.repository(ui, path)
+                    hg.repository(ui, path)
                     directory = False
                 except (IOError, error.RepoError):
                     pass
@@ -510,7 +510,7 @@
         if style == styles[0]:
             vars['style'] = style
 
-        sessionvars = webutil.sessionvars(vars, r'?')
+        sessionvars = webutil.sessionvars(vars, '?')
         logourl = config('web', 'logourl')
         logoimg = config('web', 'logoimg')
         staticurl = (config('web', 'staticurl')
--- a/mercurial/hgweb/server.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/hgweb/server.py	Wed Apr 17 13:41:18 2019 -0400
@@ -54,7 +54,7 @@
         self.writelines(str.split('\n'))
     def writelines(self, seq):
         for msg in seq:
-            self.handler.log_error("HG error:  %s", msg)
+            self.handler.log_error(r"HG error:  %s", encoding.strfromlocal(msg))
 
 class _httprequesthandler(httpservermod.basehttprequesthandler):
 
@@ -100,17 +100,22 @@
     def do_POST(self):
         try:
             self.do_write()
-        except Exception:
+        except Exception as e:
+            # I/O below could raise another exception. So log the original
+            # exception first to ensure it is recorded.
+            if not (isinstance(e, (OSError, socket.error))
+                    and e.errno == errno.ECONNRESET):
+                tb = r"".join(traceback.format_exception(*sys.exc_info()))
+                # We need a native-string newline to poke in the log
+                # message, because we won't get a newline when using an
+                # r-string. This is the easy way out.
+                newline = chr(10)
+                self.log_error(r"Exception happened during processing "
+                               r"request '%s':%s%s", self.path, newline, tb)
+
             self._start_response(r"500 Internal Server Error", [])
             self._write(b"Internal Server Error")
             self._done()
-            tb = r"".join(traceback.format_exception(*sys.exc_info()))
-            # We need a native-string newline to poke in the log
-            # message, because we won't get a newline when using an
-            # r-string. This is the easy way out.
-            newline = chr(10)
-            self.log_error(r"Exception happened during processing "
-                           r"request '%s':%s%s", self.path, newline, tb)
 
     def do_PUT(self):
         self.do_POST()
@@ -165,7 +170,7 @@
         if length:
             env[r'CONTENT_LENGTH'] = length
         for header in [h for h in self.headers.keys()
-                       if h not in (r'content-type', r'content-length')]:
+                      if h.lower() not in (r'content-type', r'content-length')]:
             hkey = r'HTTP_' + header.replace(r'-', r'_').upper()
             hval = self.headers.get(header)
             hval = hval.replace(r'\n', r'').strip()
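The do_POST() change above moves the traceback logging ahead of the error response, since writing the 500 response can itself raise and would otherwise mask the original failure. A small self-contained sketch of the same ordering (the handle() wrapper is illustrative, not Mercurial code):

import errno
import socket
import sys
import traceback

def handle(respond, body):
    try:
        body()
    except Exception as e:
        # Log first: sending the 500 below may itself fail, and the
        # original traceback must not be lost when it does.
        if not (isinstance(e, (OSError, socket.error))
                and e.errno == errno.ECONNRESET):
            tb = "".join(traceback.format_exception(*sys.exc_info()))
            sys.stderr.write("error during request:\n%s" % tb)
        respond("500 Internal Server Error")

handle(print, lambda: 1 / 0)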
--- a/mercurial/hgweb/webcommands.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/hgweb/webcommands.py	Wed Apr 17 13:41:18 2019 -0400
@@ -884,7 +884,7 @@
             leftlines = filelines(pfctx)
     else:
         rightlines = ()
-        pfctx = ctx.parents()[0][path]
+        pfctx = ctx.p1()[path]
         leftlines = filelines(pfctx)
 
     comparison = webutil.compare(context, leftlines, rightlines)
--- a/mercurial/hgweb/webutil.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/hgweb/webutil.py	Wed Apr 17 13:41:18 2019 -0400
@@ -456,13 +456,13 @@
     files = listfilediffs(ctx.files(), n, web.maxfiles)
 
     entry = commonentry(repo, ctx)
-    entry.update(
-        allparents=_kwfunc(lambda context, mapping: parents(ctx)),
-        parent=_kwfunc(lambda context, mapping: parents(ctx, rev - 1)),
-        child=_kwfunc(lambda context, mapping: children(ctx, rev + 1)),
-        changelogtag=showtags,
-        files=files,
-    )
+    entry.update({
+        'allparents': _kwfunc(lambda context, mapping: parents(ctx)),
+        'parent': _kwfunc(lambda context, mapping: parents(ctx, rev - 1)),
+        'child': _kwfunc(lambda context, mapping: children(ctx, rev + 1)),
+        'changelogtag': showtags,
+        'files': files,
+    })
     return entry
 
 def changelistentries(web, revs, maxcount, parityfn):
@@ -565,16 +565,14 @@
 def _diffsgen(context, repo, ctx, basectx, files, style, stripecount,
               linerange, lineidprefix):
     if files:
-        m = match.exact(repo.root, repo.getcwd(), files)
+        m = match.exact(files)
     else:
-        m = match.always(repo.root, repo.getcwd())
+        m = match.always()
 
     diffopts = patch.diffopts(repo.ui, untrusted=True)
-    node1 = basectx.node()
-    node2 = ctx.node()
     parity = paritygen(stripecount)
 
-    diffhunks = patch.diffhunks(repo, node1, node2, m, opts=diffopts)
+    diffhunks = patch.diffhunks(repo, basectx, ctx, m, opts=diffopts)
     for blockno, (fctx1, fctx2, header, hunks) in enumerate(diffhunks, 1):
         if style != 'raw':
             header = header[1:]
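One plausible motivation for the entry.update() change above, switching from keyword arguments to an explicit dict (an inference, not stated in the diff): keyword names are always native str, while Mercurial's mappings generally carry bytes keys on Python 3 after source transformation, and a dict literal keeps the new keys consistent with the existing ones:

entry = {b'node': b'abc123'}          # existing keys are bytes
# entry.update(files=[...]) would add a native-str 'files' key;
# a dict literal lets the key stay bytes like the rest
entry.update({b'files': [b'a.txt']})
assert all(isinstance(k, bytes) for k in entry)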
--- a/mercurial/hgweb/wsgiheaders.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/hgweb/wsgiheaders.py	Wed Apr 17 13:41:18 2019 -0400
@@ -127,7 +127,7 @@
         return self._headers[:]
 
     def __repr__(self):
-        return "%s(%r)" % (self.__class__.__name__, self._headers)
+        return r"%s(%r)" % (self.__class__.__name__, self._headers)
 
     def __str__(self):
         """str() returns the formatted headers, complete with end line,
--- a/mercurial/httpconnection.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/httpconnection.py	Wed Apr 17 13:41:18 2019 -0400
@@ -109,10 +109,10 @@
             schemes, prefix = [p[0]], p[1]
         else:
             schemes = (auth.get('schemes') or 'https').split()
-        if (prefix == '*' or hostpath.startswith(prefix)) and \
-            (len(prefix) > bestlen or (len(prefix) == bestlen and \
-                not bestuser and 'username' in auth)) \
-             and scheme in schemes:
+        if ((prefix == '*' or hostpath.startswith(prefix)) and
+            (len(prefix) > bestlen or (len(prefix) == bestlen and
+                                       not bestuser and 'username' in auth))
+            and scheme in schemes):
             bestlen = len(prefix)
             bestauth = group, auth
             bestuser = auth.get('username')
--- a/mercurial/httppeer.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/httppeer.py	Wed Apr 17 13:41:18 2019 -0400
@@ -816,8 +816,8 @@
             return
 
         raise error.CapabilityError(
-            _('cannot %s; client or remote repository does not support the %r '
-              'capability') % (purpose, name))
+            _('cannot %s; client or remote repository does not support the '
+              '\'%s\' capability') % (purpose, name))
 
     # End of ipeercapabilities.
 
--- a/mercurial/keepalive.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/keepalive.py	Wed Apr 17 13:41:18 2019 -0400
@@ -84,6 +84,7 @@
 
 from __future__ import absolute_import, print_function
 
+import collections
 import errno
 import hashlib
 import socket
@@ -114,15 +115,13 @@
       """
     def __init__(self):
         self._lock = threading.Lock()
-        self._hostmap = {} # map hosts to a list of connections
+        self._hostmap = collections.defaultdict(list) # host -> [connection]
         self._connmap = {} # map connections to host
         self._readymap = {} # map connection to ready state
 
     def add(self, host, connection, ready):
         self._lock.acquire()
         try:
-            if host not in self._hostmap:
-                self._hostmap[host] = []
             self._hostmap[host].append(connection)
             self._connmap[connection] = host
             self._readymap[connection] = ready
@@ -155,19 +154,18 @@
         conn = None
         self._lock.acquire()
         try:
-            if host in self._hostmap:
-                for c in self._hostmap[host]:
-                    if self._readymap[c]:
-                        self._readymap[c] = 0
-                        conn = c
-                        break
+            for c in self._hostmap[host]:
+                if self._readymap[c]:
+                    self._readymap[c] = False
+                    conn = c
+                    break
         finally:
             self._lock.release()
         return conn
 
     def get_all(self, host=None):
         if host:
-            return list(self._hostmap.get(host, []))
+            return list(self._hostmap[host])
         else:
             return dict(self._hostmap)
 
@@ -202,7 +200,7 @@
     def _request_closed(self, request, host, connection):
         """tells us that this request is now closed and that the
         connection is ready for another request"""
-        self._cm.set_ready(connection, 1)
+        self._cm.set_ready(connection, True)
 
     def _remove_connection(self, host, connection, close=0):
         if close:
@@ -239,7 +237,7 @@
                 if DEBUG:
                     DEBUG.info("creating new connection to %s (%d)",
                                host, id(h))
-                self._cm.add(host, h, 0)
+                self._cm.add(host, h, False)
                 self._start_transaction(h, req)
                 r = h.getresponse()
         # The string form of BadStatusLine is the status line. Add some context
@@ -405,6 +403,11 @@
     _raw_read = httplib.HTTPResponse.read
     _raw_readinto = getattr(httplib.HTTPResponse, 'readinto', None)
 
+    # Python 2.7 has a single close() which closes the socket handle.
+    # This method was effectively renamed to _close_conn() in Python 3,
+    # which still has a close() as well. _close_conn() is called by
+    # methods like read().
+
     def close(self):
         if self.fp:
             self.fp.close()
@@ -413,6 +416,9 @@
                 self._handler._request_closed(self, self._host,
                                               self._connection)
 
+    def _close_conn(self):
+        self.close()
+
     def close_connection(self):
         self._handler._remove_connection(self._host, self._connection, close=1)
         self.close()
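The ConnectionManager changes above replace the manual "create the list on first use" dance with collections.defaultdict(list). A minimal illustration, including the one caveat visible in the get_all() hunk: even a plain read through a defaultdict inserts the missing key:

import collections

hostmap = collections.defaultdict(list)    # host -> [connection]

hostmap['example.com:80'].append('conn1')  # no membership check needed
print(hostmap['example.com:80'])           # ['conn1']

print(hostmap['other.host'])               # [] (and the key now exists)
print(len(hostmap))                        # 2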
--- a/mercurial/localrepo.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/localrepo.py	Wed Apr 17 13:41:18 2019 -0400
@@ -643,8 +643,10 @@
     # Add derived requirements from registered compression engines.
     for name in util.compengines:
         engine = util.compengines[name]
-        if engine.revlogheader():
+        if engine.available() and engine.revlogheader():
             supported.add(b'exp-compression-%s' % name)
+            if engine.name() == 'zstd':
+                supported.add(b'revlog-compression-zstd')
 
     return supported
 
@@ -752,7 +754,15 @@
                                      b'revlog.optimize-delta-parent-choice')
     options[b'deltabothparents'] = deltabothparents
 
-    options[b'lazydeltabase'] = not scmutil.gddeltaconfig(ui)
+    lazydelta = ui.configbool(b'storage', b'revlog.reuse-external-delta')
+    lazydeltabase = False
+    if lazydelta:
+        lazydeltabase = ui.configbool(b'storage',
+                                      b'revlog.reuse-external-delta-parent')
+    if lazydeltabase is None:
+        lazydeltabase = not scmutil.gddeltaconfig(ui)
+    options[b'lazydelta'] = lazydelta
+    options[b'lazydeltabase'] = lazydeltabase
 
     chainspan = ui.configbytes(b'experimental', b'maxdeltachainspan')
     if 0 <= chainspan:
@@ -786,8 +796,24 @@
         options[b'maxchainlen'] = maxchainlen
 
     for r in requirements:
-        if r.startswith(b'exp-compression-'):
-            options[b'compengine'] = r[len(b'exp-compression-'):]
+        # we allow multiple compression engine requirements to co-exist;
+        # strictly speaking, revlogs seem to support mixed compression styles.
+        #
+        # The compression used for new entries will be "the last one"
+        prefix = r.startswith
+        if prefix('revlog-compression-') or prefix('exp-compression-'):
+            options[b'compengine'] = r.split('-', 2)[2]
+
+    options[b'zlib.level'] = ui.configint(b'storage', b'revlog.zlib.level')
+    if options[b'zlib.level'] is not None:
+        if not (0 <= options[b'zlib.level'] <= 9):
+            msg = _('invalid value for `storage.revlog.zlib.level` config: %d')
+            raise error.Abort(msg % options[b'zlib.level'])
+    options[b'zstd.level'] = ui.configint(b'storage', b'revlog.zstd.level')
+    if options[b'zstd.level'] is not None:
+        if not (0 <= options[b'zstd.level'] <= 22):
+            msg = _('invalid value for `storage.revlog.zstd.level` config: %d')
+            raise error.Abort(msg % options[b'zstd.level'])
 
     if repository.NARROW_REQUIREMENT in requirements:
         options[b'enableellipsis'] = True
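The zlib/zstd level checks above follow a common pattern for bounded integer config values: None means "unset", anything else must fall in the engine's valid range (0-9 for zlib, 0-22 for zstd). A generic sketch of that validation (checklevel is a hypothetical helper, not part of Mercurial):

def checklevel(name, value, lo, hi):
    # None means the config knob is unset; otherwise enforce the range
    if value is not None and not (lo <= value <= hi):
        raise ValueError('invalid value for `%s` config: %d'
                         % (name, value))

checklevel('storage.revlog.zlib.level', 6, 0, 9)      # ok
checklevel('storage.revlog.zstd.level', None, 0, 22)  # ok: unset
checklevel('storage.revlog.zstd.level', 42, 0, 22)    # raises ValueError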
@@ -992,7 +1018,7 @@
 
         self._dirstatevalidatewarned = False
 
-        self._branchcaches = {}
+        self._branchcaches = branchmap.BranchMapCache()
         self._revbranchcache = None
         self._filterpats = {}
         self._datafilters = {}
@@ -1160,7 +1186,17 @@
         return self
 
     def filtered(self, name, visibilityexceptions=None):
-        """Return a filtered version of a repository"""
+        """Return a filtered version of a repository
+
+        The `name` parameter is the identifier of the requested view. This
+        will return a repoview object set "exactly" to the specified view.
+
+        This function does not apply recursive filtering to a repository. For
+        example, calling `repo.filtered("served")` will return a repoview
+        using the "served" view, regardless of the initial view used by
+        `repo`.
+
+        In other words, there is always only one level of `repoview` "filtering".
+        """
         cls = repoview.newtype(self.unfiltered().__class__)
         return cls(self, name, visibilityexceptions)
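The docstring added to filtered() above stresses that views never stack: the result is always a single repoview over the unfiltered repository. A toy model of that invariant (these classes are illustrative stand-ins, not Mercurial's real repoview):

class repoview(object):
    """A view always wraps the unfiltered repo, never another view."""
    def __init__(self, base, name):
        self._base = base.unfiltered()   # collapse to one filtering level
        self.filtername = name
    def unfiltered(self):
        return self._base
    def filtered(self, name):
        return repoview(self, name)

class repo(object):
    filtername = None
    def unfiltered(self):
        return self
    def filtered(self, name):
        return repoview(self, name)

r = repo()
v = r.filtered('served').filtered('visible')
assert v.unfiltered() is r
assert v.filtername == 'visible'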
 
@@ -1227,14 +1263,14 @@
     @storecache(narrowspec.FILENAME)
     def _storenarrowmatch(self):
         if repository.NARROW_REQUIREMENT not in self.requirements:
-            return matchmod.always(self.root, '')
+            return matchmod.always()
         include, exclude = self.narrowpats
         return narrowspec.match(self.root, include=include, exclude=exclude)
 
     @storecache(narrowspec.FILENAME)
     def _narrowmatch(self):
         if repository.NARROW_REQUIREMENT not in self.requirements:
-            return matchmod.always(self.root, '')
+            return matchmod.always()
         narrowspec.checkworkingcopynarrowspec(self)
         include, exclude = self.narrowpats
         return narrowspec.match(self.root, include=include, exclude=exclude)
@@ -1252,7 +1288,7 @@
             if includeexact and not self._narrowmatch.always():
                 # do not exclude explicitly-specified paths so that they can
                 # be warned later on
-                em = matchmod.exact(match._root, match._cwd, match.files())
+                em = matchmod.exact(match.files())
                 nm = matchmod.unionmatcher([self._narrowmatch, em])
                 return matchmod.intersectmatchers(match, nm)
             return matchmod.intersectmatchers(match, self._narrowmatch)
@@ -1520,8 +1556,7 @@
     def branchmap(self):
         '''returns a dictionary {branch: [branchheads]} with branchheads
         ordered by increasing revision number'''
-        branchmap.updatecache(self)
-        return self._branchcaches[self.filtername]
+        return self._branchcaches[self]
 
     @unfilteredmethod
     def revbranchcache(self):
@@ -1546,10 +1581,13 @@
                 pass
 
     def lookup(self, key):
-        return scmutil.revsymbol(self, key).node()
+        node = scmutil.revsymbol(self, key).node()
+        if node is None:
+            raise error.RepoLookupError(_("unknown revision '%s'") % key)
+        return node
 
     def lookupbranch(self, key):
-        if key in self.branchmap():
+        if self.branchmap().hasbranch(key):
             return key
 
         return scmutil.revsymbol(self, key).branch()
@@ -1811,7 +1849,6 @@
                     args = tr.hookargs.copy()
                     args.update(bookmarks.preparehookargs(name, old, new))
                     repo.hook('pretxnclose-bookmark', throw=True,
-                              txnname=desc,
                               **pycompat.strkwargs(args))
             if hook.hashook(repo.ui, 'pretxnclose-phase'):
                 cl = repo.unfiltered().changelog
@@ -1819,11 +1856,11 @@
                     args = tr.hookargs.copy()
                     node = hex(cl.node(rev))
                     args.update(phases.preparehookargs(node, old, new))
-                    repo.hook('pretxnclose-phase', throw=True, txnname=desc,
+                    repo.hook('pretxnclose-phase', throw=True,
                               **pycompat.strkwargs(args))
 
             repo.hook('pretxnclose', throw=True,
-                      txnname=desc, **pycompat.strkwargs(tr.hookargs))
+                      **pycompat.strkwargs(tr.hookargs))
         def releasefn(tr, success):
             repo = reporef()
             if success:
@@ -1857,6 +1894,7 @@
         tr.changes['bookmarks'] = {}
 
         tr.hookargs['txnid'] = txnid
+        tr.hookargs['txnname'] = desc
         # note: writing the fncache only during finalize means that the file
         # is outdated when running hooks. As fncache is used for streaming
         # clone, this is not expected to break anything that happens in hooks.
@@ -1878,7 +1916,7 @@
                         args = tr.hookargs.copy()
                         args.update(bookmarks.preparehookargs(name, old, new))
                         repo.hook('txnclose-bookmark', throw=False,
-                                  txnname=desc, **pycompat.strkwargs(args))
+                                  **pycompat.strkwargs(args))
 
                 if hook.hashook(repo.ui, 'txnclose-phase'):
                     cl = repo.unfiltered().changelog
@@ -1887,10 +1925,10 @@
                         args = tr.hookargs.copy()
                         node = hex(cl.node(rev))
                         args.update(phases.preparehookargs(node, old, new))
-                        repo.hook('txnclose-phase', throw=False, txnname=desc,
+                        repo.hook('txnclose-phase', throw=False,
                                   **pycompat.strkwargs(args))
 
-                repo.hook('txnclose', throw=False, txnname=desc,
+                repo.hook('txnclose', throw=False,
                           **pycompat.strkwargs(hookargs))
             reporef()._afterlock(hookfunc)
         tr.addfinalize('txnclose-hook', txnclosehook)
@@ -1902,7 +1940,7 @@
         def txnaborthook(tr2):
             """To be run if transaction is aborted
             """
-            reporef().hook('txnabort', throw=False, txnname=desc,
+            reporef().hook('txnabort', throw=False,
                            **pycompat.strkwargs(tr2.hookargs))
         tr.addabort('txnabort-hook', txnaborthook)
         # avoid eager cache invalidation. in-memory data should be identical
@@ -2011,8 +2049,7 @@
             self.svfs.rename('undo.phaseroots', 'phaseroots', checkambig=True)
         self.invalidate()
 
-        parentgone = (parents[0] not in self.changelog.nodemap or
-                      parents[1] not in self.changelog.nodemap)
+        parentgone = any(p not in self.changelog.nodemap for p in parents)
         if parentgone:
             # prevent dirstateguard from overwriting already restored one
             dsguard.close()
@@ -2074,13 +2111,15 @@
             return
 
         if tr is None or tr.changes['origrepolen'] < len(self):
-            # updating the unfiltered branchmap should refresh all the others,
+            # accessing the 'served' branchmap should refresh all the others
             self.ui.debug('updating the branch cache\n')
-            branchmap.updatecache(self.filtered('served'))
+            self.filtered('served').branchmap()
+            self.filtered('served.hidden').branchmap()
 
         if full:
-            rbc = self.revbranchcache()
-            for r in self.changelog:
+            unfi = self.unfiltered()
+            rbc = unfi.revbranchcache()
+            for r in unfi.changelog:
                 rbc.branchinfo(r)
             rbc.write()
 
@@ -2088,13 +2127,17 @@
             for ctx in self['.'].parents():
                 ctx.manifest()  # accessing the manifest is enough
 
+            # accessing tags warm the cache
+            self.tags()
+            self.filtered('served').tags()
+
     def invalidatecaches(self):
 
         if r'_tagscache' in vars(self):
             # can't use delattr on proxy
             del self.__dict__[r'_tagscache']
 
-        self.unfiltered()._branchcaches.clear()
+        self._branchcaches.clear()
         self.invalidatevolatilesets()
         self._sparsesignaturecache.clear()
 
@@ -2218,8 +2261,12 @@
             l.lock()
             return l
 
-        l = self._lock(self.svfs, "lock", wait, None,
-                       self.invalidate, _('repository %s') % self.origroot)
+        l = self._lock(vfs=self.svfs,
+                       lockname="lock",
+                       wait=wait,
+                       releasefn=None,
+                       acquirefn=self.invalidate,
+                       desc=_('repository %s') % self.origroot)
         self._lockref = weakref.ref(l)
         return l
 
@@ -2277,7 +2324,8 @@
         """Returns the wlock if it's held, or None if it's not."""
         return self._currentlock(self._wlockref)
 
-    def _filecommit(self, fctx, manifest1, manifest2, linkrev, tr, changelist):
+    def _filecommit(self, fctx, manifest1, manifest2, linkrev, tr, changelist,
+                    includecopymeta):
         """
         commit an individual file as part of a larger transaction
         """
@@ -2295,8 +2343,8 @@
 
         flog = self.file(fname)
         meta = {}
-        copy = fctx.renamed()
-        if copy and copy[0] != fname:
+        cfname = fctx.copysource()
+        if cfname and cfname != fname:
             # Mark the new revision of this file as a copy of another
             # file.  This copy data will effectively act as a parent
             # of this new revision.  If this is a merge, the first
@@ -2316,14 +2364,13 @@
             #    \- 2 --- 4        as the merge base
             #
 
-            cfname = copy[0]
-            crev = manifest1.get(cfname)
+            cnode = manifest1.get(cfname)
             newfparent = fparent2
 
             if manifest2: # branch merge
-                if fparent2 == nullid or crev is None: # copied on remote side
+                if fparent2 == nullid or cnode is None: # copied on remote side
                     if cfname in manifest2:
-                        crev = manifest2[cfname]
+                        cnode = manifest2[cfname]
                         newfparent = fparent1
 
             # Here, we used to search backwards through history to try to find
@@ -2335,10 +2382,11 @@
             # expect this outcome it can be fixed, but this is the correct
             # behavior in this circumstance.
 
-            if crev:
-                self.ui.debug(" %s: copy %s:%s\n" % (fname, cfname, hex(crev)))
-                meta["copy"] = cfname
-                meta["copyrev"] = hex(crev)
+            if cnode:
+                self.ui.debug(" %s: copy %s:%s\n" % (fname, cfname, hex(cnode)))
+                if includecopymeta:
+                    meta["copy"] = cfname
+                    meta["copyrev"] = hex(cnode)
                 fparent1, fparent2 = nullid, newfparent
             else:
                 self.ui.warn(_("warning: can't find ancestor for '%s' "
@@ -2402,18 +2450,15 @@
             raise error.Abort('%s: %s' % (f, msg))
 
         if not match:
-            match = matchmod.always(self.root, '')
+            match = matchmod.always()
 
         if not force:
             vdirs = []
             match.explicitdir = vdirs.append
             match.bad = fail
 
-        wlock = lock = tr = None
-        try:
-            wlock = self.wlock()
-            lock = self.lock() # for recent changelog (see issue4368)
-
+        # lock() for recent changelog (see issue4368)
+        with self.wlock(), self.lock():
             wctx = self[None]
             merge = len(wctx.parents()) > 1
 
@@ -2460,10 +2505,11 @@
 
             # commit subs and write new state
             if subs:
+                uipathfn = scmutil.getuipathfn(self)
                 for s in sorted(commitsubs):
                     sub = wctx.sub(s)
                     self.ui.status(_('committing subrepository %s\n') %
-                                   subrepoutil.subrelpath(sub))
+                                   uipathfn(subrepoutil.subrelpath(sub)))
                     sr = sub.commit(cctx._text, user, date)
                     newstate[s] = (newstate[s][0], sr)
                 subrepoutil.writestate(self, newstate)
@@ -2473,21 +2519,17 @@
             try:
                 self.hook("precommit", throw=True, parent1=hookp1,
                           parent2=hookp2)
-                tr = self.transaction('commit')
-                ret = self.commitctx(cctx, True)
+                with self.transaction('commit'):
+                    ret = self.commitctx(cctx, True)
+                    # update bookmarks, dirstate and mergestate
+                    bookmarks.update(self, [p1, p2], ret)
+                    cctx.markcommitted(ret)
+                    ms.reset()
             except: # re-raises
                 if edited:
                     self.ui.write(
                         _('note: commit message saved in %s\n') % msgfn)
                 raise
-            # update bookmarks, dirstate and mergestate
-            bookmarks.update(self, [p1, p2], ret)
-            cctx.markcommitted(ret)
-            ms.reset()
-            tr.close()
-
-        finally:
-            lockmod.release(tr, lock, wlock)
 
         def commithook(node=hex(ret), parent1=hookp1, parent2=hookp2):
             # hack for command that use a temporary commit (eg: histedit)
@@ -2509,13 +2551,16 @@
         from p1 or p2 are excluded from the committed ctx.files().
         """
 
-        tr = None
         p1, p2 = ctx.p1(), ctx.p2()
         user = ctx.user()
 
-        lock = self.lock()
-        try:
-            tr = self.transaction("commit")
+        writecopiesto = self.ui.config('experimental', 'copies.write-to')
+        writefilecopymeta = writecopiesto != 'changeset-only'
+        p1copies, p2copies = None, None
+        if writecopiesto in ('changeset-only', 'compatibility'):
+            p1copies = ctx.p1copies()
+            p2copies = ctx.p2copies()
+        with self.lock(), self.transaction("commit") as tr:
             trp = weakref.proxy(tr)
 
             if ctx.manifestnode():
@@ -2538,8 +2583,9 @@
                 removed = list(ctx.removed())
                 linkrev = len(self)
                 self.ui.note(_("committing files:\n"))
+                uipathfn = scmutil.getuipathfn(self)
                 for f in sorted(ctx.modified() + ctx.added()):
-                    self.ui.note(f + "\n")
+                    self.ui.note(uipathfn(f) + "\n")
                     try:
                         fctx = ctx[f]
                         if fctx is None:
@@ -2547,15 +2593,18 @@
                         else:
                             added.append(f)
                             m[f] = self._filecommit(fctx, m1, m2, linkrev,
-                                                    trp, changed)
+                                                    trp, changed,
+                                                    writefilecopymeta)
                             m.setflag(f, fctx.flags())
-                    except OSError as inst:
-                        self.ui.warn(_("trouble committing %s!\n") % f)
+                    except OSError:
+                        self.ui.warn(_("trouble committing %s!\n") %
+                                     uipathfn(f))
                         raise
                     except IOError as inst:
                         errcode = getattr(inst, 'errno', errno.ENOENT)
                         if error or errcode and errcode != errno.ENOENT:
-                            self.ui.warn(_("trouble committing %s!\n") % f)
+                            self.ui.warn(_("trouble committing %s!\n") %
+                                         uipathfn(f))
                         raise
 
                 # update manifest
@@ -2599,7 +2648,8 @@
             self.changelog.delayupdate(tr)
             n = self.changelog.add(mn, files, ctx.description(),
                                    trp, p1.node(), p2.node(),
-                                   user, ctx.date(), ctx.extra().copy())
+                                   user, ctx.date(), ctx.extra().copy(),
+                                   p1copies, p2copies)
             xp1, xp2 = p1.hex(), p2 and p2.hex() or ''
             self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
                       parent2=xp2)
@@ -2612,12 +2662,7 @@
                 #
                 # if minimal phase was 0 we don't need to retract anything
                 phases.registernew(self, tr, targetphase, [n])
-            tr.close()
             return n
-        finally:
-            if tr:
-                tr.release()
-            lock.release()
 
     @unfilteredmethod
     def destroying(self):
@@ -2727,7 +2772,7 @@
         if branch is None:
             branch = self[None].branch()
         branches = self.branchmap()
-        if branch not in branches:
+        if not branches.hasbranch(branch):
             return []
         # the cache returns heads ordered lowest to highest
         bheads = list(reversed(branches.branchheads(branch, closed=closed)))
@@ -2906,16 +2951,18 @@
             if ui.configbool('format', 'dotencode'):
                 requirements.add('dotencode')
 
-    compengine = ui.config('experimental', 'format.compression')
+    compengine = ui.config('format', 'revlog-compression')
     if compengine not in util.compengines:
         raise error.Abort(_('compression engine %s defined by '
-                            'experimental.format.compression not available') %
+                            'format.revlog-compression not available') %
                           compengine,
                           hint=_('run "hg debuginstall" to list available '
                                  'compression engines'))
 
     # zlib is the historical default and doesn't need an explicit requirement.
-    if compengine != 'zlib':
+    elif compengine == 'zstd':
+        requirements.add('revlog-compression-zstd')
+    elif compengine != 'zlib':
         requirements.add('exp-compression-%s' % compengine)
 
     if scmutil.gdinitconfig(ui):
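Several localrepo hunks above replace try/finally lock management with nested with statements (`with self.wlock(), self.lock():` and `with self.lock(), self.transaction("commit") as tr:`), which acquire left to right and release right to left even when the body raises. A small sketch of that ordering (the lock() helper here is illustrative):

from contextlib import contextmanager

@contextmanager
def lock(name):
    print('acquire', name)
    try:
        yield name
    finally:
        print('release', name)

# wlock before lock, as in commit(); releases happen in reverse order
with lock('wlock'), lock('lock'):
    print('commit work here')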
--- a/mercurial/logcmdutil.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/logcmdutil.py	Wed Apr 17 13:41:18 2019 -0400
@@ -9,6 +9,7 @@
 
 import itertools
 import os
+import posixpath
 
 from .i18n import _
 from .node import (
@@ -58,29 +59,53 @@
                    changes=None, stat=False, fp=None, graphwidth=0,
                    prefix='', root='', listsubrepos=False, hunksfilterfn=None):
     '''show diff or diffstat.'''
+    ctx1 = repo[node1]
+    ctx2 = repo[node2]
     if root:
         relroot = pathutil.canonpath(repo.root, repo.getcwd(), root)
     else:
         relroot = ''
+    copysourcematch = None
+    def compose(f, g):
+        return lambda x: f(g(x))
+    def pathfn(f):
+        return posixpath.join(prefix, f)
     if relroot != '':
         # XXX relative roots currently don't work if the root is within a
         # subrepo
-        uirelroot = match.uipath(relroot)
+        uipathfn = scmutil.getuipathfn(repo, legacyrelativevalue=True)
+        uirelroot = uipathfn(pathfn(relroot))
         relroot += '/'
         for matchroot in match.files():
             if not matchroot.startswith(relroot):
-                ui.warn(_('warning: %s not inside relative root %s\n') % (
-                    match.uipath(matchroot), uirelroot))
+                ui.warn(_('warning: %s not inside relative root %s\n') %
+                        (uipathfn(pathfn(matchroot)), uirelroot))
+
+        relrootmatch = scmutil.match(ctx2, pats=[relroot], default='path')
+        match = matchmod.intersectmatchers(match, relrootmatch)
+        copysourcematch = relrootmatch
+
+        checkroot = (repo.ui.configbool('devel', 'all-warnings') or
+                     repo.ui.configbool('devel', 'check-relroot'))
+        def relrootpathfn(f):
+            if checkroot and not f.startswith(relroot):
+                raise AssertionError(
+                    "file %s doesn't start with relroot %s" % (f, relroot))
+            return f[len(relroot):]
+        pathfn = compose(relrootpathfn, pathfn)
 
     if stat:
         diffopts = diffopts.copy(context=0, noprefix=False)
         width = 80
         if not ui.plain():
             width = ui.termwidth() - graphwidth
+        # If an explicit --root was given, don't respect ui.relative-paths
+        if not relroot:
+            pathfn = compose(scmutil.getuipathfn(repo), pathfn)
 
-    chunks = repo[node2].diff(repo[node1], match, changes, opts=diffopts,
-                              prefix=prefix, relroot=relroot,
-                              hunksfilterfn=hunksfilterfn)
+    chunks = ctx2.diff(ctx1, match, changes, opts=diffopts, pathfn=pathfn,
+                       copysourcematch=copysourcematch,
+                       hunksfilterfn=hunksfilterfn)
 
     if fp is not None or ui.canwritewithoutlabels():
         out = fp or ui
@@ -104,22 +129,21 @@
             for chunk, label in chunks:
                 ui.write(chunk, label=label)
 
-    if listsubrepos:
-        ctx1 = repo[node1]
-        ctx2 = repo[node2]
-        for subpath, sub in scmutil.itersubrepos(ctx1, ctx2):
-            tempnode2 = node2
-            try:
-                if node2 is not None:
-                    tempnode2 = ctx2.substate[subpath][1]
-            except KeyError:
-                # A subrepo that existed in node1 was deleted between node1 and
-                # node2 (inclusive). Thus, ctx2's substate won't contain that
-                # subpath. The best we can do is to ignore it.
-                tempnode2 = None
-            submatch = matchmod.subdirmatcher(subpath, match)
+    for subpath, sub in scmutil.itersubrepos(ctx1, ctx2):
+        tempnode2 = node2
+        try:
+            if node2 is not None:
+                tempnode2 = ctx2.substate[subpath][1]
+        except KeyError:
+            # A subrepo that existed in node1 was deleted between node1 and
+            # node2 (inclusive). Thus, ctx2's substate won't contain that
+            # subpath. The best we can do is to ignore it.
+            tempnode2 = None
+        submatch = matchmod.subdirmatcher(subpath, match)
+        subprefix = repo.wvfs.reljoin(prefix, subpath)
+        if listsubrepos or match.exact(subpath) or any(submatch.files()):
             sub.diff(ui, diffopts, tempnode2, submatch, changes=changes,
-                     stat=stat, fp=fp, prefix=prefix)
+                     stat=stat, fp=fp, prefix=subprefix)
 
 class changesetdiffer(object):
     """Generate diff of changeset with pre-configured filtering functions"""
@@ -518,7 +542,7 @@
     regular display via changesetprinter() is done.
     """
     postargs = (differ, opts, buffered)
-    if opts.get('template') == 'json':
+    if opts.get('template') in {'cbor', 'json'}:
         fm = ui.formatter('log', opts)
         return changesetformatter(ui, repo, fm, *postargs)
 
--- a/mercurial/logexchange.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/logexchange.py	Wed Apr 17 13:41:18 2019 -0400
@@ -97,7 +97,6 @@
 
 def activepath(repo, remote):
     """returns remote path"""
-    local = None
     # is the remote a local peer
     local = remote.local()
 
--- a/mercurial/mail.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/mail.py	Wed Apr 17 13:41:18 2019 -0400
@@ -243,6 +243,13 @@
             cs.body_encoding = email.charset.QP
             break
 
+    # On Python 2, this simply assigns a value. Python 3 inspects
+    # body and does different things depending on whether it has
+    # encode() or decode() attributes. We can get the old behavior
+    # if we pass a str and charset is None and we call set_charset().
+    # But we may get into trouble later due to Python attempting to
+    # encode/decode using the registered charset (or attempting to
+    # use ascii in the absence of a charset).
     msg.set_payload(body, cs)
 
     return msg
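The comment added above warns that set_payload() behaves differently across Python versions. On Python 3, passing a str body together with a Charset encodes the payload and records the transfer encoding, as this small standard-library-only sketch shows:

import email.charset
import email.message

cs = email.charset.Charset('utf-8')
cs.body_encoding = email.charset.QP       # quoted-printable, as above

msg = email.message.Message()
# Python 3 inspects the payload: a str body is charset-encoded and a
# Content-Transfer-Encoding header is added; bytes would be stored as-is.
msg.set_payload('hello world', cs)
print(msg['Content-Transfer-Encoding'])   # quoted-printable
print(msg.get_payload())                  # hello world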
--- a/mercurial/manifest.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/manifest.py	Wed Apr 17 13:41:18 2019 -0400
@@ -283,7 +283,6 @@
         if len(self.extradata) == 0:
             return
         l = []
-        last_cut = 0
         i = 0
         offset = 0
         self.extrainfo = [0] * len(self.positions)
@@ -1277,6 +1276,9 @@
     These are written in reverse cache order (oldest to newest).
 
     """
+
+    _file = 'manifestfulltextcache'
+
     def __init__(self, max):
         super(manifestfulltextcache, self).__init__(max)
         self._dirty = False
@@ -1288,7 +1290,7 @@
             return
 
         try:
-            with self._opener('manifestfulltextcache') as fp:
+            with self._opener(self._file) as fp:
                 set = super(manifestfulltextcache, self).__setitem__
                 # ignore trailing data, this is a cache, corruption is skipped
                 while True:
@@ -1314,8 +1316,7 @@
         if not self._dirty or self._opener is None:
             return
         # rotate backwards to the first used node
-        with self._opener(
-                'manifestfulltextcache', 'w', atomictemp=True, checkambig=True
+        with self._opener(self._file, 'w', atomictemp=True, checkambig=True
             ) as fp:
             node = self._head.prev
             while True:
@@ -1434,10 +1435,13 @@
 
     def _setupmanifestcachehooks(self, repo):
         """Persist the manifestfulltextcache on lock release"""
-        if not util.safehasattr(repo, '_lockref'):
+        if not util.safehasattr(repo, '_wlockref'):
             return
 
-        self._fulltextcache._opener = repo.cachevfs
+        self._fulltextcache._opener = repo.wcachevfs
+        if repo._currentlock(repo._wlockref) is None:
+            return
+
         reporef = weakref.ref(repo)
         manifestrevlogref = weakref.ref(self)
 
@@ -1451,8 +1455,7 @@
                 return
             self._fulltextcache.write()
 
-        if repo._currentlock(repo._lockref) is not None:
-            repo._afterlock(persistmanifestcache)
+        repo._afterlock(persistmanifestcache)
 
     @property
     def fulltextcache(self):
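_setupmanifestcachehooks() above registers an after-lock callback but holds only a weak reference to the repository, so the hook cannot by itself keep the repo (and its caches) alive. A stripped-down sketch of that pattern (Repo and its callback list are stand-ins, not Mercurial's API):

import weakref

class Cache(object):
    def write(self):
        print('cache written')

class Repo(object):
    def __init__(self):
        self._afterlock = []
        self.cache = Cache()
    def release(self):
        for fn in self._afterlock:
            fn()

repo = Repo()
reporef = weakref.ref(repo)        # weak: no repo -> hook -> repo cycle

def persist():
    r = reporef()
    if r is None:                   # repo already garbage-collected
        return
    r.cache.write()

repo._afterlock.append(persist)
repo.release()                      # prints 'cache written'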
--- a/mercurial/match.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/match.py	Wed Apr 17 13:41:18 2019 -0400
@@ -42,7 +42,7 @@
     except AttributeError:
         return m.match
 
-def _expandsets(root, cwd, kindpats, ctx, listsubrepos, badfn):
+def _expandsets(kindpats, ctx=None, listsubrepos=False, badfn=None):
     '''Returns the kindpats list with the 'set' patterns expanded to matchers'''
     matchers = []
     other = []
@@ -57,7 +57,7 @@
             if listsubrepos:
                 for subpath in ctx.substate:
                     sm = ctx.sub(subpath).matchfileset(pat, badfn=badfn)
-                    pm = prefixdirmatcher(root, cwd, subpath, sm, badfn=badfn)
+                    pm = prefixdirmatcher(subpath, sm, badfn=badfn)
                     matchers.append(pm)
 
             continue
@@ -97,27 +97,26 @@
             return False
     return True
 
-def _buildkindpatsmatcher(matchercls, root, cwd, kindpats, ctx=None,
+def _buildkindpatsmatcher(matchercls, root, kindpats, ctx=None,
                           listsubrepos=False, badfn=None):
     matchers = []
-    fms, kindpats = _expandsets(root, cwd, kindpats, ctx=ctx,
+    fms, kindpats = _expandsets(kindpats, ctx=ctx,
                                 listsubrepos=listsubrepos, badfn=badfn)
     if kindpats:
-        m = matchercls(root, cwd, kindpats, listsubrepos=listsubrepos,
-                       badfn=badfn)
+        m = matchercls(root, kindpats, badfn=badfn)
         matchers.append(m)
     if fms:
         matchers.extend(fms)
     if not matchers:
-        return nevermatcher(root, cwd, badfn=badfn)
+        return nevermatcher(badfn=badfn)
     if len(matchers) == 1:
         return matchers[0]
     return unionmatcher(matchers)
 
 def match(root, cwd, patterns=None, include=None, exclude=None, default='glob',
-          exact=False, auditor=None, ctx=None, listsubrepos=False, warn=None,
+          auditor=None, ctx=None, listsubrepos=False, warn=None,
           badfn=None, icasefs=False):
-    """build an object to match a set of file patterns
+    r"""build an object to match a set of file patterns
 
     arguments:
     root - the canonical root of the tree you're matching against
@@ -126,7 +125,9 @@
     include - patterns to include (unless they are excluded)
     exclude - patterns to exclude (even if they are included)
     default - if a pattern in patterns has no explicit type, assume this one
-    exact - patterns are actually filenames (include/exclude still apply)
+    auditor - optional path auditor
+    ctx - optional changecontext
+    listsubrepos - if True, recurse into subrepositories
     warn - optional function used for printing warnings
     badfn - optional bad() callback for this matcher instead of the default
     icasefs - make a matcher for wdir on case insensitive filesystems, which
@@ -147,12 +148,55 @@
     'subinclude:<path>' - a file of patterns to match against files under
                           the same directory
     '<something>' - a pattern of the specified default type
+
+    Usually a patternmatcher is returned:
+    >>> match(b'foo', b'.', [b're:.*\.c$', b'path:foo/a', b'*.py'])
+    <patternmatcher patterns='.*\\.c$|foo/a(?:/|$)|[^/]*\\.py$'>
+
+    Combining 'patterns' with 'include' (resp. 'exclude') gives an
+    intersectionmatcher (resp. a differencematcher):
+    >>> type(match(b'foo', b'.', [b're:.*\.c$'], include=[b'path:lib']))
+    <class 'mercurial.match.intersectionmatcher'>
+    >>> type(match(b'foo', b'.', [b're:.*\.c$'], exclude=[b'path:build']))
+    <class 'mercurial.match.differencematcher'>
+
+    Notice that, if 'patterns' is empty, an alwaysmatcher is returned:
+    >>> match(b'foo', b'.', [])
+    <alwaysmatcher>
+
+    The 'default' argument determines which kind of pattern is assumed if a
+    pattern has no prefix:
+    >>> match(b'foo', b'.', [b'.*\.c$'], default=b're')
+    <patternmatcher patterns='.*\\.c$'>
+    >>> match(b'foo', b'.', [b'main.py'], default=b'relpath')
+    <patternmatcher patterns='main\\.py(?:/|$)'>
+    >>> match(b'foo', b'.', [b'main.py'], default=b're')
+    <patternmatcher patterns='main.py'>
+
+    The primary use of matchers is to check whether a value (usually a file
+    name) matches against one of the patterns given at initialization. There
+    are two ways of doing this check.
+
+    >>> m = match(b'foo', b'', [b're:.*\.c$', b'relpath:a'])
+
+    1. Calling the matcher with a file name returns True if any pattern
+    matches that file name:
+    >>> m(b'a')
+    True
+    >>> m(b'main.c')
+    True
+    >>> m(b'test.py')
+    False
+
+    2. Using the exact() method only returns True if the file name matches one
+    of the exact patterns (i.e. not re: or glob: patterns):
+    >>> m.exact(b'a')
+    True
+    >>> m.exact(b'main.c')
+    False
     """
     normalize = _donormalize
     if icasefs:
-        if exact:
-            raise error.ProgrammingError("a case-insensitive exact matcher "
-                                         "doesn't make sense")
         dirstate = ctx.repo().dirstate
         dsnormalize = dirstate.normalize
 
@@ -171,41 +215,38 @@
                 kindpats.append((kind, pats, source))
             return kindpats
 
-    if exact:
-        m = exactmatcher(root, cwd, patterns, badfn)
-    elif patterns:
+    if patterns:
         kindpats = normalize(patterns, default, root, cwd, auditor, warn)
         if _kindpatsalwaysmatch(kindpats):
-            m = alwaysmatcher(root, cwd, badfn, relativeuipath=True)
+            m = alwaysmatcher(badfn)
         else:
-            m = _buildkindpatsmatcher(patternmatcher, root, cwd, kindpats,
-                                      ctx=ctx, listsubrepos=listsubrepos,
-                                      badfn=badfn)
+            m = _buildkindpatsmatcher(patternmatcher, root, kindpats, ctx=ctx,
+                                      listsubrepos=listsubrepos, badfn=badfn)
     else:
         # It's a little strange that no patterns means to match everything.
         # Consider changing this to match nothing (probably using nevermatcher).
-        m = alwaysmatcher(root, cwd, badfn)
+        m = alwaysmatcher(badfn)
 
     if include:
         kindpats = normalize(include, 'glob', root, cwd, auditor, warn)
-        im = _buildkindpatsmatcher(includematcher, root, cwd, kindpats, ctx=ctx,
+        im = _buildkindpatsmatcher(includematcher, root, kindpats, ctx=ctx,
                                    listsubrepos=listsubrepos, badfn=None)
         m = intersectmatchers(m, im)
     if exclude:
         kindpats = normalize(exclude, 'glob', root, cwd, auditor, warn)
-        em = _buildkindpatsmatcher(includematcher, root, cwd, kindpats, ctx=ctx,
+        em = _buildkindpatsmatcher(includematcher, root, kindpats, ctx=ctx,
                                    listsubrepos=listsubrepos, badfn=None)
         m = differencematcher(m, em)
     return m
 
-def exact(root, cwd, files, badfn=None):
-    return exactmatcher(root, cwd, files, badfn=badfn)
+def exact(files, badfn=None):
+    return exactmatcher(files, badfn=badfn)
 
-def always(root, cwd):
-    return alwaysmatcher(root, cwd)
+def always(badfn=None):
+    return alwaysmatcher(badfn)
 
-def never(root, cwd):
-    return nevermatcher(root, cwd)
+def never(badfn=None):
+    return nevermatcher(badfn)
 
 def badmatch(match, badfn):
     """Make a copy of the given matcher, replacing its bad method with the given
@@ -215,13 +256,13 @@
     m.bad = badfn
     return m
 
-def _donormalize(patterns, default, root, cwd, auditor, warn):
+def _donormalize(patterns, default, root, cwd, auditor=None, warn=None):
     '''Convert 'kind:pat' from the patterns list to tuples with kind and
     normalized and rooted patterns and with listfiles expanded.'''
     kindpats = []
     for kind, pat in [_patsplit(p, default) for p in patterns]:
         if kind in cwdrelativepatternkinds:
-            pat = pathutil.canonpath(root, cwd, pat, auditor)
+            pat = pathutil.canonpath(root, cwd, pat, auditor=auditor)
         elif kind in ('relglob', 'path', 'rootfilesin', 'rootglob'):
             pat = util.normpath(pat)
         elif kind in ('listfile', 'listfile0'):
@@ -258,12 +299,9 @@
 
 class basematcher(object):
 
-    def __init__(self, root, cwd, badfn=None, relativeuipath=True):
-        self._root = root
-        self._cwd = cwd
+    def __init__(self, badfn=None):
         if badfn is not None:
             self.bad = badfn
-        self._relativeuipath = relativeuipath
 
     def __call__(self, fn):
         return self.matchfn(fn)
@@ -284,21 +322,6 @@
     # by recursive traversal is visited.
     traversedir = None
 
-    def abs(self, f):
-        '''Convert a repo path back to path that is relative to the root of the
-        matcher.'''
-        return f
-
-    def rel(self, f):
-        '''Convert repo path back to path that is relative to cwd of matcher.'''
-        return util.pathto(self._root, self._cwd, f)
-
-    def uipath(self, f):
-        '''Convert repo path to a display path.  If patterns or -I/-X were used
-        to create this matcher, the display path will be relative to cwd.
-        Otherwise it is relative to the root of the repo.'''
-        return (self._relativeuipath and self.rel(f)) or self.abs(f)
-
     @propertycache
     def _files(self):
         return []
@@ -399,9 +422,8 @@
 class alwaysmatcher(basematcher):
     '''Matches everything.'''
 
-    def __init__(self, root, cwd, badfn=None, relativeuipath=False):
-        super(alwaysmatcher, self).__init__(root, cwd, badfn,
-                                            relativeuipath=relativeuipath)
+    def __init__(self, badfn=None):
+        super(alwaysmatcher, self).__init__(badfn)
 
     def always(self):
         return True
@@ -421,8 +443,8 @@
 class nevermatcher(basematcher):
     '''Matches nothing.'''
 
-    def __init__(self, root, cwd, badfn=None):
-        super(nevermatcher, self).__init__(root, cwd, badfn)
+    def __init__(self, badfn=None):
+        super(nevermatcher, self).__init__(badfn)
 
     # It's a little weird to say that the nevermatcher is an exact matcher
     # or a prefix matcher, but it seems to make sense to let callers take
@@ -447,8 +469,8 @@
 class predicatematcher(basematcher):
     """A matcher adapter for a simple boolean function"""
 
-    def __init__(self, root, cwd, predfn, predrepr=None, badfn=None):
-        super(predicatematcher, self).__init__(root, cwd, badfn)
+    def __init__(self, predfn, predrepr=None, badfn=None):
+        super(predicatematcher, self).__init__(badfn)
         self.matchfn = predfn
         self._predrepr = predrepr
 
@@ -459,14 +481,44 @@
         return '<predicatenmatcher pred=%s>' % s
 
 class patternmatcher(basematcher):
+    """Matches a set of (kind, pat, source) against a 'root' directory.
 
-    def __init__(self, root, cwd, kindpats, listsubrepos=False, badfn=None):
-        super(patternmatcher, self).__init__(root, cwd, badfn)
+    >>> kindpats = [
+    ...     (b're', b'.*\.c$', b''),
+    ...     (b'path', b'foo/a', b''),
+    ...     (b'relpath', b'b', b''),
+    ...     (b'glob', b'*.h', b''),
+    ... ]
+    >>> m = patternmatcher(b'foo', kindpats)
+    >>> m(b'main.c')  # matches re:.*\.c$
+    True
+    >>> m(b'b.txt')
+    False
+    >>> m(b'foo/a')  # matches path:foo/a
+    True
+    >>> m(b'a')  # does not match path:b, since 'root' is 'foo'
+    False
+    >>> m(b'b')  # matches relpath:b, since 'root' is 'foo'
+    True
+    >>> m(b'lib.h')  # matches glob:*.h
+    True
+
+    >>> m.files()
+    ['.', 'foo/a', 'b', '.']
+    >>> m.exact(b'foo/a')
+    True
+    >>> m.exact(b'b')
+    True
+    >>> m.exact(b'lib.h')  # exact matches are for (rel)path kinds
+    False
+    """
+
+    def __init__(self, root, kindpats, badfn=None):
+        super(patternmatcher, self).__init__(badfn)
 
         self._files = _explicitfiles(kindpats)
         self._prefix = _prefix(kindpats)
-        self._pats, self.matchfn = _buildmatch(kindpats, '$', listsubrepos,
-                                               root)
+        self._pats, self.matchfn = _buildmatch(kindpats, '$', root)
 
     @propertycache
     def _dirs(self):
@@ -539,11 +591,10 @@
 
 class includematcher(basematcher):
 
-    def __init__(self, root, cwd, kindpats, listsubrepos=False, badfn=None):
-        super(includematcher, self).__init__(root, cwd, badfn)
+    def __init__(self, root, kindpats, badfn=None):
+        super(includematcher, self).__init__(badfn)
 
-        self._pats, self.matchfn = _buildmatch(kindpats, '(?:/|$)',
-                                               listsubrepos, root)
+        self._pats, self.matchfn = _buildmatch(kindpats, '(?:/|$)', root)
         self._prefix = _prefix(kindpats)
         roots, dirs, parents = _rootsdirsandparents(kindpats)
         # roots are directories which are recursively included.
@@ -597,12 +648,28 @@
         return ('<includematcher includes=%r>' % pycompat.bytestr(self._pats))
 
 class exactmatcher(basematcher):
-    '''Matches the input files exactly. They are interpreted as paths, not
+    r'''Matches the input files exactly. They are interpreted as paths, not
     patterns (so no kind-prefixes).
+
+    >>> m = exactmatcher([b'a.txt', b're:.*\.c$'])
+    >>> m(b'a.txt')
+    True
+    >>> m(b'b.txt')
+    False
+
+    Input files that would be matched are exactly those returned by .files()
+    >>> m.files()
+    ['a.txt', 're:.*\\.c$']
+
+    So the pattern 're:.*\.c$' is not treated as a regex, but as a file name
+    >>> m(b'main.c')
+    False
+    >>> m(b're:.*\.c$')
+    True
     '''
 
-    def __init__(self, root, cwd, files, badfn=None):
-        super(exactmatcher, self).__init__(root, cwd, badfn)
+    def __init__(self, files, badfn=None):
+        super(exactmatcher, self).__init__(badfn)
 
         if isinstance(files, list):
             self._files = files
@@ -649,11 +716,11 @@
     '''Composes two matchers by matching if the first matches and the second
     does not.
 
-    The second matcher's non-matching-attributes (root, cwd, bad, explicitdir,
+    The second matcher's non-matching-attributes (bad, explicitdir,
     traversedir) are ignored.
     '''
     def __init__(self, m1, m2):
-        super(differencematcher, self).__init__(m1._root, m1._cwd)
+        super(differencematcher, self).__init__()
         self._m1 = m1
         self._m2 = m2
         self.bad = m1.bad
@@ -677,6 +744,9 @@
     def visitdir(self, dir):
         if self._m2.visitdir(dir) == 'all':
             return False
+        elif not self._m2.visitdir(dir):
+            # m2 does not match dir, so we can return 'all' here if possible
+            return self._m1.visitdir(dir)
         return bool(self._m1.visitdir(dir))
 
     def visitchildrenset(self, dir):
@@ -714,7 +784,7 @@
 def intersectmatchers(m1, m2):
     '''Composes two matchers by matching if both of them match.
 
-    The second matcher's non-matching-attributes (root, cwd, bad, explicitdir,
+    The second matcher's non-matching-attributes (bad, explicitdir,
     traversedir) are ignored.
     '''
     if m1 is None or m2 is None:
@@ -726,19 +796,15 @@
         m.bad = m1.bad
         m.explicitdir = m1.explicitdir
         m.traversedir = m1.traversedir
-        m.abs = m1.abs
-        m.rel = m1.rel
-        m._relativeuipath |= m1._relativeuipath
         return m
     if m2.always():
         m = copy.copy(m1)
-        m._relativeuipath |= m2._relativeuipath
         return m
     return intersectionmatcher(m1, m2)
 
 class intersectionmatcher(basematcher):
     def __init__(self, m1, m2):
-        super(intersectionmatcher, self).__init__(m1._root, m1._cwd)
+        super(intersectionmatcher, self).__init__()
         self._m1 = m1
         self._m2 = m2
         self.bad = m1.bad
@@ -805,31 +871,27 @@
     >>> from . import pycompat
     >>> m1 = match(b'root', b'', [b'a.txt', b'sub/b.txt'])
     >>> m2 = subdirmatcher(b'sub', m1)
-    >>> bool(m2(b'a.txt'))
+    >>> m2(b'a.txt')
     False
-    >>> bool(m2(b'b.txt'))
+    >>> m2(b'b.txt')
     True
-    >>> bool(m2.matchfn(b'a.txt'))
+    >>> m2.matchfn(b'a.txt')
     False
-    >>> bool(m2.matchfn(b'b.txt'))
+    >>> m2.matchfn(b'b.txt')
     True
     >>> m2.files()
     ['b.txt']
     >>> m2.exact(b'b.txt')
     True
-    >>> util.pconvert(m2.rel(b'b.txt'))
-    'sub/b.txt'
     >>> def bad(f, msg):
     ...     print(pycompat.sysstr(b"%s: %s" % (f, msg)))
     >>> m1.bad = bad
     >>> m2.bad(b'x.txt', b'No such file')
     sub/x.txt: No such file
-    >>> m2.abs(b'c.txt')
-    'sub/c.txt'
     """
 
     def __init__(self, path, matcher):
-        super(subdirmatcher, self).__init__(matcher._root, matcher._cwd)
+        super(subdirmatcher, self).__init__()
         self._path = path
         self._matcher = matcher
         self._always = matcher.always()
@@ -845,15 +907,6 @@
     def bad(self, f, msg):
         self._matcher.bad(self._path + "/" + f, msg)
 
-    def abs(self, f):
-        return self._matcher.abs(self._path + "/" + f)
-
-    def rel(self, f):
-        return self._matcher.rel(self._path + "/" + f)
-
-    def uipath(self, f):
-        return self._matcher.uipath(self._path + "/" + f)
-
     def matchfn(self, f):
         # Some information is lost in the superclass's constructor, so we
         # can not accurately create the matching function for the subdirectory
@@ -889,19 +942,19 @@
 class prefixdirmatcher(basematcher):
     """Adapt a matcher to work on a parent directory.
 
-    The matcher's non-matching-attributes (root, cwd, bad, explicitdir,
-    traversedir) are ignored.
+    The matcher's non-matching-attributes (bad, explicitdir, traversedir) are
+    ignored.
 
     The prefix path should usually be the relative path from the root of
     this matcher to the root of the wrapped matcher.
 
     >>> m1 = match(util.localpath(b'root/d/e'), b'f', [b'../a.txt', b'b.txt'])
-    >>> m2 = prefixdirmatcher(b'root', b'd/e/f', b'd/e', m1)
-    >>> bool(m2(b'a.txt'),)
+    >>> m2 = prefixdirmatcher(b'd/e', m1)
+    >>> m2(b'a.txt')
     False
-    >>> bool(m2(b'd/e/a.txt'))
+    >>> m2(b'd/e/a.txt')
     True
-    >>> bool(m2(b'd/e/b.txt'))
+    >>> m2(b'd/e/b.txt')
     False
     >>> m2.files()
     ['d/e/a.txt', 'd/e/f/b.txt']
@@ -919,8 +972,8 @@
     False
     """
 
-    def __init__(self, root, cwd, path, matcher, badfn=None):
-        super(prefixdirmatcher, self).__init__(root, cwd, badfn)
+    def __init__(self, path, matcher, badfn=None):
+        super(prefixdirmatcher, self).__init__(badfn)
         if not path:
             raise error.ProgrammingError('prefix path must not be empty')
         self._path = path
@@ -970,13 +1023,13 @@
 class unionmatcher(basematcher):
     """A matcher that is the union of several matchers.
 
-    The non-matching-attributes (root, cwd, bad, explicitdir, traversedir) are
-    taken from the first matcher.
+    The non-matching-attributes (bad, explicitdir, traversedir) are taken from
+    the first matcher.
     """
 
     def __init__(self, matchers):
         m1 = matchers[0]
-        super(unionmatcher, self).__init__(m1._root, m1._cwd)
+        super(unionmatcher, self).__init__()
         self.explicitdir = m1.explicitdir
         self.traversedir = m1.traversedir
         self._matchers = matchers
@@ -1020,7 +1073,18 @@
         return ('<unionmatcher matchers=%r>' % self._matchers)
 
 def patkind(pattern, default=None):
-    '''If pattern is 'kind:pat' with a known kind, return kind.'''
+    '''If pattern is 'kind:pat' with a known kind, return kind.
+
+    >>> patkind(b're:.*\.c$')
+    're'
+    >>> patkind(b'glob:*.c')
+    'glob'
+    >>> patkind(b'relpath:test.py')
+    'relpath'
+    >>> patkind(b'main.py')
+    >>> patkind(b'main.py', default=b're')
+    're'
+    '''
     return _patsplit(pattern, default)[0]
 
 def _patsplit(pattern, default):
@@ -1142,7 +1206,7 @@
         return _globre(pat) + globsuffix
     raise error.ProgrammingError('not a regex pattern: %s:%s' % (kind, pat))
 
-def _buildmatch(kindpats, globsuffix, listsubrepos, root):
+def _buildmatch(kindpats, globsuffix, root):
     '''Return regexp string and a matcher function for kindpats.
     globsuffix is appended to the regexp of globs.'''
     matchfuncs = []
@@ -1223,7 +1287,8 @@
             groupsize += piecesize + 1
 
         if startidx == 0:
-            func = _rematcher(fullregexp)
+            matcher = _rematcher(fullregexp)
+            func = lambda s: bool(matcher(s))
         else:
             group = regexps[startidx:]
             allgroups.append(_joinregexes(group))
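
The two-step form introduced in the last hunk exists so that the resulting
matchfn returns real booleans instead of re.Match objects. A minimal
standalone sketch of the idiom (plain re, not Mercurial's _rematcher
wrapper)::

    import re

    matcher = re.compile(r'[^/]*\.py$').match
    func = lambda s: bool(matcher(s))  # Match-or-None normalized to True/False

    func('setup.py')  # True
    func('README')    # False
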
--- a/mercurial/merge.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/merge.py	Wed Apr 17 13:41:18 2019 -0400
@@ -391,9 +391,9 @@
         """
         # Check local variables before looking at filesystem for performance
         # reasons.
-        return bool(self._local) or bool(self._state) or \
-               self._repo.vfs.exists(self.statepathv1) or \
-               self._repo.vfs.exists(self.statepathv2)
+        return (bool(self._local) or bool(self._state) or
+                self._repo.vfs.exists(self.statepathv1) or
+                self._repo.vfs.exists(self.statepathv2))
 
     def commit(self):
         """Write current state on disk (if necessary)"""
@@ -815,8 +815,8 @@
                     fileconflicts.add(f)
 
         allconflicts = fileconflicts | pathconflicts
-        ignoredconflicts = set([c for c in allconflicts
-                                if repo.dirstate._ignore(c)])
+        ignoredconflicts = {c for c in allconflicts
+                            if repo.dirstate._ignore(c)}
         unknownconflicts = allconflicts - ignoredconflicts
         collectconflicts(ignoredconflicts, ignoredconfig)
         collectconflicts(unknownconflicts, unknownconfig)
@@ -1104,7 +1104,7 @@
     Raise an exception if the merge cannot be completed because the repo is
     narrowed.
     """
-    nooptypes = set(['k']) # TODO: handle with nonconflicttypes
+    nooptypes = {'k'} # TODO: handle with nonconflicttypes
     nonconflicttypes = set('a am c cm f g r e'.split())
     # We mutate the items in the dict during iteration, so iterate
     # over a copy.
@@ -1186,9 +1186,6 @@
 
     diff = m1.diff(m2, match=matcher)
 
-    if matcher is None:
-        matcher = matchmod.always('', '')
-
     actions = {}
     for f, ((n1, fl1), (n2, fl2)) in diff.iteritems():
         if n1 and n2: # file exists on both local and remote side
@@ -1502,15 +1499,15 @@
                 # If a file or directory exists with the same name, back that
                 # up.  Otherwise, look to see if there is a file that conflicts
                 # with a directory this file is in, and if so, back that up.
-                absf = repo.wjoin(f)
+                conflicting = f
                 if not repo.wvfs.lexists(f):
                     for p in util.finddirs(f):
                         if repo.wvfs.isfileorlink(p):
-                            absf = repo.wjoin(p)
+                            conflicting = p
                             break
-                orig = scmutil.origpath(ui, repo, absf)
-                if repo.wvfs.lexists(absf):
-                    util.rename(absf, orig)
+                if repo.wvfs.lexists(conflicting):
+                    orig = scmutil.backuppath(ui, repo, conflicting)
+                    util.rename(repo.wjoin(conflicting), orig)
             wctx[f].clearunknown()
             atomictemp = ui.configbool("experimental", "update.atomic-file")
             wctx[f].write(fctx(f).data(), flags, backgroundclose=True,
@@ -2134,14 +2131,14 @@
         for f, fl in sorted(diverge.iteritems()):
             repo.ui.warn(_("note: possible conflict - %s was renamed "
                            "multiple times to:\n") % f)
-            for nf in fl:
+            for nf in sorted(fl):
                 repo.ui.warn(" %s\n" % nf)
 
         # rename and delete
         for f, fl in sorted(renamedelete.iteritems()):
             repo.ui.warn(_("note: possible conflict - %s was deleted "
                            "and renamed to:\n") % f)
-            for nf in fl:
+            for nf in sorted(fl):
                 repo.ui.warn(" %s\n" % nf)
 
         ### apply phase
@@ -2208,7 +2205,7 @@
                   error=stats.unresolvedcount)
     return stats
 
-def graft(repo, ctx, pctx, labels, keepparent=False,
+def graft(repo, ctx, pctx, labels=None, keepparent=False,
           keepconflictparent=False):
     """Do a graft-like merge.
 
--- a/mercurial/minirst.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/minirst.py	Wed Apr 17 13:41:18 2019 -0400
@@ -114,9 +114,9 @@
                 # Partially minimized form: remove space and both
                 # colons.
                 blocks[i]['lines'][-1] = blocks[i]['lines'][-1][:-3]
-            elif len(blocks[i]['lines']) == 1 and \
-                 blocks[i]['lines'][0].lstrip(' ').startswith('.. ') and \
-                 blocks[i]['lines'][0].find(' ', 3) == -1:
+            elif (len(blocks[i]['lines']) == 1 and
+                  blocks[i]['lines'][0].lstrip(' ').startswith('.. ') and
+                  blocks[i]['lines'][0].find(' ', 3) == -1):
                 # directive on its own line, not a literal block
                 i += 1
                 continue
@@ -641,7 +641,6 @@
 
 def parse(text, indent=0, keep=None, admonitions=None):
     """Parse text into a list of blocks"""
-    pruned = []
     blocks = findblocks(text)
     for b in blocks:
         b['indent'] += indent
@@ -736,7 +735,6 @@
     '''return a list of (section path, nesting level, blocks) tuples'''
     nest = ""
     names = ()
-    level = 0
     secs = []
 
     def getname(b):
@@ -792,8 +790,8 @@
                     if section['type'] != 'margin':
                         sindent = section['indent']
                         if len(section['lines']) > 1:
-                            sindent += len(section['lines'][1]) - \
-                              len(section['lines'][1].lstrip(' '))
+                            sindent += (len(section['lines'][1]) -
+                                        len(section['lines'][1].lstrip(' ')))
                         if bindent >= sindent:
                             break
                     pointer += 1
--- a/mercurial/mpatch.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/mpatch.c	Wed Apr 17 13:41:18 2019 -0400
@@ -41,8 +41,9 @@
 {
 	struct mpatch_flist *a = NULL;
 
-	if (size < 1)
+	if (size < 1) {
 		size = 1;
+	}
 
 	a = (struct mpatch_flist *)malloc(sizeof(struct mpatch_flist));
 	if (a) {
@@ -110,10 +111,12 @@
 
 	while (s != src->tail) {
 		int soffset = s->start;
-		if (!safeadd(offset, &soffset))
+		if (!safeadd(offset, &soffset)) {
 			break; /* add would overflow, oh well */
-		if (soffset >= cut)
+		}
+		if (soffset >= cut) {
 			break; /* we've gone far enough */
+		}
 
 		postend = offset;
 		if (!safeadd(s->start, &postend) ||
@@ -139,11 +142,13 @@
 			if (!safesub(offset, &c)) {
 				break;
 			}
-			if (s->end < c)
+			if (s->end < c) {
 				c = s->end;
+			}
 			l = cut - offset - s->start;
-			if (s->len < l)
+			if (s->len < l) {
 				l = s->len;
+			}
 
 			offset += s->start + l - c;
 
@@ -176,8 +181,9 @@
 		if (!safeadd(offset, &cmpcut)) {
 			break;
 		}
-		if (cmpcut >= cut)
+		if (cmpcut >= cut) {
 			break;
+		}
 
 		postend = offset;
 		if (!safeadd(s->start, &postend)) {
@@ -205,11 +211,13 @@
 			if (!safesub(offset, &c)) {
 				break;
 			}
-			if (s->end < c)
+			if (s->end < c) {
 				c = s->end;
+			}
 			l = cut - offset - s->start;
-			if (s->len < l)
+			if (s->len < l) {
 				l = s->len;
+			}
 
 			offset += s->start + l - c;
 			s->start = c;
@@ -233,8 +241,9 @@
 	struct mpatch_frag *bh, *ct;
 	int offset = 0, post;
 
-	if (a && b)
+	if (a && b) {
 		c = lalloc((lsize(a) + lsize(b)) * 2);
+	}
 
 	if (c) {
 
@@ -284,8 +293,9 @@
 
 	/* assume worst case size, we won't have many of these lists */
 	l = lalloc(len / 12 + 1);
-	if (!l)
+	if (!l) {
 		return MPATCH_ERR_NO_MEM;
+	}
 
 	lt = l->tail;
 
@@ -295,8 +305,9 @@
 		lt->start = getbe32(bin + pos);
 		lt->end = getbe32(bin + pos + 4);
 		lt->len = getbe32(bin + pos + 8);
-		if (lt->start < 0 || lt->start > lt->end || lt->len < 0)
+		if (lt->start < 0 || lt->start > lt->end || lt->len < 0) {
 			break; /* sanity check */
+		}
 		if (!safeadd(12, &pos)) {
 			break;
 		}
--- a/mercurial/narrowspec.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/narrowspec.py	Wed Apr 17 13:41:18 2019 -0400
@@ -127,7 +127,7 @@
         # Passing empty include and empty exclude to matchmod.match()
         # gives a matcher that matches everything, so explicitly use
         # the nevermatcher.
-        return matchmod.never(root, '')
+        return matchmod.never()
     return matchmod.match(root, '', [], include=include or [],
                           exclude=exclude or [])
 
--- a/mercurial/obsolete.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/obsolete.py	Wed Apr 17 13:41:18 2019 -0400
@@ -743,7 +743,7 @@
                 pruned = [m for m in succsmarkers.get(current, ()) if not m[1]]
                 direct.update(pruned)
             direct -= seenmarkers
-            pendingnodes = set([m[0] for m in direct])
+            pendingnodes = {m[0] for m in direct}
             seenmarkers |= direct
             pendingnodes -= seennodes
             seennodes |= pendingnodes
--- a/mercurial/obsutil.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/obsutil.py	Wed Apr 17 13:41:18 2019 -0400
@@ -397,14 +397,17 @@
 
     This is a first and basic implementation, with many shortcomings.
     """
-    # lefctx.repo() and rightctx.repo() are the same here
-    repo = leftctx.repo()
-    diffopts = diffutil.diffallopts(repo.ui, {'git': True})
+    diffopts = diffutil.diffallopts(leftctx.repo().ui, {'git': True})
+
     # Leftctx or right ctx might be filtered, so we need to use the contexts
     # with an unfiltered repository to safely compute the diff
-    leftunfi = repo.unfiltered()[leftctx.rev()]
+
+    # leftctx and rightctx can be from different repository views in case of
+    # hgsubversion, so don't try to access them from the same repository;
+    # rightctx.repo() and leftctx.repo() are not always the same
+    leftunfi = leftctx._repo.unfiltered()[leftctx.rev()]
     leftdiff = leftunfi.diff(opts=diffopts)
-    rightunfi = repo.unfiltered()[rightctx.rev()]
+    rightunfi = rightctx._repo.unfiltered()[rightctx.rev()]
     rightdiff = rightunfi.diff(opts=diffopts)
 
     left, right = (0, 0)
--- a/mercurial/patch.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/patch.py	Wed Apr 17 13:41:18 2019 -0400
@@ -15,7 +15,6 @@
 import errno
 import hashlib
 import os
-import posixpath
 import re
 import shutil
 import zlib
@@ -363,7 +362,7 @@
         return self._ispatchinga(afile) and self._ispatchingb(bfile)
 
     def __repr__(self):
-        return "<patchmeta %s %r>" % (self.op, self.path)
+        return r"<patchmeta %s %r>" % (self.op, self.path)
 
 def readgitpatch(lr):
     """extract git-style metadata about patches from <patchname>"""
@@ -637,8 +636,8 @@
         return self.changed | self.removed
 
 # @@ -start,len +start,len @@ or @@ -start +start @@ if len is 1
-unidesc = re.compile('@@ -(\d+)(?:,(\d+))? \+(\d+)(?:,(\d+))? @@')
-contextdesc = re.compile('(?:---|\*\*\*) (\d+)(?:,(\d+))? (?:---|\*\*\*)')
+unidesc = re.compile(br'@@ -(\d+)(?:,(\d+))? \+(\d+)(?:,(\d+))? @@')
+contextdesc = re.compile(br'(?:---|\*\*\*) (\d+)(?:,(\d+))? (?:---|\*\*\*)')
 eolmodes = ['strict', 'crlf', 'lf', 'auto']
 
 class patchfile(object):
@@ -752,7 +751,7 @@
             for l in x.hunk:
                 lines.append(l)
                 if l[-1:] != '\n':
-                    lines.append("\n\ No newline at end of file\n")
+                    lines.append("\n\\ No newline at end of file\n")
         self.backend.writerej(self.fname, len(self.rej), self.hunks, lines)
 
     def apply(self, h):
@@ -864,7 +863,7 @@
     diff_re = re.compile('diff -r .* (.*)$')
     allhunks_re = re.compile('(?:index|deleted file) ')
     pretty_re = re.compile('(?:new file|deleted file) ')
-    special_re = re.compile('(?:index|deleted|copy|rename) ')
+    special_re = re.compile('(?:index|deleted|copy|rename|new mode) ')
     newfile_re = re.compile('(?:new file)')
 
     def __init__(self, header):
@@ -926,8 +925,8 @@
         # if they have some content as we want to be able to change it
         nocontent = len(self.header) == 2
         emptynewfile = self.isnewfile() and nocontent
-        return emptynewfile or \
-                any(self.special_re.match(h) for h in self.header)
+        return (emptynewfile
+                or any(self.special_re.match(h) for h in self.header))
 
 class recordhunk(object):
     """patch hunk
@@ -1013,11 +1012,13 @@
         'multiple': {
             'apply': _("apply change %d/%d to '%s'?"),
             'discard': _("discard change %d/%d to '%s'?"),
+            'keep': _("keep change %d/%d to '%s'?"),
             'record': _("record change %d/%d to '%s'?"),
         },
         'single': {
             'apply': _("apply this change to '%s'?"),
             'discard': _("discard this change to '%s'?"),
+            'keep': _("keep this change to '%s'?"),
             'record': _("record this change to '%s'?"),
         },
         'help': {
@@ -1041,6 +1042,16 @@
                          '$$ Discard &all changes to all remaining files'
                          '$$ &Quit, discarding no changes'
                          '$$ &? (display help)'),
+            'keep': _('[Ynesfdaq?]'
+                         '$$ &Yes, keep this change'
+                         '$$ &No, skip this change'
+                         '$$ &Edit this change manually'
+                         '$$ &Skip remaining changes to this file'
+                         '$$ Keep remaining changes to this &file'
+                         '$$ &Done, skip remaining changes and files'
+                         '$$ Keep &all changes to all remaining files'
+                         '$$ &Quit, keeping all changes'
+                         '$$ &? (display help)'),
             'record': _('[Ynesfdaq?]'
                         '$$ &Yes, record this change'
                         '$$ &No, skip this change'
@@ -1054,7 +1065,7 @@
         }
     }
 
-def filterpatch(ui, headers, operation=None):
+def filterpatch(ui, headers, match, operation=None):
     """Interactively filter patch chunks into applied-only chunks"""
     messages = getmessages()
 
@@ -1118,7 +1129,8 @@
                     f = util.nativeeolwriter(os.fdopen(patchfd, r'wb'))
                     chunk.header.write(f)
                     chunk.write(f)
-                    f.write('\n'.join(['# ' + i for i in phelp.splitlines()]))
+                    f.write(''.join(['# ' + i + '\n'
+                                     for i in phelp.splitlines()]))
                     f.close()
                     # Start the editor and wait for it to complete
                     editor = ui.geteditor()
@@ -1170,9 +1182,13 @@
         seen.add(hdr)
         if skipall is None:
             h.pretty(ui)
+        files = h.files()
         msg = (_('examine changes to %s?') %
-               _(' and ').join("'%s'" % f for f in h.files()))
-        r, skipfile, skipall, np = prompt(skipfile, skipall, msg, None)
+               _(' and ').join("'%s'" % f for f in files))
+        if all(match.exact(f) for f in files):
+            r, skipall, np = True, None, None
+        else:
+            r, skipfile, skipall, np = prompt(skipfile, skipall, msg, None)
         if not r:
             continue
         applied[h.filename()] = [h]
@@ -1304,7 +1320,7 @@
             self.hunk.append(u)
 
         l = lr.readline()
-        if l.startswith('\ '):
+        if l.startswith(br'\ '):
             s = self.a[-1][:-1]
             self.a[-1] = s
             self.hunk[-1] = s
@@ -1322,7 +1338,7 @@
         hunki = 1
         for x in pycompat.xrange(self.lenb):
             l = lr.readline()
-            if l.startswith('\ '):
+            if l.startswith(br'\ '):
                 # XXX: the only way to hit this is with an invalid line range.
                 # The no-eol marker is not counted in the line range, but I
                 # guess there are diff(1) out there which behave differently.
@@ -1379,7 +1395,7 @@
 
     def _fixnewline(self, lr):
         l = lr.readline()
-        if l.startswith('\ '):
+        if l.startswith(br'\ '):
             diffhelper.fixnewline(self.hunk, self.a, self.b)
         else:
             lr.push(l)
@@ -1448,7 +1464,6 @@
             hunk.append(l)
             return l.rstrip('\r\n')
 
-        size = 0
         while True:
             line = getline(lr, self.hunk)
             if not line:
@@ -1610,6 +1625,7 @@
             self.headers = []
 
         def addrange(self, limits):
+            self.addcontext([])
             fromstart, fromend, tostart, toend, proc = limits
             self.fromline = int(fromstart)
             self.toline = int(tostart)
@@ -1630,6 +1646,8 @@
             if self.context:
                 self.before = self.context
                 self.context = []
+            if self.hunk:
+                self.addcontext([])
             self.hunk = hunk
 
         def newfile(self, hdr):
@@ -1903,7 +1921,6 @@
             if not gitpatches:
                 raise PatchError(_('failed to synchronize metadata for "%s"')
                                  % afile[2:])
-            gp = gitpatches[-1]
             newfile = True
         elif x.startswith('---'):
             # check for a unified diff
@@ -2238,8 +2255,8 @@
 difffeatureopts = diffutil.difffeatureopts
 
 def diff(repo, node1=None, node2=None, match=None, changes=None,
-         opts=None, losedatafn=None, prefix='', relroot='', copy=None,
-         hunksfilterfn=None):
+         opts=None, losedatafn=None, pathfn=None, copy=None,
+         copysourcematch=None, hunksfilterfn=None):
     '''yields diff of changes to files between two nodes, or node and
     working directory.
 
@@ -2263,20 +2280,28 @@
     copy, if not empty, should contain mappings {dst@y: src@x} of copy
     information.
 
+    if copysourcematch is not None, then copy sources will be filtered by this
+    matcher
+
     hunksfilterfn, if not None, should be a function taking a filectx and
     hunks generator that may yield filtered hunks.
     '''
+    if not node1 and not node2:
+        node1 = repo.dirstate.p1()
+
+    ctx1 = repo[node1]
+    ctx2 = repo[node2]
+
     for fctx1, fctx2, hdr, hunks in diffhunks(
-            repo, node1=node1, node2=node2,
-            match=match, changes=changes, opts=opts,
-            losedatafn=losedatafn, prefix=prefix, relroot=relroot, copy=copy,
-    ):
+            repo, ctx1=ctx1, ctx2=ctx2, match=match, changes=changes, opts=opts,
+            losedatafn=losedatafn, pathfn=pathfn, copy=copy,
+            copysourcematch=copysourcematch):
         if hunksfilterfn is not None:
             # If the file has been removed, fctx2 is None; but this should
             # not occur here since we catch removed files early in
             # logcmdutil.getlinerangerevs() for 'hg log -L'.
-            assert fctx2 is not None, \
-                'fctx2 unexpectly None in diff hunks filtering'
+            assert fctx2 is not None, (
+                'fctx2 unexpectedly None in diff hunks filtering')
             hunks = hunksfilterfn(fctx2, hunks)
         text = ''.join(sum((list(hlines) for hrange, hlines in hunks), []))
         if hdr and (text or len(hdr) > 1):
@@ -2284,8 +2309,8 @@
         if text:
             yield text
 
-def diffhunks(repo, node1=None, node2=None, match=None, changes=None,
-              opts=None, losedatafn=None, prefix='', relroot='', copy=None):
+def diffhunks(repo, ctx1, ctx2, match=None, changes=None, opts=None,
+              losedatafn=None, pathfn=None, copy=None, copysourcematch=None):
     """Yield diff of changes to files in the form of (`header`, `hunks`) tuples
     where `header` is a list of diff headers and `hunks` is an iterable of
     (`hunkrange`, `hunklines`) tuples.
@@ -2296,9 +2321,6 @@
     if opts is None:
         opts = mdiff.defaultopts
 
-    if not node1 and not node2:
-        node1 = repo.dirstate.p1()
-
     def lrugetfilectx():
         cache = {}
         order = collections.deque()
@@ -2315,16 +2337,6 @@
         return getfilectx
     getfilectx = lrugetfilectx()
 
-    ctx1 = repo[node1]
-    ctx2 = repo[node2]
-
-    relfiltered = False
-    if relroot != '' and match.always():
-        # as a special case, create a new matcher with just the relroot
-        pats = [relroot]
-        match = scmutil.match(ctx2, pats, default='path')
-        relfiltered = True
-
     if not changes:
         changes = ctx1.status(ctx2, match=match)
     modified, added, removed = changes[:3]
@@ -2343,21 +2355,11 @@
         if opts.git or opts.upgrade:
             copy = copies.pathcopies(ctx1, ctx2, match=match)
 
-    if relroot is not None:
-        if not relfiltered:
-            # XXX this would ideally be done in the matcher, but that is
-            # generally meant to 'or' patterns, not 'and' them. In this case we
-            # need to 'and' all the patterns from the matcher with relroot.
-            def filterrel(l):
-                return [f for f in l if f.startswith(relroot)]
-            modified = filterrel(modified)
-            added = filterrel(added)
-            removed = filterrel(removed)
-            relfiltered = True
-        # filter out copies where either side isn't inside the relative root
-        copy = dict(((dst, src) for (dst, src) in copy.iteritems()
-                     if dst.startswith(relroot)
-                     and src.startswith(relroot)))
+    if copysourcematch:
+        # filter out copies where source side isn't inside the matcher
+        # (copies.pathcopies() already filtered out the destination)
+        copy = {dst: src for dst, src in copy.iteritems()
+                if copysourcematch(src)}
 
     modifiedset = set(modified)
     addedset = set(added)
@@ -2388,7 +2390,7 @@
 
     def difffn(opts, losedata):
         return trydiff(repo, revs, ctx1, ctx2, modified, added, removed,
-                       copy, getfilectx, opts, losedata, prefix, relroot)
+                       copy, getfilectx, opts, losedata, pathfn)
     if opts.upgrade and not opts.git:
         try:
             def losedata(fn):
@@ -2603,16 +2605,14 @@
         yield f1, f2, copyop
 
 def trydiff(repo, revs, ctx1, ctx2, modified, added, removed,
-            copy, getfilectx, opts, losedatafn, prefix, relroot):
+            copy, getfilectx, opts, losedatafn, pathfn):
     '''given input data, generate a diff and yield it in blocks
 
     If generating a diff would lose data like flags or binary data and
     losedatafn is not None, it will be called.
 
-    relroot is removed and prefix is added to every path in the diff output.
-
-    If relroot is not empty, this function expects every path in modified,
-    added, removed and copy to start with it.'''
+    pathfn is applied to every path in the diff output.
+    '''
 
     def gitindex(text):
         if not text:
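
The pathfn parameter replaces the old prefix/relroot pair. A caller that
still wants the old rewriting can pass a one-line adapter; a sketch with
hypothetical prefix and relroot values (the expression mirrors the removed
posixpath.join(prefix, path[len(relroot):]) logic)::

    import posixpath

    prefix, relroot = 'sub/', 'dir/'  # hypothetical values
    pathfn = lambda f: posixpath.join(prefix, f[len(relroot):])

    pathfn('dir/a.c')  # 'sub/a.c'
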
@@ -2640,12 +2640,8 @@
 
     gitmode = {'l': '120000', 'x': '100755', '': '100644'}
 
-    if relroot != '' and (repo.ui.configbool('devel', 'all-warnings')
-                          or repo.ui.configbool('devel', 'check-relroot')):
-        for f in modified + added + removed + list(copy) + list(copy.values()):
-            if f is not None and not f.startswith(relroot):
-                raise AssertionError(
-                    "file %s doesn't start with relroot %s" % (f, relroot))
+    if not pathfn:
+        pathfn = lambda f: f
 
     for f1, f2, copyop in _filepairs(modified, added, removed, copy, opts):
         content1 = None
@@ -2682,10 +2678,8 @@
                 (f1 and f2 and flag1 != flag2)):
                 losedatafn(f2 or f1)
 
-        path1 = f1 or f2
-        path2 = f2 or f1
-        path1 = posixpath.join(prefix, path1[len(relroot):])
-        path2 = posixpath.join(prefix, path2[len(relroot):])
+        path1 = pathfn(f1 or f2)
+        path2 = pathfn(f2 or f1)
         header = []
         if opts.git:
             header.append('diff --git %s%s %s%s' %
@@ -2705,7 +2699,7 @@
                         header.append('similarity index %d%%' % sim)
                     header.append('%s from %s' % (copyop, path1))
                     header.append('%s to %s' % (copyop, path2))
-        elif revs and not repo.ui.quiet:
+        elif revs:
             header.append(diffline(path1, revs))
 
         #  fctx.is  | diffopts                | what to   | is fctx.data()
@@ -2773,7 +2767,7 @@
     return maxfile, maxtotal, addtotal, removetotal, binary
 
 def diffstatdata(lines):
-    diffre = re.compile('^diff .*-r [a-z0-9]+\s(.*)$')
+    diffre = re.compile(br'^diff .*-r [a-z0-9]+\s(.*)$')
 
     results = []
     filename, adds, removes, isbinary = None, 0, 0, False
@@ -2808,6 +2802,10 @@
         elif (line.startswith('GIT binary patch') or
               line.startswith('Binary file')):
             isbinary = True
+        elif line.startswith('rename from'):
+            filename = line[12:]
+        elif line.startswith('rename to'):
+            filename += ' => %s' % line[10:]
     addresult()
     return results
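
The two new branches above make diffstat label a git-style rename as
"old => new". A standalone walk-through of just that slicing logic::

    filename = None
    for line in ['rename from old/name.py', 'rename to new/name.py']:
        if line.startswith('rename from'):
            filename = line[12:]               # len('rename from ') == 12
        elif line.startswith('rename to'):
            filename += ' => %s' % line[10:]   # len('rename to ') == 10

    print(filename)  # old/name.py => new/name.py
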
 
--- a/mercurial/posix.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/posix.py	Wed Apr 17 13:41:18 2019 -0400
@@ -575,15 +575,16 @@
     if gid is None:
         gid = os.getgid()
     try:
-        return grp.getgrgid(gid)[0]
+        return pycompat.fsencode(grp.getgrgid(gid)[0])
     except KeyError:
-        return str(gid)
+        return pycompat.bytestr(gid)
 
 def groupmembers(name):
     """Return the list of members of the group with the given
     name, KeyError if the group does not exist.
     """
-    return list(grp.getgrnam(name).gr_mem)
+    name = pycompat.fsdecode(name)
+    return pycompat.rapply(pycompat.fsencode, list(grp.getgrnam(name).gr_mem))
 
 def spawndetached(args):
     return os.spawnvp(os.P_NOWAIT | getattr(os, 'P_DETACH', 0),
--- a/mercurial/rcutil.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/rcutil.py	Wed Apr 17 13:41:18 2019 -0400
@@ -29,7 +29,8 @@
     p = util.expandpath(path)
     if os.path.isdir(p):
         join = os.path.join
-        return [join(p, f) for f, k in util.listdir(p) if f.endswith('.rc')]
+        return sorted(join(p, f) for f, k in util.listdir(p)
+                      if f.endswith('.rc'))
     return [p]
 
 def envrcitems(env=None):
--- a/mercurial/repair.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/repair.py	Wed Apr 17 13:41:18 2019 -0400
@@ -252,6 +252,24 @@
     # extensions can use it
     return backupfile
 
+def softstrip(ui, repo, nodelist, backup=True, topic='backup'):
+    """perform a "soft" strip using the archived phase"""
+    tostrip = [c.node() for c in repo.set('sort(%ln::)', nodelist)]
+    if not tostrip:
+        return None
+
+    newbmtarget, updatebm = _bookmarkmovements(repo, tostrip)
+    if backup:
+        node = tostrip[0]
+        backupfile = _createstripbackup(repo, tostrip, node, topic)
+
+    with repo.transaction('strip') as tr:
+        phases.retractboundary(repo, tr, phases.archived, tostrip)
+        bmchanges = [(m, repo[newbmtarget].node()) for m in updatebm]
+        repo._bookmarks.applychanges(repo, tr, bmchanges)
+    return backupfile
+
+
 def _bookmarkmovements(repo, tostrip):
     # compute necessary bookmark movement
     bm = repo._bookmarks
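
A hypothetical caller of the new helper could look like this (assumes `ui`
and `repo` objects for a repository whose format supports the archived
phase; the node selection is invented for illustration)::

    from mercurial import repair

    nodes = [repo['tip'].node()]
    backupfile = repair.softstrip(ui, repo, nodes, backup=True)
    if backupfile:
        ui.status('archived; backup bundle written to %s\n' % backupfile)
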
--- a/mercurial/repository.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/repository.py	Wed Apr 17 13:41:18 2019 -0400
@@ -346,8 +346,8 @@
             return
 
         raise error.CapabilityError(
-            _('cannot %s; remote repository does not support the %r '
-              'capability') % (purpose, name))
+            _('cannot %s; remote repository does not support the '
+              '\'%s\' capability') % (purpose, name))
 
 class iverifyproblem(interfaceutil.Interface):
     """Represents a problem with the integrity of the repository.
--- a/mercurial/repoview.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/repoview.py	Wed Apr 17 13:41:18 2019 -0400
@@ -25,9 +25,9 @@
     This is a standalone function to allow extensions to wrap it.
 
     Because we use the set of immutable changesets as a fallback subset in
-    branchmap (see mercurial.branchmap.subsettable), you cannot set "public"
-    changesets as "hideable". Doing so would break multiple code assertions and
-    lead to crashes."""
+    branchmap (see mercurial.utils.repoviewutil.subsettable), you cannot set
+    "public" changesets as "hideable". Doing so would break multiple code
+    assertions and lead to crashes."""
     obsoletes = obsolete.getrevs(repo, 'obsolete')
     internals = repo._phasecache.getrevset(repo, phases.localhiddenphases)
     internals = frozenset(internals)
@@ -86,6 +86,14 @@
         _revealancestors(pfunc, hidden, visible)
     return frozenset(hidden)
 
+def computesecret(repo, visibilityexceptions=None):
+    """compute the set of revisions that can never be exposed through hgweb
+
+    Changesets in the secret phase (or above) should stay inaccessible."""
+    assert not repo.changelog.filteredrevs
+    secrets = repo._phasecache.getrevset(repo, phases.remotehiddenphases)
+    return frozenset(secrets)
+
 def computeunserved(repo, visibilityexceptions=None):
     """compute the set of revisions that should be filtered when used as a server
 
@@ -93,9 +101,9 @@
     assert not repo.changelog.filteredrevs
     # fast path in simple case to avoid impact of non optimised code
     hiddens = filterrevs(repo, 'visible')
-    if phases.hassecret(repo):
-        secrets = repo._phasecache.getrevset(repo, phases.remotehiddenphases)
-        return frozenset(hiddens | frozenset(secrets))
+    secrets = filterrevs(repo, 'served.hidden')
+    if secrets:
+        return frozenset(hiddens | secrets)
     else:
         return hiddens
 
@@ -136,11 +144,12 @@
 # function to compute filtered set
 #
 # When adding a new filter you MUST update the table at:
-#     mercurial.branchmap.subsettable
+#     mercurial.utils.repoviewutil.subsettable
 # Otherwise your filter will have to recompute all its branches cache
 # from scratch (very slow).
 filtertable = {'visible': computehidden,
                'visible-hidden': computehidden,
+               'served.hidden': computesecret,
                'served': computeunserved,
                'immutable':  computemutable,
                'base':  computeimpactable}
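
With the new table entry, the secret-only filter can be queried like any
other repoview; a sketch assuming a loaded `repo` object::

    from mercurial import repoview

    # Revisions that must never be exposed through hgweb:
    secretrevs = repoview.filterrevs(repo, 'served.hidden')
    # View with both hidden and secret changesets filtered out:
    served = repo.filtered('served')
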
--- a/mercurial/revlog.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/revlog.py	Wed Apr 17 13:41:18 2019 -0400
@@ -371,6 +371,7 @@
         self._nodecache = {nullid: nullrev}
         self._nodepos = None
         self._compengine = 'zlib'
+        self._compengineopts = {}
         self._maxdeltachainspan = -1
         self._withsparseread = False
         self._sparserevlog = False
@@ -410,9 +411,16 @@
             self._maxchainlen = opts['maxchainlen']
         if 'deltabothparents' in opts:
             self._deltabothparents = opts['deltabothparents']
-        self._lazydeltabase = bool(opts.get('lazydeltabase', False))
+        self._lazydelta = bool(opts.get('lazydelta', True))
+        self._lazydeltabase = False
+        if self._lazydelta:
+            self._lazydeltabase = bool(opts.get('lazydeltabase', False))
         if 'compengine' in opts:
             self._compengine = opts['compengine']
+        if 'zlib.level' in opts:
+            self._compengineopts['zlib.level'] = opts['zlib.level']
+        if 'zstd.level' in opts:
+            self._compengineopts['zstd.level'] = opts['zstd.level']
         if 'maxdeltachainspan' in opts:
             self._maxdeltachainspan = opts['maxdeltachainspan']
         if self._mmaplargeindex and 'mmapindexthreshold' in opts:
@@ -523,7 +531,8 @@
 
     @util.propertycache
     def _compressor(self):
-        return util.compengines[self._compengine].revlogcompressor()
+        engine = util.compengines[self._compengine]
+        return engine.revlogcompressor(self._compengineopts)
 
     def _indexfp(self, mode='r'):
         """file object for the revlog's index file"""
@@ -610,6 +619,9 @@
         self._pcache = {}
 
         try:
+            # If we are using the native C version, you are in a fun case
+            # where self.index, self.nodemap and self._nodecache are the same
+            # object.
             self._nodecache.clearcaches()
         except AttributeError:
             self._nodecache = {nullid: nullrev}
@@ -1118,7 +1130,9 @@
                 return self.index.headrevs()
             except AttributeError:
                 return self._headrevs()
-        return dagop.headrevs(revs, self.parentrevs)
+        if rustext is not None:
+            return rustext.dagop.headrevs(self.index, revs)
+        return dagop.headrevs(revs, self._uncheckedparentrevs)
 
     def computephases(self, roots):
         return self.index.computephasesmapsets(roots)
@@ -1337,7 +1351,7 @@
             return True
 
         def maybewdir(prefix):
-            return all(c == 'f' for c in prefix)
+            return all(c == 'f' for c in pycompat.iterbytestr(prefix))
 
         hexnode = hex(node)
 
@@ -1973,7 +1987,7 @@
         except KeyError:
             try:
                 engine = util.compengines.forrevlogheader(t)
-                compressor = engine.revlogcompressor()
+                compressor = engine.revlogcompressor(self._compengineopts)
                 self._decompressors[t] = compressor
             except KeyError:
                 raise error.RevlogError(_('unknown compression type %r') % t)
@@ -2264,6 +2278,14 @@
         self._nodepos = None
 
     def checksize(self):
+        """Check size of index and data files
+
+        return a (dd, di) tuple.
+        - dd: extra bytes for the "data" file
+        - di: extra bytes for the "index" file
+
+        A healthy revlog will return (0, 0).
+        """
         expected = 0
         if len(self):
             expected = max(0, self.end(len(self) - 1))
@@ -2388,21 +2410,25 @@
         if getattr(destrevlog, 'filteredrevs', None):
             raise ValueError(_('destination revlog has filtered revisions'))
 
-        # lazydeltabase controls whether to reuse a cached delta, if possible.
+        # lazydelta and lazydeltabase controls whether to reuse a cached delta,
+        # if possible.
+        oldlazydelta = destrevlog._lazydelta
         oldlazydeltabase = destrevlog._lazydeltabase
         oldamd = destrevlog._deltabothparents
 
         try:
             if deltareuse == self.DELTAREUSEALWAYS:
                 destrevlog._lazydeltabase = True
+                destrevlog._lazydelta = True
             elif deltareuse == self.DELTAREUSESAMEREVS:
                 destrevlog._lazydeltabase = False
+                destrevlog._lazydelta = True
+            elif deltareuse == self.DELTAREUSENEVER:
+                destrevlog._lazydeltabase = False
+                destrevlog._lazydelta = False
 
             destrevlog._deltabothparents = forcedeltabothparents or oldamd
 
-            populatecachedelta = deltareuse in (self.DELTAREUSEALWAYS,
-                                                self.DELTAREUSESAMEREVS)
-
             deltacomputer = deltautil.deltacomputer(destrevlog)
             index = self.index
             for rev in self:
@@ -2420,7 +2446,7 @@
                 # the revlog chunk is a delta.
                 cachedelta = None
                 rawtext = None
-                if populatecachedelta:
+                if destrevlog._lazydelta:
                     dp = self.deltaparent(rev)
                     if dp != nullrev:
                         cachedelta = (dp, bytes(self._chunk(rev)))
@@ -2452,6 +2478,7 @@
                 if addrevisioncb:
                     addrevisioncb(self, rev, node)
         finally:
+            destrevlog._lazydelta = oldlazydelta
             destrevlog._lazydeltabase = oldlazydeltabase
             destrevlog._deltabothparents = oldamd
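
Given the checksize() contract documented earlier in this file's hunks, a
consistency probe over an open revlog instance `rl` could read::

    dd, di = rl.checksize()
    if (dd, di) != (0, 0):
        raise RuntimeError('revlog damaged: %d stray data bytes, '
                           '%d stray index bytes' % (dd, di))
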
 
--- a/mercurial/revlogutils/deltas.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/revlogutils/deltas.py	Wed Apr 17 13:41:18 2019 -0400
@@ -637,7 +637,7 @@
 
     deltas_limit = textlen * LIMIT_DELTA2TEXT
 
-    tested = set([nullrev])
+    tested = {nullrev}
     candidates = _refinedgroups(revlog, p1, p2, cachedelta)
     while True:
         temptative = candidates.send(good)
@@ -916,7 +916,7 @@
                     and currentbase != base
                     and self.revlog.length(currentbase) == 0):
                 currentbase = self.revlog.deltaparent(currentbase)
-            if currentbase == base:
+            if self.revlog._lazydelta and currentbase == base:
                 delta = revinfo.cachedelta[1]
         if delta is None:
             delta = self._builddeltadiff(base, revinfo, fh)
--- a/mercurial/revset.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/revset.py	Wed Apr 17 13:41:18 2019 -0400
@@ -43,7 +43,7 @@
 getinteger = revsetlang.getinteger
 getboolean = revsetlang.getboolean
 getlist = revsetlang.getlist
-getrange = revsetlang.getrange
+getintrange = revsetlang.getintrange
 getargs = revsetlang.getargs
 getargsdict = revsetlang.getargsdict
 
@@ -225,24 +225,70 @@
 def relationset(repo, subset, x, y, order):
     raise error.ParseError(_("can't use a relation in this context"))
 
-def generationsrel(repo, subset, x, rel, n, order):
-    # TODO: support range, rewrite tests, and drop startdepth argument
-    # from ancestors() and descendants() predicates
-    if n <= 0:
-        n = -n
-        return _ancestors(repo, subset, x, startdepth=n, stopdepth=n + 1)
-    else:
-        return _descendants(repo, subset, x, startdepth=n, stopdepth=n + 1)
+def _splitrange(a, b):
+    """Split range with bounds a and b into two ranges at 0 and return two
+    tuples of numbers for use as startdepth and stopdepth arguments of
+    revancestors and revdescendants.
+
+    >>> _splitrange(-10, -5)     # [-10:-5]
+    ((5, 11), (None, None))
+    >>> _splitrange(5, 10)       # [5:10]
+    ((None, None), (5, 11))
+    >>> _splitrange(-10, 10)     # [-10:10]
+    ((0, 11), (0, 11))
+    >>> _splitrange(-10, 0)      # [-10:0]
+    ((0, 11), (None, None))
+    >>> _splitrange(0, 10)       # [0:10]
+    ((None, None), (0, 11))
+    >>> _splitrange(0, 0)        # [0:0]
+    ((0, 1), (None, None))
+    >>> _splitrange(1, -1)       # [1:-1]
+    ((None, None), (None, None))
+    """
+    ancdepths = (None, None)
+    descdepths = (None, None)
+    if a == b == 0:
+        ancdepths = (0, 1)
+    if a < 0:
+        ancdepths = (-min(b, 0), -a + 1)
+    if b > 0:
+        descdepths = (max(a, 0), b + 1)
+    return ancdepths, descdepths
+
+def generationsrel(repo, subset, x, rel, z, order):
+    # TODO: rewrite tests, and drop startdepth argument from ancestors() and
+    # descendants() predicates
+    a, b = getintrange(z,
+                       _('relation subscript must be an integer or a range'),
+                       _('relation subscript bounds must be integers'),
+                       deffirst=-(dagop.maxlogdepth - 1),
+                       deflast=+(dagop.maxlogdepth - 1))
+    (ancstart, ancstop), (descstart, descstop) = _splitrange(a, b)
+
+    if ancstart is None and descstart is None:
+        return baseset()
+
+    revs = getset(repo, fullreposet(repo), x)
+    if not revs:
+        return baseset()
+
+    if ancstart is not None and descstart is not None:
+        s = dagop.revancestors(repo, revs, False, ancstart, ancstop)
+        s += dagop.revdescendants(repo, revs, False, descstart, descstop)
+    elif ancstart is not None:
+        s = dagop.revancestors(repo, revs, False, ancstart, ancstop)
+    elif descstart is not None:
+        s = dagop.revdescendants(repo, revs, False, descstart, descstop)
+
+    return subset & s
 
 def relsubscriptset(repo, subset, x, y, z, order):
     # this is pretty basic implementation of 'x#y[z]' operator, still
     # experimental so undocumented. see the wiki for further ideas.
     # https://www.mercurial-scm.org/wiki/RevsetOperatorPlan
     rel = getsymbol(y)
-    n = getinteger(z, _("relation subscript must be an integer"))
-
     if rel in subscriptrelations:
-        return subscriptrelations[rel](repo, subset, x, rel, n, order)
+        return subscriptrelations[rel](repo, subset, x, rel, z, order)
 
     relnames = [r for r in subscriptrelations.keys() if len(r) > 1]
     raise error.UnknownIdentifier(rel, relnames)
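
Together with _splitrange() above, the subscript relation now accepts
ranges. For instance, assuming a `repo` object and the experimental `#`
operator::

    # '.', its parents and grandparents (generation depths 0 through 2):
    revs = repo.revs('.#generations[-2:0]')
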
@@ -412,7 +458,7 @@
             try:
                 r = cl.parentrevs(r)[0]
             except error.WdirUnsupported:
-                r = repo[r].parents()[0].rev()
+                r = repo[r].p1().rev()
         ps.add(r)
     return subset & ps
 
@@ -509,7 +555,7 @@
         if kind == 'literal':
             # note: falls through to the revspec case if no branch with
             # this name exists and pattern kind is not specified explicitly
-            if pattern in repo.branchmap():
+            if repo.branchmap().hasbranch(pattern):
                 return subset.filter(lambda r: matcher(getbranch(r)),
                                      condrepr=('<branch %r>', b))
             if b.startswith('literal:'):
@@ -552,6 +598,12 @@
     return subset & bundlerevs
 
 def checkstatus(repo, subset, pat, field):
+    """Helper for status-related revsets (adds, removes, modifies).
+    The field parameter says which kind is desired:
+    0: modified
+    1: added
+    2: removed
+    """
     hasset = matchmod.patkind(pat) == 'set'
 
     mcache = [None]
@@ -815,6 +867,43 @@
     contentdivergent = obsmod.getrevs(repo, 'contentdivergent')
     return subset & contentdivergent
 
+@predicate('expectsize(set[, size])', safe=True, takeorder=True)
+def expectsize(repo, subset, x, order):
+    """Return the given revset if its size matches the given size.
+    Abort if the size of the revset doesn't match the given size.
+    size can either be an integer range or an integer.
+
+    For example, ``expectsize(0:1, 3:5)`` will abort as revset size is 2 and
+    2 is not between 3 and 5 inclusive."""
+
+    args = getargsdict(x, 'expectsize', 'set size')
+    minsize = 0
+    maxsize = len(repo) + 1
+    err = ''
+    if 'size' not in args or 'set' not in args:
+        raise error.ParseError(_('invalid set of arguments'))
+    minsize, maxsize = getintrange(args['size'],
+                                   _('expectsize requires a size range'
+                                     ' or a positive integer'),
+                                   _('size range bounds must be integers'),
+                                   minsize, maxsize)
+    if minsize < 0 or maxsize < 0:
+        raise error.ParseError(_('negative size'))
+    rev = getset(repo, fullreposet(repo), args['set'], order=order)
+    if minsize != maxsize and (len(rev) < minsize or len(rev) > maxsize):
+        err = _('revset size mismatch.'
+                ' expected between %d and %d, got %d') % (minsize, maxsize,
+                                                          len(rev))
+    elif minsize == maxsize and len(rev) != minsize:
+        err = _('revset size mismatch.'
+                ' expected %d, got %d') % (minsize, len(rev))
+    if err:
+        raise error.RepoLookupError(err)
+    if order == followorder:
+        return subset & rev
+    else:
+        return rev & subset
+
 @predicate('extdata(source)', safe=False, weight=100)
 def extdata(repo, subset, x):
     """Changesets in the specified extdata source. (EXPERIMENTAL)"""
@@ -877,9 +966,6 @@
     The pattern without explicit kind like ``glob:`` is expected to be
     relative to the current directory and match against a file exactly
     for efficiency.
-
-    If some linkrev points to revisions filtered by the current repoview, we'll
-    work around it to return a non-filtered value.
     """
 
     # i18n: "filelog" is a keyword
@@ -1008,11 +1094,11 @@
     # i18n: "followlines" is a keyword
     msg = _("followlines expects exactly one file")
     fname = scmutil.parsefollowlinespattern(repo, rev, pat, msg)
-    # i18n: "followlines" is a keyword
-    lr = getrange(args['lines'][0], _("followlines expects a line range"))
-    fromline, toline = [getinteger(a, _("line range bounds must be integers"))
-                        for a in lr]
-    fromline, toline = util.processlinerange(fromline, toline)
+    fromline, toline = util.processlinerange(
+        *getintrange(args['lines'][0],
+                     # i18n: "followlines" is a keyword
+                     _("followlines expects a line number or a range"),
+                     _("line range bounds must be integers")))
 
     fctx = repo[rev].filectx(fname)
     descend = False
@@ -1157,7 +1243,7 @@
     getargs(x, 0, 0, _("head takes no arguments"))
     hs = set()
     cl = repo.changelog
-    for ls in repo.branchmap().itervalues():
+    for ls in repo.branchmap().iterheads():
         hs.update(cl.rev(h) for h in ls)
     return subset & baseset(hs)
 
@@ -1513,7 +1599,7 @@
         try:
             ps.add(cl.parentrevs(r)[0])
         except error.WdirUnsupported:
-            ps.add(repo[r].parents()[0].rev())
+            ps.add(repo[r].p1().rev())
     ps -= {node.nullrev}
     # XXX we should turn this into a baseset instead of a set, smartset may do
     # some optimizations from the fact this is a baseset.
@@ -1632,7 +1718,7 @@
             try:
                 ps.add(cl.parentrevs(r)[0])
             except error.WdirUnsupported:
-                ps.add(repo[r].parents()[0].rev())
+                ps.add(repo[r].p1().rev())
         else:
             try:
                 parents = cl.parentrevs(r)
@@ -2027,7 +2113,7 @@
     if len(args) != 0:
         pat = getstring(args[0], _("subrepo requires a pattern"))
 
-    m = matchmod.exact(repo.root, repo.root, ['.hgsubstate'])
+    m = matchmod.exact(['.hgsubstate'])
 
     def submatches(names):
         k, p, m = stringutil.stringmatcher(pat)
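
The new predicate is usable anywhere a revset is accepted; for example, a
script that must resolve to exactly one revision can guard itself with it
(assumes a `repo` object)::

    # Raises RepoLookupError unless '.' has exactly one child:
    [onlychild] = repo.revs('expectsize(children(.), 1)')
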
--- a/mercurial/revsetlang.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/revsetlang.py	Wed Apr 17 13:41:18 2019 -0400
@@ -62,8 +62,8 @@
 
 # default set of valid characters for the initial letter of symbols
 _syminitletters = set(pycompat.iterbytestr(
-    string.ascii_letters.encode('ascii') +
-    string.digits.encode('ascii') +
+    pycompat.sysbytes(string.ascii_letters) +
+    pycompat.sysbytes(string.digits) +
     '._@')) | set(map(pycompat.bytechr, pycompat.xrange(128, 256)))
 
 # default set of valid characters for non-initial letters of symbols
@@ -240,6 +240,18 @@
         return None, None
     raise error.ParseError(err)
 
+def getintrange(x, err1, err2, deffirst=_notset, deflast=_notset):
+    """Get [first, last] integer range (both inclusive) from a parsed tree
+
+    If either side is omitted and no corresponding default is provided, a
+    ParseError is raised.
+    """
+    if x and (x[0] == 'string' or x[0] == 'symbol'):
+        n = getinteger(x, err1)
+        return n, n
+    a, b = getrange(x, err1)
+    return getinteger(a, err2, deffirst), getinteger(b, err2, deflast)
+
 def getargs(x, min, max, err):
     l = getlist(x)
     if len(l) < min or (max >= 0 and len(l) > max):
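
For reference, the parsed-tree shapes getintrange() accepts look roughly
like this (trees normally come from the revset parser; these are hand-built
for illustration only)::

    single = ('symbol', '4')                              # "4"   -> (4, 4)
    full   = ('range', ('symbol', '2'), ('symbol', '6'))  # "2:6" -> (2, 6)
    head   = ('rangepre', ('symbol', '6'))                # ":6"  -> (deffirst, 6)
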
--- a/mercurial/scmutil.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/scmutil.py	Wed Apr 17 13:41:18 2019 -0400
@@ -11,6 +11,7 @@
 import glob
 import hashlib
 import os
+import posixpath
 import re
 import subprocess
 import weakref
@@ -27,6 +28,7 @@
 )
 
 from . import (
+    copies as copiesmod,
     encoding,
     error,
     match as matchmod,
@@ -231,10 +233,10 @@
             ui.error(_("(did you forget to compile extensions?)\n"))
         elif m in "zlib".split():
             ui.error(_("(is your Python install correct?)\n"))
-    except IOError as inst:
-        if util.safehasattr(inst, "code"):
+    except (IOError, OSError) as inst:
+        if util.safehasattr(inst, "code"): # HTTPError
             ui.error(_("abort: %s\n") % stringutil.forcebytestr(inst))
-        elif util.safehasattr(inst, "reason"):
+        elif util.safehasattr(inst, "reason"): # URLError or SSLError
             try: # usually it is in the form (errno, strerror)
                 reason = inst.reason.args[1]
             except (AttributeError, IndexError):
@@ -247,22 +249,15 @@
         elif (util.safehasattr(inst, "args")
               and inst.args and inst.args[0] == errno.EPIPE):
             pass
-        elif getattr(inst, "strerror", None):
-            if getattr(inst, "filename", None):
-                ui.error(_("abort: %s: %s\n") % (
+        elif getattr(inst, "strerror", None): # common IOError or OSError
+            if getattr(inst, "filename", None) is not None:
+                ui.error(_("abort: %s: '%s'\n") % (
                     encoding.strtolocal(inst.strerror),
                     stringutil.forcebytestr(inst.filename)))
             else:
                 ui.error(_("abort: %s\n") % encoding.strtolocal(inst.strerror))
-        else:
+        else: # suspicious IOError
             raise
-    except OSError as inst:
-        if getattr(inst, "filename", None) is not None:
-            ui.error(_("abort: %s: '%s'\n") % (
-                encoding.strtolocal(inst.strerror),
-                stringutil.forcebytestr(inst.filename)))
-        else:
-            ui.error(_("abort: %s\n") % encoding.strtolocal(inst.strerror))
     except MemoryError:
         ui.error(_("abort: out of memory\n"))
     except SystemExit as inst:
@@ -673,19 +668,11 @@
     l = revrange(repo, revs)
 
     if not l:
-        first = second = None
-    elif l.isascending():
-        first = l.min()
-        second = l.max()
-    elif l.isdescending():
-        first = l.max()
-        second = l.min()
-    else:
-        first = l.first()
-        second = l.last()
+        raise error.Abort(_('empty revision range'))
 
-    if first is None:
-        raise error.Abort(_('empty revision range'))
+    first = l.first()
+    second = l.last()
+
     if (first == second and len(revs) >= 2
         and not all(revrange(repo, [r]) for r in revs)):
         raise error.Abort(_('empty revision on one side of range'))
@@ -740,6 +727,53 @@
         return []
     return parents
 
+def getuipathfn(repo, legacyrelativevalue=False, forcerelativevalue=None):
+    """Return a function that produces paths for presenting to the user.
+
+    The returned function takes a repo-relative path and produces a path
+    that can be presented in the UI.
+
+    Depending on the value of ui.relative-paths, either a repo-relative or
+    cwd-relative path will be produced.
+
+    legacyrelativevalue is the value to use if ui.relative-paths=legacy
+
+    If forcerelativevalue is not None, then that value will be used regardless
+    of what ui.relative-paths is set to.
+    """
+    if forcerelativevalue is not None:
+        relative = forcerelativevalue
+    else:
+        config = repo.ui.config('ui', 'relative-paths')
+        if config == 'legacy':
+            relative = legacyrelativevalue
+        else:
+            relative = stringutil.parsebool(config)
+            if relative is None:
+                raise error.ConfigError(
+                    _("ui.relative-paths is not a boolean ('%s')") % config)
+
+    if relative:
+        cwd = repo.getcwd()
+        pathto = repo.pathto
+        return lambda f: pathto(f, cwd)
+    elif repo.ui.configbool('ui', 'slash'):
+        return lambda f: f
+    else:
+        return util.localpath
+
+def subdiruipathfn(subpath, uipathfn):
+    '''Create a new uipathfn that treats the file as relative to subpath.'''
+    return lambda f: uipathfn(posixpath.join(subpath, f))
+
+def anypats(pats, opts):
+    '''Checks if any patterns, including --include and --exclude, were given.
+
+    Some commands (e.g. addremove) use this condition for deciding whether to
+    print absolute or relative paths.
+    '''
+    return bool(pats or opts.get('include') or opts.get('exclude'))
+
 def expandpats(pats):
     '''Expand bare globs when running on windows.
     On posix we assume it has already been done by sh.'''
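
The typical call pattern for getuipathfn(), documented earlier in this
hunk, assuming a `repo` whose current working directory is the
subdirectory 'sub' of the repository root::

    uipathfn = getuipathfn(repo, legacyrelativevalue=True)
    # With ui.relative-paths resolving to True, the repo-relative input
    # 'sub/f.txt' comes back relative to the cwd, i.e. as 'f.txt':
    print(uipathfn('sub/f.txt'))
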
@@ -764,15 +798,14 @@
     '''Return a matcher and the patterns that were used.
     The matcher will warn about bad matches, unless an alternate badfn callback
     is provided.'''
-    if pats == ("",):
-        pats = []
     if opts is None:
         opts = {}
     if not globbed and default == 'relpath':
         pats = expandpats(pats or [])
 
+    uipathfn = getuipathfn(ctx.repo(), legacyrelativevalue=True)
     def bad(f, msg):
-        ctx.repo().ui.warn("%s: %s\n" % (m.rel(f), msg))
+        ctx.repo().ui.warn("%s: %s\n" % (uipathfn(f), msg))
 
     if badfn is None:
         badfn = bad
@@ -791,11 +824,11 @@
 
 def matchall(repo):
     '''Return a matcher that will efficiently match everything.'''
-    return matchmod.always(repo.root, repo.getcwd())
+    return matchmod.always()
 
 def matchfiles(repo, files, badfn=None):
     '''Return a matcher that will efficiently match exactly these files.'''
-    return matchmod.exact(repo.root, repo.getcwd(), files, badfn=badfn)
+    return matchmod.exact(files, badfn=badfn)
 
 def parsefollowlinespattern(repo, rev, pat, msg):
     """Return a file name from `pat` pattern suitable for usage in followlines
@@ -820,26 +853,26 @@
         return None
     return vfs.vfs(repo.wvfs.join(origbackuppath))
 
-def origpath(ui, repo, filepath):
-    '''customize where .orig files are created
+def backuppath(ui, repo, filepath):
+    '''customize where working copy backup files (.orig files) are created
 
     Fetch user defined path from config file: [ui] origbackuppath = <path>
     Fall back to default (filepath with .orig suffix) if not specified
+
+    filepath is repo-relative
+
+    Returns an absolute path
     '''
     origvfs = getorigvfs(ui, repo)
     if origvfs is None:
-        return filepath + ".orig"
+        return repo.wjoin(filepath + ".orig")
 
-    # Convert filepath from an absolute path into a path inside the repo.
-    filepathfromroot = util.normpath(os.path.relpath(filepath,
-                                                     start=repo.root))
-
-    origbackupdir = origvfs.dirname(filepathfromroot)
+    origbackupdir = origvfs.dirname(filepath)
     if not origvfs.isdir(origbackupdir) or origvfs.islink(origbackupdir):
         ui.note(_('creating directory: %s\n') % origvfs.join(origbackupdir))
 
         # Remove any files that conflict with the backup file's path
-        for f in reversed(list(util.finddirs(filepathfromroot))):
+        for f in reversed(list(util.finddirs(filepath))):
             if origvfs.isfileorlink(f):
                 ui.note(_('removing conflicting file: %s\n')
                         % origvfs.join(f))
@@ -848,12 +881,12 @@
 
         origvfs.makedirs(origbackupdir)
 
-    if origvfs.isdir(filepathfromroot) and not origvfs.islink(filepathfromroot):
+    if origvfs.isdir(filepath) and not origvfs.islink(filepath):
         ui.note(_('removing conflicting directory: %s\n')
-                % origvfs.join(filepathfromroot))
-        origvfs.rmtree(filepathfromroot, forcibly=True)
+                % origvfs.join(filepath))
+        origvfs.rmtree(filepath, forcibly=True)
 
-    return origvfs.join(filepathfromroot)
+    return origvfs.join(filepath)
 
 class _containsnode(object):
     """proxy __contains__(node) to container.__contains__ which accepts revs"""
@@ -984,6 +1017,7 @@
         for phase, nodes in toadvance.items():
             phases.advanceboundary(repo, tr, phase, nodes)
 
+        mayusearchived = repo.ui.config('experimental', 'cleanup-as-archived')
         # Obsolete or strip nodes
         if obsolete.isenabled(repo, obsolete.createmarkersopt):
             # If a node is already obsoleted, and we want to obsolete it
@@ -1001,6 +1035,17 @@
             if rels:
                 obsolete.createmarkers(repo, rels, operation=operation,
                                        metadata=metadata)
+        elif phases.supportinternal(repo) and mayusearchived:
+            # this assumes we do not have "unstable" nodes above the cleaned ones
+            allreplaced = set()
+            for ns in replacements.keys():
+                allreplaced.update(ns)
+            if backup:
+                from . import repair # avoid import cycle
+                node = min(allreplaced, key=repo.changelog.rev)
+                repair.backupbundle(repo, allreplaced, allreplaced, node,
+                                    operation)
+            phases.retractboundary(repo, tr, phases.archived, allreplaced)
         else:
             from . import repair # avoid import cycle
             tostrip = list(n for ns in replacements for n in ns)
@@ -1008,7 +1053,7 @@
                 repair.delayedstrip(repo.ui, repo, tostrip, operation,
                                     backup=backup)
 
-def addremove(repo, matcher, prefix, opts=None):
+def addremove(repo, matcher, prefix, uipathfn, opts=None):
     if opts is None:
         opts = {}
     m = matcher
@@ -1022,19 +1067,20 @@
     similarity /= 100.0
 
     ret = 0
-    join = lambda f: os.path.join(prefix, f)
 
     wctx = repo[None]
     for subpath in sorted(wctx.substate):
         submatch = matchmod.subdirmatcher(subpath, m)
         if opts.get('subrepos') or m.exact(subpath) or any(submatch.files()):
             sub = wctx.sub(subpath)
+            subprefix = repo.wvfs.reljoin(prefix, subpath)
+            subuipathfn = subdiruipathfn(subpath, uipathfn)
             try:
-                if sub.addremove(submatch, prefix, opts):
+                if sub.addremove(submatch, subprefix, subuipathfn, opts):
                     ret = 1
             except error.LookupError:
                 repo.ui.status(_("skipping missing subrepository: %s\n")
-                                 % join(subpath))
+                                 % uipathfn(subpath))
 
     rejected = []
     def badfn(f, msg):
@@ -1052,15 +1098,15 @@
     for abs in sorted(toprint):
         if repo.ui.verbose or not m.exact(abs):
             if abs in unknownset:
-                status = _('adding %s\n') % m.uipath(abs)
+                status = _('adding %s\n') % uipathfn(abs)
                 label = 'ui.addremove.added'
             else:
-                status = _('removing %s\n') % m.uipath(abs)
+                status = _('removing %s\n') % uipathfn(abs)
                 label = 'ui.addremove.removed'
             repo.ui.status(status, label=label)
 
     renames = _findrenames(repo, m, added + unknown, removed + deleted,
-                           similarity)
+                           similarity, uipathfn)
 
     if not dry_run:
         _markchanges(repo, unknown + forgotten, deleted, renames)
@@ -1089,8 +1135,12 @@
                 status = _('removing %s\n') % abs
             repo.ui.status(status)
 
+    # TODO: We should probably have the caller pass in uipathfn and apply it to
+    # the messages above too. legacyrelativevalue=True is consistent with how
+    # it used to work.
+    uipathfn = getuipathfn(repo, legacyrelativevalue=True)
     renames = _findrenames(repo, m, added + unknown, removed + deleted,
-                           similarity)
+                           similarity, uipathfn)
 
     _markchanges(repo, unknown + forgotten, deleted, renames)
 
@@ -1129,7 +1179,7 @@
 
     return added, unknown, deleted, removed, forgotten
 
-def _findrenames(repo, matcher, added, removed, similarity):
+def _findrenames(repo, matcher, added, removed, similarity, uipathfn):
     '''Find renames from removed files to added ones.'''
     renames = {}
     if similarity > 0:
@@ -1139,7 +1189,7 @@
                 or not matcher.exact(new)):
                 repo.ui.status(_('recording removal of %s as rename to %s '
                                  '(%d%% similar)\n') %
-                               (matcher.rel(old), matcher.rel(new),
+                               (uipathfn(old), uipathfn(new),
                                 score * 100))
             renames[new] = old
     return renames
@@ -1154,6 +1204,49 @@
         for new, old in renames.iteritems():
             wctx.copy(old, new)
 
+def getrenamedfn(repo, endrev=None):
+    if copiesmod.usechangesetcentricalgo(repo):
+        def getrenamed(fn, rev):
+            ctx = repo[rev]
+            p1copies = ctx.p1copies()
+            if fn in p1copies:
+                return p1copies[fn]
+            p2copies = ctx.p2copies()
+            if fn in p2copies:
+                return p2copies[fn]
+            return None
+        return getrenamed
+
+    rcache = {}
+    if endrev is None:
+        endrev = len(repo)
+
+    def getrenamed(fn, rev):
+        '''looks up all renames for a file (up to endrev) the first
+        time the file is given. It indexes on the changerev and only
+        parses the manifest if linkrev != changerev.
+        Returns rename info for fn at changerev rev.'''
+        if fn not in rcache:
+            rcache[fn] = {}
+            fl = repo.file(fn)
+            for i in fl:
+                lr = fl.linkrev(i)
+                renamed = fl.renamed(fl.node(i))
+                rcache[fn][lr] = renamed and renamed[0]
+                if lr >= endrev:
+                    break
+        if rev in rcache[fn]:
+            return rcache[fn][rev]
+
+        # If linkrev != rev (i.e. rev not found in rcache) fallback to
+        # filectx logic.
+        try:
+            return repo[rev][fn].copysource()
+        except error.LookupError:
+            return None
+
+    return getrenamed
+
 def dirstatecopy(ui, repo, wctx, src, dst, dryrun=False, cwd=None):
     """Update the dirstate to reflect the intent of copying src to dst. For
     different reasons it might not end with dst being marked as copied from src.
@@ -1173,6 +1266,49 @@
         elif not dryrun:
             wctx.copy(origsrc, dst)
 
+def movedirstate(repo, newctx, match=None):
+    """Move the dirstate to newctx and adjust it as necessary.
+
+    A matcher can be provided as an optimization. It is probably a bug to pass
+    a matcher that doesn't match all the differences between the parent of the
+    working copy and newctx.
+    """
+    oldctx = repo['.']
+    ds = repo.dirstate
+    ds.setparents(newctx.node(), nullid)
+    copies = dict(ds.copies())
+    s = newctx.status(oldctx, match=match)
+    for f in s.modified:
+        if ds[f] == 'r':
+            # modified + removed -> removed
+            continue
+        ds.normallookup(f)
+
+    for f in s.added:
+        if ds[f] == 'r':
+            # added + removed -> unknown
+            ds.drop(f)
+        elif ds[f] != 'a':
+            ds.add(f)
+
+    for f in s.removed:
+        if ds[f] == 'a':
+            # removed + added -> normal
+            ds.normallookup(f)
+        elif ds[f] != 'r':
+            ds.remove(f)
+
+    # Merge old parent and old working dir copies
+    oldcopies = copiesmod.pathcopies(newctx, oldctx, match)
+    oldcopies.update(copies)
+    copies = dict((dst, oldcopies.get(src, src))
+                  for dst, src in oldcopies.iteritems())
+    # Adjust the dirstate copies
+    for dst, src in copies.iteritems():
+        if (src not in newctx or dst in newctx or ds[dst] != 'a'):
+            src = None
+        ds.copy(src, dst)
+
 def writerequires(opener, requirements):
     with opener('requires', 'w', atomictemp=True) as fp:
         for r in sorted(requirements):
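
The movedirstate() helper added above folds two inputs per file, its status
against the new parent and its current dirstate entry, into a new entry. A
minimal standalone sketch of that state algebra, using plain strings
('n'/'a'/'r'/'?' for normal/added/removed/untracked) instead of the real
dirstate API::

    # Sketch of the movedirstate() transitions above; illustrative only,
    # the real code mutates repo.dirstate in place.
    def newstate(change, oldstate):
        if change == 'modified':
            # modified + removed -> removed; otherwise marked for lookup
            return 'r' if oldstate == 'r' else 'n'
        if change == 'added':
            if oldstate == 'r':
                return '?'      # added + removed -> unknown (dropped)
            return 'a'
        if change == 'removed':
            if oldstate == 'a':
                return 'n'      # removed + added -> normal again
            return 'r'
        return oldstate

    assert newstate('added', 'r') == '?'
    assert newstate('removed', 'a') == 'n'
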
--- a/mercurial/setdiscovery.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/setdiscovery.py	Wed Apr 17 13:41:18 2019 -0400
@@ -92,69 +92,6 @@
                 dist.setdefault(p, d + 1)
                 visit.append(p)
 
-def _takequicksample(repo, headrevs, revs, size):
-    """takes a quick sample of size <size>
-
-    It is meant for initial sampling and focuses on querying heads and close
-    ancestors of heads.
-
-    :dag: a dag object
-    :headrevs: set of head revisions in local DAG to consider
-    :revs: set of revs to discover
-    :size: the maximum size of the sample"""
-    if len(revs) <= size:
-        return list(revs)
-    sample = set(repo.revs('heads(%ld)', revs))
-
-    if len(sample) >= size:
-        return _limitsample(sample, size)
-
-    _updatesample(None, headrevs, sample, repo.changelog.parentrevs,
-                  quicksamplesize=size)
-    return sample
-
-def _takefullsample(repo, headrevs, revs, size):
-    if len(revs) <= size:
-        return list(revs)
-    sample = set(repo.revs('heads(%ld)', revs))
-
-    # update from heads
-    revsheads = set(repo.revs('heads(%ld)', revs))
-    _updatesample(revs, revsheads, sample, repo.changelog.parentrevs)
-
-    # update from roots
-    revsroots = set(repo.revs('roots(%ld)', revs))
-
-    # _updatesample() essentially does interaction over revisions to look up
-    # their children. This lookup is expensive and doing it in a loop is
-    # quadratic. We precompute the children for all relevant revisions and
-    # make the lookup in _updatesample() a simple dict lookup.
-    #
-    # Because this function can be called multiple times during discovery, we
-    # may still perform redundant work and there is room to optimize this by
-    # keeping a persistent cache of children across invocations.
-    children = {}
-
-    parentrevs = repo.changelog.parentrevs
-    for rev in repo.changelog.revs(start=min(revsroots)):
-        # Always ensure revision has an entry so we don't need to worry about
-        # missing keys.
-        children.setdefault(rev, [])
-
-        for prev in parentrevs(rev):
-            if prev == nullrev:
-                continue
-
-            children.setdefault(prev, []).append(rev)
-
-    _updatesample(revs, revsroots, sample, children.__getitem__)
-    assert sample
-    sample = _limitsample(sample, size)
-    if len(sample) < size:
-        more = size - len(sample)
-        sample.update(random.sample(list(revs - sample), more))
-    return sample
-
 def _limitsample(sample, desiredlen):
     """return a random subset of sample of at most desiredlen item"""
     if len(sample) > desiredlen:
@@ -179,6 +116,7 @@
         self._common = repo.changelog.incrementalmissingrevs()
         self._undecided = None
         self.missing = set()
+        self._childrenmap = None
 
     def addcommons(self, commons):
         """registrer nodes known as common"""
@@ -222,12 +160,98 @@
         self._undecided = set(self._common.missingancestors(self._targetheads))
         return self._undecided
 
+    def stats(self):
+        return {
+            'undecided': len(self.undecided),
+        }
+
     def commonheads(self):
         """the heads of the known common set"""
         # heads(common) == heads(common.bases) since common represents
         # common.bases and all its ancestors
         return self._common.basesheads()
 
+    def _parentsgetter(self):
+        getrev = self._repo.changelog.index.__getitem__
+        def getparents(r):
+            return getrev(r)[5:7]
+        return getparents
+
+    def _childrengetter(self):
+
+        if self._childrenmap is not None:
+            # During discovery, the `undecided` set keeps shrinking.
+            # Therefore, the map computed for an iteration N will be
+            # valid for iteration N+1. Instead of computing the same
+            # data over and over, we cache it the first time.
+            return self._childrenmap.__getitem__
+
+        # _updatesample() essentially iterates over revisions to look up
+        # their children. This lookup is expensive and doing it in a loop is
+        # quadratic. We precompute the children for all relevant revisions and
+        # make the lookup in _updatesample() a simple dict lookup.
+        self._childrenmap = children = {}
+
+        parentrevs = self._parentsgetter()
+        revs = self.undecided
+
+        for rev in sorted(revs):
+            # Always ensure revision has an entry so we don't need to worry
+            # about missing keys.
+            children[rev] = []
+            for prev in parentrevs(rev):
+                if prev == nullrev:
+                    continue
+                c = children.get(prev)
+                if c is not None:
+                    c.append(rev)
+        return children.__getitem__
+
+    def takequicksample(self, headrevs, size):
+        """takes a quick sample of size <size>
+
+        It is meant for initial sampling and focuses on querying heads and close
+        ancestors of heads.
+
+        :headrevs: set of head revisions in local DAG to consider
+        :size: the maximum size of the sample"""
+        revs = self.undecided
+        if len(revs) <= size:
+            return list(revs)
+        sample = set(self._repo.revs('heads(%ld)', revs))
+
+        if len(sample) >= size:
+            return _limitsample(sample, size)
+
+        _updatesample(None, headrevs, sample, self._parentsgetter(),
+                      quicksamplesize=size)
+        return sample
+
+    def takefullsample(self, headrevs, size):
+        revs = self.undecided
+        if len(revs) <= size:
+            return list(revs)
+        repo = self._repo
+        sample = set(repo.revs('heads(%ld)', revs))
+        parentrevs = self._parentsgetter()
+
+        # update from heads
+        revsheads = sample.copy()
+        _updatesample(revs, revsheads, sample, parentrevs)
+
+        # update from roots
+        revsroots = set(repo.revs('roots(%ld)', revs))
+
+        childrenrevs = self._childrengetter()
+
+        _updatesample(revs, revsroots, sample, childrenrevs)
+        assert sample
+        sample = _limitsample(sample, size)
+        if len(sample) < size:
+            more = size - len(sample)
+            sample.update(random.sample(list(revs - sample), more))
+        return sample
+
 def findcommonheads(ui, local, remote,
                     initialsamplesize=100,
                     fullsamplesize=200,
@@ -272,18 +296,18 @@
     # compatibility reasons)
     ui.status(_("searching for changes\n"))
 
-    srvheads = []
+    knownsrvheads = []  # revnos of remote heads that are known locally
     for node in srvheadhashes:
         if node == nullid:
             continue
 
         try:
-            srvheads.append(clrev(node))
+            knownsrvheads.append(clrev(node))
         # Catches unknown and filtered nodes.
         except error.LookupError:
             continue
 
-    if len(srvheads) == len(srvheadhashes):
+    if len(knownsrvheads) == len(srvheadhashes):
         ui.debug("all remote heads known locally\n")
         return srvheadhashes, False, srvheadhashes
 
@@ -297,7 +321,7 @@
     disco = partialdiscovery(local, ownheads)
     # treat remote heads (and maybe own heads) as a first implicit sample
     # response
-    disco.addcommons(srvheads)
+    disco.addcommons(knownsrvheads)
     disco.addinfo(zip(sample, yesno))
 
     full = False
@@ -309,19 +333,21 @@
                 ui.note(_("sampling from both directions\n"))
             else:
                 ui.debug("taking initial sample\n")
-            samplefunc = _takefullsample
+            samplefunc = disco.takefullsample
             targetsize = fullsamplesize
         else:
             # use even cheaper initial sample
             ui.debug("taking quick initial sample\n")
-            samplefunc = _takequicksample
+            samplefunc = disco.takequicksample
             targetsize = initialsamplesize
-        sample = samplefunc(local, ownheads, disco.undecided, targetsize)
+        sample = samplefunc(ownheads, targetsize)
 
         roundtrips += 1
         progress.update(roundtrips)
+        stats = disco.stats()
         ui.debug("query %i; still undecided: %i, sample size is: %i\n"
-                 % (roundtrips, len(disco.undecided), len(sample)))
+                 % (roundtrips, stats['undecided'], len(sample)))
+
         # indices between sample and externalized version must match
         sample = list(sample)
 
@@ -340,7 +366,7 @@
     ui.debug("%d total queries in %.4fs\n" % (roundtrips, elapsed))
     msg = ('found %d common and %d unknown server heads,'
            ' %d roundtrips in %.4fs\n')
-    missing = set(result) - set(srvheads)
+    missing = set(result) - set(knownsrvheads)
     ui.log('discovery', msg, len(result), len(missing), roundtrips,
            elapsed)
 
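
The partialdiscovery class above now precomputes a children map once per
discovery run; the same linear-pass construction in isolation, assuming
parents come from a rev -> (p1, p2) mapping with -1 standing in for
nullrev::

    # One pass over the undecided revs builds the full child lists, turning
    # the per-rev child lookup in _updatesample() into a dict access.
    def childrengetter(undecided, parentrevs):
        children = {}
        for rev in sorted(undecided):
            children[rev] = []          # every rev gets an entry up front
            for prev in parentrevs(rev):
                if prev == -1:
                    continue
                c = children.get(prev)
                if c is not None:       # only parents inside the set matter
                    c.append(rev)
        return children.__getitem__

    parents = {0: (-1, -1), 1: (0, -1), 2: (0, -1), 3: (1, 2)}
    lookup = childrengetter({0, 1, 2, 3}, parents.__getitem__)
    assert lookup(0) == [1, 2] and lookup(3) == []
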
--- a/mercurial/simplemerge.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/simplemerge.py	Wed Apr 17 13:41:18 2019 -0400
@@ -289,15 +289,15 @@
 
             # find matches at the front
             ii = 0
-            while ii < alen and ii < blen and \
-                  self.a[a1 + ii] == self.b[b1 + ii]:
+            while (ii < alen and ii < blen and
+                   self.a[a1 + ii] == self.b[b1 + ii]):
                 ii += 1
             startmatches = ii
 
             # find matches at the end
             ii = 0
-            while ii < alen and ii < blen and \
-                  self.a[a2 - ii - 1] == self.b[b2 - ii - 1]:
+            while (ii < alen and ii < blen and
+                   self.a[a2 - ii - 1] == self.b[b2 - ii - 1]):
                 ii += 1
             endmatches = ii
 
@@ -350,8 +350,8 @@
                 aend = asub + intlen
                 bend = bsub + intlen
 
-                assert self.base[intbase:intend] == self.a[asub:aend], \
-                       (self.base[intbase:intend], self.a[asub:aend])
+                assert self.base[intbase:intend] == self.a[asub:aend], (
+                        (self.base[intbase:intend], self.a[asub:aend]))
 
                 assert self.base[intbase:intend] == self.b[bsub:bend]
 
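
The reindented loops above peel equal lines off the front and back of two
conflicting regions; the same logic as a standalone function (a sketch over
plain lists, not the Merge3Text API)::

    # Count lines that agree at the start and at the end of two regions.
    def matching_ends(a, b):
        alen, blen = len(a), len(b)
        ii = 0
        while ii < alen and ii < blen and a[ii] == b[ii]:
            ii += 1
        start = ii
        ii = 0
        while (ii < alen - start and ii < blen - start
               and a[-ii - 1] == b[-ii - 1]):
            ii += 1
        return start, ii

    assert matching_ends(['x', 'a', 'y'], ['x', 'b', 'y']) == (1, 1)
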
--- a/mercurial/sparse.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/sparse.py	Wed Apr 17 13:41:18 2019 -0400
@@ -264,7 +264,7 @@
     """Returns a matcher that returns true for any of the forced includes
     before testing against the actual matcher."""
     kindpats = [('path', include, '') for include in includes]
-    includematcher = matchmod.includematcher('', '', kindpats)
+    includematcher = matchmod.includematcher('', kindpats)
     return matchmod.unionmatcher([includematcher, matcher])
 
 def matcher(repo, revs=None, includetemp=True):
@@ -277,7 +277,7 @@
     """
     # If sparse isn't enabled, sparse matcher matches everything.
     if not enabled:
-        return matchmod.always(repo.root, '')
+        return matchmod.always()
 
     if not revs or revs == [None]:
         revs = [repo.changelog.rev(node)
@@ -305,7 +305,7 @@
             pass
 
     if not matchers:
-        result = matchmod.always(repo.root, '')
+        result = matchmod.always()
     elif len(matchers) == 1:
         result = matchers[0]
     else:
@@ -336,7 +336,7 @@
     if branchmerge:
         # If we're merging, use the wctx filter, since we're merging into
         # the wctx.
-        sparsematch = matcher(repo, [wctx.parents()[0].rev()])
+        sparsematch = matcher(repo, [wctx.p1().rev()])
     else:
         # If we're updating, use the target context's filter, since we're
         # moving to the target context.
@@ -643,8 +643,8 @@
             for kindpat in pats:
                 kind, pat = matchmod._patsplit(kindpat, None)
                 if kind in matchmod.cwdrelativepatternkinds or kind is None:
-                    ap = (kind + ':' if kind else '') +\
-                            pathutil.canonpath(root, cwd, pat)
+                    ap = ((kind + ':' if kind else '') +
+                          pathutil.canonpath(root, cwd, pat))
                     abspats.append(ap)
                 else:
                     abspats.append(kindpat)
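
forceincludematcher() above unions a matcher over the forced includes with
the actual matcher; the idea reduced to plain predicates (a sketch, not the
matchmod API, which uses 'path' kind patterns)::

    # A file matches if it is a forced include, else the real matcher decides.
    def forceinclude(matcher, includes):
        forced = set(includes)
        return lambda f: f in forced or matcher(f)

    sparsematch = lambda f: f.startswith('src/')
    m = forceinclude(sparsematch, ['.hgignore'])
    assert m('.hgignore') and m('src/a.py') and not m('docs/a.txt')
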
--- a/mercurial/sslutil.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/sslutil.py	Wed Apr 17 13:41:18 2019 -0400
@@ -430,6 +430,7 @@
                           'error)\n'))
         except ssl.SSLError:
             pass
+
         # Try to print more helpful error messages for known failures.
         if util.safehasattr(e, 'reason'):
             # This error occurs when the client and server don't share a
@@ -437,7 +438,7 @@
             # outright. Hopefully the reason for this error is that we require
             # TLS 1.1+ and the server only supports TLS 1.0. Whatever the
             # reason, try to emit an actionable warning.
-            if e.reason == 'UNSUPPORTED_PROTOCOL':
+            if e.reason == r'UNSUPPORTED_PROTOCOL':
                 # We attempted TLS 1.0+.
                 if settings['protocolui'] == 'tls1.0':
                     # We support more than just TLS 1.0+. If this happens,
@@ -453,7 +454,7 @@
                             'server; see '
                             'https://mercurial-scm.org/wiki/SecureConnections '
                             'for more info)\n') % (
-                                serverhostname,
+                                pycompat.bytesurl(serverhostname),
                                 ', '.join(sorted(supportedprotocols))))
                     else:
                         ui.warn(_(
@@ -462,7 +463,8 @@
                             'supports TLS 1.0 because it has known security '
                             'vulnerabilities; see '
                             'https://mercurial-scm.org/wiki/SecureConnections '
-                            'for more info)\n') % serverhostname)
+                            'for more info)\n') %
+                                pycompat.bytesurl(serverhostname))
                 else:
                     # We attempted TLS 1.1+. We can only get here if the client
                     # supports the configured protocol. So the likely reason is
@@ -472,19 +474,20 @@
                         '(could not negotiate a common security protocol (%s+) '
                         'with %s; the likely cause is Mercurial is configured '
                         'to be more secure than the server can support)\n') % (
-                        settings['protocolui'], serverhostname))
+                        settings['protocolui'],
+                        pycompat.bytesurl(serverhostname)))
                     ui.warn(_('(consider contacting the operator of this '
                               'server and ask them to support modern TLS '
                               'protocol versions; or, set '
                               'hostsecurity.%s:minimumprotocol=tls1.0 to allow '
                               'use of legacy, less secure protocols when '
                               'communicating with this server)\n') %
-                            serverhostname)
+                            pycompat.bytesurl(serverhostname))
                     ui.warn(_(
                         '(see https://mercurial-scm.org/wiki/SecureConnections '
                         'for more info)\n'))
 
-            elif (e.reason == 'CERTIFICATE_VERIFY_FAILED' and
+            elif (e.reason == r'CERTIFICATE_VERIFY_FAILED' and
                 pycompat.iswindows):
 
                 ui.warn(_('(the full certificate chain may not be available '
--- a/mercurial/statichttprepo.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/statichttprepo.py	Wed Apr 17 13:41:18 2019 -0400
@@ -13,12 +13,14 @@
 
 from .i18n import _
 from . import (
+    branchmap,
     changelog,
     error,
     localrepo,
     manifest,
     namespaces,
     pathutil,
+    pycompat,
     url,
     util,
     vfs as vfsmod,
@@ -44,12 +46,12 @@
     def seek(self, pos):
         self.pos = pos
     def read(self, bytes=None):
-        req = urlreq.request(self.url)
+        req = urlreq.request(pycompat.strurl(self.url))
         end = ''
         if bytes:
             end = self.pos + bytes - 1
         if self.pos or end:
-            req.add_header('Range', 'bytes=%d-%s' % (self.pos, end))
+            req.add_header(r'Range', r'bytes=%d-%s' % (self.pos, end))
 
         try:
             f = self.opener.open(req)
@@ -59,7 +61,7 @@
             num = inst.code == 404 and errno.ENOENT or None
             raise IOError(num, inst)
         except urlerr.urlerror as inst:
-            raise IOError(None, inst.reason[1])
+            raise IOError(None, inst.reason)
 
         if code == 200:
             # HTTPRangeHandler does nothing if remote does not support
@@ -192,7 +194,7 @@
         self.changelog = changelog.changelog(self.svfs)
         self._tags = None
         self.nodetagscache = None
-        self._branchcaches = {}
+        self._branchcaches = branchmap.BranchMapCache()
         self._revbranchcache = None
         self.encodepats = None
         self.decodepats = None
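
httprangereader.read() above fetches one byte range per call; a minimal
standalone sketch of the same Range-header technique under Python 3's
urllib (the URL is a placeholder, and this is not the Mercurial opener)::

    import urllib.request

    def readrange(url, pos, length):
        req = urllib.request.Request(url)
        req.add_header('Range', 'bytes=%d-%d' % (pos, pos + length - 1))
        with urllib.request.urlopen(req) as f:
            code = f.getcode()
            data = f.read()
        if code == 200:
            # server ignored Range and sent the whole file; trim locally,
            # like the "code == 200" branch in the real reader
            data = data[pos:pos + length]
        return data
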
--- a/mercurial/statprof.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/statprof.py	Wed Apr 17 13:41:18 2019 -0400
@@ -203,7 +203,7 @@
 class CodeSite(object):
     cache = {}
 
-    __slots__ = (u'path', u'lineno', u'function', u'source')
+    __slots__ = (r'path', r'lineno', r'function', r'source')
 
     def __init__(self, path, lineno, function):
         assert isinstance(path, bytes)
@@ -263,7 +263,7 @@
         return r'%s:%s' % (self.filename(), self.function)
 
 class Sample(object):
-    __slots__ = (u'stack', u'time')
+    __slots__ = (r'stack', r'time')
 
     def __init__(self, stack, time):
         self.stack = stack
@@ -816,9 +816,6 @@
             id2stack[-1].update(parent=parent)
         return myid
 
-    def endswith(a, b):
-        return list(a)[-len(b):] == list(b)
-
     # The sampling profiler can sample multiple times without
     # advancing the clock, potentially causing the Chrome trace viewer
     # to render single-pixel columns that we cannot zoom in on.  We
@@ -858,9 +855,6 @@
     # events given only stack snapshots.
 
     for sample in data.samples:
-        tos = sample.stack[0]
-        name = tos.function
-        path = simplifypath(tos.path)
         stack = tuple((('%s:%d' % (simplifypath(frame.path), frame.lineno),
                         frame.function) for frame in sample.stack))
         qstack = collections.deque(stack)
--- a/mercurial/store.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/store.py	Wed Apr 17 13:41:18 2019 -0400
@@ -8,6 +8,7 @@
 from __future__ import absolute_import
 
 import errno
+import functools
 import hashlib
 import os
 import stat
@@ -23,6 +24,9 @@
 )
 
 parsers = policy.importmod(r'parsers')
+# how many bytes should be read from fncache in one read
+# It is done to prevent loading large fncache files into memory
+fncache_chunksize = 10 ** 6
 
 def _matchtrackedpath(path, matcher):
     """parses a fncache entry and returns whether the entry is tracking a path
@@ -463,14 +467,35 @@
             # skip nonexistent file
             self.entries = set()
             return
-        self.entries = set(decodedir(fp.read()).splitlines())
+
+        self.entries = set()
+        chunk = b''
+        for c in iter(functools.partial(fp.read, fncache_chunksize), b''):
+            chunk += c
+            try:
+                p = chunk.rindex(b'\n')
+                self.entries.update(decodedir(chunk[:p + 1]).splitlines())
+                chunk = chunk[p + 1:]
+            except ValueError:
+                # substring '\n' not found, maybe the entry is bigger than the
+                # chunksize, so let's keep iterating
+                pass
+
+        if chunk:
+            raise error.Abort(_("fncache does not ends with a newline"),
+                              hint=_("use 'hg debugrebuildfncache' to rebuild"
+                                     " the fncache"))
+        self._checkentries(fp)
+        fp.close()
+
+    def _checkentries(self, fp):
+        """ make sure there is no empty string in entries """
         if '' in self.entries:
             fp.seek(0)
             for n, line in enumerate(util.iterfile(fp)):
                 if not line.rstrip('\n'):
                     t = _('invalid entry in fncache, line %d') % (n + 1)
                     raise error.Abort(t)
-        fp.close()
 
     def write(self, tr):
         if self._dirty:
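
The new fncache reading loop above buffers at most one chunk plus a partial
entry at a time; the pattern in isolation, over any binary file object::

    import functools
    import io

    def readlines_chunked(fp, chunksize=10 ** 6):
        entries, chunk = [], b''
        for c in iter(functools.partial(fp.read, chunksize), b''):
            chunk += c
            p = chunk.rfind(b'\n')
            if p >= 0:
                # flush everything up to the last newline; keep the tail so
                # no entry ever straddles a chunk boundary
                entries.extend(chunk[:p + 1].splitlines())
                chunk = chunk[p + 1:]
        if chunk:
            raise ValueError('input does not end with a newline')
        return entries

    fp = io.BytesIO(b'data/a.i\ndata/b.i\n')
    assert readlines_chunked(fp, chunksize=4) == [b'data/a.i', b'data/b.i']
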
--- a/mercurial/streamclone.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/streamclone.py	Wed Apr 17 13:41:18 2019 -0400
@@ -13,7 +13,6 @@
 
 from .i18n import _
 from . import (
-    branchmap,
     cacheutil,
     error,
     narrowspec,
@@ -174,7 +173,7 @@
         repo._writerequirements()
 
         if rbranchmap:
-            branchmap.replacecache(repo, rbranchmap)
+            repo._branchcaches.replace(repo, rbranchmap)
 
         repo.invalidate()
 
--- a/mercurial/subrepo.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/subrepo.py	Wed Apr 17 13:41:18 2019 -0400
@@ -11,7 +11,6 @@
 import errno
 import hashlib
 import os
-import posixpath
 import re
 import stat
 import subprocess
@@ -288,10 +287,10 @@
         """
         raise NotImplementedError
 
-    def add(self, ui, match, prefix, explicitonly, **opts):
+    def add(self, ui, match, prefix, uipathfn, explicitonly, **opts):
         return []
 
-    def addremove(self, matcher, prefix, opts):
+    def addremove(self, matcher, prefix, uipathfn, opts):
         self.ui.warn("%s: %s" % (prefix, _("addremove is not supported")))
         return 1
 
@@ -324,9 +323,9 @@
 
     def matchfileset(self, expr, badfn=None):
         """Resolve the fileset expression for this repo"""
-        return matchmod.nevermatcher(self.wvfs.base, '', badfn=badfn)
+        return matchmod.never(badfn=badfn)
 
-    def printfiles(self, ui, m, fm, fmt, subrepos):
+    def printfiles(self, ui, m, uipathfn, fm, fmt, subrepos):
         """handle the files command for this subrepo"""
         return 1
 
@@ -344,8 +343,8 @@
             flags = self.fileflags(name)
             mode = 'x' in flags and 0o755 or 0o644
             symlink = 'l' in flags
-            archiver.addfile(prefix + self._path + '/' + name,
-                             mode, symlink, self.filedata(name, decode))
+            archiver.addfile(prefix + name, mode, symlink,
+                             self.filedata(name, decode))
             progress.increment()
         progress.complete()
         return total
@@ -356,10 +355,10 @@
         matched by the match function
         '''
 
-    def forget(self, match, prefix, dryrun, interactive):
+    def forget(self, match, prefix, uipathfn, dryrun, interactive):
         return ([], [])
 
-    def removefiles(self, matcher, prefix, after, force, subrepos,
+    def removefiles(self, matcher, prefix, uipathfn, after, force, subrepos,
                     dryrun, warnings):
         """remove the matched files from the subrepository and the filesystem,
         possibly by force and/or after the file has been removed from the
@@ -370,8 +369,8 @@
         return 1
 
     def revert(self, substate, *pats, **opts):
-        self.ui.warn(_('%s: reverting %s subrepos is unsupported\n') \
-            % (substate[0], substate[2]))
+        self.ui.warn(_('%s: reverting %s subrepos is unsupported\n')
+                     % (substate[0], substate[2]))
         return []
 
     def shortid(self, revid):
@@ -517,20 +516,18 @@
             self._repo.vfs.write('hgrc', util.tonativeeol(''.join(lines)))
 
     @annotatesubrepoerror
-    def add(self, ui, match, prefix, explicitonly, **opts):
-        return cmdutil.add(ui, self._repo, match,
-                           self.wvfs.reljoin(prefix, self._path),
+    def add(self, ui, match, prefix, uipathfn, explicitonly, **opts):
+        return cmdutil.add(ui, self._repo, match, prefix, uipathfn,
                            explicitonly, **opts)
 
     @annotatesubrepoerror
-    def addremove(self, m, prefix, opts):
+    def addremove(self, m, prefix, uipathfn, opts):
         # In the same way as subdirectories are processed, once in a subrepo,
         # always enter any of its subrepos.  Don't corrupt the options that
         # will be used to process sibling subrepos, however.
         opts = copy.copy(opts)
         opts['subrepos'] = True
-        return scmutil.addremove(self._repo, m,
-                                 self.wvfs.reljoin(prefix, self._path), opts)
+        return scmutil.addremove(self._repo, m, prefix, uipathfn, opts)
 
     @annotatesubrepoerror
     def cat(self, match, fm, fntemplate, prefix, **opts):
@@ -559,10 +556,9 @@
             # in hex format
             if node2 is not None:
                 node2 = node.bin(node2)
-            logcmdutil.diffordiffstat(ui, self._repo, diffopts,
-                                      node1, node2, match,
-                                      prefix=posixpath.join(prefix, self._path),
-                                      listsubrepos=True, **opts)
+            logcmdutil.diffordiffstat(ui, self._repo, diffopts, node1, node2,
+                                      match, prefix=prefix, listsubrepos=True,
+                                      **opts)
         except error.RepoLookupError as inst:
             self.ui.warn(_('warning: error "%s" in subrepository "%s"\n')
                           % (inst, subrelpath(self)))
@@ -581,7 +577,8 @@
         for subpath in ctx.substate:
             s = subrepo(ctx, subpath, True)
             submatch = matchmod.subdirmatcher(subpath, match)
-            total += s.archive(archiver, prefix + self._path + '/', submatch,
+            subprefix = prefix + subpath + '/'
+            total += s.archive(archiver, subprefix, submatch,
                                decode)
         return total
 
@@ -700,7 +697,7 @@
             ctx = urepo[revision]
             if ctx.hidden():
                 urepo.ui.warn(
-                    _('revision %s in subrepository "%s" is hidden\n') \
+                    _('revision %s in subrepository "%s" is hidden\n')
                     % (revision[0:12], self._path))
                 repo = urepo
         hg.updaterepo(repo, revision, overwrite)
@@ -798,7 +795,7 @@
         return ctx.flags(name)
 
     @annotatesubrepoerror
-    def printfiles(self, ui, m, fm, fmt, subrepos):
+    def printfiles(self, ui, m, uipathfn, fm, fmt, subrepos):
         # If the parent context is a workingctx, use the workingctx here for
         # consistency.
         if self._ctx.rev() is None:
@@ -806,16 +803,15 @@
         else:
             rev = self._state[1]
             ctx = self._repo[rev]
-        return cmdutil.files(ui, ctx, m, fm, fmt, subrepos)
+        return cmdutil.files(ui, ctx, m, uipathfn, fm, fmt, subrepos)
 
     @annotatesubrepoerror
     def matchfileset(self, expr, badfn=None):
-        repo = self._repo
         if self._ctx.rev() is None:
-            ctx = repo[None]
+            ctx = self._repo[None]
         else:
             rev = self._state[1]
-            ctx = repo[rev]
+            ctx = self._repo[rev]
 
         matchers = [ctx.matchfileset(expr, badfn=badfn)]
 
@@ -824,8 +820,7 @@
 
             try:
                 sm = sub.matchfileset(expr, badfn=badfn)
-                pm = matchmod.prefixdirmatcher(repo.root, repo.getcwd(),
-                                               subpath, sm, badfn=badfn)
+                pm = matchmod.prefixdirmatcher(subpath, sm, badfn=badfn)
                 matchers.append(pm)
             except error.LookupError:
                 self.ui.status(_("skipping missing subrepository: %s\n")
@@ -839,16 +834,14 @@
         return ctx.walk(match)
 
     @annotatesubrepoerror
-    def forget(self, match, prefix, dryrun, interactive):
-        return cmdutil.forget(self.ui, self._repo, match,
-                              self.wvfs.reljoin(prefix, self._path),
+    def forget(self, match, prefix, uipathfn, dryrun, interactive):
+        return cmdutil.forget(self.ui, self._repo, match, prefix, uipathfn,
                               True, dryrun=dryrun, interactive=interactive)
 
     @annotatesubrepoerror
-    def removefiles(self, matcher, prefix, after, force, subrepos,
+    def removefiles(self, matcher, prefix, uipathfn, after, force, subrepos,
                     dryrun, warnings):
-        return cmdutil.remove(self.ui, self._repo, matcher,
-                              self.wvfs.reljoin(prefix, self._path),
+        return cmdutil.remove(self.ui, self._repo, matcher, prefix, uipathfn,
                               after, force, subrepos, dryrun)
 
     @annotatesubrepoerror
@@ -971,9 +964,8 @@
         p = subprocess.Popen(pycompat.rapply(procutil.tonativestr, cmd),
                              bufsize=-1, close_fds=procutil.closefds,
                              stdout=subprocess.PIPE, stderr=subprocess.PIPE,
-                             universal_newlines=True,
                              env=procutil.tonativeenv(env), **extrakw)
-        stdout, stderr = p.communicate()
+        stdout, stderr = map(util.fromnativeeol, p.communicate())
         stderr = stderr.strip()
         if not failok:
             if p.returncode:
@@ -1000,13 +992,14 @@
         # both. We used to store the working directory one.
         output, err = self._svncommand(['info', '--xml'])
         doc = xml.dom.minidom.parseString(output)
-        entries = doc.getElementsByTagName('entry')
+        entries = doc.getElementsByTagName(r'entry')
         lastrev, rev = '0', '0'
         if entries:
-            rev = str(entries[0].getAttribute('revision')) or '0'
-            commits = entries[0].getElementsByTagName('commit')
+            rev = pycompat.bytestr(entries[0].getAttribute(r'revision')) or '0'
+            commits = entries[0].getElementsByTagName(r'commit')
             if commits:
-                lastrev = str(commits[0].getAttribute('revision')) or '0'
+                lastrev = pycompat.bytestr(
+                    commits[0].getAttribute(r'revision')) or '0'
         return (lastrev, rev)
 
     def _wcrev(self):
@@ -1021,19 +1014,19 @@
         output, err = self._svncommand(['status', '--xml'])
         externals, changes, missing = [], [], []
         doc = xml.dom.minidom.parseString(output)
-        for e in doc.getElementsByTagName('entry'):
-            s = e.getElementsByTagName('wc-status')
+        for e in doc.getElementsByTagName(r'entry'):
+            s = e.getElementsByTagName(r'wc-status')
             if not s:
                 continue
-            item = s[0].getAttribute('item')
-            props = s[0].getAttribute('props')
-            path = e.getAttribute('path')
-            if item == 'external':
+            item = s[0].getAttribute(r'item')
+            props = s[0].getAttribute(r'props')
+            path = e.getAttribute(r'path').encode('utf8')
+            if item == r'external':
                 externals.append(path)
-            elif item == 'missing':
+            elif item == r'missing':
                 missing.append(path)
-            if (item not in ('', 'normal', 'unversioned', 'external')
-                or props not in ('', 'none', 'normal')):
+            if (item not in (r'', r'normal', r'unversioned', r'external')
+                or props not in (r'', r'none', r'normal')):
                 changes.append(path)
         for path in changes:
             for ext in externals:
@@ -1154,14 +1147,14 @@
         output = self._svncommand(['list', '--recursive', '--xml'])[0]
         doc = xml.dom.minidom.parseString(output)
         paths = []
-        for e in doc.getElementsByTagName('entry'):
-            kind = pycompat.bytestr(e.getAttribute('kind'))
+        for e in doc.getElementsByTagName(r'entry'):
+            kind = pycompat.bytestr(e.getAttribute(r'kind'))
             if kind != 'file':
                 continue
-            name = ''.join(c.data for c
-                           in e.getElementsByTagName('name')[0].childNodes
-                           if c.nodeType == c.TEXT_NODE)
-            paths.append(name.encode('utf-8'))
+            name = r''.join(c.data for c
+                            in e.getElementsByTagName(r'name')[0].childNodes
+                            if c.nodeType == c.TEXT_NODE)
+            paths.append(name.encode('utf8'))
         return paths
 
     def filedata(self, name, decode):
@@ -1596,7 +1589,7 @@
             return False
 
     @annotatesubrepoerror
-    def add(self, ui, match, prefix, explicitonly, **opts):
+    def add(self, ui, match, prefix, uipathfn, explicitonly, **opts):
         if self._gitmissing():
             return []
 
@@ -1620,7 +1613,7 @@
             if exact:
                 command.append("-f") #should be added, even if ignored
             if ui.verbose or not exact:
-                ui.status(_('adding %s\n') % match.rel(f))
+                ui.status(_('adding %s\n') % uipathfn(f))
 
             if f in tracked:  # hg prints 'adding' even if already tracked
                 if exact:
@@ -1630,7 +1623,7 @@
                 self._gitcommand(command + [f])
 
         for f in rejected:
-            ui.warn(_("%s already tracked!\n") % match.abs(f))
+            ui.warn(_("%s already tracked!\n") % uipathfn(f))
 
         return rejected
 
@@ -1673,14 +1666,14 @@
         for info in tar:
             if info.isdir():
                 continue
-            if match and not match(info.name):
+            bname = pycompat.fsencode(info.name)
+            if match and not match(bname):
                 continue
             if info.issym():
                 data = info.linkname
             else:
                 data = tar.extractfile(info).read()
-            archiver.addfile(prefix + self._path + '/' + info.name,
-                             info.mode, info.issym(), data)
+            archiver.addfile(prefix + bname, info.mode, info.issym(), data)
             total += 1
             progress.increment()
         progress.complete()
@@ -1783,21 +1776,19 @@
             # for Git, this also implies '-p'
             cmd.append('-U%d' % diffopts.context)
 
-        gitprefix = self.wvfs.reljoin(prefix, self._path)
-
         if diffopts.noprefix:
-            cmd.extend(['--src-prefix=%s/' % gitprefix,
-                        '--dst-prefix=%s/' % gitprefix])
+            cmd.extend(['--src-prefix=%s/' % prefix,
+                        '--dst-prefix=%s/' % prefix])
         else:
-            cmd.extend(['--src-prefix=a/%s/' % gitprefix,
-                        '--dst-prefix=b/%s/' % gitprefix])
+            cmd.extend(['--src-prefix=a/%s/' % prefix,
+                        '--dst-prefix=b/%s/' % prefix])
 
         if diffopts.ignorews:
             cmd.append('--ignore-all-space')
         if diffopts.ignorewsamount:
             cmd.append('--ignore-space-change')
-        if self._gitversion(self._gitcommand(['--version'])) >= (1, 8, 4) \
-                and diffopts.ignoreblanklines:
+        if (self._gitversion(self._gitcommand(['--version'])) >= (1, 8, 4)
+            and diffopts.ignoreblanklines):
             cmd.append('--ignore-blank-lines')
 
         cmd.append(node1)
@@ -1823,15 +1814,15 @@
         if not opts.get(r'no_backup'):
             status = self.status(None)
             names = status.modified
-            origvfs = scmutil.getorigvfs(self.ui, self._subparent)
-            if origvfs is None:
-                origvfs = self.wvfs
             for name in names:
-                bakname = scmutil.origpath(self.ui, self._subparent, name)
+                # backuppath() expects a path relative to the parent repo (the
+                # repo that ui.origbackuppath is relative to)
+                parentname = os.path.join(self._path, name)
+                bakname = scmutil.backuppath(self.ui, self._subparent,
+                                             parentname)
                 self.ui.note(_('saving current version of %s as %s\n') %
-                        (name, bakname))
-                name = self.wvfs.join(name)
-                origvfs.rename(name, bakname)
+                        (name, os.path.relpath(bakname)))
+                util.rename(self.wvfs.join(name), bakname)
 
         if not opts.get(r'dry_run'):
             self.get(substate, overwrite=True)
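
Every subrepo method above now receives a precomputed subprefix and a
uipathfn instead of joining self._path itself. A sketch of how such a path
function composes per level, modeled on the subdiruipathfn helper used in
the scmutil changes earlier (the helper body here is an assumption)::

    import posixpath

    def subdiruipathfn(subpath, uipathfn):
        # wrap the parent's function so printed paths stay relative to
        # wherever the user invoked the command
        return lambda f: uipathfn(posixpath.join(subpath, f))

    rootfn = lambda f: f                      # repo-relative at the top level
    subfn = subdiruipathfn('vendor/lib', rootfn)
    assert subfn('setup.py') == 'vendor/lib/setup.py'
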
--- a/mercurial/subrepoutil.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/subrepoutil.py	Wed Apr 17 13:41:18 2019 -0400
@@ -145,7 +145,6 @@
 
     promptssrc = filemerge.partextras(labels)
     for s, l in sorted(s1.iteritems()):
-        prompts = None
         a = sa.get(s, nullstate)
         ld = l # local state with possible dirty flag for compares
         if wctx.sub(s).dirty():
@@ -218,7 +217,6 @@
                 wctx.sub(s).remove()
 
     for s, r in sorted(s2.items()):
-        prompts = None
         if s in s1:
             continue
         elif s not in sa:
--- a/mercurial/tags.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/tags.py	Wed Apr 17 13:41:18 2019 -0400
@@ -188,8 +188,8 @@
         return alltags
 
     for head in reversed(heads):  # oldest to newest
-        assert head in repo.changelog.nodemap, \
-               "tag cache returned bogus head %s" % short(head)
+        assert head in repo.changelog.nodemap, (
+               "tag cache returned bogus head %s" % short(head))
     fnodes = _filterfnodes(tagfnode, reversed(heads))
     alltags = _tagsfromfnodes(ui, repo, fnodes)
 
@@ -536,7 +536,7 @@
     date: date tuple to use if committing'''
 
     if not local:
-        m = matchmod.exact(repo.root, '', ['.hgtags'])
+        m = matchmod.exact(['.hgtags'])
         if any(repo.status(match=m, unknown=True, ignored=True)):
             raise error.Abort(_('working copy of .hgtags is changed'),
                              hint=_('please commit .hgtags manually'))
@@ -548,7 +548,7 @@
 
 def _tag(repo, names, node, message, local, user, date, extra=None,
          editor=False):
-    if isinstance(names, str):
+    if isinstance(names, bytes):
         names = (names,)
 
     branches = repo.branchmap()
@@ -610,7 +610,7 @@
     if '.hgtags' not in repo.dirstate:
         repo[None].add(['.hgtags'])
 
-    m = matchmod.exact(repo.root, '', ['.hgtags'])
+    m = matchmod.exact(['.hgtags'])
     tagnode = repo.commit(message, user, date, extra=extra, match=m,
                           editor=editor)
 
--- a/mercurial/templatefilters.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/templatefilters.py	Wed Apr 17 13:41:18 2019 -0400
@@ -23,6 +23,7 @@
     util,
 )
 from .utils import (
+    cborutil,
     dateutil,
     stringutil,
 )
@@ -99,6 +100,11 @@
     """
     return os.path.basename(path)
 
+@templatefilter('cbor')
+def cbor(obj):
+    """Any object. Serializes the object to CBOR bytes."""
+    return b''.join(cborutil.streamencode(obj))
+
 @templatefilter('commondir')
 def commondir(filelist):
     """List of text. Treats each list item as file name with /
--- a/mercurial/templatefuncs.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/templatefuncs.py	Wed Apr 17 13:41:18 2019 -0400
@@ -295,6 +295,39 @@
         hint = _("get() expects a dict as first argument")
         raise error.ParseError(bytes(err), hint=hint)
 
+@templatefunc('config(section, name[, default])', requires={'ui'})
+def config(context, mapping, args):
+    """Returns the requested hgrc config option as a string."""
+    fn = context.resource(mapping, 'ui').config
+    return _config(context, mapping, args, fn, evalstring)
+
+@templatefunc('configbool(section, name[, default])', requires={'ui'})
+def configbool(context, mapping, args):
+    """Returns the requested hgrc config option as a boolean."""
+    fn = context.resource(mapping, 'ui').configbool
+    return _config(context, mapping, args, fn, evalboolean)
+
+@templatefunc('configint(section, name[, default])', requires={'ui'})
+def configint(context, mapping, args):
+    """Returns the requested hgrc config option as an integer."""
+    fn = context.resource(mapping, 'ui').configint
+    return _config(context, mapping, args, fn, evalinteger)
+
+def _config(context, mapping, args, configfn, defaultfn):
+    if not (2 <= len(args) <= 3):
+        raise error.ParseError(_("config expects two or three arguments"))
+
+    # The config option can come from any section, though we specifically
+    # reserve the [templateconfig] section for dynamically defining options
+    # for this function without also requiring an extension.
+    section = evalstringliteral(context, mapping, args[0])
+    name = evalstringliteral(context, mapping, args[1])
+    if len(args) == 3:
+        default = defaultfn(context, mapping, args[2])
+        return configfn(section, name, default)
+    else:
+        return configfn(section, name)
+
 @templatefunc('if(expr, then[, else])')
 def if_(context, mapping, args):
     """Conditionally execute based on the result of
--- a/mercurial/templatekw.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/templatekw.py	Wed Apr 17 13:41:18 2019 -0400
@@ -104,38 +104,6 @@
         latesttags[rev] = pdate, pdist + 1, ptag
     return latesttags[rev]
 
-def getrenamedfn(repo, endrev=None):
-    rcache = {}
-    if endrev is None:
-        endrev = len(repo)
-
-    def getrenamed(fn, rev):
-        '''looks up all renames for a file (up to endrev) the first
-        time the file is given. It indexes on the changerev and only
-        parses the manifest if linkrev != changerev.
-        Returns rename info for fn at changerev rev.'''
-        if fn not in rcache:
-            rcache[fn] = {}
-            fl = repo.file(fn)
-            for i in fl:
-                lr = fl.linkrev(i)
-                renamed = fl.renamed(fl.node(i))
-                rcache[fn][lr] = renamed and renamed[0]
-                if lr >= endrev:
-                    break
-        if rev in rcache[fn]:
-            return rcache[fn][rev]
-
-        # If linkrev != rev (i.e. rev not found in rcache) fallback to
-        # filectx logic.
-        try:
-            renamed = repo[rev][fn].renamed()
-            return renamed and renamed[0]
-        except error.LookupError:
-            return None
-
-    return getrenamed
-
 def getlogcolumns():
     """Return a dict of log column labels"""
     _ = pycompat.identity  # temporarily disable gettext
@@ -344,7 +312,7 @@
     copies = context.resource(mapping, 'revcache').get('copies')
     if copies is None:
         if 'getrenamed' not in cache:
-            cache['getrenamed'] = getrenamedfn(repo)
+            cache['getrenamed'] = scmutil.getrenamedfn(repo)
         copies = []
         getrenamed = cache['getrenamed']
         for fn in ctx.files():
@@ -554,6 +522,17 @@
 
     return _hybrid(f, namespaces, makemap, pycompat.identity)
 
+@templatekeyword('negrev', requires={'repo', 'ctx'})
+def shownegrev(context, mapping):
+    """Integer. The repository-local changeset negative revision number,
+    which counts backwards from the tip (the tip has negrev -1)."""
+    ctx = context.resource(mapping, 'ctx')
+    rev = ctx.rev()
+    if rev is None or rev < 0:  # wdir() or nullrev?
+        return None
+    repo = context.resource(mapping, 'repo')
+    return rev - len(repo)
+
 @templatekeyword('node', requires={'ctx'})
 def shownode(context, mapping):
     """String. The changeset identification hash, as a 40 hexadecimal
@@ -796,7 +775,7 @@
     substate = ctx.substate
     if not substate:
         return compatlist(context, mapping, 'subrepo', [])
-    psubstate = ctx.parents()[0].substate or {}
+    psubstate = ctx.p1().substate or {}
     subrepos = []
     for sub in substate:
         if sub not in psubstate or substate[sub] != psubstate[sub]:
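
negrev counts backwards: for a repository of n changesets, revision r maps
to r - n, so the tip is always -1. The arithmetic from shownegrev() as a
two-assert sketch::

    def negrev(rev, repolen):
        if rev is None or rev < 0:   # wdir() or nullrev
            return None
        return rev - repolen

    assert negrev(99, 100) == -1    # the tip of a 100-changeset repo
    assert negrev(0, 100) == -100   # the root
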
--- a/mercurial/thirdparty/attr/_make.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/thirdparty/attr/_make.py	Wed Apr 17 13:41:18 2019 -0400
@@ -56,7 +56,7 @@
 def attr(default=NOTHING, validator=None,
          repr=True, cmp=True, hash=None, init=True,
          convert=None, metadata={}):
-    """
+    r"""
     Create a new attribute on a class.
 
     ..  warning::
@@ -555,7 +555,10 @@
 
     # We cache the generated init methods for the same kinds of attributes.
     sha1 = hashlib.sha1()
-    sha1.update(repr(attrs).encode("utf-8"))
+    r = repr(attrs)
+    if not isinstance(r, bytes):
+        r = r.encode('utf-8')
+    sha1.update(r)
     unique_filename = "<attrs generated init {0}>".format(
         sha1.hexdigest()
     )
--- a/mercurial/thirdparty/attr/filters.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/thirdparty/attr/filters.py	Wed Apr 17 13:41:18 2019 -0400
@@ -19,7 +19,7 @@
 
 
 def include(*what):
-    """
+    r"""
     Whitelist *what*.
 
     :param what: What to whitelist.
@@ -36,7 +36,7 @@
 
 
 def exclude(*what):
-    """
+    r"""
     Blacklist *what*.
 
     :param what: What to blacklist.
--- a/mercurial/transaction.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/transaction.py	Wed Apr 17 13:41:18 2019 -0400
@@ -89,7 +89,7 @@
                 except (IOError, OSError) as inst:
                     if inst.errno != errno.ENOENT:
                         raise
-        except (IOError, OSError, error.Abort) as inst:
+        except (IOError, OSError, error.Abort):
             if not c:
                 raise
 
@@ -101,7 +101,7 @@
         for f in backupfiles:
             if opener.exists(f):
                 opener.unlink(f)
-    except (IOError, OSError, error.Abort) as inst:
+    except (IOError, OSError, error.Abort):
        # only the pure backup file remains; it is safe to ignore any error
         pass
 
--- a/mercurial/ui.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/ui.py	Wed Apr 17 13:41:18 2019 -0400
@@ -58,12 +58,12 @@
 statuscopies = yes
 # Prefer curses UIs when available. Revert to plain-text with `text`.
 interface = curses
+# Make compatible commands emit cwd-relative paths by default.
+relative-paths = yes
 
 [commands]
 # Grep working directory by default.
 grep.all-files = True
-# Make `hg status` emit cwd-relative paths by default.
-status.relative = yes
 # Refuse to perform an `hg update` that would cause a file content merge
 update.check = noconflict
 # Show conflicts information in `hg status`
@@ -97,10 +97,13 @@
 # paginate = never
 
 [extensions]
-# uncomment these lines to enable some popular extensions
+# uncomment the lines below to enable some popular extensions
 # (see 'hg help extensions' for more info)
 #
-# churn =
+# histedit =
+# rebase =
+# shelve =
+# uncommit =
 """,
 
     'cloned':
@@ -149,7 +152,7 @@
 # paginate = never
 
 [extensions]
-# uncomment these lines to enable some popular extensions
+# uncomment the lines below to enable some popular extensions
 # (see 'hg help extensions' for more info)
 #
 # blackbox =
@@ -344,8 +347,8 @@
         try:
             yield
         finally:
-            self._blockedtimes[key + '_blocked'] += \
-                (util.timer() - starttime) * 1000
+            self._blockedtimes[key + '_blocked'] += (
+                (util.timer() - starttime) * 1000)
 
     @contextlib.contextmanager
     def uninterruptible(self):
@@ -566,8 +569,6 @@
             candidate = self._data(untrusted).get(s, n, None)
             if candidate is not None:
                 value = candidate
-                section = s
-                name = n
                 break
 
         if self.debugflag and not untrusted and self._reportuntrusted:
@@ -1029,8 +1030,8 @@
         except IOError as err:
             raise error.StdioError(err)
         finally:
-            self._blockedtimes['stdio_blocked'] += \
-                (util.timer() - starttime) * 1000
+            self._blockedtimes['stdio_blocked'] += (
+                (util.timer() - starttime) * 1000)
 
     def write_err(self, *args, **opts):
         self._write(self._ferr, *args, **opts)
@@ -1080,8 +1081,8 @@
                 return
             raise error.StdioError(err)
         finally:
-            self._blockedtimes['stdio_blocked'] += \
-                (util.timer() - starttime) * 1000
+            self._blockedtimes['stdio_blocked'] += (
+                (util.timer() - starttime) * 1000)
 
     def _writemsg(self, dest, *args, **opts):
         _writemsgwith(self._write, dest, *args, **opts)
@@ -1105,8 +1106,8 @@
                     if err.errno not in (errno.EPIPE, errno.EIO, errno.EBADF):
                         raise error.StdioError(err)
         finally:
-            self._blockedtimes['stdio_blocked'] += \
-                (util.timer() - starttime) * 1000
+            self._blockedtimes['stdio_blocked'] += (
+                (util.timer() - starttime) * 1000)
 
     def _isatty(self, fh):
         if self.configbool('ui', 'nontty'):
@@ -1429,7 +1430,7 @@
 
         return i
 
-    def _readline(self):
+    def _readline(self, prompt=' ', promptopts=None):
         # Replacing stdin/stdout temporarily is a hard problem on Python 3
         # because they have to be text streams with *no buffering*. Instead,
         # we use rawinput() only if call_readline() will be invoked by
@@ -1448,17 +1449,27 @@
             except Exception:
                 usereadline = False
 
+        if self._colormode == 'win32' or not usereadline:
+            if not promptopts:
+                promptopts = {}
+            self._writemsgnobuf(self._fmsgout, prompt, type='prompt',
+                                **promptopts)
+            self.flush()
+            prompt = ' '
+        else:
+            prompt = self.label(prompt, 'ui.prompt') + ' '
+
         # prompt ' ' must exist; otherwise readline may delete entire line
         # - http://bugs.python.org/issue12833
         with self.timeblockedsection('stdio'):
             if usereadline:
-                line = encoding.strtolocal(pycompat.rawinput(r' '))
+                line = encoding.strtolocal(pycompat.rawinput(prompt))
                 # When stdin is in binary mode on Windows, it can cause
                 # raw_input() to emit an extra trailing carriage return
                 if pycompat.oslinesep == b'\r\n' and line.endswith(b'\r'):
                     line = line[:-1]
             else:
-                self._fout.write(b' ')
+                self._fout.write(pycompat.bytestr(prompt))
                 self._fout.flush()
                 line = self._fin.readline()
                 if not line:
@@ -1480,10 +1491,8 @@
             self._writemsg(self._fmsgout, default or '', "\n",
                            type='promptecho')
             return default
-        self._writemsgnobuf(self._fmsgout, msg, type='prompt', **opts)
-        self.flush()
         try:
-            r = self._readline()
+            r = self._readline(prompt=msg, promptopts=opts)
             if not r:
                 r = default
             if self.configbool('ui', 'promptecho'):
@@ -1555,7 +1564,7 @@
                         raise EOFError
                     return l.rstrip('\n')
                 else:
-                    return getpass.getpass('')
+                    return getpass.getpass(r'')
         except EOFError:
             raise error.ResponseExpected()
 
@@ -2053,7 +2062,11 @@
         This is its own function so that extensions can change the definition of
         'valid' in this case (like when pulling from a git repo into a hg
         one)."""
-        return os.path.isdir(os.path.join(path, '.hg'))
+        try:
+            return os.path.isdir(os.path.join(path, '.hg'))
+        # Python 2 may raise TypeError. Python 3, ValueError.
+        except (TypeError, ValueError):
+            return False
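# Sketch (not part of the patch): why the guard above is needed. On
# Python 3, an embedded NUL byte makes os.stat() raise ValueError
# (os.path.isdir() only began swallowing that itself in Python 3.8):
import os
try:
    os.stat('/tmp/\x00bad')
except ValueError:
    print('invalid path')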
 
     @property
     def suboptions(self):
--- a/mercurial/upgrade.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/upgrade.py	Wed Apr 17 13:41:18 2019 -0400
@@ -24,6 +24,10 @@
     vfs as vfsmod,
 )
 
+from .utils import (
+    compression,
+)
+
 def requiredsourcerequirements(repo):
     """Obtain requirements required to be present to upgrade a repo.
 
@@ -61,9 +65,16 @@
     the dropped requirement must appear in the returned set for the upgrade
     to be allowed.
     """
-    return {
+    supported = {
         localrepo.SPARSEREVLOG_REQUIREMENT,
     }
+    for name in compression.compengines:
+        engine = compression.compengines[name]
+        if engine.available() and engine.revlogheader():
+            supported.add(b'exp-compression-%s' % name)
+            if engine.name() == 'zstd':
+                supported.add(b'revlog-compression-zstd')
+    return supported
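# Illustrative sketch (not part of the patch) of what the loop above
# computes. The stub table below is a hypothetical stand-in for
# compression.compengines; only engines that are available and declare a
# revlog header contribute requirement names.
def _supported_from_engines(engines):
    supported = set()
    for name, (available, revlogheader) in engines.items():
        if available and revlogheader:
            supported.add(b'exp-compression-%s' % name)
            if name == b'zstd':
                supported.add(b'revlog-compression-zstd')
    return supported

print(sorted(_supported_from_engines({
    b'zlib': (True, b'x'),     # revlog header 'x'
    b'zstd': (True, b'\x28'),  # revlog header '\x28', needs the zstd module
    b'bz2': (True, None),      # no revlog support
})))
# -> [b'exp-compression-zlib', b'exp-compression-zstd',
#     b'revlog-compression-zstd']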
 
 def supporteddestrequirements(repo):
     """Obtain requirements that upgrade supports in the destination.
@@ -73,7 +84,7 @@
 
     Extensions should monkeypatch this to add their custom requirements.
     """
-    return {
+    supported = {
         'dotencode',
         'fncache',
         'generaldelta',
@@ -81,6 +92,13 @@
         'store',
         localrepo.SPARSEREVLOG_REQUIREMENT,
     }
+    for name in compression.compengines:
+        engine = compression.compengines[name]
+        if engine.available() and engine.revlogheader():
+            supported.add(b'exp-compression-%s' % name)
+            if engine.name() == 'zstd':
+                supported.add(b'revlog-compression-zstd')
+    return supported
 
 def allowednewrequirements(repo):
     """Obtain requirements that can be added to a repository during upgrade.
@@ -92,12 +110,19 @@
     bad additions because the whitelist approach is safer and will prevent
     future, unknown requirements from accidentally being added.
     """
-    return {
+    supported = {
         'dotencode',
         'fncache',
         'generaldelta',
         localrepo.SPARSEREVLOG_REQUIREMENT,
     }
+    for name in compression.compengines:
+        engine = compression.compengines[name]
+        if engine.available() and engine.revlogheader():
+            supported.add(b'exp-compression-%s' % name)
+            if engine.name() == 'zstd':
+                supported.add(b'revlog-compression-zstd')
+    return supported
 
 def preservedrequirements(repo):
     return set()
@@ -325,14 +350,53 @@
 
     @classmethod
     def fromrepo(cls, repo):
+        # we allow multiple compression engine requirements to co-exist
+        # because, strictly speaking, revlogs seem to support mixed
+        # compression styles.
+        #
+        # The compression used for new entries will be "the last one"
+        compression = 'zlib'
         for req in repo.requirements:
-            if req.startswith('exp-compression-'):
-                return req.split('-', 2)[2]
-        return 'zlib'
+            prefix = req.startswith
+            if prefix('revlog-compression-') or prefix('exp-compression-'):
+                compression = req.split('-', 2)[2]
+        return compression
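# Illustrative sketch (not part of the patch): "the last one" means the
# reported engine follows iteration order. With a hypothetical repo
# carrying both spellings (note repo.requirements is a set, so the real
# iteration order is unspecified):
reqs = [b'exp-compression-zstd', b'revlog-compression-zstd']
compression = b'zlib'
for req in reqs:
    prefix = req.startswith
    if prefix(b'revlog-compression-') or prefix(b'exp-compression-'):
        compression = req.split(b'-', 2)[2]
print(compression)  # -> b'zstd'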
 
     @classmethod
     def fromconfig(cls, repo):
-        return repo.ui.config('experimental', 'format.compression')
+        return repo.ui.config('format', 'revlog-compression')
+
+@registerformatvariant
+class compressionlevel(formatvariant):
+    name = 'compression-level'
+    default = 'default'
+
+    description = _('compression level')
+
+    upgrademessage = _('revlog content will be recompressed')
+
+    @classmethod
+    def fromrepo(cls, repo):
+        comp = compressionengine.fromrepo(repo)
+        level = None
+        if comp == 'zlib':
+            level = repo.ui.configint('storage', 'revlog.zlib.level')
+        elif comp == 'zstd':
+            level = repo.ui.configint('storage', 'revlog.zstd.level')
+        if level is None:
+            return 'default'
+        return b'%d' % level  # bytes(int) would yield NUL bytes on Python 3
+
+    @classmethod
+    def fromconfig(cls, repo):
+        comp = compressionengine.fromconfig(repo)
+        level = None
+        if comp == 'zlib':
+            level = repo.ui.configint('storage', 'revlog.zlib.level')
+        elif comp == 'zstd':
+            level = repo.ui.configint('storage', 'revlog.zstd.level')
+        if level is None:
+            return 'default'
+        return b'%d' % level  # bytes(int) would yield NUL bytes on Python 3
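# For reference (a sketch, not part of the patch): the configint() calls
# above read per-engine levels from settings such as:
#
#   [storage]
#   revlog.zlib.level = 9
#   revlog.zstd.level = 6
#
# When a key is unset, configint() returns None and the variant reports
# 'default'.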
 
 def finddeficiencies(repo):
     """returns a list of deficiencies that the repo suffer from"""
--- a/mercurial/url.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/url.py	Wed Apr 17 13:41:18 2019 -0400
@@ -58,11 +58,14 @@
         return self.passwddb.add_password(realm, uri, user, passwd)
 
     def find_user_password(self, realm, authuri):
+        assert isinstance(realm, (type(None), str))
+        assert isinstance(authuri, str)
         authinfo = self.passwddb.find_user_password(realm, authuri)
         user, passwd = authinfo
+        user, passwd = pycompat.bytesurl(user), pycompat.bytesurl(passwd)
         if user and passwd:
             self._writedebug(user, passwd)
-            return (user, passwd)
+            return (pycompat.strurl(user), pycompat.strurl(passwd))
 
         if not user or not passwd:
             res = httpconnectionmod.readauthforuri(self.ui, authuri, user)
@@ -90,7 +93,7 @@
 
         self.passwddb.add_password(realm, authuri, user, passwd)
         self._writedebug(user, passwd)
-        return (user, passwd)
+        return (pycompat.strurl(user), pycompat.strurl(passwd))
 
     def _writedebug(self, user, passwd):
         msg = _('http auth: user %s, password %s\n')
@@ -128,9 +131,11 @@
             else:
                 self.no_list = no_list
 
-            proxyurl = bytes(proxy)
-            proxies = {'http': proxyurl, 'https': proxyurl}
-            ui.debug('proxying through %s\n' % util.hidepassword(proxyurl))
+            # Keys and values need to be str because the standard library
+            # expects them to be.
+            proxyurl = str(proxy)
+            proxies = {r'http': proxyurl, r'https': proxyurl}
+            ui.debug('proxying through %s\n' % util.hidepassword(bytes(proxy)))
         else:
             proxies = {}
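# Sketch (not part of the patch): on Python 3 the stdlib ProxyHandler
# machinery compares and concatenates native str, so a bytes proxy URL
# must be coerced first, e.g. (hypothetical value):
proxy_bytes = b'http://proxy.example.com:3128'
proxyurl = proxy_bytes.decode('ascii')
proxies = {'http': proxyurl, 'https': proxyurl}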
 
@@ -138,7 +143,7 @@
         self.ui = ui
 
     def proxy_open(self, req, proxy, type_):
-        host = urllibcompat.gethost(req).split(':')[0]
+        host = pycompat.bytesurl(urllibcompat.gethost(req)).split(':')[0]
         for e in self.no_list:
             if host == e:
                 return None
@@ -176,20 +181,20 @@
             return proxyres
         return keepalive.HTTPConnection.getresponse(self)
 
-# general transaction handler to support different ways to handle
-# HTTPS proxying before and after Python 2.6.3.
+# Large parts of this function date from before Python 2.6 and could
+# potentially be removed.
 def _generic_start_transaction(handler, h, req):
-    tunnel_host = getattr(req, '_tunnel_host', None)
+    tunnel_host = req._tunnel_host
     if tunnel_host:
-        if tunnel_host[:7] not in ['http://', 'https:/']:
-            tunnel_host = 'https://' + tunnel_host
+        if tunnel_host[:7] not in [r'http://', r'https:/']:
+            tunnel_host = r'https://' + tunnel_host
         new_tunnel = True
     else:
         tunnel_host = urllibcompat.getselector(req)
         new_tunnel = False
 
     if new_tunnel or tunnel_host == urllibcompat.getfullurl(req): # has proxy
-        u = util.url(tunnel_host)
+        u = util.url(pycompat.bytesurl(tunnel_host))
         if new_tunnel or u.scheme == 'https': # only use CONNECT for HTTPS
             h.realhostport = ':'.join([u.host, (u.port or '443')])
             h.headers = req.headers.copy()
@@ -202,7 +207,7 @@
 def _generic_proxytunnel(self):
     proxyheaders = dict(
             [(x, self.headers[x]) for x in self.headers
-             if x.lower().startswith('proxy-')])
+             if x.lower().startswith(r'proxy-')])
     self.send('CONNECT %s HTTP/1.0\r\n' % self.realhostport)
     for header in proxyheaders.iteritems():
         self.send('%s: %s\r\n' % header)
@@ -211,9 +216,14 @@
     # majority of the following code is duplicated from
     # httplib.HTTPConnection as there are no adequate places to
     # override functions to provide the needed functionality
+    # strict was removed in Python 3.4.
+    kwargs = {}
+    if not pycompat.ispy3:
+        kwargs['strict'] = self.strict
+
     res = self.response_class(self.sock,
-                              strict=self.strict,
-                              method=self._method)
+                              method=self._method,
+                              **kwargs)
 
     while True:
         version, status, reason = res._read_status()
@@ -591,7 +601,7 @@
 
     return opener
 
-def open(ui, url_, data=None):
+def open(ui, url_, data=None, sendaccept=True):
     u = util.url(url_)
     if u.scheme:
         u.scheme = u.scheme.lower()
@@ -600,7 +610,9 @@
         path = util.normpath(os.path.abspath(url_))
         url_ = 'file://' + pycompat.bytesurl(urlreq.pathname2url(path))
         authinfo = None
-    return opener(ui, authinfo).open(pycompat.strurl(url_), data)
+    return opener(ui, authinfo,
+                  sendaccept=sendaccept).open(pycompat.strurl(url_),
+                                              data)
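# Usage sketch (not part of the patch): callers fetching a raw URL can now
# opt out of the Accept header advertisement, e.g. (hypothetical caller):
#
#   fh = open(ui, b'https://example.com/bundle.hg', sendaccept=False)
#   data = fh.read()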
 
 def wrapresponse(resp):
     """Wrap a response object with common error handlers.
--- a/mercurial/util.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/util.py	Wed Apr 17 13:41:18 2019 -0400
@@ -16,7 +16,6 @@
 from __future__ import absolute_import, print_function
 
 import abc
-import bz2
 import collections
 import contextlib
 import errno
@@ -34,7 +33,6 @@
 import time
 import traceback
 import warnings
-import zlib
 
 from .thirdparty import (
     attr,
@@ -50,6 +48,7 @@
     urllibcompat,
 )
 from .utils import (
+    compression,
     procutil,
     stringutil,
 )
@@ -127,6 +126,11 @@
 unlink = platform.unlink
 username = platform.username
 
+# small compat layer
+compengines = compression.compengines
+SERVERROLE = compression.SERVERROLE
+CLIENTROLE = compression.CLIENTROLE
+
 try:
     recvfds = osutil.recvfds
 except AttributeError:
@@ -789,6 +793,12 @@
                                                       res))
 
         data = dest[0:res] if res is not None else b''
+
+        # _writedata() uses the "in" operator and is confused by memoryview
+        # because characters are ints on Python 3.
+        if isinstance(data, memoryview):
+            data = data.tobytes()
+
         self._writedata(data)
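# Demonstration (not part of the patch) of the pitfall noted above, on
# Python 3:
mv = memoryview(b'abc')
print(b'a' in mv)            # False: iterating a memoryview yields ints
print(97 in mv)              # True: the int value of 'a'
print(b'a' in mv.tobytes())  # True: bytes membership works as expected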
 
     def write(self, res, data):
@@ -1210,7 +1220,7 @@
     Holds a reference to nodes on either side as well as a key-value
     pair for the dictionary entry.
     """
-    __slots__ = (u'next', u'prev', u'key', u'value', u'cost')
+    __slots__ = (r'next', r'prev', r'key', r'value', r'cost')
 
     def __init__(self):
         self.next = None
@@ -3200,714 +3210,6 @@
         yield path[:pos]
         pos = path.rfind('/', 0, pos)
 
-# compression code
-
-SERVERROLE = 'server'
-CLIENTROLE = 'client'
-
-compewireprotosupport = collections.namedtuple(u'compenginewireprotosupport',
-                                               (u'name', u'serverpriority',
-                                                u'clientpriority'))
-
-class compressormanager(object):
-    """Holds registrations of various compression engines.
-
-    This class essentially abstracts the differences between compression
-    engines to allow new compression formats to be added easily, possibly from
-    extensions.
-
-    Compressors are registered against the global instance by calling its
-    ``register()`` method.
-    """
-    def __init__(self):
-        self._engines = {}
-        # Bundle spec human name to engine name.
-        self._bundlenames = {}
-        # Internal bundle identifier to engine name.
-        self._bundletypes = {}
-        # Revlog header to engine name.
-        self._revlogheaders = {}
-        # Wire proto identifier to engine name.
-        self._wiretypes = {}
-
-    def __getitem__(self, key):
-        return self._engines[key]
-
-    def __contains__(self, key):
-        return key in self._engines
-
-    def __iter__(self):
-        return iter(self._engines.keys())
-
-    def register(self, engine):
-        """Register a compression engine with the manager.
-
-        The argument must be a ``compressionengine`` instance.
-        """
-        if not isinstance(engine, compressionengine):
-            raise ValueError(_('argument must be a compressionengine'))
-
-        name = engine.name()
-
-        if name in self._engines:
-            raise error.Abort(_('compression engine %s already registered') %
-                              name)
-
-        bundleinfo = engine.bundletype()
-        if bundleinfo:
-            bundlename, bundletype = bundleinfo
-
-            if bundlename in self._bundlenames:
-                raise error.Abort(_('bundle name %s already registered') %
-                                  bundlename)
-            if bundletype in self._bundletypes:
-                raise error.Abort(_('bundle type %s already registered by %s') %
-                                  (bundletype, self._bundletypes[bundletype]))
-
-            # No external facing name declared.
-            if bundlename:
-                self._bundlenames[bundlename] = name
-
-            self._bundletypes[bundletype] = name
-
-        wiresupport = engine.wireprotosupport()
-        if wiresupport:
-            wiretype = wiresupport.name
-            if wiretype in self._wiretypes:
-                raise error.Abort(_('wire protocol compression %s already '
-                                    'registered by %s') %
-                                  (wiretype, self._wiretypes[wiretype]))
-
-            self._wiretypes[wiretype] = name
-
-        revlogheader = engine.revlogheader()
-        if revlogheader and revlogheader in self._revlogheaders:
-            raise error.Abort(_('revlog header %s already registered by %s') %
-                              (revlogheader, self._revlogheaders[revlogheader]))
-
-        if revlogheader:
-            self._revlogheaders[revlogheader] = name
-
-        self._engines[name] = engine
-
-    @property
-    def supportedbundlenames(self):
-        return set(self._bundlenames.keys())
-
-    @property
-    def supportedbundletypes(self):
-        return set(self._bundletypes.keys())
-
-    def forbundlename(self, bundlename):
-        """Obtain a compression engine registered to a bundle name.
-
-        Will raise KeyError if the bundle type isn't registered.
-
-        Will abort if the engine is known but not available.
-        """
-        engine = self._engines[self._bundlenames[bundlename]]
-        if not engine.available():
-            raise error.Abort(_('compression engine %s could not be loaded') %
-                              engine.name())
-        return engine
-
-    def forbundletype(self, bundletype):
-        """Obtain a compression engine registered to a bundle type.
-
-        Will raise KeyError if the bundle type isn't registered.
-
-        Will abort if the engine is known but not available.
-        """
-        engine = self._engines[self._bundletypes[bundletype]]
-        if not engine.available():
-            raise error.Abort(_('compression engine %s could not be loaded') %
-                              engine.name())
-        return engine
-
-    def supportedwireengines(self, role, onlyavailable=True):
-        """Obtain compression engines that support the wire protocol.
-
-        Returns a list of engines in prioritized order, most desired first.
-
-        If ``onlyavailable`` is set, filter out engines that can't be
-        loaded.
-        """
-        assert role in (SERVERROLE, CLIENTROLE)
-
-        attr = 'serverpriority' if role == SERVERROLE else 'clientpriority'
-
-        engines = [self._engines[e] for e in self._wiretypes.values()]
-        if onlyavailable:
-            engines = [e for e in engines if e.available()]
-
-        def getkey(e):
-            # Sort first by priority, highest first. In case of tie, sort
-            # alphabetically. This is arbitrary, but ensures output is
-            # stable.
-            w = e.wireprotosupport()
-            return -1 * getattr(w, attr), w.name
-
-        return list(sorted(engines, key=getkey))
-
-    def forwiretype(self, wiretype):
-        engine = self._engines[self._wiretypes[wiretype]]
-        if not engine.available():
-            raise error.Abort(_('compression engine %s could not be loaded') %
-                              engine.name())
-        return engine
-
-    def forrevlogheader(self, header):
-        """Obtain a compression engine registered to a revlog header.
-
-        Will raise KeyError if the revlog header value isn't registered.
-        """
-        return self._engines[self._revlogheaders[header]]
-
-compengines = compressormanager()
-
-class compressionengine(object):
-    """Base class for compression engines.
-
-    Compression engines must implement the interface defined by this class.
-    """
-    def name(self):
-        """Returns the name of the compression engine.
-
-        This is the key the engine is registered under.
-
-        This method must be implemented.
-        """
-        raise NotImplementedError()
-
-    def available(self):
-        """Whether the compression engine is available.
-
-        The intent of this method is to allow optional compression engines
-        that may not be available in all installations (such as engines relying
-        on C extensions that may not be present).
-        """
-        return True
-
-    def bundletype(self):
-        """Describes bundle identifiers for this engine.
-
-        If this compression engine isn't supported for bundles, returns None.
-
-        If this engine can be used for bundles, returns a 2-tuple of strings of
-        the user-facing "bundle spec" compression name and an internal
-        identifier used to denote the compression format within bundles. To
-        exclude the name from external usage, set the first element to ``None``.
-
-        If bundle compression is supported, the class must also implement
-        ``compressstream`` and `decompressorreader``.
-
-        The docstring of this method is used in the help system to tell users
-        about this engine.
-        """
-        return None
-
-    def wireprotosupport(self):
-        """Declare support for this compression format on the wire protocol.
-
-        If this compression engine isn't supported for compressing wire
-        protocol payloads, returns None.
-
-        Otherwise, returns ``compenginewireprotosupport`` with the following
-        fields:
-
-        * String format identifier
-        * Integer priority for the server
-        * Integer priority for the client
-
-        The integer priorities are used to order the advertisement of format
-        support by server and client. The highest integer is advertised
-        first. Integers with non-positive values aren't advertised.
-
-        The priority values are somewhat arbitrary and only used for default
-        ordering. The relative order can be changed via config options.
-
-        If wire protocol compression is supported, the class must also implement
-        ``compressstream`` and ``decompressorreader``.
-        """
-        return None
-
-    def revlogheader(self):
-        """Header added to revlog chunks that identifies this engine.
-
-        If this engine can be used to compress revlogs, this method should
-        return the bytes used to identify chunks compressed with this engine.
-        Else, the method should return ``None`` to indicate it does not
-        participate in revlog compression.
-        """
-        return None
-
-    def compressstream(self, it, opts=None):
-        """Compress an iterator of chunks.
-
-        The method receives an iterator (ideally a generator) of chunks of
-        bytes to be compressed. It returns an iterator (ideally a generator)
-        of bytes of chunks representing the compressed output.
-
-        Optionally accepts an argument defining how to perform compression.
-        Each engine treats this argument differently.
-        """
-        raise NotImplementedError()
-
-    def decompressorreader(self, fh):
-        """Perform decompression on a file object.
-
-        Argument is an object with a ``read(size)`` method that returns
-        compressed data. Return value is an object with a ``read(size)`` that
-        returns uncompressed data.
-        """
-        raise NotImplementedError()
-
-    def revlogcompressor(self, opts=None):
-        """Obtain an object that can be used to compress revlog entries.
-
-        The object has a ``compress(data)`` method that compresses binary
-        data. This method returns compressed binary data or ``None`` if
-        the data could not be compressed (too small, not compressible, etc).
-        The returned data should have a header uniquely identifying this
-        compression format so decompression can be routed to this engine.
-        This header should be identified by the ``revlogheader()`` return
-        value.
-
-        The object has a ``decompress(data)`` method that decompresses
-        data. The method will only be called if ``data`` begins with
-        ``revlogheader()``. The method should return the raw, uncompressed
-        data or raise a ``StorageError``.
-
-        The object is reusable but is not thread safe.
-        """
-        raise NotImplementedError()
-
-class _CompressedStreamReader(object):
-    def __init__(self, fh):
-        if safehasattr(fh, 'unbufferedread'):
-            self._reader = fh.unbufferedread
-        else:
-            self._reader = fh.read
-        self._pending = []
-        self._pos = 0
-        self._eof = False
-
-    def _decompress(self, chunk):
-        raise NotImplementedError()
-
-    def read(self, l):
-        buf = []
-        while True:
-            while self._pending:
-                if len(self._pending[0]) > l + self._pos:
-                    newbuf = self._pending[0]
-                    buf.append(newbuf[self._pos:self._pos + l])
-                    self._pos += l
-                    return ''.join(buf)
-
-                newbuf = self._pending.pop(0)
-                if self._pos:
-                    buf.append(newbuf[self._pos:])
-                    l -= len(newbuf) - self._pos
-                else:
-                    buf.append(newbuf)
-                    l -= len(newbuf)
-                self._pos = 0
-
-            if self._eof:
-                return ''.join(buf)
-            chunk = self._reader(65536)
-            self._decompress(chunk)
-            if not chunk and not self._pending and not self._eof:
-                # No progress and no new data, bail out
-                return ''.join(buf)
-
-class _GzipCompressedStreamReader(_CompressedStreamReader):
-    def __init__(self, fh):
-        super(_GzipCompressedStreamReader, self).__init__(fh)
-        self._decompobj = zlib.decompressobj()
-    def _decompress(self, chunk):
-        newbuf = self._decompobj.decompress(chunk)
-        if newbuf:
-            self._pending.append(newbuf)
-        d = self._decompobj.copy()
-        try:
-            d.decompress('x')
-            d.flush()
-            if d.unused_data == 'x':
-                self._eof = True
-        except zlib.error:
-            pass
-
-class _BZ2CompressedStreamReader(_CompressedStreamReader):
-    def __init__(self, fh):
-        super(_BZ2CompressedStreamReader, self).__init__(fh)
-        self._decompobj = bz2.BZ2Decompressor()
-    def _decompress(self, chunk):
-        newbuf = self._decompobj.decompress(chunk)
-        if newbuf:
-            self._pending.append(newbuf)
-        try:
-            while True:
-                newbuf = self._decompobj.decompress('')
-                if newbuf:
-                    self._pending.append(newbuf)
-                else:
-                    break
-        except EOFError:
-            self._eof = True
-
-class _TruncatedBZ2CompressedStreamReader(_BZ2CompressedStreamReader):
-    def __init__(self, fh):
-        super(_TruncatedBZ2CompressedStreamReader, self).__init__(fh)
-        newbuf = self._decompobj.decompress('BZ')
-        if newbuf:
-            self._pending.append(newbuf)
-
-class _ZstdCompressedStreamReader(_CompressedStreamReader):
-    def __init__(self, fh, zstd):
-        super(_ZstdCompressedStreamReader, self).__init__(fh)
-        self._zstd = zstd
-        self._decompobj = zstd.ZstdDecompressor().decompressobj()
-    def _decompress(self, chunk):
-        newbuf = self._decompobj.decompress(chunk)
-        if newbuf:
-            self._pending.append(newbuf)
-        try:
-            while True:
-                newbuf = self._decompobj.decompress('')
-                if newbuf:
-                    self._pending.append(newbuf)
-                else:
-                    break
-        except self._zstd.ZstdError:
-            self._eof = True
-
-class _zlibengine(compressionengine):
-    def name(self):
-        return 'zlib'
-
-    def bundletype(self):
-        """zlib compression using the DEFLATE algorithm.
-
-        All Mercurial clients should support this format. The compression
-        algorithm strikes a reasonable balance between compression ratio
-        and size.
-        """
-        return 'gzip', 'GZ'
-
-    def wireprotosupport(self):
-        return compewireprotosupport('zlib', 20, 20)
-
-    def revlogheader(self):
-        return 'x'
-
-    def compressstream(self, it, opts=None):
-        opts = opts or {}
-
-        z = zlib.compressobj(opts.get('level', -1))
-        for chunk in it:
-            data = z.compress(chunk)
-            # Not all calls to compress emit data. It is cheaper to inspect
-            # here than to feed empty chunks through generator.
-            if data:
-                yield data
-
-        yield z.flush()
-
-    def decompressorreader(self, fh):
-        return _GzipCompressedStreamReader(fh)
-
-    class zlibrevlogcompressor(object):
-        def compress(self, data):
-            insize = len(data)
-            # Caller handles empty input case.
-            assert insize > 0
-
-            if insize < 44:
-                return None
-
-            elif insize <= 1000000:
-                compressed = zlib.compress(data)
-                if len(compressed) < insize:
-                    return compressed
-                return None
-
-            # zlib makes an internal copy of the input buffer, doubling
-            # memory usage for large inputs. So do streaming compression
-            # on large inputs.
-            else:
-                z = zlib.compressobj()
-                parts = []
-                pos = 0
-                while pos < insize:
-                    pos2 = pos + 2**20
-                    parts.append(z.compress(data[pos:pos2]))
-                    pos = pos2
-                parts.append(z.flush())
-
-                if sum(map(len, parts)) < insize:
-                    return ''.join(parts)
-                return None
-
-        def decompress(self, data):
-            try:
-                return zlib.decompress(data)
-            except zlib.error as e:
-                raise error.StorageError(_('revlog decompress error: %s') %
-                                         stringutil.forcebytestr(e))
-
-    def revlogcompressor(self, opts=None):
-        return self.zlibrevlogcompressor()
-
-compengines.register(_zlibengine())
-
-class _bz2engine(compressionengine):
-    def name(self):
-        return 'bz2'
-
-    def bundletype(self):
-        """An algorithm that produces smaller bundles than ``gzip``.
-
-        All Mercurial clients should support this format.
-
-        This engine will likely produce smaller bundles than ``gzip`` but
-        will be significantly slower, both during compression and
-        decompression.
-
-        If available, the ``zstd`` engine can yield similar or better
-        compression at much higher speeds.
-        """
-        return 'bzip2', 'BZ'
-
-    # We declare a protocol name but don't advertise by default because
-    # it is slow.
-    def wireprotosupport(self):
-        return compewireprotosupport('bzip2', 0, 0)
-
-    def compressstream(self, it, opts=None):
-        opts = opts or {}
-        z = bz2.BZ2Compressor(opts.get('level', 9))
-        for chunk in it:
-            data = z.compress(chunk)
-            if data:
-                yield data
-
-        yield z.flush()
-
-    def decompressorreader(self, fh):
-        return _BZ2CompressedStreamReader(fh)
-
-compengines.register(_bz2engine())
-
-class _truncatedbz2engine(compressionengine):
-    def name(self):
-        return 'bz2truncated'
-
-    def bundletype(self):
-        return None, '_truncatedBZ'
-
-    # We don't implement compressstream because it is hackily handled elsewhere.
-
-    def decompressorreader(self, fh):
-        return _TruncatedBZ2CompressedStreamReader(fh)
-
-compengines.register(_truncatedbz2engine())
-
-class _noopengine(compressionengine):
-    def name(self):
-        return 'none'
-
-    def bundletype(self):
-        """No compression is performed.
-
-        Use this compression engine to explicitly disable compression.
-        """
-        return 'none', 'UN'
-
-    # Clients always support uncompressed payloads. Servers don't because
-    # unless you are on a fast network, uncompressed payloads can easily
-    # saturate your network pipe.
-    def wireprotosupport(self):
-        return compewireprotosupport('none', 0, 10)
-
-    # We don't implement revlogheader because it is handled specially
-    # in the revlog class.
-
-    def compressstream(self, it, opts=None):
-        return it
-
-    def decompressorreader(self, fh):
-        return fh
-
-    class nooprevlogcompressor(object):
-        def compress(self, data):
-            return None
-
-    def revlogcompressor(self, opts=None):
-        return self.nooprevlogcompressor()
-
-compengines.register(_noopengine())
-
-class _zstdengine(compressionengine):
-    def name(self):
-        return 'zstd'
-
-    @propertycache
-    def _module(self):
-        # Not all installs have the zstd module available. So defer importing
-        # until first access.
-        try:
-            from . import zstd
-            # Force delayed import.
-            zstd.__version__
-            return zstd
-        except ImportError:
-            return None
-
-    def available(self):
-        return bool(self._module)
-
-    def bundletype(self):
-        """A modern compression algorithm that is fast and highly flexible.
-
-        Only supported by Mercurial 4.1 and newer clients.
-
-        With the default settings, zstd compression is both faster and yields
-        better compression than ``gzip``. It also frequently yields better
-        compression than ``bzip2`` while operating at much higher speeds.
-
-        If this engine is available and backwards compatibility is not a
-        concern, it is likely the best available engine.
-        """
-        return 'zstd', 'ZS'
-
-    def wireprotosupport(self):
-        return compewireprotosupport('zstd', 50, 50)
-
-    def revlogheader(self):
-        return '\x28'
-
-    def compressstream(self, it, opts=None):
-        opts = opts or {}
-        # zstd level 3 is almost always significantly faster than zlib
-        # while providing no worse compression. It strikes a good balance
-        # between speed and compression.
-        level = opts.get('level', 3)
-
-        zstd = self._module
-        z = zstd.ZstdCompressor(level=level).compressobj()
-        for chunk in it:
-            data = z.compress(chunk)
-            if data:
-                yield data
-
-        yield z.flush()
-
-    def decompressorreader(self, fh):
-        return _ZstdCompressedStreamReader(fh, self._module)
-
-    class zstdrevlogcompressor(object):
-        def __init__(self, zstd, level=3):
-            # TODO consider omitting frame magic to save 4 bytes.
-            # This writes content sizes into the frame header. That is
-            # extra storage. But it allows a correct size memory allocation
-            # to hold the result.
-            self._cctx = zstd.ZstdCompressor(level=level)
-            self._dctx = zstd.ZstdDecompressor()
-            self._compinsize = zstd.COMPRESSION_RECOMMENDED_INPUT_SIZE
-            self._decompinsize = zstd.DECOMPRESSION_RECOMMENDED_INPUT_SIZE
-
-        def compress(self, data):
-            insize = len(data)
-            # Caller handles empty input case.
-            assert insize > 0
-
-            if insize < 50:
-                return None
-
-            elif insize <= 1000000:
-                compressed = self._cctx.compress(data)
-                if len(compressed) < insize:
-                    return compressed
-                return None
-            else:
-                z = self._cctx.compressobj()
-                chunks = []
-                pos = 0
-                while pos < insize:
-                    pos2 = pos + self._compinsize
-                    chunk = z.compress(data[pos:pos2])
-                    if chunk:
-                        chunks.append(chunk)
-                    pos = pos2
-                chunks.append(z.flush())
-
-                if sum(map(len, chunks)) < insize:
-                    return ''.join(chunks)
-                return None
-
-        def decompress(self, data):
-            insize = len(data)
-
-            try:
-                # This was measured to be faster than other streaming
-                # decompressors.
-                dobj = self._dctx.decompressobj()
-                chunks = []
-                pos = 0
-                while pos < insize:
-                    pos2 = pos + self._decompinsize
-                    chunk = dobj.decompress(data[pos:pos2])
-                    if chunk:
-                        chunks.append(chunk)
-                    pos = pos2
-                # Frame should be exhausted, so no finish() API.
-
-                return ''.join(chunks)
-            except Exception as e:
-                raise error.StorageError(_('revlog decompress error: %s') %
-                                         stringutil.forcebytestr(e))
-
-    def revlogcompressor(self, opts=None):
-        opts = opts or {}
-        return self.zstdrevlogcompressor(self._module,
-                                         level=opts.get('level', 3))
-
-compengines.register(_zstdengine())
-
-def bundlecompressiontopics():
-    """Obtains a list of available bundle compressions for use in help."""
-    # help.makeitemsdocs() expects a dict of names to items with a .__doc__.
-    items = {}
-
-    # We need to format the docstring. So use a dummy object/type to hold it
-    # rather than mutating the original.
-    class docobject(object):
-        pass
-
-    for name in compengines:
-        engine = compengines[name]
-
-        if not engine.available():
-            continue
-
-        bt = engine.bundletype()
-        if not bt or not bt[0]:
-            continue
-
-        doc = b'``%s``\n    %s' % (bt[0], pycompat.getdoc(engine.bundletype))
-
-        value = docobject()
-        value.__doc__ = pycompat.sysstr(doc)
-        value._origdoc = engine.bundletype.__doc__
-        value._origfunc = engine.bundletype
-
-        items[bt[0]] = value
-
-    return items
-
-i18nfunctions = bundlecompressiontopics().values()
 
 # convenient shortcut
 dst = debugstacktrace
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/mercurial/utils/compression.py	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,764 @@
+# compression.py - Mercurial utility functions for compression
+#
+# This software may be used and distributed according to the terms of the
+# GNU General Public License version 2 or any later version.
+
+
+from __future__ import absolute_import, print_function
+
+import bz2
+import collections
+import zlib
+
+from .. import (
+    error,
+    i18n,
+    pycompat,
+)
+from . import (
+    stringutil,
+)
+
+safehasattr = pycompat.safehasattr
+
+
+_ = i18n._
+
+# compression code
+
+SERVERROLE = 'server'
+CLIENTROLE = 'client'
+
+compewireprotosupport = collections.namedtuple(r'compenginewireprotosupport',
+                                               (r'name', r'serverpriority',
+                                                r'clientpriority'))
+
+class propertycache(object):
+    def __init__(self, func):
+        self.func = func
+        self.name = func.__name__
+    def __get__(self, obj, type=None):
+        result = self.func(obj)
+        self.cachevalue(obj, result)
+        return result
+
+    def cachevalue(self, obj, value):
+        # __dict__ assignment required to bypass __setattr__ (eg: repoview)
+        obj.__dict__[self.name] = value
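# Usage sketch (not part of the patch): the descriptor computes the value
# once, then cachevalue() stores it in the instance __dict__ so later
# lookups never reach __get__ again.
class _demo(object):
    @propertycache
    def answer(self):
        print('computing')
        return 42

_d = _demo()
_d.answer  # prints 'computing' and caches 42
_d.answer  # served from __dict__, no recomputation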
+
+class compressormanager(object):
+    """Holds registrations of various compression engines.
+
+    This class essentially abstracts the differences between compression
+    engines to allow new compression formats to be added easily, possibly from
+    extensions.
+
+    Compressors are registered against the global instance by calling its
+    ``register()`` method.
+    """
+    def __init__(self):
+        self._engines = {}
+        # Bundle spec human name to engine name.
+        self._bundlenames = {}
+        # Internal bundle identifier to engine name.
+        self._bundletypes = {}
+        # Revlog header to engine name.
+        self._revlogheaders = {}
+        # Wire proto identifier to engine name.
+        self._wiretypes = {}
+
+    def __getitem__(self, key):
+        return self._engines[key]
+
+    def __contains__(self, key):
+        return key in self._engines
+
+    def __iter__(self):
+        return iter(self._engines.keys())
+
+    def register(self, engine):
+        """Register a compression engine with the manager.
+
+        The argument must be a ``compressionengine`` instance.
+        """
+        if not isinstance(engine, compressionengine):
+            raise ValueError(_('argument must be a compressionengine'))
+
+        name = engine.name()
+
+        if name in self._engines:
+            raise error.Abort(_('compression engine %s already registered') %
+                              name)
+
+        bundleinfo = engine.bundletype()
+        if bundleinfo:
+            bundlename, bundletype = bundleinfo
+
+            if bundlename in self._bundlenames:
+                raise error.Abort(_('bundle name %s already registered') %
+                                  bundlename)
+            if bundletype in self._bundletypes:
+                raise error.Abort(_('bundle type %s already registered by %s') %
+                                  (bundletype, self._bundletypes[bundletype]))
+
+            # No external facing name declared.
+            if bundlename:
+                self._bundlenames[bundlename] = name
+
+            self._bundletypes[bundletype] = name
+
+        wiresupport = engine.wireprotosupport()
+        if wiresupport:
+            wiretype = wiresupport.name
+            if wiretype in self._wiretypes:
+                raise error.Abort(_('wire protocol compression %s already '
+                                    'registered by %s') %
+                                  (wiretype, self._wiretypes[wiretype]))
+
+            self._wiretypes[wiretype] = name
+
+        revlogheader = engine.revlogheader()
+        if revlogheader and revlogheader in self._revlogheaders:
+            raise error.Abort(_('revlog header %s already registered by %s') %
+                              (revlogheader, self._revlogheaders[revlogheader]))
+
+        if revlogheader:
+            self._revlogheaders[revlogheader] = name
+
+        self._engines[name] = engine
+
+    @property
+    def supportedbundlenames(self):
+        return set(self._bundlenames.keys())
+
+    @property
+    def supportedbundletypes(self):
+        return set(self._bundletypes.keys())
+
+    def forbundlename(self, bundlename):
+        """Obtain a compression engine registered to a bundle name.
+
+        Will raise KeyError if the bundle type isn't registered.
+
+        Will abort if the engine is known but not available.
+        """
+        engine = self._engines[self._bundlenames[bundlename]]
+        if not engine.available():
+            raise error.Abort(_('compression engine %s could not be loaded') %
+                              engine.name())
+        return engine
+
+    def forbundletype(self, bundletype):
+        """Obtain a compression engine registered to a bundle type.
+
+        Will raise KeyError if the bundle type isn't registered.
+
+        Will abort if the engine is known but not available.
+        """
+        engine = self._engines[self._bundletypes[bundletype]]
+        if not engine.available():
+            raise error.Abort(_('compression engine %s could not be loaded') %
+                              engine.name())
+        return engine
+
+    def supportedwireengines(self, role, onlyavailable=True):
+        """Obtain compression engines that support the wire protocol.
+
+        Returns a list of engines in prioritized order, most desired first.
+
+        If ``onlyavailable`` is set, filter out engines that can't be
+        loaded.
+        """
+        assert role in (SERVERROLE, CLIENTROLE)
+
+        attr = 'serverpriority' if role == SERVERROLE else 'clientpriority'
+
+        engines = [self._engines[e] for e in self._wiretypes.values()]
+        if onlyavailable:
+            engines = [e for e in engines if e.available()]
+
+        def getkey(e):
+            # Sort first by priority, highest first. In case of tie, sort
+            # alphabetically. This is arbitrary, but ensures output is
+            # stable.
+            w = e.wireprotosupport()
+            return -1 * getattr(w, attr), w.name
+
+        return list(sorted(engines, key=getkey))
+
+    def forwiretype(self, wiretype):
+        engine = self._engines[self._wiretypes[wiretype]]
+        if not engine.available():
+            raise error.Abort(_('compression engine %s could not be loaded') %
+                              engine.name())
+        return engine
+
+    def forrevlogheader(self, header):
+        """Obtain a compression engine registered to a revlog header.
+
+        Will raise KeyError if the revlog header value isn't registered.
+        """
+        return self._engines[self._revlogheaders[header]]
+
+compengines = compressormanager()
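# Usage sketch (not part of the patch): once the engines defined below are
# registered, callers resolve them through this global instance, e.g.:
#
#   engine = compengines.forbundlename('gzip')  # -> the zlib engine
#   reader = engine.decompressorreader(fh)      # fh: any file-like object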
+
+class compressionengine(object):
+    """Base class for compression engines.
+
+    Compression engines must implement the interface defined by this class.
+    """
+    def name(self):
+        """Returns the name of the compression engine.
+
+        This is the key the engine is registered under.
+
+        This method must be implemented.
+        """
+        raise NotImplementedError()
+
+    def available(self):
+        """Whether the compression engine is available.
+
+        The intent of this method is to allow optional compression engines
+        that may not be available in all installations (such as engines relying
+        on C extensions that may not be present).
+        """
+        return True
+
+    def bundletype(self):
+        """Describes bundle identifiers for this engine.
+
+        If this compression engine isn't supported for bundles, returns None.
+
+        If this engine can be used for bundles, returns a 2-tuple of strings of
+        the user-facing "bundle spec" compression name and an internal
+        identifier used to denote the compression format within bundles. To
+        exclude the name from external usage, set the first element to ``None``.
+
+        If bundle compression is supported, the class must also implement
+        ``compressstream`` and ``decompressorreader``.
+
+        The docstring of this method is used in the help system to tell users
+        about this engine.
+        """
+        return None
+
+    def wireprotosupport(self):
+        """Declare support for this compression format on the wire protocol.
+
+        If this compression engine isn't supported for compressing wire
+        protocol payloads, returns None.
+
+        Otherwise, returns ``compenginewireprotosupport`` with the following
+        fields:
+
+        * String format identifier
+        * Integer priority for the server
+        * Integer priority for the client
+
+        The integer priorities are used to order the advertisement of format
+        support by server and client. The highest integer is advertised
+        first. Integers with non-positive values aren't advertised.
+
+        The priority values are somewhat arbitrary and only used for default
+        ordering. The relative order can be changed via config options.
+
+        If wire protocol compression is supported, the class must also implement
+        ``compressstream`` and ``decompressorreader``.
+        """
+        return None
+
+    def revlogheader(self):
+        """Header added to revlog chunks that identifies this engine.
+
+        If this engine can be used to compress revlogs, this method should
+        return the bytes used to identify chunks compressed with this engine.
+        Else, the method should return ``None`` to indicate it does not
+        participate in revlog compression.
+        """
+        return None
+
+    def compressstream(self, it, opts=None):
+        """Compress an iterator of chunks.
+
+        The method receives an iterator (ideally a generator) of chunks of
+        bytes to be compressed. It returns an iterator (ideally a generator)
+        of bytes of chunks representing the compressed output.
+
+        Optionally accepts an argument defining how to perform compression.
+        Each engine treats this argument differently.
+        """
+        raise NotImplementedError()
+
+    def decompressorreader(self, fh):
+        """Perform decompression on a file object.
+
+        Argument is an object with a ``read(size)`` method that returns
+        compressed data. Return value is an object with a ``read(size)`` that
+        returns uncompressed data.
+        """
+        raise NotImplementedError()
+
+    def revlogcompressor(self, opts=None):
+        """Obtain an object that can be used to compress revlog entries.
+
+        The object has a ``compress(data)`` method that compresses binary
+        data. This method returns compressed binary data or ``None`` if
+        the data could not be compressed (too small, not compressible, etc).
+        The returned data should have a header uniquely identifying this
+        compression format so decompression can be routed to this engine.
+        This header should be identified by the ``revlogheader()`` return
+        value.
+
+        The object has a ``decompress(data)`` method that decompresses
+        data. The method will only be called if ``data`` begins with
+        ``revlogheader()``. The method should return the raw, uncompressed
+        data or raise a ``StorageError``.
+
+        The object is reusable but is not thread safe.
+        """
+        raise NotImplementedError()
+
+class _CompressedStreamReader(object):
+    def __init__(self, fh):
+        if safehasattr(fh, 'unbufferedread'):
+            self._reader = fh.unbufferedread
+        else:
+            self._reader = fh.read
+        self._pending = []
+        self._pos = 0
+        self._eof = False
+
+    def _decompress(self, chunk):
+        raise NotImplementedError()
+
+    def read(self, l):
+        buf = []
+        while True:
+            while self._pending:
+                if len(self._pending[0]) > l + self._pos:
+                    newbuf = self._pending[0]
+                    buf.append(newbuf[self._pos:self._pos + l])
+                    self._pos += l
+                    return ''.join(buf)
+
+                newbuf = self._pending.pop(0)
+                if self._pos:
+                    buf.append(newbuf[self._pos:])
+                    l -= len(newbuf) - self._pos
+                else:
+                    buf.append(newbuf)
+                    l -= len(newbuf)
+                self._pos = 0
+
+            if self._eof:
+                return ''.join(buf)
+            chunk = self._reader(65536)
+            self._decompress(chunk)
+            if not chunk and not self._pending and not self._eof:
+                # No progress and no new data, bail out
+                return ''.join(buf)
+
+class _GzipCompressedStreamReader(_CompressedStreamReader):
+    def __init__(self, fh):
+        super(_GzipCompressedStreamReader, self).__init__(fh)
+        self._decompobj = zlib.decompressobj()
+    def _decompress(self, chunk):
+        newbuf = self._decompobj.decompress(chunk)
+        if newbuf:
+            self._pending.append(newbuf)
+        d = self._decompobj.copy()
+        try:
+            d.decompress('x')
+            d.flush()
+            if d.unused_data == 'x':
+                self._eof = True
+        except zlib.error:
+            pass
+
+class _BZ2CompressedStreamReader(_CompressedStreamReader):
+    def __init__(self, fh):
+        super(_BZ2CompressedStreamReader, self).__init__(fh)
+        self._decompobj = bz2.BZ2Decompressor()
+    def _decompress(self, chunk):
+        newbuf = self._decompobj.decompress(chunk)
+        if newbuf:
+            self._pending.append(newbuf)
+        try:
+            while True:
+                newbuf = self._decompobj.decompress('')
+                if newbuf:
+                    self._pending.append(newbuf)
+                else:
+                    break
+        except EOFError:
+            self._eof = True
+
+class _TruncatedBZ2CompressedStreamReader(_BZ2CompressedStreamReader):
+    def __init__(self, fh):
+        super(_TruncatedBZ2CompressedStreamReader, self).__init__(fh)
+        newbuf = self._decompobj.decompress('BZ')
+        if newbuf:
+            self._pending.append(newbuf)
+
+class _ZstdCompressedStreamReader(_CompressedStreamReader):
+    def __init__(self, fh, zstd):
+        super(_ZstdCompressedStreamReader, self).__init__(fh)
+        self._zstd = zstd
+        self._decompobj = zstd.ZstdDecompressor().decompressobj()
+    def _decompress(self, chunk):
+        newbuf = self._decompobj.decompress(chunk)
+        if newbuf:
+            self._pending.append(newbuf)
+        try:
+            while True:
+                newbuf = self._decompobj.decompress('')
+                if newbuf:
+                    self._pending.append(newbuf)
+                else:
+                    break
+        except self._zstd.ZstdError:
+            self._eof = True
+
+class _zlibengine(compressionengine):
+    def name(self):
+        return 'zlib'
+
+    def bundletype(self):
+        """zlib compression using the DEFLATE algorithm.
+
+        All Mercurial clients should support this format. The compression
+        algorithm strikes a reasonable balance between compression ratio
+        and speed.
+        """
+        return 'gzip', 'GZ'
+
+    def wireprotosupport(self):
+        return compewireprotosupport('zlib', 20, 20)
+
+    def revlogheader(self):
+        return 'x'
+
+    def compressstream(self, it, opts=None):
+        opts = opts or {}
+
+        z = zlib.compressobj(opts.get('level', -1))
+        for chunk in it:
+            data = z.compress(chunk)
+            # Not all calls to compress emit data. It is cheaper to inspect
+            # here than to feed empty chunks through generator.
+            if data:
+                yield data
+
+        yield z.flush()
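# Usage sketch (not part of the patch): compressstream() transforms an
# iterator of chunks into an iterator of compressed chunks, e.g.:
#
#   stream = compengines['zlib'].compressstream(iter([b'data'] * 10))
#   blob = b''.join(stream)
#   zlib.decompress(blob) == b'data' * 10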
+
+    def decompressorreader(self, fh):
+        return _GzipCompressedStreamReader(fh)
+
+    class zlibrevlogcompressor(object):
+
+        def __init__(self, level=None):
+            self._level = level
+
+        def compress(self, data):
+            insize = len(data)
+            # Caller handles empty input case.
+            assert insize > 0
+
+            if insize < 44:
+                return None
+
+            elif insize <= 1000000:
+                if self._level is None:
+                    compressed = zlib.compress(data)
+                else:
+                    compressed = zlib.compress(data, self._level)
+                if len(compressed) < insize:
+                    return compressed
+                return None
+
+            # zlib makes an internal copy of the input buffer, doubling
+            # memory usage for large inputs. So do streaming compression
+            # on large inputs.
+            else:
+                if self._level is None:
+                    z = zlib.compressobj()
+                else:
+                    z = zlib.compressobj(level=self._level)
+                parts = []
+                pos = 0
+                while pos < insize:
+                    pos2 = pos + 2**20
+                    parts.append(z.compress(data[pos:pos2]))
+                    pos = pos2
+                parts.append(z.flush())
+
+                if sum(map(len, parts)) < insize:
+                    return ''.join(parts)
+                return None
+
+        def decompress(self, data):
+            try:
+                return zlib.decompress(data)
+            except zlib.error as e:
+                raise error.StorageError(_('revlog decompress error: %s') %
+                                         stringutil.forcebytestr(e))
+
+    def revlogcompressor(self, opts=None):
+        level = None
+        if opts is not None:
+            level = opts.get('zlib.level')
+        return self.zlibrevlogcompressor(level)
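# Usage sketch (not part of the patch): revlogcompressor() reads the
# per-engine level from opts, so an upgrade at zlib level 9 would do:
#
#   comp = compengines['zlib'].revlogcompressor({'zlib.level': 9})
#   comp.compress(b'x' * 100)  # compressed bytes, or None if not smaller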
+
+compengines.register(_zlibengine())
+
+class _bz2engine(compressionengine):
+    def name(self):
+        return 'bz2'
+
+    def bundletype(self):
+        """An algorithm that produces smaller bundles than ``gzip``.
+
+        All Mercurial clients should support this format.
+
+        This engine will likely produce smaller bundles than ``gzip`` but
+        will be significantly slower, both during compression and
+        decompression.
+
+        If available, the ``zstd`` engine can yield similar or better
+        compression at much higher speeds.
+        """
+        return 'bzip2', 'BZ'
+
+    # We declare a protocol name but don't advertise by default because
+    # it is slow.
+    def wireprotosupport(self):
+        return compewireprotosupport('bzip2', 0, 0)
+
+    def compressstream(self, it, opts=None):
+        opts = opts or {}
+        z = bz2.BZ2Compressor(opts.get('level', 9))
+        for chunk in it:
+            data = z.compress(chunk)
+            if data:
+                yield data
+
+        yield z.flush()
+
+    def decompressorreader(self, fh):
+        return _BZ2CompressedStreamReader(fh)
+
+compengines.register(_bz2engine())
+
+class _truncatedbz2engine(compressionengine):
+    def name(self):
+        return 'bz2truncated'
+
+    def bundletype(self):
+        return None, '_truncatedBZ'
+
+    # We don't implement compressstream because it is hackily handled elsewhere.
+
+    def decompressorreader(self, fh):
+        return _TruncatedBZ2CompressedStreamReader(fh)
+
+compengines.register(_truncatedbz2engine())
+
+class _noopengine(compressionengine):
+    def name(self):
+        return 'none'
+
+    def bundletype(self):
+        """No compression is performed.
+
+        Use this compression engine to explicitly disable compression.
+        """
+        return 'none', 'UN'
+
+    # Clients always support uncompressed payloads. Servers don't, because
+    # unless you are on a fast network, uncompressed payloads can easily
+    # saturate your network pipe.
+    def wireprotosupport(self):
+        return compewireprotosupport('none', 0, 10)
+
+    # We don't implement revlogheader because it is handled specially
+    # in the revlog class.
+
+    def compressstream(self, it, opts=None):
+        return it
+
+    def decompressorreader(self, fh):
+        return fh
+
+    class nooprevlogcompressor(object):
+        def compress(self, data):
+            return None
+
+    def revlogcompressor(self, opts=None):
+        return self.nooprevlogcompressor()
+
+compengines.register(_noopengine())
+
+class _zstdengine(compressionengine):
+    def name(self):
+        return 'zstd'
+
+    @propertycache
+    def _module(self):
+        # Not all installs have the zstd module available. So defer importing
+        # until first access.
+        try:
+            from .. import zstd
+            # Force delayed import.
+            zstd.__version__
+            return zstd
+        except ImportError:
+            return None
+
+    def available(self):
+        return bool(self._module)
+
+    def bundletype(self):
+        """A modern compression algorithm that is fast and highly flexible.
+
+        Only supported by Mercurial 4.1 and newer clients.
+
+        With the default settings, zstd compression is both faster and yields
+        better compression than ``gzip``. It also frequently yields better
+        compression than ``bzip2`` while operating at much higher speeds.
+
+        If this engine is available and backwards compatibility is not a
+        concern, it is likely the best available engine.
+        """
+        return 'zstd', 'ZS'
+
+    def wireprotosupport(self):
+        return compewireprotosupport('zstd', 50, 50)
+
+    def revlogheader(self):
+        return '\x28'
+
+    def compressstream(self, it, opts=None):
+        opts = opts or {}
+        # zstd level 3 is almost always significantly faster than zlib
+        # while providing no worse compression. It strikes a good balance
+        # between speed and compression.
+        level = opts.get('level', 3)
+
+        zstd = self._module
+        z = zstd.ZstdCompressor(level=level).compressobj()
+        for chunk in it:
+            data = z.compress(chunk)
+            if data:
+                yield data
+
+        yield z.flush()
+
+    def decompressorreader(self, fh):
+        return _ZstdCompressedStreamReader(fh, self._module)
+
+    class zstdrevlogcompressor(object):
+        def __init__(self, zstd, level=3):
+            # TODO consider omitting frame magic to save 4 bytes.
+            # This writes content sizes into the frame header. That is
+            # extra storage. But it allows a correct size memory allocation
+            # to hold the result.
+            self._cctx = zstd.ZstdCompressor(level=level)
+            self._dctx = zstd.ZstdDecompressor()
+            self._compinsize = zstd.COMPRESSION_RECOMMENDED_INPUT_SIZE
+            self._decompinsize = zstd.DECOMPRESSION_RECOMMENDED_INPUT_SIZE
+
+        def compress(self, data):
+            insize = len(data)
+            # Caller handles empty input case.
+            assert insize > 0
+
+            if insize < 50:
+                return None
+
+            elif insize <= 1000000:
+                compressed = self._cctx.compress(data)
+                if len(compressed) < insize:
+                    return compressed
+                return None
+            else:
+                z = self._cctx.compressobj()
+                chunks = []
+                pos = 0
+                while pos < insize:
+                    pos2 = pos + self._compinsize
+                    chunk = z.compress(data[pos:pos2])
+                    if chunk:
+                        chunks.append(chunk)
+                    pos = pos2
+                chunks.append(z.flush())
+
+                if sum(map(len, chunks)) < insize:
+                    return ''.join(chunks)
+                return None
+
+        def decompress(self, data):
+            insize = len(data)
+
+            try:
+                # This was measured to be faster than other streaming
+                # decompressors.
+                dobj = self._dctx.decompressobj()
+                chunks = []
+                pos = 0
+                while pos < insize:
+                    pos2 = pos + self._decompinsize
+                    chunk = dobj.decompress(data[pos:pos2])
+                    if chunk:
+                        chunks.append(chunk)
+                    pos = pos2
+                # Frame should be exhausted, so no finish() API.
+
+                return ''.join(chunks)
+            except Exception as e:
+                raise error.StorageError(_('revlog decompress error: %s') %
+                                         stringutil.forcebytestr(e))
+
+    def revlogcompressor(self, opts=None):
+        opts = opts or {}
+        level = opts.get('zstd.level')
+        if level is None:
+            level = opts.get('level')
+        if level is None:
+            level = 3
+        return self.zstdrevlogcompressor(self._module, level=level)
+
+compengines.register(_zstdengine())
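
The ``@propertycache`` trick above keeps import failures of the optional module from breaking load of this file. A rough equivalent of that deferred-import pattern in plain Python (``lazyengine`` is hypothetical):

    import importlib

    class lazyengine(object):
        def __init__(self, modname):
            self._modname = modname
            self._tried = False
            self._mod = None

        @property
        def _module(self):
            # Import only on first access; a missing optional module
            # degrades to available() == False instead of an ImportError
            # at module load time.
            if not self._tried:
                self._tried = True
                try:
                    self._mod = importlib.import_module(self._modname)
                except ImportError:
                    self._mod = None
            return self._mod

        def available(self):
            return self._module is not None

    assert lazyengine('zlib').available()
    assert not lazyengine('not_a_real_module').available()
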
+
+def bundlecompressiontopics():
+    """Obtains a list of available bundle compressions for use in help."""
+    # help.makeitemsdocs() expects a dict of names to items with a .__doc__.
+    items = {}
+
+    # We need to format the docstring. So use a dummy object/type to hold it
+    # rather than mutating the original.
+    class docobject(object):
+        pass
+
+    for name in compengines:
+        engine = compengines[name]
+
+        if not engine.available():
+            continue
+
+        bt = engine.bundletype()
+        if not bt or not bt[0]:
+            continue
+
+        doc = b'``%s``\n    %s' % (bt[0], pycompat.getdoc(engine.bundletype))
+
+        value = docobject()
+        value.__doc__ = pycompat.sysstr(doc)
+        value._origdoc = engine.bundletype.__doc__
+        value._origfunc = engine.bundletype
+
+        items[bt[0]] = value
+
+    return items
+
+i18nfunctions = bundlecompressiontopics().values()
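
A hedged usage sketch for the helper above (illustrative only; the real rendering is done by Mercurial's help machinery):

    # Hypothetical usage: dump the first line of each generated entry.
    from mercurial.utils.compression import bundlecompressiontopics
    for name, item in sorted(bundlecompressiontopics().items()):
        print('%s -> %s' % (name, item.__doc__.splitlines()[0]))
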
--- a/mercurial/utils/procutil.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/utils/procutil.py	Wed Apr 17 13:41:18 2019 -0400
@@ -221,7 +221,7 @@
     """
     return (pycompat.safehasattr(sys, "frozen") or # new py2exe
             pycompat.safehasattr(sys, "importers") or # old py2exe
-            imp.is_frozen(u"__main__")) # tools/freeze
+            imp.is_frozen(r"__main__")) # tools/freeze
 
 _hgexecutable = None
 
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/mercurial/utils/repoviewutil.py	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,22 @@
+# repoviewutil.py - contains data relevant to repoview.py and other modules
+#
+# Copyright 2012 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
+#                Logilab SA        <contact@logilab.fr>
+#
+# This software may be used and distributed according to the terms of the
+# GNU General Public License version 2 or any later version.
+
+from __future__ import absolute_import
+
+### Nearest subset relation
+# Nearest subset of filter X is a filter Y so that:
+# * Y is included in X,
+# * X - Y is as small as possible.
+# This creates an ordering used for branchmap purposes.
+# The ordering may be partial.
+subsettable = {None: 'visible',
+               'visible-hidden': 'visible',
+               'visible': 'served',
+               'served.hidden': 'served',
+               'served': 'immutable',
+               'immutable': 'base'}
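
``subsettable`` gives cache code a chain to walk: to warm a cache for filter X, seed it from the nearest subset Y and only examine X - Y. A small sketch of that walk (``subsetchain`` is a hypothetical helper):

    subsettable = {None: 'visible',
                   'visible-hidden': 'visible',
                   'visible': 'served',
                   'served.hidden': 'served',
                   'served': 'immutable',
                   'immutable': 'base'}

    def subsetchain(filtername):
        # Yield the candidate subsets a cache could be seeded from,
        # most specific first, ending at 'base'.
        while filtername in subsettable:
            filtername = subsettable[filtername]
            yield filtername

    assert list(subsetchain('visible')) == ['served', 'immutable', 'base']
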
--- a/mercurial/verify.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/verify.py	Wed Apr 17 13:41:18 2019 -0400
@@ -51,11 +51,13 @@
         self.skipflags = repo.ui.configint('verify', 'skipflags')
         self.warnorphanstorefiles = True
 
-    def warn(self, msg):
+    def _warn(self, msg):
+        """record a "warning" level issue"""
         self.ui.warn(msg + "\n")
         self.warnings += 1
 
-    def err(self, linkrev, msg, filename=None):
+    def _err(self, linkrev, msg, filename=None):
+        """record a "error" level issue"""
         if linkrev is not None:
             self.badrevs.add(linkrev)
             linkrev = "%d" % linkrev
@@ -67,15 +69,23 @@
         self.ui.warn(" " + msg + "\n")
         self.errors += 1
 
-    def exc(self, linkrev, msg, inst, filename=None):
+    def _exc(self, linkrev, msg, inst, filename=None):
+        """record exception raised during the verify process"""
         fmsg = pycompat.bytestr(inst)
         if not fmsg:
             fmsg = pycompat.byterepr(inst)
-        self.err(linkrev, "%s: %s" % (msg, fmsg), filename)
+        self._err(linkrev, "%s: %s" % (msg, fmsg), filename)
+
+    def _checkrevlog(self, obj, name, linkrev):
+        """verify high level property of a revlog
 
-    def checklog(self, obj, name, linkrev):
+        - revlog is present,
+        - revlog is non-empty,
+        - sizes (index and data) are correct,
+        - revlog's format version is correct.
+        """
         if not len(obj) and (self.havecl or self.havemf):
-            self.err(linkrev, _("empty or missing %s") % name)
+            self._err(linkrev, _("empty or missing %s") % name)
             return
 
         d = obj.checksize()
@@ -86,18 +96,37 @@
 
         if obj.version != revlog.REVLOGV0:
             if not self.revlogv1:
-                self.warn(_("warning: `%s' uses revlog format 1") % name)
+                self._warn(_("warning: `%s' uses revlog format 1") % name)
         elif self.revlogv1:
-            self.warn(_("warning: `%s' uses revlog format 0") % name)
+            self._warn(_("warning: `%s' uses revlog format 0") % name)
+
+    def _checkentry(self, obj, i, node, seen, linkrevs, f):
+        """verify a single revlog entry
 
-    def checkentry(self, obj, i, node, seen, linkrevs, f):
+        arguments are:
+        - obj:      the source revlog
+        - i:        the revision number
+        - node:     the revision node id
+        - seen:     nodes previously seen for this revlog
+        - linkrevs: [changelog-revisions] introducing "node"
+        - f:        string label ("changelog", "manifest", or filename)
+
+        Performs the following checks:
+        - linkrev points to an existing changelog revision,
+        - linkrev points to a changelog revision that introduces this revision,
+        - linkrev points to the lowest of these changesets,
+        - both parents exist in the revlog,
+        - the revision is not duplicated.
+
+        Return the linkrev of the revision (or None for changelog's revisions).
+        """
         lr = obj.linkrev(obj.rev(node))
         if lr < 0 or (self.havecl and lr not in linkrevs):
             if lr < 0 or lr >= len(self.repo.changelog):
                 msg = _("rev %d points to nonexistent changeset %d")
             else:
                 msg = _("rev %d points to unexpected changeset %d")
-            self.err(None, msg % (i, lr), f)
+            self._err(None, msg % (i, lr), f)
             if linkrevs:
                 if f and len(linkrevs) > 1:
                     try:
@@ -106,31 +135,35 @@
                                     if self.lrugetctx(l)[f].filenode() == node]
                     except Exception:
                         pass
-                self.warn(_(" (expected %s)") % " ".join
-                          (map(pycompat.bytestr, linkrevs)))
+                self._warn(_(" (expected %s)") % " ".join
+                           (map(pycompat.bytestr, linkrevs)))
             lr = None # can't be trusted
 
         try:
             p1, p2 = obj.parents(node)
             if p1 not in seen and p1 != nullid:
-                self.err(lr, _("unknown parent 1 %s of %s") %
+                self._err(lr, _("unknown parent 1 %s of %s") %
                     (short(p1), short(node)), f)
             if p2 not in seen and p2 != nullid:
-                self.err(lr, _("unknown parent 2 %s of %s") %
+                self._err(lr, _("unknown parent 2 %s of %s") %
                     (short(p2), short(node)), f)
         except Exception as inst:
-            self.exc(lr, _("checking parents of %s") % short(node), inst, f)
+            self._exc(lr, _("checking parents of %s") % short(node), inst, f)
 
         if node in seen:
-            self.err(lr, _("duplicate revision %d (%d)") % (i, seen[node]), f)
+            self._err(lr, _("duplicate revision %d (%d)") % (i, seen[node]), f)
         seen[node] = i
         return lr
 
     def verify(self):
-        repo = self.repo
+        """verify the content of the Mercurial repository
+
+        This method runs all verifications, displaying issues as they are found.
 
+        Returns 1 if any errors have been encountered, 0 otherwise."""
+        # initial validation and generic report
+        repo = self.repo
         ui = repo.ui
-
         if not repo.url().startswith('file:'):
             raise error.Abort(_("cannot verify bundle or remote repos"))
 
@@ -141,15 +174,14 @@
             ui.status(_("repository uses revlog format %d\n") %
                            (self.revlogv1 and 1 or 0))
 
+        # data verification
         mflinkrevs, filelinkrevs = self._verifychangelog()
-
         filenodes = self._verifymanifest(mflinkrevs)
         del mflinkrevs
-
         self._crosscheckfiles(filelinkrevs, filenodes)
-
         totalfiles, filerevisions = self._verifyfiles(filenodes, filelinkrevs)
 
+        # final report
         ui.status(_("checked %d changesets with %d changes to %d files\n") %
                        (len(repo.changelog), filerevisions, totalfiles))
         if self.warnings:
@@ -163,8 +195,24 @@
                 ui.warn(_("(first damaged changeset appears to be %d)\n")
                         % min(self.badrevs))
             return 1
+        return 0
 
     def _verifychangelog(self):
+        """verify the changelog of a repository
+
+        The following checks are performed:
+        - all of `_checkrevlog` checks,
+        - all of `_checkentry` checks (for each revision),
+        - each revision can be read.
+
+        The function returns some of the data observed in the changesets as a
+        (mflinkrevs, filelinkrevs) tuple:
+        - mflinkrevs:   is a { manifest-node -> [changelog-rev] } mapping
+        - filelinkrevs: is a { file-path -> [changelog-rev] } mapping
+
+        If a matcher was specified, filelinkrevs will only contain matched
+        files.
+        """
         ui = self.ui
         repo = self.repo
         match = self.match
@@ -174,13 +222,13 @@
         mflinkrevs = {}
         filelinkrevs = {}
         seen = {}
-        self.checklog(cl, "changelog", 0)
+        self._checkrevlog(cl, "changelog", 0)
         progress = ui.makeprogress(_('checking'), unit=_('changesets'),
                                    total=len(repo))
         for i in repo:
             progress.update(i)
             n = cl.node(i)
-            self.checkentry(cl, i, n, seen, [i], "changelog")
+            self._checkentry(cl, i, n, seen, [i], "changelog")
 
             try:
                 changes = cl.read(n)
@@ -192,12 +240,39 @@
                         filelinkrevs.setdefault(_normpath(f), []).append(i)
             except Exception as inst:
                 self.refersmf = True
-                self.exc(i, _("unpacking changeset %s") % short(n), inst)
+                self._exc(i, _("unpacking changeset %s") % short(n), inst)
         progress.complete()
         return mflinkrevs, filelinkrevs
 
     def _verifymanifest(self, mflinkrevs, dir="", storefiles=None,
                         subdirprogress=None):
+        """verify the manifestlog content
+
+        Inputs:
+        - mflinkrevs:     a {manifest-node -> [changelog-revisions]} mapping
+        - dir:            a subdirectory to check (for tree manifest repo)
+        - storefiles:     set of currently "orphan" files.
+        - subdirprogress: a progress object
+
+        This function checks:
+        * all of `_checkrevlog` checks (for all manifest related revlogs)
+        * all of `_checkentry` checks (for all manifest related revisions)
+        * nodes for subdirectories exist in the sub-directory manifests
+        * each manifest entry has a file path
+        * each manifest node referred to in mflinkrevs exists in the
+          manifest log
+
+        If tree manifests are in use and a matcher is specified, only the
+        sub-directories matching it will be verified.
+
+        Returns a two-level mapping:
+            {"path" -> { filenode -> changelog-revision}}
+
+        This mapping primarily contains entries for every file in the
+        repository. In addition, when tree-manifest is used, it also contains
+        sub-directory entries.
+
+        If a matcher is provided, only matching paths will be included.
+        """
         repo = self.repo
         ui = self.ui
         match = self.match
@@ -220,27 +295,27 @@
         if self.refersmf:
             # Do not check manifest if there are only changelog entries with
             # null manifests.
-            self.checklog(mf, label, 0)
+            self._checkrevlog(mf, label, 0)
         progress = ui.makeprogress(_('checking'), unit=_('manifests'),
                                    total=len(mf))
         for i in mf:
             if not dir:
                 progress.update(i)
             n = mf.node(i)
-            lr = self.checkentry(mf, i, n, seen, mflinkrevs.get(n, []), label)
+            lr = self._checkentry(mf, i, n, seen, mflinkrevs.get(n, []), label)
             if n in mflinkrevs:
                 del mflinkrevs[n]
             elif dir:
-                self.err(lr, _("%s not in parent-directory manifest") %
+                self._err(lr, _("%s not in parent-directory manifest") %
                          short(n), label)
             else:
-                self.err(lr, _("%s not in changesets") % short(n), label)
+                self._err(lr, _("%s not in changesets") % short(n), label)
 
             try:
                 mfdelta = mfl.get(dir, n).readdelta(shallow=True)
                 for f, fn, fl in mfdelta.iterentries():
                     if not f:
-                        self.err(lr, _("entry without name in manifest"))
+                        self._err(lr, _("entry without name in manifest"))
                     elif f == "/dev/null":  # ignore this in very old repos
                         continue
                     fullpath = dir + _normpath(f)
@@ -254,19 +329,21 @@
                             continue
                         filenodes.setdefault(fullpath, {}).setdefault(fn, lr)
             except Exception as inst:
-                self.exc(lr, _("reading delta %s") % short(n), inst, label)
+                self._exc(lr, _("reading delta %s") % short(n), inst, label)
         if not dir:
             progress.complete()
 
         if self.havemf:
-            for c, m in sorted([(c, m) for m in mflinkrevs
-                        for c in mflinkrevs[m]]):
+            # since we delete entries in `mflinkrevs` during iteration, any
+            # remaining entries are "missing". We need to issue errors for them.
+            changesetpairs = [(c, m) for m in mflinkrevs for c in mflinkrevs[m]]
+            for c, m in sorted(changesetpairs):
                 if dir:
-                    self.err(c, _("parent-directory manifest refers to unknown "
-                                  "revision %s") % short(m), label)
+                    self._err(c, _("parent-directory manifest refers to unknown"
+                                   " revision %s") % short(m), label)
                 else:
-                    self.err(c, _("changeset refers to unknown revision %s") %
-                             short(m), label)
+                    self._err(c, _("changeset refers to unknown revision %s") %
+                              short(m), label)
 
         if not dir and subdirnodes:
             self.ui.status(_("checking directory manifests\n"))
@@ -275,7 +352,7 @@
             revlogv1 = self.revlogv1
             for f, f2, size in repo.store.datafiles():
                 if not f:
-                    self.err(None, _("cannot decode filename '%s'") % f2)
+                    self._err(None, _("cannot decode filename '%s'") % f2)
                 elif (size > 0 or not revlogv1) and f.startswith('meta/'):
                     storefiles.add(_normpath(f))
                     subdirs.add(os.path.dirname(f))
@@ -292,7 +369,7 @@
             subdirprogress.complete()
             if self.warnorphanstorefiles:
                 for f in sorted(storefiles):
-                    self.warn(_("warning: orphan data file '%s'") % f)
+                    self._warn(_("warning: orphan data file '%s'") % f)
 
         return filenodes
 
@@ -309,7 +386,7 @@
                 progress.increment()
                 if f not in filenodes:
                     lr = filelinkrevs[f][0]
-                    self.err(lr, _("in changeset but not in manifest"), f)
+                    self._err(lr, _("in changeset but not in manifest"), f)
 
         if self.havecl:
             for f in sorted(filenodes):
@@ -320,7 +397,7 @@
                         lr = min([fl.linkrev(fl.rev(n)) for n in filenodes[f]])
                     except Exception:
                         lr = None
-                    self.err(lr, _("in manifest but not in changeset"), f)
+                    self._err(lr, _("in manifest but not in changeset"), f)
 
         progress.complete()
 
@@ -335,7 +412,7 @@
         storefiles = set()
         for f, f2, size in repo.store.datafiles():
             if not f:
-                self.err(None, _("cannot decode filename '%s'") % f2)
+                self._err(None, _("cannot decode filename '%s'") % f2)
             elif (size > 0 or not revlogv1) and f.startswith('data/'):
                 storefiles.add(_normpath(f))
 
@@ -367,7 +444,7 @@
             try:
                 fl = repo.file(f)
             except error.StorageError as e:
-                self.err(lr, _("broken revlog! (%s)") % e, f)
+                self._err(lr, _("broken revlog! (%s)") % e, f)
                 continue
 
             for ff in fl.files():
@@ -375,12 +452,12 @@
                     storefiles.remove(ff)
                 except KeyError:
                     if self.warnorphanstorefiles:
-                        self.warn(_(" warning: revlog '%s' not in fncache!") %
+                        self._warn(_(" warning: revlog '%s' not in fncache!") %
                                   ff)
                         self.fncachewarned = True
 
             if not len(fl) and (self.havecl or self.havemf):
-                self.err(lr, _("empty or missing %s") % f)
+                self._err(lr, _("empty or missing %s") % f)
             else:
                 # Guard against implementations not setting this.
                 state['skipread'] = set()
@@ -391,10 +468,10 @@
                         linkrev = None
 
                     if problem.warning:
-                        self.warn(problem.warning)
+                        self._warn(problem.warning)
                     elif problem.error:
-                        self.err(linkrev if linkrev is not None else lr,
-                                 problem.error, f)
+                        self._err(linkrev if linkrev is not None else lr,
+                                  problem.error, f)
                     else:
                         raise error.ProgrammingError(
                             'problem instance does not set warning or error '
@@ -404,10 +481,10 @@
             for i in fl:
                 revisions += 1
                 n = fl.node(i)
-                lr = self.checkentry(fl, i, n, seen, linkrevs, f)
+                lr = self._checkentry(fl, i, n, seen, linkrevs, f)
                 if f in filenodes:
                     if havemf and n not in filenodes[f]:
-                        self.err(lr, _("%s not in manifests") % (short(n)), f)
+                        self._err(lr, _("%s not in manifests") % (short(n)), f)
                     else:
                         del filenodes[f][n]
 
@@ -424,12 +501,15 @@
                         if lr is not None and ui.verbose:
                             ctx = lrugetctx(lr)
                             if not any(rp[0] in pctx for pctx in ctx.parents()):
-                                self.warn(_("warning: copy source of '%s' not"
+                                self._warn(_("warning: copy source of '%s' not"
                                             " in parents of %s") % (f, ctx))
                         fl2 = repo.file(rp[0])
                         if not len(fl2):
-                            self.err(lr, _("empty or missing copy source "
-                                     "revlog %s:%s") % (rp[0], short(rp[1])), f)
+                            self._err(lr,
+                                      _("empty or missing copy source revlog "
+                                        "%s:%s") % (rp[0],
+                                      short(rp[1])),
+                                      f)
                         elif rp[1] == nullid:
                             ui.note(_("warning: %s@%s: copy source"
                                       " revision is nullid %s:%s\n")
@@ -437,18 +517,19 @@
                         else:
                             fl2.rev(rp[1])
                 except Exception as inst:
-                    self.exc(lr, _("checking rename of %s") % short(n), inst, f)
+                    self._exc(lr, _("checking rename of %s") % short(n),
+                              inst, f)
 
             # cross-check
             if f in filenodes:
                 fns = [(v, k) for k, v in filenodes[f].iteritems()]
                 for lr, node in sorted(fns):
-                    self.err(lr, _("manifest refers to unknown revision %s") %
-                             short(node), f)
+                    self._err(lr, _("manifest refers to unknown revision %s") %
+                              short(node), f)
         progress.complete()
 
         if self.warnorphanstorefiles:
             for f in sorted(storefiles):
-                self.warn(_("warning: orphan data file '%s'") % f)
+                self._warn(_("warning: orphan data file '%s'") % f)
 
         return len(files), revisions
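
For reference, the linkrev invariant that ``_checkentry`` enforces can be stated compactly; a hedged sketch against a generic revlog-like object (not runnable without one):

    def linkrevok(rl, node, expectedlinkrevs, changelogsize):
        # A revision's linkrev must name an existing changelog revision
        # that is known to introduce this node (see _checkentry above).
        lr = rl.linkrev(rl.rev(node))
        if lr < 0 or lr >= changelogsize:
            return False  # points to a nonexistent changeset
        return lr in expectedlinkrevs
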
--- a/mercurial/wireprotoserver.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/wireprotoserver.py	Wed Apr 17 13:41:18 2019 -0400
@@ -23,6 +23,7 @@
 )
 from .utils import (
     cborutil,
+    compression,
     interfaceutil,
 )
 
@@ -144,7 +145,7 @@
         caps.append('httpmediatype=0.1rx,0.1tx,0.2tx')
 
         compengines = wireprototypes.supportedcompengines(repo.ui,
-                                                          util.SERVERROLE)
+            compression.SERVERROLE)
         if compengines:
             comptypes = ','.join(urlreq.quote(e.wireprotosupport().name)
                                  for e in compengines)
@@ -320,11 +321,12 @@
     if '0.2' in proto.getprotocaps():
         # All clients are expected to support uncompressed data.
         if prefer_uncompressed:
-            return HGTYPE2, util._noopengine(), {}
+            return HGTYPE2, compression._noopengine(), {}
 
         # Now find an agreed upon compression format.
         compformats = wireprotov1server.clientcompressionsupport(proto)
-        for engine in wireprototypes.supportedcompengines(ui, util.SERVERROLE):
+        for engine in wireprototypes.supportedcompengines(ui,
+                compression.SERVERROLE):
             if engine.wireprotosupport().name in compformats:
                 opts = {}
                 level = ui.configint('server', '%slevel' % engine.name())
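
The negotiation above reduces to: walk the server's ordered engine list and pick the first name the client advertised, falling back to uncompressed. A stand-alone sketch (names hypothetical):

    def pickengine(serverengines, clientformats):
        # serverengines: wire names in server preference order
        # clientformats: set of names advertised by the client
        for name in serverengines:
            if name in clientformats:
                return name
        return 'none'  # every client must accept uncompressed data

    assert pickengine(['zstd', 'zlib'], {'zlib'}) == 'zlib'
    assert pickengine(['zstd'], set()) == 'none'
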
--- a/mercurial/wireprototypes.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/wireprototypes.py	Wed Apr 17 13:41:18 2019 -0400
@@ -18,6 +18,7 @@
     util,
 )
 from .utils import (
+    compression,
     interfaceutil,
 )
 
@@ -316,12 +317,12 @@
 
 def supportedcompengines(ui, role):
     """Obtain the list of supported compression engines for a request."""
-    assert role in (util.CLIENTROLE, util.SERVERROLE)
+    assert role in (compression.CLIENTROLE, compression.SERVERROLE)
 
-    compengines = util.compengines.supportedwireengines(role)
+    compengines = compression.compengines.supportedwireengines(role)
 
     # Allow config to override default list and ordering.
-    if role == util.SERVERROLE:
+    if role == compression.SERVERROLE:
         configengines = ui.configlist('server', 'compressionengines')
         config = 'server.compressionengines'
     else:
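
For example, a server operator could pin the advertised order with an hgrc snippet along these lines (``server.compressionengines`` is the knob read above):

    [server]
    compressionengines = zstd, zlib
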
--- a/mercurial/wireprotov1server.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/wireprotov1server.py	Wed Apr 17 13:41:18 2019 -0400
@@ -7,6 +7,7 @@
 
 from __future__ import absolute_import
 
+import binascii
 import os
 
 from .i18n import _
@@ -63,7 +64,8 @@
     extensions that need commands to operate on different repo views under
     specialized circumstances.
     """
-    return repo.filtered('served')
+    viewconfig = repo.ui.config('server', 'view')
+    return repo.filtered(viewconfig)
 
 def dispatch(repo, proto, command):
     repo = getdispatchrepo(repo, proto, command)
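
With the new knob, a server can be pointed at a different repoview, e.g. (``server.view`` is the option read above; ``served.hidden`` comes from the ``subsettable`` entries added earlier in this changeset):

    [server]
    view = served.hidden
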
@@ -165,7 +167,6 @@
 @wireprotocommand('batch', 'cmds *', permission='pull')
 def batch(repo, proto, cmds, others):
     unescapearg = wireprototypes.unescapebatcharg
-    repo = repo.filtered("served")
     res = []
     for pair in cmds.split(';'):
         op, args = pair.split(' ', 1)
@@ -344,7 +345,7 @@
       one specific branch of many.
     """
     def decodehexstring(s):
-        return set([h.decode('hex') for h in s.split(';')])
+        return {binascii.unhexlify(h) for h in s.split(';')}
 
     manifest = repo.vfs.tryread('pullbundles.manifest')
     if not manifest:
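
The move to ``binascii.unhexlify`` matters for Python 3, where the ``'hex'`` codec on ``bytes`` is gone; ``unhexlify`` behaves the same on both lines:

    import binascii

    # b'deadbeef'.decode('hex') raises LookupError on Python 3;
    # binascii.unhexlify works identically on Python 2 and 3.
    assert binascii.unhexlify(b'deadbeef') == b'\xde\xad\xbe\xef'
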
@@ -424,8 +425,6 @@
             raise error.Abort(bundle2requiredmain,
                               hint=bundle2requiredhint)
 
-    prefercompressed = True
-
     try:
         clheads = set(repo.changelog.heads())
         heads = set(opts.get('heads', set()))
@@ -578,7 +577,6 @@
                     repo.ui.debug('redirecting incoming bundle to %s\n' %
                         tempname)
                     fp = os.fdopen(fd, pycompat.sysstr('wb+'))
-                    r = 0
                     for p in payload:
                         fp.write(p)
                     fp.seek(0)
--- a/mercurial/wireprotov2peer.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/wireprotov2peer.py	Wed Apr 17 13:41:18 2019 -0400
@@ -304,7 +304,7 @@
                 # TODO tell reactor?
                 self._frameseof = True
             else:
-                self._ui.note(_('received %r\n') % frame)
+                self._ui.debug('received %r\n' % frame)
                 self._processframe(frame)
 
         # Also try to read the first redirect.
@@ -510,7 +510,7 @@
     # Bytestring where each byte is a 0 or 1.
     raw = next(objs)
 
-    return [True if c == '1' else False for c in raw]
+    return [True if raw[i:i + 1] == b'1' else False for i in range(len(raw))]
 
 def decodelistkeys(objs):
     # Map with bytestring keys and values.
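
The slicing change above sidesteps a Python 2/3 difference: indexing ``bytes`` yields an ``int`` on Python 3, so ``c == '1'`` was never true there, while a one-byte slice compares correctly on both versions:

    raw = b'101'
    # On Python 3, raw[0] == 49 (an int); raw[0:1] == b'1' on both.
    assert [raw[i:i + 1] == b'1' for i in range(len(raw))] == [True, False, True]
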
--- a/mercurial/wireprotov2server.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/mercurial/wireprotov2server.py	Wed Apr 17 13:41:18 2019 -0400
@@ -23,6 +23,7 @@
     narrowspec,
     pycompat,
     streamclone,
+    templatefilters,
     util,
     wireprotoframing,
     wireprototypes,
@@ -148,8 +149,6 @@
     tracker. We then dump the log of all that activity back out to the
     client.
     """
-    import json
-
     # Reflection APIs have a history of being abused, accidentally disclosing
     # sensitive data, etc. So we have a config knob.
     if not ui.configbool('experimental', 'web.api.debugreflect'):
@@ -175,12 +174,11 @@
                                                   frame.payload))
 
         action, meta = reactor.onframerecv(frame)
-        states.append(json.dumps((action, meta), sort_keys=True,
-                                 separators=(', ', ': ')))
+        states.append(templatefilters.json((action, meta)))
 
     action, meta = reactor.oninputeof()
     meta['action'] = action
-    states.append(json.dumps(meta, sort_keys=True, separators=(', ',': ')))
+    states.append(templatefilters.json(meta))
 
     res.status = b'200 OK'
     res.headers[b'Content-Type'] = b'text/plain'
@@ -344,7 +342,8 @@
                                      action)
 
 def getdispatchrepo(repo, proto, command):
-    return repo.filtered('served')
+    viewconfig = repo.ui.config('server', 'view')
+    return repo.filtered(viewconfig)
 
 def dispatch(repo, proto, command, redirect):
     """Run a wire protocol command.
@@ -390,7 +389,8 @@
         return
 
     with cacher:
-        cachekey = entry.cachekeyfn(repo, proto, cacher, **args)
+        cachekey = entry.cachekeyfn(repo, proto, cacher,
+                                    **pycompat.strkwargs(args))
 
         # No cache key or the cacher doesn't like it. Do default handling.
         if cachekey is None or not cacher.setcachekey(cachekey):
@@ -744,7 +744,7 @@
             # More granular cache key invalidation.
             b'localversion': localversion,
             # Cache keys are segmented by command.
-            b'command': pycompat.sysbytes(command),
+            b'command': command,
             # Throw in the media type and API version strings so changes
             # to exchange semantics invalidate the cache.
             b'mediatype': FRAMINGTYPE,
--- a/rust/Cargo.lock	Tue Mar 19 09:23:35 2019 -0400
+++ b/rust/Cargo.lock	Wed Apr 17 13:41:18 2019 -0400
@@ -7,11 +7,29 @@
 ]
 
 [[package]]
+name = "autocfg"
+version = "0.1.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+
+[[package]]
+name = "bitflags"
+version = "1.0.4"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+
+[[package]]
 name = "cfg-if"
 version = "0.1.6"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 
 [[package]]
+name = "cloudabi"
+version = "0.0.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+dependencies = [
+ "bitflags 1.0.4 (registry+https://github.com/rust-lang/crates.io-index)",
+]
+
+[[package]]
 name = "cpython"
 version = "0.2.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
@@ -23,8 +41,17 @@
 ]
 
 [[package]]
+name = "fuchsia-cprng"
+version = "0.1.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+
+[[package]]
 name = "hg-core"
 version = "0.1.0"
+dependencies = [
+ "rand 0.6.5 (registry+https://github.com/rust-lang/crates.io-index)",
+ "rand_pcg 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
+]
 
 [[package]]
 name = "hg-cpython"
@@ -89,6 +116,110 @@
 ]
 
 [[package]]
+name = "rand"
+version = "0.6.5"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+dependencies = [
+ "autocfg 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
+ "libc 0.2.45 (registry+https://github.com/rust-lang/crates.io-index)",
+ "rand_chacha 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
+ "rand_core 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
+ "rand_hc 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
+ "rand_isaac 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
+ "rand_jitter 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
+ "rand_os 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
+ "rand_pcg 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
+ "rand_xorshift 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
+ "winapi 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)",
+]
+
+[[package]]
+name = "rand_chacha"
+version = "0.1.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+dependencies = [
+ "autocfg 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)",
+ "rand_core 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
+]
+
+[[package]]
+name = "rand_core"
+version = "0.3.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+dependencies = [
+ "rand_core 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
+]
+
+[[package]]
+name = "rand_core"
+version = "0.4.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+
+[[package]]
+name = "rand_hc"
+version = "0.1.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+dependencies = [
+ "rand_core 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
+]
+
+[[package]]
+name = "rand_isaac"
+version = "0.1.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+dependencies = [
+ "rand_core 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
+]
+
+[[package]]
+name = "rand_jitter"
+version = "0.1.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+dependencies = [
+ "libc 0.2.45 (registry+https://github.com/rust-lang/crates.io-index)",
+ "rand_core 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
+ "winapi 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)",
+]
+
+[[package]]
+name = "rand_os"
+version = "0.1.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+dependencies = [
+ "cloudabi 0.0.3 (registry+https://github.com/rust-lang/crates.io-index)",
+ "fuchsia-cprng 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
+ "libc 0.2.45 (registry+https://github.com/rust-lang/crates.io-index)",
+ "rand_core 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
+ "rdrand 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
+ "winapi 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)",
+]
+
+[[package]]
+name = "rand_pcg"
+version = "0.1.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+dependencies = [
+ "rand_core 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
+ "rustc_version 0.2.3 (registry+https://github.com/rust-lang/crates.io-index)",
+]
+
+[[package]]
+name = "rand_xorshift"
+version = "0.1.1"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+dependencies = [
+ "rand_core 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
+]
+
+[[package]]
+name = "rdrand"
+version = "0.4.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+dependencies = [
+ "rand_core 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
+]
+
+[[package]]
 name = "regex"
 version = "1.1.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
@@ -109,6 +240,27 @@
 ]
 
 [[package]]
+name = "rustc_version"
+version = "0.2.3"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+dependencies = [
+ "semver 0.9.0 (registry+https://github.com/rust-lang/crates.io-index)",
+]
+
+[[package]]
+name = "semver"
+version = "0.9.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+dependencies = [
+ "semver-parser 0.7.0 (registry+https://github.com/rust-lang/crates.io-index)",
+]
+
+[[package]]
+name = "semver-parser"
+version = "0.7.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+
+[[package]]
 name = "thread_local"
 version = "0.3.6"
 source = "registry+https://github.com/rust-lang/crates.io-index"
@@ -131,19 +283,59 @@
 version = "0.1.5"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 
+[[package]]
+name = "winapi"
+version = "0.3.6"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+dependencies = [
+ "winapi-i686-pc-windows-gnu 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
+ "winapi-x86_64-pc-windows-gnu 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)",
+]
+
+[[package]]
+name = "winapi-i686-pc-windows-gnu"
+version = "0.4.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+
+[[package]]
+name = "winapi-x86_64-pc-windows-gnu"
+version = "0.4.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+
 [metadata]
 "checksum aho-corasick 0.6.9 (registry+https://github.com/rust-lang/crates.io-index)" = "1e9a933f4e58658d7b12defcf96dc5c720f20832deebe3e0a19efd3b6aaeeb9e"
+"checksum autocfg 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)" = "a6d640bee2da49f60a4068a7fae53acde8982514ab7bae8b8cea9e88cbcfd799"
+"checksum bitflags 1.0.4 (registry+https://github.com/rust-lang/crates.io-index)" = "228047a76f468627ca71776ecdebd732a3423081fcf5125585bcd7c49886ce12"
 "checksum cfg-if 0.1.6 (registry+https://github.com/rust-lang/crates.io-index)" = "082bb9b28e00d3c9d39cc03e64ce4cea0f1bb9b3fde493f0cbc008472d22bdf4"
+"checksum cloudabi 0.0.3 (registry+https://github.com/rust-lang/crates.io-index)" = "ddfc5b9aa5d4507acaf872de71051dfd0e309860e88966e1051e462a077aac4f"
 "checksum cpython 0.2.1 (registry+https://github.com/rust-lang/crates.io-index)" = "b489034e723e7f5109fecd19b719e664f89ef925be785885252469e9822fa940"
+"checksum fuchsia-cprng 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)" = "81f7f8eb465745ea9b02e2704612a9946a59fa40572086c6fd49d6ddcf30bf31"
 "checksum lazy_static 1.2.0 (registry+https://github.com/rust-lang/crates.io-index)" = "a374c89b9db55895453a74c1e38861d9deec0b01b405a82516e9d5de4820dea1"
 "checksum libc 0.2.45 (registry+https://github.com/rust-lang/crates.io-index)" = "2d2857ec59fadc0773853c664d2d18e7198e83883e7060b63c924cb077bd5c74"
 "checksum memchr 2.1.2 (registry+https://github.com/rust-lang/crates.io-index)" = "db4c41318937f6e76648f42826b1d9ade5c09cafb5aef7e351240a70f39206e9"
 "checksum num-traits 0.2.6 (registry+https://github.com/rust-lang/crates.io-index)" = "0b3a5d7cc97d6d30d8b9bc8fa19bf45349ffe46241e8816f50f62f6d6aaabee1"
 "checksum python27-sys 0.2.1 (registry+https://github.com/rust-lang/crates.io-index)" = "56114c37d4dca82526d74009df7782a28c871ac9d36b19d4cb9e67672258527e"
 "checksum python3-sys 0.2.1 (registry+https://github.com/rust-lang/crates.io-index)" = "61e4aac43f833fd637e429506cb2ac9d7df672c4b68f2eaaa163649b7fdc0444"
+"checksum rand 0.6.5 (registry+https://github.com/rust-lang/crates.io-index)" = "6d71dacdc3c88c1fde3885a3be3fbab9f35724e6ce99467f7d9c5026132184ca"
+"checksum rand_chacha 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "556d3a1ca6600bfcbab7c7c91ccb085ac7fbbcd70e008a98742e7847f4f7bcef"
+"checksum rand_core 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)" = "7a6fdeb83b075e8266dcc8762c22776f6877a63111121f5f8c7411e5be7eed4b"
+"checksum rand_core 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)" = "d0e7a549d590831370895ab7ba4ea0c1b6b011d106b5ff2da6eee112615e6dc0"
+"checksum rand_hc 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)" = "7b40677c7be09ae76218dc623efbf7b18e34bced3f38883af07bb75630a21bc4"
+"checksum rand_isaac 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "ded997c9d5f13925be2a6fd7e66bf1872597f759fd9dd93513dd7e92e5a5ee08"
+"checksum rand_jitter 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)" = "080723c6145e37503a2224f801f252e14ac5531cb450f4502698542d188cb3c0"
+"checksum rand_os 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)" = "b7c690732391ae0abafced5015ffb53656abfaec61b342290e5eb56b286a679d"
+"checksum rand_pcg 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "086bd09a33c7044e56bb44d5bdde5a60e7f119a9e95b0775f545de759a32fe05"
+"checksum rand_xorshift 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "cbf7e9e623549b0e21f6e97cf8ecf247c1a8fd2e8a992ae265314300b2455d5c"
+"checksum rdrand 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)" = "678054eb77286b51581ba43620cc911abf02758c91f93f479767aed0f90458b2"
 "checksum regex 1.1.0 (registry+https://github.com/rust-lang/crates.io-index)" = "37e7cbbd370869ce2e8dff25c7018702d10b21a20ef7135316f8daecd6c25b7f"
 "checksum regex-syntax 0.6.4 (registry+https://github.com/rust-lang/crates.io-index)" = "4e47a2ed29da7a9e1960e1639e7a982e6edc6d49be308a3b02daf511504a16d1"
+"checksum rustc_version 0.2.3 (registry+https://github.com/rust-lang/crates.io-index)" = "138e3e0acb6c9fb258b19b67cb8abd63c00679d2851805ea151465464fe9030a"
+"checksum semver 0.9.0 (registry+https://github.com/rust-lang/crates.io-index)" = "1d7eb9ef2c18661902cc47e535f9bc51b78acd254da71d375c2f6720d9a40403"
+"checksum semver-parser 0.7.0 (registry+https://github.com/rust-lang/crates.io-index)" = "388a1df253eca08550bef6c72392cfe7c30914bf41df5269b68cbd6ff8f570a3"
 "checksum thread_local 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)" = "c6b53e329000edc2b34dbe8545fd20e55a333362d0a321909685a19bd28c3f1b"
 "checksum ucd-util 0.1.3 (registry+https://github.com/rust-lang/crates.io-index)" = "535c204ee4d8434478593480b8f86ab45ec9aae0e83c568ca81abf0fd0e88f86"
 "checksum utf8-ranges 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)" = "796f7e48bef87609f7ade7e06495a87d5cd06c7866e6a5cbfceffc558a243737"
 "checksum version_check 0.1.5 (registry+https://github.com/rust-lang/crates.io-index)" = "914b1a6776c4c929a602fafd8bc742e06365d4bcbe48c30f9cca5824f70dc9dd"
+"checksum winapi 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)" = "92c1eb33641e276cfa214a0522acad57be5c56b10cb348b3c5117db75f3ac4b0"
+"checksum winapi-i686-pc-windows-gnu 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)" = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6"
+"checksum winapi-x86_64-pc-windows-gnu 0.4.0 (registry+https://github.com/rust-lang/crates.io-index)" = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"
--- a/rust/chg/src/sighandlers.c	Tue Mar 19 09:23:35 2019 -0400
+++ b/rust/chg/src/sighandlers.c	Wed Apr 17 13:41:18 2019 -0400
@@ -33,28 +33,36 @@
 {
 	sigset_t unblockset, oldset;
 	struct sigaction sa, oldsa;
-	if (sigemptyset(&unblockset) < 0)
+	if (sigemptyset(&unblockset) < 0) {
 		return;
-	if (sigaddset(&unblockset, sig) < 0)
+	}
+	if (sigaddset(&unblockset, sig) < 0) {
 		return;
+	}
 	memset(&sa, 0, sizeof(sa));
 	sa.sa_handler = SIG_DFL;
 	sa.sa_flags = SA_RESTART;
-	if (sigemptyset(&sa.sa_mask) < 0)
+	if (sigemptyset(&sa.sa_mask) < 0) {
 		return;
+	}
 
 	forwardsignal(sig);
-	if (raise(sig) < 0) /* resend to self */
+	if (raise(sig) < 0) { /* resend to self */
 		return;
-	if (sigaction(sig, &sa, &oldsa) < 0)
+	}
+	if (sigaction(sig, &sa, &oldsa) < 0) {
 		return;
-	if (sigprocmask(SIG_UNBLOCK, &unblockset, &oldset) < 0)
+	}
+	if (sigprocmask(SIG_UNBLOCK, &unblockset, &oldset) < 0) {
 		return;
+	}
 	/* resent signal will be handled before sigprocmask() returns */
-	if (sigprocmask(SIG_SETMASK, &oldset, NULL) < 0)
+	if (sigprocmask(SIG_SETMASK, &oldset, NULL) < 0) {
 		return;
-	if (sigaction(sig, &oldsa, NULL) < 0)
+	}
+	if (sigaction(sig, &oldsa, NULL) < 0) {
 		return;
+	}
 }
 
 /*
@@ -81,37 +89,46 @@
 	 * - SIGINT: usually generated by the terminal */
 	sa.sa_handler = forwardsignaltogroup;
 	sa.sa_flags = SA_RESTART;
-	if (sigemptyset(&sa.sa_mask) < 0)
+	if (sigemptyset(&sa.sa_mask) < 0) {
+		return -1;
+	}
+	if (sigaction(SIGHUP, &sa, NULL) < 0) {
 		return -1;
-	if (sigaction(SIGHUP, &sa, NULL) < 0)
+	}
+	if (sigaction(SIGINT, &sa, NULL) < 0) {
 		return -1;
-	if (sigaction(SIGINT, &sa, NULL) < 0)
-		return -1;
+	}
 
 	/* terminate frontend by double SIGTERM in case of server freeze */
 	sa.sa_handler = forwardsignal;
 	sa.sa_flags |= SA_RESETHAND;
-	if (sigaction(SIGTERM, &sa, NULL) < 0)
+	if (sigaction(SIGTERM, &sa, NULL) < 0) {
 		return -1;
+	}
 
 	/* notify the worker about window resize events */
 	sa.sa_flags = SA_RESTART;
-	if (sigaction(SIGWINCH, &sa, NULL) < 0)
+	if (sigaction(SIGWINCH, &sa, NULL) < 0) {
 		return -1;
+	}
 	/* forward user-defined signals */
-	if (sigaction(SIGUSR1, &sa, NULL) < 0)
+	if (sigaction(SIGUSR1, &sa, NULL) < 0) {
 		return -1;
-	if (sigaction(SIGUSR2, &sa, NULL) < 0)
+	}
+	if (sigaction(SIGUSR2, &sa, NULL) < 0) {
 		return -1;
+	}
 	/* propagate job control requests to worker */
 	sa.sa_handler = forwardsignal;
 	sa.sa_flags = SA_RESTART;
-	if (sigaction(SIGCONT, &sa, NULL) < 0)
+	if (sigaction(SIGCONT, &sa, NULL) < 0) {
 		return -1;
+	}
 	sa.sa_handler = handlestopsignal;
 	sa.sa_flags = SA_RESTART;
-	if (sigaction(SIGTSTP, &sa, NULL) < 0)
+	if (sigaction(SIGTSTP, &sa, NULL) < 0) {
 		return -1;
+	}
 
 	return 0;
 }
@@ -127,24 +144,31 @@
 	memset(&sa, 0, sizeof(sa));
 	sa.sa_handler = SIG_DFL;
 	sa.sa_flags = SA_RESTART;
-	if (sigemptyset(&sa.sa_mask) < 0)
+	if (sigemptyset(&sa.sa_mask) < 0) {
 		return -1;
+	}
 
-	if (sigaction(SIGHUP, &sa, NULL) < 0)
+	if (sigaction(SIGHUP, &sa, NULL) < 0) {
 		return -1;
-	if (sigaction(SIGTERM, &sa, NULL) < 0)
+	}
+	if (sigaction(SIGTERM, &sa, NULL) < 0) {
 		return -1;
-	if (sigaction(SIGWINCH, &sa, NULL) < 0)
+	}
+	if (sigaction(SIGWINCH, &sa, NULL) < 0) {
 		return -1;
-	if (sigaction(SIGCONT, &sa, NULL) < 0)
+	}
+	if (sigaction(SIGCONT, &sa, NULL) < 0) {
 		return -1;
-	if (sigaction(SIGTSTP, &sa, NULL) < 0)
+	}
+	if (sigaction(SIGTSTP, &sa, NULL) < 0) {
 		return -1;
+	}
 
 	/* ignore Ctrl+C while shutting down to make the pager exit cleanly */
 	sa.sa_handler = SIG_IGN;
-	if (sigaction(SIGINT, &sa, NULL) < 0)
+	if (sigaction(SIGINT, &sa, NULL) < 0) {
 		return -1;
+	}
 
 	peerpid = 0;
 	return 0;
--- a/rust/hg-core/Cargo.toml	Tue Mar 19 09:23:35 2019 -0400
+++ b/rust/hg-core/Cargo.toml	Wed Apr 17 13:41:18 2019 -0400
@@ -6,3 +6,7 @@
 
 [lib]
 name = "hg"
+
+[dev-dependencies]
+rand = "*"
+rand_pcg = "*"
--- a/rust/hg-core/src/ancestors.rs	Tue Mar 19 09:23:35 2019 -0400
+++ b/rust/hg-core/src/ancestors.rs	Wed Apr 17 13:41:18 2019 -0400
@@ -38,6 +38,7 @@
 pub struct MissingAncestors<G: Graph> {
     graph: G,
     bases: HashSet<Revision>,
+    max_base: Revision,
 }
 
 impl<G: Graph> AncestorsIterator<G> {
@@ -79,8 +80,7 @@
 
     #[inline]
     fn conditionally_push_rev(&mut self, rev: Revision) {
-        if self.stoprev <= rev && !self.seen.contains(&rev) {
-            self.seen.insert(rev);
+        if self.stoprev <= rev && self.seen.insert(rev) {
             self.visit.push(rev);
         }
     }
@@ -154,11 +154,10 @@
             Ok(ps) => ps,
             Err(e) => return Some(Err(e)),
         };
-        if p1 < self.stoprev || self.seen.contains(&p1) {
+        if p1 < self.stoprev || !self.seen.insert(p1) {
             self.visit.pop();
         } else {
             *(self.visit.peek_mut().unwrap()) = p1;
-            self.seen.insert(p1);
         };
 
         self.conditionally_push_rev(p2);
@@ -211,15 +210,17 @@
 
 impl<G: Graph> MissingAncestors<G> {
     pub fn new(graph: G, bases: impl IntoIterator<Item = Revision>) -> Self {
-        let mut bases: HashSet<Revision> = bases.into_iter().collect();
-        if bases.is_empty() {
-            bases.insert(NULL_REVISION);
-        }
-        MissingAncestors { graph, bases }
+        let mut created = MissingAncestors {
+            graph: graph,
+            bases: HashSet::new(),
+            max_base: NULL_REVISION,
+        };
+        created.add_bases(bases);
+        created
     }
 
     pub fn has_bases(&self) -> bool {
-        self.bases.iter().any(|&b| b != NULL_REVISION)
+        !self.bases.is_empty()
     }
 
     /// Return a reference to current bases.
@@ -238,16 +239,33 @@
     }
 
     /// Consumes the object and returns the relative heads of its bases.
-    pub fn into_bases_heads(mut self) -> Result<HashSet<Revision>, GraphError> {
+    pub fn into_bases_heads(
+        mut self,
+    ) -> Result<HashSet<Revision>, GraphError> {
         dagops::retain_heads(&self.graph, &mut self.bases)?;
         Ok(self.bases)
     }
 
+    /// Add some revisions to `self.bases`
+    ///
+    /// Takes care of keeping `self.max_base` up to date.
     pub fn add_bases(
         &mut self,
         new_bases: impl IntoIterator<Item = Revision>,
     ) {
-        self.bases.extend(new_bases);
+        let mut max_base = self.max_base;
+        self.bases.extend(
+            new_bases
+                .into_iter()
+                .filter(|&rev| rev != NULL_REVISION)
+                .map(|r| {
+                    if r > max_base {
+                        max_base = r;
+                    }
+                    r
+                }),
+        );
+        self.max_base = max_base;
     }
 
     /// Remove all ancestors of self.bases from the revs set (in place)
@@ -256,28 +274,26 @@
         revs: &mut HashSet<Revision>,
     ) -> Result<(), GraphError> {
         revs.retain(|r| !self.bases.contains(r));
-        // the null revision is always an ancestor
+        // the null revision is always an ancestor. Logically speaking
+        // it's debatable in case bases is empty, but the Python
+        // implementation always adds NULL_REVISION to bases, making it
+        // unconditionally true.
         revs.remove(&NULL_REVISION);
         if revs.is_empty() {
             return Ok(());
         }
         // anything in revs > start is definitely not an ancestor of bases
         // revs <= start need to be investigated
-        // TODO optim: if a missingancestors is to be used several times,
-        // we shouldn't need to iterate each time on bases
-        let start = match self.bases.iter().cloned().max() {
-            Some(m) => m,
-            None => {
-                // bases is empty (shouldn't happen, but let's be safe)
-                return Ok(());
-            }
-        };
+        if self.max_base == NULL_REVISION {
+            return Ok(());
+        }
+
         // whatever happens, we'll keep at least keepcount of them
         // knowing this gives us an earlier stop condition than
         // going all the way to the root
-        let keepcount = revs.iter().filter(|r| **r > start).count();
+        let keepcount = revs.iter().filter(|r| **r > self.max_base).count();
 
-        let mut curr = start;
+        let mut curr = self.max_base;
         while curr != NULL_REVISION && revs.len() > keepcount {
             if self.bases.contains(&curr) {
                 revs.remove(&curr);
@@ -288,12 +304,17 @@
         Ok(())
     }
 
-    /// Add rev's parents to self.bases
+    /// Add the parents of `rev` to `self.bases`
+    ///
+    /// This has no effect on `self.max_base`
     #[inline]
     fn add_parents(&mut self, rev: Revision) -> Result<(), GraphError> {
-        // No need to bother the set with inserting NULL_REVISION over and
-        // over
+        if rev == NULL_REVISION {
+            return Ok(());
+        }
         for p in self.graph.parents(rev)?.iter().cloned() {
+            // No need to bother the set with inserting NULL_REVISION over and
+            // over
             if p != NULL_REVISION {
                 self.bases.insert(p);
             }
@@ -323,12 +344,8 @@
         if revs_visit.is_empty() {
             return Ok(Vec::new());
         }
-
-        let max_bases =
-            bases_visit.iter().cloned().max().unwrap_or(NULL_REVISION);
-        let max_revs =
-            revs_visit.iter().cloned().max().unwrap_or(NULL_REVISION);
-        let start = max(max_bases, max_revs);
+        let max_revs = revs_visit.iter().cloned().max().unwrap();
+        let start = max(self.max_base, max_revs);
 
         // TODO heuristics for with_capacity()?
         let mut missing: Vec<Revision> = Vec::new();
@@ -336,12 +353,9 @@
             if revs_visit.is_empty() {
                 break;
             }
-            if both_visit.contains(&curr) {
+            if both_visit.remove(&curr) {
                 // curr's parents might have made it into revs_visit through
                 // another path
-                // TODO optim: Rust's HashSet.remove returns a boolean telling
-                // if it happened. This will spare us one set lookup
-                both_visit.remove(&curr);
                 for p in self.graph.parents(curr)?.iter().cloned() {
                     if p == NULL_REVISION {
                         continue;
@@ -356,13 +370,14 @@
                     if p == NULL_REVISION {
                         continue;
                     }
-                    if bases_visit.contains(&p) || both_visit.contains(&p) {
-                        // p is an ancestor of revs_visit, and is implicitly
-                        // in bases_visit, which means p is ::revs & ::bases.
-                        // TODO optim: hence if bothvisit, we look up twice
+                    if bases_visit.contains(&p) {
+                        // p is already known to be an ancestor of revs_visit
+                        revs_visit.remove(&p);
+                        both_visit.insert(p);
+                    } else if both_visit.contains(&p) {
+                        // p should have been in bases_visit
                         revs_visit.remove(&p);
                         bases_visit.insert(p);
-                        both_visit.insert(p);
                     } else {
                         // visit later
                         revs_visit.insert(p);
@@ -373,11 +388,9 @@
                     if p == NULL_REVISION {
                         continue;
                     }
-                    if revs_visit.contains(&p) || both_visit.contains(&p) {
+                    if revs_visit.remove(&p) || both_visit.contains(&p) {
                         // p is an ancestor of bases_visit, and is implicitly
                         // in revs_visit, which means p is ::revs & ::bases.
-                        // TODO optim: hence if bothvisit, we look up twice
-                        revs_visit.remove(&p);
                         bases_visit.insert(p);
                         both_visit.insert(p);
                     } else {
@@ -578,11 +591,13 @@
             missing_ancestors.get_bases().iter().cloned().collect();
         as_vec.sort();
         assert_eq!(as_vec, [1, 3, 5]);
+        assert_eq!(missing_ancestors.max_base, 5);
 
         missing_ancestors.add_bases([3, 7, 8].iter().cloned());
         as_vec = missing_ancestors.get_bases().iter().cloned().collect();
         as_vec.sort();
         assert_eq!(as_vec, [1, 3, 5, 7, 8]);
+        assert_eq!(missing_ancestors.max_base, 8);
 
         as_vec = missing_ancestors.bases_heads()?.iter().cloned().collect();
         as_vec.sort();
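
The ``max_base`` field caches what used to be a per-call ``bases.iter().max()``
scan. A minimal Python sketch of the pruning loop, assuming a hypothetical
``parents(rev)`` callable that returns the two parents with ``-1`` for a
missing one (an illustration, not the hg implementation)::

   def remove_ancestors_from(parents, bases, max_base, revs):
       revs.discard(-1)           # the null revision is always an ancestor
       if max_base == -1:         # bases holds only null: nothing to prune
           return
       # revs above max_base can never be ancestors of bases
       keepcount = sum(1 for r in revs if r > max_base)
       curr = max_base
       while curr != -1 and len(revs) > keepcount:
           if curr in bases:
               revs.discard(curr)
               for p in parents(curr):
                   if p != -1:
                       bases.add(p)   # parents are numbered below curr
           curr -= 1

Caching the maximum stays sound because ``add_parents`` only inserts revisions
numbered below an existing base, so ``max_base`` can only change through
``add_bases``.
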
--- a/rust/hg-core/src/dagops.rs	Tue Mar 19 09:23:35 2019 -0400
+++ b/rust/hg-core/src/dagops.rs	Wed Apr 17 13:41:18 2019 -0400
@@ -46,7 +46,9 @@
     let mut heads: HashSet<Revision> = iter_revs.clone().cloned().collect();
     heads.remove(&NULL_REVISION);
     for rev in iter_revs {
-        remove_parents(graph, *rev, &mut heads)?;
+        if *rev != NULL_REVISION {
+            remove_parents(graph, *rev, &mut heads)?;
+        }
     }
     Ok(heads)
 }
@@ -71,7 +73,9 @@
     // mutating
     let as_vec: Vec<Revision> = revs.iter().cloned().collect();
     for rev in as_vec {
-        remove_parents(graph, rev, revs)?;
+        if rev != NULL_REVISION {
+            remove_parents(graph, rev, revs)?;
+        }
     }
     Ok(())
 }
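
Both hunks add the same guard: the null revision has no parents to look up,
so ``remove_parents`` must never be called on it. A Python sketch of the
guarded heads computation (``parents`` as above, purely illustrative)::

   def heads(parents, revs):
       hs = set(revs)
       hs.discard(-1)             # null is never a head
       for rev in revs:
           if rev == -1:          # the guard added by this change
               continue
           for p in parents(rev):
               hs.discard(p)      # anything with a child in the set loses
       return hs
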
--- a/rust/hg-core/src/lib.rs	Tue Mar 19 09:23:35 2019 -0400
+++ b/rust/hg-core/src/lib.rs	Wed Apr 17 13:41:18 2019 -0400
@@ -5,8 +5,7 @@
 mod ancestors;
 pub mod dagops;
 pub use ancestors::{AncestorsIterator, LazyAncestors, MissingAncestors};
-#[cfg(test)]
-pub mod testing;
+pub mod testing;  // unconditionally built, for use from integration tests
 
 /// Mercurial revision numbers
 ///
@@ -14,6 +13,11 @@
 /// 4 bytes, and are liberally converted to ints, whence the i32
 pub type Revision = i32;
 
+
+/// Marker expressing the absence of a parent
+///
+/// Independently of the actual representation, `NULL_REVISION` is guaranteed
+/// to be smaller than all existing revisions.
 pub const NULL_REVISION: Revision = -1;
 
 /// Same as `mercurial.node.wdirrev`
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/rust/hg-core/tests/test_missing_ancestors.rs	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,340 @@
+extern crate hg;
+extern crate rand;
+extern crate rand_pcg;
+
+use hg::testing::VecGraph;
+use hg::Revision;
+use hg::*;
+use rand::distributions::{Distribution, LogNormal, Uniform};
+use rand::{thread_rng, Rng, RngCore, SeedableRng};
+use std::cmp::min;
+use std::collections::HashSet;
+use std::env;
+use std::fmt::Debug;
+
+fn build_random_graph(
+    nodes_opt: Option<usize>,
+    rootprob_opt: Option<f64>,
+    mergeprob_opt: Option<f64>,
+    prevprob_opt: Option<f64>,
+) -> VecGraph {
+    let nodes = nodes_opt.unwrap_or(100);
+    let rootprob = rootprob_opt.unwrap_or(0.05);
+    let mergeprob = mergeprob_opt.unwrap_or(0.2);
+    let prevprob = prevprob_opt.unwrap_or(0.7);
+
+    let mut rng = thread_rng();
+    let mut vg: VecGraph = Vec::with_capacity(nodes);
+    for i in 0..nodes {
+        if i == 0 || rng.gen_bool(rootprob) {
+            vg.push([NULL_REVISION, NULL_REVISION])
+        } else if i == 1 {
+            vg.push([0, NULL_REVISION])
+        } else if rng.gen_bool(mergeprob) {
+            let p1 = {
+                if i == 2 || rng.gen_bool(prevprob) {
+                    (i - 1) as Revision
+                } else {
+                    rng.gen_range(0, i - 1) as Revision
+                }
+            };
+            // p2 is a random revision lower than i and different from p1
+            let mut p2 = rng.gen_range(0, i - 1) as Revision;
+            if p2 >= p1 {
+                p2 += 1;
+            }
+            vg.push([p1, p2]);
+        } else if rng.gen_bool(prevprob) {
+            vg.push([(i - 1) as Revision, NULL_REVISION])
+        } else {
+            vg.push([rng.gen_range(0, i - 1) as Revision, NULL_REVISION])
+        }
+    }
+    vg
+}
+
+/// Compute the ancestors set of all revisions of a VecGraph
+fn ancestors_sets(vg: &VecGraph) -> Vec<HashSet<Revision>> {
+    let mut ancs: Vec<HashSet<Revision>> = Vec::new();
+    for i in 0..vg.len() {
+        let mut ancs_i = HashSet::new();
+        ancs_i.insert(i as Revision);
+        for p in vg[i].iter().cloned() {
+            if p != NULL_REVISION {
+                ancs_i.extend(&ancs[p as usize]);
+            }
+        }
+        ancs.push(ancs_i);
+    }
+    ancs
+}
+
+#[derive(Clone, Debug)]
+enum MissingAncestorsAction {
+    InitialBases(HashSet<Revision>),
+    AddBases(HashSet<Revision>),
+    RemoveAncestorsFrom(HashSet<Revision>),
+    MissingAncestors(HashSet<Revision>),
+}
+
+/// An instrumented naive yet obviously correct implementation
+///
+/// It also records all its actions for easy replay of
+/// problematic cases
+struct NaiveMissingAncestors<'a> {
+    ancestors_sets: &'a Vec<HashSet<Revision>>,
+    graph: &'a VecGraph, // used for error reporting only
+    bases: HashSet<Revision>,
+    history: Vec<MissingAncestorsAction>,
+    // for error reporting, assuming we are in a random test
+    random_seed: String,
+}
+
+impl<'a> NaiveMissingAncestors<'a> {
+    fn new(
+        graph: &'a VecGraph,
+        ancestors_sets: &'a Vec<HashSet<Revision>>,
+        bases: &HashSet<Revision>,
+        random_seed: &str,
+    ) -> Self {
+        Self {
+            ancestors_sets,
+            bases: bases.clone(),
+            graph,
+            history: vec![MissingAncestorsAction::InitialBases(bases.clone())],
+            random_seed: random_seed.into(),
+        }
+    }
+
+    fn add_bases(&mut self, new_bases: HashSet<Revision>) {
+        self.bases.extend(&new_bases);
+        self.history
+            .push(MissingAncestorsAction::AddBases(new_bases))
+    }
+
+    fn remove_ancestors_from(&mut self, revs: &mut HashSet<Revision>) {
+        revs.remove(&NULL_REVISION);
+        self.history
+            .push(MissingAncestorsAction::RemoveAncestorsFrom(revs.clone()));
+        for base in self.bases.iter().cloned() {
+            if base != NULL_REVISION {
+                for rev in &self.ancestors_sets[base as usize] {
+                    revs.remove(&rev);
+                }
+            }
+        }
+    }
+
+    fn missing_ancestors(
+        &mut self,
+        revs: impl IntoIterator<Item = Revision>,
+    ) -> Vec<Revision> {
+        let revs_as_set: HashSet<Revision> = revs.into_iter().collect();
+
+        let mut missing: HashSet<Revision> = HashSet::new();
+        for rev in revs_as_set.iter().cloned() {
+            if rev != NULL_REVISION {
+                missing.extend(&self.ancestors_sets[rev as usize])
+            }
+        }
+        self.history
+            .push(MissingAncestorsAction::MissingAncestors(revs_as_set));
+
+        for base in self.bases.iter().cloned() {
+            if base != NULL_REVISION {
+                for rev in &self.ancestors_sets[base as usize] {
+                    missing.remove(&rev);
+                }
+            }
+        }
+        let mut res: Vec<Revision> = missing.iter().cloned().collect();
+        res.sort();
+        res
+    }
+
+    fn assert_eq<T>(&self, left: T, right: T)
+    where
+        T: PartialEq + Debug,
+    {
+        if left == right {
+            return;
+        }
+        panic!(
+            "Equality assertion failed (left != right)
+                left={:?}
+                right={:?}
+                graph={:?}
+                current bases={:?}
+                history={:?}
+                random seed={}
+            ",
+            left,
+            right,
+            self.graph,
+            self.bases,
+            self.history,
+            self.random_seed,
+        );
+    }
+}
+
+/// Choose a set of random revisions
+///
+/// The size of the set is taken from a LogNormal distribution
+/// with default mu=1.1 and default sigma=0.8. Quoting the Python
+/// test this is taken from:
+///     the default mu and sigma give us a nice distribution of mostly
+///     single-digit counts (including 0) with some higher ones
+/// The sample may include NULL_REVISION
+fn sample_revs<R: RngCore>(
+    rng: &mut R,
+    maxrev: Revision,
+    mu_opt: Option<f64>,
+    sigma_opt: Option<f64>,
+) -> HashSet<Revision> {
+    let mu = mu_opt.unwrap_or(1.1);
+    let sigma = sigma_opt.unwrap_or(0.8);
+
+    let log_normal = LogNormal::new(mu, sigma);
+    let nb = min(maxrev as usize, log_normal.sample(rng).floor() as usize);
+
+    let dist = Uniform::from(NULL_REVISION..maxrev);
+    rng.sample_iter(&dist).take(nb).collect()
+}
+
+/// Produces the hexadecimal representation of a slice of bytes
+fn hex_bytes(bytes: &[u8]) -> String {
+    let mut s = String::with_capacity(bytes.len() * 2);
+    for b in bytes {
+        s.push_str(&format!("{:02x}", b));
+    }
+    s
+}
+
+/// Fill a random seed from its hexadecimal representation.
+///
+/// This signature is meant to be consistent with `RngCore::fill_bytes`
+fn seed_parse_in(hex: &str, seed: &mut [u8]) {
+    if hex.len() != 32 {
+        panic!("Seed {} is too short for 128 bits hex", hex);
+    }
+    for i in 0..16 {
+        seed[i] = u8::from_str_radix(&hex[2 * i..2 * (i + 1)], 16)
+            .unwrap_or_else(|_e| panic!("Seed {} is not 128 bits hex", hex));
+    }
+}
+
+/// Parse the parameters for `test_missing_ancestors()`
+///
+/// Returns (graphs, instances, calls per instance)
+fn parse_test_missing_ancestors_params(var: &str) -> (usize, usize, usize) {
+    let err_msg = "TEST_MISSING_ANCESTORS format: GRAPHS,INSTANCES,CALLS";
+    let params: Vec<usize> = var
+        .split(',')
+        .map(|n| n.trim().parse().expect(err_msg))
+        .collect();
+    if params.len() != 3 {
+        panic!(err_msg);
+    }
+    (params[0], params[1], params[2])
+}
+
+#[test]
+/// This test creates lots of random VecGraphs,
+/// and compares a bunch of MissingAncestors for them with
+/// NaiveMissingAncestors that rely on precomputed transitive closures of
+/// these VecGraphs (ancestors_sets).
+///
+/// For each generated graph, several instances of `MissingAncestors` are
+/// created, whose methods are called and checked a given number of times.
+///
+/// This test can be parametrized by two environment variables:
+///
+/// - TEST_RANDOM_SEED: must be 128 bits in hexadecimal
+/// - TEST_MISSING_ANCESTORS: "GRAPHS,INSTANCES,CALLS". The default is
+///   "100,10,10"
+///
+/// This is slow: with the default parameters, it runs in about 5 seconds
+/// on my workstation under a plain `cargo test`.
+///
+/// If you want to run it faster, especially if you're changing the
+/// parameters, use `cargo test --release`.
+/// For me, that gets it down to 0.15 seconds with the default parameters.
+fn test_missing_ancestors_compare_naive() {
+    let (graphcount, testcount, inccount) =
+        match env::var("TEST_MISSING_ANCESTORS") {
+            Err(env::VarError::NotPresent) => (100, 10, 10),
+            Ok(val) => parse_test_missing_ancestors_params(&val),
+            Err(env::VarError::NotUnicode(_)) => {
+                panic!("TEST_MISSING_ANCESTORS is invalid");
+            }
+        };
+    let mut seed: [u8; 16] = [0; 16];
+    match env::var("TEST_RANDOM_SEED") {
+        Ok(val) => {
+            seed_parse_in(&val, &mut seed);
+        }
+        Err(env::VarError::NotPresent) => {
+            thread_rng().fill_bytes(&mut seed);
+        }
+        Err(env::VarError::NotUnicode(_)) => {
+            panic!("TEST_RANDOM_SEED must be 128 bits in hex");
+        }
+    }
+    let hex_seed = hex_bytes(&seed);
+    eprintln!("Random seed: {}", hex_seed);
+
+    let mut rng = rand_pcg::Pcg32::from_seed(seed);
+
+    eprint!("Checking MissingAncestors against brute force implementation ");
+    eprint!("for {} random graphs, ", graphcount);
+    eprintln!(
+        "with {} instances for each and {} calls per instance",
+        testcount, inccount,
+    );
+    for g in 0..graphcount {
+        if g != 0 && g % 100 == 0 {
+            eprintln!("Tested with {} graphs", g);
+        }
+        let graph = build_random_graph(None, None, None, None);
+        let graph_len = graph.len() as Revision;
+        let ancestors_sets = ancestors_sets(&graph);
+        for _testno in 0..testcount {
+            let bases: HashSet<Revision> =
+                sample_revs(&mut rng, graph_len, None, None);
+            let mut inc = MissingAncestors::<VecGraph>::new(
+                graph.clone(),
+                bases.clone(),
+            );
+            let mut naive = NaiveMissingAncestors::new(
+                &graph,
+                &ancestors_sets,
+                &bases,
+                &hex_seed,
+            );
+            for _m in 0..inccount {
+                if rng.gen_bool(0.2) {
+                    let new_bases =
+                        sample_revs(&mut rng, graph_len, None, None);
+                    inc.add_bases(new_bases.iter().cloned());
+                    naive.add_bases(new_bases);
+                }
+                if rng.gen_bool(0.4) {
+                    // larger set so that there are more revs to remove from
+                    let mut hrevs =
+                        sample_revs(&mut rng, graph_len, Some(1.5), None);
+                    let mut rrevs = hrevs.clone();
+                    inc.remove_ancestors_from(&mut hrevs).unwrap();
+                    naive.remove_ancestors_from(&mut rrevs);
+                    naive.assert_eq(hrevs, rrevs);
+                } else {
+                    let revs = sample_revs(&mut rng, graph_len, None, None);
+                    let hm =
+                        inc.missing_ancestors(revs.iter().cloned()).unwrap();
+                    let rm = naive.missing_ancestors(revs.iter().cloned());
+                    naive.assert_eq(hm, rm);
+                }
+            }
+        }
+    }
+}
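
The oracle in this test is the precomputed transitive closure, which is what
makes the naive implementation obviously correct. The same idea in compact
Python (helper names are illustrative)::

   def ancestors_sets(vg):
       # vg[i] = (p1, p2), -1 for a missing parent; a rev is its own ancestor
       ancs = []
       for i, ps in enumerate(vg):
           s = {i}
           for p in ps:
               if p != -1:
                   s |= ancs[p]   # parents have smaller numbers: already done
           ancs.append(s)
       return ancs

   def naive_missing_ancestors(ancs, bases, revs):
       missing = set()
       for r in revs:
           if r != -1:
               missing |= ancs[r]
       for b in bases:
           if b != -1:
               missing -= ancs[b]
       return sorted(missing)

When a comparison fails, the panic message includes the seed, so running
``TEST_RANDOM_SEED=<that hex> cargo test`` replays the exact action sequence
recorded in ``history``.
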
--- a/rust/hg-cpython/src/ancestors.rs	Tue Mar 19 09:23:35 2019 -0400
+++ b/rust/hg-cpython/src/ancestors.rs	Wed Apr 17 13:41:18 2019 -0400
@@ -34,11 +34,11 @@
 //! [`LazyAncestors`]: struct.LazyAncestors.html
 //! [`MissingAncestors`]: struct.MissingAncestors.html
 //! [`AncestorsIterator`]: struct.AncestorsIterator.html
-use crate::conversion::rev_pyiter_collect;
+use crate::conversion::{py_set, rev_pyiter_collect};
 use cindex::Index;
 use cpython::{
     ObjectProtocol, PyClone, PyDict, PyList, PyModule, PyObject, PyResult,
-    PyTuple, Python, PythonObject, ToPyObject,
+    Python, PythonObject, ToPyObject,
 };
 use exceptions::GraphError;
 use hg::Revision;
@@ -90,24 +90,6 @@
     }
 }
 
-/// Copy and convert an `HashSet<Revision>` in a Python set
-///
-/// This will probably turn useless once `PySet` support lands in
-/// `rust-cpython`.
-///
-/// This builds a Python tuple, then calls Python's "set()" on it
-fn py_set(py: Python, set: &HashSet<Revision>) -> PyResult<PyObject> {
-    let as_vec: Vec<PyObject> = set
-        .iter()
-        .map(|rev| rev.to_py_object(py).into_object())
-        .collect();
-    let as_pytuple = PyTuple::new(py, as_vec.as_slice());
-
-    let locals = PyDict::new(py);
-    locals.set_item(py, "obj", as_pytuple.to_py_object(py))?;
-    py.eval("set(obj)", None, Some(&locals))
-}
-
 py_class!(pub class LazyAncestors |py| {
     data inner: RefCell<Box<CoreLazy<Index>>>;
 
--- a/rust/hg-cpython/src/conversion.rs	Tue Mar 19 09:23:35 2019 -0400
+++ b/rust/hg-cpython/src/conversion.rs	Wed Apr 17 13:41:18 2019 -0400
@@ -8,8 +8,12 @@
 //! Bindings for the hg::ancestors module provided by the
 //! `hg-core` crate. From Python, this will be seen as `rustext.ancestor`
 
-use cpython::{ObjectProtocol, PyObject, PyResult, Python};
+use cpython::{
+    ObjectProtocol, PyDict, PyObject, PyResult, PyTuple, Python, PythonObject,
+    ToPyObject,
+};
 use hg::Revision;
+use std::collections::HashSet;
 use std::iter::FromIterator;
 
 /// Utility function to convert a Python iterable into various collections
@@ -26,3 +30,21 @@
         .map(|r| r.and_then(|o| o.extract::<Revision>(py)))
         .collect()
 }
+
+/// Copy and convert a `HashSet<Revision>` into a Python set
+///
+/// This will probably become useless once `PySet` support lands in
+/// `rust-cpython`.
+///
+/// This builds a Python tuple, then calls Python's "set()" on it
+pub fn py_set(py: Python, set: &HashSet<Revision>) -> PyResult<PyObject> {
+    let as_vec: Vec<PyObject> = set
+        .iter()
+        .map(|rev| rev.to_py_object(py).into_object())
+        .collect();
+    let as_pytuple = PyTuple::new(py, as_vec.as_slice());
+
+    let locals = PyDict::new(py);
+    locals.set_item(py, "obj", as_pytuple.to_py_object(py))?;
+    py.eval("set(obj)", None, Some(&locals))
+}
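
Moving ``py_set`` here makes it shareable between the ``ancestors`` and
``dagops`` bindings. On the Python side, the evaluation it performs amounts
to (values illustrative)::

   obj = (1, 3, 5)    # the tuple built from the Rust HashSet
   result = eval('set(obj)', None, {'obj': obj})
   assert result == {1, 3, 5}
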
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/rust/hg-cpython/src/dagops.rs	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,53 @@
+// dagops.rs
+//
+// Copyright 2019 Georges Racinet <georges.racinet@octobus.net>
+//
+// This software may be used and distributed according to the terms of the
+// GNU General Public License version 2 or any later version.
+
+//! Bindings for the `hg::dagops` module provided by the
+//! `hg-core` crate.
+//!
+//! From Python, this will be seen as `mercurial.rustext.dagop`
+use cindex::Index;
+use cpython::{PyDict, PyModule, PyObject, PyResult, Python};
+use crate::conversion::{py_set, rev_pyiter_collect};
+use exceptions::GraphError;
+use hg::dagops;
+use hg::Revision;
+use std::collections::HashSet;
+
+/// Using the `index`, return heads out of any Python iterable of Revisions
+///
+/// This is the Rust counterpart for `mercurial.dagop.headrevs`
+pub fn headrevs(
+    py: Python,
+    index: PyObject,
+    revs: PyObject,
+) -> PyResult<PyObject> {
+    let mut as_set: HashSet<Revision> = rev_pyiter_collect(py, &revs)?;
+    dagops::retain_heads(&Index::new(py, index)?, &mut as_set)
+        .map_err(|e| GraphError::pynew(py, e))?;
+    py_set(py, &as_set)
+}
+
+/// Create the module, with `__package__` given from parent
+pub fn init_module(py: Python, package: &str) -> PyResult<PyModule> {
+    let dotted_name = &format!("{}.dagop", package);
+    let m = PyModule::new(py, dotted_name)?;
+    m.add(py, "__package__", package)?;
+    m.add(py, "__doc__", "DAG operations - Rust implementation")?;
+    m.add(
+        py,
+        "headrevs",
+        py_fn!(py, headrevs(index: PyObject, revs: PyObject)),
+    )?;
+
+    let sys = PyModule::import(py, "sys")?;
+    let sys_modules: PyDict = sys.get(py, "modules")?.extract(py)?;
+    sys_modules.set_item(py, dotted_name, &m)?;
+    // Example C code (see pyexpat.c and import.c) will "give away the
+    // reference", but we won't because it will be consumed once the
+    // Rust PyObject is dropped.
+    Ok(m)
+}
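
The explicit ``sys.modules`` entry is what lets ``import
mercurial.rustext.dagop`` resolve a module that no finder knows about. A
stand-alone Python sketch of the same registration trick (module names
illustrative)::

   import sys
   import types

   parent = types.ModuleType('rustext')
   child = types.ModuleType('rustext.dagop')
   sys.modules['rustext'] = parent
   sys.modules['rustext.dagop'] = child
   parent.dagop = child           # mirrors m.add(py, "dagop", ...)

   import rustext.dagop           # satisfied straight from sys.modules
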
--- a/rust/hg-cpython/src/lib.rs	Tue Mar 19 09:23:35 2019 -0400
+++ b/rust/hg-cpython/src/lib.rs	Wed Apr 17 13:41:18 2019 -0400
@@ -27,6 +27,7 @@
 pub mod ancestors;
 mod cindex;
 mod conversion;
+pub mod dagops;
 pub mod exceptions;
 
 py_module_initializer!(rustext, initrustext, PyInit_rustext, |py, m| {
@@ -38,6 +39,7 @@
 
     let dotted_name: String = m.get(py, "__name__")?.extract(py)?;
     m.add(py, "ancestor", ancestors::init_module(py, &dotted_name)?)?;
+    m.add(py, "dagop", dagops::init_module(py, &dotted_name)?)?;
     m.add(py, "GraphError", py.get_type::<exceptions::GraphError>())?;
     Ok(())
 });
--- a/setup.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/setup.py	Wed Apr 17 13:41:18 2019 -0400
@@ -240,9 +240,9 @@
 except ImportError:
     py2exeloaded = False
 
-def runcmd(cmd, env):
+def runcmd(cmd, env, cwd=None):
     p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
-                         stderr=subprocess.PIPE, env=env)
+                         stderr=subprocess.PIPE, env=env, cwd=cwd)
     out, err = p.communicate()
     return p.returncode, out, err
 
@@ -437,10 +437,9 @@
     pure = False
     cffi = ispypy
 
-    global_options = Distribution.global_options + \
-                     [('pure', None, "use pure (slow) Python "
-                        "code instead of C extensions"),
-                     ]
+    global_options = Distribution.global_options + [
+        ('pure', None, "use pure (slow) Python code instead of C extensions"),
+    ]
 
     def has_ext_modules(self):
         # self.ext_modules is emptied in hgbuildpy.finalize_options which is
@@ -584,9 +583,9 @@
         if err or returncode != 0:
             raise DistutilsExecError(err)
 
-        with open(self._indexfilename, 'w') as f:
-            f.write('# this file is autogenerated by setup.py\n')
-            f.write('docs = ')
+        with open(self._indexfilename, 'wb') as f:
+            f.write(b'# this file is autogenerated by setup.py\n')
+            f.write(b'docs = ')
             f.write(out)
 
 class buildhgexe(build_ext):
@@ -666,7 +665,7 @@
             self.addlongpathsmanifest()
 
     def addlongpathsmanifest(self):
-        """Add manifest pieces so that hg.exe understands long paths
+        r"""Add manifest pieces so that hg.exe understands long paths
 
         This is an EXPERIMENTAL feature, use with care.
         To enable long paths support, one needs to do two things:
@@ -703,6 +702,117 @@
         dir = os.path.dirname(self.get_ext_fullpath('dummy'))
         return os.path.join(self.build_temp, dir, 'hg.exe')
 
+class hgbuilddoc(Command):
+    description = 'build documentation'
+    user_options = [
+        ('man', None, 'generate man pages'),
+        ('html', None, 'generate html pages'),
+    ]
+
+    def initialize_options(self):
+        self.man = None
+        self.html = None
+
+    def finalize_options(self):
+        # If --man or --html are set, only generate what we're told to.
+        # Otherwise generate everything.
+        have_subset = self.man is not None or self.html is not None
+
+        if have_subset:
+            self.man = True if self.man else False
+            self.html = True if self.html else False
+        else:
+            self.man = True
+            self.html = True
+
+    def run(self):
+        def normalizecrlf(p):
+            with open(p, 'rb') as fh:
+                orig = fh.read()
+
+            if b'\r\n' not in orig:
+                return
+
+            log.info('normalizing %s to LF line endings' % p)
+            with open(p, 'wb') as fh:
+                fh.write(orig.replace(b'\r\n', b'\n'))
+
+        def gentxt(root):
+            txt = 'doc/%s.txt' % root
+            log.info('generating %s' % txt)
+            res, out, err = runcmd(
+                [sys.executable, 'gendoc.py', root],
+                os.environ,
+                cwd='doc')
+            if res:
+                raise SystemExit('error running gendoc.py: %s' %
+                                 '\n'.join([out, err]))
+
+            with open(txt, 'wb') as fh:
+                fh.write(out)
+
+        def gengendoc(root):
+            gendoc = 'doc/%s.gendoc.txt' % root
+
+            log.info('generating %s' % gendoc)
+            res, out, err = runcmd(
+                [sys.executable, 'gendoc.py', '%s.gendoc' % root],
+                os.environ,
+                cwd='doc')
+            if res:
+                raise SystemExit('error running gendoc: %s' %
+                                 '\n'.join([out, err]))
+
+            with open(gendoc, 'wb') as fh:
+                fh.write(out)
+
+        def genman(root):
+            log.info('generating doc/%s' % root)
+            res, out, err = runcmd(
+                [sys.executable, 'runrst', 'hgmanpage', '--halt', 'warning',
+                 '--strip-elements-with-class', 'htmlonly',
+                 '%s.txt' % root, root],
+                os.environ,
+                cwd='doc')
+            if res:
+                raise SystemExit('error running runrst: %s' %
+                                 '\n'.join([out, err]))
+
+            normalizecrlf('doc/%s' % root)
+
+        def genhtml(root):
+            log.info('generating doc/%s.html' % root)
+            res, out, err = runcmd(
+                [sys.executable, 'runrst', 'html', '--halt', 'warning',
+                 '--link-stylesheet', '--stylesheet-path', 'style.css',
+                 '%s.txt' % root, '%s.html' % root],
+                os.environ,
+                cwd='doc')
+            if res:
+                raise SystemExit('error running runrst: %s' %
+                                 '\n'.join([out, err]))
+
+            normalizecrlf('doc/%s.html' % root)
+
+        # This logic is duplicated in doc/Makefile.
+        sources = {f for f in os.listdir('mercurial/help')
+                   if re.search(r'[0-9]\.txt$', f)}
+
+        # common.txt is a one-off.
+        gentxt('common')
+
+        for source in sorted(sources):
+            assert source[-4:] == '.txt'
+            root = source[:-4]
+
+            gentxt(root)
+            gengendoc(root)
+
+            if self.man:
+                genman(root)
+            if self.html:
+                genhtml(root)
+
 class hginstall(install):
 
     user_options = install.user_options + [
@@ -828,6 +938,7 @@
                 fp.write(data)
 
 cmdclass = {'build': hgbuild,
+            'build_doc': hgbuilddoc,
             'build_mo': hgbuildmo,
             'build_ext': hgbuildext,
             'build_py': hgbuildpy,
@@ -864,6 +975,12 @@
     packages.extend(['mercurial.thirdparty.concurrent',
                      'mercurial.thirdparty.concurrent.futures'])
 
+if 'HG_PY2EXE_EXTRA_INSTALL_PACKAGES' in os.environ:
+    # py2exe can't cope with namespace packages very well, so we have to
+    # install any hgext3rd.* extensions that we want in the final py2exe
+    # image here. This is gross, but you gotta do what you gotta do.
+    packages.extend(os.environ['HG_PY2EXE_EXTRA_INSTALL_PACKAGES'].split(' '))
+
 common_depends = ['mercurial/bitmanipulation.h',
                   'mercurial/compat.h',
                   'mercurial/cext/util.h']
@@ -973,7 +1090,8 @@
         except subprocess.CalledProcessError:
             raise RustCompilationError(
                 "Cargo failed. Working directory: %r, "
-                "command: %r, environment: %r" % (self.rustsrcdir, cmd, env))
+                "command: %r, environment: %r"
+                % (self.rustsrcdir, cargocmd, env))
 
 class RustEnhancedExtension(RustExtension):
     """A C Extension, conditionally enhanced with Rust code.
@@ -1129,18 +1247,51 @@
 
 extra = {}
 
+py2exepackages = [
+    'hgdemandimport',
+    'hgext3rd',
+    'hgext',
+    'email',
+    # implicitly imported per module policy
+    # (cffi wouldn't be used as a frozen exe)
+    'mercurial.cext',
+    #'mercurial.cffi',
+    'mercurial.pure',
+]
+
+py2exeexcludes = []
+py2exedllexcludes = ['crypt32.dll']
+
 if issetuptools:
     extra['python_requires'] = supportedpy
+
 if py2exeloaded:
     extra['console'] = [
         {'script':'hg',
          'copyright':'Copyright (C) 2005-2019 Matt Mackall and others',
          'product_version':version}]
-    # sub command of 'build' because 'py2exe' does not handle sub_commands
-    build.sub_commands.insert(0, ('build_hgextindex', None))
+    # Sub command of 'build' because 'py2exe' does not handle sub_commands.
+    # Need to override hgbuild because it has a private copy of
+    # build.sub_commands.
+    hgbuild.sub_commands.insert(0, ('build_hgextindex', None))
     # put dlls in sub directory so that they won't pollute PATH
     extra['zipfile'] = 'lib/library.zip'
 
+    # We allow some configuration to be supplemented via environment
+    # variables. This is better than setup.cfg files because it allows
+    # supplementing configs instead of replacing them.
+    extrapackages = os.environ.get('HG_PY2EXE_EXTRA_PACKAGES')
+    if extrapackages:
+        py2exepackages.extend(extrapackages.split(' '))
+
+    excludes = os.environ.get('HG_PY2EXE_EXTRA_EXCLUDES')
+    if excludes:
+        py2exeexcludes.extend(excludes.split(' '))
+
+    dllexcludes = os.environ.get('HG_PY2EXE_EXTRA_DLL_EXCLUDES')
+    if dllexcludes:
+        py2exedllexcludes.extend(dllexcludes.split(' '))
+
 if os.name == 'nt':
     # Windows binary file versions for exe/dll files must have the
     # form W.X.Y.Z, where W,X,Y,Z are numbers in the range 0..65535
@@ -1220,16 +1371,10 @@
       distclass=hgdist,
       options={
           'py2exe': {
-              'packages': [
-                  'hgdemandimport',
-                  'hgext',
-                  'email',
-                  # implicitly imported per module policy
-                  # (cffi wouldn't be used as a frozen exe)
-                  'mercurial.cext',
-                  #'mercurial.cffi',
-                  'mercurial.pure',
-              ],
+              'bundle_files': 3,
+              'dll_excludes': py2exedllexcludes,
+              'excludes': py2exeexcludes,
+              'packages': py2exepackages,
           },
           'bdist_mpkg': {
               'zipdist': False,
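
The new ``HG_PY2EXE_EXTRA_*`` variables supplement the hardcoded py2exe lists
instead of replacing them. The pattern, reduced to its core (values
illustrative)::

   import os

   packages = ['hgdemandimport', 'hgext3rd', 'hgext', 'email']
   extra = os.environ.get('HG_PY2EXE_EXTRA_PACKAGES')
   if extra:
       # space-separated, e.g. 'hgext3rd.evolve hgext3rd.topic'
       packages.extend(extra.split(' '))

The new ``build_doc`` command is similarly opt-in: ``python setup.py
build_doc --man`` (or ``--html``) restricts generation to one output format.
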
--- a/tests/artifacts/scripts/generate-churning-bundle.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/artifacts/scripts/generate-churning-bundle.py	Wed Apr 17 13:41:18 2019 -0400
@@ -42,7 +42,6 @@
 FILENAME='SPARSE-REVLOG-TEST-FILE'
 NB_LINES = 10500
 ALWAYS_CHANGE_LINES = 500
-FILENAME = 'SPARSE-REVLOG-TEST-FILE'
 OTHER_CHANGES = 300
 
 def nextcontent(previous_content):
--- a/tests/badserverext.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/badserverext.py	Wed Apr 17 13:41:18 2019 -0400
@@ -34,6 +34,7 @@
 import socket
 
 from mercurial import(
+    pycompat,
     registrar,
 )
 
@@ -48,10 +49,10 @@
     default=False,
 )
 configitem(b'badserver', b'closeafterrecvbytes',
-    default='0',
+    default=b'0',
 )
 configitem(b'badserver', b'closeaftersendbytes',
-    default='0',
+    default=b'0',
 )
 configitem(b'badserver', b'closebeforeaccept',
     default=False,
@@ -74,7 +75,7 @@
         object.__setattr__(self, '_closeaftersendbytes', closeaftersendbytes)
 
     def __getattribute__(self, name):
-        if name in ('makefile',):
+        if name in ('makefile', 'sendall', '_writelog'):
             return object.__getattribute__(self, name)
 
         return getattr(object.__getattribute__(self, '_orig'), name)
@@ -85,6 +86,13 @@
     def __setattr__(self, name, value):
         setattr(object.__getattribute__(self, '_orig'), name, value)
 
+    def _writelog(self, msg):
+        msg = msg.replace(b'\r', b'\\r').replace(b'\n', b'\\n')
+
+        object.__getattribute__(self, '_logfp').write(msg)
+        object.__getattribute__(self, '_logfp').write(b'\n')
+        object.__getattribute__(self, '_logfp').flush()
+
     def makefile(self, mode, bufsize):
         f = object.__getattribute__(self, '_orig').makefile(mode, bufsize)
 
@@ -98,6 +106,38 @@
                                closeafterrecvbytes=closeafterrecvbytes,
                                closeaftersendbytes=closeaftersendbytes)
 
+    def sendall(self, data, flags=0):
+        remaining = object.__getattribute__(self, '_closeaftersendbytes')
+
+        # No send limit. Call original function.
+        if not remaining:
+            result = object.__getattribute__(self, '_orig').sendall(data, flags)
+            self._writelog(b'sendall(%d) -> %s' % (len(data), data))
+            return result
+
+        if len(data) > remaining:
+            newdata = data[0:remaining]
+        else:
+            newdata = data
+
+        remaining -= len(newdata)
+
+        result = object.__getattribute__(self, '_orig').sendall(newdata, flags)
+
+        self._writelog(b'sendall(%d from %d) -> (%d) %s' % (
+            len(newdata), len(data), remaining, newdata))
+
+        object.__setattr__(self, '_closeaftersendbytes', remaining)
+
+        if remaining <= 0:
+            self._writelog(b'write limit reached; closing socket')
+            object.__getattribute__(self, '_orig').shutdown(socket.SHUT_RDWR)
+
+            raise Exception('connection closed after sending N bytes')
+
+        return result
+
+
 # We can't adjust __class__ on socket._fileobject, so define a proxy.
 class fileobjectproxy(object):
     __slots__ = (
@@ -115,7 +155,7 @@
         object.__setattr__(self, '_closeaftersendbytes', closeaftersendbytes)
 
     def __getattribute__(self, name):
-        if name in ('read', 'readline', 'write', '_writelog'):
+        if name in ('_close', 'read', 'readline', 'write', '_writelog'):
             return object.__getattribute__(self, name)
 
         return getattr(object.__getattribute__(self, '_orig'), name)
@@ -127,21 +167,34 @@
         setattr(object.__getattribute__(self, '_orig'), name, value)
 
     def _writelog(self, msg):
-        msg = msg.replace('\r', '\\r').replace('\n', '\\n')
+        msg = msg.replace(b'\r', b'\\r').replace(b'\n', b'\\n')
 
         object.__getattribute__(self, '_logfp').write(msg)
-        object.__getattribute__(self, '_logfp').write('\n')
+        object.__getattribute__(self, '_logfp').write(b'\n')
         object.__getattribute__(self, '_logfp').flush()
 
+    def _close(self):
+        # Python 3 uses an io.BufferedIO instance. Python 2 uses some file
+        # object wrapper.
+        if pycompat.ispy3:
+            orig = object.__getattribute__(self, '_orig')
+
+            if hasattr(orig, 'raw'):
+                orig.raw._sock.shutdown(socket.SHUT_RDWR)
+            else:
+                self.close()
+        else:
+            self._sock.shutdown(socket.SHUT_RDWR)
+
     def read(self, size=-1):
         remaining = object.__getattribute__(self, '_closeafterrecvbytes')
 
         # No read limit. Call original function.
         if not remaining:
             result = object.__getattribute__(self, '_orig').read(size)
-            self._writelog('read(%d) -> (%d) (%s) %s' % (size,
-                                                           len(result),
-                                                           result))
+            self._writelog(b'read(%d) -> (%d) %s' % (size,
+                                                     len(result),
+                                                     result))
             return result
 
         origsize = size
@@ -154,14 +207,15 @@
         result = object.__getattribute__(self, '_orig').read(size)
         remaining -= len(result)
 
-        self._writelog('read(%d from %d) -> (%d) %s' % (
+        self._writelog(b'read(%d from %d) -> (%d) %s' % (
             size, origsize, len(result), result))
 
         object.__setattr__(self, '_closeafterrecvbytes', remaining)
 
         if remaining <= 0:
-            self._writelog('read limit reached, closing socket')
-            self._sock.close()
+            self._writelog(b'read limit reached, closing socket')
+            self._close()
+
             # This is the easiest way to abort the current request.
             raise Exception('connection closed after receiving N bytes')
 
@@ -173,7 +227,7 @@
         # No read limit. Call original function.
         if not remaining:
             result = object.__getattribute__(self, '_orig').readline(size)
-            self._writelog('readline(%d) -> (%d) %s' % (
+            self._writelog(b'readline(%d) -> (%d) %s' % (
                 size, len(result), result))
             return result
 
@@ -187,14 +241,15 @@
         result = object.__getattribute__(self, '_orig').readline(size)
         remaining -= len(result)
 
-        self._writelog('readline(%d from %d) -> (%d) %s' % (
+        self._writelog(b'readline(%d from %d) -> (%d) %s' % (
             size, origsize, len(result), result))
 
         object.__setattr__(self, '_closeafterrecvbytes', remaining)
 
         if remaining <= 0:
-            self._writelog('read limit reached; closing socket')
-            self._sock.close()
+            self._writelog(b'read limit reached; closing socket')
+            self._close()
+
             # This is the easiest way to abort the current request.
             raise Exception('connection closed after receiving N bytes')
 
@@ -205,7 +260,7 @@
 
         # No byte limit on this operation. Call original function.
         if not remaining:
-            self._writelog('write(%d) -> %s' % (len(data), data))
+            self._writelog(b'write(%d) -> %s' % (len(data), data))
             result = object.__getattribute__(self, '_orig').write(data)
             return result
 
@@ -216,7 +271,7 @@
 
         remaining -= len(newdata)
 
-        self._writelog('write(%d from %d) -> (%d) %s' % (
+        self._writelog(b'write(%d from %d) -> (%d) %s' % (
             len(newdata), len(data), remaining, newdata))
 
         result = object.__getattribute__(self, '_orig').write(newdata)
@@ -224,8 +279,9 @@
         object.__setattr__(self, '_closeaftersendbytes', remaining)
 
         if remaining <= 0:
-            self._writelog('write limit reached; closing socket')
-            self._sock.close()
+            self._writelog(b'write limit reached; closing socket')
+            self._close()
+
             raise Exception('connection closed after sending N bytes')
 
         return result
@@ -239,10 +295,10 @@
             super(badserver, self).__init__(ui, *args, **kwargs)
 
             recvbytes = self._ui.config(b'badserver', b'closeafterrecvbytes')
-            recvbytes = recvbytes.split(',')
+            recvbytes = recvbytes.split(b',')
             self.closeafterrecvbytes = [int(v) for v in recvbytes if v]
             sendbytes = self._ui.config(b'badserver', b'closeaftersendbytes')
-            sendbytes = sendbytes.split(',')
+            sendbytes = sendbytes.split(b',')
             self.closeaftersendbytes = [int(v) for v in sendbytes if v]
 
             # Need to inherit object so super() works.
@@ -270,7 +326,7 @@
                 # Simulate failure to stop processing this request.
                 raise socket.error('close before accept')
 
-            if self._ui.configbool('badserver', 'closeafteraccept'):
+            if self._ui.configbool(b'badserver', b'closeafteraccept'):
                 request, client_address = super(badserver, self).get_request()
                 request.close()
                 raise socket.error('close after accept')
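
All the proxies in this extension share one pattern: attribute access is
routed through ``object.__getattribute__`` so that only the intercepted names
resolve on the proxy itself, and everything else falls through to the wrapped
object. Stripped to a skeleton (hypothetical class)::

   class proxy(object):
       def __init__(self, orig):
           object.__setattr__(self, '_orig', orig)

       def __getattribute__(self, name):
           if name in ('sendall',):          # intercepted names only
               return object.__getattribute__(self, name)
           return getattr(object.__getattribute__(self, '_orig'), name)

       def sendall(self, data, flags=0):
           # instrument here, then delegate to the real socket
           return object.__getattribute__(self, '_orig').sendall(data, flags)
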
--- a/tests/check-perf-code.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/check-perf-code.py	Wed Apr 17 13:41:18 2019 -0400
@@ -10,7 +10,7 @@
 # write static check patterns here
 perfpypats = [
   [
-    (r'(branchmap|repoview)\.subsettable',
+    (r'(branchmap|repoview|repoviewutil)\.subsettable',
      "use getbranchmapsubsettable() for early Mercurial"),
     (r'\.(vfs|svfs|opener|sopener)',
      "use getvfs()/getsvfs() for early Mercurial"),
@@ -24,7 +24,7 @@
 
 def modulewhitelist(names):
     replacement = [('.py', ''), ('.c', ''), # trim suffix
-                   ('mercurial%s' % (os.sep), ''), # trim "mercurial/" path
+                   ('mercurial/', ''), # trim "mercurial/" path
                   ]
     ignored = {'__init__'}
     modules = {}
--- a/tests/drawdag.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/drawdag.py	Wed Apr 17 13:41:18 2019 -0400
@@ -275,7 +275,7 @@
     def path(self):
         return self._path
 
-    def renamed(self):
+    def copysource(self):
         return None
 
     def flags(self):
@@ -322,7 +322,7 @@
                     v.remove(leaf)
 
 def _getcomments(text):
-    """
+    r"""
     >>> [pycompat.sysstr(s) for s in _getcomments(br'''
     ...        G
     ...        |
@@ -341,7 +341,7 @@
 
 @command(b'debugdrawdag', [])
 def debugdrawdag(ui, repo, **opts):
-    """read an ASCII graph from stdin and create changesets
+    r"""read an ASCII graph from stdin and create changesets
 
     The ASCII graph is like what :hg:`log -G` outputs, with each `o` replaced
     to the name of the node. The command will create dummy changesets and local
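
The added ``r`` prefixes are load-bearing: the ASCII graphs contain
backslashes, and inside a plain docstring a backslash before a newline is a
line continuation that silently swallows the newline and garbles the graph.
For instance (illustrative)::

   def example():
       r"""
       C
       |\
       A B
       """

Without the ``r``, the ``|\`` line and the line after it would be joined.
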
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/tests/filtertraceback.py	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,29 @@
+#!/usr/bin/env python
+
+# Filters traceback lines from stdin.
+
+from __future__ import absolute_import, print_function
+
+import sys
+
+state = 'none'
+
+for line in sys.stdin:
+    if state == 'none':
+        if line.startswith('Traceback '):
+            state = 'tb'
+
+    elif state == 'tb':
+        if line.startswith('  File '):
+            state = 'file'
+            continue
+
+        elif not line.startswith(' '):
+            state = 'none'
+
+    elif state == 'file':
+        # Ignore lines after "  File "
+        state = 'tb'
+        continue
+
+    print(line, end='')
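
The filter is a three-state machine (``none``, ``tb``, ``file``): it keeps
the traceback header and the final exception line, and drops each
``  File ...`` line together with the source line that follows it.
Illustratively::

   before = '''Traceback (most recent call last):
     File "example.py", line 1, in <module>
       raise ValueError('boom')
   ValueError: boom
   '''
   after = '''Traceback (most recent call last):
   ValueError: boom
   '''
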
--- a/tests/flagprocessorext.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/flagprocessorext.py	Wed Apr 17 13:41:18 2019 -0400
@@ -107,7 +107,7 @@
 
     # Teach exchange to use changegroup 3
     for k in exchange._bundlespeccontentopts.keys():
-        exchange._bundlespeccontentopts[k]["cg.version"] = "03"
+        exchange._bundlespeccontentopts[k][b"cg.version"] = b"03"
 
     # Register flag processors for each extension
     revlog.addflagprocessor(
--- a/tests/hghave.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/hghave.py	Wed Apr 17 13:41:18 2019 -0400
@@ -1,6 +1,5 @@
 from __future__ import absolute_import
 
-import errno
 import os
 import re
 import socket
@@ -118,13 +117,8 @@
     is matched by the supplied regular expression.
     """
     r = re.compile(regexp)
-    try:
-        p = subprocess.Popen(
-            cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
-    except OSError as e:
-        if e.errno != errno.ENOENT:
-            raise
-        ret = -1
+    p = subprocess.Popen(
+        cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
     s = p.communicate()[0]
     ret = p.returncode
     return (ignorestatus or not ret) and r.search(s)
@@ -349,8 +343,8 @@
 
 @check("svn", "subversion client and admin tools")
 def has_svn():
-    return matchoutput('svn --version 2>&1', br'^svn, version') and \
-        matchoutput('svnadmin --version 2>&1', br'^svnadmin, version')
+    return (matchoutput('svn --version 2>&1', br'^svn, version') and
+            matchoutput('svnadmin --version 2>&1', br'^svnadmin, version'))
 
 @check("svn-bindings", "subversion python bindings")
 def has_svn_bindings():
@@ -549,7 +543,7 @@
 @check("tls1.2", "TLS 1.2 protocol support")
 def has_tls1_2():
     from mercurial import sslutil
-    return 'tls1.2' in sslutil.supportedprotocols
+    return b'tls1.2' in sslutil.supportedprotocols
 
 @check("windows", "Windows")
 def has_windows():
@@ -652,6 +646,13 @@
     # chg disables demandimport intentionally for performance wins.
     return ((not has_chg()) and os.environ.get('HGDEMANDIMPORT') != 'disable')
 
+@checkvers("py", "Python >= %s", (2.7, 3.5, 3.6, 3.7, 3.8, 3.9))
+def has_python_range(v):
+    major, minor = v.split('.')[0:2]
+    py_major, py_minor = sys.version_info.major, sys.version_info.minor
+
+    return (py_major, py_minor) >= (int(major), int(minor))
+
 @check("py3", "running with Python 3.x")
 def has_py3():
     return 3 == sys.version_info[0]
@@ -721,7 +722,7 @@
 
 @check("clang-libfuzzer", "clang new enough to include libfuzzer")
 def has_clang_libfuzzer():
-    mat = matchoutput('clang --version', b'clang version (\d)')
+    mat = matchoutput('clang --version', br'clang version (\d)')
     if mat:
         # libfuzzer is new in clang 6
         return int(mat.group(1)) > 5
@@ -729,7 +730,7 @@
 
 @check("clang-6.0", "clang 6.0 with version suffix (libfuzzer included)")
 def has_clang60():
-    return matchoutput('clang-6.0 --version', b'clang version 6\.')
+    return matchoutput('clang-6.0 --version', br'clang version 6\.')
 
 @check("xdiff", "xdiff algorithm")
 def has_xdiff():
@@ -810,7 +811,7 @@
         # WITH clause not supported
         return False
 
-    return matchoutput('sqlite3 -version', b'^3\.\d+')
+    return matchoutput('sqlite3 -version', br'^3\.\d+')
 
 @check('vcr', 'vcr http mocking library')
 def has_vcr():
@@ -821,3 +822,10 @@
     except (ImportError, AttributeError):
         pass
     return False
+
+@check('emacs', 'GNU Emacs')
+def has_emacs():
+    # Our emacs lisp uses `with-eval-after-load` which is new in emacs
+    # 24.4, so we allow emacs 24.4, 24.5, and 25+ (24.5 was the last
+    # 24 release)
+    return matchoutput('emacs --version', b'GNU Emacs 2(4.4|4.5|5|6|7|8|9)')
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/tests/httpserverauth.py	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,113 @@
+from __future__ import absolute_import
+
+import base64
+import hashlib
+
+from mercurial.hgweb import common
+from mercurial import (
+    node,
+)
+
+def parse_keqv_list(req, l):
+    """Parse list of key=value strings where keys are not duplicated."""
+    parsed = {}
+    for elt in l:
+        k, v = elt.split(b'=', 1)
+        if v[0:1] == b'"' and v[-1:] == b'"':
+            v = v[1:-1]
+        parsed[k] = v
+    return parsed
+
+class digestauthserver(object):
+    def __init__(self):
+        self._user_hashes = {}
+
+    def gethashers(self):
+        def _md5sum(x):
+            m = hashlib.md5()
+            m.update(x)
+            return node.hex(m.digest())
+
+        h = _md5sum
+
+        kd = lambda s, d, h=h: h(b"%s:%s" % (s, d))
+        return h, kd
+
+    def adduser(self, user, password, realm):
+        h, kd = self.gethashers()
+        a1 = h(b'%s:%s:%s' % (user, realm, password))
+        self._user_hashes[(user, realm)] = a1
+
+    def makechallenge(self, realm):
+        # We aren't testing the protocol here, just that the bytes make the
+        # proper round trip.  So hardcoded seems fine.
+        nonce = b'064af982c5b571cea6450d8eda91c20d'
+        return b'realm="%s", nonce="%s", algorithm=MD5, qop="auth"' % (realm,
+                                                                       nonce)
+
+    def checkauth(self, req, header):
+        log = req.rawenv[b'wsgi.errors']
+
+        h, kd = self.gethashers()
+        resp = parse_keqv_list(req, header.split(b', '))
+
+        if resp.get(b'algorithm', b'MD5').upper() != b'MD5':
+            log.write(b'Unsupported algorithm: %s' % resp.get(b'algorithm'))
+            raise common.ErrorResponse(common.HTTP_FORBIDDEN,
+                                       b"unknown algorithm")
+        user = resp[b'username']
+        realm = resp[b'realm']
+        nonce = resp[b'nonce']
+
+        ha1 = self._user_hashes.get((user, realm))
+        if not ha1:
+            log.write(b'No hash found for user/realm "%s/%s"' % (user, realm))
+            raise common.ErrorResponse(common.HTTP_FORBIDDEN, b"bad user")
+
+        qop = resp.get(b'qop', b'auth')
+        if qop != b'auth':
+            log.write(b"Unsupported qop: %s" % qop)
+            raise common.ErrorResponse(common.HTTP_FORBIDDEN, b"bad qop")
+
+        cnonce, ncvalue = resp.get(b'cnonce'), resp.get(b'nc')
+        if not cnonce or not ncvalue:
+            log.write(b'No cnonce (%s) or ncvalue (%s)' % (cnonce, ncvalue))
+            raise common.ErrorResponse(common.HTTP_FORBIDDEN, b"no cnonce")
+
+        a2 = b'%s:%s' % (req.method, resp[b'uri'])
+        noncebit = b"%s:%s:%s:%s:%s" % (nonce, ncvalue, cnonce, qop, h(a2))
+
+        respdig = kd(ha1, noncebit)
+        if respdig != resp[b'response']:
+            log.write(b'User/realm "%s/%s" gave %s, but expected %s'
+                      % (user, realm, resp[b'response'], respdig))
+            return False
+
+        return True
+
+digest = digestauthserver()
+
+def perform_authentication(hgweb, req, op):
+    auth = req.headers.get(b'Authorization')
+
+    if req.headers.get(b'X-HgTest-AuthType') == b'Digest':
+        if not auth:
+            challenge = digest.makechallenge(b'mercurial')
+            raise common.ErrorResponse(common.HTTP_UNAUTHORIZED, b'who',
+                    [(b'WWW-Authenticate', b'Digest %s' % challenge)])
+
+        if not digest.checkauth(req, auth[7:]):
+            raise common.ErrorResponse(common.HTTP_FORBIDDEN, b'no')
+
+        return
+
+    if not auth:
+        raise common.ErrorResponse(common.HTTP_UNAUTHORIZED, b'who',
+                [(b'WWW-Authenticate', b'Basic Realm="mercurial"')])
+
+    if base64.b64decode(auth.split()[1]).split(b':', 1) != [b'user', b'pass']:
+        raise common.ErrorResponse(common.HTTP_FORBIDDEN, b'no')
+
+def extsetup(ui):
+    common.permhooks.insert(0, perform_authentication)
+    digest.adduser(b'user', b'pass', b'mercurial')
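
``checkauth`` recomputes the RFC 2617 response from the stored HA1; a sketch
of the client-side computation it has to match (all values illustrative, the
nonce being the one hardcoded above)::

   import hashlib

   def h(x):
       return hashlib.md5(x).hexdigest().encode('ascii')

   user, realm, password = b'user', b'mercurial', b'pass'
   nonce = b'064af982c5b571cea6450d8eda91c20d'
   cnonce, nc, qop = b'deadbeef', b'00000001', b'auth'
   method, uri = b'GET', b'/'

   ha1 = h(b'%s:%s:%s' % (user, realm, password))
   ha2 = h(b'%s:%s' % (method, uri))
   response = h(b':'.join([ha1, nonce, nc, cnonce, qop, ha2]))
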
--- a/tests/notcapable	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/notcapable	Wed Apr 17 13:41:18 2019 -0400
@@ -11,7 +11,7 @@
     extensions.wrapfunction(repository.peer, 'capable', wrapcapable)
     extensions.wrapfunction(localrepo.localrepository, 'peer', wrappeer)
 def wrapcapable(orig, self, name, *args, **kwargs):
-    if name in '$CAP'.split(' '):
+    if name in b'$CAP'.split(b' '):
         return False
     return orig(self, name, *args, **kwargs)
 def wrappeer(orig, self):
--- a/tests/phabricator/phabsend-create-alpha.json	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/phabricator/phabsend-create-alpha.json	Wed Apr 17 13:41:18 2019 -0400
@@ -1,590 +1,617 @@
 {
-    "version": 1, 
     "interactions": [
         {
+            "request": {
+                "method": "POST",
+                "body": "constraints%5Bcallsigns%5D%5B0%5D=HG&api.token=cli-hahayouwish",
+                "uri": "https://phab.mercurial-scm.org//api/diffusion.repository.search",
+                "headers": {
+                    "content-type": [
+                        "application/x-www-form-urlencoded"
+                    ],
+                    "accept": [
+                        "application/mercurial-0.1"
+                    ],
+                    "user-agent": [
+                        "mercurial/proto-1.0 (Mercurial 4.9+477-7c86ec0ca5c5+20190303)"
+                    ],
+                    "host": [
+                        "phab.mercurial-scm.org"
+                    ],
+                    "content-length": [
+                        "79"
+                    ]
+                }
+            },
             "response": {
                 "status": {
-                    "message": "OK", 
-                    "code": 200
-                }, 
+                    "code": 200,
+                    "message": "OK"
+                },
                 "body": {
                     "string": "{\"result\":{\"data\":[{\"id\":2,\"type\":\"REPO\",\"phid\":\"PHID-REPO-bvunnehri4u2isyr7bc3\",\"fields\":{\"name\":\"Mercurial\",\"vcs\":\"hg\",\"callsign\":\"HG\",\"shortName\":null,\"status\":\"active\",\"isImporting\":false,\"spacePHID\":null,\"dateCreated\":1498761653,\"dateModified\":1500403184,\"policy\":{\"view\":\"public\",\"edit\":\"admin\",\"diffusion.push\":\"users\"}},\"attachments\":{}}],\"maps\":{},\"query\":{\"queryKey\":null},\"cursor\":{\"limit\":100,\"after\":null,\"before\":null,\"order\":null}},\"error_code\":null,\"error_info\":null}"
-                }, 
+                },
                 "headers": {
-                    "x-xss-protection": [
-                        "1; mode=block"
-                    ], 
                     "expires": [
                         "Sat, 01 Jan 2000 00:00:00 GMT"
-                    ], 
-                    "set-cookie": [
-                        "phsid=A%2F4wycgjx3wajuukr7ggfpqedpe7czucr7mvmaems3; expires=Thu, 14-Sep-2023 04:47:40 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
-                    ], 
+                    ],
+                    "x-xss-protection": [
+                        "1; mode=block"
+                    ],
+                    "transfer-encoding": [
+                        "chunked"
+                    ],
+                    "date": [
+                        "Sun, 03 Mar 2019 00:12:23 GMT"
+                    ],
                     "x-frame-options": [
                         "Deny"
-                    ], 
+                    ],
+                    "cache-control": [
+                        "no-store"
+                    ],
+                    "content-type": [
+                        "application/json"
+                    ],
                     "x-content-type-options": [
                         "nosniff"
-                    ], 
+                    ],
+                    "server": [
+                        "Apache/2.4.10 (Debian)"
+                    ],
+                    "set-cookie": [
+                        "phsid=A%2Fpywot5xerq4gs2tjxw3gnadzdg6vomqmfcnwqddp; expires=Fri, 01-Mar-2024 00:12:23 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
+                    ],
                     "strict-transport-security": [
                         "max-age=0; includeSubdomains; preload"
-                    ], 
-                    "server": [
-                        "Apache/2.4.10 (Debian)"
-                    ], 
-                    "date": [
-                        "Sat, 15 Sep 2018 04:47:40 GMT"
-                    ], 
-                    "content-type": [
-                        "application/json"
-                    ], 
-                    "cache-control": [
-                        "no-store"
-                    ]
-                }
-            }, 
-            "request": {
-                "method": "POST", 
-                "uri": "https://phab.mercurial-scm.org//api/diffusion.repository.search", 
-                "body": "constraints%5Bcallsigns%5D%5B0%5D=HG&api.token=cli-hahayouwish", 
-                "headers": {
-                    "accept": [
-                        "application/mercurial-0.1"
-                    ], 
-                    "content-type": [
-                        "application/x-www-form-urlencoded"
-                    ], 
-                    "host": [
-                        "phab.mercurial-scm.org"
-                    ], 
-                    "content-length": [
-                        "79"
-                    ], 
-                    "user-agent": [
-                        "mercurial/proto-1.0 (Mercurial 4.7.1+866-5f07496726a1+20180915)"
                     ]
                 }
             }
-        }, 
+        },
         {
+            "request": {
+                "method": "POST",
+                "body": "repositoryPHID=PHID-REPO-bvunnehri4u2isyr7bc3&api.token=cli-hahayouwish&diff=diff+--git+a%2Falpha+b%2Falpha%0Anew+file+mode+100644%0A---+%2Fdev%2Fnull%0A%2B%2B%2B+b%2Falpha%0A%40%40+-0%2C0+%2B1%2C1+%40%40%0A%2Balpha%0A",
+                "uri": "https://phab.mercurial-scm.org//api/differential.createrawdiff",
+                "headers": {
+                    "content-type": [
+                        "application/x-www-form-urlencoded"
+                    ],
+                    "accept": [
+                        "application/mercurial-0.1"
+                    ],
+                    "user-agent": [
+                        "mercurial/proto-1.0 (Mercurial 4.9+477-7c86ec0ca5c5+20190303)"
+                    ],
+                    "host": [
+                        "phab.mercurial-scm.org"
+                    ],
+                    "content-length": [
+                        "235"
+                    ]
+                }
+            },
             "response": {
                 "status": {
-                    "message": "OK", 
-                    "code": 200
-                }, 
+                    "code": 200,
+                    "message": "OK"
+                },
                 "body": {
-                    "string": "{\"result\":{\"id\":11072,\"phid\":\"PHID-DIFF-xm6cw76uivc6g56xiuv2\",\"uri\":\"https:\\/\\/phab.mercurial-scm.org\\/differential\\/diff\\/11072\\/\"},\"error_code\":null,\"error_info\":null}"
-                }, 
+                    "string": "{\"result\":{\"id\":14303,\"phid\":\"PHID-DIFF-allzuauvigfjpv4z6dpi\",\"uri\":\"https:\\/\\/phab.mercurial-scm.org\\/differential\\/diff\\/14303\\/\"},\"error_code\":null,\"error_info\":null}"
+                },
                 "headers": {
+                    "expires": [
+                        "Sat, 01 Jan 2000 00:00:00 GMT"
+                    ],
                     "x-xss-protection": [
                         "1; mode=block"
-                    ], 
-                    "expires": [
-                        "Sat, 01 Jan 2000 00:00:00 GMT"
-                    ], 
-                    "set-cookie": [
-                        "phsid=A%2Fll65pt562b6d7ifhjva4jwqqzxh2oopj4tuc6lfa; expires=Thu, 14-Sep-2023 04:47:40 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
-                    ], 
+                    ],
+                    "transfer-encoding": [
+                        "chunked"
+                    ],
+                    "date": [
+                        "Sun, 03 Mar 2019 00:12:24 GMT"
+                    ],
                     "x-frame-options": [
                         "Deny"
-                    ], 
+                    ],
+                    "cache-control": [
+                        "no-store"
+                    ],
+                    "content-type": [
+                        "application/json"
+                    ],
                     "x-content-type-options": [
                         "nosniff"
-                    ], 
-                    "strict-transport-security": [
-                        "max-age=0; includeSubdomains; preload"
-                    ], 
+                    ],
                     "server": [
                         "Apache/2.4.10 (Debian)"
-                    ], 
-                    "date": [
-                        "Sat, 15 Sep 2018 04:47:40 GMT"
-                    ], 
-                    "content-type": [
-                        "application/json"
-                    ], 
-                    "cache-control": [
-                        "no-store"
-                    ]
-                }
-            }, 
-            "request": {
-                "method": "POST", 
-                "uri": "https://phab.mercurial-scm.org//api/differential.createrawdiff", 
-                "body": "repositoryPHID=PHID-REPO-bvunnehri4u2isyr7bc3&diff=diff+--git+a%2Falpha+b%2Falpha%0Anew+file+mode+100644%0A---+%2Fdev%2Fnull%0A%2B%2B%2B+b%2Falpha%0A%40%40+-0%2C0+%2B1%2C1+%40%40%0A%2Balpha%0A&api.token=cli-hahayouwish", 
-                "headers": {
-                    "accept": [
-                        "application/mercurial-0.1"
-                    ], 
-                    "content-type": [
-                        "application/x-www-form-urlencoded"
-                    ], 
-                    "host": [
-                        "phab.mercurial-scm.org"
-                    ], 
-                    "content-length": [
-                        "235"
-                    ], 
-                    "user-agent": [
-                        "mercurial/proto-1.0 (Mercurial 4.7.1+866-5f07496726a1+20180915)"
+                    ],
+                    "set-cookie": [
+                        "phsid=A%2F2n2dlkkwzljrpzfghpdsflbt4ftnrwcc446dzcy5; expires=Fri, 01-Mar-2024 00:12:24 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
+                    ],
+                    "strict-transport-security": [
+                        "max-age=0; includeSubdomains; preload"
                     ]
                 }
             }
-        }, 
+        },
         {
+            "request": {
+                "method": "POST",
+                "body": "diff_id=14303&data=%7B%22user%22%3A+%22test%22%2C+%22parent%22%3A+%220000000000000000000000000000000000000000%22%2C+%22node%22%3A+%22d386117f30e6b1282897bdbde75ac21e095163d4%22%2C+%22date%22%3A+%220+0%22%7D&api.token=cli-hahayouwish&name=hg%3Ameta",
+                "uri": "https://phab.mercurial-scm.org//api/differential.setdiffproperty",
+                "headers": {
+                    "content-type": [
+                        "application/x-www-form-urlencoded"
+                    ],
+                    "accept": [
+                        "application/mercurial-0.1"
+                    ],
+                    "user-agent": [
+                        "mercurial/proto-1.0 (Mercurial 4.9+477-7c86ec0ca5c5+20190303)"
+                    ],
+                    "host": [
+                        "phab.mercurial-scm.org"
+                    ],
+                    "content-length": [
+                        "264"
+                    ]
+                }
+            },
             "response": {
                 "status": {
-                    "message": "OK", 
-                    "code": 200
-                }, 
+                    "code": 200,
+                    "message": "OK"
+                },
                 "body": {
                     "string": "{\"result\":null,\"error_code\":null,\"error_info\":null}"
-                }, 
+                },
                 "headers": {
+                    "expires": [
+                        "Sat, 01 Jan 2000 00:00:00 GMT"
+                    ],
                     "x-xss-protection": [
                         "1; mode=block"
-                    ], 
-                    "expires": [
-                        "Sat, 01 Jan 2000 00:00:00 GMT"
-                    ], 
-                    "set-cookie": [
-                        "phsid=A%2F5ivszbehkvbetlnks7omsqmbsu7r5by3p3yqw3ep; expires=Thu, 14-Sep-2023 04:47:41 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
-                    ], 
+                    ],
+                    "transfer-encoding": [
+                        "chunked"
+                    ],
+                    "date": [
+                        "Sun, 03 Mar 2019 00:12:25 GMT"
+                    ],
                     "x-frame-options": [
                         "Deny"
-                    ], 
+                    ],
+                    "cache-control": [
+                        "no-store"
+                    ],
+                    "content-type": [
+                        "application/json"
+                    ],
                     "x-content-type-options": [
                         "nosniff"
-                    ], 
-                    "strict-transport-security": [
-                        "max-age=0; includeSubdomains; preload"
-                    ], 
+                    ],
                     "server": [
                         "Apache/2.4.10 (Debian)"
-                    ], 
-                    "date": [
-                        "Sat, 15 Sep 2018 04:47:41 GMT"
-                    ], 
-                    "content-type": [
-                        "application/json"
-                    ], 
-                    "cache-control": [
-                        "no-store"
-                    ]
-                }
-            }, 
-            "request": {
-                "method": "POST", 
-                "uri": "https://phab.mercurial-scm.org//api/differential.setdiffproperty", 
-                "body": "data=%7B%22date%22%3A+%220+0%22%2C+%22node%22%3A+%225206a4fa1e6cd7dbc027640267c109e05a9d2341%22%2C+%22user%22%3A+%22test%22%2C+%22parent%22%3A+%220000000000000000000000000000000000000000%22%7D&name=hg%3Ameta&diff_id=11072&api.token=cli-hahayouwish", 
-                "headers": {
-                    "accept": [
-                        "application/mercurial-0.1"
-                    ], 
-                    "content-type": [
-                        "application/x-www-form-urlencoded"
-                    ], 
-                    "host": [
-                        "phab.mercurial-scm.org"
-                    ], 
-                    "content-length": [
-                        "264"
-                    ], 
-                    "user-agent": [
-                        "mercurial/proto-1.0 (Mercurial 4.7.1+866-5f07496726a1+20180915)"
+                    ],
+                    "set-cookie": [
+                        "phsid=A%2F5mq3t25wu5igv7oufpwcoy32fveozo7wn5wni3gw; expires=Fri, 01-Mar-2024 00:12:25 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
+                    ],
+                    "strict-transport-security": [
+                        "max-age=0; includeSubdomains; preload"
                     ]
                 }
             }
-        }, 
+        },
         {
+            "request": {
+                "method": "POST",
+                "body": "diff_id=14303&data=%7B%22d386117f30e6b1282897bdbde75ac21e095163d4%22%3A+%7B%22author%22%3A+%22test%22%2C+%22authorEmail%22%3A+%22test%22%2C+%22time%22%3A+0.0%7D%7D&api.token=cli-hahayouwish&name=local%3Acommits",
+                "uri": "https://phab.mercurial-scm.org//api/differential.setdiffproperty",
+                "headers": {
+                    "content-type": [
+                        "application/x-www-form-urlencoded"
+                    ],
+                    "accept": [
+                        "application/mercurial-0.1"
+                    ],
+                    "user-agent": [
+                        "mercurial/proto-1.0 (Mercurial 4.9+477-7c86ec0ca5c5+20190303)"
+                    ],
+                    "host": [
+                        "phab.mercurial-scm.org"
+                    ],
+                    "content-length": [
+                        "227"
+                    ]
+                }
+            },
             "response": {
                 "status": {
-                    "message": "OK", 
-                    "code": 200
-                }, 
+                    "code": 200,
+                    "message": "OK"
+                },
                 "body": {
                     "string": "{\"result\":null,\"error_code\":null,\"error_info\":null}"
-                }, 
+                },
                 "headers": {
+                    "expires": [
+                        "Sat, 01 Jan 2000 00:00:00 GMT"
+                    ],
                     "x-xss-protection": [
                         "1; mode=block"
-                    ], 
-                    "expires": [
-                        "Sat, 01 Jan 2000 00:00:00 GMT"
-                    ], 
-                    "set-cookie": [
-                        "phsid=A%2Fxvwxxrmwpjntx6dlohrstyox7yjssdbzufiwygcg; expires=Thu, 14-Sep-2023 04:47:41 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
-                    ], 
+                    ],
+                    "transfer-encoding": [
+                        "chunked"
+                    ],
+                    "date": [
+                        "Sun, 03 Mar 2019 00:12:25 GMT"
+                    ],
                     "x-frame-options": [
                         "Deny"
-                    ], 
+                    ],
+                    "cache-control": [
+                        "no-store"
+                    ],
+                    "content-type": [
+                        "application/json"
+                    ],
                     "x-content-type-options": [
                         "nosniff"
-                    ], 
-                    "strict-transport-security": [
-                        "max-age=0; includeSubdomains; preload"
-                    ], 
+                    ],
                     "server": [
                         "Apache/2.4.10 (Debian)"
-                    ], 
-                    "date": [
-                        "Sat, 15 Sep 2018 04:47:41 GMT"
-                    ], 
-                    "content-type": [
-                        "application/json"
-                    ], 
-                    "cache-control": [
-                        "no-store"
-                    ]
-                }
-            }, 
-            "request": {
-                "method": "POST", 
-                "uri": "https://phab.mercurial-scm.org//api/differential.setdiffproperty", 
-                "body": "data=%7B%225206a4fa1e6cd7dbc027640267c109e05a9d2341%22%3A+%7B%22time%22%3A+0.0%2C+%22author%22%3A+%22test%22%2C+%22authorEmail%22%3A+%22test%22%7D%7D&name=local%3Acommits&diff_id=11072&api.token=cli-hahayouwish", 
-                "headers": {
-                    "accept": [
-                        "application/mercurial-0.1"
-                    ], 
-                    "content-type": [
-                        "application/x-www-form-urlencoded"
-                    ], 
-                    "host": [
-                        "phab.mercurial-scm.org"
-                    ], 
-                    "content-length": [
-                        "227"
-                    ], 
-                    "user-agent": [
-                        "mercurial/proto-1.0 (Mercurial 4.7.1+866-5f07496726a1+20180915)"
+                    ],
+                    "set-cookie": [
+                        "phsid=A%2F5nja6g4cnpt63ctjjwykxyceyb7kokfptrzbejoc; expires=Fri, 01-Mar-2024 00:12:25 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
+                    ],
+                    "strict-transport-security": [
+                        "max-age=0; includeSubdomains; preload"
                     ]
                 }
             }
-        }, 
+        },
         {
+            "request": {
+                "method": "POST",
+                "body": "api.token=cli-hahayouwish&corpus=create+alpha+for+phabricator+test+%E2%82%AC",
+                "uri": "https://phab.mercurial-scm.org//api/differential.parsecommitmessage",
+                "headers": {
+                    "content-type": [
+                        "application/x-www-form-urlencoded"
+                    ],
+                    "accept": [
+                        "application/mercurial-0.1"
+                    ],
+                    "user-agent": [
+                        "mercurial/proto-1.0 (Mercurial 4.9+477-7c86ec0ca5c5+20190303)"
+                    ],
+                    "host": [
+                        "phab.mercurial-scm.org"
+                    ],
+                    "content-length": [
+                        "93"
+                    ]
+                }
+            },
             "response": {
                 "status": {
-                    "message": "OK", 
-                    "code": 200
-                }, 
+                    "code": 200,
+                    "message": "OK"
+                },
                 "body": {
-                    "string": "{\"result\":{\"errors\":[],\"fields\":{\"title\":\"create alpha for phabricator test\"},\"revisionIDFieldInfo\":{\"value\":null,\"validDomain\":\"https:\\/\\/phab.mercurial-scm.org\"}},\"error_code\":null,\"error_info\":null}"
-                }, 
+                    "string": "{\"result\":{\"errors\":[],\"fields\":{\"title\":\"create alpha for phabricator test \\u20ac\"},\"revisionIDFieldInfo\":{\"value\":null,\"validDomain\":\"https:\\/\\/phab.mercurial-scm.org\"}},\"error_code\":null,\"error_info\":null}"
+                },
                 "headers": {
+                    "expires": [
+                        "Sat, 01 Jan 2000 00:00:00 GMT"
+                    ],
                     "x-xss-protection": [
                         "1; mode=block"
-                    ], 
-                    "expires": [
-                        "Sat, 01 Jan 2000 00:00:00 GMT"
-                    ], 
-                    "set-cookie": [
-                        "phsid=A%2Fy3s5iysh6h2javfdo2u7myspyjypv4mvojegqr6j; expires=Thu, 14-Sep-2023 04:47:42 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
-                    ], 
+                    ],
+                    "transfer-encoding": [
+                        "chunked"
+                    ],
+                    "date": [
+                        "Sun, 03 Mar 2019 00:12:26 GMT"
+                    ],
                     "x-frame-options": [
                         "Deny"
-                    ], 
+                    ],
+                    "cache-control": [
+                        "no-store"
+                    ],
+                    "content-type": [
+                        "application/json"
+                    ],
                     "x-content-type-options": [
                         "nosniff"
-                    ], 
-                    "strict-transport-security": [
-                        "max-age=0; includeSubdomains; preload"
-                    ], 
+                    ],
                     "server": [
                         "Apache/2.4.10 (Debian)"
-                    ], 
-                    "date": [
-                        "Sat, 15 Sep 2018 04:47:42 GMT"
-                    ], 
-                    "content-type": [
-                        "application/json"
-                    ], 
-                    "cache-control": [
-                        "no-store"
-                    ]
-                }
-            }, 
-            "request": {
-                "method": "POST", 
-                "uri": "https://phab.mercurial-scm.org//api/differential.parsecommitmessage", 
-                "body": "corpus=create+alpha+for+phabricator+test&api.token=cli-hahayouwish", 
-                "headers": {
-                    "accept": [
-                        "application/mercurial-0.1"
-                    ], 
-                    "content-type": [
-                        "application/x-www-form-urlencoded"
-                    ], 
-                    "host": [
-                        "phab.mercurial-scm.org"
-                    ], 
-                    "content-length": [
-                        "83"
-                    ], 
-                    "user-agent": [
-                        "mercurial/proto-1.0 (Mercurial 4.7.1+866-5f07496726a1+20180915)"
+                    ],
+                    "set-cookie": [
+                        "phsid=A%2Fkrxawhyvcd4jhv77inuwdmzcci4f7kql6c7l3smz; expires=Fri, 01-Mar-2024 00:12:26 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
+                    ],
+                    "strict-transport-security": [
+                        "max-age=0; includeSubdomains; preload"
                     ]
                 }
             }
-        }, 
+        },
         {
+            "request": {
+                "method": "POST",
+                "body": "transactions%5B0%5D%5Btype%5D=update&transactions%5B0%5D%5Bvalue%5D=PHID-DIFF-allzuauvigfjpv4z6dpi&transactions%5B1%5D%5Btype%5D=title&transactions%5B1%5D%5Bvalue%5D=create+alpha+for+phabricator+test+%E2%82%AC&api.token=cli-hahayouwish",
+                "uri": "https://phab.mercurial-scm.org//api/differential.revision.edit",
+                "headers": {
+                    "content-type": [
+                        "application/x-www-form-urlencoded"
+                    ],
+                    "accept": [
+                        "application/mercurial-0.1"
+                    ],
+                    "user-agent": [
+                        "mercurial/proto-1.0 (Mercurial 4.9+477-7c86ec0ca5c5+20190303)"
+                    ],
+                    "host": [
+                        "phab.mercurial-scm.org"
+                    ],
+                    "content-length": [
+                        "252"
+                    ]
+                }
+            },
             "response": {
                 "status": {
-                    "message": "OK", 
-                    "code": 200
-                }, 
+                    "code": 200,
+                    "message": "OK"
+                },
                 "body": {
-                    "string": "{\"result\":{\"object\":{\"id\":4596,\"phid\":\"PHID-DREV-bntcdwe74cw3vwkzt6nq\"},\"transactions\":[{\"phid\":\"PHID-XACT-DREV-mnqxquobbhdgttd\"},{\"phid\":\"PHID-XACT-DREV-nd34pqrjamxbhop\"},{\"phid\":\"PHID-XACT-DREV-4ka4rghn6b7xooc\"},{\"phid\":\"PHID-XACT-DREV-mfuvfyiijdqwpyg\"},{\"phid\":\"PHID-XACT-DREV-ckar54h6yenx24s\"}]},\"error_code\":null,\"error_info\":null}"
-                }, 
+                    "string": "{\"result\":{\"object\":{\"id\":6054,\"phid\":\"PHID-DREV-6pczsbtdpqjc2nskmxwy\"},\"transactions\":[{\"phid\":\"PHID-XACT-DREV-efgl4j4fesixjog\"},{\"phid\":\"PHID-XACT-DREV-xj7ksjeyfadwf5m\"},{\"phid\":\"PHID-XACT-DREV-gecx5zw42kkuffc\"},{\"phid\":\"PHID-XACT-DREV-asda7zcwgzdadoi\"},{\"phid\":\"PHID-XACT-DREV-ku26t33y6iiugjw\"}]},\"error_code\":null,\"error_info\":null}"
+                },
                 "headers": {
+                    "expires": [
+                        "Sat, 01 Jan 2000 00:00:00 GMT"
+                    ],
                     "x-xss-protection": [
                         "1; mode=block"
-                    ], 
-                    "expires": [
-                        "Sat, 01 Jan 2000 00:00:00 GMT"
-                    ], 
-                    "set-cookie": [
-                        "phsid=A%2Foe7kd7hhldo25tzbegntkyfxm6wnztgdfmsfubo2; expires=Thu, 14-Sep-2023 04:47:42 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
-                    ], 
+                    ],
+                    "transfer-encoding": [
+                        "chunked"
+                    ],
+                    "date": [
+                        "Sun, 03 Mar 2019 00:12:27 GMT"
+                    ],
                     "x-frame-options": [
                         "Deny"
-                    ], 
+                    ],
+                    "cache-control": [
+                        "no-store"
+                    ],
+                    "content-type": [
+                        "application/json"
+                    ],
                     "x-content-type-options": [
                         "nosniff"
-                    ], 
-                    "strict-transport-security": [
-                        "max-age=0; includeSubdomains; preload"
-                    ], 
+                    ],
                     "server": [
                         "Apache/2.4.10 (Debian)"
-                    ], 
-                    "date": [
-                        "Sat, 15 Sep 2018 04:47:42 GMT"
-                    ], 
-                    "content-type": [
-                        "application/json"
-                    ], 
-                    "cache-control": [
-                        "no-store"
-                    ]
-                }
-            }, 
-            "request": {
-                "method": "POST", 
-                "uri": "https://phab.mercurial-scm.org//api/differential.revision.edit", 
-                "body": "transactions%5B0%5D%5Bvalue%5D=PHID-DIFF-xm6cw76uivc6g56xiuv2&transactions%5B0%5D%5Btype%5D=update&transactions%5B1%5D%5Bvalue%5D=create+alpha+for+phabricator+test&transactions%5B1%5D%5Btype%5D=title&api.token=cli-hahayouwish", 
-                "headers": {
-                    "accept": [
-                        "application/mercurial-0.1"
-                    ], 
-                    "content-type": [
-                        "application/x-www-form-urlencoded"
-                    ], 
-                    "host": [
-                        "phab.mercurial-scm.org"
-                    ], 
-                    "content-length": [
-                        "242"
-                    ], 
-                    "user-agent": [
-                        "mercurial/proto-1.0 (Mercurial 4.7.1+866-5f07496726a1+20180915)"
+                    ],
+                    "set-cookie": [
+                        "phsid=A%2Fjwgcqb5hvbltjq4jqbpauz7rmmhpuh2rb7phsdmf; expires=Fri, 01-Mar-2024 00:12:27 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
+                    ],
+                    "strict-transport-security": [
+                        "max-age=0; includeSubdomains; preload"
                     ]
                 }
             }
-        }, 
+        },
         {
+            "request": {
+                "method": "POST",
+                "body": "api.token=cli-hahayouwish&ids%5B0%5D=6054",
+                "uri": "https://phab.mercurial-scm.org//api/differential.query",
+                "headers": {
+                    "content-type": [
+                        "application/x-www-form-urlencoded"
+                    ],
+                    "accept": [
+                        "application/mercurial-0.1"
+                    ],
+                    "user-agent": [
+                        "mercurial/proto-1.0 (Mercurial 4.9+477-7c86ec0ca5c5+20190303)"
+                    ],
+                    "host": [
+                        "phab.mercurial-scm.org"
+                    ],
+                    "content-length": [
+                        "58"
+                    ]
+                }
+            },
             "response": {
                 "status": {
-                    "message": "OK", 
-                    "code": 200
-                }, 
+                    "code": 200,
+                    "message": "OK"
+                },
                 "body": {
-                    "string": "{\"result\":[{\"id\":\"4596\",\"phid\":\"PHID-DREV-bntcdwe74cw3vwkzt6nq\",\"title\":\"create alpha for phabricator test\",\"uri\":\"https:\\/\\/phab.mercurial-scm.org\\/D4596\",\"dateCreated\":\"1536986862\",\"dateModified\":\"1536986862\",\"authorPHID\":\"PHID-USER-cgcdlc6c3gpxapbmkwa2\",\"status\":\"0\",\"statusName\":\"Needs Review\",\"properties\":[],\"branch\":null,\"summary\":\"\",\"testPlan\":\"\",\"lineCount\":\"1\",\"activeDiffPHID\":\"PHID-DIFF-xm6cw76uivc6g56xiuv2\",\"diffs\":[\"11072\"],\"commits\":[],\"reviewers\":{\"PHID-PROJ-3dvcxzznrjru2xmmses3\":\"PHID-PROJ-3dvcxzznrjru2xmmses3\"},\"ccs\":[\"PHID-USER-q42dn7cc3donqriafhjx\"],\"hashes\":[],\"auxiliary\":{\"phabricator:projects\":[],\"phabricator:depends-on\":[]},\"repositoryPHID\":\"PHID-REPO-bvunnehri4u2isyr7bc3\",\"sourcePath\":null}],\"error_code\":null,\"error_info\":null}"
-                }, 
+                    "string": "{\"result\":[{\"id\":\"6054\",\"phid\":\"PHID-DREV-6pczsbtdpqjc2nskmxwy\",\"title\":\"create alpha for phabricator test \\u20ac\",\"uri\":\"https:\\/\\/phab.mercurial-scm.org\\/D6054\",\"dateCreated\":\"1551571947\",\"dateModified\":\"1551571947\",\"authorPHID\":\"PHID-USER-5iy6mkoveguhm2zthvww\",\"status\":\"0\",\"statusName\":\"Needs Review\",\"properties\":[],\"branch\":null,\"summary\":\"\",\"testPlan\":\"\",\"lineCount\":\"1\",\"activeDiffPHID\":\"PHID-DIFF-allzuauvigfjpv4z6dpi\",\"diffs\":[\"14303\"],\"commits\":[],\"reviewers\":{\"PHID-PROJ-3dvcxzznrjru2xmmses3\":\"PHID-PROJ-3dvcxzznrjru2xmmses3\"},\"ccs\":[\"PHID-USER-q42dn7cc3donqriafhjx\"],\"hashes\":[],\"auxiliary\":{\"phabricator:projects\":[],\"phabricator:depends-on\":[]},\"repositoryPHID\":\"PHID-REPO-bvunnehri4u2isyr7bc3\",\"sourcePath\":null}],\"error_code\":null,\"error_info\":null}"
+                },
                 "headers": {
+                    "expires": [
+                        "Sat, 01 Jan 2000 00:00:00 GMT"
+                    ],
                     "x-xss-protection": [
                         "1; mode=block"
-                    ], 
-                    "expires": [
-                        "Sat, 01 Jan 2000 00:00:00 GMT"
-                    ], 
-                    "set-cookie": [
-                        "phsid=A%2F5d2bgafhoqhg5thqxeu6y4fngq7lqezf5h6eo5pd; expires=Thu, 14-Sep-2023 04:47:43 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
-                    ], 
+                    ],
+                    "transfer-encoding": [
+                        "chunked"
+                    ],
+                    "date": [
+                        "Sun, 03 Mar 2019 00:12:28 GMT"
+                    ],
                     "x-frame-options": [
                         "Deny"
-                    ], 
+                    ],
+                    "cache-control": [
+                        "no-store"
+                    ],
+                    "content-type": [
+                        "application/json"
+                    ],
                     "x-content-type-options": [
                         "nosniff"
-                    ], 
-                    "strict-transport-security": [
-                        "max-age=0; includeSubdomains; preload"
-                    ], 
+                    ],
                     "server": [
                         "Apache/2.4.10 (Debian)"
-                    ], 
-                    "date": [
-                        "Sat, 15 Sep 2018 04:47:43 GMT"
-                    ], 
-                    "content-type": [
-                        "application/json"
-                    ], 
-                    "cache-control": [
-                        "no-store"
-                    ]
-                }
-            }, 
-            "request": {
-                "method": "POST", 
-                "uri": "https://phab.mercurial-scm.org//api/differential.query", 
-                "body": "api.token=cli-hahayouwish&ids%5B0%5D=4596", 
-                "headers": {
-                    "accept": [
-                        "application/mercurial-0.1"
-                    ], 
-                    "content-type": [
-                        "application/x-www-form-urlencoded"
-                    ], 
-                    "host": [
-                        "phab.mercurial-scm.org"
-                    ], 
-                    "content-length": [
-                        "58"
-                    ], 
-                    "user-agent": [
-                        "mercurial/proto-1.0 (Mercurial 4.7.1+866-5f07496726a1+20180915)"
+                    ],
+                    "set-cookie": [
+                        "phsid=A%2F3lgkbbyaa646ng5klghjyehsbjxtaqblipnvocuz; expires=Fri, 01-Mar-2024 00:12:28 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
+                    ],
+                    "strict-transport-security": [
+                        "max-age=0; includeSubdomains; preload"
                     ]
                 }
             }
-        }, 
+        },
         {
+            "request": {
+                "method": "POST",
+                "body": "diff_id=14303&data=%7B%22user%22%3A+%22test%22%2C+%22parent%22%3A+%220000000000000000000000000000000000000000%22%2C+%22node%22%3A+%22cb03845d6dd98c72bec766c7ed08c693cc49817a%22%2C+%22date%22%3A+%220+0%22%7D&api.token=cli-hahayouwish&name=hg%3Ameta",
+                "uri": "https://phab.mercurial-scm.org//api/differential.setdiffproperty",
+                "headers": {
+                    "content-type": [
+                        "application/x-www-form-urlencoded"
+                    ],
+                    "accept": [
+                        "application/mercurial-0.1"
+                    ],
+                    "user-agent": [
+                        "mercurial/proto-1.0 (Mercurial 4.9+477-7c86ec0ca5c5+20190303)"
+                    ],
+                    "host": [
+                        "phab.mercurial-scm.org"
+                    ],
+                    "content-length": [
+                        "264"
+                    ]
+                }
+            },
             "response": {
                 "status": {
-                    "message": "OK", 
-                    "code": 200
-                }, 
+                    "code": 200,
+                    "message": "OK"
+                },
                 "body": {
                     "string": "{\"result\":null,\"error_code\":null,\"error_info\":null}"
-                }, 
+                },
                 "headers": {
+                    "expires": [
+                        "Sat, 01 Jan 2000 00:00:00 GMT"
+                    ],
                     "x-xss-protection": [
                         "1; mode=block"
-                    ], 
-                    "expires": [
-                        "Sat, 01 Jan 2000 00:00:00 GMT"
-                    ], 
-                    "set-cookie": [
-                        "phsid=A%2F2cewrqifmvko6evm2sy2nvksvcvhk6hpsj36lcv2; expires=Thu, 14-Sep-2023 04:47:43 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
-                    ], 
+                    ],
+                    "transfer-encoding": [
+                        "chunked"
+                    ],
+                    "date": [
+                        "Sun, 03 Mar 2019 00:12:28 GMT"
+                    ],
                     "x-frame-options": [
                         "Deny"
-                    ], 
+                    ],
+                    "cache-control": [
+                        "no-store"
+                    ],
+                    "content-type": [
+                        "application/json"
+                    ],
                     "x-content-type-options": [
                         "nosniff"
-                    ], 
-                    "strict-transport-security": [
-                        "max-age=0; includeSubdomains; preload"
-                    ], 
+                    ],
                     "server": [
                         "Apache/2.4.10 (Debian)"
-                    ], 
-                    "date": [
-                        "Sat, 15 Sep 2018 04:47:43 GMT"
-                    ], 
-                    "content-type": [
-                        "application/json"
-                    ], 
-                    "cache-control": [
-                        "no-store"
-                    ]
-                }
-            }, 
-            "request": {
-                "method": "POST", 
-                "uri": "https://phab.mercurial-scm.org//api/differential.setdiffproperty", 
-                "body": "data=%7B%22date%22%3A+%220+0%22%2C+%22node%22%3A+%22d8f232f7d799e1064d3da179df41a2b5d04334e9%22%2C+%22user%22%3A+%22test%22%2C+%22parent%22%3A+%220000000000000000000000000000000000000000%22%7D&name=hg%3Ameta&diff_id=11072&api.token=cli-hahayouwish", 
-                "headers": {
-                    "accept": [
-                        "application/mercurial-0.1"
-                    ], 
-                    "content-type": [
-                        "application/x-www-form-urlencoded"
-                    ], 
-                    "host": [
-                        "phab.mercurial-scm.org"
-                    ], 
-                    "content-length": [
-                        "264"
-                    ], 
-                    "user-agent": [
-                        "mercurial/proto-1.0 (Mercurial 4.7.1+866-5f07496726a1+20180915)"
+                    ],
+                    "set-cookie": [
+                        "phsid=A%2Fwjxvlsjqmqwvcljfv6oe2sbometi3gebps6vzrlw; expires=Fri, 01-Mar-2024 00:12:28 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
+                    ],
+                    "strict-transport-security": [
+                        "max-age=0; includeSubdomains; preload"
                     ]
                 }
             }
-        }, 
+        },
         {
+            "request": {
+                "method": "POST",
+                "body": "diff_id=14303&data=%7B%22cb03845d6dd98c72bec766c7ed08c693cc49817a%22%3A+%7B%22author%22%3A+%22test%22%2C+%22authorEmail%22%3A+%22test%22%2C+%22time%22%3A+0.0%7D%7D&api.token=cli-hahayouwish&name=local%3Acommits",
+                "uri": "https://phab.mercurial-scm.org//api/differential.setdiffproperty",
+                "headers": {
+                    "content-type": [
+                        "application/x-www-form-urlencoded"
+                    ],
+                    "accept": [
+                        "application/mercurial-0.1"
+                    ],
+                    "user-agent": [
+                        "mercurial/proto-1.0 (Mercurial 4.9+477-7c86ec0ca5c5+20190303)"
+                    ],
+                    "host": [
+                        "phab.mercurial-scm.org"
+                    ],
+                    "content-length": [
+                        "227"
+                    ]
+                }
+            },
             "response": {
                 "status": {
-                    "message": "OK", 
-                    "code": 200
-                }, 
+                    "code": 200,
+                    "message": "OK"
+                },
                 "body": {
                     "string": "{\"result\":null,\"error_code\":null,\"error_info\":null}"
-                }, 
+                },
                 "headers": {
+                    "expires": [
+                        "Sat, 01 Jan 2000 00:00:00 GMT"
+                    ],
                     "x-xss-protection": [
                         "1; mode=block"
-                    ], 
-                    "expires": [
-                        "Sat, 01 Jan 2000 00:00:00 GMT"
-                    ], 
-                    "set-cookie": [
-                        "phsid=A%2Fped6v7jlldydnkfolkdmecyyjrkciqhkr7opvbt2; expires=Thu, 14-Sep-2023 04:47:44 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
-                    ], 
+                    ],
+                    "transfer-encoding": [
+                        "chunked"
+                    ],
+                    "date": [
+                        "Sun, 03 Mar 2019 00:12:29 GMT"
+                    ],
                     "x-frame-options": [
                         "Deny"
-                    ], 
+                    ],
+                    "cache-control": [
+                        "no-store"
+                    ],
+                    "content-type": [
+                        "application/json"
+                    ],
                     "x-content-type-options": [
                         "nosniff"
-                    ], 
-                    "strict-transport-security": [
-                        "max-age=0; includeSubdomains; preload"
-                    ], 
+                    ],
                     "server": [
                         "Apache/2.4.10 (Debian)"
-                    ], 
-                    "date": [
-                        "Sat, 15 Sep 2018 04:47:44 GMT"
-                    ], 
-                    "content-type": [
-                        "application/json"
-                    ], 
-                    "cache-control": [
-                        "no-store"
-                    ]
-                }
-            }, 
-            "request": {
-                "method": "POST", 
-                "uri": "https://phab.mercurial-scm.org//api/differential.setdiffproperty", 
-                "body": "data=%7B%22d8f232f7d799e1064d3da179df41a2b5d04334e9%22%3A+%7B%22time%22%3A+0.0%2C+%22author%22%3A+%22test%22%2C+%22authorEmail%22%3A+%22test%22%7D%7D&name=local%3Acommits&diff_id=11072&api.token=cli-hahayouwish", 
-                "headers": {
-                    "accept": [
-                        "application/mercurial-0.1"
-                    ], 
-                    "content-type": [
-                        "application/x-www-form-urlencoded"
-                    ], 
-                    "host": [
-                        "phab.mercurial-scm.org"
-                    ], 
-                    "content-length": [
-                        "227"
-                    ], 
-                    "user-agent": [
-                        "mercurial/proto-1.0 (Mercurial 4.7.1+866-5f07496726a1+20180915)"
+                    ],
+                    "set-cookie": [
+                        "phsid=A%2Foeyncgzaanzmnhgfc7ecvmu5pq7qju7ewq6tvgrp; expires=Fri, 01-Mar-2024 00:12:29 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
+                    ],
+                    "strict-transport-security": [
+                        "max-age=0; includeSubdomains; preload"
                     ]
                 }
             }
         }
-    ]
+    ],
+    "version": 1
 }
--- a/tests/phabricator/phabsend-update-alpha-create-beta.json	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/phabricator/phabsend-update-alpha-create-beta.json	Wed Apr 17 13:41:18 2019 -0400
@@ -1,915 +1,1025 @@
 {
-    "version": 1, 
     "interactions": [
         {
             "request": {
-                "body": "api.token=cli-hahayouwish&revisionIDs%5B0%5D=4596", 
-                "uri": "https://phab.mercurial-scm.org//api/differential.querydiffs", 
+                "method": "POST",
+                "body": "api.token=cli-hahayouwish&revisionIDs%5B0%5D=6054",
+                "uri": "https://phab.mercurial-scm.org//api/differential.querydiffs",
                 "headers": {
-                    "content-length": [
-                        "66"
-                    ], 
-                    "host": [
-                        "phab.mercurial-scm.org"
-                    ], 
                     "content-type": [
                         "application/x-www-form-urlencoded"
-                    ], 
+                    ],
                     "accept": [
                         "application/mercurial-0.1"
-                    ], 
+                    ],
                     "user-agent": [
-                        "mercurial/proto-1.0 (Mercurial 4.7.1+867-34bcd3af7109+20180915)"
+                        "mercurial/proto-1.0 (Mercurial 4.9+477-7c86ec0ca5c5+20190303)"
+                    ],
+                    "host": [
+                        "phab.mercurial-scm.org"
+                    ],
+                    "content-length": [
+                        "66"
                     ]
-                }, 
-                "method": "POST"
-            }, 
+                }
+            },
             "response": {
                 "status": {
-                    "code": 200, 
+                    "code": 200,
                     "message": "OK"
-                }, 
-                "headers": {
-                    "server": [
-                        "Apache/2.4.10 (Debian)"
-                    ], 
-                    "strict-transport-security": [
-                        "max-age=0; includeSubdomains; preload"
-                    ], 
-                    "x-frame-options": [
-                        "Deny"
-                    ], 
-                    "x-content-type-options": [
-                        "nosniff"
-                    ], 
-                    "expires": [
-                        "Sat, 01 Jan 2000 00:00:00 GMT"
-                    ], 
-                    "set-cookie": [
-                        "phsid=A%2F5bjqjyefdbiq65cc3qepzxq7ncczgfqo2xxsybaf; expires=Thu, 14-Sep-2023 04:53:46 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
-                    ], 
-                    "x-xss-protection": [
-                        "1; mode=block"
-                    ], 
-                    "content-type": [
-                        "application/json"
-                    ], 
-                    "cache-control": [
-                        "no-store"
-                    ], 
-                    "date": [
-                        "Sat, 15 Sep 2018 04:53:46 GMT"
-                    ]
-                }, 
+                },
                 "body": {
-                    "string": "{\"result\":{\"11073\":{\"id\":\"11073\",\"revisionID\":\"4596\",\"dateCreated\":\"1536986866\",\"dateModified\":\"1536986868\",\"sourceControlBaseRevision\":null,\"sourceControlPath\":null,\"sourceControlSystem\":null,\"branch\":null,\"bookmark\":null,\"creationMethod\":\"web\",\"description\":null,\"unitStatus\":\"4\",\"lintStatus\":\"4\",\"changes\":[{\"id\":\"24417\",\"metadata\":{\"line:first\":1},\"oldPath\":null,\"currentPath\":\"alpha\",\"awayPaths\":[],\"oldProperties\":[],\"newProperties\":{\"unix:filemode\":\"100644\"},\"type\":\"1\",\"fileType\":\"1\",\"commitHash\":null,\"addLines\":\"2\",\"delLines\":\"0\",\"hunks\":[{\"oldOffset\":\"0\",\"newOffset\":\"1\",\"oldLength\":\"0\",\"newLength\":\"2\",\"addLines\":null,\"delLines\":null,\"isMissingOldNewline\":null,\"isMissingNewNewline\":null,\"corpus\":\"+alpha\\n+more\\n\"}]}],\"properties\":{\"hg:meta\":{\"parent\":\"0000000000000000000000000000000000000000\",\"node\":\"f70265671c65ab4b5416e611a6bd61887c013122\",\"user\":\"test\",\"date\":\"0 0\"},\"local:commits\":{\"f70265671c65ab4b5416e611a6bd61887c013122\":{\"time\":0,\"authorEmail\":\"test\",\"author\":\"test\"}}},\"authorName\":\"test\",\"authorEmail\":\"test\"},\"11072\":{\"id\":\"11072\",\"revisionID\":\"4596\",\"dateCreated\":\"1536986860\",\"dateModified\":\"1536986862\",\"sourceControlBaseRevision\":null,\"sourceControlPath\":null,\"sourceControlSystem\":null,\"branch\":null,\"bookmark\":null,\"creationMethod\":\"web\",\"description\":null,\"unitStatus\":\"4\",\"lintStatus\":\"4\",\"changes\":[{\"id\":\"24416\",\"metadata\":{\"line:first\":1},\"oldPath\":null,\"currentPath\":\"alpha\",\"awayPaths\":[],\"oldProperties\":[],\"newProperties\":{\"unix:filemode\":\"100644\"},\"type\":\"1\",\"fileType\":\"1\",\"commitHash\":null,\"addLines\":\"1\",\"delLines\":\"0\",\"hunks\":[{\"oldOffset\":\"0\",\"newOffset\":\"1\",\"oldLength\":\"0\",\"newLength\":\"1\",\"addLines\":null,\"delLines\":null,\"isMissingOldNewline\":null,\"isMissingNewNewline\":null,\"corpus\":\"+alpha\\n\"}]}],\"properties\":{\"hg:meta\":{\"date\":\"0 0\",\"node\":\"d8f232f7d799e1064d3da179df41a2b5d04334e9\",\"user\":\"test\",\"parent\":\"0000000000000000000000000000000000000000\"},\"local:commits\":{\"d8f232f7d799e1064d3da179df41a2b5d04334e9\":{\"time\":0,\"author\":\"test\",\"authorEmail\":\"test\"}}},\"authorName\":\"test\",\"authorEmail\":\"test\"}},\"error_code\":null,\"error_info\":null}"
-                }
-            }
-        }, 
-        {
-            "request": {
-                "body": "diff_id=11073&api.token=cli-hahayouwish&data=%7B%22parent%22%3A+%220000000000000000000000000000000000000000%22%2C+%22node%22%3A+%22f70265671c65ab4b5416e611a6bd61887c013122%22%2C+%22user%22%3A+%22test%22%2C+%22date%22%3A+%220+0%22%7D&name=hg%3Ameta", 
-                "uri": "https://phab.mercurial-scm.org//api/differential.setdiffproperty", 
+                    "string": "{\"result\":{\"14303\":{\"id\":\"14303\",\"revisionID\":\"6054\",\"dateCreated\":\"1551571944\",\"dateModified\":\"1551571947\",\"sourceControlBaseRevision\":null,\"sourceControlPath\":null,\"sourceControlSystem\":null,\"branch\":null,\"bookmark\":null,\"creationMethod\":\"web\",\"description\":null,\"unitStatus\":\"4\",\"lintStatus\":\"4\",\"changes\":[{\"id\":\"32287\",\"metadata\":{\"line:first\":1},\"oldPath\":null,\"currentPath\":\"alpha\",\"awayPaths\":[],\"oldProperties\":[],\"newProperties\":{\"unix:filemode\":\"100644\"},\"type\":\"1\",\"fileType\":\"1\",\"commitHash\":null,\"addLines\":\"1\",\"delLines\":\"0\",\"hunks\":[{\"oldOffset\":\"0\",\"newOffset\":\"1\",\"oldLength\":\"0\",\"newLength\":\"1\",\"addLines\":null,\"delLines\":null,\"isMissingOldNewline\":null,\"isMissingNewNewline\":null,\"corpus\":\"+alpha\\n\"}]}],\"properties\":{\"hg:meta\":{\"user\":\"test\",\"parent\":\"0000000000000000000000000000000000000000\",\"node\":\"cb03845d6dd98c72bec766c7ed08c693cc49817a\",\"date\":\"0 0\"},\"local:commits\":{\"cb03845d6dd98c72bec766c7ed08c693cc49817a\":{\"author\":\"test\",\"authorEmail\":\"test\",\"time\":0}}},\"authorName\":\"test\",\"authorEmail\":\"test\"}},\"error_code\":null,\"error_info\":null}"
+                },
                 "headers": {
-                    "content-length": [
-                        "264"
-                    ], 
-                    "host": [
-                        "phab.mercurial-scm.org"
-                    ], 
-                    "content-type": [
-                        "application/x-www-form-urlencoded"
-                    ], 
-                    "accept": [
-                        "application/mercurial-0.1"
-                    ], 
-                    "user-agent": [
-                        "mercurial/proto-1.0 (Mercurial 4.7.1+867-34bcd3af7109+20180915)"
-                    ]
-                }, 
-                "method": "POST"
-            }, 
-            "response": {
-                "status": {
-                    "code": 200, 
-                    "message": "OK"
-                }, 
-                "headers": {
-                    "server": [
-                        "Apache/2.4.10 (Debian)"
-                    ], 
-                    "strict-transport-security": [
-                        "max-age=0; includeSubdomains; preload"
-                    ], 
-                    "x-frame-options": [
-                        "Deny"
-                    ], 
-                    "x-content-type-options": [
-                        "nosniff"
-                    ], 
                     "expires": [
                         "Sat, 01 Jan 2000 00:00:00 GMT"
-                    ], 
-                    "set-cookie": [
-                        "phsid=A%2Ff6o4ingm2wmr3ma4aht2kytfrrxvrkitj6ipkf5k; expires=Thu, 14-Sep-2023 04:53:46 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
-                    ], 
+                    ],
                     "x-xss-protection": [
                         "1; mode=block"
-                    ], 
-                    "content-type": [
-                        "application/json"
-                    ], 
+                    ],
+                    "transfer-encoding": [
+                        "chunked"
+                    ],
+                    "date": [
+                        "Sun, 03 Mar 2019 00:12:30 GMT"
+                    ],
+                    "x-frame-options": [
+                        "Deny"
+                    ],
                     "cache-control": [
                         "no-store"
-                    ], 
-                    "date": [
-                        "Sat, 15 Sep 2018 04:53:46 GMT"
-                    ]
-                }, 
-                "body": {
-                    "string": "{\"result\":null,\"error_code\":null,\"error_info\":null}"
-                }
-            }
-        }, 
-        {
-            "request": {
-                "body": "diff_id=11073&api.token=cli-hahayouwish&data=%7B%22f70265671c65ab4b5416e611a6bd61887c013122%22%3A+%7B%22time%22%3A+0.0%2C+%22authorEmail%22%3A+%22test%22%2C+%22author%22%3A+%22test%22%7D%7D&name=local%3Acommits", 
-                "uri": "https://phab.mercurial-scm.org//api/differential.setdiffproperty", 
-                "headers": {
-                    "content-length": [
-                        "227"
-                    ], 
-                    "host": [
-                        "phab.mercurial-scm.org"
-                    ], 
+                    ],
                     "content-type": [
-                        "application/x-www-form-urlencoded"
-                    ], 
-                    "accept": [
-                        "application/mercurial-0.1"
-                    ], 
-                    "user-agent": [
-                        "mercurial/proto-1.0 (Mercurial 4.7.1+867-34bcd3af7109+20180915)"
-                    ]
-                }, 
-                "method": "POST"
-            }, 
-            "response": {
-                "status": {
-                    "code": 200, 
-                    "message": "OK"
-                }, 
-                "headers": {
+                        "application/json"
+                    ],
+                    "x-content-type-options": [
+                        "nosniff"
+                    ],
                     "server": [
                         "Apache/2.4.10 (Debian)"
-                    ], 
+                    ],
+                    "set-cookie": [
+                        "phsid=A%2Fnf3xdxgvvgky277foc7s2p6xrgtsvn4bzmayrbmb; expires=Fri, 01-Mar-2024 00:12:30 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
+                    ],
                     "strict-transport-security": [
                         "max-age=0; includeSubdomains; preload"
-                    ], 
-                    "x-frame-options": [
-                        "Deny"
-                    ], 
-                    "x-content-type-options": [
-                        "nosniff"
-                    ], 
-                    "expires": [
-                        "Sat, 01 Jan 2000 00:00:00 GMT"
-                    ], 
-                    "set-cookie": [
-                        "phsid=A%2F4fitvy4kno46zkca6hq7npvuxvnh4dxlbvscmodb; expires=Thu, 14-Sep-2023 04:53:47 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
-                    ], 
-                    "x-xss-protection": [
-                        "1; mode=block"
-                    ], 
-                    "content-type": [
-                        "application/json"
-                    ], 
-                    "cache-control": [
-                        "no-store"
-                    ], 
-                    "date": [
-                        "Sat, 15 Sep 2018 04:53:47 GMT"
                     ]
-                }, 
-                "body": {
-                    "string": "{\"result\":null,\"error_code\":null,\"error_info\":null}"
                 }
             }
-        }, 
+        },
         {
             "request": {
-                "body": "api.token=cli-hahayouwish&corpus=create+alpha+for+phabricator+test%0A%0ADifferential+Revision%3A+https%3A%2F%2Fphab.mercurial-scm.org%2FD4596", 
-                "uri": "https://phab.mercurial-scm.org//api/differential.parsecommitmessage", 
+                "method": "POST",
+                "body": "constraints%5Bcallsigns%5D%5B0%5D=HG&api.token=cli-hahayouwish",
+                "uri": "https://phab.mercurial-scm.org//api/diffusion.repository.search",
                 "headers": {
-                    "content-length": [
-                        "158"
-                    ], 
-                    "host": [
-                        "phab.mercurial-scm.org"
-                    ], 
                     "content-type": [
                         "application/x-www-form-urlencoded"
-                    ], 
+                    ],
                     "accept": [
                         "application/mercurial-0.1"
-                    ], 
+                    ],
                     "user-agent": [
-                        "mercurial/proto-1.0 (Mercurial 4.7.1+867-34bcd3af7109+20180915)"
+                        "mercurial/proto-1.0 (Mercurial 4.9+477-7c86ec0ca5c5+20190303)"
+                    ],
+                    "host": [
+                        "phab.mercurial-scm.org"
+                    ],
+                    "content-length": [
+                        "79"
                     ]
-                }, 
-                "method": "POST"
-            }, 
+                }
+            },
             "response": {
                 "status": {
-                    "code": 200, 
+                    "code": 200,
                     "message": "OK"
-                }, 
+                },
+                "body": {
+                    "string": "{\"result\":{\"data\":[{\"id\":2,\"type\":\"REPO\",\"phid\":\"PHID-REPO-bvunnehri4u2isyr7bc3\",\"fields\":{\"name\":\"Mercurial\",\"vcs\":\"hg\",\"callsign\":\"HG\",\"shortName\":null,\"status\":\"active\",\"isImporting\":false,\"spacePHID\":null,\"dateCreated\":1498761653,\"dateModified\":1500403184,\"policy\":{\"view\":\"public\",\"edit\":\"admin\",\"diffusion.push\":\"users\"}},\"attachments\":{}}],\"maps\":{},\"query\":{\"queryKey\":null},\"cursor\":{\"limit\":100,\"after\":null,\"before\":null,\"order\":null}},\"error_code\":null,\"error_info\":null}"
+                },
+                "headers": {
+                    "expires": [
+                        "Sat, 01 Jan 2000 00:00:00 GMT"
+                    ],
+                    "x-xss-protection": [
+                        "1; mode=block"
+                    ],
+                    "transfer-encoding": [
+                        "chunked"
+                    ],
+                    "date": [
+                        "Sun, 03 Mar 2019 00:12:31 GMT"
+                    ],
+                    "x-frame-options": [
+                        "Deny"
+                    ],
+                    "cache-control": [
+                        "no-store"
+                    ],
+                    "content-type": [
+                        "application/json"
+                    ],
+                    "x-content-type-options": [
+                        "nosniff"
+                    ],
+                    "server": [
+                        "Apache/2.4.10 (Debian)"
+                    ],
+                    "set-cookie": [
+                        "phsid=A%2Fmlq7cl6pakmia2uecfcevwhdl3hyqe6rdb2y7usm; expires=Fri, 01-Mar-2024 00:12:31 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
+                    ],
+                    "strict-transport-security": [
+                        "max-age=0; includeSubdomains; preload"
+                    ]
+                }
+            }
+        },
+        {
+            "request": {
+                "method": "POST",
+                "body": "repositoryPHID=PHID-REPO-bvunnehri4u2isyr7bc3&api.token=cli-hahayouwish&diff=diff+--git+a%2Falpha+b%2Falpha%0Anew+file+mode+100644%0A---+%2Fdev%2Fnull%0A%2B%2B%2B+b%2Falpha%0A%40%40+-0%2C0+%2B1%2C2+%40%40%0A%2Balpha%0A%2Bmore%0A",
+                "uri": "https://phab.mercurial-scm.org//api/differential.createrawdiff",
                 "headers": {
+                    "content-type": [
+                        "application/x-www-form-urlencoded"
+                    ],
+                    "accept": [
+                        "application/mercurial-0.1"
+                    ],
+                    "user-agent": [
+                        "mercurial/proto-1.0 (Mercurial 4.9+477-7c86ec0ca5c5+20190303)"
+                    ],
+                    "host": [
+                        "phab.mercurial-scm.org"
+                    ],
+                    "content-length": [
+                        "245"
+                    ]
+                }
+            },
+            "response": {
+                "status": {
+                    "code": 200,
+                    "message": "OK"
+                },
+                "body": {
+                    "string": "{\"result\":{\"id\":14304,\"phid\":\"PHID-DIFF-3wv2fwmzp27uamb66xxg\",\"uri\":\"https:\\/\\/phab.mercurial-scm.org\\/differential\\/diff\\/14304\\/\"},\"error_code\":null,\"error_info\":null}"
+                },
+                "headers": {
+                    "expires": [
+                        "Sat, 01 Jan 2000 00:00:00 GMT"
+                    ],
+                    "x-xss-protection": [
+                        "1; mode=block"
+                    ],
+                    "transfer-encoding": [
+                        "chunked"
+                    ],
+                    "date": [
+                        "Sun, 03 Mar 2019 00:12:32 GMT"
+                    ],
+                    "x-frame-options": [
+                        "Deny"
+                    ],
+                    "cache-control": [
+                        "no-store"
+                    ],
+                    "content-type": [
+                        "application/json"
+                    ],
+                    "x-content-type-options": [
+                        "nosniff"
+                    ],
                     "server": [
                         "Apache/2.4.10 (Debian)"
-                    ], 
+                    ],
+                    "set-cookie": [
+                        "phsid=A%2Fptjtujvqlcwhzs4yhneogb323aqessc5axlu4rif; expires=Fri, 01-Mar-2024 00:12:32 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
+                    ],
                     "strict-transport-security": [
                         "max-age=0; includeSubdomains; preload"
-                    ], 
-                    "x-frame-options": [
-                        "Deny"
-                    ], 
-                    "x-content-type-options": [
-                        "nosniff"
-                    ], 
-                    "expires": [
-                        "Sat, 01 Jan 2000 00:00:00 GMT"
-                    ], 
-                    "set-cookie": [
-                        "phsid=A%2F7u2j7nsrtq2dtxqws7pnsnjyaufsamwj44e45euz; expires=Thu, 14-Sep-2023 04:53:47 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
-                    ], 
-                    "x-xss-protection": [
-                        "1; mode=block"
-                    ], 
-                    "content-type": [
-                        "application/json"
-                    ], 
-                    "cache-control": [
-                        "no-store"
-                    ], 
-                    "date": [
-                        "Sat, 15 Sep 2018 04:53:47 GMT"
                     ]
-                }, 
-                "body": {
-                    "string": "{\"result\":{\"errors\":[],\"fields\":{\"title\":\"create alpha for phabricator test\",\"revisionID\":4596},\"revisionIDFieldInfo\":{\"value\":4596,\"validDomain\":\"https:\\/\\/phab.mercurial-scm.org\"}},\"error_code\":null,\"error_info\":null}"
                 }
             }
-        }, 
+        },
         {
             "request": {
-                "body": "api.token=cli-hahayouwish&objectIdentifier=4596&transactions%5B0%5D%5Btype%5D=title&transactions%5B0%5D%5Bvalue%5D=create+alpha+for+phabricator+test", 
-                "uri": "https://phab.mercurial-scm.org//api/differential.revision.edit", 
+                "method": "POST",
+                "body": "diff_id=14304&data=%7B%22user%22%3A+%22test%22%2C+%22parent%22%3A+%220000000000000000000000000000000000000000%22%2C+%22node%22%3A+%22939d862f03181a366fea64a540baf0bb33f85d92%22%2C+%22date%22%3A+%220+0%22%7D&api.token=cli-hahayouwish&name=hg%3Ameta",
+                "uri": "https://phab.mercurial-scm.org//api/differential.setdiffproperty",
                 "headers": {
-                    "content-length": [
-                        "165"
-                    ], 
-                    "host": [
-                        "phab.mercurial-scm.org"
-                    ], 
                     "content-type": [
                         "application/x-www-form-urlencoded"
-                    ], 
+                    ],
                     "accept": [
                         "application/mercurial-0.1"
-                    ], 
+                    ],
                     "user-agent": [
-                        "mercurial/proto-1.0 (Mercurial 4.7.1+867-34bcd3af7109+20180915)"
+                        "mercurial/proto-1.0 (Mercurial 4.9+477-7c86ec0ca5c5+20190303)"
+                    ],
+                    "host": [
+                        "phab.mercurial-scm.org"
+                    ],
+                    "content-length": [
+                        "264"
                     ]
-                }, 
-                "method": "POST"
-            }, 
+                }
+            },
             "response": {
                 "status": {
-                    "code": 200, 
+                    "code": 200,
                     "message": "OK"
-                }, 
+                },
+                "body": {
+                    "string": "{\"result\":null,\"error_code\":null,\"error_info\":null}"
+                },
                 "headers": {
-                    "server": [
-                        "Apache/2.4.10 (Debian)"
-                    ], 
-                    "strict-transport-security": [
-                        "max-age=0; includeSubdomains; preload"
-                    ], 
-                    "x-frame-options": [
-                        "Deny"
-                    ], 
-                    "x-content-type-options": [
-                        "nosniff"
-                    ], 
                     "expires": [
                         "Sat, 01 Jan 2000 00:00:00 GMT"
-                    ], 
-                    "set-cookie": [
-                        "phsid=A%2F7ubtculubfazivfxjxbmnyt3wzjcgdxnfdn57t42; expires=Thu, 14-Sep-2023 04:53:48 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
-                    ], 
+                    ],
                     "x-xss-protection": [
                         "1; mode=block"
-                    ], 
-                    "content-type": [
-                        "application/json"
-                    ], 
+                    ],
+                    "transfer-encoding": [
+                        "chunked"
+                    ],
+                    "date": [
+                        "Sun, 03 Mar 2019 00:12:32 GMT"
+                    ],
+                    "x-frame-options": [
+                        "Deny"
+                    ],
                     "cache-control": [
                         "no-store"
-                    ], 
-                    "date": [
-                        "Sat, 15 Sep 2018 04:53:47 GMT"
+                    ],
+                    "content-type": [
+                        "application/json"
+                    ],
+                    "x-content-type-options": [
+                        "nosniff"
+                    ],
+                    "server": [
+                        "Apache/2.4.10 (Debian)"
+                    ],
+                    "set-cookie": [
+                        "phsid=A%2Feho2462w6mulsjeoz3e4rwgf37aekqwgpqmarn2f; expires=Fri, 01-Mar-2024 00:12:32 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
+                    ],
+                    "strict-transport-security": [
+                        "max-age=0; includeSubdomains; preload"
                     ]
-                }, 
-                "body": {
-                    "string": "{\"result\":{\"object\":{\"id\":\"4596\",\"phid\":\"PHID-DREV-bntcdwe74cw3vwkzt6nq\"},\"transactions\":[]},\"error_code\":null,\"error_info\":null}"
                 }
             }
-        }, 
+        },
         {
             "request": {
-                "body": "api.token=cli-hahayouwish&constraints%5Bcallsigns%5D%5B0%5D=HG", 
-                "uri": "https://phab.mercurial-scm.org//api/diffusion.repository.search", 
+                "method": "POST",
+                "body": "diff_id=14304&data=%7B%22939d862f03181a366fea64a540baf0bb33f85d92%22%3A+%7B%22author%22%3A+%22test%22%2C+%22authorEmail%22%3A+%22test%22%2C+%22time%22%3A+0.0%7D%7D&api.token=cli-hahayouwish&name=local%3Acommits",
+                "uri": "https://phab.mercurial-scm.org//api/differential.setdiffproperty",
                 "headers": {
-                    "content-length": [
-                        "79"
-                    ], 
-                    "host": [
-                        "phab.mercurial-scm.org"
-                    ], 
                     "content-type": [
                         "application/x-www-form-urlencoded"
-                    ], 
+                    ],
                     "accept": [
                         "application/mercurial-0.1"
-                    ], 
+                    ],
                     "user-agent": [
-                        "mercurial/proto-1.0 (Mercurial 4.7.1+867-34bcd3af7109+20180915)"
+                        "mercurial/proto-1.0 (Mercurial 4.9+477-7c86ec0ca5c5+20190303)"
+                    ],
+                    "host": [
+                        "phab.mercurial-scm.org"
+                    ],
+                    "content-length": [
+                        "227"
                     ]
-                }, 
-                "method": "POST"
-            }, 
+                }
+            },
             "response": {
                 "status": {
-                    "code": 200, 
+                    "code": 200,
                     "message": "OK"
-                }, 
+                },
+                "body": {
+                    "string": "{\"result\":null,\"error_code\":null,\"error_info\":null}"
+                },
                 "headers": {
-                    "server": [
-                        "Apache/2.4.10 (Debian)"
-                    ], 
-                    "strict-transport-security": [
-                        "max-age=0; includeSubdomains; preload"
-                    ], 
+                    "expires": [
+                        "Sat, 01 Jan 2000 00:00:00 GMT"
+                    ],
+                    "x-xss-protection": [
+                        "1; mode=block"
+                    ],
+                    "transfer-encoding": [
+                        "chunked"
+                    ],
+                    "date": [
+                        "Sun, 03 Mar 2019 00:12:33 GMT"
+                    ],
                     "x-frame-options": [
                         "Deny"
-                    ], 
+                    ],
+                    "cache-control": [
+                        "no-store"
+                    ],
+                    "content-type": [
+                        "application/json"
+                    ],
                     "x-content-type-options": [
                         "nosniff"
-                    ], 
-                    "expires": [
-                        "Sat, 01 Jan 2000 00:00:00 GMT"
-                    ], 
+                    ],
+                    "server": [
+                        "Apache/2.4.10 (Debian)"
+                    ],
                     "set-cookie": [
-                        "phsid=A%2Fdpvy3rwephm5krs7posuadvjmkh7o7wbytgdhisv; expires=Thu, 14-Sep-2023 04:53:48 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
-                    ], 
-                    "x-xss-protection": [
-                        "1; mode=block"
-                    ], 
-                    "content-type": [
-                        "application/json"
-                    ], 
-                    "cache-control": [
-                        "no-store"
-                    ], 
-                    "date": [
-                        "Sat, 15 Sep 2018 04:53:48 GMT"
+                        "phsid=A%2F4ca3h5qhtwgn55t3zznczixyt2st4tm44t23aceg; expires=Fri, 01-Mar-2024 00:12:33 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
+                    ],
+                    "strict-transport-security": [
+                        "max-age=0; includeSubdomains; preload"
                     ]
-                }, 
-                "body": {
-                    "string": "{\"result\":{\"data\":[{\"id\":2,\"type\":\"REPO\",\"phid\":\"PHID-REPO-bvunnehri4u2isyr7bc3\",\"fields\":{\"name\":\"Mercurial\",\"vcs\":\"hg\",\"callsign\":\"HG\",\"shortName\":null,\"status\":\"active\",\"isImporting\":false,\"spacePHID\":null,\"dateCreated\":1498761653,\"dateModified\":1500403184,\"policy\":{\"view\":\"public\",\"edit\":\"admin\",\"diffusion.push\":\"users\"}},\"attachments\":{}}],\"maps\":{},\"query\":{\"queryKey\":null},\"cursor\":{\"limit\":100,\"after\":null,\"before\":null,\"order\":null}},\"error_code\":null,\"error_info\":null}"
                 }
             }
-        }, 
+        },
         {
             "request": {
-                "body": "api.token=cli-hahayouwish&diff=diff+--git+a%2Fbeta+b%2Fbeta%0Anew+file+mode+100644%0A---+%2Fdev%2Fnull%0A%2B%2B%2B+b%2Fbeta%0A%40%40+-0%2C0+%2B1%2C1+%40%40%0A%2Bbeta%0A&repositoryPHID=PHID-REPO-bvunnehri4u2isyr7bc3", 
-                "uri": "https://phab.mercurial-scm.org//api/differential.createrawdiff", 
+                "method": "POST",
+                "body": "api.token=cli-hahayouwish&corpus=create+alpha+for+phabricator+test+%E2%82%AC%0A%0ADifferential+Revision%3A+https%3A%2F%2Fphab.mercurial-scm.org%2FD6054",
+                "uri": "https://phab.mercurial-scm.org//api/differential.parsecommitmessage",
                 "headers": {
-                    "content-length": [
-                        "231"
-                    ], 
-                    "host": [
-                        "phab.mercurial-scm.org"
-                    ], 
                     "content-type": [
                         "application/x-www-form-urlencoded"
-                    ], 
+                    ],
                     "accept": [
                         "application/mercurial-0.1"
-                    ], 
+                    ],
                     "user-agent": [
-                        "mercurial/proto-1.0 (Mercurial 4.7.1+867-34bcd3af7109+20180915)"
+                        "mercurial/proto-1.0 (Mercurial 4.9+477-7c86ec0ca5c5+20190303)"
+                    ],
+                    "host": [
+                        "phab.mercurial-scm.org"
+                    ],
+                    "content-length": [
+                        "168"
                     ]
-                }, 
-                "method": "POST"
-            }, 
+                }
+            },
             "response": {
                 "status": {
-                    "code": 200, 
+                    "code": 200,
                     "message": "OK"
-                }, 
+                },
+                "body": {
+                    "string": "{\"result\":{\"errors\":[],\"fields\":{\"title\":\"create alpha for phabricator test \\u20ac\",\"revisionID\":6054},\"revisionIDFieldInfo\":{\"value\":6054,\"validDomain\":\"https:\\/\\/phab.mercurial-scm.org\"}},\"error_code\":null,\"error_info\":null}"
+                },
                 "headers": {
+                    "expires": [
+                        "Sat, 01 Jan 2000 00:00:00 GMT"
+                    ],
+                    "x-xss-protection": [
+                        "1; mode=block"
+                    ],
+                    "transfer-encoding": [
+                        "chunked"
+                    ],
+                    "date": [
+                        "Sun, 03 Mar 2019 00:12:34 GMT"
+                    ],
+                    "x-frame-options": [
+                        "Deny"
+                    ],
+                    "cache-control": [
+                        "no-store"
+                    ],
+                    "content-type": [
+                        "application/json"
+                    ],
+                    "x-content-type-options": [
+                        "nosniff"
+                    ],
                     "server": [
                         "Apache/2.4.10 (Debian)"
-                    ], 
+                    ],
+                    "set-cookie": [
+                        "phsid=A%2F7pvtbpw2waiblbsbydew3vfpulqnccf4647ymipq; expires=Fri, 01-Mar-2024 00:12:34 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
+                    ],
                     "strict-transport-security": [
                         "max-age=0; includeSubdomains; preload"
-                    ], 
-                    "x-frame-options": [
-                        "Deny"
-                    ], 
-                    "x-content-type-options": [
-                        "nosniff"
-                    ], 
+                    ]
+                }
+            }
+        },
+        {
+            "request": {
+                "method": "POST",
+                "body": "api.token=cli-hahayouwish&transactions%5B0%5D%5Btype%5D=update&transactions%5B0%5D%5Bvalue%5D=PHID-DIFF-3wv2fwmzp27uamb66xxg&transactions%5B1%5D%5Btype%5D=title&transactions%5B1%5D%5Bvalue%5D=create+alpha+for+phabricator+test+%E2%82%AC&objectIdentifier=6054",
+                "uri": "https://phab.mercurial-scm.org//api/differential.revision.edit",
+                "headers": {
+                    "content-type": [
+                        "application/x-www-form-urlencoded"
+                    ],
+                    "accept": [
+                        "application/mercurial-0.1"
+                    ],
+                    "user-agent": [
+                        "mercurial/proto-1.0 (Mercurial 4.9+477-7c86ec0ca5c5+20190303)"
+                    ],
+                    "host": [
+                        "phab.mercurial-scm.org"
+                    ],
+                    "content-length": [
+                        "274"
+                    ]
+                }
+            },
+            "response": {
+                "status": {
+                    "code": 200,
+                    "message": "OK"
+                },
+                "body": {
+                    "string": "{\"result\":{\"object\":{\"id\":\"6054\",\"phid\":\"PHID-DREV-6pczsbtdpqjc2nskmxwy\"},\"transactions\":[{\"phid\":\"PHID-XACT-DREV-mc2gfyoyhkfz7dy\"}]},\"error_code\":null,\"error_info\":null}"
+                },
+                "headers": {
                     "expires": [
                         "Sat, 01 Jan 2000 00:00:00 GMT"
-                    ], 
-                    "set-cookie": [
-                        "phsid=A%2Fafqgsnm7vbqi3vyfg5c7xgxyiv7fgi77vauw6wnv; expires=Thu, 14-Sep-2023 04:53:49 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
-                    ], 
+                    ],
                     "x-xss-protection": [
                         "1; mode=block"
-                    ], 
+                    ],
+                    "transfer-encoding": [
+                        "chunked"
+                    ],
+                    "date": [
+                        "Sun, 03 Mar 2019 00:12:34 GMT"
+                    ],
+                    "x-frame-options": [
+                        "Deny"
+                    ],
+                    "cache-control": [
+                        "no-store"
+                    ],
                     "content-type": [
                         "application/json"
-                    ], 
-                    "cache-control": [
-                        "no-store"
-                    ], 
-                    "date": [
-                        "Sat, 15 Sep 2018 04:53:49 GMT"
+                    ],
+                    "x-content-type-options": [
+                        "nosniff"
+                    ],
+                    "server": [
+                        "Apache/2.4.10 (Debian)"
+                    ],
+                    "set-cookie": [
+                        "phsid=A%2Fhmyuw3lg6h4joaswqnfcmnzdkp6p2qxotsvahb7l; expires=Fri, 01-Mar-2024 00:12:34 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
+                    ],
+                    "strict-transport-security": [
+                        "max-age=0; includeSubdomains; preload"
                     ]
-                }, 
-                "body": {
-                    "string": "{\"result\":{\"id\":11074,\"phid\":\"PHID-DIFF-sitmath22fwgsfsbdmne\",\"uri\":\"https:\\/\\/phab.mercurial-scm.org\\/differential\\/diff\\/11074\\/\"},\"error_code\":null,\"error_info\":null}"
                 }
             }
-        }, 
+        },
         {
             "request": {
-                "body": "diff_id=11074&api.token=cli-hahayouwish&data=%7B%22parent%22%3A+%22f70265671c65ab4b5416e611a6bd61887c013122%22%2C+%22node%22%3A+%221a5640df7bbfc26fc4f6ef38e4d1581d5b2a3122%22%2C+%22user%22%3A+%22test%22%2C+%22date%22%3A+%220+0%22%7D&name=hg%3Ameta", 
-                "uri": "https://phab.mercurial-scm.org//api/differential.setdiffproperty", 
+                "method": "POST",
+                "body": "repositoryPHID=PHID-REPO-bvunnehri4u2isyr7bc3&api.token=cli-hahayouwish&diff=diff+--git+a%2Fbeta+b%2Fbeta%0Anew+file+mode+100644%0A---+%2Fdev%2Fnull%0A%2B%2B%2B+b%2Fbeta%0A%40%40+-0%2C0+%2B1%2C1+%40%40%0A%2Bbeta%0A",
+                "uri": "https://phab.mercurial-scm.org//api/differential.createrawdiff",
                 "headers": {
-                    "content-length": [
-                        "264"
-                    ], 
+                    "content-type": [
+                        "application/x-www-form-urlencoded"
+                    ],
+                    "accept": [
+                        "application/mercurial-0.1"
+                    ],
+                    "user-agent": [
+                        "mercurial/proto-1.0 (Mercurial 4.9+477-7c86ec0ca5c5+20190303)"
+                    ],
                     "host": [
                         "phab.mercurial-scm.org"
-                    ], 
-                    "content-type": [
-                        "application/x-www-form-urlencoded"
-                    ], 
-                    "accept": [
-                        "application/mercurial-0.1"
-                    ], 
-                    "user-agent": [
-                        "mercurial/proto-1.0 (Mercurial 4.7.1+867-34bcd3af7109+20180915)"
+                    ],
+                    "content-length": [
+                        "231"
                     ]
-                }, 
-                "method": "POST"
-            }, 
+                }
+            },
             "response": {
                 "status": {
-                    "code": 200, 
+                    "code": 200,
                     "message": "OK"
-                }, 
+                },
+                "body": {
+                    "string": "{\"result\":{\"id\":14305,\"phid\":\"PHID-DIFF-pofynzhmmqm2czm33teg\",\"uri\":\"https:\\/\\/phab.mercurial-scm.org\\/differential\\/diff\\/14305\\/\"},\"error_code\":null,\"error_info\":null}"
+                },
                 "headers": {
-                    "server": [
-                        "Apache/2.4.10 (Debian)"
-                    ], 
-                    "strict-transport-security": [
-                        "max-age=0; includeSubdomains; preload"
-                    ], 
+                    "expires": [
+                        "Sat, 01 Jan 2000 00:00:00 GMT"
+                    ],
+                    "x-xss-protection": [
+                        "1; mode=block"
+                    ],
+                    "transfer-encoding": [
+                        "chunked"
+                    ],
+                    "date": [
+                        "Sun, 03 Mar 2019 00:12:35 GMT"
+                    ],
                     "x-frame-options": [
                         "Deny"
-                    ], 
-                    "x-content-type-options": [
-                        "nosniff"
-                    ], 
-                    "expires": [
-                        "Sat, 01 Jan 2000 00:00:00 GMT"
-                    ], 
-                    "set-cookie": [
-                        "phsid=A%2Frvpld6nyjmtrq3qynmldbquhgwbrhcdhythbot6r; expires=Thu, 14-Sep-2023 04:53:49 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
-                    ], 
-                    "x-xss-protection": [
-                        "1; mode=block"
-                    ], 
+                    ],
+                    "cache-control": [
+                        "no-store"
+                    ],
                     "content-type": [
                         "application/json"
-                    ], 
-                    "cache-control": [
-                        "no-store"
-                    ], 
-                    "date": [
-                        "Sat, 15 Sep 2018 04:53:49 GMT"
+                    ],
+                    "x-content-type-options": [
+                        "nosniff"
+                    ],
+                    "server": [
+                        "Apache/2.4.10 (Debian)"
+                    ],
+                    "set-cookie": [
+                        "phsid=A%2F2xpzt6bryn7n3gug3ll7iu2gfqyy4zss5d7nolew; expires=Fri, 01-Mar-2024 00:12:35 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
+                    ],
+                    "strict-transport-security": [
+                        "max-age=0; includeSubdomains; preload"
                     ]
-                }, 
-                "body": {
-                    "string": "{\"result\":null,\"error_code\":null,\"error_info\":null}"
                 }
             }
-        }, 
+        },
         {
             "request": {
-                "body": "diff_id=11074&api.token=cli-hahayouwish&data=%7B%221a5640df7bbfc26fc4f6ef38e4d1581d5b2a3122%22%3A+%7B%22time%22%3A+0.0%2C+%22authorEmail%22%3A+%22test%22%2C+%22author%22%3A+%22test%22%7D%7D&name=local%3Acommits", 
-                "uri": "https://phab.mercurial-scm.org//api/differential.setdiffproperty", 
+                "method": "POST",
+                "body": "diff_id=14305&data=%7B%22user%22%3A+%22test%22%2C+%22parent%22%3A+%22939d862f03181a366fea64a540baf0bb33f85d92%22%2C+%22node%22%3A+%22f55f947ed0f8ad80a04b7e87a0bf9febda2070b1%22%2C+%22date%22%3A+%220+0%22%7D&api.token=cli-hahayouwish&name=hg%3Ameta",
+                "uri": "https://phab.mercurial-scm.org//api/differential.setdiffproperty",
                 "headers": {
-                    "content-length": [
-                        "227"
-                    ], 
-                    "host": [
-                        "phab.mercurial-scm.org"
-                    ], 
                     "content-type": [
                         "application/x-www-form-urlencoded"
-                    ], 
+                    ],
                     "accept": [
                         "application/mercurial-0.1"
-                    ], 
+                    ],
                     "user-agent": [
-                        "mercurial/proto-1.0 (Mercurial 4.7.1+867-34bcd3af7109+20180915)"
+                        "mercurial/proto-1.0 (Mercurial 4.9+477-7c86ec0ca5c5+20190303)"
+                    ],
+                    "host": [
+                        "phab.mercurial-scm.org"
+                    ],
+                    "content-length": [
+                        "264"
                     ]
-                }, 
-                "method": "POST"
-            }, 
+                }
+            },
             "response": {
                 "status": {
-                    "code": 200, 
+                    "code": 200,
                     "message": "OK"
-                }, 
+                },
+                "body": {
+                    "string": "{\"result\":null,\"error_code\":null,\"error_info\":null}"
+                },
                 "headers": {
-                    "server": [
-                        "Apache/2.4.10 (Debian)"
-                    ], 
-                    "strict-transport-security": [
-                        "max-age=0; includeSubdomains; preload"
-                    ], 
-                    "x-frame-options": [
-                        "Deny"
-                    ], 
-                    "x-content-type-options": [
-                        "nosniff"
-                    ], 
                     "expires": [
                         "Sat, 01 Jan 2000 00:00:00 GMT"
-                    ], 
-                    "set-cookie": [
-                        "phsid=A%2Flpkv333zitgztqx2clpg2uibjy633myliembguf2; expires=Thu, 14-Sep-2023 04:53:50 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
-                    ], 
+                    ],
                     "x-xss-protection": [
                         "1; mode=block"
-                    ], 
-                    "content-type": [
-                        "application/json"
-                    ], 
+                    ],
+                    "transfer-encoding": [
+                        "chunked"
+                    ],
+                    "date": [
+                        "Sun, 03 Mar 2019 00:12:36 GMT"
+                    ],
+                    "x-frame-options": [
+                        "Deny"
+                    ],
                     "cache-control": [
                         "no-store"
-                    ], 
-                    "date": [
-                        "Sat, 15 Sep 2018 04:53:49 GMT"
-                    ]
-                }, 
-                "body": {
-                    "string": "{\"result\":null,\"error_code\":null,\"error_info\":null}"
-                }
-            }
-        }, 
-        {
-            "request": {
-                "body": "api.token=cli-hahayouwish&corpus=create+beta+for+phabricator+test", 
-                "uri": "https://phab.mercurial-scm.org//api/differential.parsecommitmessage", 
-                "headers": {
-                    "content-length": [
-                        "82"
-                    ], 
-                    "host": [
-                        "phab.mercurial-scm.org"
-                    ], 
+                    ],
                     "content-type": [
-                        "application/x-www-form-urlencoded"
-                    ], 
-                    "accept": [
-                        "application/mercurial-0.1"
-                    ], 
-                    "user-agent": [
-                        "mercurial/proto-1.0 (Mercurial 4.7.1+867-34bcd3af7109+20180915)"
-                    ]
-                }, 
-                "method": "POST"
-            }, 
-            "response": {
-                "status": {
-                    "code": 200, 
-                    "message": "OK"
-                }, 
-                "headers": {
+                        "application/json"
+                    ],
+                    "x-content-type-options": [
+                        "nosniff"
+                    ],
                     "server": [
                         "Apache/2.4.10 (Debian)"
-                    ], 
+                    ],
+                    "set-cookie": [
+                        "phsid=A%2Fygzbpe74xh6shrejkd3tj32t4gaqnvumy63iudrd; expires=Fri, 01-Mar-2024 00:12:36 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
+                    ],
                     "strict-transport-security": [
                         "max-age=0; includeSubdomains; preload"
-                    ], 
-                    "x-frame-options": [
-                        "Deny"
-                    ], 
-                    "x-content-type-options": [
-                        "nosniff"
-                    ], 
-                    "expires": [
-                        "Sat, 01 Jan 2000 00:00:00 GMT"
-                    ], 
-                    "set-cookie": [
-                        "phsid=A%2Fav6ovbqxoy3dijysouoabcz7jqescejugeedwspi; expires=Thu, 14-Sep-2023 04:53:50 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
-                    ], 
-                    "x-xss-protection": [
-                        "1; mode=block"
-                    ], 
-                    "content-type": [
-                        "application/json"
-                    ], 
-                    "cache-control": [
-                        "no-store"
-                    ], 
-                    "date": [
-                        "Sat, 15 Sep 2018 04:53:50 GMT"
                     ]
-                }, 
-                "body": {
-                    "string": "{\"result\":{\"errors\":[],\"fields\":{\"title\":\"create beta for phabricator test\"},\"revisionIDFieldInfo\":{\"value\":null,\"validDomain\":\"https:\\/\\/phab.mercurial-scm.org\"}},\"error_code\":null,\"error_info\":null}"
                 }
             }
-        }, 
+        },
         {
             "request": {
-                "body": "api.token=cli-hahayouwish&transactions%5B0%5D%5Btype%5D=update&transactions%5B0%5D%5Bvalue%5D=PHID-DIFF-sitmath22fwgsfsbdmne&transactions%5B1%5D%5Btype%5D=summary&transactions%5B1%5D%5Bvalue%5D=Depends+on+D4596&transactions%5B2%5D%5Btype%5D=summary&transactions%5B2%5D%5Bvalue%5D=+&transactions%5B3%5D%5Btype%5D=title&transactions%5B3%5D%5Bvalue%5D=create+beta+for+phabricator+test", 
-                "uri": "https://phab.mercurial-scm.org//api/differential.revision.edit", 
+                "method": "POST",
+                "body": "diff_id=14305&data=%7B%22f55f947ed0f8ad80a04b7e87a0bf9febda2070b1%22%3A+%7B%22author%22%3A+%22test%22%2C+%22authorEmail%22%3A+%22test%22%2C+%22time%22%3A+0.0%7D%7D&api.token=cli-hahayouwish&name=local%3Acommits",
+                "uri": "https://phab.mercurial-scm.org//api/differential.setdiffproperty",
                 "headers": {
-                    "content-length": [
-                        "398"
-                    ], 
-                    "host": [
-                        "phab.mercurial-scm.org"
-                    ], 
                     "content-type": [
                         "application/x-www-form-urlencoded"
-                    ], 
+                    ],
                     "accept": [
                         "application/mercurial-0.1"
-                    ], 
+                    ],
                     "user-agent": [
-                        "mercurial/proto-1.0 (Mercurial 4.7.1+867-34bcd3af7109+20180915)"
+                        "mercurial/proto-1.0 (Mercurial 4.9+477-7c86ec0ca5c5+20190303)"
+                    ],
+                    "host": [
+                        "phab.mercurial-scm.org"
+                    ],
+                    "content-length": [
+                        "227"
                     ]
-                }, 
-                "method": "POST"
-            }, 
+                }
+            },
             "response": {
                 "status": {
-                    "code": 200, 
+                    "code": 200,
                     "message": "OK"
-                }, 
+                },
+                "body": {
+                    "string": "{\"result\":null,\"error_code\":null,\"error_info\":null}"
+                },
+                "headers": {
+                    "expires": [
+                        "Sat, 01 Jan 2000 00:00:00 GMT"
+                    ],
+                    "x-xss-protection": [
+                        "1; mode=block"
+                    ],
+                    "transfer-encoding": [
+                        "chunked"
+                    ],
+                    "date": [
+                        "Sun, 03 Mar 2019 00:12:37 GMT"
+                    ],
+                    "x-frame-options": [
+                        "Deny"
+                    ],
+                    "cache-control": [
+                        "no-store"
+                    ],
+                    "content-type": [
+                        "application/json"
+                    ],
+                    "x-content-type-options": [
+                        "nosniff"
+                    ],
+                    "server": [
+                        "Apache/2.4.10 (Debian)"
+                    ],
+                    "set-cookie": [
+                        "phsid=A%2Fgw67yfcsx7vvxkymeac52ca5is4jkxjwqqkhayco; expires=Fri, 01-Mar-2024 00:12:37 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
+                    ],
+                    "strict-transport-security": [
+                        "max-age=0; includeSubdomains; preload"
+                    ]
+                }
+            }
+        },
+        {
+            "request": {
+                "method": "POST",
+                "body": "api.token=cli-hahayouwish&corpus=create+beta+for+phabricator+test",
+                "uri": "https://phab.mercurial-scm.org//api/differential.parsecommitmessage",
                 "headers": {
+                    "content-type": [
+                        "application/x-www-form-urlencoded"
+                    ],
+                    "accept": [
+                        "application/mercurial-0.1"
+                    ],
+                    "user-agent": [
+                        "mercurial/proto-1.0 (Mercurial 4.9+477-7c86ec0ca5c5+20190303)"
+                    ],
+                    "host": [
+                        "phab.mercurial-scm.org"
+                    ],
+                    "content-length": [
+                        "82"
+                    ]
+                }
+            },
+            "response": {
+                "status": {
+                    "code": 200,
+                    "message": "OK"
+                },
+                "body": {
+                    "string": "{\"result\":{\"errors\":[],\"fields\":{\"title\":\"create beta for phabricator test\"},\"revisionIDFieldInfo\":{\"value\":null,\"validDomain\":\"https:\\/\\/phab.mercurial-scm.org\"}},\"error_code\":null,\"error_info\":null}"
+                },
+                "headers": {
+                    "expires": [
+                        "Sat, 01 Jan 2000 00:00:00 GMT"
+                    ],
+                    "x-xss-protection": [
+                        "1; mode=block"
+                    ],
+                    "transfer-encoding": [
+                        "chunked"
+                    ],
+                    "date": [
+                        "Sun, 03 Mar 2019 00:12:37 GMT"
+                    ],
+                    "x-frame-options": [
+                        "Deny"
+                    ],
+                    "cache-control": [
+                        "no-store"
+                    ],
+                    "content-type": [
+                        "application/json"
+                    ],
+                    "x-content-type-options": [
+                        "nosniff"
+                    ],
                     "server": [
                         "Apache/2.4.10 (Debian)"
-                    ], 
+                    ],
+                    "set-cookie": [
+                        "phsid=A%2Fyt5ejs6pgvjdxzms7geaxup63jpqkisngu3cprk6; expires=Fri, 01-Mar-2024 00:12:37 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
+                    ],
                     "strict-transport-security": [
                         "max-age=0; includeSubdomains; preload"
-                    ], 
-                    "x-frame-options": [
-                        "Deny"
-                    ], 
-                    "x-content-type-options": [
-                        "nosniff"
-                    ], 
-                    "expires": [
-                        "Sat, 01 Jan 2000 00:00:00 GMT"
-                    ], 
-                    "set-cookie": [
-                        "phsid=A%2Fywrdtdafcn5p267qiqfgfh7h4buaqxmnrgan6fh2; expires=Thu, 14-Sep-2023 04:53:50 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
-                    ], 
-                    "x-xss-protection": [
-                        "1; mode=block"
-                    ], 
-                    "content-type": [
-                        "application/json"
-                    ], 
-                    "cache-control": [
-                        "no-store"
-                    ], 
-                    "date": [
-                        "Sat, 15 Sep 2018 04:53:50 GMT"
                     ]
-                }, 
-                "body": {
-                    "string": "{\"result\":{\"object\":{\"id\":4597,\"phid\":\"PHID-DREV-as7flhipq636gqvnyrsf\"},\"transactions\":[{\"phid\":\"PHID-XACT-DREV-bwzosyyqmzlhe6g\"},{\"phid\":\"PHID-XACT-DREV-ina5ktuwp6eiwv6\"},{\"phid\":\"PHID-XACT-DREV-22bjztn3szeyicy\"},{\"phid\":\"PHID-XACT-DREV-kcv6zk2yboepbmo\"},{\"phid\":\"PHID-XACT-DREV-mnbp6f6sq54hzs2\"},{\"phid\":\"PHID-XACT-DREV-qlakltzsdzclpha\"},{\"phid\":\"PHID-XACT-DREV-a5347cobhvqnc22\"},{\"phid\":\"PHID-XACT-DREV-sciqq5cqfuqfh67\"}]},\"error_code\":null,\"error_info\":null}"
                 }
             }
-        }, 
+        },
         {
             "request": {
-                "body": "api.token=cli-hahayouwish&ids%5B0%5D=4596&ids%5B1%5D=4597", 
-                "uri": "https://phab.mercurial-scm.org//api/differential.query", 
+                "method": "POST",
+                "body": "transactions%5B0%5D%5Btype%5D=update&transactions%5B0%5D%5Bvalue%5D=PHID-DIFF-pofynzhmmqm2czm33teg&transactions%5B1%5D%5Btype%5D=summary&transactions%5B1%5D%5Bvalue%5D=Depends+on+D6054&transactions%5B2%5D%5Btype%5D=summary&transactions%5B2%5D%5Bvalue%5D=+&transactions%5B3%5D%5Btype%5D=title&transactions%5B3%5D%5Bvalue%5D=create+beta+for+phabricator+test&api.token=cli-hahayouwish",
+                "uri": "https://phab.mercurial-scm.org//api/differential.revision.edit",
                 "headers": {
-                    "content-length": [
-                        "74"
-                    ], 
-                    "host": [
-                        "phab.mercurial-scm.org"
-                    ], 
                     "content-type": [
                         "application/x-www-form-urlencoded"
-                    ], 
+                    ],
                     "accept": [
                         "application/mercurial-0.1"
-                    ], 
+                    ],
                     "user-agent": [
-                        "mercurial/proto-1.0 (Mercurial 4.7.1+867-34bcd3af7109+20180915)"
+                        "mercurial/proto-1.0 (Mercurial 4.9+477-7c86ec0ca5c5+20190303)"
+                    ],
+                    "host": [
+                        "phab.mercurial-scm.org"
+                    ],
+                    "content-length": [
+                        "398"
                     ]
-                }, 
-                "method": "POST"
-            }, 
+                }
+            },
             "response": {
                 "status": {
-                    "code": 200, 
+                    "code": 200,
                     "message": "OK"
-                }, 
+                },
+                "body": {
+                    "string": "{\"result\":{\"object\":{\"id\":6055,\"phid\":\"PHID-DREV-k2hin2iytzuvu3j5icm3\"},\"transactions\":[{\"phid\":\"PHID-XACT-DREV-3xjvwemev7dqsj3\"},{\"phid\":\"PHID-XACT-DREV-giypqlavgemr56i\"},{\"phid\":\"PHID-XACT-DREV-tcfqd4aj6rxtxzz\"},{\"phid\":\"PHID-XACT-DREV-2timgnudaxeln7a\"},{\"phid\":\"PHID-XACT-DREV-vb6564lrsxpsw4l\"},{\"phid\":\"PHID-XACT-DREV-maym4xi2tdhysvo\"},{\"phid\":\"PHID-XACT-DREV-bna5heyckxkk5ke\"},{\"phid\":\"PHID-XACT-DREV-b2eig3stbdic7k7\"}]},\"error_code\":null,\"error_info\":null}"
+                },
                 "headers": {
-                    "server": [
-                        "Apache/2.4.10 (Debian)"
-                    ], 
-                    "strict-transport-security": [
-                        "max-age=0; includeSubdomains; preload"
-                    ], 
-                    "x-frame-options": [
-                        "Deny"
-                    ], 
-                    "x-content-type-options": [
-                        "nosniff"
-                    ], 
                     "expires": [
                         "Sat, 01 Jan 2000 00:00:00 GMT"
-                    ], 
-                    "set-cookie": [
-                        "phsid=A%2F2iio6iugurtd7ml2tnwfwv24hkrfhs62yshvmouv; expires=Thu, 14-Sep-2023 04:53:51 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
-                    ], 
+                    ],
                     "x-xss-protection": [
                         "1; mode=block"
-                    ], 
-                    "content-type": [
-                        "application/json"
-                    ], 
+                    ],
+                    "transfer-encoding": [
+                        "chunked"
+                    ],
+                    "date": [
+                        "Sun, 03 Mar 2019 00:12:38 GMT"
+                    ],
+                    "x-frame-options": [
+                        "Deny"
+                    ],
                     "cache-control": [
                         "no-store"
-                    ], 
-                    "date": [
-                        "Sat, 15 Sep 2018 04:53:51 GMT"
+                    ],
+                    "content-type": [
+                        "application/json"
+                    ],
+                    "x-content-type-options": [
+                        "nosniff"
+                    ],
+                    "server": [
+                        "Apache/2.4.10 (Debian)"
+                    ],
+                    "set-cookie": [
+                        "phsid=A%2Fgqyrj3op7rar26t6crqlt6rpdsxcefnrofqkw5rt; expires=Fri, 01-Mar-2024 00:12:38 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
+                    ],
+                    "strict-transport-security": [
+                        "max-age=0; includeSubdomains; preload"
                     ]
-                }, 
-                "body": {
-                    "string": "{\"result\":[{\"id\":\"4597\",\"phid\":\"PHID-DREV-as7flhipq636gqvnyrsf\",\"title\":\"create beta for phabricator test\",\"uri\":\"https:\\/\\/phab.mercurial-scm.org\\/D4597\",\"dateCreated\":\"1536987231\",\"dateModified\":\"1536987231\",\"authorPHID\":\"PHID-USER-cgcdlc6c3gpxapbmkwa2\",\"status\":\"0\",\"statusName\":\"Needs Review\",\"properties\":[],\"branch\":null,\"summary\":\" \",\"testPlan\":\"\",\"lineCount\":\"1\",\"activeDiffPHID\":\"PHID-DIFF-sitmath22fwgsfsbdmne\",\"diffs\":[\"11074\"],\"commits\":[],\"reviewers\":{\"PHID-PROJ-3dvcxzznrjru2xmmses3\":\"PHID-PROJ-3dvcxzznrjru2xmmses3\"},\"ccs\":[\"PHID-USER-q42dn7cc3donqriafhjx\"],\"hashes\":[],\"auxiliary\":{\"phabricator:projects\":[],\"phabricator:depends-on\":[\"PHID-DREV-bntcdwe74cw3vwkzt6nq\"]},\"repositoryPHID\":\"PHID-REPO-bvunnehri4u2isyr7bc3\",\"sourcePath\":null},{\"id\":\"4596\",\"phid\":\"PHID-DREV-bntcdwe74cw3vwkzt6nq\",\"title\":\"create alpha for phabricator test\",\"uri\":\"https:\\/\\/phab.mercurial-scm.org\\/D4596\",\"dateCreated\":\"1536986862\",\"dateModified\":\"1536987231\",\"authorPHID\":\"PHID-USER-cgcdlc6c3gpxapbmkwa2\",\"status\":\"0\",\"statusName\":\"Needs Review\",\"properties\":[],\"branch\":null,\"summary\":\"\",\"testPlan\":\"\",\"lineCount\":\"2\",\"activeDiffPHID\":\"PHID-DIFF-vwre7kpjdq52wbt56ftl\",\"diffs\":[\"11073\",\"11072\"],\"commits\":[],\"reviewers\":{\"PHID-PROJ-3dvcxzznrjru2xmmses3\":\"PHID-PROJ-3dvcxzznrjru2xmmses3\"},\"ccs\":[\"PHID-USER-q42dn7cc3donqriafhjx\"],\"hashes\":[],\"auxiliary\":{\"phabricator:projects\":[],\"phabricator:depends-on\":[]},\"repositoryPHID\":\"PHID-REPO-bvunnehri4u2isyr7bc3\",\"sourcePath\":null}],\"error_code\":null,\"error_info\":null}"
                 }
             }
-        }, 
+        },
         {
             "request": {
-                "body": "diff_id=11074&api.token=cli-hahayouwish&data=%7B%22parent%22%3A+%22f70265671c65ab4b5416e611a6bd61887c013122%22%2C+%22node%22%3A+%22c2b605ada280b38c38031b5d31622869c72b0d8d%22%2C+%22user%22%3A+%22test%22%2C+%22date%22%3A+%220+0%22%7D&name=hg%3Ameta", 
-                "uri": "https://phab.mercurial-scm.org//api/differential.setdiffproperty", 
+                "method": "POST",
+                "body": "api.token=cli-hahayouwish&ids%5B0%5D=6054&ids%5B1%5D=6055",
+                "uri": "https://phab.mercurial-scm.org//api/differential.query",
                 "headers": {
-                    "content-length": [
-                        "264"
-                    ], 
-                    "host": [
-                        "phab.mercurial-scm.org"
-                    ], 
                     "content-type": [
                         "application/x-www-form-urlencoded"
-                    ], 
+                    ],
                     "accept": [
                         "application/mercurial-0.1"
-                    ], 
+                    ],
                     "user-agent": [
-                        "mercurial/proto-1.0 (Mercurial 4.7.1+867-34bcd3af7109+20180915)"
+                        "mercurial/proto-1.0 (Mercurial 4.9+477-7c86ec0ca5c5+20190303)"
+                    ],
+                    "host": [
+                        "phab.mercurial-scm.org"
+                    ],
+                    "content-length": [
+                        "74"
                     ]
-                }, 
-                "method": "POST"
-            }, 
+                }
+            },
             "response": {
                 "status": {
-                    "code": 200, 
+                    "code": 200,
                     "message": "OK"
-                }, 
+                },
+                "body": {
+                    "string": "{\"result\":[{\"id\":\"6055\",\"phid\":\"PHID-DREV-k2hin2iytzuvu3j5icm3\",\"title\":\"create beta for phabricator test\",\"uri\":\"https:\\/\\/phab.mercurial-scm.org\\/D6055\",\"dateCreated\":\"1551571958\",\"dateModified\":\"1551571958\",\"authorPHID\":\"PHID-USER-5iy6mkoveguhm2zthvww\",\"status\":\"0\",\"statusName\":\"Needs Review\",\"properties\":[],\"branch\":null,\"summary\":\" \",\"testPlan\":\"\",\"lineCount\":\"1\",\"activeDiffPHID\":\"PHID-DIFF-pofynzhmmqm2czm33teg\",\"diffs\":[\"14305\"],\"commits\":[],\"reviewers\":{\"PHID-PROJ-3dvcxzznrjru2xmmses3\":\"PHID-PROJ-3dvcxzznrjru2xmmses3\"},\"ccs\":[\"PHID-USER-q42dn7cc3donqriafhjx\"],\"hashes\":[],\"auxiliary\":{\"phabricator:projects\":[],\"phabricator:depends-on\":[\"PHID-DREV-6pczsbtdpqjc2nskmxwy\"]},\"repositoryPHID\":\"PHID-REPO-bvunnehri4u2isyr7bc3\",\"sourcePath\":null},{\"id\":\"6054\",\"phid\":\"PHID-DREV-6pczsbtdpqjc2nskmxwy\",\"title\":\"create alpha for phabricator test \\u20ac\",\"uri\":\"https:\\/\\/phab.mercurial-scm.org\\/D6054\",\"dateCreated\":\"1551571947\",\"dateModified\":\"1551571958\",\"authorPHID\":\"PHID-USER-5iy6mkoveguhm2zthvww\",\"status\":\"0\",\"statusName\":\"Needs Review\",\"properties\":[],\"branch\":null,\"summary\":\"\",\"testPlan\":\"\",\"lineCount\":\"2\",\"activeDiffPHID\":\"PHID-DIFF-3wv2fwmzp27uamb66xxg\",\"diffs\":[\"14304\",\"14303\"],\"commits\":[],\"reviewers\":{\"PHID-PROJ-3dvcxzznrjru2xmmses3\":\"PHID-PROJ-3dvcxzznrjru2xmmses3\"},\"ccs\":[\"PHID-USER-q42dn7cc3donqriafhjx\"],\"hashes\":[],\"auxiliary\":{\"phabricator:projects\":[],\"phabricator:depends-on\":[]},\"repositoryPHID\":\"PHID-REPO-bvunnehri4u2isyr7bc3\",\"sourcePath\":null}],\"error_code\":null,\"error_info\":null}"
+                },
                 "headers": {
-                    "server": [
-                        "Apache/2.4.10 (Debian)"
-                    ], 
-                    "strict-transport-security": [
-                        "max-age=0; includeSubdomains; preload"
-                    ], 
+                    "expires": [
+                        "Sat, 01 Jan 2000 00:00:00 GMT"
+                    ],
+                    "x-xss-protection": [
+                        "1; mode=block"
+                    ],
+                    "transfer-encoding": [
+                        "chunked"
+                    ],
+                    "date": [
+                        "Sun, 03 Mar 2019 00:12:39 GMT"
+                    ],
                     "x-frame-options": [
                         "Deny"
-                    ], 
+                    ],
+                    "cache-control": [
+                        "no-store"
+                    ],
+                    "content-type": [
+                        "application/json"
+                    ],
                     "x-content-type-options": [
                         "nosniff"
-                    ], 
-                    "expires": [
-                        "Sat, 01 Jan 2000 00:00:00 GMT"
-                    ], 
+                    ],
+                    "server": [
+                        "Apache/2.4.10 (Debian)"
+                    ],
                     "set-cookie": [
-                        "phsid=A%2Fvwsd2gtkeg64gticvthsxnpufne42t4eqityra25; expires=Thu, 14-Sep-2023 04:53:52 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
-                    ], 
-                    "x-xss-protection": [
-                        "1; mode=block"
-                    ], 
-                    "content-type": [
-                        "application/json"
-                    ], 
-                    "cache-control": [
-                        "no-store"
-                    ], 
-                    "date": [
-                        "Sat, 15 Sep 2018 04:53:52 GMT"
+                        "phsid=A%2F5wxg6sdf2mby5iljd5e5qpgoex6uefo5pgltav7k; expires=Fri, 01-Mar-2024 00:12:39 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
+                    ],
+                    "strict-transport-security": [
+                        "max-age=0; includeSubdomains; preload"
                     ]
-                }, 
-                "body": {
-                    "string": "{\"result\":null,\"error_code\":null,\"error_info\":null}"
                 }
             }
-        }, 
+        },
         {
             "request": {
-                "body": "diff_id=11074&api.token=cli-hahayouwish&data=%7B%22c2b605ada280b38c38031b5d31622869c72b0d8d%22%3A+%7B%22time%22%3A+0.0%2C+%22authorEmail%22%3A+%22test%22%2C+%22author%22%3A+%22test%22%7D%7D&name=local%3Acommits", 
-                "uri": "https://phab.mercurial-scm.org//api/differential.setdiffproperty", 
+                "method": "POST",
+                "body": "diff_id=14305&data=%7B%22user%22%3A+%22test%22%2C+%22parent%22%3A+%22939d862f03181a366fea64a540baf0bb33f85d92%22%2C+%22node%22%3A+%229c64e1fc33e1b9a70eb60643fe96a4d5badad9dc%22%2C+%22date%22%3A+%220+0%22%7D&api.token=cli-hahayouwish&name=hg%3Ameta",
+                "uri": "https://phab.mercurial-scm.org//api/differential.setdiffproperty",
                 "headers": {
-                    "content-length": [
-                        "227"
-                    ], 
-                    "host": [
-                        "phab.mercurial-scm.org"
-                    ], 
                     "content-type": [
                         "application/x-www-form-urlencoded"
-                    ], 
+                    ],
                     "accept": [
                         "application/mercurial-0.1"
-                    ], 
+                    ],
                     "user-agent": [
-                        "mercurial/proto-1.0 (Mercurial 4.7.1+867-34bcd3af7109+20180915)"
+                        "mercurial/proto-1.0 (Mercurial 4.9+477-7c86ec0ca5c5+20190303)"
+                    ],
+                    "host": [
+                        "phab.mercurial-scm.org"
+                    ],
+                    "content-length": [
+                        "264"
                     ]
-                }, 
-                "method": "POST"
-            }, 
+                }
+            },
             "response": {
                 "status": {
-                    "code": 200, 
+                    "code": 200,
                     "message": "OK"
-                }, 
+                },
+                "body": {
+                    "string": "{\"result\":null,\"error_code\":null,\"error_info\":null}"
+                },
                 "headers": {
+                    "expires": [
+                        "Sat, 01 Jan 2000 00:00:00 GMT"
+                    ],
+                    "x-xss-protection": [
+                        "1; mode=block"
+                    ],
+                    "transfer-encoding": [
+                        "chunked"
+                    ],
+                    "date": [
+                        "Sun, 03 Mar 2019 00:12:40 GMT"
+                    ],
+                    "x-frame-options": [
+                        "Deny"
+                    ],
+                    "cache-control": [
+                        "no-store"
+                    ],
+                    "content-type": [
+                        "application/json"
+                    ],
+                    "x-content-type-options": [
+                        "nosniff"
+                    ],
                     "server": [
                         "Apache/2.4.10 (Debian)"
-                    ], 
+                    ],
+                    "set-cookie": [
+                        "phsid=A%2F4c7iamnsn57y6qpccmbesf4ooflmkqvt4m6udawl; expires=Fri, 01-Mar-2024 00:12:40 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
+                    ],
                     "strict-transport-security": [
                         "max-age=0; includeSubdomains; preload"
-                    ], 
-                    "x-frame-options": [
-                        "Deny"
-                    ], 
-                    "x-content-type-options": [
-                        "nosniff"
-                    ], 
+                    ]
+                }
+            }
+        },
+        {
+            "request": {
+                "method": "POST",
+                "body": "diff_id=14305&data=%7B%229c64e1fc33e1b9a70eb60643fe96a4d5badad9dc%22%3A+%7B%22author%22%3A+%22test%22%2C+%22authorEmail%22%3A+%22test%22%2C+%22time%22%3A+0.0%7D%7D&api.token=cli-hahayouwish&name=local%3Acommits",
+                "uri": "https://phab.mercurial-scm.org//api/differential.setdiffproperty",
+                "headers": {
+                    "content-type": [
+                        "application/x-www-form-urlencoded"
+                    ],
+                    "accept": [
+                        "application/mercurial-0.1"
+                    ],
+                    "user-agent": [
+                        "mercurial/proto-1.0 (Mercurial 4.9+477-7c86ec0ca5c5+20190303)"
+                    ],
+                    "host": [
+                        "phab.mercurial-scm.org"
+                    ],
+                    "content-length": [
+                        "227"
+                    ]
+                }
+            },
+            "response": {
+                "status": {
+                    "code": 200,
+                    "message": "OK"
+                },
+                "body": {
+                    "string": "{\"result\":null,\"error_code\":null,\"error_info\":null}"
+                },
+                "headers": {
                     "expires": [
                         "Sat, 01 Jan 2000 00:00:00 GMT"
-                    ], 
-                    "set-cookie": [
-                        "phsid=A%2Fflxjbmx24qcq7qhggolo6b7iue7utwp7kyoazduk; expires=Thu, 14-Sep-2023 04:53:52 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
-                    ], 
+                    ],
                     "x-xss-protection": [
                         "1; mode=block"
-                    ], 
+                    ],
+                    "transfer-encoding": [
+                        "chunked"
+                    ],
+                    "date": [
+                        "Sun, 03 Mar 2019 00:12:40 GMT"
+                    ],
+                    "x-frame-options": [
+                        "Deny"
+                    ],
+                    "cache-control": [
+                        "no-store"
+                    ],
                     "content-type": [
                         "application/json"
-                    ], 
-                    "cache-control": [
-                        "no-store"
-                    ], 
-                    "date": [
-                        "Sat, 15 Sep 2018 04:53:52 GMT"
+                    ],
+                    "x-content-type-options": [
+                        "nosniff"
+                    ],
+                    "server": [
+                        "Apache/2.4.10 (Debian)"
+                    ],
+                    "set-cookie": [
+                        "phsid=A%2Ftdudqohojcq4hyc7gl4kthzkhuq3nmcxgnunpbjm; expires=Fri, 01-Mar-2024 00:12:40 GMT; Max-Age=157680000; path=/; domain=phab.mercurial-scm.org; secure; httponly"
+                    ],
+                    "strict-transport-security": [
+                        "max-age=0; includeSubdomains; preload"
                     ]
-                }, 
-                "body": {
-                    "string": "{\"result\":null,\"error_code\":null,\"error_info\":null}"
                 }
             }
         }
-    ]
+    ],
+    "version": 1
 }
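
The recorded exchanges above follow a vcr-style cassette layout: a list of
request/response interactions plus the ``"version": 1`` marker that this
change adds. As a hedged sketch (the top-level ``interactions`` key is
assumed from that layout and is not visible in this hunk), such a cassette
can be inspected with nothing but the stdlib::

   import json

   def summarize(cassette_path):
       # List the HTTP requests recorded in one test cassette.
       with open(cassette_path) as f:
           data = json.load(f)
       assert data.get('version') == 1            # marker added above
       for item in data.get('interactions', []):  # assumed key name
           req, resp = item['request'], item['response']
           print(req['method'], req['uri'], '->', resp['status']['code'])
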
--- a/tests/run-tests.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/run-tests.py	Wed Apr 17 13:41:18 2019 -0400
@@ -290,7 +290,7 @@
 defaults = {
     'jobs': ('HGTEST_JOBS', multiprocessing.cpu_count()),
     'timeout': ('HGTEST_TIMEOUT', 180),
-    'slowtimeout': ('HGTEST_SLOWTIMEOUT', 500),
+    'slowtimeout': ('HGTEST_SLOWTIMEOUT', 1500),
     'port': ('HGTEST_PORT', 20059),
     'shell': ('HGTEST_SHELL', 'sh'),
 }
@@ -634,7 +634,7 @@
 # list in group 2, and the preceding line output in group 1:
 #
 #   output..output (feature !)\n
-optline = re.compile(b'(.*) \((.+?) !\)\n$')
+optline = re.compile(br'(.*) \((.+?) !\)\n$')
 
 def cdatasafe(data):
     """Make a string safe to include in a CDATA block.
@@ -929,8 +929,8 @@
             self.fail('no result code from test')
         elif out != self._refout:
             # Diff generation may rely on written .err file.
-            if (ret != 0 or out != self._refout) and not self._skipped \
-                and not self._debug:
+            if ((ret != 0 or out != self._refout) and not self._skipped
+                and not self._debug):
                 with open(self.errpath, 'wb') as f:
                     for line in out:
                         f.write(line)
@@ -978,8 +978,8 @@
             # files are deleted
             shutil.rmtree(self._chgsockdir, True)
 
-        if (self._ret != 0 or self._out != self._refout) and not self._skipped \
-            and not self._debug and self._out:
+        if ((self._ret != 0 or self._out != self._refout) and not self._skipped
+            and not self._debug and self._out):
             with open(self.errpath, 'wb') as f:
                 for line in self._out:
                     f.write(line)
@@ -1105,8 +1105,8 @@
         if 'HGTESTCATAPULTSERVERPIPE' not in env:
             # If we don't have HGTESTCATAPULTSERVERPIPE explicitly set, pull the
             # non-test one in as a default, otherwise set to devnull
-            env['HGTESTCATAPULTSERVERPIPE'] = \
-                env.get('HGCATAPULTSERVERPIPE', os.devnull)
+            env['HGTESTCATAPULTSERVERPIPE'] = env.get(
+                'HGCATAPULTSERVERPIPE', os.devnull)
 
         extraextensions = []
         for opt in self._extraconfigopts:
@@ -1225,7 +1225,6 @@
             killdaemons(env['DAEMON_PIDS'])
             return ret
 
-        output = b''
         proc.tochild.close()
 
         try:
@@ -1354,6 +1353,9 @@
 
     def _hghave(self, reqs):
         allreqs = b' '.join(reqs)
+
+        self._detectslow(reqs)
+
         if allreqs in self._have:
             return self._have.get(allreqs)
 
@@ -1375,12 +1377,14 @@
             self._have[allreqs] = (False, stdout)
             return False, stdout
 
+        self._have[allreqs] = (True, None)
+        return True, None
+
+    def _detectslow(self, reqs):
+        """update the timeout of slow test when appropriate"""
         if b'slow' in reqs:
             self._timeout = self._slowtimeout
 
-        self._have[allreqs] = (True, None)
-        return True, None
-
     def _iftest(self, args):
         # implements "#if"
         reqs = []
@@ -1393,6 +1397,7 @@
                     return False
             else:
                 reqs.append(arg)
+        self._detectslow(reqs)
         return self._hghave(reqs)[0]
 
     def _parsetest(self, lines):
@@ -1409,8 +1414,8 @@
         session = str(uuid.uuid4())
         if PYTHON3:
             session = session.encode('ascii')
-        hgcatapult = os.getenv('HGTESTCATAPULTSERVERPIPE') or \
-            os.getenv('HGCATAPULTSERVERPIPE')
+        hgcatapult = (os.getenv('HGTESTCATAPULTSERVERPIPE') or
+                      os.getenv('HGCATAPULTSERVERPIPE'))
         def toggletrace(cmd=None):
             if not hgcatapult or hgcatapult == os.devnull:
                 return
@@ -1903,8 +1908,9 @@
                 pass
             elif self._options.view:
                 v = self._options.view
-                os.system(r"%s %s %s" %
-                          (v, _strpath(test.refpath), _strpath(test.errpath)))
+                subprocess.call(r'"%s" "%s" "%s"' %
+                                (v, _strpath(test.refpath),
+                                 _strpath(test.errpath)), shell=True)
             else:
                 servefail, lines = getdiff(expected, got,
                                            test.refpath, test.errpath)
@@ -2259,14 +2265,17 @@
             self.stream.writeln('')
 
             if not self._runner.options.noskips:
-                for test, msg in self._result.skipped:
+                for test, msg in sorted(self._result.skipped,
+                                        key=lambda s: s[0].name):
                     formatted = 'Skipped %s: %s\n' % (test.name, msg)
                     msg = highlightmsg(formatted, self._result.color)
                     self.stream.write(msg)
-            for test, msg in self._result.failures:
+            for test, msg in sorted(self._result.failures,
+                                    key=lambda f: f[0].name):
                 formatted = 'Failed %s: %s\n' % (test.name, msg)
                 self.stream.write(highlightmsg(formatted, self._result.color))
-            for test, msg in self._result.errors:
+            for test, msg in sorted(self._result.errors,
+                                    key=lambda e: e[0].name):
                 self.stream.writeln('Errored %s: %s' % (test.name, msg))
 
             if self._runner.options.xunit:
@@ -2376,12 +2385,12 @@
         timesd = dict((t[0], t[3]) for t in result.times)
         doc = minidom.Document()
         s = doc.createElement('testsuite')
-        s.setAttribute('name', 'run-tests')
-        s.setAttribute('tests', str(result.testsRun))
         s.setAttribute('errors', "0") # TODO
         s.setAttribute('failures', str(len(result.failures)))
+        s.setAttribute('name', 'run-tests')
         s.setAttribute('skipped', str(len(result.skipped) +
                                       len(result.ignored)))
+        s.setAttribute('tests', str(result.testsRun))
         doc.appendChild(s)
         for tc in result.successes:
             t = doc.createElement('testcase')
@@ -2770,8 +2779,8 @@
         """
         if not args:
             if self.options.changed:
-                proc = Popen4('hg st --rev "%s" -man0 .' %
-                              self.options.changed, None, 0)
+                proc = Popen4(b'hg st --rev "%s" -man0 .' %
+                              _bytespath(self.options.changed), None, 0)
                 stdout, stderr = proc.communicate()
                 args = stdout.strip(b'\0').split(b'\0')
             else:
@@ -3110,8 +3119,8 @@
             # installation layout put it in bin/ directly. Fix it
             with open(hgbat, 'rb') as f:
                 data = f.read()
-            if b'"%~dp0..\python" "%~dp0hg" %*' in data:
-                data = data.replace(b'"%~dp0..\python" "%~dp0hg" %*',
+            if br'"%~dp0..\python" "%~dp0hg" %*' in data:
+                data = data.replace(br'"%~dp0..\python" "%~dp0hg" %*',
                                     b'"%~dp0python" "%~dp0hg" %*')
                 with open(hgbat, 'wb') as f:
                     f.write(data)
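
Of the ``run-tests.py`` fixes above, the ``optline`` one is purely about the
``br`` prefix; the pattern itself is unchanged. A quick stdlib check of what
it matches, namely feature-conditional output lines of the form
``output (feature !)`` as used throughout these tests::

   import re

   optline = re.compile(br'(.*) \((.+?) !\)\n$')

   m = optline.match(b'saved backup bundle to * (glob) (obsstore-on !)\n')
   assert m.group(1) == b'saved backup bundle to * (glob)'
   assert m.group(2) == b'obsstore-on'
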
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/tests/svnurlof.py	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,18 @@
+from __future__ import absolute_import, print_function
+import sys
+
+from mercurial import (
+    pycompat,
+    util,
+)
+
+def main(argv):
+    enc = util.urlreq.quote(pycompat.sysbytes(argv[1]))
+    if pycompat.iswindows:
+        fmt = 'file:///%s'
+    else:
+        fmt = 'file://%s'
+    print(fmt % pycompat.sysstr(enc))
+
+if __name__ == '__main__':
+    main(sys.argv)
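
The new helper boils down to percent-encoding a path and choosing a
platform-dependent ``file:`` prefix. A self-contained Python 3 sketch of the
same logic (stdlib only, without the ``mercurial`` modules the test helper
imports)::

   from urllib.parse import quote

   def svn_url_of(path, iswindows=False):
       enc = quote(path)              # '/' stays as-is; ' ' becomes %20
       return ('file:///%s' if iswindows else 'file://%s') % enc

   print(svn_url_of('/tmp/svn repo'))  # file:///tmp/svn%20repo
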
--- a/tests/svnxml.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/svnxml.py	Wed Apr 17 13:41:18 2019 -0400
@@ -20,10 +20,10 @@
     if paths:
         paths = paths[0]
         for p in paths.getElementsByTagName('path'):
-            action = p.getAttribute('action')
-            path = xmltext(p)
-            frompath = p.getAttribute('copyfrom-path')
-            fromrev = p.getAttribute('copyfrom-rev')
+            action = p.getAttribute('action').encode('utf-8')
+            path = xmltext(p).encode('utf-8')
+            frompath = p.getAttribute('copyfrom-path').encode('utf-8')
+            fromrev = p.getAttribute('copyfrom-rev').encode('utf-8')
             e['paths'].append((path, action, frompath, fromrev))
     return e
 
@@ -43,11 +43,11 @@
         for k in ('revision', 'author', 'msg'):
             fp.write(('%s: %s\n' % (k, e[k])).encode('utf-8'))
         for path, action, fpath, frev in sorted(e['paths']):
-            frominfo = ''
+            frominfo = b''
             if frev:
-                frominfo = ' (from %s@%s)' % (fpath, frev)
-            p = ' %s %s%s\n' % (action, path, frominfo)
-            fp.write(p.encode('utf-8'))
+                frominfo = b' (from %s@%s)' % (fpath, frev)
+            p = b' %s %s%s\n' % (action, path, frominfo)
+            fp.write(p)
 
 if __name__ == '__main__':
     data = sys.stdin.read()
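
This rewrite leans on printf-style formatting for bytes, which Python 3 only
regained in 3.5 (PEP 461); on 3.0 through 3.4 the ``b'...' %`` lines above
would raise ``TypeError``. For example::

   line = b' %s %s%s\n' % (b'A', b'trunk/b', b' (from trunk/a@3)')
   assert line == b' A trunk/b (from trunk/a@3)\n'
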
--- a/tests/test-absorb-strip.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-absorb-strip.t	Wed Apr 17 13:41:18 2019 -0400
@@ -23,6 +23,7 @@
   $ echo 1 >> B
   $ echo 2 >> D
   $ hg absorb -a
+  warning: orphaned descendants detected, not stripping 112478962961, 26805aba1e60
   saved backup bundle to * (glob)
   2 of 2 chunk(s) applied
 
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/tests/test-absorb-unfinished.t	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,30 @@
+  $ cat >> $HGRCPATH << EOF
+  > [extensions]
+  > absorb=
+  > EOF
+
+Abort absorb if there is an unfinished operation.
+
+  $ hg init abortunresolved
+  $ cd abortunresolved
+
+  $ echo "foo1" > foo.whole
+  $ hg commit -Aqm "foo 1"
+
+  $ hg update null
+  0 files updated, 0 files merged, 1 files removed, 0 files unresolved
+  $ echo "foo2" > foo.whole
+  $ hg commit -Aqm "foo 2"
+
+  $ hg --config extensions.rebase= rebase -r 1 -d 0
+  rebasing 1:c3b6dc0e177a "foo 2" (tip)
+  merging foo.whole
+  warning: conflicts while merging foo.whole! (edit, then use 'hg resolve --mark')
+  unresolved conflicts (see hg resolve, then hg rebase --continue)
+  [1]
+
+  $ hg --config extensions.rebase= absorb
+  abort: rebase in progress
+  (use 'hg rebase --continue' or 'hg rebase --abort')
+  [255]
+
--- a/tests/test-acl.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-acl.t	Wed Apr 17 13:41:18 2019 -0400
@@ -38,8 +38,8 @@
   > def fakegetusers(ui, group):
   >     try:
   >         return acl._getusersorig(ui, group)
-  >     except:
-  >         return ["fred", "betty"]
+  >     except BaseException:
+  >         return [b"fred", b"betty"]
   > acl._getusersorig = acl._getusers
   > acl._getusers = fakegetusers
   > EOF
@@ -1125,7 +1125,7 @@
   bundle2-input-bundle: 4 parts total
   transaction abort!
   rollback completed
-  abort: $ENOENT$: ../acl.config
+  abort: $ENOENT$: '../acl.config'
   no rollback information available
   0:6675d58eff77
   
--- a/tests/test-ancestor.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-ancestor.py	Wed Apr 17 13:41:18 2019 -0400
@@ -123,7 +123,6 @@
             # reference slow algorithm
             naiveinc = naiveincrementalmissingancestors(ancs, bases)
             seq = []
-            revs = []
             for _ in xrange(inccount):
                 if rng.random() < 0.2:
                     newbases = samplerevs(graphnodes)
--- a/tests/test-annotate.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-annotate.t	Wed Apr 17 13:41:18 2019 -0400
@@ -438,15 +438,15 @@
   > def reposetup(ui, repo):
   >     class legacyrepo(repo.__class__):
   >         def _filecommit(self, fctx, manifest1, manifest2,
-  >                         linkrev, tr, changelist):
+  >                         linkrev, tr, changelist, includecopymeta):
   >             fname = fctx.path()
   >             text = fctx.data()
   >             flog = self.file(fname)
   >             fparent1 = manifest1.get(fname, node.nullid)
   >             fparent2 = manifest2.get(fname, node.nullid)
   >             meta = {}
-  >             copy = fctx.renamed()
-  >             if copy and copy[0] != fname:
+  >             copy = fctx.copysource()
+  >             if copy and copy != fname:
   >                 raise error.Abort('copying is not supported')
   >             if fparent2 != node.nullid:
   >                 changelist.append(fname)
@@ -589,7 +589,7 @@
 
   $ hg annotate -ncr "wdir()" baz
   abort: $TESTTMP\repo\baz: $ENOENT$ (windows !)
-  abort: $ENOENT$: $TESTTMP/repo/baz (no-windows !)
+  abort: $ENOENT$: '$TESTTMP/repo/baz' (no-windows !)
   [255]
 
 annotate removed file
@@ -598,7 +598,7 @@
 
   $ hg annotate -ncr "wdir()" baz
   abort: $TESTTMP\repo\baz: $ENOENT$ (windows !)
-  abort: $ENOENT$: $TESTTMP/repo/baz (no-windows !)
+  abort: $ENOENT$: '$TESTTMP/repo/baz' (no-windows !)
   [255]
 
   $ hg revert --all --no-backup --quiet
@@ -809,6 +809,15 @@
   |\
   ~ ~
 
+An integer as a line range is parsed as '1:1'
+
+  $ hg log -r 'followlines(baz, 1)'
+  changeset:   22:2174d0bf352a
+  user:        test
+  date:        Thu Jan 01 00:00:00 1970 +0000
+  summary:     added two lines with 0
+  
+
 check error cases
   $ hg up 24 --quiet
   $ hg log -r 'followlines()'
@@ -817,8 +826,8 @@
   $ hg log -r 'followlines(baz)'
   hg: parse error: followlines requires a line range
   [255]
-  $ hg log -r 'followlines(baz, 1)'
-  hg: parse error: followlines expects a line range
+  $ hg log -r 'followlines(baz, x)'
+  hg: parse error: followlines expects a line number or a range
   [255]
   $ hg log -r 'followlines(baz, 1:2, startrev=desc("b"))'
   hg: parse error: followlines expects exactly one revision
--- a/tests/test-arbitraryfilectx.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-arbitraryfilectx.t	Wed Apr 17 13:41:18 2019 -0400
@@ -72,30 +72,30 @@
 These files are different and should return True (different):
 (Note that filecmp.cmp's return semantics are inverted from ours, so we invert
 for simplicity):
-  $ hg eval "context.arbitraryfilectx('A', repo).cmp(repo[None]['real_A'])"
+  $ hg eval "context.arbitraryfilectx(b'A', repo).cmp(repo[None][b'real_A'])"
   True (no-eol)
-  $ hg eval "not filecmp.cmp('A', 'real_A')"
+  $ hg eval "not filecmp.cmp(b'A', b'real_A')"
   True (no-eol)
 
 These files are identical and should return False (same):
-  $ hg eval "context.arbitraryfilectx('A', repo).cmp(repo[None]['A'])"
+  $ hg eval "context.arbitraryfilectx(b'A', repo).cmp(repo[None][b'A'])"
   False (no-eol)
-  $ hg eval "context.arbitraryfilectx('A', repo).cmp(repo[None]['B'])"
+  $ hg eval "context.arbitraryfilectx(b'A', repo).cmp(repo[None][b'B'])"
   False (no-eol)
-  $ hg eval "not filecmp.cmp('A', 'B')"
+  $ hg eval "not filecmp.cmp(b'A', b'B')"
   False (no-eol)
 
 This comparison should also return False, since A and sym_A are substantially
 the same in the eyes of ``filectx.cmp``, which looks at data only.
-  $ hg eval "context.arbitraryfilectx('real_A', repo).cmp(repo[None]['sym_A'])"
+  $ hg eval "context.arbitraryfilectx(b'real_A', repo).cmp(repo[None][b'sym_A'])"
   False (no-eol)
 
 A naive use of filecmp on those two would wrongly return True, since it follows
 the symlink to "A", which has different contents.
 #if symlink
-  $ hg eval "not filecmp.cmp('real_A', 'sym_A')"
+  $ hg eval "not filecmp.cmp(b'real_A', b'sym_A')"
   True (no-eol)
 #else
-  $ hg eval "not filecmp.cmp('real_A', 'sym_A')"
+  $ hg eval "not filecmp.cmp(b'real_A', b'sym_A')"
   False (no-eol)
 #endif
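
To make the inverted semantics concrete: stdlib ``filecmp.cmp()`` answers
"are these files equal?" while Mercurial's ``filectx.cmp()`` answers "do they
differ?". A standalone check::

   import filecmp
   import os
   import tempfile

   with tempfile.TemporaryDirectory() as d:
       a, b = (os.path.join(d, n) for n in ('a', 'b'))
       for p in (a, b):
           with open(p, 'w') as f:
               f.write('same contents\n')
       equal = filecmp.cmp(a, b, shallow=False)  # True: contents match
       differs = not equal                       # what filectx.cmp reports
       assert equal and not differs
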
--- a/tests/test-archive.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-archive.t	Wed Apr 17 13:41:18 2019 -0400
@@ -187,7 +187,7 @@
   server: testing stub value
   transfer-encoding: chunked
   
-  body: size=(1377|1461), sha1=(677b14d3d048778d5eb5552c14a67e6192068650|be6d3983aa13dfe930361b2569291cdedd02b537) (re)
+  body: size=(1377|1461|1489), sha1=(677b14d3d048778d5eb5552c14a67e6192068650|be6d3983aa13dfe930361b2569291cdedd02b537|1897e496871aa89ad685a92b936f5fa0d008b9e8) (re)
   % tar.gz and tar.bz2 disallowed should both give 403
   403 Archive type not allowed: gz
   content-type: text/html; charset=ascii
@@ -274,7 +274,7 @@
   server: testing stub value
   transfer-encoding: chunked
   
-  body: size=(1377|1461), sha1=(677b14d3d048778d5eb5552c14a67e6192068650|be6d3983aa13dfe930361b2569291cdedd02b537) (re)
+  body: size=(1377|1461|1489), sha1=(677b14d3d048778d5eb5552c14a67e6192068650|be6d3983aa13dfe930361b2569291cdedd02b537|1897e496871aa89ad685a92b936f5fa0d008b9e8) (re)
   % tar.gz and tar.bz2 disallowed should both give 403
   403 Archive type not allowed: gz
   content-type: text/html; charset=ascii
--- a/tests/test-batching.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-batching.py	Wed Apr 17 13:41:18 2019 -0400
@@ -11,25 +11,28 @@
 
 from mercurial import (
     localrepo,
+    pycompat,
     wireprotov1peer,
+)
 
-)
+def bprint(*bs):
+    print(*[pycompat.sysstr(b) for b in bs])
 
 # equivalent of repo.repository
 class thing(object):
     def hello(self):
-        return "Ready."
+        return b"Ready."
 
 # equivalent of localrepo.localrepository
 class localthing(thing):
     def foo(self, one, two=None):
         if one:
-            return "%s and %s" % (one, two,)
-        return "Nope"
+            return b"%s and %s" % (one, two,)
+        return b"Nope"
     def bar(self, b, a):
-        return "%s und %s" % (b, a,)
+        return b"%s und %s" % (b, a,)
     def greet(self, name=None):
-        return "Hello, %s" % name
+        return b"Hello, %s" % name
 
     @contextlib.contextmanager
     def commandexecutor(self):
@@ -43,27 +46,27 @@
 def use(it):
 
     # Direct call to base method shared between client and server.
-    print(it.hello())
+    bprint(it.hello())
 
     # Direct calls to proxied methods. They cause individual roundtrips.
-    print(it.foo("Un", two="Deux"))
-    print(it.bar("Eins", "Zwei"))
+    bprint(it.foo(b"Un", two=b"Deux"))
+    bprint(it.bar(b"Eins", b"Zwei"))
 
     # Batched call to a couple of proxied methods.
 
     with it.commandexecutor() as e:
-        ffoo = e.callcommand('foo', {'one': 'One', 'two': 'Two'})
-        fbar = e.callcommand('bar', {'b': 'Eins', 'a': 'Zwei'})
-        fbar2 = e.callcommand('bar', {'b': 'Uno', 'a': 'Due'})
+        ffoo = e.callcommand(b'foo', {b'one': b'One', b'two': b'Two'})
+        fbar = e.callcommand(b'bar', {b'b': b'Eins', b'a': b'Zwei'})
+        fbar2 = e.callcommand(b'bar', {b'b': b'Uno', b'a': b'Due'})
 
-    print(ffoo.result())
-    print(fbar.result())
-    print(fbar2.result())
+    bprint(ffoo.result())
+    bprint(fbar.result())
+    bprint(fbar2.result())
 
 # local usage
 mylocal = localthing()
 print()
-print("== Local")
+bprint(b"== Local")
 use(mylocal)
 
 # demo remoting; mimics what wireproto and HTTP/SSH do
@@ -72,16 +75,16 @@
 
 def escapearg(plain):
     return (plain
-            .replace(':', '::')
-            .replace(',', ':,')
-            .replace(';', ':;')
-            .replace('=', ':='))
+            .replace(b':', b'::')
+            .replace(b',', b':,')
+            .replace(b';', b':;')
+            .replace(b'=', b':='))
 def unescapearg(escaped):
     return (escaped
-            .replace(':=', '=')
-            .replace(':;', ';')
-            .replace(':,', ',')
-            .replace('::', ':'))
+            .replace(b':=', b'=')
+            .replace(b':;', b';')
+            .replace(b':,', b',')
+            .replace(b'::', b':'))
 
 # server side
 
@@ -90,27 +93,28 @@
     def __init__(self, local):
         self.local = local
     def _call(self, name, args):
-        args = dict(arg.split('=', 1) for arg in args)
+        args = dict(arg.split(b'=', 1) for arg in args)
         return getattr(self, name)(**args)
     def perform(self, req):
-        print("REQ:", req)
-        name, args = req.split('?', 1)
-        args = args.split('&')
-        vals = dict(arg.split('=', 1) for arg in args)
-        res = getattr(self, name)(**vals)
-        print("  ->", res)
+        bprint(b"REQ:", req)
+        name, args = req.split(b'?', 1)
+        args = args.split(b'&')
+        vals = dict(arg.split(b'=', 1) for arg in args)
+        res = getattr(self, pycompat.sysstr(name))(**pycompat.strkwargs(vals))
+        bprint(b"  ->", res)
         return res
     def batch(self, cmds):
         res = []
-        for pair in cmds.split(';'):
-            name, args = pair.split(':', 1)
+        for pair in cmds.split(b';'):
+            name, args = pair.split(b':', 1)
             vals = {}
-            for a in args.split(','):
+            for a in args.split(b','):
                 if a:
-                    n, v = a.split('=')
+                    n, v = a.split(b'=')
                     vals[n] = unescapearg(v)
-            res.append(escapearg(getattr(self, name)(**vals)))
-        return ';'.join(res)
+            res.append(escapearg(getattr(self, pycompat.sysstr(name))(
+                **pycompat.strkwargs(vals))))
+        return b';'.join(res)
     def foo(self, one, two):
         return mangle(self.local.foo(unmangle(one), unmangle(two)))
     def bar(self, b, a):
@@ -124,25 +128,25 @@
 # equivalent of wireproto.encode/decodelist, that is, type-specific marshalling
 # here we just transform the strings a bit to check we're properly en-/decoding
 def mangle(s):
-    return ''.join(chr(ord(c) + 1) for c in s)
+    return b''.join(pycompat.bytechr(ord(c) + 1) for c in pycompat.bytestr(s))
 def unmangle(s):
-    return ''.join(chr(ord(c) - 1) for c in s)
+    return b''.join(pycompat.bytechr(ord(c) - 1) for c in pycompat.bytestr(s))
 
 # equivalent of wireproto.wirerepository and something like http's wire format
 class remotething(thing):
     def __init__(self, server):
         self.server = server
     def _submitone(self, name, args):
-        req = name + '?' + '&'.join(['%s=%s' % (n, v) for n, v in args])
+        req = name + b'?' + b'&'.join([b'%s=%s' % (n, v) for n, v in args])
         return self.server.perform(req)
     def _submitbatch(self, cmds):
         req = []
         for name, args in cmds:
-            args = ','.join(n + '=' + escapearg(v) for n, v in args)
-            req.append(name + ':' + args)
-        req = ';'.join(req)
-        res = self._submitone('batch', [('cmds', req,)])
-        for r in res.split(';'):
+            args = b','.join(n + b'=' + escapearg(v) for n, v in args)
+            req.append(name + b':' + args)
+        req = b';'.join(req)
+        res = self._submitone(b'batch', [(b'cmds', req,)])
+        for r in res.split(b';'):
             yield r
 
     @contextlib.contextmanager
@@ -155,7 +159,7 @@
 
     @wireprotov1peer.batchable
     def foo(self, one, two=None):
-        encargs = [('one', mangle(one),), ('two', mangle(two),)]
+        encargs = [(b'one', mangle(one),), (b'two', mangle(two),)]
         encresref = wireprotov1peer.future()
         yield encargs, encresref
         yield unmangle(encresref.value)
@@ -163,18 +167,18 @@
     @wireprotov1peer.batchable
     def bar(self, b, a):
         encresref = wireprotov1peer.future()
-        yield [('b', mangle(b),), ('a', mangle(a),)], encresref
+        yield [(b'b', mangle(b),), (b'a', mangle(a),)], encresref
         yield unmangle(encresref.value)
 
     # greet is coded directly. It therefore does not support batching. If it
     # does appear in a batch, the batch is split around greet, and the call to
     # greet is done in its own roundtrip.
     def greet(self, name=None):
-        return unmangle(self._submitone('greet', [('name', mangle(name),)]))
+        return unmangle(self._submitone(b'greet', [(b'name', mangle(name),)]))
 
 # demo remote usage
 
 myproxy = remotething(myserver)
 print()
-print("== Remote")
+bprint(b"== Remote")
 use(myproxy)
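
The escaping scheme this test exercises is simple to verify in isolation:
``:`` is the escape character (so it is doubled), and the separators ``,``,
``;`` and ``=`` each gain a ``:`` prefix, which makes any byte string
round-trip losslessly::

   def escapearg(plain):
       return (plain.replace(b':', b'::').replace(b',', b':,')
                    .replace(b';', b':;').replace(b'=', b':='))

   def unescapearg(escaped):
       return (escaped.replace(b':=', b'=').replace(b':;', b';')
                      .replace(b':,', b',').replace(b'::', b':'))

   payload = b'b=Uno,a=Due;extra::colon'
   assert escapearg(payload) == b'b:=Uno:,a:=Due:;extra::::colon'
   assert unescapearg(escapearg(payload)) == payload
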
--- a/tests/test-blackbox.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-blackbox.t	Wed Apr 17 13:41:18 2019 -0400
@@ -354,6 +354,35 @@
   warning: cannot write to blackbox.log: $TESTTMP/gone/.hg/blackbox.log: $ENOTDIR$ (windows !)
   $ cd ..
 
+blackbox should disable itself if track is empty
+
+  $ hg --config blackbox.track= init nothing_tracked
+  $ cd nothing_tracked
+  $ cat >> .hg/hgrc << EOF
+  > [blackbox]
+  > track =
+  > EOF
+  $ hg blackbox
+  $ cd $TESTTMP
+
+a '*' entry in blackbox.track is interpreted as "log everything"
+
+  $ hg --config blackbox.track='*' \
+  >    --config blackbox.logsource=True \
+  >    init track_star
+  $ cd track_star
+  $ cat >> .hg/hgrc << EOF
+  > [blackbox]
+  > logsource = True
+  > track = *
+  > EOF
+(only look for entries with specific logged sources, otherwise this test is
+pretty brittle)
+  $ hg blackbox | egrep '\[command(finish)?\]'
+  1970/01/01 00:00:00 bob @0000000000000000000000000000000000000000 (5000) [commandfinish]> --config *blackbox.track=* --config *blackbox.logsource=True* init track_star exited 0 after * seconds (glob)
+  1970/01/01 00:00:00 bob @0000000000000000000000000000000000000000 (5000) [command]> blackbox
+  $ cd $TESTTMP
+
 #if chg
 
 when using chg, blackbox.log should get rotated correctly
--- a/tests/test-bugzilla.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-bugzilla.t	Wed Apr 17 13:41:18 2019 -0400
@@ -3,7 +3,9 @@
   $ cat <<EOF > bzmock.py
   > from __future__ import absolute_import
   > from mercurial import extensions
+  > from mercurial import pycompat
   > from mercurial import registrar
+  > from mercurial.utils import stringutil
   > 
   > configtable = {}
   > configitem = registrar.configitem(configtable)
@@ -18,14 +20,17 @@
   >             super(bzmock, self).__init__(ui)
   >             self._logfile = ui.config(b'bugzilla', b'mocklog')
   >         def updatebug(self, bugid, newstate, text, committer):
-  >             with open(self._logfile, 'a') as f:
-  >                 f.write('update bugid=%r, newstate=%r, committer=%r\n'
-  >                         % (bugid, newstate, committer))
-  >                 f.write('----\n' + text + '\n----\n')
+  >             with open(pycompat.fsdecode(self._logfile), 'ab') as f:
+  >                 f.write(b'update bugid=%s, newstate=%s, committer=%s\n'
+  >                         % (stringutil.pprint(bugid),
+  >                            stringutil.pprint(newstate),
+  >                            stringutil.pprint(committer)))
+  >                 f.write(b'----\n' + text + b'\n----\n')
   >         def notify(self, bugs, committer):
-  >             with open(self._logfile, 'a') as f:
-  >                 f.write('notify bugs=%r, committer=%r\n'
-  >                         % (bugs, committer))
+  >             with open(pycompat.fsdecode(self._logfile), 'ab') as f:
+  >                 f.write(b'notify bugs=%s, committer=%s\n'
+  >                         % (stringutil.pprint(bugs),
+  >                            stringutil.pprint(committer)))
   >     bugzilla.bugzilla._versions[b'mock'] = bzmock
   > EOF
 
--- a/tests/test-bundle-r.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-bundle-r.t	Wed Apr 17 13:41:18 2019 -0400
@@ -317,8 +317,8 @@
   $ cd ../test
   $ hg merge 7
   note: possible conflict - afile was renamed multiple times to:
+   adifferentfile
    anotherfile
-   adifferentfile
   2 files updated, 0 files merged, 0 files removed, 0 files unresolved
   (branch merge, don't forget to commit)
   $ hg ci -m merge
--- a/tests/test-bundle.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-bundle.t	Wed Apr 17 13:41:18 2019 -0400
@@ -218,10 +218,11 @@
 
   $ cat >> .hg/hgrc <<EOF
   > [hooks]
-  > changegroup = sh -c "printenv.py changegroup"
+  > changegroup = sh -c "printenv.py --line changegroup"
   > EOF
 
 doesn't work (yet?)
+NOTE: msys is mangling the URL below
 
 hg -R bundle://../full.hg verify
 
@@ -233,7 +234,18 @@
   adding file changes
   added 9 changesets with 7 changes to 4 files (+1 heads)
   new changesets f9ee2f85a263:aa35859c02ea (9 drafts)
-  changegroup hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=f9ee2f85a263049e9ae6d37a0e67e96194ffb735 HG_NODE_LAST=aa35859c02ea8bd48da5da68cd2740ac71afcbaf HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=bundle*../full.hg (glob)
+  changegroup hook: HG_HOOKNAME=changegroup
+  HG_HOOKTYPE=changegroup
+  HG_NODE=f9ee2f85a263049e9ae6d37a0e67e96194ffb735
+  HG_NODE_LAST=aa35859c02ea8bd48da5da68cd2740ac71afcbaf
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  bundle:../full.hg (no-msys !)
+  bundle;../full.hg (msys !)
+  HG_URL=bundle:../full.hg (no-msys !)
+  HG_URL=bundle;../full.hg (msys !)
+  
   (run 'hg heads' to see heads, 'hg merge' to merge)
 
 Rollback empty
@@ -257,7 +269,16 @@
   adding file changes
   added 9 changesets with 7 changes to 4 files (+1 heads)
   new changesets f9ee2f85a263:aa35859c02ea (9 drafts)
-  changegroup hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=f9ee2f85a263049e9ae6d37a0e67e96194ffb735 HG_NODE_LAST=aa35859c02ea8bd48da5da68cd2740ac71afcbaf HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=bundle:empty+full.hg
+  changegroup hook: HG_HOOKNAME=changegroup
+  HG_HOOKTYPE=changegroup
+  HG_NODE=f9ee2f85a263049e9ae6d37a0e67e96194ffb735
+  HG_NODE_LAST=aa35859c02ea8bd48da5da68cd2740ac71afcbaf
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  bundle:empty+full.hg
+  HG_URL=bundle:empty+full.hg
+  
   (run 'hg heads' to see heads, 'hg merge' to merge)
 
 #endif
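
The hook output above changed because ``printenv.py`` grew a ``--line`` mode
that emits one ``HG_*`` variable per line instead of one space-joined record.
A rough, hypothetical sketch of that behavior (the real ``tests/printenv.py``
may differ in details such as ordering and the handling of URL fields)::

   import os

   def printenv(name, line_mode=False):
       pairs = sorted((k, v) for k, v in os.environ.items()
                      if k.startswith('HG_'))
       sep = '\n' if line_mode else ' '
       print('%s hook: %s' % (name, sep.join('%s=%s' % kv for kv in pairs)))
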
--- a/tests/test-bundle2-format.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-bundle2-format.t	Wed Apr 17 13:41:18 2019 -0400
@@ -82,7 +82,8 @@
   >           (b'', b'genraise', False, b'includes a part that raise an exception during generation'),
   >           (b'', b'timeout', False, b'emulate a timeout during bundle generation'),
   >           (b'r', b'rev', [], b'includes those changeset in the bundle'),
-  >           (b'', b'compress', b'', b'compress the stream'),],
+  >           (b'', b'compress', b'', b'compress the stream'),
+  >          ],
   >          b'[OUTPUTFILE]')
   > def cmdbundle2(ui, repo, path=None, **opts):
   >     """write a bundle2 container on standard output"""
--- a/tests/test-bundle2-multiple-changegroups.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-bundle2-multiple-changegroups.t	Wed Apr 17 13:41:18 2019 -0400
@@ -66,9 +66,9 @@
   $ cd ../clone
   $ cat >> .hg/hgrc <<EOF
   > [hooks]
-  > pretxnchangegroup = sh -c "printenv.py pretxnchangegroup"
-  > changegroup = sh -c "printenv.py changegroup"
-  > incoming = sh -c "printenv.py incoming"
+  > pretxnchangegroup = sh -c "printenv.py --line pretxnchangegroup"
+  > changegroup = sh -c "printenv.py --line changegroup"
+  > incoming = sh -c "printenv.py --line incoming"
   > EOF
 
 Pull the new commits in the clone
@@ -81,18 +81,75 @@
   adding manifests
   adding file changes
   added 1 changesets with 1 changes to 1 files
-  pretxnchangegroup hook: HG_HOOKNAME=pretxnchangegroup HG_HOOKTYPE=pretxnchangegroup HG_NODE=27547f69f25460a52fff66ad004e58da7ad3fb56 HG_NODE_LAST=27547f69f25460a52fff66ad004e58da7ad3fb56 HG_PENDING=$TESTTMP/clone HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
+  pretxnchangegroup hook: HG_HOOKNAME=pretxnchangegroup
+  HG_HOOKTYPE=pretxnchangegroup
+  HG_NODE=27547f69f25460a52fff66ad004e58da7ad3fb56
+  HG_NODE_LAST=27547f69f25460a52fff66ad004e58da7ad3fb56
+  HG_PENDING=$TESTTMP/clone
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  file:/*/$TESTTMP/repo (glob)
+  HG_URL=file:$TESTTMP/repo
+  
   remote: changegroup2
   adding changesets
   adding manifests
   adding file changes
   added 1 changesets with 1 changes to 1 files
-  pretxnchangegroup hook: HG_HOOKNAME=pretxnchangegroup HG_HOOKTYPE=pretxnchangegroup HG_NODE=f838bfaca5c7226600ebcfd84f3c3c13a28d3757 HG_NODE_LAST=f838bfaca5c7226600ebcfd84f3c3c13a28d3757 HG_PENDING=$TESTTMP/clone HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
+  pretxnchangegroup hook: HG_HOOKNAME=pretxnchangegroup
+  HG_HOOKTYPE=pretxnchangegroup
+  HG_NODE=f838bfaca5c7226600ebcfd84f3c3c13a28d3757
+  HG_NODE_LAST=f838bfaca5c7226600ebcfd84f3c3c13a28d3757
+  HG_PENDING=$TESTTMP/clone
+  HG_PHASES_MOVED=1
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  file:/*/$TESTTMP/repo (glob)
+  HG_URL=file:$TESTTMP/repo
+  
   new changesets 27547f69f254:f838bfaca5c7
-  changegroup hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=27547f69f25460a52fff66ad004e58da7ad3fb56 HG_NODE_LAST=27547f69f25460a52fff66ad004e58da7ad3fb56 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
-  incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=27547f69f25460a52fff66ad004e58da7ad3fb56 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
-  changegroup hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=f838bfaca5c7226600ebcfd84f3c3c13a28d3757 HG_NODE_LAST=f838bfaca5c7226600ebcfd84f3c3c13a28d3757 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
-  incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=f838bfaca5c7226600ebcfd84f3c3c13a28d3757 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
+  changegroup hook: HG_HOOKNAME=changegroup
+  HG_HOOKTYPE=changegroup
+  HG_NODE=27547f69f25460a52fff66ad004e58da7ad3fb56
+  HG_NODE_LAST=27547f69f25460a52fff66ad004e58da7ad3fb56
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  file:/*/$TESTTMP/repo (glob)
+  HG_URL=file:$TESTTMP/repo
+  
+  incoming hook: HG_HOOKNAME=incoming
+  HG_HOOKTYPE=incoming
+  HG_NODE=27547f69f25460a52fff66ad004e58da7ad3fb56
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  file:/*/$TESTTMP/repo (glob)
+  HG_URL=file:$TESTTMP/repo
+  
+  changegroup hook: HG_HOOKNAME=changegroup
+  HG_HOOKTYPE=changegroup
+  HG_NODE=f838bfaca5c7226600ebcfd84f3c3c13a28d3757
+  HG_NODE_LAST=f838bfaca5c7226600ebcfd84f3c3c13a28d3757
+  HG_PHASES_MOVED=1
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  file:/*/$TESTTMP/repo (glob)
+  HG_URL=file:$TESTTMP/repo
+  
+  incoming hook: HG_HOOKNAME=incoming
+  HG_HOOKTYPE=incoming
+  HG_NODE=f838bfaca5c7226600ebcfd84f3c3c13a28d3757
+  HG_PHASES_MOVED=1
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  file:/*/$TESTTMP/repo (glob)
+  HG_URL=file:$TESTTMP/repo
+  
   pullop.cgresult is 1
   (run 'hg update' to get a working copy)
   $ hg update
@@ -152,21 +209,104 @@
   adding manifests
   adding file changes
   added 2 changesets with 2 changes to 2 files (+1 heads)
-  pretxnchangegroup hook: HG_HOOKNAME=pretxnchangegroup HG_HOOKTYPE=pretxnchangegroup HG_NODE=b3325c91a4d916bcc4cdc83ea3fe4ece46a42f6e HG_NODE_LAST=8a5212ebc8527f9fb821601504794e3eb11a1ed3 HG_PENDING=$TESTTMP/clone HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
+  pretxnchangegroup hook: HG_HOOKNAME=pretxnchangegroup
+  HG_HOOKTYPE=pretxnchangegroup
+  HG_NODE=b3325c91a4d916bcc4cdc83ea3fe4ece46a42f6e
+  HG_NODE_LAST=8a5212ebc8527f9fb821601504794e3eb11a1ed3
+  HG_PENDING=$TESTTMP/clone
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  file:/*/$TESTTMP/repo (glob)
+  HG_URL=file:$TESTTMP/repo
+  
   remote: changegroup2
   adding changesets
   adding manifests
   adding file changes
   added 3 changesets with 3 changes to 3 files (+1 heads)
-  pretxnchangegroup hook: HG_HOOKNAME=pretxnchangegroup HG_HOOKTYPE=pretxnchangegroup HG_NODE=7f219660301fe4c8a116f714df5e769695cc2b46 HG_NODE_LAST=5cd59d311f6508b8e0ed28a266756c859419c9f1 HG_PENDING=$TESTTMP/clone HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
+  pretxnchangegroup hook: HG_HOOKNAME=pretxnchangegroup
+  HG_HOOKTYPE=pretxnchangegroup
+  HG_NODE=7f219660301fe4c8a116f714df5e769695cc2b46
+  HG_NODE_LAST=5cd59d311f6508b8e0ed28a266756c859419c9f1
+  HG_PENDING=$TESTTMP/clone
+  HG_PHASES_MOVED=1
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  file:/*/$TESTTMP/repo (glob)
+  HG_URL=file:$TESTTMP/repo
+  
   new changesets b3325c91a4d9:5cd59d311f65
-  changegroup hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=b3325c91a4d916bcc4cdc83ea3fe4ece46a42f6e HG_NODE_LAST=8a5212ebc8527f9fb821601504794e3eb11a1ed3 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
-  incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=b3325c91a4d916bcc4cdc83ea3fe4ece46a42f6e HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
-  incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=8a5212ebc8527f9fb821601504794e3eb11a1ed3 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
-  changegroup hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=7f219660301fe4c8a116f714df5e769695cc2b46 HG_NODE_LAST=5cd59d311f6508b8e0ed28a266756c859419c9f1 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
-  incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=7f219660301fe4c8a116f714df5e769695cc2b46 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
-  incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=1d14c3ce6ac0582d2809220d33e8cd7a696e0156 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
-  incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=5cd59d311f6508b8e0ed28a266756c859419c9f1 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
+  changegroup hook: HG_HOOKNAME=changegroup
+  HG_HOOKTYPE=changegroup
+  HG_NODE=b3325c91a4d916bcc4cdc83ea3fe4ece46a42f6e
+  HG_NODE_LAST=8a5212ebc8527f9fb821601504794e3eb11a1ed3
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  file:/*/$TESTTMP/repo (glob)
+  HG_URL=file:$TESTTMP/repo
+  
+  incoming hook: HG_HOOKNAME=incoming
+  HG_HOOKTYPE=incoming
+  HG_NODE=b3325c91a4d916bcc4cdc83ea3fe4ece46a42f6e
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  file:/*/$TESTTMP/repo (glob)
+  HG_URL=file:$TESTTMP/repo
+  
+  incoming hook: HG_HOOKNAME=incoming
+  HG_HOOKTYPE=incoming
+  HG_NODE=8a5212ebc8527f9fb821601504794e3eb11a1ed3
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  file:/*/$TESTTMP/repo (glob)
+  HG_URL=file:$TESTTMP/repo
+  
+  changegroup hook: HG_HOOKNAME=changegroup
+  HG_HOOKTYPE=changegroup
+  HG_NODE=7f219660301fe4c8a116f714df5e769695cc2b46
+  HG_NODE_LAST=5cd59d311f6508b8e0ed28a266756c859419c9f1
+  HG_PHASES_MOVED=1
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  file:/*/$TESTTMP/repo (glob)
+  HG_URL=file:$TESTTMP/repo
+  
+  incoming hook: HG_HOOKNAME=incoming
+  HG_HOOKTYPE=incoming
+  HG_NODE=7f219660301fe4c8a116f714df5e769695cc2b46
+  HG_PHASES_MOVED=1
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  file:/*/$TESTTMP/repo (glob)
+  HG_URL=file:$TESTTMP/repo
+  
+  incoming hook: HG_HOOKNAME=incoming
+  HG_HOOKTYPE=incoming
+  HG_NODE=1d14c3ce6ac0582d2809220d33e8cd7a696e0156
+  HG_PHASES_MOVED=1
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  file:/*/$TESTTMP/repo (glob)
+  HG_URL=file:$TESTTMP/repo
+  
+  incoming hook: HG_HOOKNAME=incoming
+  HG_HOOKTYPE=incoming
+  HG_NODE=5cd59d311f6508b8e0ed28a266756c859419c9f1
+  HG_PHASES_MOVED=1
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  file:/*/$TESTTMP/repo (glob)
+  HG_URL=file:$TESTTMP/repo
+  
   pullop.cgresult is 3
   (run 'hg heads' to see heads, 'hg merge' to merge)
   $ hg log -G
@@ -226,18 +366,75 @@
   adding manifests
   adding file changes
   added 1 changesets with 0 changes to 0 files (-1 heads)
-  pretxnchangegroup hook: HG_HOOKNAME=pretxnchangegroup HG_HOOKTYPE=pretxnchangegroup HG_NODE=71bd7b46de72e69a32455bf88d04757d542e6cf4 HG_NODE_LAST=71bd7b46de72e69a32455bf88d04757d542e6cf4 HG_PENDING=$TESTTMP/clone HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
+  pretxnchangegroup hook: HG_HOOKNAME=pretxnchangegroup
+  HG_HOOKTYPE=pretxnchangegroup
+  HG_NODE=71bd7b46de72e69a32455bf88d04757d542e6cf4
+  HG_NODE_LAST=71bd7b46de72e69a32455bf88d04757d542e6cf4
+  HG_PENDING=$TESTTMP/clone
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  file:/*/$TESTTMP/repo (glob)
+  HG_URL=file:$TESTTMP/repo
+  
   remote: changegroup2
   adding changesets
   adding manifests
   adding file changes
   added 1 changesets with 1 changes to 1 files
-  pretxnchangegroup hook: HG_HOOKNAME=pretxnchangegroup HG_HOOKTYPE=pretxnchangegroup HG_NODE=9d18e5bd9ab09337802595d49f1dad0c98df4d84 HG_NODE_LAST=9d18e5bd9ab09337802595d49f1dad0c98df4d84 HG_PENDING=$TESTTMP/clone HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
+  pretxnchangegroup hook: HG_HOOKNAME=pretxnchangegroup
+  HG_HOOKTYPE=pretxnchangegroup
+  HG_NODE=9d18e5bd9ab09337802595d49f1dad0c98df4d84
+  HG_NODE_LAST=9d18e5bd9ab09337802595d49f1dad0c98df4d84
+  HG_PENDING=$TESTTMP/clone
+  HG_PHASES_MOVED=1
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  file:/*/$TESTTMP/repo (glob)
+  HG_URL=file:$TESTTMP/repo
+  
   new changesets 71bd7b46de72:9d18e5bd9ab0
-  changegroup hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=71bd7b46de72e69a32455bf88d04757d542e6cf4 HG_NODE_LAST=71bd7b46de72e69a32455bf88d04757d542e6cf4 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
-  incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=71bd7b46de72e69a32455bf88d04757d542e6cf4 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
-  changegroup hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=9d18e5bd9ab09337802595d49f1dad0c98df4d84 HG_NODE_LAST=9d18e5bd9ab09337802595d49f1dad0c98df4d84 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
-  incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=9d18e5bd9ab09337802595d49f1dad0c98df4d84 HG_PHASES_MOVED=1 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/repo
+  changegroup hook: HG_HOOKNAME=changegroup
+  HG_HOOKTYPE=changegroup
+  HG_NODE=71bd7b46de72e69a32455bf88d04757d542e6cf4
+  HG_NODE_LAST=71bd7b46de72e69a32455bf88d04757d542e6cf4
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  file:/*/$TESTTMP/repo (glob)
+  HG_URL=file:$TESTTMP/repo
+  
+  incoming hook: HG_HOOKNAME=incoming
+  HG_HOOKTYPE=incoming
+  HG_NODE=71bd7b46de72e69a32455bf88d04757d542e6cf4
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  file:/*/$TESTTMP/repo (glob)
+  HG_URL=file:$TESTTMP/repo
+  
+  changegroup hook: HG_HOOKNAME=changegroup
+  HG_HOOKTYPE=changegroup
+  HG_NODE=9d18e5bd9ab09337802595d49f1dad0c98df4d84
+  HG_NODE_LAST=9d18e5bd9ab09337802595d49f1dad0c98df4d84
+  HG_PHASES_MOVED=1
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  file:/*/$TESTTMP/repo (glob)
+  HG_URL=file:$TESTTMP/repo
+  
+  incoming hook: HG_HOOKNAME=incoming
+  HG_HOOKTYPE=incoming
+  HG_NODE=9d18e5bd9ab09337802595d49f1dad0c98df4d84
+  HG_PHASES_MOVED=1
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  file:/*/$TESTTMP/repo (glob)
+  HG_URL=file:$TESTTMP/repo
+  
   pullop.cgresult is -2
   (run 'hg update' to get a working copy)
   $ hg log -G
--- a/tests/test-bundle2-pushback.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-bundle2-pushback.t	Wed Apr 17 13:41:18 2019 -0400
@@ -25,7 +25,8 @@
   >                   b'key': b'new-server-mark',
   >                   b'old': b'',
   >                   b'new': b'tip'}
-  >         encodedparams = [(k, pushkey.encode(v)) for (k,v) in params.items()]
+  >         encodedparams = [(k, pushkey.encode(v))
+  >                           for (k, v) in params.items()]
   >         op.reply.newpart(b'pushkey', mandatoryparams=encodedparams)
   >     else:
   >         op.reply.newpart(b'output', data=b'pushback not enabled')
--- a/tests/test-cbor.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-cbor.py	Wed Apr 17 13:41:18 2019 -0400
@@ -926,7 +926,7 @@
                                  (False, None, -1, cborutil.SPECIAL_NONE))
 
             with self.assertRaisesRegex(cborutil.CBORDecodeError,
-                                        'semantic tag \d+ not allowed'):
+                                        r'semantic tag \d+ not allowed'):
                 cborutil.decodeitem(encoded)
 
 class SpecialTypesTests(TestCase):
@@ -942,7 +942,7 @@
             encoded = cborutil.encodelength(cborutil.MAJOR_TYPE_SPECIAL, i)
 
             with self.assertRaisesRegex(cborutil.CBORDecodeError,
-                                        'special type \d+ not allowed'):
+                                        r'special type \d+ not allowed'):
                 cborutil.decodeitem(encoded)
 
 class SansIODecoderTests(TestCase):
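
The two ``r`` prefixes added above are not cosmetic: since Python 3.6,
``\d`` inside a non-raw string literal is an invalid escape sequence that
triggers a ``DeprecationWarning`` (a hard error under ``-W error``), while
the raw form compiles to the identical regex::

   import re

   pat = r'semantic tag \d+ not allowed'
   assert re.search(pat, 'CBORDecodeError: semantic tag 42 not allowed')
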
--- a/tests/test-check-code.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-check-code.t	Wed Apr 17 13:41:18 2019 -0400
@@ -12,6 +12,18 @@
   > -X hgext/fsmonitor/pywatchman \
   > -X mercurial/thirdparty \
   > | sed 's-\\-/-g' | "$check_code" --warnings --per-file=0 - || false
+  Skipping contrib/automation/hgautomation/__init__.py it has no-che?k-code (glob)
+  Skipping contrib/automation/hgautomation/aws.py it has no-che?k-code (glob)
+  Skipping contrib/automation/hgautomation/cli.py it has no-che?k-code (glob)
+  Skipping contrib/automation/hgautomation/windows.py it has no-che?k-code (glob)
+  Skipping contrib/automation/hgautomation/winrm.py it has no-che?k-code (glob)
+  Skipping contrib/packaging/hgpackaging/downloads.py it has no-che?k-code (glob)
+  Skipping contrib/packaging/hgpackaging/inno.py it has no-che?k-code (glob)
+  Skipping contrib/packaging/hgpackaging/py2exe.py it has no-che?k-code (glob)
+  Skipping contrib/packaging/hgpackaging/util.py it has no-che?k-code (glob)
+  Skipping contrib/packaging/hgpackaging/wix.py it has no-che?k-code (glob)
+  Skipping contrib/packaging/inno/build.py it has no-che?k-code (glob)
+  Skipping contrib/packaging/wix/build.py it has no-che?k-code (glob)
   Skipping i18n/polib.py it has no-che?k-code (glob)
   Skipping mercurial/statprof.py it has no-che?k-code (glob)
   Skipping tests/badserverext.py it has no-che?k-code (glob)
@@ -22,7 +34,7 @@
   >>> commands = []
   >>> with open('mercurial/debugcommands.py', 'rb') as fh:
   ...     for line in fh:
-  ...         m = re.match(b"^@command\('([a-z]+)", line)
+  ...         m = re.match(br"^@command\('([a-z]+)", line)
   ...         if m:
   ...             commands.append(m.group(1))
   >>> scommands = list(sorted(commands))
--- a/tests/test-check-module-imports.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-check-module-imports.t	Wed Apr 17 13:41:18 2019 -0400
@@ -18,9 +18,12 @@
   > 'tests/**.t' \
   > -X hgweb.cgi \
   > -X setup.py \
+  > -X contrib/automation/ \
   > -X contrib/debugshell.py \
   > -X contrib/hgweb.fcgi \
   > -X contrib/packaging/hg-docker \
+  > -X contrib/packaging/hgpackaging/ \
+  > -X contrib/packaging/inno/ \
   > -X contrib/python-zstandard/ \
   > -X contrib/win32/hgwebdir_wsgi.py \
   > -X contrib/perf-utils/perf-revlog-write-plot.py \
--- a/tests/test-check-py3-compat.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-check-py3-compat.t	Wed Apr 17 13:41:18 2019 -0400
@@ -5,6 +5,10 @@
 
 #if no-py3
   $ testrepohg files 'set:(**.py)' \
+  > -X contrib/automation/ \
+  > -X contrib/packaging/hgpackaging/ \
+  > -X contrib/packaging/inno/ \
+  > -X contrib/packaging/wix/ \
   > -X hgdemandimport/demandimportpy2.py \
   > -X mercurial/thirdparty/cbor \
   > | sed 's|\\|/|g' | xargs "$PYTHON" contrib/check-py3-compat.py
--- a/tests/test-clone.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-clone.t	Wed Apr 17 13:41:18 2019 -0400
@@ -43,7 +43,6 @@
   default                       10:a7949464abda
   $ ls .hg/cache
   branch2-served
-  manifestfulltextcache (reporevlogstore !)
   rbc-names-v1
   rbc-revs-v1
 
@@ -569,7 +568,7 @@
   > extensions.loadall(myui)
   > extensions.populateui(myui)
   > repo = hg.repository(myui, b'a')
-  > hg.clone(myui, {}, repo, dest=b"ua", branch=[b"stable",])
+  > hg.clone(myui, {}, repo, dest=b"ua", branch=[b"stable"])
   > EOF
 
   $ "$PYTHON" branchclone.py
--- a/tests/test-commit-interactive-curses.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-commit-interactive-curses.t	Wed Apr 17 13:41:18 2019 -0400
@@ -333,9 +333,9 @@
   $ cp $HGRCPATH.pretest $HGRCPATH
   $ chunkselectorinterface() {
   > "$PYTHON" <<EOF
-  > from mercurial import hg, ui;\
-  > repo = hg.repository(ui.ui.load(), ".");\
-  > print(repo.ui.interface("chunkselector"))
+  > from mercurial import hg, pycompat, ui;\
+  > repo = hg.repository(ui.ui.load(), b".");\
+  > print(pycompat.sysstr(repo.ui.interface(b"chunkselector")))
   > EOF
   > }
   $ chunkselectorinterface
--- a/tests/test-commit-interactive.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-commit-interactive.t	Wed Apr 17 13:41:18 2019 -0400
@@ -26,10 +26,8 @@
   > EOF
   diff --git a/empty-rw b/empty-rw
   new file mode 100644
-  examine changes to 'empty-rw'? [Ynesfdaq?] n
-  
-  no changes to record
-  [1]
+  abort: empty commit message
+  [255]
 
   $ hg tip -p
   changeset:   -1:000000000000
@@ -47,8 +45,6 @@
   > EOF
   diff --git a/empty-rw b/empty-rw
   new file mode 100644
-  examine changes to 'empty-rw'? [Ynesfdaq?] y
-  
   abort: empty commit message
   [255]
 
@@ -72,12 +68,9 @@
 
   $ hg commit -i -d '0 0' -m empty empty-rw<<EOF
   > y
-  > y
   > EOF
   diff --git a/empty-rw b/empty-rw
   new file mode 100644
-  examine changes to 'empty-rw'? [Ynesfdaq?] y
-  
 
   $ hg tip -p
   changeset:   0:c0708cf4e46e
@@ -249,8 +242,6 @@
   > EOF
   diff --git a/plain b/plain
   new file mode 100644
-  examine changes to 'plain'? [Ynesfdaq?] y
-  
   @@ -0,0 +1,10 @@
   +1
   +2
@@ -306,8 +297,6 @@
   > EOF
   diff --git a/plain b/plain
   1 hunks, 1 lines changed
-  examine changes to 'plain'? [Ynesfdaq?] y
-  
   @@ -8,3 +8,4 @@ 7
    8
    9
@@ -325,8 +314,6 @@
   > EOF
   diff --git a/plain b/plain
   1 hunks, 1 lines changed
-  examine changes to 'plain'? [Ynesfdaq?] y
-  
   @@ -9,3 +9,4 @@ 8
    9
    10
@@ -467,8 +454,6 @@
   > EOF
   diff --git a/plain b/plain
   1 hunks, 1 lines changed
-  examine changes to 'plain'? [Ynesfdaq?] y
-  
   @@ -9,4 +9,4 @@ 8
    9
    10
@@ -480,8 +465,6 @@
   
   diff --git a/plain2 b/plain2
   new file mode 100644
-  examine changes to 'plain2'? [Ynesfdaq?] y
-  
   @@ -0,0 +1,1 @@
   +1
   record change 2/2 to 'plain2'? [Ynesfdaq?] y
@@ -504,8 +487,6 @@
   > EOF
   diff --git a/plain b/plain
   2 hunks, 3 lines changed
-  examine changes to 'plain'? [Ynesfdaq?] y
-  
   @@ -1,4 +1,4 @@
   -1
   +2
@@ -524,8 +505,6 @@
   
   diff --git a/plain2 b/plain2
   1 hunks, 1 lines changed
-  examine changes to 'plain2'? [Ynesfdaq?] y
-  
   @@ -1,1 +1,2 @@
    1
   +2
@@ -572,14 +551,11 @@
 Record end
 
   $ hg commit -i -d '11 0' -m end-only plain <<EOF
-  > y
   > n
   > y
   > EOF
   diff --git a/plain b/plain
   2 hunks, 4 lines changed
-  examine changes to 'plain'? [Ynesfdaq?] y
-  
   @@ -1,9 +1,6 @@
   -2
   -2
@@ -630,8 +606,6 @@
   > EOF
   diff --git a/plain b/plain
   1 hunks, 3 lines changed
-  examine changes to 'plain'? [Ynesfdaq?] y
-  
   @@ -1,6 +1,3 @@
   -2
   -2
@@ -671,14 +645,11 @@
 Record end
 
   $ hg commit -i --traceback -d '13 0' -m end-again plain<<EOF
-  > y
   > n
   > y
   > EOF
   diff --git a/plain b/plain
   2 hunks, 4 lines changed
-  examine changes to 'plain'? [Ynesfdaq?] y
-  
   @@ -1,6 +1,9 @@
   +1
   +2
@@ -714,13 +685,10 @@
   $ hg commit -i --config diff.noprefix=True -d '14 0' -m middle-only plain <<EOF
   > y
   > y
-  > y
   > n
   > EOF
   diff --git a/plain b/plain
   3 hunks, 7 lines changed
-  examine changes to 'plain'? [Ynesfdaq?] y
-  
   @@ -1,2 +1,5 @@
   +1
   +2
@@ -781,8 +749,6 @@
   > EOF
   diff --git a/plain b/plain
   1 hunks, 2 lines changed
-  examine changes to 'plain'? [Ynesfdaq?] y
-  
   @@ -9,3 +9,5 @@ 6
    7
    8
@@ -823,8 +789,6 @@
   > EOF
   diff --git a/subdir/a b/subdir/a
   1 hunks, 1 lines changed
-  examine changes to 'subdir/a'? [Ynesfdaq?] y
-  
   @@ -1,1 +1,2 @@
    a
   +a
@@ -879,6 +843,35 @@
   abort: user quit
   [255]
 
+Patterns
+
+  $ hg commit -i 'glob:f*' << EOF
+  > y
+  > n
+  > y
+  > n
+  > EOF
+  diff --git a/subdir/f1 b/subdir/f1
+  1 hunks, 1 lines changed
+  examine changes to 'subdir/f1'? [Ynesfdaq?] y
+  
+  @@ -1,1 +1,2 @@
+   a
+  +a
+  record change 1/2 to 'subdir/f1'? [Ynesfdaq?] n
+  
+  diff --git a/subdir/f2 b/subdir/f2
+  1 hunks, 1 lines changed
+  examine changes to 'subdir/f2'? [Ynesfdaq?] y
+  
+  @@ -1,1 +1,2 @@
+   b
+  +b
+  record change 2/2 to 'subdir/f2'? [Ynesfdaq?] n
+  
+  no changes to record
+  [1]
+
 #if gettext
 
 Test translated help message
@@ -1807,3 +1800,82 @@
   n   0         -1 unset               subdir/f1
   $ hg status -A subdir/f1
   M subdir/f1
+
+Test commands.commit.interactive.unified=0
+
+  $ hg init $TESTTMP/b
+  $ cd $TESTTMP/b
+  $ cat > foo <<EOF
+  > 1
+  > 2
+  > 3
+  > 4
+  > 5
+  > EOF
+  $ hg ci -qAm initial
+  $ cat > foo <<EOF
+  > 1
+  > change1
+  > 2
+  > 3
+  > change2
+  > 4
+  > 5
+  > EOF
+  $ printf 'y\ny\ny\n' | hg ci -im initial --config commands.commit.interactive.unified=0
+  diff --git a/foo b/foo
+  2 hunks, 2 lines changed
+  examine changes to 'foo'? [Ynesfdaq?] y
+  
+  @@ -1,0 +2,1 @@ 1
+  +change1
+  record change 1/2 to 'foo'? [Ynesfdaq?] y
+  
+  @@ -3,0 +5,1 @@ 3
+  +change2
+  record change 2/2 to 'foo'? [Ynesfdaq?] y
+  
+  $ cd $TESTTMP
+
+Test diff.ignoreblanklines=1
+
+  $ hg init c
+  $ cd c
+  $ cat > foo <<EOF
+  > 1
+  > 2
+  > 3
+  > 4
+  > 5
+  > EOF
+  $ hg ci -qAm initial
+  $ cat > foo <<EOF
+  > 1
+  > 
+  > 2
+  > 3
+  > change2
+  > 4
+  > 5
+  > EOF
+  $ printf 'y\ny\ny\n' | hg ci -im initial --config diff.ignoreblanklines=1
+  diff --git a/foo b/foo
+  2 hunks, 2 lines changed
+  examine changes to 'foo'? [Ynesfdaq?] y
+  
+  @@ -1,3 +1,4 @@
+   1
+  +
+   2
+   3
+  record change 1/2 to 'foo'? [Ynesfdaq?] y
+  
+  @@ -2,4 +3,5 @@
+   2
+   3
+  +change2
+   4
+   5
+  record change 2/2 to 'foo'? [Ynesfdaq?] y
+  
+
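
The ``commands.commit.interactive.unified=0`` run above requests hunks with
zero context lines, hence ranges like ``@@ -1,0 +2,1 @@``. As a rough model
(not Mercurial's own diff code), the standard library's ``difflib`` produces
analogous zero-context hunks when asked for ``n=0``::

    import difflib

    old = ['1\n', '2\n', '3\n', '4\n', '5\n']
    new = ['1\n', 'change1\n', '2\n', '3\n', 'change2\n', '4\n', '5\n']

    # n=0 means no context lines around a change, so each hunk holds
    # only the inserted lines, mirroring unified=0 above.
    for line in difflib.unified_diff(old, new, n=0):
        print(line, end='')
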
--- a/tests/test-commit-multiple.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-commit-multiple.t	Wed Apr 17 13:41:18 2019 -0400
@@ -95,8 +95,7 @@
   >                                       for f in repo[rev].files())))
   > 
   > repo = hg.repository(uimod.ui.load(), b'.')
-  > assert len(repo) == 6, \
-  >        "initial: len(repo): %d, expected: 6" % len(repo)
+  > assert len(repo) == 6, "initial: len(repo): %d, expected: 6" % len(repo)
   > 
   > replacebyte(b"bugfix", b"u")
   > time.sleep(2)
--- a/tests/test-commit.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-commit.t	Wed Apr 17 13:41:18 2019 -0400
@@ -512,6 +512,7 @@
   HG: dels=
   HG: files=changed
   HG:
+  HG: diff -r d2313f97106f changed
   HG: --- a/changed	Thu Jan 01 00:00:00 1970 +0000
   HG: +++ b/changed	Thu Jan 01 00:00:00 1970 +0000
   HG: @@ -1,1 +1,2 @@
@@ -573,6 +574,7 @@
   HG: dels=removed
   HG: files=added removed
   HG:
+  HG: diff -r d2313f97106f added
   HG: --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
   HG: +++ b/added	Thu Jan 01 00:00:00 1970 +0000
   HG: @@ -0,0 +1,1 @@
@@ -583,6 +585,7 @@
   HG: dels=removed
   HG: files=added removed
   HG:
+  HG: diff -r d2313f97106f removed
   HG: --- a/removed	Thu Jan 01 00:00:00 1970 +0000
   HG: +++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
   HG: @@ -1,1 +0,0 @@
--- a/tests/test-completion.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-completion.t	Wed Apr 17 13:41:18 2019 -0400
@@ -103,7 +103,10 @@
   debugmergestate
   debugnamecomplete
   debugobsolete
+  debugp1copies
+  debugp2copies
   debugpathcomplete
+  debugpathcopies
   debugpeer
   debugpickmergetool
   debugpushkey
@@ -260,7 +263,7 @@
   debugdate: extended
   debugdeltachain: changelog, manifest, dir, template
   debugdirstate: nodates, dates, datesort
-  debugdiscovery: old, nonheads, rev, ssh, remotecmd, insecure
+  debugdiscovery: old, nonheads, rev, seed, ssh, remotecmd, insecure
   debugdownload: output
   debugextensions: template
   debugfileset: rev, all-files, show-matcher, show-stage
@@ -279,7 +282,10 @@
   debugmergestate: 
   debugnamecomplete: 
   debugobsolete: flags, record-parents, rev, exclusive, index, delete, date, user, template
+  debugp1copies: rev
+  debugp2copies: rev
   debugpathcomplete: full, normal, added, removed
+  debugpathcopies: include, exclude
   debugpeer: 
   debugpickmergetool: rev, changedelete, include, exclude, tool
   debugpushkey: 
--- a/tests/test-config.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-config.t	Wed Apr 17 13:41:18 2019 -0400
@@ -211,3 +211,12 @@
   $ hg log --template '{author}\n'
   repo user
   $ cd ..
+
+configs should be read in lexicographical order
+
+  $ mkdir configs
+  $ for i in `$TESTDIR/seq.py 10 99`; do
+  >    printf "[section]\nkey=$i" > configs/$i.rc
+  > done
+  $ HGRCPATH=configs hg config section.key
+  99
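
When ``HGRCPATH`` points at a directory, its ``*.rc`` files are read in
lexicographical order and later files override earlier ones, which is why
``section.key`` resolves to ``99`` above. A minimal model of the ordering
(illustrative only; it skips section headers and is not Mercurial's config
parser)::

    import os

    def read_configs(directory):
        values = {}
        # sorted() gives lexicographical order, so for a key defined in
        # several files the last file read (here "99.rc") wins.
        for name in sorted(os.listdir(directory)):
            if not name.endswith('.rc'):
                continue
            with open(os.path.join(directory, name)) as f:
                for line in f:
                    if '=' in line and not line.startswith('['):
                        key, _, value = line.strip().partition('=')
                        values[key.strip()] = value.strip()
        return values
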
--- a/tests/test-context.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-context.py	Wed Apr 17 13:41:18 2019 -0400
@@ -27,9 +27,9 @@
     out.write(data + end)
     out.flush()
 
-u = uimod.ui.load()
+ui = uimod.ui.load()
 
-repo = hg.repository(u, b'test1', create=1)
+repo = hg.repository(ui, b'test1', create=1)
 os.chdir('test1')
 
 # create 'foo' with fixed time stamp
@@ -63,7 +63,7 @@
 # test performing a status
 
 def getfilectx(repo, memctx, f):
-    fctx = memctx.parents()[0][f]
+    fctx = memctx.p1()[f]
     data, flags = fctx.data(), fctx.flags()
     if f == b'foo':
         data += b'bar\n'
@@ -172,7 +172,7 @@
 # test manifestlog being changed
 print('== commit with manifestlog invalidated')
 
-repo = hg.repository(u, b'test2', create=1)
+repo = hg.repository(ui, b'test2', create=1)
 os.chdir('test2')
 
 # make some commits
--- a/tests/test-contrib-check-code.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-contrib-check-code.t	Wed Apr 17 13:41:18 2019 -0400
@@ -7,6 +7,9 @@
   > def toto( arg1, arg2):
   >     del(arg2)
   >     return ( 5+6, 9)
+  > def badwrap():
+  >     return 1 + \\
+  >        2
   > NO_CHECK_EOF
   $ cat > quote.py <<NO_CHECK_EOF
   > # let's use quote in comments
@@ -42,6 +45,9 @@
    >     return ( 5+6, 9)
    gratuitous whitespace in () or []
    missing whitespace in expression
+  ./wrong.py:5:
+   >     return 1 + \
+   Use () to wrap long lines in Python, not \
   ./quote.py:5:
    > '"""', 42+1, """and
    missing whitespace in expression
@@ -373,3 +379,51 @@
    > class empty(object):
    omit superfluous pass
   [1]
+
+Check code fragments embedded in a test script
+
+  $ cat > embedded-code.t <<NO_CHECK_EOF
+  > code fragment in doctest style
+  >   >>> x = (1,2)
+  >   ... 
+  >   ... x = (1,2)
+  > 
+  > code fragment in heredoc style
+  >   $ python <<EOF
+  >   > x = (1,2)
+  >   > EOF
+  > 
+  > code fragment in file heredoc style
+  >   $ python > file.py <<EOF
+  >   > x = (1,2)
+  >   > EOF
+  > NO_CHECK_EOF
+  $ "$check_code" embedded-code.t
+  embedded-code.t:2:
+   > x = (1,2)
+   missing whitespace after ,
+  embedded-code.t:4:
+   > x = (1,2)
+   missing whitespace after ,
+  embedded-code.t:8:
+   > x = (1,2)
+   missing whitespace after ,
+  embedded-code.t:13:
+   > x = (1,2)
+   missing whitespace after ,
+  [1]
+
+"max warnings per file" is shared by all embedded code fragments
+
+  $ "$check_code" --per-file=3 embedded-code.t
+  embedded-code.t:2:
+   > x = (1,2)
+   missing whitespace after ,
+  embedded-code.t:4:
+   > x = (1,2)
+   missing whitespace after ,
+  embedded-code.t:8:
+   > x = (1,2)
+   missing whitespace after ,
+   (too many errors, giving up)
+  [1]
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/tests/test-contrib-emacs.t	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,8 @@
+#require emacs
+  $ emacs -q -no-site-file -batch -l $TESTDIR/../contrib/hg-test-mode.el \
+  >  -f ert-run-tests-batch-and-exit
+  Running 1 tests (*) (glob)
+     passed  1/1  hg-test-mode--compilation-mode-support
+  
+  Ran 1 tests, 1 results as expected (*) (glob)
+  
--- a/tests/test-contrib-perf.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-contrib-perf.t	Wed Apr 17 13:41:18 2019 -0400
@@ -32,14 +32,42 @@
 
   $ cat >> $HGRCPATH << EOF
   > [extensions]
-  > perfstatusext=$CONTRIBDIR/perf.py
+  > perf=$CONTRIBDIR/perf.py
   > [perf]
   > presleep=0
   > stub=on
   > parentscount=1
   > EOF
-  $ hg help perfstatusext
-  perfstatusext extension - helper extension to measure performance
+  $ hg help -e perf
+  perf extension - helper extension to measure performance
+  
+  Configurations
+  ==============
+  
+  "perf"
+  ------
+  
+  "all-timing"
+      When set, additional statistics will be reported for each benchmark: best,
+      worst, median average. If not set only the best timing is reported
+      (default: off).
+  
+  "presleep"
+    number of second to wait before any group of runs (default: 1)
+  
+  "run-limits"
+    Control the number of runs each benchmark will perform. The option value
+    should be a list of '<time>-<numberofrun>' pairs. After each run the
+    conditions are considered in order with the following logic:
+  
+        If benchmark has been running for <time> seconds, and we have performed
+        <numberofrun> iterations, stop the benchmark,
+  
+    The default value is: '3.0-100, 10.0-3'
+  
+  "stub"
+      When set, benchmarks will only be run once, useful for testing (default:
+      off)
   
   list of commands:
   
@@ -88,12 +116,12 @@
                  (no help text available)
    perffncachewrite
                  (no help text available)
-   perfheads     (no help text available)
+   perfheads     benchmark the computation of a changelog heads
    perfhelper-pathcopies
                  find statistic about potential parameters for the
                  'perftracecopies'
    perfignore    benchmark operation related to computing ignore
-   perfindex     (no help text available)
+   perfindex     benchmark index creation time followed by a lookup
    perflinelogedits
                  (no help text available)
    perfloadmarkers
@@ -109,7 +137,9 @@
    perfmoonwalk  benchmark walking the changelog backwards
    perfnodelookup
                  (no help text available)
-   perfparents   (no help text available)
+   perfnodemap   benchmark the time necessary to look up revision from a cold
+                 nodemap
+   perfparents   benchmark the time necessary to fetch one changeset's parents.
    perfpathcopies
                  benchmark the copy tracing logic
    perfphases    benchmark phasesets computation
@@ -140,7 +170,7 @@
    perfwalk      (no help text available)
    perfwrite     microbenchmark ui.write
   
-  (use 'hg help -v perfstatusext' to show built-in aliases and global options)
+  (use 'hg help -v perf' to show built-in aliases and global options)
   $ hg perfaddremove
   $ hg perfancestors
   $ hg perfancestorset 2
@@ -211,6 +241,32 @@
   $ hg perfparents
   $ hg perfdiscovery -q .
 
+Test run control
+----------------
+
+Simple single entry
+
+  $ hg perfparents --config perf.stub=no --config perf.run-limits='0.000000001-15'
+  ! wall * comb * user * sys * (best of 15) (glob)
+
+Multiple entries
+
+  $ hg perfparents --config perf.stub=no --config perf.run-limits='500000-1, 0.000000001-5'
+  ! wall * comb * user * sys * (best of 5) (glob)
+
+error cases are ignored
+
+  $ hg perfparents --config perf.stub=no --config perf.run-limits='500, 0.000000001-5'
+  malformatted run limit entry, missing "-": 500
+  ! wall * comb * user * sys * (best of 5) (glob)
+  $ hg perfparents --config perf.stub=no --config perf.run-limits='aaa-12, 0.000000001-5'
+  malformatted run limit entry, could not convert string to float: aaa: aaa-12 (no-py3 !)
+  malformatted run limit entry, could not convert string to float: 'aaa': aaa-12 (py3 !)
+  ! wall * comb * user * sys * (best of 5) (glob)
+  $ hg perfparents --config perf.stub=no --config perf.run-limits='12-aaaaaa, 0.000000001-5'
+  malformatted run limit entry, invalid literal for int() with base 10: 'aaaaaa': 12-aaaaaa
+  ! wall * comb * user * sys * (best of 5) (glob)
+
 test actual output
 ------------------
 
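
``perf.run-limits`` takes comma-separated ``<time>-<numberofrun>`` pairs; the
diagnostics above come from entries that fail to split on ``-`` or to parse
as numbers. A sketch of such a parser (names are illustrative, not perf.py's
actual helpers)::

    def parse_run_limits(value):
        limits = []
        for entry in value.split(','):
            entry = entry.strip()
            time_s, sep, runs = entry.partition('-')
            if not sep:
                # corresponds to the missing "-" diagnostic above
                print('malformatted run limit entry, missing "-": %s' % entry)
                continue
            try:
                limits.append((float(time_s), int(runs)))
            except ValueError as exc:
                print('malformatted run limit entry, %s: %s' % (exc, entry))
        return limits

    # parse_run_limits('3.0-100, 10.0-3') -> [(3.0, 100), (10.0, 3)]
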
--- a/tests/test-contrib-relnotes.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-contrib-relnotes.t	Wed Apr 17 13:41:18 2019 -0400
@@ -266,7 +266,6 @@
    * diff: disable diff.noprefix option for diffstat (Bts:issue5759)
    * evolution: make reporting of new unstable changesets optional
    * extdata: abort if external command exits with non-zero status (BC)
-   * fancyopts: add early-options parser compatible with getopt()
    * graphlog: add another graph node type, unstable, using character "*" (BC)
    * hgdemandimport: use correct hyperlink to python-bug in comments (Bts:issue5765)
    * httppeer: add support for tracing all http request made by the peer
@@ -277,17 +276,18 @@
    * morestatus: don't crash with different drive letters for repo.root and CWD
    * outgoing: respect ":pushurl" paths (Bts:issue5365)
    * remove: print message for each file in verbose mode only while using '-A' (BC)
-   * rewriteutil: use precheck() in uncommit and amend commands
    * scmutil: don't try to delete origbackup symlinks to directories (Bts:issue5731)
    * sshpeer: add support for request tracing
    * subrepo: add config option to reject any subrepo operations (SEC)
    * subrepo: disable git and svn subrepos by default (BC) (SEC)
+   * subrepo: disallow symlink traversal across subrepo mount point (SEC)
    * subrepo: extend config option to disable subrepos by type (SEC)
    * subrepo: handle 'C:' style paths on the command line (Bts:issue5770)
    * subrepo: use per-type config options to enable subrepos
    * svnsubrepo: check if subrepo is missing when checking dirty state (Bts:issue5657)
    * test-bookmarks-pushpull: stabilize for Windows
    * test-run-tests: stabilize the test (Bts:issue5735)
+   * tests: show symlink traversal across subrepo mount point (SEC)
    * tr-summary: keep a weakref to the unfiltered repository
    * unamend: fix command summary line
    * uncommit: unify functions _uncommitdirstate and _unamenddirstate to one
--- a/tests/test-convert-cvs.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-convert-cvs.t	Wed Apr 17 13:41:18 2019 -0400
@@ -11,11 +11,11 @@
   $ echo "[extensions]" >> $HGRCPATH
   $ echo "convert = " >> $HGRCPATH
   $ cat > cvshooks.py <<EOF
-  > def cvslog(ui,repo,hooktype,log):
-  >     ui.write(b'%s hook: %d entries\n' % (hooktype,len(log)))
+  > def cvslog(ui, repo, hooktype, log):
+  >     ui.write(b'%s hook: %d entries\n' % (hooktype, len(log)))
   > 
-  > def cvschangesets(ui,repo,hooktype,changesets):
-  >     ui.write(b'%s hook: %d changesets\n' % (hooktype,len(changesets)))
+  > def cvschangesets(ui, repo, hooktype, changesets):
+  >     ui.write(b'%s hook: %d changesets\n' % (hooktype, len(changesets)))
   > EOF
   $ hookpath=`pwd`
   $ cat <<EOF >> $HGRCPATH
--- a/tests/test-convert-hg-source.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-convert-hg-source.t	Wed Apr 17 13:41:18 2019 -0400
@@ -206,3 +206,22 @@
   a
   c
   d
+  $ cd ..
+
+  $ hg init commit-references
+  $ cd commit-references
+  $ echo a > a
+  $ hg ci -Aqm initial
+  $ echo b > b
+  $ hg ci -Aqm 'the previous commit was 1451231c8757'
+  $ echo c > c
+  $ hg ci -Aqm 'the working copy is called ffffffffffff'
+
+  $ cd ..
+  $ hg convert commit-references new-commit-references -q \
+  >     --config convert.hg.sourcename=yes
+  $ cd new-commit-references
+  $ hg log -T '{node|short} {desc}\n'
+  fe295c9e6bc6 the working copy is called ffffffffffff
+  642508659503 the previous commit was c2491f685436
+  c2491f685436 initial
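
The converted log above shows a known source hash in a commit message
rewritten to its converted counterpart, while the unknown ``ffffffffffff``
is preserved. A naive sketch of that kind of mapping step (the convert
extension's real logic differs)::

    import re

    HASH_RE = re.compile(r'\b[0-9a-f]{12,40}\b')

    def rewrite_references(message, hashmap):
        # hashmap maps source hashes to converted destination hashes;
        # anything absent from the map is left verbatim.
        def replace(m):
            return hashmap.get(m.group(0), m.group(0))
        return HASH_RE.sub(replace, message)
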
--- a/tests/test-convert-hg-svn.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-convert-hg-svn.t	Wed Apr 17 13:41:18 2019 -0400
@@ -11,11 +11,7 @@
   > EOF
 
   $ SVNREPOPATH=`pwd`/svn-repo
-#if windows
-  $ SVNREPOURL=file:///`"$PYTHON" -c "import urllib, sys; sys.stdout.write(urllib.quote(sys.argv[1]))" "$SVNREPOPATH"`
-#else
-  $ SVNREPOURL=file://`"$PYTHON" -c "import urllib, sys; sys.stdout.write(urllib.quote(sys.argv[1]))" "$SVNREPOPATH"`
-#endif
+  $ SVNREPOURL="`"$PYTHON" $TESTDIR/svnurlof.py \"$SVNREPOPATH\"`"
 
   $ svnadmin create "$SVNREPOPATH"
   $ cat > "$SVNREPOPATH"/hooks/pre-revprop-change <<EOF
--- a/tests/test-convert-svn-move.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-convert-svn-move.t	Wed Apr 17 13:41:18 2019 -0400
@@ -8,11 +8,7 @@
   $ svnadmin create svn-repo
   $ svnadmin load -q svn-repo < "$TESTDIR/svn/move.svndump"
   $ SVNREPOPATH=`pwd`/svn-repo
-#if windows
-  $ SVNREPOURL=file:///`"$PYTHON" -c "import urllib, sys; sys.stdout.write(urllib.quote(sys.argv[1]))" "$SVNREPOPATH"`
-#else
-  $ SVNREPOURL=file://`"$PYTHON" -c "import urllib, sys; sys.stdout.write(urllib.quote(sys.argv[1]))" "$SVNREPOPATH"`
-#endif
+  $ SVNREPOURL="`"$PYTHON" $TESTDIR/svnurlof.py \"$SVNREPOPATH\"`"
 
 Convert trunk and branches
 
--- a/tests/test-convert-svn-sink.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-convert-svn-sink.t	Wed Apr 17 13:41:18 2019 -0400
@@ -466,3 +466,85 @@
   msg: Add file a
    A /a
   $ rm -rf a a-hg a-hg-wc
+
+#if execbit
+
+Executable bit removal
+
+  $ hg init a
+
+  $ echo a > a/exec
+  $ chmod +x a/exec
+  $ hg --cwd a ci -d '1 0' -A -m 'create executable'
+  adding exec
+  $ chmod -x a/exec
+  $ hg --cwd a ci -d '2 0' -A -m 'remove executable bit'
+
+  $ hg convert -d svn a
+  assuming destination a-hg
+  initializing svn repository 'a-hg'
+  initializing svn working copy 'a-hg-wc'
+  scanning source...
+  sorting...
+  converting...
+  1 create executable
+  0 remove executable bit
+  $ svnupanddisplay a-hg-wc 0
+   2 2 test .
+   2 2 test exec
+  revision: 2
+  author: test
+  msg: remove executable bit
+   M /exec
+  revision: 1
+  author: test
+  msg: create executable
+   A /exec
+  $ test ! -x a-hg-wc/exec
+
+  $ rm -rf a a-hg a-hg-wc
+
+#endif
+
+Skipping empty commits
+
+  $ hg init a
+
+  $ hg --cwd a --config ui.allowemptycommit=True ci -d '1 0' -m 'Initial empty commit'
+
+  $ echo a > a/a
+  $ hg --cwd a ci -d '0 0' -A -m 'Some change'
+  adding a
+  $ hg --cwd a --config ui.allowemptycommit=True ci -d '2 0' -m 'Empty commit 1'
+  $ hg --cwd a --config ui.allowemptycommit=True ci -d '3 0' -m 'Empty commit 2'
+  $ echo b > a/b
+  $ hg --cwd a ci -d '0 0' -A -m 'Another change'
+  adding b
+
+  $ hg convert -d svn a
+  assuming destination a-hg
+  initializing svn repository 'a-hg'
+  initializing svn working copy 'a-hg-wc'
+  scanning source...
+  sorting...
+  converting...
+  4 Initial empty commit
+  3 Some change
+  2 Empty commit 1
+  1 Empty commit 2
+  0 Another change
+
+  $ svnupanddisplay a-hg-wc 0
+   2 1 test a
+   2 2 test .
+   2 2 test b
+  revision: 2
+  author: test
+  msg: Another change
+   A /b
+  revision: 1
+  author: test
+  msg: Some change
+   A /a
+
+  $ rm -rf a a-hg a-hg-wc
--- a/tests/test-convert-svn-source.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-convert-svn-source.t	Wed Apr 17 13:41:18 2019 -0400
@@ -13,11 +13,7 @@
 
   $ svnadmin create svn-repo
   $ SVNREPOPATH=`pwd`/svn-repo
-#if windows
-  $ SVNREPOURL=file:///`"$PYTHON" -c "import urllib, sys; sys.stdout.write(urllib.quote(sys.argv[1]))" "$SVNREPOPATH"`
-#else
-  $ SVNREPOURL=file://`"$PYTHON" -c "import urllib, sys; sys.stdout.write(urllib.quote(sys.argv[1]))" "$SVNREPOPATH"`
-#endif
+  $ SVNREPOURL="`"$PYTHON" $TESTDIR/svnurlof.py \"$SVNREPOPATH\"`"
   $ INVALIDREVISIONID=svn:x2147622-4a9f-4db4-a8d3-13562ff547b2/proj%20B/mytrunk@1
   $ VALIDREVISIONID=svn:a2147622-4a9f-4db4-a8d3-13562ff547b2/proj%20B/mytrunk/mytrunk@1
 
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/tests/test-copies-in-changeset.t	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,135 @@
+
+  $ cat >> $HGRCPATH << EOF
+  > [experimental]
+  > copies.write-to=changeset-only
+  > copies.read-from=changeset-only
+  > [alias]
+  > changesetcopies = log -r . -T 'files: {files}
+  >   {extras % "{ifcontains("copies", key, "{key}: {value}\n")}"}'
+  > showcopies = log -r . -T '{file_copies % "{source} -> {name}\n"}'
+  > EOF
+
+Check that copies are recorded correctly
+
+  $ hg init repo
+  $ cd repo
+  $ echo a > a
+  $ hg add a
+  $ hg ci -m initial
+  $ hg cp a b
+  $ hg cp a c
+  $ hg cp a d
+  $ hg ci -m 'copy a to b, c, and d'
+  $ hg changesetcopies
+  files: b c d
+  p1copies: b\x00a (esc)
+  c\x00a (esc)
+  d\x00a (esc)
+  $ hg showcopies
+  a -> b
+  a -> c
+  a -> d
+  $ hg showcopies --config experimental.copies.read-from=compatibility
+  a -> b
+  a -> c
+  a -> d
+  $ hg showcopies --config experimental.copies.read-from=filelog-only
+
+Check that renames are recorded correctly
+
+  $ hg mv b b2
+  $ hg ci -m 'rename b to b2'
+  $ hg changesetcopies
+  files: b b2
+  p1copies: b2\x00b (esc)
+  $ hg showcopies
+  b -> b2
+
+Rename onto an existing file. This should get recorded in the changeset files list and in the
+extras, even though there is no filelog entry.
+
+  $ hg cp b2 c --force
+  $ hg st --copies
+  M c
+    b2
+  $ hg debugindex c
+     rev linkrev nodeid       p1           p2
+       0       1 b789fdd96dc2 000000000000 000000000000
+  $ hg ci -m 'move b onto d'
+  $ hg changesetcopies
+  files: c
+  p1copies: c\x00b2 (esc)
+  $ hg showcopies
+  b2 -> c
+  $ hg debugindex c
+     rev linkrev nodeid       p1           p2
+       0       1 b789fdd96dc2 000000000000 000000000000
+
+Create a merge commit with copying done during merge.
+
+  $ hg co 0
+  0 files updated, 0 files merged, 3 files removed, 0 files unresolved
+  $ hg cp a e
+  $ hg cp a f
+  $ hg ci -m 'copy a to e and f'
+  created new head
+  $ hg merge 3
+  3 files updated, 0 files merged, 0 files removed, 0 files unresolved
+  (branch merge, don't forget to commit)
+File 'a' exists on both sides, so 'g' could be recorded as being from p1 or p2, but we currently
+always record it as being from p1
+  $ hg cp a g
+File 'd' exists only in p2, so 'h' should be from p2
+  $ hg cp d h
+File 'f' exists only in p1, so 'i' should be from p1
+  $ hg cp f i
+  $ hg ci -m 'merge'
+  $ hg changesetcopies
+  files: g h i
+  p1copies: g\x00a (esc)
+  i\x00f (esc)
+  p2copies: h\x00d (esc)
+  $ hg showcopies
+  a -> g
+  d -> h
+  f -> i
+
+Test writing to both changeset and filelog
+
+  $ hg cp a j
+  $ hg ci -m 'copy a to j' --config experimental.copies.write-to=compatibility
+  $ hg changesetcopies
+  files: j
+  p1copies: j\x00a (esc)
+  $ hg debugdata j 0
+  \x01 (esc)
+  copy: a
+  copyrev: b789fdd96dc2f3bd229c1dd8eedf0fc60e2b68e3
+  \x01 (esc)
+  a
+  $ hg showcopies
+  a -> j
+  $ hg showcopies --config experimental.copies.read-from=compatibility
+  a -> j
+  $ hg showcopies --config experimental.copies.read-from=filelog-only
+  a -> j
+
+Test writing only to filelog
+
+  $ hg cp a k
+  $ hg ci -m 'copy a to k' --config experimental.copies.write-to=filelog-only
+  $ hg changesetcopies
+  files: k
+  $ hg debugdata k 0
+  \x01 (esc)
+  copy: a
+  copyrev: b789fdd96dc2f3bd229c1dd8eedf0fc60e2b68e3
+  \x01 (esc)
+  a
+  $ hg showcopies
+  $ hg showcopies --config experimental.copies.read-from=compatibility
+  a -> k
+  $ hg showcopies --config experimental.copies.read-from=filelog-only
+  a -> k
+
+  $ cd ..
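
With ``experimental.copies.write-to=changeset-only``, the ``changesetcopies``
alias above prints copy metadata stored in the changeset extras as
newline-separated ``dest\x00source`` pairs. Decoding the format is
straightforward (a sketch, not Mercurial's own reader)::

    def decode_copies(extra_value):
        copies = {}
        for pair in extra_value.split(b'\n'):
            dest, _, source = pair.partition(b'\x00')
            copies[dest] = source
        return copies

    # decode_copies(b'b\x00a\nc\x00a\nd\x00a')
    # -> {b'b': b'a', b'c': b'a', b'd': b'a'}
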
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/tests/test-copies.t	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,648 @@
+#testcases filelog compatibility changeset
+
+  $ cat >> $HGRCPATH << EOF
+  > [extensions]
+  > rebase=
+  > [alias]
+  > l = log -G -T '{rev} {desc}\n{files}\n'
+  > EOF
+
+#if compatibility
+  $ cat >> $HGRCPATH << EOF
+  > [experimental]
+  > copies.read-from = compatibility
+  > EOF
+#endif
+
+#if changeset
+  $ cat >> $HGRCPATH << EOF
+  > [experimental]
+  > copies.read-from = changeset-only
+  > copies.write-to = changeset-only
+  > EOF
+#endif
+
+  $ REPONUM=0
+  $ newrepo() {
+  >     cd $TESTTMP
+  >     REPONUM=`expr $REPONUM + 1`
+  >     hg init repo-$REPONUM
+  >     cd repo-$REPONUM
+  > }
+
+Simple rename case
+  $ newrepo
+  $ echo x > x
+  $ hg ci -Aqm 'add x'
+  $ hg mv x y
+  $ hg debugp1copies
+  x -> y
+  $ hg debugp2copies
+  $ hg ci -m 'rename x to y'
+  $ hg l
+  @  1 rename x to y
+  |  x y
+  o  0 add x
+     x
+  $ hg debugp1copies -r 1
+  x -> y
+  $ hg debugpathcopies 0 1
+  x -> y
+  $ hg debugpathcopies 1 0
+  y -> x
+Test filtering copies by path. We do filtering by destination.
+  $ hg debugpathcopies 0 1 x
+  $ hg debugpathcopies 1 0 x
+  y -> x
+  $ hg debugpathcopies 0 1 y
+  x -> y
+  $ hg debugpathcopies 1 0 y
+
+Copy a file onto another file
+  $ newrepo
+  $ echo x > x
+  $ echo y > y
+  $ hg ci -Aqm 'add x and y'
+  $ hg cp -f x y
+  $ hg debugp1copies
+  x -> y
+  $ hg debugp2copies
+  $ hg ci -m 'copy x onto y'
+  $ hg l
+  @  1 copy x onto y
+  |  y
+  o  0 add x and y
+     x y
+  $ hg debugp1copies -r 1
+  x -> y
+Incorrectly doesn't show the rename
+  $ hg debugpathcopies 0 1
+
+Copy a file onto another file with the same content. If metadata is stored in the changeset, this
+does not produce a new filelog entry. The changeset's "files" entry should still list the file.
+  $ newrepo
+  $ echo x > x
+  $ echo x > x2
+  $ hg ci -Aqm 'add x and x2 with same content'
+  $ hg cp -f x x2
+  $ hg ci -m 'copy x onto x2'
+  $ hg l
+  @  1 copy x onto x2
+  |  x2
+  o  0 add x and x2 with same content
+     x x2
+  $ hg debugp1copies -r 1
+  x -> x2
+Incorrectly doesn't show the rename
+  $ hg debugpathcopies 0 1
+
+Copy a file, then delete destination, then copy again. This does not create a new filelog entry.
+  $ newrepo
+  $ echo x > x
+  $ hg ci -Aqm 'add x'
+  $ hg cp x y
+  $ hg ci -m 'copy x to y'
+  $ hg rm y
+  $ hg ci -m 'remove y'
+  $ hg cp -f x y
+  $ hg ci -m 'copy x onto y (again)'
+  $ hg l
+  @  3 copy x onto y (again)
+  |  y
+  o  2 remove y
+  |  y
+  o  1 copy x to y
+  |  y
+  o  0 add x
+     x
+  $ hg debugp1copies -r 3
+  x -> y
+  $ hg debugpathcopies 0 3
+  x -> y
+
+Rename file in a loop: x->y->z->x
+  $ newrepo
+  $ echo x > x
+  $ hg ci -Aqm 'add x'
+  $ hg mv x y
+  $ hg debugp1copies
+  x -> y
+  $ hg debugp2copies
+  $ hg ci -m 'rename x to y'
+  $ hg mv y z
+  $ hg ci -m 'rename y to z'
+  $ hg mv z x
+  $ hg ci -m 'rename z to x'
+  $ hg l
+  @  3 rename z to x
+  |  x z
+  o  2 rename y to z
+  |  y z
+  o  1 rename x to y
+  |  x y
+  o  0 add x
+     x
+  $ hg debugpathcopies 0 3
+
+Copy x to y, then remove y, then add back y. With copy metadata in the changeset, this could easily
+end up reporting y as copied from x (if we don't unmark it as a copy when it's removed).
+  $ newrepo
+  $ echo x > x
+  $ hg ci -Aqm 'add x'
+  $ hg mv x y
+  $ hg ci -m 'rename x to y'
+  $ hg rm y
+  $ hg ci -qm 'remove y'
+  $ echo x > y
+  $ hg ci -Aqm 'add back y'
+  $ hg l
+  @  3 add back y
+  |  y
+  o  2 remove y
+  |  y
+  o  1 rename x to y
+  |  x y
+  o  0 add x
+     x
+  $ hg debugp1copies -r 3
+  $ hg debugpathcopies 0 3
+
+Copy x to z, then remove z, then copy x2 (same content as x) to z. With copy metadata in the
+changeset, the two copies here will have the same filelog entry, so ctx['z'].introrev() might point
+to the first commit that added the file. We should still report the copy as being from x2.
+  $ newrepo
+  $ echo x > x
+  $ echo x > x2
+  $ hg ci -Aqm 'add x and x2 with same content'
+  $ hg cp x z
+  $ hg ci -qm 'copy x to z'
+  $ hg rm z
+  $ hg ci -m 'remove z'
+  $ hg cp x2 z
+  $ hg ci -m 'copy x2 to z'
+  $ hg l
+  @  3 copy x2 to z
+  |  z
+  o  2 remove z
+  |  z
+  o  1 copy x to z
+  |  z
+  o  0 add x and x2 with same content
+     x x2
+  $ hg debugp1copies -r 3
+  x2 -> z
+  $ hg debugpathcopies 0 3
+  x2 -> z
+
+Create x and y, then rename them both to the same name, but on different sides of a fork
+  $ newrepo
+  $ echo x > x
+  $ echo y > y
+  $ hg ci -Aqm 'add x and y'
+  $ hg mv x z
+  $ hg ci -qm 'rename x to z'
+  $ hg co -q 0
+  $ hg mv y z
+  $ hg ci -qm 'rename y to z'
+  $ hg l
+  @  2 rename y to z
+  |  y z
+  | o  1 rename x to z
+  |/   x z
+  o  0 add x and y
+     x y
+  $ hg debugpathcopies 1 2
+  z -> x
+  y -> z
+
+Fork renames x to y on one side and removes x on the other
+  $ newrepo
+  $ echo x > x
+  $ hg ci -Aqm 'add x'
+  $ hg mv x y
+  $ hg ci -m 'rename x to y'
+  $ hg co -q 0
+  $ hg rm x
+  $ hg ci -m 'remove x'
+  created new head
+  $ hg l
+  @  2 remove x
+  |  x
+  | o  1 rename x to y
+  |/   x y
+  o  0 add x
+     x
+  $ hg debugpathcopies 1 2
+
+Copies via null revision (there shouldn't be any)
+  $ newrepo
+  $ echo x > x
+  $ hg ci -Aqm 'add x'
+  $ hg cp x y
+  $ hg ci -m 'copy x to y'
+  $ hg co -q null
+  $ echo x > x
+  $ hg ci -Aqm 'add x (again)'
+  $ hg l
+  @  2 add x (again)
+     x
+  o  1 copy x to y
+  |  y
+  o  0 add x
+     x
+  $ hg debugpathcopies 1 2
+  $ hg debugpathcopies 2 1
+
+Merge rename from other branch
+  $ newrepo
+  $ echo x > x
+  $ hg ci -Aqm 'add x'
+  $ hg mv x y
+  $ hg ci -m 'rename x to y'
+  $ hg co -q 0
+  $ echo z > z
+  $ hg ci -Aqm 'add z'
+  $ hg merge -q 1
+  $ hg debugp1copies
+  $ hg debugp2copies
+  $ hg ci -m 'merge rename from p2'
+  $ hg l
+  @    3 merge rename from p2
+  |\   x
+  | o  2 add z
+  | |  z
+  o |  1 rename x to y
+  |/   x y
+  o  0 add x
+     x
+Perhaps we should indicate the rename here, but `hg status` is documented to be weird during
+merges, so...
+  $ hg debugp1copies -r 3
+  $ hg debugp2copies -r 3
+  $ hg debugpathcopies 0 3
+  x -> y
+  $ hg debugpathcopies 1 2
+  y -> x
+  $ hg debugpathcopies 1 3
+  $ hg debugpathcopies 2 3
+  x -> y
+
+Copy file from either side in a merge
+  $ newrepo
+  $ echo x > x
+  $ hg ci -Aqm 'add x'
+  $ hg co -q null
+  $ echo y > y
+  $ hg ci -Aqm 'add y'
+  $ hg merge -q 0
+  $ hg cp y z
+  $ hg debugp1copies
+  y -> z
+  $ hg debugp2copies
+  $ hg ci -m 'copy file from p1 in merge'
+  $ hg co -q 1
+  $ hg merge -q 0
+  $ hg cp x z
+  $ hg debugp1copies
+  $ hg debugp2copies
+  x -> z
+  $ hg ci -qm 'copy file from p2 in merge'
+  $ hg l
+  @    3 copy file from p2 in merge
+  |\   z
+  +---o  2 copy file from p1 in merge
+  | |/   z
+  | o  1 add y
+  |    y
+  o  0 add x
+     x
+  $ hg debugp1copies -r 2
+  y -> z
+  $ hg debugp2copies -r 2
+  $ hg debugpathcopies 1 2
+  y -> z
+  $ hg debugpathcopies 0 2
+  $ hg debugp1copies -r 3
+  $ hg debugp2copies -r 3
+  x -> z
+  $ hg debugpathcopies 1 3
+  $ hg debugpathcopies 0 3
+  x -> z
+
+Copy file that exists on both sides of the merge, same content on both sides
+  $ newrepo
+  $ echo x > x
+  $ hg ci -Aqm 'add x on branch 1'
+  $ hg co -q null
+  $ echo x > x
+  $ hg ci -Aqm 'add x on branch 2'
+  $ hg merge -q 0
+  $ hg cp x z
+  $ hg debugp1copies
+  x -> z
+  $ hg debugp2copies
+  $ hg ci -qm 'merge'
+  $ hg l
+  @    2 merge
+  |\   z
+  | o  1 add x on branch 2
+  |    x
+  o  0 add x on branch 1
+     x
+  $ hg debugp1copies -r 2
+  x -> z
+  $ hg debugp2copies -r 2
+It's a little weird that it shows up on both sides
+  $ hg debugpathcopies 1 2
+  x -> z
+  $ hg debugpathcopies 0 2
+  x -> z (filelog !)
+
+Copy file that exists on both sides of the merge, different content
+  $ newrepo
+  $ echo branch1 > x
+  $ hg ci -Aqm 'add x on branch 1'
+  $ hg co -q null
+  $ echo branch2 > x
+  $ hg ci -Aqm 'add x on branch 2'
+  $ hg merge -q 0
+  warning: conflicts while merging x! (edit, then use 'hg resolve --mark')
+  [1]
+  $ echo resolved > x
+  $ hg resolve -m x
+  (no more unresolved files)
+  $ hg cp x z
+  $ hg debugp1copies
+  x -> z
+  $ hg debugp2copies
+  $ hg ci -qm 'merge'
+  $ hg l
+  @    2 merge
+  |\   x z
+  | o  1 add x on branch 2
+  |    x
+  o  0 add x on branch 1
+     x
+  $ hg debugp1copies -r 2
+  x -> z (changeset !)
+  $ hg debugp2copies -r 2
+  x -> z (no-changeset !)
+  $ hg debugpathcopies 1 2
+  x -> z (changeset !)
+  $ hg debugpathcopies 0 2
+  x -> z (no-changeset !)
+
+Copy x->y on one side of merge and copy x->z on the other side. Pathcopies from one parent
+of the merge to the merge should include the copy from the other side.
+  $ newrepo
+  $ echo x > x
+  $ hg ci -Aqm 'add x'
+  $ hg cp x y
+  $ hg ci -qm 'copy x to y'
+  $ hg co -q 0
+  $ hg cp x z
+  $ hg ci -qm 'copy x to z'
+  $ hg merge -q 1
+  $ hg ci -m 'merge copy x->y and copy x->z'
+  $ hg l
+  @    3 merge copy x->y and copy x->z
+  |\
+  | o  2 copy x to z
+  | |  z
+  o |  1 copy x to y
+  |/   y
+  o  0 add x
+     x
+  $ hg debugp1copies -r 3
+  $ hg debugp2copies -r 3
+  $ hg debugpathcopies 2 3
+  x -> y
+  $ hg debugpathcopies 1 3
+  x -> z
+
+Copy x to y on one side of merge, create y and rename to z on the other side. Pathcopies from the
+first side should not include the y->z rename since y didn't exist in the merge base.
+  $ newrepo
+  $ echo x > x
+  $ hg ci -Aqm 'add x'
+  $ hg cp x y
+  $ hg ci -qm 'copy x to y'
+  $ hg co -q 0
+  $ echo y > y
+  $ hg ci -Aqm 'add y'
+  $ hg mv y z
+  $ hg ci -m 'rename y to z'
+  $ hg merge -q 1
+  $ hg ci -m 'merge'
+  $ hg l
+  @    4 merge
+  |\
+  | o  3 rename y to z
+  | |  y z
+  | o  2 add y
+  | |  y
+  o |  1 copy x to y
+  |/   y
+  o  0 add x
+     x
+  $ hg debugp1copies -r 3
+  y -> z
+  $ hg debugp2copies -r 3
+  $ hg debugpathcopies 2 3
+  y -> z
+  $ hg debugpathcopies 1 3
+
+Create x and y, then rename x to z on one side of merge, and rename y to z and modify z on the
+other side.
+  $ newrepo
+  $ echo x > x
+  $ echo y > y
+  $ hg ci -Aqm 'add x and y'
+  $ hg mv x z
+  $ hg ci -qm 'rename x to z'
+  $ hg co -q 0
+  $ hg mv y z
+  $ hg ci -qm 'rename y to z'
+  $ echo z >> z
+  $ hg ci -m 'modify z'
+  $ hg merge -q 1
+  warning: conflicts while merging z! (edit, then use 'hg resolve --mark')
+  [1]
+  $ echo z > z
+  $ hg resolve -qm z
+  $ hg ci -m 'merge 1 into 3'
+Try merging the other direction too
+  $ hg co -q 1
+  $ hg merge -q 3
+  warning: conflicts while merging z! (edit, then use 'hg resolve --mark')
+  [1]
+  $ echo z > z
+  $ hg resolve -qm z
+  $ hg ci -m 'merge 3 into 1'
+  created new head
+  $ hg l
+  @    5 merge 3 into 1
+  |\   y z
+  +---o  4 merge 1 into 3
+  | |/   x z
+  | o  3 modify z
+  | |  z
+  | o  2 rename y to z
+  | |  y z
+  o |  1 rename x to z
+  |/   x z
+  o  0 add x and y
+     x y
+  $ hg debugpathcopies 1 4
+  $ hg debugpathcopies 2 4
+  $ hg debugpathcopies 0 4
+  x -> z (filelog !)
+  y -> z (compatibility !)
+  $ hg debugpathcopies 1 5
+  $ hg debugpathcopies 2 5
+  $ hg debugpathcopies 0 5
+  x -> z
+
+
+Test for a case in the fullcopytracing algorithm where both of the merging csets are
+"dirty"; here a dirty cset means one that is a descendant of the merge base. This
+test shows that for this particular case the algorithm correctly finds the copies:
+
+  $ cat >> $HGRCPATH << EOF
+  > [experimental]
+  > evolution.createmarkers=True
+  > evolution.allowunstable=True
+  > EOF
+
+  $ newrepo
+  $ echo a > a
+  $ hg add a
+  $ hg ci -m "added a"
+  $ echo b > b
+  $ hg add b
+  $ hg ci -m "added b"
+
+  $ hg mv b b1
+  $ hg ci -m "rename b to b1"
+
+  $ hg up ".^"
+  1 files updated, 0 files merged, 1 files removed, 0 files unresolved
+  $ echo d > d
+  $ hg add d
+  $ hg ci -m "added d"
+  created new head
+
+  $ echo baba >> b
+  $ hg ci --amend -m "added d, modified b"
+
+  $ hg l --hidden
+  @  4 added d, modified b
+  |  b d
+  | x  3 added d
+  |/   d
+  | o  2 rename b to b1
+  |/   b b1
+  o  1 added b
+  |  b
+  o  0 added a
+     a
+
+Grafting revision 4 on top of revision 2, showing that it respects the rename:
+
+TODO: Make this work with copy info in changesets (probably by writing a
+changeset-centric version of copies.mergecopies())
+#if no-changeset
+  $ hg up 2 -q
+  $ hg graft -r 4 --base 3 --hidden
+  grafting 4:af28412ec03c "added d, modified b" (tip)
+  merging b1 and b to b1
+
+  $ hg l -l1 -p
+  @  5 added d, modified b
+  |  b1
+  ~  diff -r 5a4825cc2926 -r 94a2f1a0e8e2 b1
+     --- a/b1	Thu Jan 01 00:00:00 1970 +0000
+     +++ b/b1	Thu Jan 01 00:00:00 1970 +0000
+     @@ -1,1 +1,2 @@
+      b
+     +baba
+  
+#endif
+
+Test to make sure that the fullcopytracing algorithm doesn't fail when both the merging csets
+are dirty (a dirty cset is one that is not a descendant of the merge base)
+-------------------------------------------------------------------------------------------------
+
+  $ newrepo
+  $ echo a > a
+  $ hg add a
+  $ hg ci -m "added a"
+  $ echo b > b
+  $ hg add b
+  $ hg ci -m "added b"
+
+  $ echo foobar > willconflict
+  $ hg add willconflict
+  $ hg ci -m "added willconflict"
+  $ echo c > c
+  $ hg add c
+  $ hg ci -m "added c"
+
+  $ hg l
+  @  3 added c
+  |  c
+  o  2 added willconflict
+  |  willconflict
+  o  1 added b
+  |  b
+  o  0 added a
+     a
+
+  $ hg up ".^^"
+  0 files updated, 0 files merged, 2 files removed, 0 files unresolved
+  $ echo d > d
+  $ hg add d
+  $ hg ci -m "added d"
+  created new head
+
+  $ echo barfoo > willconflict
+  $ hg add willconflict
+  $ hg ci --amend -m "added willconflict and d"
+
+  $ hg l
+  @  5 added willconflict and d
+  |  d willconflict
+  | o  3 added c
+  | |  c
+  | o  2 added willconflict
+  |/   willconflict
+  o  1 added b
+  |  b
+  o  0 added a
+     a
+
+  $ hg rebase -r . -d 2 -t :other
+  rebasing 5:5018b1509e94 "added willconflict and d" (tip)
+
+  $ hg up 3 -q
+  $ hg l --hidden
+  o  6 added willconflict and d
+  |  d willconflict
+  | x  5 added willconflict and d
+  | |  d willconflict
+  | | x  4 added d
+  | |/   d
+  +---@  3 added c
+  | |    c
+  o |  2 added willconflict
+  |/   willconflict
+  o  1 added b
+  |  b
+  o  0 added a
+     a
+
+Now if we trigger a merge between revisions 3 and 6 using base revision 4, both of
+the merging csets will be dirty, as neither one is a descendant of the base revision:
+
+  $ hg graft -r 6 --base 4 --hidden -t :other
+  grafting 6:99802e4f1e46 "added willconflict and d" (tip)
--- a/tests/test-copy.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-copy.t	Wed Apr 17 13:41:18 2019 -0400
@@ -118,6 +118,23 @@
   [255]
   $ hg st -A
   ? foo
+respects ui.relative-paths
+  $ mkdir dir
+  $ cd dir
+  $ hg mv ../foo ../bar
+  ../foo: not copying - file is not managed
+  abort: no files to copy
+  [255]
+  $ hg mv ../foo ../bar --config ui.relative-paths=yes
+  ../foo: not copying - file is not managed
+  abort: no files to copy
+  [255]
+  $ hg mv ../foo ../bar --config ui.relative-paths=no
+  foo: not copying - file is not managed
+  abort: no files to copy
+  [255]
+  $ cd ..
+  $ rmdir dir
   $ hg add foo
 dry-run; print a warning that this is not a real copy; foo is added
   $ hg mv --dry-run foo bar
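
``ui.relative-paths`` decides whether file paths are printed relative to the
current directory (``../foo``) or to the repository root (``foo``), as
exercised above. The conversion itself is ordinary path arithmetic (a sketch,
not Mercurial's scmutil machinery)::

    import os

    def display_path(repo_relative_path, repo_root, cwd, relative):
        # relative=True mimics ui.relative-paths=yes
        if relative:
            absolute = os.path.join(repo_root, repo_relative_path)
            return os.path.relpath(absolute, cwd)
        return repo_relative_path
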
--- a/tests/test-debugcommands.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-debugcommands.t	Wed Apr 17 13:41:18 2019 -0400
@@ -541,9 +541,10 @@
   $ hg debugupdatecaches --debug
   updating the branch cache
   $ ls -r .hg/cache/*
+  .hg/cache/tags2-served
+  .hg/cache/tags2
   .hg/cache/rbc-revs-v1
   .hg/cache/rbc-names-v1
-  .hg/cache/manifestfulltextcache (reporevlogstore !)
   .hg/cache/branch2-served
 
 Test debugcolor
--- a/tests/test-demandimport.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-demandimport.py	Wed Apr 17 13:41:18 2019 -0400
@@ -6,12 +6,30 @@
 import os
 import subprocess
 import sys
+import types
+
+# Don't import pycompat because it has too many side-effects.
+ispy3 = sys.version_info[0] >= 3
 
 # Only run if demandimport is allowed
 if subprocess.call(['python', '%s/hghave' % os.environ['TESTDIR'],
                     'demandimport']):
     sys.exit(80)
 
+# We rely on assert, which gets optimized out.
+if sys.flags.optimize:
+    sys.exit(80)
+
+if ispy3:
+    from importlib.util import _LazyModule
+
+    try:
+        from importlib.util import _Module as moduletype
+    except ImportError:
+        moduletype = types.ModuleType
+else:
+    moduletype = types.ModuleType
+
 if os.name != 'nt':
     try:
         import distutils.msvc9compiler
@@ -36,76 +54,173 @@
 # this enable call should not actually enable demandimport!
 demandimport.enable()
 from mercurial import node
-print("node =", f(node))
+
+# We use assert instead of a unittest test case because having imports inside
+# functions changes behavior of the demand importer.
+if ispy3:
+    assert not isinstance(node, _LazyModule)
+else:
+    assert f(node) == "<module 'mercurial.node' from '?'>", f(node)
+
 # now enable it for real
 del os.environ['HGDEMANDIMPORT']
 demandimport.enable()
 
 # Test access to special attributes through demandmod proxy
+assert 'mercurial.error' not in sys.modules
 from mercurial import error as errorproxy
-print("errorproxy =", f(errorproxy))
-print("errorproxy.__doc__ = %r"
-      % (' '.join(errorproxy.__doc__.split()[:3]) + ' ...'))
-print("errorproxy.__name__ = %r" % errorproxy.__name__)
+
+if ispy3:
+    # unsure why this isn't lazy.
+    assert not isinstance(errorproxy, _LazyModule)
+    assert f(errorproxy) == "<module 'mercurial.error' from '?'>", f(errorproxy)
+else:
+    assert f(errorproxy) == "<unloaded module 'error'>", f(errorproxy)
+
+doc = ' '.join(errorproxy.__doc__.split()[:3])
+assert doc == 'Mercurial exceptions. This', doc
+assert errorproxy.__name__ == 'mercurial.error', errorproxy.__name__
+
 # __name__ must be accessible via __dict__ so the relative imports can be
 # resolved
-print("errorproxy.__dict__['__name__'] = %r" % errorproxy.__dict__['__name__'])
-print("errorproxy =", f(errorproxy))
+name = errorproxy.__dict__['__name__']
+assert name == 'mercurial.error', name
+
+if ispy3:
+    assert not isinstance(errorproxy, _LazyModule)
+    assert f(errorproxy) == "<module 'mercurial.error' from '?'>", f(errorproxy)
+else:
+    assert f(errorproxy) == "<proxied module 'error'>", f(errorproxy)
 
 import os
 
-print("os =", f(os))
-print("os.system =", f(os.system))
-print("os =", f(os))
+if ispy3:
+    assert not isinstance(os, _LazyModule)
+    assert f(os) == "<module 'os' from '?'>", f(os)
+else:
+    assert f(os) == "<unloaded module 'os'>", f(os)
 
+assert f(os.system) == '<built-in function system>', f(os.system)
+assert f(os) == "<module 'os' from '?'>", f(os)
+
+assert 'mercurial.utils.procutil' not in sys.modules
 from mercurial.utils import procutil
 
-print("procutil =", f(procutil))
-print("procutil.system =", f(procutil.system))
-print("procutil =", f(procutil))
-print("procutil.system =", f(procutil.system))
+if ispy3:
+    assert isinstance(procutil, _LazyModule)
+    assert f(procutil) == "<module 'mercurial.utils.procutil' from '?'>", f(
+        procutil
+    )
+else:
+    assert f(procutil) == "<unloaded module 'procutil'>", f(procutil)
+
+assert f(procutil.system) == '<function system at 0x?>', f(procutil.system)
+assert procutil.__class__ == moduletype, procutil.__class__
+assert f(procutil) == "<module 'mercurial.utils.procutil' from '?'>", f(
+    procutil
+)
+assert f(procutil.system) == '<function system at 0x?>', f(procutil.system)
 
+assert 'mercurial.hgweb' not in sys.modules
 from mercurial import hgweb
-print("hgweb =", f(hgweb))
-print("hgweb_mod =", f(hgweb.hgweb_mod))
-print("hgweb =", f(hgweb))
+
+if ispy3:
+    assert not isinstance(hgweb, _LazyModule)
+    assert f(hgweb) == "<module 'mercurial.hgweb' from '?'>", f(hgweb)
+    assert isinstance(hgweb.hgweb_mod, _LazyModule)
+    assert (
+        f(hgweb.hgweb_mod) == "<module 'mercurial.hgweb.hgweb_mod' from '?'>"
+    ), f(hgweb.hgweb_mod)
+else:
+    assert f(hgweb) == "<unloaded module 'hgweb'>", f(hgweb)
+    assert f(hgweb.hgweb_mod) == "<unloaded module 'hgweb_mod'>", f(
+        hgweb.hgweb_mod
+    )
+
+assert f(hgweb) == "<module 'mercurial.hgweb' from '?'>", f(hgweb)
 
 import re as fred
-print("fred =", f(fred))
+
+if ispy3:
+    assert not isinstance(fred, _LazyModule)
+    assert f(fred) == "<module 're' from '?'>"
+else:
+    assert f(fred) == "<unloaded module 're'>", f(fred)
 
 import re as remod
-print("remod =", f(remod))
+
+if ispy3:
+    assert not isinstance(remod, _LazyModule)
+    assert f(remod) == "<module 're' from '?'>"
+else:
+    assert f(remod) == "<unloaded module 're'>", f(remod)
 
 import sys as re
-print("re =", f(re))
+
+if ispy3:
+    assert not isinstance(re, _LazyModule)
+    assert f(re) == "<module 'sys' (built-in)>"
+else:
+    assert f(re) == "<unloaded module 'sys'>", f(re)
 
-print("fred =", f(fred))
-print("fred.sub =", f(fred.sub))
-print("fred =", f(fred))
+if ispy3:
+    assert not isinstance(fred, _LazyModule)
+    assert f(fred) == "<module 're' from '?'>", f(fred)
+else:
+    assert f(fred) == "<unloaded module 're'>", f(fred)
+
+assert f(fred.sub) == '<function sub at 0x?>', f(fred.sub)
+
+if ispy3:
+    assert not isinstance(fred, _LazyModule)
+    assert f(fred) == "<module 're' from '?'>", f(fred)
+else:
+    assert f(fred) == "<proxied module 're'>", f(fred)
 
 remod.escape  # use remod
-print("remod =", f(remod))
+assert f(remod) == "<module 're' from '?'>", f(remod)
 
-print("re =", f(re))
-print("re.stderr =", f(re.stderr))
-print("re =", f(re))
+if ispy3:
+    assert not isinstance(re, _LazyModule)
+    assert f(re) == "<module 'sys' (built-in)>"
+    assert f(type(re.stderr)) == "<class '_io.TextIOWrapper'>", f(
+        type(re.stderr)
+    )
+    assert f(re) == "<module 'sys' (built-in)>"
+else:
+    assert f(re) == "<unloaded module 'sys'>", f(re)
+    assert f(re.stderr) == "<open file '<whatever>', mode 'w' at 0x?>", f(
+        re.stderr
+    )
+    assert f(re) == "<proxied module 'sys'>", f(re)
 
-import contextlib
-print("contextlib =", f(contextlib))
+assert 'telnetlib' not in sys.modules
+import telnetlib
+
+if ispy3:
+    assert not isinstance(telnetlib, _LazyModule)
+    assert f(telnetlib) == "<module 'telnetlib' from '?'>"
+else:
+    assert f(telnetlib) == "<unloaded module 'telnetlib'>", f(telnetlib)
+
 try:
-    from contextlib import unknownattr
-    print('no demandmod should be created for attribute of non-package '
-          'module:\ncontextlib.unknownattr =', f(unknownattr))
+    from telnetlib import unknownattr
+
+    assert False, (
+        'no demandmod should be created for attribute of non-package '
+        'module:\ntelnetlib.unknownattr = %s' % f(unknownattr)
+    )
 except ImportError as inst:
-    print('contextlib.unknownattr = ImportError: %s'
-          % rsub(r"'", '', str(inst)))
+    assert rsub(r"'", '', str(inst)).startswith(
+        'cannot import name unknownattr'
+    )
 
 from mercurial import util
 
 # Unlike the import statement, __import__() function should not raise
 # ImportError even if fromlist has an unknown item
 # (see Python/import.c:import_module_level() and ensure_fromlist())
-contextlibimp = __import__('contextlib', globals(), locals(), ['unknownattr'])
-print("__import__('contextlib', ..., ['unknownattr']) =", f(contextlibimp))
-print("hasattr(contextlibimp, 'unknownattr') =",
-      util.safehasattr(contextlibimp, 'unknownattr'))
+assert 'zipfile' not in sys.modules
+zipfileimp = __import__('zipfile', globals(), locals(), ['unknownattr'])
+assert f(zipfileimp) == "<module 'zipfile' from '?'>", f(zipfileimp)
+assert not util.safehasattr(zipfileimp, 'unknownattr')
--- a/tests/test-demandimport.py.out	Tue Mar 19 09:23:35 2019 -0400
+++ /dev/null	Thu Jan 01 00:00:00 1970 +0000
@@ -1,30 +0,0 @@
-node = <module 'mercurial.node' from '?'>
-errorproxy = <unloaded module 'error'>
-errorproxy.__doc__ = 'Mercurial exceptions. This ...'
-errorproxy.__name__ = 'mercurial.error'
-errorproxy.__dict__['__name__'] = 'mercurial.error'
-errorproxy = <proxied module 'error'>
-os = <unloaded module 'os'>
-os.system = <built-in function system>
-os = <module 'os' from '?'>
-procutil = <unloaded module 'procutil'>
-procutil.system = <function system at 0x?>
-procutil = <module 'mercurial.utils.procutil' from '?'>
-procutil.system = <function system at 0x?>
-hgweb = <unloaded module 'hgweb'>
-hgweb_mod = <unloaded module 'hgweb_mod'>
-hgweb = <module 'mercurial.hgweb' from '?'>
-fred = <unloaded module 're'>
-remod = <unloaded module 're'>
-re = <unloaded module 'sys'>
-fred = <unloaded module 're'>
-fred.sub = <function sub at 0x?>
-fred = <proxied module 're'>
-remod = <module 're' from '?'>
-re = <unloaded module 'sys'>
-re.stderr = <open file '<whatever>', mode 'w' at 0x?>
-re = <proxied module 'sys'>
-contextlib = <unloaded module 'contextlib'>
-contextlib.unknownattr = ImportError: cannot import name unknownattr
-__import__('contextlib', ..., ['unknownattr']) = <module 'contextlib' from '?'>
-hasattr(contextlibimp, 'unknownattr') = False
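
The rewritten test above asserts against ``importlib.util._LazyModule`` on
Python 3. The public route to that lazy behaviour is
``importlib.util.LazyLoader``; a minimal recipe following the stdlib
documentation (independent of hgdemandimport)::

    import importlib.util
    import sys

    def lazy_import(name):
        # find_spec resolves the module eagerly, so a missing module
        # fails here; the module body itself only executes on first
        # attribute access.
        spec = importlib.util.find_spec(name)
        loader = importlib.util.LazyLoader(spec.loader)
        spec.loader = loader
        module = importlib.util.module_from_spec(spec)
        sys.modules[name] = module
        loader.exec_module(module)
        return module
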
--- a/tests/test-diff-color.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-diff-color.t	Wed Apr 17 13:41:18 2019 -0400
@@ -157,14 +157,11 @@
   $ chmod +x a
   $ hg record -m moda a <<EOF
   > y
-  > y
   > EOF
   \x1b[0;1mdiff --git a/a b/a\x1b[0m (esc)
   \x1b[0;36;1mold mode 100644\x1b[0m (esc)
   \x1b[0;36;1mnew mode 100755\x1b[0m (esc)
   1 hunks, 1 lines changed
-  \x1b[0;33mexamine changes to 'a'? [Ynesfdaq?]\x1b[0m y (esc)
-  
   \x1b[0;35m@@ -2,7 +2,7 @@ c\x1b[0m (esc)
    c
    a
--- a/tests/test-diff-hashes.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-diff-hashes.t	Wed Apr 17 13:41:18 2019 -0400
@@ -13,6 +13,7 @@
   $ hg ci -m 'change foo'
 
   $ hg --quiet diff -r 0 -r 1
+  diff -r a99fb63adac3 -r 9b8568d3af2f foo
   --- a/foo	Thu Jan 01 00:00:00 1970 +0000
   +++ b/foo	Thu Jan 01 00:00:00 1970 +0000
   @@ -1,1 +1,1 @@
--- a/tests/test-diffstat.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-diffstat.t	Wed Apr 17 13:41:18 2019 -0400
@@ -146,10 +146,21 @@
   $ hg diff --stat .
    dir1/new |  1 +
    1 files changed, 1 insertions(+), 0 deletions(-)
+  $ hg diff --stat . --config ui.relative-paths=yes
+   new |  1 +
+   1 files changed, 1 insertions(+), 0 deletions(-)
   $ hg diff --stat --root .
    new |  1 +
    1 files changed, 1 insertions(+), 0 deletions(-)
 
+  $ hg diff --stat --root . --config ui.relative-paths=yes
+   new |  1 +
+   1 files changed, 1 insertions(+), 0 deletions(-)
+--root trumps ui.relative-paths
+  $ hg diff --stat --root .. --config ui.relative-paths=yes
+   new         |  1 +
+   ../dir2/new |  1 +
+   2 files changed, 2 insertions(+), 0 deletions(-)
   $ hg diff --stat --root ../dir1 ../dir2
   warning: ../dir2 not inside relative root .
 
@@ -236,3 +247,48 @@
   $ hg diff --root . --stat
    file |  2 +-
    1 files changed, 1 insertions(+), 1 deletions(-)
+
+When a file is renamed, --git shouldn't lose the info about the old file
+  $ hg init issue6025
+  $ cd issue6025
+  $ echo > a
+  $ hg ci -Am 'add a'
+  adding a
+  $ hg mv a b
+  $ hg diff --git
+  diff --git a/a b/b
+  rename from a
+  rename to b
+  $ hg diff --stat
+   a |  1 -
+   b |  1 +
+   2 files changed, 1 insertions(+), 1 deletions(-)
+  $ hg diff --stat --git
+   a => b |  0 
+   1 files changed, 0 insertions(+), 0 deletions(-)
+-- filename may contain whitespace
+  $ echo > c
+  $ hg ci -Am 'add c'
+  adding c
+  $ hg mv c 'new c'
+  $ hg diff --git
+  diff --git a/c b/new c
+  rename from c
+  rename to new c
+  $ hg diff --stat
+   c     |  1 -
+   new c |  1 +
+   2 files changed, 1 insertions(+), 1 deletions(-)
+  $ hg diff --stat --git
+   c => new c |  0 
+   1 files changed, 0 insertions(+), 0 deletions(-)
+
+Make sure `diff --stat -q --config diff.git=0` shows stat (issue4037)
+
+  $ hg status
+  A new c
+  R c
+  $ hg diff --stat -q
+   c     |  1 -
+   new c |  1 +
+   2 files changed, 1 insertions(+), 1 deletions(-)
--- a/tests/test-dispatch.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-dispatch.t	Wed Apr 17 13:41:18 2019 -0400
@@ -188,7 +188,8 @@
 specified" should include filename even when it is empty
 
   $ hg -R a archive ''
-  abort: *: '' (glob)
+  abort: $ENOENT$: '' (no-windows !)
+  abort: $ENOTDIR$: '' (windows !)
   [255]
 
 #if no-outer-repo
--- a/tests/test-doctest.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-doctest.py	Wed Apr 17 13:41:18 2019 -0400
@@ -62,6 +62,7 @@
 testmod('mercurial.pycompat')
 testmod('mercurial.revlog')
 testmod('mercurial.revlogutils.deltas')
+testmod('mercurial.revset')
 testmod('mercurial.revsetlang')
 testmod('mercurial.smartset')
 testmod('mercurial.store')
--- a/tests/test-duplicateoptions.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-duplicateoptions.py	Wed Apr 17 13:41:18 2019 -0400
@@ -41,8 +41,8 @@
     seenshort = globalshort.copy()
     seenlong = globallong.copy()
     for option in entry[1]:
-        if (option[0] and option[0] in seenshort) or \
-           (option[1] and option[1] in seenlong):
+        if ((option[0] and option[0] in seenshort) or
+            (option[1] and option[1] in seenlong)):
             print("command '" + cmd + "' has duplicate option " + str(option))
         seenshort.add(option[0])
         seenlong.add(option[1])
--- a/tests/test-encoding-align.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-encoding-align.t	Wed Apr 17 13:41:18 2019 -0400
@@ -5,6 +5,7 @@
   $ hg init t
   $ cd t
   $ "$PYTHON" << EOF
+  > from mercurial import pycompat
   > # (byte, width) = (6, 4)
   > s = b"\xe7\x9f\xad\xe5\x90\x8d"
   > # (byte, width) = (7, 7): odd width is good for alignment test
@@ -21,14 +22,17 @@
   > command = registrar.command(cmdtable)
   > 
   > @command(b'showoptlist',
-  >     [('s', 'opt1', '', 'short width'  + ' %(s)s' * 8, '%(s)s'),
-  >     ('m', 'opt2', '', 'middle width' + ' %(m)s' * 8, '%(m)s'),
-  >     ('l', 'opt3', '', 'long width'   + ' %(l)s' * 8, '%(l)s')],
-  >     '')
+  >     [(b's', b'opt1', b'', b'short width'  + (b' ' + %(s)s) * 8, %(s)s),
+  >     (b'm', b'opt2', b'', b'middle width' + (b' ' + %(m)s) * 8, %(m)s),
+  >     (b'l', b'opt3', b'', b'long width'   + (b' ' + %(l)s) * 8, %(l)s)],
+  >     b'')
   > def showoptlist(ui, repo, *pats, **opts):
   >     '''dummy command to show option descriptions'''
   >     return 0
-  > """ % globals())
+  > """ % {b's': pycompat.byterepr(s),
+  >        b'm': pycompat.byterepr(m),
+  >        b'l': pycompat.byterepr(l),
+  >       })
   > f.close()
   > EOF
   $ S=`cat s`
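
The alignment checked above depends on the terminal column width of wide
characters, not their byte or code-point count. The relevant classification
lives in the standard library (a sketch of the idea, not Mercurial's
``encoding.ucolwidth``)::

    import unicodedata

    def colwidth(text):
        # wide ('W') and fullwidth ('F') characters take two columns
        return sum(2 if unicodedata.east_asian_width(ch) in 'WF' else 1
                   for ch in text)
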
--- a/tests/test-extdiff.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-extdiff.t	Wed Apr 17 13:41:18 2019 -0400
@@ -22,6 +22,10 @@
   > opts.falabala = diffing
   > cmd.edspace = echo
   > opts.edspace = "name  <user@example.com>"
+  > alabalaf =
+  > [merge-tools]
+  > alabalaf.executable = echo
+  > alabalaf.diffargs = diffing
   > EOF
 
   $ hg falabala
@@ -48,6 +52,8 @@
    -o --option OPT [+]      pass option to comparison program
    -r --rev REV [+]         revision
    -c --change REV          change made by revision
+      --per-file            compare each file instead of revision snapshots
+      --confirm             prompt user before each external program invocation
       --patch               compare patches for two revisions
    -I --include PATTERN [+] include names matching the given patterns
    -X --exclude PATTERN [+] exclude names matching the given patterns
@@ -128,6 +134,72 @@
   diffing a.398e36faf9c6 a.5ab95fb166c4
   [1]
 
+Test --per-file option:
+
+  $ hg up -q -C 3
+  $ echo a2 > a
+  $ echo b2 > b
+  $ hg ci -d '3 0' -mtestmode1
+  created new head
+  $ hg falabala -c 6 --per-file
+  diffing "*\\extdiff.*\\a.46c0e4daeb72\\a" "a.81906f2b98ac\\a" (glob) (windows !)
+  diffing */extdiff.*/a.46c0e4daeb72/a a.81906f2b98ac/a (glob) (no-windows !)
+  diffing "*\\extdiff.*\\a.46c0e4daeb72\\b" "a.81906f2b98ac\\b" (glob) (windows !)
+  diffing */extdiff.*/a.46c0e4daeb72/b a.81906f2b98ac/b (glob) (no-windows !)
+  [1]
+
+Test --per-file option for gui tool:
+
+  $ hg --config extdiff.gui.alabalaf=True alabalaf -c 6 --per-file --debug
+  diffing */extdiff.*/a.46c0e4daeb72/* a.81906f2b98ac/* (glob)
+  diffing */extdiff.*/a.46c0e4daeb72/* a.81906f2b98ac/* (glob)
+  making snapshot of 2 files from rev 46c0e4daeb72
+    a
+    b
+  making snapshot of 2 files from rev 81906f2b98ac
+    a
+    b
+  running '* diffing * *' in * (backgrounded) (glob)
+  running '* diffing * *' in * (backgrounded) (glob)
+  cleaning up temp directory
+  [1]
+
+Test --per-file option for gui tool again:
+
+  $ hg --config merge-tools.alabalaf.gui=True alabalaf -c 6 --per-file --debug
+  diffing */extdiff.*/a.46c0e4daeb72/* a.81906f2b98ac/* (glob)
+  diffing */extdiff.*/a.46c0e4daeb72/* a.81906f2b98ac/* (glob)
+  making snapshot of 2 files from rev 46c0e4daeb72
+    a
+    b
+  making snapshot of 2 files from rev 81906f2b98ac
+    a
+    b
+  running '* diffing * *' in * (backgrounded) (glob)
+  running '* diffing * *' in * (backgrounded) (glob)
+  cleaning up temp directory
+  [1]
+
+Test --per-file and --confirm options:
+
+  $ hg --config ui.interactive=True falabala -c 6 --per-file --confirm <<EOF
+  > n
+  > y
+  > EOF
+  diff a (1 of 2) [Yns?] n
+  diff b (2 of 2) [Yns?] y
+  diffing "*\\extdiff.*\\a.46c0e4daeb72\\b" "a.81906f2b98ac\\b" (glob) (windows !)
+  diffing */extdiff.*/a.46c0e4daeb72/b a.81906f2b98ac/b (glob) (no-windows !)
+  [1]
+
+Test --per-file and --confirm options with skipping:
+
+  $ hg --config ui.interactive=True falabala -c 6 --per-file --confirm <<EOF
+  > s
+  > EOF
+  diff a (1 of 2) [Yns?] s
+  [1]
+
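Annotation: a hedged sketch of the prompt loop exercised by the two --confirm
runs above (the names are illustrative, not the extdiff internals): the
default/"y" answer runs the tool for the current file, "n" skips just that
file, and "s" skips all remaining files.

  def confirmloop(prompt, files, runtool):
      for i, name in enumerate(files, 1):
          answer = prompt('diff %s (%d of %d) [Yns?]' % (name, i, len(files)))
          if answer == 's':   # skip every remaining file diff
              break
          if answer == 'n':   # skip only this file diff
              continue
          runtool(name)       # 'y' (or the default): invoke the program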
 issue4463: usage of command line configuration without additional quoting
 
   $ cat <<EOF >> $HGRCPATH
--- a/tests/test-extension.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-extension.t	Wed Apr 17 13:41:18 2019 -0400
@@ -610,7 +610,8 @@
   > cmdtable = {}
   > command = registrar.command(cmdtable)
   > 
-  > # demand import avoids failure of importing notexist here
+  > # demand import avoids failure of importing notexist here, but only on
+  > # Python 2.
   > import extlibroot.lsub1.lsub2.notexist
   > 
   > @command(b'checkrelativity', [], norepo=True)
@@ -622,7 +623,13 @@
   >         pass # intentional failure
   > NO_CHECK_EOF
 
-  $ (PYTHONPATH=${PYTHONPATH}${PATHSEP}${TESTTMP}; hg --config extensions.checkrelativity=$TESTTMP/checkrelativity.py checkrelativity)
+Python 3's lazy importer verifies modules exist before returning the lazy
+module stub. Our custom lazy importer for Python 2 always returns a stub.
+
+  $ (PYTHONPATH=${PYTHONPATH}${PATHSEP}${TESTTMP}; hg --config extensions.checkrelativity=$TESTTMP/checkrelativity.py checkrelativity) || true
+  *** failed to import extension checkrelativity from $TESTTMP/checkrelativity.py: No module named 'extlibroot.lsub1.lsub2.notexist' (py3 !)
+  hg: unknown command 'checkrelativity' (py3 !)
+  (use 'hg help' for a list of commands) (py3 !)
 
 #endif
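Annotation: a sketch of why the Python 3 side fails eagerly. The stdlib
lazy-loading recipe still resolves the module spec up front, so a missing
module is reported at import time even though execution of the module body is
deferred (this is the documented importlib recipe, not Mercurial's importer):

  import importlib.util
  import sys

  def lazyimport(name):
      spec = importlib.util.find_spec(name)  # fails here if 'name' is absent
      if spec is None:
          raise ModuleNotFoundError('No module named %r' % name)
      loader = importlib.util.LazyLoader(spec.loader)
      spec.loader = loader
      module = importlib.util.module_from_spec(spec)
      sys.modules[name] = module
      loader.exec_module(module)  # body runs on first attribute access
      return module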
 
@@ -633,7 +640,7 @@
 Make sure a broken uisetup doesn't globally break hg:
   $ cat > $TESTTMP/baduisetup.py <<EOF
   > def uisetup(ui):
-  >     1/0
+  >     1 / 0
   > EOF
 
 Even though the extension fails during uisetup, hg is still basically usable:
@@ -642,7 +649,7 @@
     File "*/mercurial/extensions.py", line *, in _runuisetup (glob)
       uisetup(ui)
     File "$TESTTMP/baduisetup.py", line 2, in uisetup
-      1/0
+      1 / 0
   ZeroDivisionError: * by zero (glob)
   *** failed to set up extension baduisetup: * by zero (glob)
   Mercurial Distributed SCM (version *) (glob)
@@ -681,13 +688,11 @@
   > @command(b'debugfoobar', [], b'hg debugfoobar')
   > def debugfoobar(ui, repo, *args, **opts):
   >     "yet another debug command"
-  >     pass
   > @command(b'foo', [], b'hg foo')
   > def foo(ui, repo, *args, **opts):
   >     """yet another foo command
   >     This command has been DEPRECATED since forever.
   >     """
-  >     pass
   > EOF
   $ debugpath=`pwd`/debugextension.py
   $ echo "debugextension = $debugpath" >> $HGRCPATH
@@ -805,15 +810,28 @@
       "-Npru".
   
       To select a different program, use the -p/--program option. The program
-      will be passed the names of two directories to compare. To pass additional
-      options to the program, use -o/--option. These will be passed before the
-      names of the directories to compare.
+      will be passed the names of two directories to compare, unless the --per-
+      file option is specified (see below). To pass additional options to the
+      program, use -o/--option. These will be passed before the names of the
+      directories or files to compare.
   
       When two revision arguments are given, then changes are shown between
       those revisions. If only one revision is specified then that revision is
       compared to the working directory, and, when no revisions are specified,
       the working directory files are compared to its parent.
   
+      The --per-file option runs the external program repeatedly on each file to
+      diff, instead of once on two directories. By default, this happens one by
+      one, where the next file diff is open in the external program only once
+      the previous external program (for the previous file diff) has exited. If
+      the external program has a graphical interface, it can open all the file
+      diffs at once instead of one by one. See 'hg help -e extdiff' for
+      information about how to tell Mercurial that a given program has a
+      graphical interface.
+  
+      The --confirm option will prompt the user before each invocation of the
+      external program. It is ignored if --per-file isn't specified.
+  
   (use 'hg help -e extdiff' to show help for the extdiff extension)
   
   options ([+] can be repeated):
@@ -822,6 +840,8 @@
    -o --option OPT [+]      pass option to comparison program
    -r --rev REV [+]         revision
    -c --change REV          change made by revision
+      --per-file            compare each file instead of revision snapshots
+      --confirm             prompt user before each external program invocation
       --patch               compare patches for two revisions
    -I --include PATTERN [+] include names matching the given patterns
    -X --exclude PATTERN [+] exclude names matching the given patterns
@@ -889,6 +909,20 @@
     [diff-tools]
     kdiff3.diffargs=--L1 '$plabel1' --L2 '$clabel' $parent $child
   
+  If a program has a graphical interface, it might be interesting to tell
+  Mercurial about it. It will prevent the program from being mistakenly used in
+  a terminal-only environment (such as an SSH terminal session), and will make
+  'hg extdiff --per-file' open multiple file diffs at once instead of one by one
+  (if you still want to open file diffs one by one, you can use the --confirm
+  option).
+  
+  Declaring that a tool has a graphical interface can be done with the "gui"
+  flag next to where "diffargs" are specified:
+  
+    [diff-tools]
+    kdiff3.diffargs=--L1 '$plabel1' --L2 '$clabel' $parent $child
+    kdiff3.gui = true
+  
   You can use -I/-X and list of file or directory names like normal 'hg diff'
   command. The extdiff extension makes snapshots of only needed files, so
   running the external diff program will actually be pretty fast (at least
@@ -928,7 +962,6 @@
   > @command(b'multirevs', [], b'ARG', norepo=True)
   > def multirevs(ui, repo, arg, *args, **opts):
   >     """multirevs command"""
-  >     pass
   > EOF
   $ echo "multirevs = multirevs.py" >> $HGRCPATH
 
--- a/tests/test-fastannotate-hg.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-fastannotate-hg.t	Wed Apr 17 13:41:18 2019 -0400
@@ -443,7 +443,7 @@
   > def reposetup(ui, repo):
   >     class legacyrepo(repo.__class__):
   >         def _filecommit(self, fctx, manifest1, manifest2,
-  >                         linkrev, tr, changelist):
+  >                         linkrev, tr, changelist, includecopymeta):
   >             fname = fctx.path()
   >             text = fctx.data()
   >             flog = self.file(fname)
@@ -593,7 +593,7 @@
   $ rm baz
   $ hg annotate -ncr "wdir()" baz
   abort: $TESTTMP/repo/baz: $ENOENT$ (windows !)
-  abort: $ENOENT$: $TESTTMP/repo/baz (no-windows !)
+  abort: $ENOENT$: '$TESTTMP/repo/baz' (no-windows !)
   [255]
 
 annotate removed file
@@ -601,7 +601,7 @@
   $ hg rm baz
   $ hg annotate -ncr "wdir()" baz
   abort: $TESTTMP/repo/baz: $ENOENT$ (windows !)
-  abort: $ENOENT$: $TESTTMP/repo/baz (no-windows !)
+  abort: $ENOENT$: '$TESTTMP/repo/baz' (no-windows !)
   [255]
 
 Test annotate with whitespace options
--- a/tests/test-fix.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-fix.t	Wed Apr 17 13:41:18 2019 -0400
@@ -354,6 +354,10 @@
 
   $ printf "modified!!!\n" > modified.whole
   $ printf "added\n" > added.whole
+
+Listing the files explicitly causes untracked files to also be fixed, but
+ignored files are still unaffected.
+
   $ hg fix --working-dir *.whole
 
   $ hg status --all
@@ -366,13 +370,12 @@
   I ignored.whole
   C .hgignore
 
-It would be better if this also fixed the unknown file.
   $ cat *.whole
   ADDED
   CLEAN
   ignored
   MODIFIED!!!
-  unknown
+  UNKNOWN
 
   $ cd ..
 
--- a/tests/test-flagprocessor.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-flagprocessor.t	Wed Apr 17 13:41:18 2019 -0400
@@ -209,11 +209,13 @@
       _insertflagprocessor(flag, processor, _flagprocessors)
     File "*/mercurial/revlog.py", line *, in _insertflagprocessor (glob)
       raise error.Abort(msg)
-  Abort: cannot register multiple processors on flag '0x8'.
+  mercurial.error.Abort: b"cannot register multiple processors on flag '0x8'." (py3 !)
+  Abort: cannot register multiple processors on flag '0x8'. (no-py3 !)
   *** failed to set up extension duplicate: cannot register multiple processors on flag '0x8'.
   $ hg st 2>&1 | egrep 'cannot register multiple processors|flagprocessorext'
     File "*/tests/flagprocessorext.py", line *, in extsetup (glob)
-  Abort: cannot register multiple processors on flag '0x8'.
+  mercurial.error.Abort: b"cannot register multiple processors on flag '0x8'." (py3 !)
+  Abort: cannot register multiple processors on flag '0x8'. (no-py3 !)
   *** failed to set up extension duplicate: cannot register multiple processors on flag '0x8'.
     File "*/tests/flagprocessorext.py", line *, in b64decode (glob)
 
--- a/tests/test-fncache.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-fncache.t	Wed Apr 17 13:41:18 2019 -0400
@@ -1,5 +1,19 @@
 #require repofncache
 
+An extension that sets the fncache chunk size to 1 byte, to make sure the
+chunking logic does not break
+
+  $ cat > chunksize.py <<EOF
+  > from __future__ import absolute_import
+  > from mercurial import store
+  > store.fncache_chunksize = 1
+  > EOF
+
+  $ cat >> $HGRCPATH <<EOF
+  > [extensions]
+  > chunksize = $TESTTMP/chunksize.py
+  > EOF
+
 Init repo1:
 
   $ hg init repo1
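Annotation: a rough sketch of the chunked write that the 1-byte setting
stresses (an assumed shape, not the actual store.py code): entries are
serialized once and emitted in fixed-size slices, so the slicing must be
correct for any chunk size without changing the bytes that reach disk.

  def writechunked(fp, entries, chunksize):
      data = b''.join(e + b'\n' for e in entries)
      for offset in range(0, len(data), chunksize):
          fp.write(data[offset:offset + chunksize])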
@@ -88,7 +102,6 @@
   .hg/00manifest.i
   .hg/cache
   .hg/cache/branch2-served
-  .hg/cache/manifestfulltextcache (reporevlogstore !)
   .hg/cache/rbc-names-v1
   .hg/cache/rbc-revs-v1
   .hg/data
@@ -111,6 +124,7 @@
   .hg/wcache/checkisexec (execbit !)
   .hg/wcache/checklink (symlink !)
   .hg/wcache/checklink-target (symlink !)
+  .hg/wcache/manifestfulltextcache (reporevlogstore !)
   $ cd ..
 
 Non fncache repo:
@@ -126,7 +140,6 @@
   .hg/00changelog.i
   .hg/cache
   .hg/cache/branch2-served
-  .hg/cache/manifestfulltextcache (reporevlogstore !)
   .hg/cache/rbc-names-v1
   .hg/cache/rbc-revs-v1
   .hg/dirstate
@@ -152,6 +165,7 @@
   .hg/wcache/checkisexec (execbit !)
   .hg/wcache/checklink (symlink !)
   .hg/wcache/checklink-target (symlink !)
+  .hg/wcache/manifestfulltextcache (reporevlogstore !)
   $ cd ..
 
 Encoding of reserved / long paths in the store
--- a/tests/test-generaldelta.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-generaldelta.t	Wed Apr 17 13:41:18 2019 -0400
@@ -339,7 +339,7 @@
        52       5        1       -1    base        369        640        369   0.57656       369         0    0.00000
        53       6        1       -1    base          0          0          0   0.00000         0         0    0.00000
        54       7        1       -1    base        369        640        369   0.57656       369         0    0.00000
-  $ hg clone --pull source-repo --config experimental.maxdeltachainspan=0 noconst-chain --config format.generaldelta=yes
+  $ hg clone --pull source-repo --config experimental.maxdeltachainspan=0 noconst-chain --config format.usegeneraldelta=yes --config storage.revlog.reuse-external-delta-parent=no
   requesting all changes
   adding changesets
   adding manifests
--- a/tests/test-graft.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-graft.t	Wed Apr 17 13:41:18 2019 -0400
@@ -927,7 +927,20 @@
 
 NOTE: This is affected by issue5343, and will need updating when it's fixed
 
-Possible cases during a regular graft (when ca is between cta and c2):
+Consider this topology for a regular graft:
+
+o c1
+|
+| o c2
+| |
+| o ca # stands for "common ancestor"
+|/
+o cta # stands for "common topological ancestor"
+
+Note that in issue5343, ca==cta.
+
+The following table shows the possible cases. Here, "x->y" and, equivalently,
+"y<-x", where x is an ancestor of y, means that some copy happened from x to y.
 
 name | c1<-cta | cta<->ca | ca->c2
 A.0  |         |          |
@@ -955,6 +968,8 @@
 
 A.4 has a degenerate case a<-b<-a->a, where checkcopies isn't needed at all.
 A.5 has a special case a<-b<-b->a, which is treated like a<-b->a in a merge.
+A.5 has issue5343 as a special case.
+TODO: add test coverage for A.5
 A.6 has a special case a<-a<-b->a. Here, checkcopies will find a spurious
 incomplete divergence, which is in fact complete. This is handled later in
 mergecopies.
@@ -1044,8 +1059,8 @@
   $ HGEDITOR="echo D1 >" hg graft -r 'desc("D0")' --edit
   grafting 3:b69f5839d2d9 "D0"
   note: possible conflict - f3b was renamed multiple times to:
+   f3a
    f3d
-   f3a
   warning: can't find ancestor for 'f3d' copied from 'f3b'!
 
 Set up the repository for some further tests
@@ -1111,8 +1126,8 @@
   $ HGEDITOR="echo D2 >" hg graft -r 'desc("D0")' --edit
   grafting 3:b69f5839d2d9 "D0"
   note: possible conflict - f3b was renamed multiple times to:
+   f3d
    f3e
-   f3d
   merging f4e and f4a to f4e
   warning: can't find ancestor for 'f3d' copied from 'f3b'!
 
--- a/tests/test-grep.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-grep.t	Wed Apr 17 13:41:18 2019 -0400
@@ -32,13 +32,27 @@
   port:4:vaportight
   port:4:import/export
 
+simple from subdirectory
+
+  $ mkdir dir
+  $ cd dir
+  $ hg grep -r tip:0 port
+  port:4:export
+  port:4:vaportight
+  port:4:import/export
+  $ hg grep -r tip:0 port --config ui.relative-paths=yes
+  ../port:4:export
+  ../port:4:vaportight
+  ../port:4:import/export
+  $ cd ..
+
 simple with color
 
   $ hg --config extensions.color= grep --config color.mode=ansi \
   >     --color=always port port -r tip:0
-  \x1b[0;35mport\x1b[0m\x1b[0;36m:\x1b[0m\x1b[0;32m4\x1b[0m\x1b[0;36m:\x1b[0mex\x1b[0;31;1mport\x1b[0m (esc)
-  \x1b[0;35mport\x1b[0m\x1b[0;36m:\x1b[0m\x1b[0;32m4\x1b[0m\x1b[0;36m:\x1b[0mva\x1b[0;31;1mport\x1b[0might (esc)
-  \x1b[0;35mport\x1b[0m\x1b[0;36m:\x1b[0m\x1b[0;32m4\x1b[0m\x1b[0;36m:\x1b[0mim\x1b[0;31;1mport\x1b[0m/ex\x1b[0;31;1mport\x1b[0m (esc)
+  \x1b[0;35mport\x1b[0m\x1b[0;36m:\x1b[0m\x1b[0;34m4\x1b[0m\x1b[0;36m:\x1b[0mex\x1b[0;31;1mport\x1b[0m (esc)
+  \x1b[0;35mport\x1b[0m\x1b[0;36m:\x1b[0m\x1b[0;34m4\x1b[0m\x1b[0;36m:\x1b[0mva\x1b[0;31;1mport\x1b[0might (esc)
+  \x1b[0;35mport\x1b[0m\x1b[0;36m:\x1b[0m\x1b[0;34m4\x1b[0m\x1b[0;36m:\x1b[0mim\x1b[0;31;1mport\x1b[0m/ex\x1b[0;31;1mport\x1b[0m (esc)
 
 simple templated
 
@@ -285,6 +299,15 @@
   color:3:+:orange
   color:2:-:orange
   color:1:+:orange
+  $ hg grep --diff orange --color=debug
+  [grep.filename|color][grep.sep|:][grep.rev|3][grep.sep|:][grep.inserted grep.change|+][grep.sep|:][grep.match|orange]
+  [grep.filename|color][grep.sep|:][grep.rev|2][grep.sep|:][grep.deleted grep.change|-][grep.sep|:][grep.match|orange]
+  [grep.filename|color][grep.sep|:][grep.rev|1][grep.sep|:][grep.inserted grep.change|+][grep.sep|:][grep.match|orange]
+
+  $ hg grep --diff orange --color=yes
+  \x1b[0;35mcolor\x1b[0m\x1b[0;36m:\x1b[0m\x1b[0;34m3\x1b[0m\x1b[0;36m:\x1b[0m\x1b[0;32;1m+\x1b[0m\x1b[0;36m:\x1b[0m\x1b[0;31;1morange\x1b[0m (esc)
+  \x1b[0;35mcolor\x1b[0m\x1b[0;36m:\x1b[0m\x1b[0;34m2\x1b[0m\x1b[0;36m:\x1b[0m\x1b[0;31;1m-\x1b[0m\x1b[0;36m:\x1b[0m\x1b[0;31;1morange\x1b[0m (esc)
+  \x1b[0;35mcolor\x1b[0m\x1b[0;36m:\x1b[0m\x1b[0;34m1\x1b[0m\x1b[0;36m:\x1b[0m\x1b[0;32;1m+\x1b[0m\x1b[0;36m:\x1b[0m\x1b[0;31;1morange\x1b[0m (esc)
 
   $ hg grep --diff orange
   color:3:+:orange
@@ -503,5 +526,8 @@
   $ hg grep -r "0:2" "unmod" --all-files um
   um:0:unmod
   um:1:unmod
+  $ hg grep -r "0:2" "unmod" --all-files "glob:**/um" # Check that patterns also work
+  um:0:unmod
+  um:1:unmod
   $ cd ..
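Annotation: the subdirectory run near the top of this file shows
ui.relative-paths=yes rewriting repo-relative names against the current
directory. A hedged approximation of that rewrite (os.path.relpath stands in
for Mercurial's pathutil machinery):

  import os

  def displaypath(reporoot, reporelname, cwd):
      return os.path.relpath(os.path.join(reporoot, reporelname), cwd)

  displaypath('/repo', 'port', '/repo/dir')  # -> '../port'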
 
--- a/tests/test-hardlinks.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-hardlinks.t	Wed Apr 17 13:41:18 2019 -0400
@@ -239,7 +239,6 @@
   2 r4/.hg/branch
   2 r4/.hg/cache/branch2-base
   2 r4/.hg/cache/branch2-served
-  2 r4/.hg/cache/manifestfulltextcache (reporevlogstore !)
   2 r4/.hg/cache/rbc-names-v1
   2 r4/.hg/cache/rbc-revs-v1
   2 r4/.hg/dirstate
@@ -268,6 +267,7 @@
   2 r4/.hg/wcache/checkisexec (execbit !)
   2 r4/.hg/wcache/checklink-target (symlink !)
   2 r4/.hg/wcache/checknoexec (execbit !)
+  2 r4/.hg/wcache/manifestfulltextcache (reporevlogstore !)
   2 r4/d1/data1
   2 r4/d1/f2
   2 r4/f1
@@ -290,7 +290,6 @@
   1 r4/.hg/branch
   2 r4/.hg/cache/branch2-base
   2 r4/.hg/cache/branch2-served
-  2 r4/.hg/cache/manifestfulltextcache (reporevlogstore !)
   2 r4/.hg/cache/rbc-names-v1
   2 r4/.hg/cache/rbc-revs-v1
   1 r4/.hg/dirstate
@@ -319,6 +318,7 @@
   2 r4/.hg/wcache/checkisexec (execbit !)
   2 r4/.hg/wcache/checklink-target (symlink !)
   2 r4/.hg/wcache/checknoexec (execbit !)
+  1 r4/.hg/wcache/manifestfulltextcache (reporevlogstore !)
   2 r4/d1/data1
   2 r4/d1/f2
   1 r4/f1
--- a/tests/test-help.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-help.t	Wed Apr 17 13:41:18 2019 -0400
@@ -825,7 +825,6 @@
   > @command(b'hashelp', [], b'hg hashelp', norepo=True)
   > def hashelp(ui, *args, **kwargs):
   >     """Extension command's help"""
-  >     pass
   > 
   > def uisetup(ui):
   >     ui.setconfig(b'alias', b'shellalias', b'!echo hi', b'helpext')
@@ -1012,8 +1011,14 @@
    debugoptADV   (no help text available)
    debugoptDEP   (no help text available)
    debugoptEXP   (no help text available)
+   debugp1copies
+                 dump copy information compared to p1
+   debugp2copies
+                 dump copy information compared to p2
    debugpathcomplete
                  complete part or all of a tracked path
+   debugpathcopies
+                 show copies between two revisions
    debugpeer     establish a connection to a peer repository
    debugpickmergetool
                  examine which merge tool is chosen for specified file
@@ -1672,7 +1677,7 @@
 Test omit indicating for help
 
   $ cat > addverboseitems.py <<EOF
-  > '''extension to test omit indicating.
+  > r'''extension to test omit indicating.
   > 
   > This paragraph is never omitted (for extension)
   > 
@@ -1685,7 +1690,7 @@
   > '''
   > from __future__ import absolute_import
   > from mercurial import commands, help
-  > testtopic = b"""This paragraph is never omitted (for topic).
+  > testtopic = br"""This paragraph is never omitted (for topic).
   > 
   > .. container:: verbose
   > 
--- a/tests/test-hgignore.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-hgignore.t	Wed Apr 17 13:41:18 2019 -0400
@@ -356,7 +356,7 @@
   $ rm dir1/.hgignore
   $ echo "dir1/file*" >> .hgignore
   $ hg debugignore "dir1\file2"
-  dir1\file2 is ignored
+  dir1/file2 is ignored
   (ignore rule in $TESTTMP\ignorerepo\.hgignore, line 4: 'dir1/file*')
   $ hg up -qC .
 
--- a/tests/test-hgweb-auth.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-hgweb-auth.py	Wed Apr 17 13:41:18 2019 -0400
@@ -24,16 +24,26 @@
 def writeauth(items):
     ui = origui.copy()
     for name, value in items.items():
-        ui.setconfig('auth', name, value)
+        ui.setconfig(b'auth', name, value)
     return ui
 
+def _stringifyauthinfo(ai):
+    if ai is None:
+        return ai
+    realm, authuris, user, passwd = ai
+    return (pycompat.strurl(realm),
+            [pycompat.strurl(u) for u in authuris],
+            pycompat.strurl(user),
+            pycompat.strurl(passwd),
+    )
+
 def test(auth, urls=None):
     print('CFG:', pycompat.sysstr(stringutil.pprint(auth, bprefix=True)))
     prefixes = set()
     for k in auth:
-        prefixes.add(k.split('.', 1)[0])
+        prefixes.add(k.split(b'.', 1)[0])
     for p in prefixes:
-        for name in ('.username', '.password'):
+        for name in (b'.username', b'.password'):
             if (p + name) not in auth:
                 auth[p + name] = p
     auth = dict((k, v) for k, v in auth.items() if v is not None)
@@ -41,106 +51,109 @@
     ui = writeauth(auth)
 
     def _test(uri):
-        print('URI:', uri)
+        print('URI:', pycompat.strurl(uri))
         try:
             pm = url.passwordmgr(ui, urlreq.httppasswordmgrwithdefaultrealm())
             u, authinfo = util.url(uri).authinfo()
             if authinfo is not None:
-                pm.add_password(*authinfo)
-            print('    ', pm.find_user_password('test', u))
+                pm.add_password(*_stringifyauthinfo(authinfo))
+            print('    ', tuple(pycompat.strurl(a) for a in
+                                pm.find_user_password('test',
+                                                      pycompat.strurl(u))))
         except error.Abort:
             print('    ','abort')
 
     if not urls:
         urls = [
-            'http://example.org/foo',
-            'http://example.org/foo/bar',
-            'http://example.org/bar',
-            'https://example.org/foo',
-            'https://example.org/foo/bar',
-            'https://example.org/bar',
-            'https://x@example.org/bar',
-            'https://y@example.org/bar',
+            b'http://example.org/foo',
+            b'http://example.org/foo/bar',
+            b'http://example.org/bar',
+            b'https://example.org/foo',
+            b'https://example.org/foo/bar',
+            b'https://example.org/bar',
+            b'https://x@example.org/bar',
+            b'https://y@example.org/bar',
             ]
     for u in urls:
         _test(u)
 
 
 print('\n*** Test in-uri schemes\n')
-test({'x.prefix': 'http://example.org'})
-test({'x.prefix': 'https://example.org'})
-test({'x.prefix': 'http://example.org', 'x.schemes': 'https'})
-test({'x.prefix': 'https://example.org', 'x.schemes': 'http'})
+test({b'x.prefix': b'http://example.org'})
+test({b'x.prefix': b'https://example.org'})
+test({b'x.prefix': b'http://example.org', b'x.schemes': b'https'})
+test({b'x.prefix': b'https://example.org', b'x.schemes': b'http'})
 
 print('\n*** Test separately configured schemes\n')
-test({'x.prefix': 'example.org', 'x.schemes': 'http'})
-test({'x.prefix': 'example.org', 'x.schemes': 'https'})
-test({'x.prefix': 'example.org', 'x.schemes': 'http https'})
+test({b'x.prefix': b'example.org', b'x.schemes': b'http'})
+test({b'x.prefix': b'example.org', b'x.schemes': b'https'})
+test({b'x.prefix': b'example.org', b'x.schemes': b'http https'})
 
 print('\n*** Test prefix matching\n')
-test({'x.prefix': 'http://example.org/foo',
-      'y.prefix': 'http://example.org/bar'})
-test({'x.prefix': 'http://example.org/foo',
-      'y.prefix': 'http://example.org/foo/bar'})
-test({'x.prefix': '*', 'y.prefix': 'https://example.org/bar'})
+test({b'x.prefix': b'http://example.org/foo',
+      b'y.prefix': b'http://example.org/bar'})
+test({b'x.prefix': b'http://example.org/foo',
+      b'y.prefix': b'http://example.org/foo/bar'})
+test({b'x.prefix': b'*', b'y.prefix': b'https://example.org/bar'})
 
 print('\n*** Test user matching\n')
-test({'x.prefix': 'http://example.org/foo',
-      'x.username': None,
-      'x.password': 'xpassword'},
-     urls=['http://y@example.org/foo'])
-test({'x.prefix': 'http://example.org/foo',
-      'x.username': None,
-      'x.password': 'xpassword',
-      'y.prefix': 'http://example.org/foo',
-      'y.username': 'y',
-      'y.password': 'ypassword'},
-     urls=['http://y@example.org/foo'])
-test({'x.prefix': 'http://example.org/foo/bar',
-      'x.username': None,
-      'x.password': 'xpassword',
-      'y.prefix': 'http://example.org/foo',
-      'y.username': 'y',
-      'y.password': 'ypassword'},
-     urls=['http://y@example.org/foo/bar'])
+test({b'x.prefix': b'http://example.org/foo',
+      b'x.username': None,
+      b'x.password': b'xpassword'},
+     urls=[b'http://y@example.org/foo'])
+test({b'x.prefix': b'http://example.org/foo',
+      b'x.username': None,
+      b'x.password': b'xpassword',
+      b'y.prefix': b'http://example.org/foo',
+      b'y.username': b'y',
+      b'y.password': b'ypassword'},
+     urls=[b'http://y@example.org/foo'])
+test({b'x.prefix': b'http://example.org/foo/bar',
+      b'x.username': None,
+      b'x.password': b'xpassword',
+      b'y.prefix': b'http://example.org/foo',
+      b'y.username': b'y',
+      b'y.password': b'ypassword'},
+     urls=[b'http://y@example.org/foo/bar'])
 
 print('\n*** Test user matching with name in prefix\n')
 
 # prefix, username and URL have the same user
-test({'x.prefix': 'https://example.org/foo',
-      'x.username': None,
-      'x.password': 'xpassword',
-      'y.prefix': 'http://y@example.org/foo',
-      'y.username': 'y',
-      'y.password': 'ypassword'},
-     urls=['http://y@example.org/foo'])
+test({b'x.prefix': b'https://example.org/foo',
+      b'x.username': None,
+      b'x.password': b'xpassword',
+      b'y.prefix': b'http://y@example.org/foo',
+      b'y.username': b'y',
+      b'y.password': b'ypassword'},
+     urls=[b'http://y@example.org/foo'])
 # Prefix has a different user from username and URL
-test({'y.prefix': 'http://z@example.org/foo',
-      'y.username': 'y',
-      'y.password': 'ypassword'},
-     urls=['http://y@example.org/foo'])
+test({b'y.prefix': b'http://z@example.org/foo',
+      b'y.username': b'y',
+      b'y.password': b'ypassword'},
+     urls=[b'http://y@example.org/foo'])
 # Prefix has a different user from URL; no username
-test({'y.prefix': 'http://z@example.org/foo',
-      'y.password': 'ypassword'},
-     urls=['http://y@example.org/foo'])
+test({b'y.prefix': b'http://z@example.org/foo',
+      b'y.password': b'ypassword'},
+     urls=[b'http://y@example.org/foo'])
 # Prefix and URL have same user, but doesn't match username
-test({'y.prefix': 'http://y@example.org/foo',
-      'y.username': 'z',
-      'y.password': 'ypassword'},
-     urls=['http://y@example.org/foo'])
+test({b'y.prefix': b'http://y@example.org/foo',
+      b'y.username': b'z',
+      b'y.password': b'ypassword'},
+     urls=[b'http://y@example.org/foo'])
 # Prefix and URL have the same user; no username
-test({'y.prefix': 'http://y@example.org/foo',
-      'y.password': 'ypassword'},
-     urls=['http://y@example.org/foo'])
+test({b'y.prefix': b'http://y@example.org/foo',
+      b'y.password': b'ypassword'},
+     urls=[b'http://y@example.org/foo'])
 # Prefix user, but no URL user or username
-test({'y.prefix': 'http://y@example.org/foo',
-      'y.password': 'ypassword'},
-     urls=['http://example.org/foo'])
+test({b'y.prefix': b'http://y@example.org/foo',
+      b'y.password': b'ypassword'},
+     urls=[b'http://example.org/foo'])
 
 def testauthinfo(fullurl, authurl):
     print('URIs:', fullurl, authurl)
     pm = urlreq.httppasswordmgrwithdefaultrealm()
-    pm.add_password(*util.url(fullurl).authinfo()[1])
+    ai = _stringifyauthinfo(util.url(pycompat.bytesurl(fullurl)).authinfo()[1])
+    pm.add_password(*ai)
     print(pm.find_user_password('test', authurl))
 
 print('\n*** Test urllib2 and util.url\n')
--- a/tests/test-hgweb-json.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-hgweb-json.t	Wed Apr 17 13:41:18 2019 -0400
@@ -2196,7 +2196,8 @@
 Commit message with Japanese Kanji 'Noh', which ends with '\x5c'
 
   $ echo foo >> da/foo
-  $ HGENCODING=cp932 hg ci -m `"$PYTHON" -c 'print("\x94\x5c")'`
+  >>> open('msg', 'wb').write(b'\x94\x5c\x0a') and None
+  $ HGENCODING=cp932 hg ci -l msg
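Annotation: the message moved into a file because the trailing byte of the
cp932 'Noh' glyph is an ASCII backslash, which shell quoting and Python 2/3
printing treat inconsistently; writing raw bytes sidesteps both. A sketch of
the byte layout:

  msg = b'\x94\x5c\x0a'            # cp932 'Noh' plus a newline
  assert msg[1:2] == b'\\'         # second byte collides with backslash
  open('msg', 'wb').write(msg)     # then: hg ci -l msg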
 
 Commit message with null character
 
--- a/tests/test-hgweb-no-request-uri.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-hgweb-no-request-uri.t	Wed Apr 17 13:41:18 2019 -0400
@@ -62,12 +62,12 @@
   > output = stringio()
   > env['PATH_INFO'] = '/'
   > env['QUERY_STRING'] = 'style=atom'
-  > process(hgweb.hgweb(b'.', name = b'repo'))
+  > process(hgweb.hgweb(b'.', name=b'repo'))
   > 
   > output = stringio()
   > env['PATH_INFO'] = '/file/tip/'
   > env['QUERY_STRING'] = 'style=raw'
-  > process(hgweb.hgweb(b'.', name = b'repo'))
+  > process(hgweb.hgweb(b'.', name=b'repo'))
   > 
   > output = stringio()
   > env['PATH_INFO'] = '/'
--- a/tests/test-hgweb.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-hgweb.t	Wed Apr 17 13:41:18 2019 -0400
@@ -910,7 +910,8 @@
 
 errors
 
-  $ cat errors.log
+  $ cat errors.log | "$PYTHON" $TESTDIR/filtertraceback.py
+  $ rm -f errors.log
 
 Uncaught exceptions result in a logged error and canned HTTP response
 
@@ -925,8 +926,11 @@
   [1]
 
   $ killdaemons.py
-  $ head -1 errors.log
+  $ cat errors.log | "$PYTHON" $TESTDIR/filtertraceback.py
   .* Exception happened during processing request '/raiseerror': (re)
+  Traceback (most recent call last):
+  AttributeError: I am an uncaught error!
+  
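Annotation: a hedged sketch of what a filter like filtertraceback.py does
(assumed behavior, not the script itself): keep the "Traceback (most recent
call last):" header and the final exception line, and drop the intervening
frame lines so the expected output stays stable across Python versions.

  import sys

  def filtertb(lines):
      intb = False
      for line in lines:
          if line.startswith('Traceback ('):
              intb = True               # keep the header
          elif intb and line.startswith(' '):
              continue                  # drop version-specific frames
          else:
              intb = False              # exception line ends the traceback
          yield line

  sys.stdout.writelines(filtertb(sys.stdin))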
 
 Uncaught exception after partial content sent
 
--- a/tests/test-highlight.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-highlight.t	Wed Apr 17 13:41:18 2019 -0400
@@ -19,7 +19,7 @@
 
 create random Python file to exercise Pygments
 
-  $ cat <<EOF > primes.py
+  $ cat <<NO_CHECK_EOF > primes.py
   > """Fun with generators. Corresponding Haskell implementation:
   > 
   > primes = 2 : sieve [3, 5..]
@@ -51,7 +51,7 @@
   >         n = 10
   >     p = primes()
   >     print("The first %d primes: %s" % (n, list(itertools.islice(p, n))))
-  > EOF
+  > NO_CHECK_EOF
   $ echo >> primes.py  # to test html markup with an empty line just before EOF
   $ hg ci -Ama
   adding primes.py
--- a/tests/test-histedit-arguments.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-histedit-arguments.t	Wed Apr 17 13:41:18 2019 -0400
@@ -139,7 +139,6 @@
   > edit 08d98a8350f3 4 five
   > EOF
   1 files updated, 0 files merged, 0 files removed, 0 files unresolved
-  reverting alpha
   Editing (08d98a8350f3), you may commit or record as needed now.
   (hg histedit --continue to resume)
   [1]
@@ -362,7 +361,7 @@
   $ hg histedit --abort
   warning: encountered an exception during histedit --abort; the repository may not have been completely cleaned up
   abort: $TESTTMP/foo/.hg/strip-backup/*-histedit.hg: $ENOENT$ (glob) (windows !)
-  abort: $ENOENT$: $TESTTMP/foo/.hg/strip-backup/*-histedit.hg (glob) (no-windows !)
+  abort: $ENOENT$: '$TESTTMP/foo/.hg/strip-backup/*-histedit.hg' (glob) (no-windows !)
   [255]
 Histedit state has been exited
   $ hg summary -q
@@ -476,7 +475,6 @@
   > pick 8cde254db839
   > edit 6f2f0241f119
   > EOF
-  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
   merging foo
   warning: conflicts while merging foo! (edit, then use 'hg resolve --mark')
   Fix up the change (pick 8cde254db839)
--- a/tests/test-histedit-commute.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-histedit-commute.t	Wed Apr 17 13:41:18 2019 -0400
@@ -52,6 +52,7 @@
      summary:     a
   
 
+
 show the edit commands offered
   $ HGEDITOR=cat hg histedit 177f92b77385
   pick 177f92b77385 2 c
@@ -76,6 +77,33 @@
   #  r, roll = like fold, but discard this commit's description and date
   #
 
+
+test customization of revision summary
+  $ HGEDITOR=cat hg histedit 177f92b77385 \
+  >  --config histedit.summary-template='I am rev {rev} desc {desc} tags {tags}'
+  pick 177f92b77385 I am rev 2 desc c tags 
+  pick 055a42cdd887 I am rev 3 desc d tags 
+  pick e860deea161a I am rev 4 desc e tags 
+  pick 652413bf663e I am rev 5 desc f tags tip
+  
+  # Edit history between 177f92b77385 and 652413bf663e
+  #
+  # Commits are listed from least to most recent
+  #
+  # You can reorder changesets by reordering the lines
+  #
+  # Commands:
+  #
+  #  e, edit = use commit, but stop for amending
+  #  m, mess = edit commit message without changing commit content
+  #  p, pick = use commit
+  #  b, base = checkout changeset and apply further changesets from there
+  #  d, drop = remove commit from history
+  #  f, fold = use commit, but combine it with the one above
+  #  r, roll = like fold, but discard this commit's description and date
+  #
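Annotation: a hedged sketch of the keyword expansion the summary-template
test relies on (illustrative only; histedit feeds the template through
Mercurial's real templater, not str.format):

  def summarize(template, rev, desc, tags):
      return template.format(rev=rev, desc=desc, tags=' '.join(tags))

  summarize('I am rev {rev} desc {desc} tags {tags}', 5, 'f', ['tip'])
  # -> 'I am rev 5 desc f tags tip'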
+
+
 edit the history
 (use a hacky editor to check histedit-last-edit.txt backup)
 
@@ -142,6 +170,7 @@
      summary:     a
   
 
+
 put things back
 
   $ hg histedit 177f92b77385 --commands - 2>&1 << EOF | fixbundle
@@ -184,6 +213,7 @@
      summary:     a
   
 
+
 slightly different this time
 
   $ hg histedit 177f92b77385 --commands - << EOF 2>&1 | fixbundle
@@ -225,6 +255,7 @@
      summary:     a
   
 
+
 keep prevents stripping dead revs
   $ hg histedit 799205341b6b --keep --commands - 2>&1 << EOF | fixbundle
   > pick 799205341b6b d
@@ -276,6 +307,7 @@
      summary:     a
   
 
+
 try with --rev
   $ hg histedit --commands - --rev -2 2>&1 <<EOF | fixbundle
   > pick de71b079d9ce e
@@ -326,6 +358,7 @@
      date:        Thu Jan 01 00:00:00 1970 +0000
      summary:     a
   
+
 Verify that revsetalias entries work with histedit:
   $ cat >> $HGRCPATH <<EOF
   > [revsetalias]
@@ -355,6 +388,7 @@
   #  r, roll = like fold, but discard this commit's description and date
   #
 
+
 should also work if a commit message is missing
   $ BUNDLE="$TESTDIR/missing-comment.hg"
   $ hg init missing
@@ -384,6 +418,7 @@
      date:        Mon Nov 28 16:35:28 2011 +0000
      summary:     Checked in text file
   
+
   $ hg histedit 0
   $ cd ..
 
@@ -440,6 +475,7 @@
   @@ -0,0 +1,1 @@
   +changed
 
+
   $ hg --config diff.git=yes export 1
   # HG changeset patch
   # User test
@@ -453,6 +489,7 @@
   rename from another-dir/initial-file
   rename to another-dir/renamed-file
 
+
   $ cd ..
 
 Test that branches are preserved and stays active
--- a/tests/test-histedit-edit.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-histedit-edit.t	Wed Apr 17 13:41:18 2019 -0400
@@ -370,9 +370,9 @@
   HG: branch 'default'
   HG: added f
   ====
-  note: commit message saved in .hg/last-message.txt
   transaction abort!
   rollback completed
+  note: commit message saved in .hg/last-message.txt
   abort: pretxncommit.unexpectedabort hook exited with status 1
   [255]
   $ cat .hg/last-message.txt
@@ -394,9 +394,9 @@
   HG: user: test
   HG: branch 'default'
   HG: added f
-  note: commit message saved in .hg/last-message.txt
   transaction abort!
   rollback completed
+  note: commit message saved in .hg/last-message.txt
   abort: pretxncommit.unexpectedabort hook exited with status 1
   [255]
 
@@ -433,7 +433,6 @@
   > edit cb9a9f314b8b a > $EDITED
   > EOF
   0 files updated, 0 files merged, 1 files removed, 0 files unresolved
-  adding a
   Editing (cb9a9f314b8b), you may commit or record as needed now.
   (hg histedit --continue to resume)
   [1]
--- a/tests/test-histedit-fold-non-commute.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-histedit-fold-non-commute.t	Wed Apr 17 13:41:18 2019 -0400
@@ -94,7 +94,6 @@
 
 edit the history
   $ hg histedit 3 --commands $EDITED 2>&1 | fixbundle
-  2 files updated, 0 files merged, 0 files removed, 0 files unresolved
   merging e
   warning: conflicts while merging e! (edit, then use 'hg resolve --mark')
   Fix up the change (fold 42abbb61bede)
@@ -249,7 +248,6 @@
 
 edit the history
   $ hg histedit 3 --commands $EDITED 2>&1 | fixbundle
-  2 files updated, 0 files merged, 0 files removed, 0 files unresolved
   merging e
   warning: conflicts while merging e! (edit, then use 'hg resolve --mark')
   Fix up the change (roll 42abbb61bede)
--- a/tests/test-histedit-fold.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-histedit-fold.t	Wed Apr 17 13:41:18 2019 -0400
@@ -287,7 +287,6 @@
   > drop 888f9082bf99 2 +5
   > fold 251d831eeec5 3 +6
   > EOF
-  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
   merging file
   warning: conflicts while merging file! (edit, then use 'hg resolve --mark')
   Fix up the change (fold 251d831eeec5)
@@ -361,7 +360,6 @@
   > drop 888f9082bf99 2 +5
   > fold 251d831eeec5 3 +6
   > EOF
-  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
   merging file
   warning: conflicts while merging file! (edit, then use 'hg resolve --mark')
   Fix up the change (fold 251d831eeec5)
@@ -541,6 +539,7 @@
   > fold b7389cc4d66e 3 foo2
   > fold 21679ff7675c 4 foo3
   > EOF
+  merging foo
   $ hg logt
   2:e8bedbda72c1 merged foos
   1:578c7455730c a
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/tests/test-histedit-merge-tools.t	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,57 @@
+Test histedit extension: Merge tools
+====================================
+
+Initialization
+---------------
+
+  $ . "$TESTDIR/histedit-helpers.sh"
+
+  $ cat >> $HGRCPATH <<EOF
+  > [alias]
+  > logt = log --template '{rev}:{node|short} {desc|firstline}\n'
+  > [extensions]
+  > histedit=
+  > mockmakedate = $TESTDIR/mockmakedate.py
+  > [ui]
+  > pre-merge-tool-output-template='pre-merge message for {node}\n'
+  > EOF
+
+Merge conflict
+--------------
+
+  $ hg init r
+  $ cd r
+  $ echo foo > file
+  $ hg add file
+  $ hg ci -m "First" -d "1 0"
+  $ echo bar > file
+  $ hg ci -m "Second" -d "2 0"
+
+  $ hg logt --graph
+  @  1:2aa920f62fb9 Second
+  |
+  o  0:7181f42b8fca First
+  
+
+Invert the order of the commits, but fail the merge.
+  $ hg histedit --config ui.merge=false --commands - 2>&1 <<EOF | fixbundle
+  > pick 2aa920f62fb9 Second
+  > pick 7181f42b8fca First
+  > EOF
+  merging file
+  pre-merge message for b90fa2e91a6d11013945a5f684be45b84a8ca6ec
+  merging file failed!
+  Fix up the change (pick 7181f42b8fca)
+  (hg histedit --continue to resume)
+
+  $ hg histedit --abort | fixbundle
+  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
+
+Invert the order of the commits, and pretend the merge succeeded.
+  $ hg histedit --config ui.merge=true --commands - 2>&1 <<EOF | fixbundle
+  > pick 2aa920f62fb9 Second
+  > pick 7181f42b8fca First
+  > EOF
+  merging file
+  pre-merge message for b90fa2e91a6d11013945a5f684be45b84a8ca6ec
+  7181f42b8fca: skipping changeset (no changes)
--- a/tests/test-histedit-non-commute.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-histedit-non-commute.t	Wed Apr 17 13:41:18 2019 -0400
@@ -87,7 +87,6 @@
 
 edit the history
   $ hg histedit 3 --commands $EDITED 2>&1 | fixbundle
-  2 files updated, 0 files merged, 0 files removed, 0 files unresolved
   merging e
   warning: conflicts while merging e! (edit, then use 'hg resolve --mark')
   Fix up the change (pick 39522b764e3d)
@@ -145,7 +144,6 @@
 
 edit the history
   $ hg histedit 3 --commands $EDITED 2>&1 | fixbundle
-  2 files updated, 0 files merged, 0 files removed, 0 files unresolved
   merging e
   warning: conflicts while merging e! (edit, then use 'hg resolve --mark')
   Fix up the change (pick 39522b764e3d)
@@ -241,7 +239,6 @@
 
 edit the history, this time with a fold action
   $ hg histedit 3 --commands $EDITED 2>&1 | fixbundle
-  2 files updated, 0 files merged, 0 files removed, 0 files unresolved
   merging e
   warning: conflicts while merging e! (edit, then use 'hg resolve --mark')
   Fix up the change (mess 39522b764e3d)
--- a/tests/test-histedit-obsolete.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-histedit-obsolete.t	Wed Apr 17 13:41:18 2019 -0400
@@ -216,7 +216,6 @@
   > edit b346ab9a313d 6 c
   > EOF
   0 files updated, 0 files merged, 1 files removed, 0 files unresolved
-  adding c
   Editing (b346ab9a313d), you may commit or record as needed now.
   (hg histedit --continue to resume)
   [1]
@@ -351,7 +350,6 @@
   > pick ee118ab9fa44 16 k
   > EOF
   0 files updated, 0 files merged, 6 files removed, 0 files unresolved
-  adding f
   Editing (b449568bf7fc), you may commit or record as needed now.
   (hg histedit --continue to resume)
   [1]
@@ -394,7 +392,6 @@
   > pick ee118ab9fa44 16 k
   > EOF
   0 files updated, 0 files merged, 6 files removed, 0 files unresolved
-  adding f
   Editing (b449568bf7fc), you may commit or record as needed now.
   (hg histedit --continue to resume)
   [1]
--- a/tests/test-hook.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-hook.t	Wed Apr 17 13:41:18 2019 -0400
@@ -14,32 +14,63 @@
   $ cd a
   $ cat > .hg/hgrc <<EOF
   > [hooks]
-  > commit = sh -c "HG_LOCAL= HG_TAG= printenv.py commit"
-  > commit.b = sh -c "HG_LOCAL= HG_TAG= printenv.py commit.b"
-  > precommit = sh -c  "HG_LOCAL= HG_NODE= HG_TAG= printenv.py precommit"
-  > pretxncommit = sh -c "HG_LOCAL= HG_TAG= printenv.py pretxncommit"
+  > commit = sh -c "HG_LOCAL= HG_TAG= printenv.py --line commit"
+  > commit.b = sh -c "HG_LOCAL= HG_TAG= printenv.py --line commit.b"
+  > precommit = sh -c  "HG_LOCAL= HG_NODE= HG_TAG= printenv.py --line precommit"
+  > pretxncommit = sh -c "HG_LOCAL= HG_TAG= printenv.py --line pretxncommit"
   > pretxncommit.tip = hg -q tip
-  > pre-identify = sh -c "printenv.py pre-identify 1"
-  > pre-cat = sh -c "printenv.py pre-cat"
-  > post-cat = sh -c "printenv.py post-cat"
-  > pretxnopen = sh -c "HG_LOCAL= HG_TAG= printenv.py pretxnopen"
-  > pretxnclose = sh -c "HG_LOCAL= HG_TAG= printenv.py pretxnclose"
-  > txnclose = sh -c "HG_LOCAL= HG_TAG= printenv.py txnclose"
+  > pre-identify = sh -c "printenv.py --line pre-identify 1"
+  > pre-cat = sh -c "printenv.py --line pre-cat"
+  > post-cat = sh -c "printenv.py --line post-cat"
+  > pretxnopen = sh -c "HG_LOCAL= HG_TAG= printenv.py --line pretxnopen"
+  > pretxnclose = sh -c "HG_LOCAL= HG_TAG= printenv.py --line pretxnclose"
+  > txnclose = sh -c "HG_LOCAL= HG_TAG= printenv.py --line txnclose"
   > txnabort.0 = python:$TESTTMP/txnabort.checkargs.py:showargs
-  > txnabort.1 = sh -c "HG_LOCAL= HG_TAG= printenv.py txnabort"
+  > txnabort.1 = sh -c "HG_LOCAL= HG_TAG= printenv.py --line txnabort"
   > txnclose.checklock = sh -c "hg debuglock > /dev/null"
   > EOF
   $ echo a > a
   $ hg add a
   $ hg commit -m a
-  precommit hook: HG_HOOKNAME=precommit HG_HOOKTYPE=precommit HG_PARENT1=0000000000000000000000000000000000000000
-  pretxnopen hook: HG_HOOKNAME=pretxnopen HG_HOOKTYPE=pretxnopen HG_TXNID=TXN:$ID$ HG_TXNNAME=commit
-  pretxncommit hook: HG_HOOKNAME=pretxncommit HG_HOOKTYPE=pretxncommit HG_NODE=cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b HG_PARENT1=0000000000000000000000000000000000000000 HG_PENDING=$TESTTMP/a
+  precommit hook: HG_HOOKNAME=precommit
+  HG_HOOKTYPE=precommit
+  HG_PARENT1=0000000000000000000000000000000000000000
+  
+  pretxnopen hook: HG_HOOKNAME=pretxnopen
+  HG_HOOKTYPE=pretxnopen
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=commit
+  
+  pretxncommit hook: HG_HOOKNAME=pretxncommit
+  HG_HOOKTYPE=pretxncommit
+  HG_NODE=cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b
+  HG_PARENT1=0000000000000000000000000000000000000000
+  HG_PENDING=$TESTTMP/a
+  
   0:cb9a9f314b8b
-  pretxnclose hook: HG_HOOKNAME=pretxnclose HG_HOOKTYPE=pretxnclose HG_PENDING=$TESTTMP/a HG_PHASES_MOVED=1 HG_TXNID=TXN:$ID$ HG_TXNNAME=commit
-  txnclose hook: HG_HOOKNAME=txnclose HG_HOOKTYPE=txnclose HG_PHASES_MOVED=1 HG_TXNID=TXN:$ID$ HG_TXNNAME=commit
-  commit hook: HG_HOOKNAME=commit HG_HOOKTYPE=commit HG_NODE=cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b HG_PARENT1=0000000000000000000000000000000000000000
-  commit.b hook: HG_HOOKNAME=commit.b HG_HOOKTYPE=commit HG_NODE=cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b HG_PARENT1=0000000000000000000000000000000000000000
+  pretxnclose hook: HG_HOOKNAME=pretxnclose
+  HG_HOOKTYPE=pretxnclose
+  HG_PENDING=$TESTTMP/a
+  HG_PHASES_MOVED=1
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=commit
+  
+  txnclose hook: HG_HOOKNAME=txnclose
+  HG_HOOKTYPE=txnclose
+  HG_PHASES_MOVED=1
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=commit
+  
+  commit hook: HG_HOOKNAME=commit
+  HG_HOOKTYPE=commit
+  HG_NODE=cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b
+  HG_PARENT1=0000000000000000000000000000000000000000
+  
+  commit.b hook: HG_HOOKNAME=commit.b
+  HG_HOOKTYPE=commit
+  HG_NODE=cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b
+  HG_PARENT1=0000000000000000000000000000000000000000
+  
 
   $ hg clone . ../b
   updating to branch default
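Annotation: a hedged sketch of the --line mode these hooks now pass to
printenv.py (the real script lives in tests/): with --line, each HG_*
variable is printed on its own line, sorted, followed by a blank line,
instead of being space-joined onto a single line.

  import os, sys

  def printenv(name, line=True):
      pairs = sorted((k, v) for k, v in os.environ.items()
                     if k.startswith('HG_'))
      sep = '\n' if line else ' '
      sys.stdout.write('%s hook: ' % name)
      sys.stdout.write(sep.join('%s=%s' % kv for kv in pairs))
      sys.stdout.write('\n\n' if line else '\n')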
@@ -50,9 +81,9 @@
 
   $ cat > .hg/hgrc <<EOF
   > [hooks]
-  > prechangegroup = sh -c "printenv.py prechangegroup"
-  > changegroup = sh -c "printenv.py changegroup"
-  > incoming = sh -c "printenv.py incoming"
+  > prechangegroup = sh -c "printenv.py --line prechangegroup"
+  > changegroup = sh -c "printenv.py --line changegroup"
+  > incoming = sh -c "printenv.py --line incoming"
   > EOF
 
 pretxncommit and commit hooks can see both parents of merge
@@ -60,103 +91,319 @@
   $ cd ../a
   $ echo b >> a
   $ hg commit -m a1 -d "1 0"
-  precommit hook: HG_HOOKNAME=precommit HG_HOOKTYPE=precommit HG_PARENT1=cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b
-  pretxnopen hook: HG_HOOKNAME=pretxnopen HG_HOOKTYPE=pretxnopen HG_TXNID=TXN:$ID$ HG_TXNNAME=commit
-  pretxncommit hook: HG_HOOKNAME=pretxncommit HG_HOOKTYPE=pretxncommit HG_NODE=ab228980c14deea8b9555d91c9581127383e40fd HG_PARENT1=cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b HG_PENDING=$TESTTMP/a
+  precommit hook: HG_HOOKNAME=precommit
+  HG_HOOKTYPE=precommit
+  HG_PARENT1=cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b
+  
+  pretxnopen hook: HG_HOOKNAME=pretxnopen
+  HG_HOOKTYPE=pretxnopen
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=commit
+  
+  pretxncommit hook: HG_HOOKNAME=pretxncommit
+  HG_HOOKTYPE=pretxncommit
+  HG_NODE=ab228980c14deea8b9555d91c9581127383e40fd
+  HG_PARENT1=cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b
+  HG_PENDING=$TESTTMP/a
+  
   1:ab228980c14d
-  pretxnclose hook: HG_HOOKNAME=pretxnclose HG_HOOKTYPE=pretxnclose HG_PENDING=$TESTTMP/a HG_TXNID=TXN:$ID$ HG_TXNNAME=commit
-  txnclose hook: HG_HOOKNAME=txnclose HG_HOOKTYPE=txnclose HG_TXNID=TXN:$ID$ HG_TXNNAME=commit
-  commit hook: HG_HOOKNAME=commit HG_HOOKTYPE=commit HG_NODE=ab228980c14deea8b9555d91c9581127383e40fd HG_PARENT1=cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b
-  commit.b hook: HG_HOOKNAME=commit.b HG_HOOKTYPE=commit HG_NODE=ab228980c14deea8b9555d91c9581127383e40fd HG_PARENT1=cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b
+  pretxnclose hook: HG_HOOKNAME=pretxnclose
+  HG_HOOKTYPE=pretxnclose
+  HG_PENDING=$TESTTMP/a
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=commit
+  
+  txnclose hook: HG_HOOKNAME=txnclose
+  HG_HOOKTYPE=txnclose
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=commit
+  
+  commit hook: HG_HOOKNAME=commit
+  HG_HOOKTYPE=commit
+  HG_NODE=ab228980c14deea8b9555d91c9581127383e40fd
+  HG_PARENT1=cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b
+  
+  commit.b hook: HG_HOOKNAME=commit.b
+  HG_HOOKTYPE=commit
+  HG_NODE=ab228980c14deea8b9555d91c9581127383e40fd
+  HG_PARENT1=cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b
+  
   $ hg update -C 0
   1 files updated, 0 files merged, 0 files removed, 0 files unresolved
   $ echo b > b
   $ hg add b
   $ hg commit -m b -d '1 0'
-  precommit hook: HG_HOOKNAME=precommit HG_HOOKTYPE=precommit HG_PARENT1=cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b
-  pretxnopen hook: HG_HOOKNAME=pretxnopen HG_HOOKTYPE=pretxnopen HG_TXNID=TXN:$ID$ HG_TXNNAME=commit
-  pretxncommit hook: HG_HOOKNAME=pretxncommit HG_HOOKTYPE=pretxncommit HG_NODE=ee9deb46ab31e4cc3310f3cf0c3d668e4d8fffc2 HG_PARENT1=cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b HG_PENDING=$TESTTMP/a
+  precommit hook: HG_HOOKNAME=precommit
+  HG_HOOKTYPE=precommit
+  HG_PARENT1=cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b
+  
+  pretxnopen hook: HG_HOOKNAME=pretxnopen
+  HG_HOOKTYPE=pretxnopen
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=commit
+  
+  pretxncommit hook: HG_HOOKNAME=pretxncommit
+  HG_HOOKTYPE=pretxncommit
+  HG_NODE=ee9deb46ab31e4cc3310f3cf0c3d668e4d8fffc2
+  HG_PARENT1=cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b
+  HG_PENDING=$TESTTMP/a
+  
   2:ee9deb46ab31
-  pretxnclose hook: HG_HOOKNAME=pretxnclose HG_HOOKTYPE=pretxnclose HG_PENDING=$TESTTMP/a HG_TXNID=TXN:$ID$ HG_TXNNAME=commit
+  pretxnclose hook: HG_HOOKNAME=pretxnclose
+  HG_HOOKTYPE=pretxnclose
+  HG_PENDING=$TESTTMP/a
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=commit
+  
   created new head
-  txnclose hook: HG_HOOKNAME=txnclose HG_HOOKTYPE=txnclose HG_TXNID=TXN:$ID$ HG_TXNNAME=commit
-  commit hook: HG_HOOKNAME=commit HG_HOOKTYPE=commit HG_NODE=ee9deb46ab31e4cc3310f3cf0c3d668e4d8fffc2 HG_PARENT1=cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b
-  commit.b hook: HG_HOOKNAME=commit.b HG_HOOKTYPE=commit HG_NODE=ee9deb46ab31e4cc3310f3cf0c3d668e4d8fffc2 HG_PARENT1=cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b
+  txnclose hook: HG_HOOKNAME=txnclose
+  HG_HOOKTYPE=txnclose
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=commit
+  
+  commit hook: HG_HOOKNAME=commit
+  HG_HOOKTYPE=commit
+  HG_NODE=ee9deb46ab31e4cc3310f3cf0c3d668e4d8fffc2
+  HG_PARENT1=cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b
+  
+  commit.b hook: HG_HOOKNAME=commit.b
+  HG_HOOKTYPE=commit
+  HG_NODE=ee9deb46ab31e4cc3310f3cf0c3d668e4d8fffc2
+  HG_PARENT1=cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b
+  
   $ hg merge 1
   1 files updated, 0 files merged, 0 files removed, 0 files unresolved
   (branch merge, don't forget to commit)
   $ hg commit -m merge -d '2 0'
-  precommit hook: HG_HOOKNAME=precommit HG_HOOKTYPE=precommit HG_PARENT1=ee9deb46ab31e4cc3310f3cf0c3d668e4d8fffc2 HG_PARENT2=ab228980c14deea8b9555d91c9581127383e40fd
-  pretxnopen hook: HG_HOOKNAME=pretxnopen HG_HOOKTYPE=pretxnopen HG_TXNID=TXN:$ID$ HG_TXNNAME=commit
-  pretxncommit hook: HG_HOOKNAME=pretxncommit HG_HOOKTYPE=pretxncommit HG_NODE=07f3376c1e655977439df2a814e3cc14b27abac2 HG_PARENT1=ee9deb46ab31e4cc3310f3cf0c3d668e4d8fffc2 HG_PARENT2=ab228980c14deea8b9555d91c9581127383e40fd HG_PENDING=$TESTTMP/a
+  precommit hook: HG_HOOKNAME=precommit
+  HG_HOOKTYPE=precommit
+  HG_PARENT1=ee9deb46ab31e4cc3310f3cf0c3d668e4d8fffc2
+  HG_PARENT2=ab228980c14deea8b9555d91c9581127383e40fd
+  
+  pretxnopen hook: HG_HOOKNAME=pretxnopen
+  HG_HOOKTYPE=pretxnopen
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=commit
+  
+  pretxncommit hook: HG_HOOKNAME=pretxncommit
+  HG_HOOKTYPE=pretxncommit
+  HG_NODE=07f3376c1e655977439df2a814e3cc14b27abac2
+  HG_PARENT1=ee9deb46ab31e4cc3310f3cf0c3d668e4d8fffc2
+  HG_PARENT2=ab228980c14deea8b9555d91c9581127383e40fd
+  HG_PENDING=$TESTTMP/a
+  
   3:07f3376c1e65
-  pretxnclose hook: HG_HOOKNAME=pretxnclose HG_HOOKTYPE=pretxnclose HG_PENDING=$TESTTMP/a HG_TXNID=TXN:$ID$ HG_TXNNAME=commit
-  txnclose hook: HG_HOOKNAME=txnclose HG_HOOKTYPE=txnclose HG_TXNID=TXN:$ID$ HG_TXNNAME=commit
-  commit hook: HG_HOOKNAME=commit HG_HOOKTYPE=commit HG_NODE=07f3376c1e655977439df2a814e3cc14b27abac2 HG_PARENT1=ee9deb46ab31e4cc3310f3cf0c3d668e4d8fffc2 HG_PARENT2=ab228980c14deea8b9555d91c9581127383e40fd
-  commit.b hook: HG_HOOKNAME=commit.b HG_HOOKTYPE=commit HG_NODE=07f3376c1e655977439df2a814e3cc14b27abac2 HG_PARENT1=ee9deb46ab31e4cc3310f3cf0c3d668e4d8fffc2 HG_PARENT2=ab228980c14deea8b9555d91c9581127383e40fd
+  pretxnclose hook: HG_HOOKNAME=pretxnclose
+  HG_HOOKTYPE=pretxnclose
+  HG_PENDING=$TESTTMP/a
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=commit
+  
+  txnclose hook: HG_HOOKNAME=txnclose
+  HG_HOOKTYPE=txnclose
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=commit
+  
+  commit hook: HG_HOOKNAME=commit
+  HG_HOOKTYPE=commit
+  HG_NODE=07f3376c1e655977439df2a814e3cc14b27abac2
+  HG_PARENT1=ee9deb46ab31e4cc3310f3cf0c3d668e4d8fffc2
+  HG_PARENT2=ab228980c14deea8b9555d91c9581127383e40fd
+  
+  commit.b hook: HG_HOOKNAME=commit.b
+  HG_HOOKTYPE=commit
+  HG_NODE=07f3376c1e655977439df2a814e3cc14b27abac2
+  HG_PARENT1=ee9deb46ab31e4cc3310f3cf0c3d668e4d8fffc2
+  HG_PARENT2=ab228980c14deea8b9555d91c9581127383e40fd
+  
 
 test generic hooks
 
   $ hg id
-  pre-identify hook: HG_ARGS=id HG_HOOKNAME=pre-identify HG_HOOKTYPE=pre-identify HG_OPTS={'bookmarks': None, 'branch': None, 'id': None, 'insecure': None, 'num': None, 'remotecmd': '', 'rev': '', 'ssh': '', 'tags': None, 'template': ''} HG_PATS=[]
+  pre-identify hook: HG_ARGS=id
+  HG_HOOKNAME=pre-identify
+  HG_HOOKTYPE=pre-identify
+  HG_OPTS={'bookmarks': None, 'branch': None, 'id': None, 'insecure': None, 'num': None, 'remotecmd': '', 'rev': '', 'ssh': '', 'tags': None, 'template': ''}
+  HG_PATS=[]
+  
   abort: pre-identify hook exited with status 1
   [255]
   $ hg cat b
-  pre-cat hook: HG_ARGS=cat b HG_HOOKNAME=pre-cat HG_HOOKTYPE=pre-cat HG_OPTS={'decode': None, 'exclude': [], 'include': [], 'output': '', 'rev': '', 'template': ''} HG_PATS=['b']
+  pre-cat hook: HG_ARGS=cat b
+  HG_HOOKNAME=pre-cat
+  HG_HOOKTYPE=pre-cat
+  HG_OPTS={'decode': None, 'exclude': [], 'include': [], 'output': '', 'rev': '', 'template': ''}
+  HG_PATS=['b']
+  
   b
-  post-cat hook: HG_ARGS=cat b HG_HOOKNAME=post-cat HG_HOOKTYPE=post-cat HG_OPTS={'decode': None, 'exclude': [], 'include': [], 'output': '', 'rev': '', 'template': ''} HG_PATS=['b'] HG_RESULT=0
+  post-cat hook: HG_ARGS=cat b
+  HG_HOOKNAME=post-cat
+  HG_HOOKTYPE=post-cat
+  HG_OPTS={'decode': None, 'exclude': [], 'include': [], 'output': '', 'rev': '', 'template': ''}
+  HG_PATS=['b']
+  HG_RESULT=0
+  
 
   $ cd ../b
   $ hg pull ../a
   pulling from ../a
   searching for changes
-  prechangegroup hook: HG_HOOKNAME=prechangegroup HG_HOOKTYPE=prechangegroup HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/a
+  prechangegroup hook: HG_HOOKNAME=prechangegroup
+  HG_HOOKTYPE=prechangegroup
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  file:/*/$TESTTMP/a (glob)
+  HG_URL=file:$TESTTMP/a
+  
   adding changesets
   adding manifests
   adding file changes
   added 3 changesets with 2 changes to 2 files
   new changesets ab228980c14d:07f3376c1e65
-  changegroup hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=ab228980c14deea8b9555d91c9581127383e40fd HG_NODE_LAST=07f3376c1e655977439df2a814e3cc14b27abac2 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/a
-  incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=ab228980c14deea8b9555d91c9581127383e40fd HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/a
-  incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=ee9deb46ab31e4cc3310f3cf0c3d668e4d8fffc2 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/a
-  incoming hook: HG_HOOKNAME=incoming HG_HOOKTYPE=incoming HG_NODE=07f3376c1e655977439df2a814e3cc14b27abac2 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/a
+  changegroup hook: HG_HOOKNAME=changegroup
+  HG_HOOKTYPE=changegroup
+  HG_NODE=ab228980c14deea8b9555d91c9581127383e40fd
+  HG_NODE_LAST=07f3376c1e655977439df2a814e3cc14b27abac2
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  file:/*/$TESTTMP/a (glob)
+  HG_URL=file:$TESTTMP/a
+  
+  incoming hook: HG_HOOKNAME=incoming
+  HG_HOOKTYPE=incoming
+  HG_NODE=ab228980c14deea8b9555d91c9581127383e40fd
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  file:/*/$TESTTMP/a (glob)
+  HG_URL=file:$TESTTMP/a
+  
+  incoming hook: HG_HOOKNAME=incoming
+  HG_HOOKTYPE=incoming
+  HG_NODE=ee9deb46ab31e4cc3310f3cf0c3d668e4d8fffc2
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  file:/*/$TESTTMP/a (glob)
+  HG_URL=file:$TESTTMP/a
+  
+  incoming hook: HG_HOOKNAME=incoming
+  HG_HOOKTYPE=incoming
+  HG_NODE=07f3376c1e655977439df2a814e3cc14b27abac2
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  file:/*/$TESTTMP/a (glob)
+  HG_URL=file:$TESTTMP/a
+  
   (run 'hg update' to get a working copy)
 
 tag hooks can see env vars
 
   $ cd ../a
   $ cat >> .hg/hgrc <<EOF
-  > pretag = sh -c "printenv.py pretag"
-  > tag = sh -c "HG_PARENT1= HG_PARENT2= printenv.py tag"
+  > pretag = sh -c "printenv.py --line pretag"
+  > tag = sh -c "HG_PARENT1= HG_PARENT2= printenv.py --line tag"
   > EOF
   $ hg tag -d '3 0' a
-  pretag hook: HG_HOOKNAME=pretag HG_HOOKTYPE=pretag HG_LOCAL=0 HG_NODE=07f3376c1e655977439df2a814e3cc14b27abac2 HG_TAG=a
-  precommit hook: HG_HOOKNAME=precommit HG_HOOKTYPE=precommit HG_PARENT1=07f3376c1e655977439df2a814e3cc14b27abac2
-  pretxnopen hook: HG_HOOKNAME=pretxnopen HG_HOOKTYPE=pretxnopen HG_TXNID=TXN:$ID$ HG_TXNNAME=commit
-  pretxncommit hook: HG_HOOKNAME=pretxncommit HG_HOOKTYPE=pretxncommit HG_NODE=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10 HG_PARENT1=07f3376c1e655977439df2a814e3cc14b27abac2 HG_PENDING=$TESTTMP/a
+  pretag hook: HG_HOOKNAME=pretag
+  HG_HOOKTYPE=pretag
+  HG_LOCAL=0
+  HG_NODE=07f3376c1e655977439df2a814e3cc14b27abac2
+  HG_TAG=a
+  
+  precommit hook: HG_HOOKNAME=precommit
+  HG_HOOKTYPE=precommit
+  HG_PARENT1=07f3376c1e655977439df2a814e3cc14b27abac2
+  
+  pretxnopen hook: HG_HOOKNAME=pretxnopen
+  HG_HOOKTYPE=pretxnopen
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=commit
+  
+  pretxncommit hook: HG_HOOKNAME=pretxncommit
+  HG_HOOKTYPE=pretxncommit
+  HG_NODE=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10
+  HG_PARENT1=07f3376c1e655977439df2a814e3cc14b27abac2
+  HG_PENDING=$TESTTMP/a
+  
   4:539e4b31b6dc
-  pretxnclose hook: HG_HOOKNAME=pretxnclose HG_HOOKTYPE=pretxnclose HG_PENDING=$TESTTMP/a HG_TXNID=TXN:$ID$ HG_TXNNAME=commit
-  tag hook: HG_HOOKNAME=tag HG_HOOKTYPE=tag HG_LOCAL=0 HG_NODE=07f3376c1e655977439df2a814e3cc14b27abac2 HG_TAG=a
-  txnclose hook: HG_HOOKNAME=txnclose HG_HOOKTYPE=txnclose HG_TXNID=TXN:$ID$ HG_TXNNAME=commit
-  commit hook: HG_HOOKNAME=commit HG_HOOKTYPE=commit HG_NODE=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10 HG_PARENT1=07f3376c1e655977439df2a814e3cc14b27abac2
-  commit.b hook: HG_HOOKNAME=commit.b HG_HOOKTYPE=commit HG_NODE=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10 HG_PARENT1=07f3376c1e655977439df2a814e3cc14b27abac2
+  pretxnclose hook: HG_HOOKNAME=pretxnclose
+  HG_HOOKTYPE=pretxnclose
+  HG_PENDING=$TESTTMP/a
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=commit
+  
+  tag hook: HG_HOOKNAME=tag
+  HG_HOOKTYPE=tag
+  HG_LOCAL=0
+  HG_NODE=07f3376c1e655977439df2a814e3cc14b27abac2
+  HG_TAG=a
+  
+  txnclose hook: HG_HOOKNAME=txnclose
+  HG_HOOKTYPE=txnclose
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=commit
+  
+  commit hook: HG_HOOKNAME=commit
+  HG_HOOKTYPE=commit
+  HG_NODE=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10
+  HG_PARENT1=07f3376c1e655977439df2a814e3cc14b27abac2
+  
+  commit.b hook: HG_HOOKNAME=commit.b
+  HG_HOOKTYPE=commit
+  HG_NODE=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10
+  HG_PARENT1=07f3376c1e655977439df2a814e3cc14b27abac2
+  
   $ hg tag -l la
-  pretag hook: HG_HOOKNAME=pretag HG_HOOKTYPE=pretag HG_LOCAL=1 HG_NODE=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10 HG_TAG=la
-  tag hook: HG_HOOKNAME=tag HG_HOOKTYPE=tag HG_LOCAL=1 HG_NODE=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10 HG_TAG=la
+  pretag hook: HG_HOOKNAME=pretag
+  HG_HOOKTYPE=pretag
+  HG_LOCAL=1
+  HG_NODE=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10
+  HG_TAG=la
+  
+  tag hook: HG_HOOKNAME=tag
+  HG_HOOKTYPE=tag
+  HG_LOCAL=1
+  HG_NODE=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10
+  HG_TAG=la
+  
 
 pretag hook can forbid tagging
 
   $ cat >> .hg/hgrc <<EOF
-  > pretag.forbid = sh -c "printenv.py pretag.forbid 1"
+  > pretag.forbid = sh -c "printenv.py --line pretag.forbid 1"
   > EOF
   $ hg tag -d '4 0' fa
-  pretag hook: HG_HOOKNAME=pretag HG_HOOKTYPE=pretag HG_LOCAL=0 HG_NODE=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10 HG_TAG=fa
-  pretag.forbid hook: HG_HOOKNAME=pretag.forbid HG_HOOKTYPE=pretag HG_LOCAL=0 HG_NODE=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10 HG_TAG=fa
+  pretag hook: HG_HOOKNAME=pretag
+  HG_HOOKTYPE=pretag
+  HG_LOCAL=0
+  HG_NODE=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10
+  HG_TAG=fa
+  
+  pretag.forbid hook: HG_HOOKNAME=pretag.forbid
+  HG_HOOKTYPE=pretag
+  HG_LOCAL=0
+  HG_NODE=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10
+  HG_TAG=fa
+  
   abort: pretag.forbid hook exited with status 1
   [255]
   $ hg tag -l fla
-  pretag hook: HG_HOOKNAME=pretag HG_HOOKTYPE=pretag HG_LOCAL=1 HG_NODE=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10 HG_TAG=fla
-  pretag.forbid hook: HG_HOOKNAME=pretag.forbid HG_HOOKTYPE=pretag HG_LOCAL=1 HG_NODE=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10 HG_TAG=fla
+  pretag hook: HG_HOOKNAME=pretag
+  HG_HOOKTYPE=pretag
+  HG_LOCAL=1
+  HG_NODE=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10
+  HG_TAG=fla
+  
+  pretag.forbid hook: HG_HOOKNAME=pretag.forbid
+  HG_HOOKTYPE=pretag
+  HG_LOCAL=1
+  HG_NODE=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10
+  HG_TAG=fla
+  
   abort: pretag.forbid hook exited with status 1
   [255]
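
The hunks above switch the hook commands from ``printenv.py`` to
``printenv.py --line``, so each ``HG_*`` variable is asserted on its own
line (with a trailing blank separator) instead of one long space-joined
line. That is also why multi-line values become visible: the pull
transaction's name embeds the source URL, so ``HG_TXNNAME=pull`` is
followed by a ``file:/*/$TESTTMP/a (glob)`` continuation line. A minimal
sketch of the idea behind the helper (the real ``tests/printenv.py``
parses its options more carefully and supports more of them)::

   import os
   import sys

   args = sys.argv[1:]
   line_mode = '--line' in args          # sketch; the real helper is stricter
   args = [a for a in args if a != '--line']
   name = args[0]                        # hook name, e.g. "pretag"
   exitcode = int(args[1]) if len(args) > 1 else 0

   env = sorted(kv for kv in os.environ.items() if kv[0].startswith('HG_'))
   pairs = ['%s=%s' % kv for kv in env]
   sep = '\n' if line_mode else ' '
   end = '\n\n' if line_mode else '\n'
   sys.stdout.write('%s hook: %s%s' % (name, sep.join(pairs), end))
   sys.exit(exitcode)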
 
@@ -165,22 +412,43 @@
 
   $ cat >> .hg/hgrc <<EOF
   > pretxncommit.forbid0 = sh -c "hg tip -q"
-  > pretxncommit.forbid1 = sh -c "printenv.py pretxncommit.forbid 1"
+  > pretxncommit.forbid1 = sh -c "printenv.py --line pretxncommit.forbid 1"
   > EOF
   $ echo z > z
   $ hg add z
   $ hg -q tip
   4:539e4b31b6dc
   $ hg commit -m 'fail' -d '4 0'
-  precommit hook: HG_HOOKNAME=precommit HG_HOOKTYPE=precommit HG_PARENT1=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10
-  pretxnopen hook: HG_HOOKNAME=pretxnopen HG_HOOKTYPE=pretxnopen HG_TXNID=TXN:$ID$ HG_TXNNAME=commit
-  pretxncommit hook: HG_HOOKNAME=pretxncommit HG_HOOKTYPE=pretxncommit HG_NODE=6f611f8018c10e827fee6bd2bc807f937e761567 HG_PARENT1=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10 HG_PENDING=$TESTTMP/a
+  precommit hook: HG_HOOKNAME=precommit
+  HG_HOOKTYPE=precommit
+  HG_PARENT1=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10
+  
+  pretxnopen hook: HG_HOOKNAME=pretxnopen
+  HG_HOOKTYPE=pretxnopen
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=commit
+  
+  pretxncommit hook: HG_HOOKNAME=pretxncommit
+  HG_HOOKTYPE=pretxncommit
+  HG_NODE=6f611f8018c10e827fee6bd2bc807f937e761567
+  HG_PARENT1=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10
+  HG_PENDING=$TESTTMP/a
+  
   5:6f611f8018c1
   5:6f611f8018c1
-  pretxncommit.forbid hook: HG_HOOKNAME=pretxncommit.forbid1 HG_HOOKTYPE=pretxncommit HG_NODE=6f611f8018c10e827fee6bd2bc807f937e761567 HG_PARENT1=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10 HG_PENDING=$TESTTMP/a
+  pretxncommit.forbid hook: HG_HOOKNAME=pretxncommit.forbid1
+  HG_HOOKTYPE=pretxncommit
+  HG_NODE=6f611f8018c10e827fee6bd2bc807f937e761567
+  HG_PARENT1=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10
+  HG_PENDING=$TESTTMP/a
+  
   transaction abort!
   txnabort Python hook: txnid,txnname
-  txnabort hook: HG_HOOKNAME=txnabort.1 HG_HOOKTYPE=txnabort HG_TXNID=TXN:$ID$ HG_TXNNAME=commit
+  txnabort hook: HG_HOOKNAME=txnabort.1
+  HG_HOOKTYPE=txnabort
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=commit
+  
   rollback completed
   abort: pretxncommit.forbid1 hook exited with status 1
   [255]
@@ -205,11 +473,17 @@
 precommit hook can prevent commit
 
   $ cat >> .hg/hgrc <<EOF
-  > precommit.forbid = sh -c "printenv.py precommit.forbid 1"
+  > precommit.forbid = sh -c "printenv.py --line precommit.forbid 1"
   > EOF
   $ hg commit -m 'fail' -d '4 0'
-  precommit hook: HG_HOOKNAME=precommit HG_HOOKTYPE=precommit HG_PARENT1=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10
-  precommit.forbid hook: HG_HOOKNAME=precommit.forbid HG_HOOKTYPE=precommit HG_PARENT1=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10
+  precommit hook: HG_HOOKNAME=precommit
+  HG_HOOKTYPE=precommit
+  HG_PARENT1=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10
+  
+  precommit.forbid hook: HG_HOOKNAME=precommit.forbid
+  HG_HOOKTYPE=precommit
+  HG_PARENT1=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10
+  
   abort: precommit.forbid hook exited with status 1
   [255]
   $ hg -q tip
@@ -218,26 +492,36 @@
 preupdate hook can prevent update
 
   $ cat >> .hg/hgrc <<EOF
-  > preupdate = sh -c "printenv.py preupdate"
+  > preupdate = sh -c "printenv.py --line preupdate"
   > EOF
   $ hg update 1
-  preupdate hook: HG_HOOKNAME=preupdate HG_HOOKTYPE=preupdate HG_PARENT1=ab228980c14d
+  preupdate hook: HG_HOOKNAME=preupdate
+  HG_HOOKTYPE=preupdate
+  HG_PARENT1=ab228980c14d
+  
   0 files updated, 0 files merged, 2 files removed, 0 files unresolved
 
 update hook
 
   $ cat >> .hg/hgrc <<EOF
-  > update = sh -c "printenv.py update"
+  > update = sh -c "printenv.py --line update"
   > EOF
   $ hg update
-  preupdate hook: HG_HOOKNAME=preupdate HG_HOOKTYPE=preupdate HG_PARENT1=539e4b31b6dc
-  update hook: HG_ERROR=0 HG_HOOKNAME=update HG_HOOKTYPE=update HG_PARENT1=539e4b31b6dc
+  preupdate hook: HG_HOOKNAME=preupdate
+  HG_HOOKTYPE=preupdate
+  HG_PARENT1=539e4b31b6dc
+  
+  update hook: HG_ERROR=0
+  HG_HOOKNAME=update
+  HG_HOOKTYPE=update
+  HG_PARENT1=539e4b31b6dc
+  
   2 files updated, 0 files merged, 0 files removed, 0 files unresolved
 
 pushkey hook
 
   $ cat >> .hg/hgrc <<EOF
-  > pushkey = sh -c "printenv.py pushkey"
+  > pushkey = sh -c "printenv.py --line pushkey"
   > EOF
   $ cd ../b
   $ hg bookmark -r null foo
@@ -245,10 +529,42 @@
   pushing to ../a
   searching for changes
   no changes found
-  pretxnopen hook: HG_HOOKNAME=pretxnopen HG_HOOKTYPE=pretxnopen HG_TXNID=TXN:$ID$ HG_TXNNAME=push
-  pretxnclose hook: HG_BOOKMARK_MOVED=1 HG_BUNDLE2=1 HG_HOOKNAME=pretxnclose HG_HOOKTYPE=pretxnclose HG_PENDING=$TESTTMP/a HG_SOURCE=push HG_TXNID=TXN:$ID$ HG_TXNNAME=push HG_URL=file:$TESTTMP/a
-  pushkey hook: HG_BUNDLE2=1 HG_HOOKNAME=pushkey HG_HOOKTYPE=pushkey HG_KEY=foo HG_NAMESPACE=bookmarks HG_NEW=0000000000000000000000000000000000000000 HG_PUSHKEYCOMPAT=1 HG_SOURCE=push HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/a
-  txnclose hook: HG_BOOKMARK_MOVED=1 HG_BUNDLE2=1 HG_HOOKNAME=txnclose HG_HOOKTYPE=txnclose HG_SOURCE=push HG_TXNID=TXN:$ID$ HG_TXNNAME=push HG_URL=file:$TESTTMP/a
+  pretxnopen hook: HG_HOOKNAME=pretxnopen
+  HG_HOOKTYPE=pretxnopen
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=push
+  
+  pretxnclose hook: HG_BOOKMARK_MOVED=1
+  HG_BUNDLE2=1
+  HG_HOOKNAME=pretxnclose
+  HG_HOOKTYPE=pretxnclose
+  HG_PENDING=$TESTTMP/a
+  HG_SOURCE=push
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=push
+  HG_URL=file:$TESTTMP/a
+  
+  pushkey hook: HG_BUNDLE2=1
+  HG_HOOKNAME=pushkey
+  HG_HOOKTYPE=pushkey
+  HG_KEY=foo
+  HG_NAMESPACE=bookmarks
+  HG_NEW=0000000000000000000000000000000000000000
+  HG_PUSHKEYCOMPAT=1
+  HG_SOURCE=push
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=push
+  HG_URL=file:$TESTTMP/a
+  
+  txnclose hook: HG_BOOKMARK_MOVED=1
+  HG_BUNDLE2=1
+  HG_HOOKNAME=txnclose
+  HG_HOOKTYPE=txnclose
+  HG_SOURCE=push
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=push
+  HG_URL=file:$TESTTMP/a
+  
   exporting bookmark foo
   [1]
   $ cd ../a
@@ -256,16 +572,35 @@
 listkeys hook
 
   $ cat >> .hg/hgrc <<EOF
-  > listkeys = sh -c "printenv.py listkeys"
+  > listkeys = sh -c "printenv.py --line listkeys"
   > EOF
   $ hg bookmark -r null bar
-  pretxnopen hook: HG_HOOKNAME=pretxnopen HG_HOOKTYPE=pretxnopen HG_TXNID=TXN:$ID$ HG_TXNNAME=bookmark
-  pretxnclose hook: HG_BOOKMARK_MOVED=1 HG_HOOKNAME=pretxnclose HG_HOOKTYPE=pretxnclose HG_PENDING=$TESTTMP/a HG_TXNID=TXN:$ID$ HG_TXNNAME=bookmark
-  txnclose hook: HG_BOOKMARK_MOVED=1 HG_HOOKNAME=txnclose HG_HOOKTYPE=txnclose HG_TXNID=TXN:$ID$ HG_TXNNAME=bookmark
+  pretxnopen hook: HG_HOOKNAME=pretxnopen
+  HG_HOOKTYPE=pretxnopen
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=bookmark
+  
+  pretxnclose hook: HG_BOOKMARK_MOVED=1
+  HG_HOOKNAME=pretxnclose
+  HG_HOOKTYPE=pretxnclose
+  HG_PENDING=$TESTTMP/a
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=bookmark
+  
+  txnclose hook: HG_BOOKMARK_MOVED=1
+  HG_HOOKNAME=txnclose
+  HG_HOOKTYPE=txnclose
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=bookmark
+  
   $ cd ../b
   $ hg pull -B bar ../a
   pulling from ../a
-  listkeys hook: HG_HOOKNAME=listkeys HG_HOOKTYPE=listkeys HG_NAMESPACE=bookmarks HG_VALUES={'bar': '0000000000000000000000000000000000000000', 'foo': '0000000000000000000000000000000000000000'}
+  listkeys hook: HG_HOOKNAME=listkeys
+  HG_HOOKTYPE=listkeys
+  HG_NAMESPACE=bookmarks
+  HG_VALUES={'bar': '0000000000000000000000000000000000000000', 'foo': '0000000000000000000000000000000000000000'}
+  
   no changes found
   adding remote bookmark bar
   $ cd ../a
@@ -273,18 +608,41 @@
 test that prepushkey can prevent incoming keys
 
   $ cat >> .hg/hgrc <<EOF
-  > prepushkey = sh -c "printenv.py prepushkey.forbid 1"
+  > prepushkey = sh -c "printenv.py --line prepushkey.forbid 1"
   > EOF
   $ cd ../b
   $ hg bookmark -r null baz
   $ hg push -B baz ../a
   pushing to ../a
   searching for changes
-  listkeys hook: HG_HOOKNAME=listkeys HG_HOOKTYPE=listkeys HG_NAMESPACE=phases HG_VALUES={'cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b': '1', 'publishing': 'True'}
-  listkeys hook: HG_HOOKNAME=listkeys HG_HOOKTYPE=listkeys HG_NAMESPACE=bookmarks HG_VALUES={'bar': '0000000000000000000000000000000000000000', 'foo': '0000000000000000000000000000000000000000'}
+  listkeys hook: HG_HOOKNAME=listkeys
+  HG_HOOKTYPE=listkeys
+  HG_NAMESPACE=phases
+  HG_VALUES={'cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b': '1', 'publishing': 'True'}
+  
+  listkeys hook: HG_HOOKNAME=listkeys
+  HG_HOOKTYPE=listkeys
+  HG_NAMESPACE=bookmarks
+  HG_VALUES={'bar': '0000000000000000000000000000000000000000', 'foo': '0000000000000000000000000000000000000000'}
+  
   no changes found
-  pretxnopen hook: HG_HOOKNAME=pretxnopen HG_HOOKTYPE=pretxnopen HG_TXNID=TXN:$ID$ HG_TXNNAME=push
-  prepushkey.forbid hook: HG_BUNDLE2=1 HG_HOOKNAME=prepushkey HG_HOOKTYPE=prepushkey HG_KEY=baz HG_NAMESPACE=bookmarks HG_NEW=0000000000000000000000000000000000000000 HG_PUSHKEYCOMPAT=1 HG_SOURCE=push HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/a
+  pretxnopen hook: HG_HOOKNAME=pretxnopen
+  HG_HOOKTYPE=pretxnopen
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=push
+  
+  prepushkey.forbid hook: HG_BUNDLE2=1
+  HG_HOOKNAME=prepushkey
+  HG_HOOKTYPE=prepushkey
+  HG_KEY=baz
+  HG_NAMESPACE=bookmarks
+  HG_NEW=0000000000000000000000000000000000000000
+  HG_PUSHKEYCOMPAT=1
+  HG_SOURCE=push
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=push
+  HG_URL=file:$TESTTMP/a
+  
   abort: prepushkey hook exited with status 1
   [255]
   $ cd ../a
@@ -292,16 +650,34 @@
 test that prelistkeys can prevent listing keys
 
   $ cat >> .hg/hgrc <<EOF
-  > prelistkeys = sh -c "printenv.py prelistkeys.forbid 1"
+  > prelistkeys = sh -c "printenv.py --line prelistkeys.forbid 1"
   > EOF
   $ hg bookmark -r null quux
-  pretxnopen hook: HG_HOOKNAME=pretxnopen HG_HOOKTYPE=pretxnopen HG_TXNID=TXN:$ID$ HG_TXNNAME=bookmark
-  pretxnclose hook: HG_BOOKMARK_MOVED=1 HG_HOOKNAME=pretxnclose HG_HOOKTYPE=pretxnclose HG_PENDING=$TESTTMP/a HG_TXNID=TXN:$ID$ HG_TXNNAME=bookmark
-  txnclose hook: HG_BOOKMARK_MOVED=1 HG_HOOKNAME=txnclose HG_HOOKTYPE=txnclose HG_TXNID=TXN:$ID$ HG_TXNNAME=bookmark
+  pretxnopen hook: HG_HOOKNAME=pretxnopen
+  HG_HOOKTYPE=pretxnopen
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=bookmark
+  
+  pretxnclose hook: HG_BOOKMARK_MOVED=1
+  HG_HOOKNAME=pretxnclose
+  HG_HOOKTYPE=pretxnclose
+  HG_PENDING=$TESTTMP/a
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=bookmark
+  
+  txnclose hook: HG_BOOKMARK_MOVED=1
+  HG_HOOKNAME=txnclose
+  HG_HOOKTYPE=txnclose
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=bookmark
+  
   $ cd ../b
   $ hg pull -B quux ../a
   pulling from ../a
-  prelistkeys.forbid hook: HG_HOOKNAME=prelistkeys HG_HOOKTYPE=prelistkeys HG_NAMESPACE=bookmarks
+  prelistkeys.forbid hook: HG_HOOKNAME=prelistkeys
+  HG_HOOKTYPE=prelistkeys
+  HG_NAMESPACE=bookmarks
+  
   abort: prelistkeys hook exited with status 1
   [255]
   $ cd ../a
@@ -314,12 +690,19 @@
   3:07f3376c1e65
   $ cat > .hg/hgrc <<EOF
   > [hooks]
-  > prechangegroup.forbid = sh -c "printenv.py prechangegroup.forbid 1"
+  > prechangegroup.forbid = sh -c "printenv.py --line prechangegroup.forbid 1"
   > EOF
   $ hg pull ../a
   pulling from ../a
   searching for changes
-  prechangegroup.forbid hook: HG_HOOKNAME=prechangegroup.forbid HG_HOOKTYPE=prechangegroup HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/a
+  prechangegroup.forbid hook: HG_HOOKNAME=prechangegroup.forbid
+  HG_HOOKTYPE=prechangegroup
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  file:/*/$TESTTMP/a (glob)
+  HG_URL=file:$TESTTMP/a
+  
   abort: prechangegroup.forbid hook exited with status 1
   [255]
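
All of the ``*.forbid`` hooks follow the same veto pattern: a ``pre*``
hook that exits non-zero (for shell hooks) or returns a truthy value
(for in-process Python hooks) aborts the operation before it takes
effect, producing the ``abort: ... hook exited with status 1`` lines
seen above. A sketch of the in-process equivalent, using a hypothetical
module name::

   # forbidhooks.py -- enable with:
   #   [hooks]
   #   prechangegroup.forbid = python:forbidhooks.forbid
   def forbid(ui, repo, hooktype=None, **kwargs):
       """Veto the operation; a truthy return fails a pre* hook."""
       ui.write(b'%s vetoed by policy\n' % (hooktype or b'hook'))
       return True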
 
@@ -329,7 +712,7 @@
   $ cat > .hg/hgrc <<EOF
   > [hooks]
   > pretxnchangegroup.forbid0 = hg tip -q
-  > pretxnchangegroup.forbid1 = sh -c "printenv.py pretxnchangegroup.forbid 1"
+  > pretxnchangegroup.forbid1 = sh -c "printenv.py --line pretxnchangegroup.forbid 1"
   > EOF
   $ hg pull ../a
   pulling from ../a
@@ -339,7 +722,17 @@
   adding file changes
   added 1 changesets with 1 changes to 1 files
   4:539e4b31b6dc
-  pretxnchangegroup.forbid hook: HG_HOOKNAME=pretxnchangegroup.forbid1 HG_HOOKTYPE=pretxnchangegroup HG_NODE=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10 HG_NODE_LAST=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10 HG_PENDING=$TESTTMP/b HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=file:$TESTTMP/a
+  pretxnchangegroup.forbid hook: HG_HOOKNAME=pretxnchangegroup.forbid1
+  HG_HOOKTYPE=pretxnchangegroup
+  HG_NODE=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10
+  HG_NODE_LAST=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10
+  HG_PENDING=$TESTTMP/b
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  file:/*/$TESTTMP/a (glob)
+  HG_URL=file:$TESTTMP/a
+  
   transaction abort!
   rollback completed
   abort: pretxnchangegroup.forbid1 hook exited with status 1
@@ -352,14 +745,21 @@
   $ rm .hg/hgrc
   $ cat > ../a/.hg/hgrc <<EOF
   > [hooks]
-  > preoutgoing = sh -c "printenv.py preoutgoing"
-  > outgoing = sh -c "printenv.py outgoing"
+  > preoutgoing = sh -c "printenv.py --line preoutgoing"
+  > outgoing = sh -c "printenv.py --line outgoing"
   > EOF
   $ hg pull ../a
   pulling from ../a
   searching for changes
-  preoutgoing hook: HG_HOOKNAME=preoutgoing HG_HOOKTYPE=preoutgoing HG_SOURCE=pull
-  outgoing hook: HG_HOOKNAME=outgoing HG_HOOKTYPE=outgoing HG_NODE=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10 HG_SOURCE=pull
+  preoutgoing hook: HG_HOOKNAME=preoutgoing
+  HG_HOOKTYPE=preoutgoing
+  HG_SOURCE=pull
+  
+  outgoing hook: HG_HOOKNAME=outgoing
+  HG_HOOKTYPE=outgoing
+  HG_NODE=539e4b31b6dc99b3cfbaa6b53cbc1c1f9a1e3a10
+  HG_SOURCE=pull
+  
   adding changesets
   adding manifests
   adding file changes
@@ -373,13 +773,19 @@
 preoutgoing hook can prevent outgoing changes
 
   $ cat >> ../a/.hg/hgrc <<EOF
-  > preoutgoing.forbid = sh -c "printenv.py preoutgoing.forbid 1"
+  > preoutgoing.forbid = sh -c "printenv.py --line preoutgoing.forbid 1"
   > EOF
   $ hg pull ../a
   pulling from ../a
   searching for changes
-  preoutgoing hook: HG_HOOKNAME=preoutgoing HG_HOOKTYPE=preoutgoing HG_SOURCE=pull
-  preoutgoing.forbid hook: HG_HOOKNAME=preoutgoing.forbid HG_HOOKTYPE=preoutgoing HG_SOURCE=pull
+  preoutgoing hook: HG_HOOKNAME=preoutgoing
+  HG_HOOKTYPE=preoutgoing
+  HG_SOURCE=pull
+  
+  preoutgoing.forbid hook: HG_HOOKNAME=preoutgoing.forbid
+  HG_HOOKTYPE=preoutgoing
+  HG_SOURCE=pull
+  
   abort: preoutgoing.forbid hook exited with status 1
   [255]
 
@@ -388,12 +794,19 @@
   $ cd ..
   $ cat > a/.hg/hgrc <<EOF
   > [hooks]
-  > preoutgoing = sh -c "printenv.py preoutgoing"
-  > outgoing = sh -c "printenv.py outgoing"
+  > preoutgoing = sh -c "printenv.py --line preoutgoing"
+  > outgoing = sh -c "printenv.py --line outgoing"
   > EOF
   $ hg clone a c
-  preoutgoing hook: HG_HOOKNAME=preoutgoing HG_HOOKTYPE=preoutgoing HG_SOURCE=clone
-  outgoing hook: HG_HOOKNAME=outgoing HG_HOOKTYPE=outgoing HG_NODE=0000000000000000000000000000000000000000 HG_SOURCE=clone
+  preoutgoing hook: HG_HOOKNAME=preoutgoing
+  HG_HOOKTYPE=preoutgoing
+  HG_SOURCE=clone
+  
+  outgoing hook: HG_HOOKNAME=outgoing
+  HG_HOOKTYPE=outgoing
+  HG_NODE=0000000000000000000000000000000000000000
+  HG_SOURCE=clone
+  
   updating to branch default
   3 files updated, 0 files merged, 0 files removed, 0 files unresolved
   $ rm -rf c
@@ -401,11 +814,17 @@
 preoutgoing hook can prevent outgoing changes for local clones
 
   $ cat >> a/.hg/hgrc <<EOF
-  > preoutgoing.forbid = sh -c "printenv.py preoutgoing.forbid 1"
+  > preoutgoing.forbid = sh -c "printenv.py --line preoutgoing.forbid 1"
   > EOF
   $ hg clone a zzz
-  preoutgoing hook: HG_HOOKNAME=preoutgoing HG_HOOKTYPE=preoutgoing HG_SOURCE=clone
-  preoutgoing.forbid hook: HG_HOOKNAME=preoutgoing.forbid HG_HOOKTYPE=preoutgoing HG_SOURCE=clone
+  preoutgoing hook: HG_HOOKNAME=preoutgoing
+  HG_HOOKTYPE=preoutgoing
+  HG_SOURCE=clone
+  
+  preoutgoing.forbid hook: HG_HOOKNAME=preoutgoing.forbid
+  HG_HOOKTYPE=preoutgoing
+  HG_SOURCE=clone
+  
   abort: preoutgoing.forbid hook exited with status 1
   [255]
 
@@ -452,7 +871,7 @@
   > def printtags(ui, repo, **args):
   >     ui.write(b'[%s]\n' % b', '.join(sorted(repo.tags())))
   > 
-  > class container:
+  > class container(object):
   >     unreachable = 1
   > EOF
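
The ``class container:`` to ``class container(object):`` change keeps
the embedded hook module behaving identically on both Python versions:
under Python 2 a bare ``class C:`` statement creates an old-style
class, while inheriting from ``object`` explicitly makes it new-style
everywhere. For example::

   class Old:            # old-style on Python 2, new-style on Python 3
       pass

   class New(object):    # new-style under both interpreters
       pass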
 
@@ -690,7 +1109,7 @@
 
   $ hg up null
   loading update.ne hook failed:
-  abort: $ENOENT$: $TESTTMP/d/repo/nonexistent.py
+  abort: $ENOENT$: '$TESTTMP/d/repo/nonexistent.py'
   [255]
 
   $ hg id
@@ -780,10 +1199,16 @@
   $ cd ..
   $ cat << EOF >> hgrc-with-post-init-hook
   > [hooks]
-  > post-init = sh -c "printenv.py post-init"
+  > post-init = sh -c "printenv.py --line post-init"
   > EOF
   $ HGRCPATH=hgrc-with-post-init-hook hg init to
-  post-init hook: HG_ARGS=init to HG_HOOKNAME=post-init HG_HOOKTYPE=post-init HG_OPTS={'insecure': None, 'remotecmd': '', 'ssh': ''} HG_PATS=['to'] HG_RESULT=0
+  post-init hook: HG_ARGS=init to
+  HG_HOOKNAME=post-init
+  HG_HOOKTYPE=post-init
+  HG_OPTS={'insecure': None, 'remotecmd': '', 'ssh': ''}
+  HG_PATS=['to']
+  HG_RESULT=0
+  
 
 new commits must be visible in pretxnchangegroup (issue3428)
 
--- a/tests/test-http-api-httpv2.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-http-api-httpv2.t	Wed Apr 17 13:41:18 2019 -0400
@@ -18,6 +18,7 @@
   >     user-agent: test
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /api/exp-http-v2-0003 HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     user-agent: test\r\n
@@ -46,6 +47,7 @@
   >     user-agent: test
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     POST /api/exp-http-v2-0003/ro/badcommand HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     user-agent: test\r\n
@@ -67,6 +69,7 @@
   >     user-agent: test
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /api/exp-http-v2-0003/ro/customreadonly HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     user-agent: test\r\n
@@ -88,6 +91,7 @@
   >     user-agent: test
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     POST /api/exp-http-v2-0003/ro/customreadonly HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     user-agent: test\r\n
@@ -110,6 +114,7 @@
   >     user-agent: test
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     POST /api/exp-http-v2-0003/ro/customreadonly HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     accept: invalid\r\n
@@ -134,6 +139,7 @@
   >     content-type: badmedia
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     POST /api/exp-http-v2-0003/ro/customreadonly HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     accept: application/mercurial-exp-framing-0006\r\n
@@ -160,6 +166,7 @@
   >     frame 1 1 stream-begin command-request new cbor:{b'name': b'customreadonly'}
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     POST /api/exp-http-v2-0003/ro/customreadonly HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     *\r\n (glob)
@@ -196,6 +203,7 @@
   > EOF
   creating http peer for wire protocol version 2
   sending customreadonly command
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     POST /api/exp-http-v2-0003/ro/customreadonly HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     accept: application/mercurial-exp-framing-0006\r\n
@@ -216,23 +224,19 @@
   s>     \t\x00\x00\x01\x00\x02\x01\x92
   s>     Hidentity
   s>     \r\n
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
   s>     13\r\n
   s>     \x0b\x00\x00\x01\x00\x02\x041
   s>     \xa1FstatusBok
   s>     \r\n
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   s>     27\r\n
   s>     \x1f\x00\x00\x01\x00\x02\x041
   s>     X\x1dcustomreadonly bytes response
   s>     \r\n
-  received frame(size=31; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   s>     8\r\n
   s>     \x00\x00\x00\x01\x00\x02\x002
   s>     \r\n
   s>     0\r\n
   s>     \r\n
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   response: gen[
     b'customreadonly bytes response'
   ]
@@ -247,6 +251,7 @@
   >     user-agent: test
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /api/exp-http-v2-0003/rw/customreadonly HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     user-agent: test\r\n
@@ -268,6 +273,7 @@
   >     user-agent: test
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /api/exp-http-v2-0003/rw/badcommand HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     user-agent: test\r\n
@@ -289,6 +295,7 @@
   >     user-agent: test
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     POST /api/exp-http-v2-0003/rw/customreadonly HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     user-agent: test\r\n
@@ -327,6 +334,7 @@
   >     frame 1 1 stream-begin command-request new cbor:{b'name': b'customreadonly'}
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     POST /api/exp-http-v2-0003/rw/customreadonly HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     accept: application/mercurial-exp-framing-0006\r\n
@@ -366,6 +374,7 @@
   >     accept: $MEDIATYPE
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     POST /api/exp-http-v2-0003/rw/badcommand HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     accept: application/mercurial-exp-framing-0006\r\n
@@ -388,6 +397,7 @@
   >     user-agent: test
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     POST /api/exp-http-v2-0003/ro/debugreflect HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     user-agent: test\r\n
@@ -428,6 +438,7 @@
   >     frame 1 1 stream-begin command-request new cbor:{b'name': b'command1', b'args': {b'foo': b'val1', b'bar1': b'val'}}
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     POST /api/exp-http-v2-0003/ro/debugreflect HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     accept: application/mercurial-exp-framing-0006\r\n
@@ -459,6 +470,7 @@
   >     frame 1 1 stream-begin command-request new cbor:{b'name': b'customreadonly'}
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     POST /api/exp-http-v2-0003/ro/customreadonly HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     accept: application/mercurial-exp-framing-0006\r\n
@@ -501,6 +513,7 @@
   >     frame 3 1 0 command-request new cbor:{b'name': b'customreadonly'}
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     POST /api/exp-http-v2-0003/ro/multirequest HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     *\r\n (glob)
@@ -554,6 +567,7 @@
   >     frame 1 1 0 command-request continuation IbookmarksDnameHlistkeys
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     POST /api/exp-http-v2-0003/ro/multirequest HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     accept: application/mercurial-exp-framing-0006\r\n
@@ -619,6 +633,7 @@
   >     frame 1 1 stream-begin command-request new cbor:{b'name': b'pushkey'}
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     POST /api/exp-http-v2-0003/ro/multirequest HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     accept: application/mercurial-exp-framing-0006\r\n
@@ -645,6 +660,7 @@
   creating http peer for wire protocol version 2
   sending heads command
   wire protocol version 2 encoder referenced in config (badencoder) is not known; ignoring
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     POST /api/exp-http-v2-0003/ro/heads HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     accept: application/mercurial-exp-framing-0006\r\n
@@ -665,23 +681,19 @@
   s>     \t\x00\x00\x01\x00\x02\x01\x92
   s>     Hidentity
   s>     \r\n
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
   s>     13\r\n
   s>     \x0b\x00\x00\x01\x00\x02\x041
   s>     \xa1FstatusBok
   s>     \r\n
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   s>     1e\r\n
   s>     \x16\x00\x00\x01\x00\x02\x041
   s>     \x81T\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
   s>     \r\n
-  received frame(size=22; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   s>     8\r\n
   s>     \x00\x00\x00\x01\x00\x02\x002
   s>     \r\n
   s>     0\r\n
   s>     \r\n
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   response: [
     b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
   ]
@@ -694,6 +706,7 @@
   > EOF
   creating http peer for wire protocol version 2
   sending heads command
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     POST /api/exp-http-v2-0003/ro/heads HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     accept: application/mercurial-exp-framing-0006\r\n
@@ -714,12 +727,10 @@
   s>     \t\x00\x00\x01\x00\x02\x01\x92
   s>     Hzstd-8mb
   s>     \r\n
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
   s>     25\r\n
   s>     \x1d\x00\x00\x01\x00\x02\x042
-  s>     (\xb5/\xfd\x00P\xa4\x00\x00p\xa1FstatusBok\x81T\x00\x01\x00\tP\x02
+  s>     (\xb5/\xfd\x00X\xa4\x00\x00p\xa1FstatusBok\x81T\x00\x01\x00\tP\x02
   s>     \r\n
-  received frame(size=29; request=1; stream=2; streamflags=encoded; type=command-response; flags=eos)
   s>     0\r\n
   s>     \r\n
   response: [
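
The expected-output annotations carrying these hunks are run-tests
conventions: ``(glob)`` matches the line with ``*``/``?`` wildcards,
``(?)`` makes the line optional, and ``(feature !)`` keeps the line
only for runs where every listed feature holds (``py36``,
``py3 no-py36`` and ``no-py3`` partition the Python versions in the
hunks below). A rough sketch of the ``(glob)`` matching, under the
assumption that ``*`` and ``?`` are the only metacharacters (the real
matcher in ``tests/run-tests.py`` handles more cases)::

   import re

   def globmatch(expected, actual):
       # '*' matches any run of characters, '?' any single character,
       # everything else is literal
       pattern = ''.join(
           '.*' if ch == '*' else '.' if ch == '?' else re.escape(ch)
           for ch in expected
       )
       return re.fullmatch(pattern, actual) is not None

   assert globmatch('readline(7 from *) -> (7) Accept-',
                    'readline(7 from 65537) -> (7) Accept-')
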
--- a/tests/test-http-api.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-http-api.t	Wed Apr 17 13:41:18 2019 -0400
@@ -156,6 +156,7 @@
   >     user-agent: test
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /api HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     user-agent: test\r\n
@@ -177,6 +178,7 @@
   >     user-agent: test
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /api/ HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     user-agent: test\r\n
@@ -200,6 +202,7 @@
   >     user-agent: test
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /api/unknown HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     user-agent: test\r\n
@@ -222,6 +225,7 @@
   >     user-agent: test
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /api/exp-http-v2-0003 HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     user-agent: test\r\n
@@ -255,6 +259,7 @@
   >     user-agent: test
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /api HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     user-agent: test\r\n
@@ -276,6 +281,7 @@
   >     user-agent: test
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /api/ HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     user-agent: test\r\n
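
The test-http-bad-server.t hunks below are dominated by one behavioural
split: on Python 3.6+ the instrumented server sends each response head
in a single ``sendall(...)`` call, while earlier Pythons issue a series
of ``write(...)`` calls, so the same response is now asserted three
ways behind the ``(py36 !)``, ``(py3 no-py36 !)`` and ``(no-py3 !)``
gates. Conceptually the instrumentation wraps the socket, logs every
send against its byte budget, and cuts the connection once the budget
is spent; a hedged sketch of that idea (not the actual
``tests/badserverext.py``)::

   class LoggingSocket(object):
       """Log sends as 'sendall(n from m) -> (left) data' and close
       the connection once a configured byte budget is exhausted."""

       def __init__(self, realsock, logfh, budget=None):
           self._sock = realsock
           self._log = logfh
           self._budget = budget  # None means unlimited

       def sendall(self, data):
           if self._budget is None:
               self._log.write('sendall(%d) -> %r\n' % (len(data), data))
               self._sock.sendall(data)
               return
           sent = data[:self._budget]
           self._budget -= len(sent)
           self._log.write('sendall(%d from %d) -> (%d) %r\n'
                           % (len(sent), len(data), self._budget, sent))
           self._sock.sendall(sent)
           if len(sent) < len(data):
               self._log.write('write limit reached; closing socket\n')
               self._sock.close()
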
--- a/tests/test-http-bad-server.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-http-bad-server.t	Wed Apr 17 13:41:18 2019 -0400
@@ -94,7 +94,7 @@
 
   $ cat error.log
   readline(40 from 65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
-  readline(7 from -1) -> (7) Accept-
+  readline(7 from *) -> (7) Accept- (glob)
   read limit reached; closing socket
 
   $ rm -f error.log
@@ -111,28 +111,32 @@
 
   $ cat error.log
   readline(210 from 65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
-  readline(177 from -1) -> (27) Accept-Encoding: identity\r\n
-  readline(150 from -1) -> (35) accept: application/mercurial-0.1\r\n
-  readline(115 from -1) -> (*) host: localhost:$HGPORT\r\n (glob)
-  readline(* from -1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
-  readline(* from -1) -> (2) \r\n (glob)
-  write(36) -> HTTP/1.1 200 Script output follows\r\n
-  write(23) -> Server: badhttpserver\r\n
-  write(37) -> Date: $HTTP_DATE$\r\n
-  write(41) -> Content-Type: application/mercurial-0.1\r\n
-  write(21) -> Content-Length: 450\r\n
-  write(2) -> \r\n
-  write(450) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash
+  readline(177 from *) -> (27) Accept-Encoding: identity\r\n (glob)
+  readline(150 from *) -> (35) accept: application/mercurial-0.1\r\n (glob)
+  readline(115 from *) -> (*) host: localhost:$HGPORT\r\n (glob)
+  readline(* from *) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
+  readline(* from *) -> (2) \r\n (glob)
+  sendall(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 450\r\n\r\n (py36 !)
+  sendall(450) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (py36 !)
+  write(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 450\r\n\r\n (py3 no-py36 !)
+  write(450) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (py3 no-py36 !)
+  write(36) -> HTTP/1.1 200 Script output follows\r\n (no-py3 !)
+  write(23) -> Server: badhttpserver\r\n (no-py3 !)
+  write(37) -> Date: $HTTP_DATE$\r\n (no-py3 !)
+  write(41) -> Content-Type: application/mercurial-0.1\r\n (no-py3 !)
+  write(21) -> Content-Length: 450\r\n (no-py3 !)
+  write(2) -> \r\n (no-py3 !)
+  write(450) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (no-py3 !)
   readline(4? from 65537) -> (26) GET /?cmd=batch HTTP/1.1\r\n (glob)
-  readline(1? from -1) -> (1?) Accept-Encoding* (glob)
+  readline(1? from *) -> (1?) Accept-Encoding* (glob)
   read limit reached; closing socket
   readline(223 from 65537) -> (26) GET /?cmd=batch HTTP/1.1\r\n
-  readline(197 from -1) -> (27) Accept-Encoding: identity\r\n
-  readline(170 from -1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
-  readline(141 from -1) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n
-  readline(100 from -1) -> (61) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull\r\n
-  readline(39 from -1) -> (35) accept: application/mercurial-0.1\r\n
-  readline(4 from -1) -> (4) host
+  readline(197 from *) -> (27) Accept-Encoding: identity\r\n (glob)
+  readline(170 from *) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n (glob)
+  readline(141 from *) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n (glob)
+  readline(100 from *) -> (61) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull\r\n (glob)
+  readline(39 from *) -> (35) accept: application/mercurial-0.1\r\n (glob)
+  readline(4 from *) -> (4) host (glob)
   read limit reached; closing socket
 
   $ rm -f error.log
@@ -152,46 +156,54 @@
   readline(1 from -1) -> (1) x (?)
   readline(1 from -1) -> (1) x (?)
   readline(308 from 65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
-  readline(275 from -1) -> (27) Accept-Encoding: identity\r\n
-  readline(248 from -1) -> (35) accept: application/mercurial-0.1\r\n
-  readline(213 from -1) -> (*) host: localhost:$HGPORT\r\n (glob)
-  readline(* from -1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
-  readline(* from -1) -> (2) \r\n (glob)
-  write(36) -> HTTP/1.1 200 Script output follows\r\n
-  write(23) -> Server: badhttpserver\r\n
-  write(37) -> Date: $HTTP_DATE$\r\n
-  write(41) -> Content-Type: application/mercurial-0.1\r\n
-  write(21) -> Content-Length: 450\r\n
-  write(2) -> \r\n
-  write(450) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash
+  readline(275 from *) -> (27) Accept-Encoding: identity\r\n (glob)
+  readline(248 from *) -> (35) accept: application/mercurial-0.1\r\n (glob)
+  readline(213 from *) -> (*) host: localhost:$HGPORT\r\n (glob)
+  readline(* from *) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
+  readline(* from *) -> (2) \r\n (glob)
+  sendall(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 450\r\n\r\n (py36 !)
+  sendall(450) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (py36 !)
+  write(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 450\r\n\r\n (py3 no-py36 !)
+  write(450) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (py3 no-py36 !)
+  write(36) -> HTTP/1.1 200 Script output follows\r\n (no-py3 !)
+  write(23) -> Server: badhttpserver\r\n (no-py3 !)
+  write(37) -> Date: $HTTP_DATE$\r\n (no-py3 !)
+  write(41) -> Content-Type: application/mercurial-0.1\r\n (no-py3 !)
+  write(21) -> Content-Length: 450\r\n (no-py3 !)
+  write(2) -> \r\n (no-py3 !)
+  write(450) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (no-py3 !)
   readline(13? from 65537) -> (26) GET /?cmd=batch HTTP/1.1\r\n (glob)
-  readline(1?? from -1) -> (27) Accept-Encoding: identity\r\n (glob)
-  readline(8? from -1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n (glob)
-  readline(5? from -1) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n (glob)
-  readline(1? from -1) -> (1?) x-hgproto-1:* (glob)
+  readline(1?? from *) -> (27) Accept-Encoding: identity\r\n (glob)
+  readline(8? from *) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n (glob)
+  readline(5? from *) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n (glob)
+  readline(1? from *) -> (1?) x-hgproto-1:* (glob)
   read limit reached; closing socket
   readline(317 from 65537) -> (26) GET /?cmd=batch HTTP/1.1\r\n
-  readline(291 from -1) -> (27) Accept-Encoding: identity\r\n
-  readline(264 from -1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
-  readline(235 from -1) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n
-  readline(194 from -1) -> (61) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull\r\n
-  readline(133 from -1) -> (35) accept: application/mercurial-0.1\r\n
-  readline(98 from -1) -> (*) host: localhost:$HGPORT\r\n (glob)
-  readline(* from -1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
-  readline(* from -1) -> (2) \r\n (glob)
-  write(36) -> HTTP/1.1 200 Script output follows\r\n
-  write(23) -> Server: badhttpserver\r\n
-  write(37) -> Date: $HTTP_DATE$\r\n
-  write(41) -> Content-Type: application/mercurial-0.1\r\n
-  write(20) -> Content-Length: 42\r\n
-  write(2) -> \r\n
-  write(42) -> 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n;
+  readline(291 from *) -> (27) Accept-Encoding: identity\r\n (glob)
+  readline(264 from *) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n (glob)
+  readline(235 from *) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n (glob)
+  readline(194 from *) -> (61) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull\r\n (glob)
+  readline(133 from *) -> (35) accept: application/mercurial-0.1\r\n (glob)
+  readline(98 from *) -> (*) host: localhost:$HGPORT\r\n (glob)
+  readline(* from *) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
+  readline(* from *) -> (2) \r\n (glob)
+  sendall(159) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 42\r\n\r\n (py36 !)
+  sendall(42) -> 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n; (py36 !)
+  write(159) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 42\r\n\r\n (py3 no-py36 !)
+  write(42) -> 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n; (py3 no-py36 !)
+  write(36) -> HTTP/1.1 200 Script output follows\r\n (no-py3 !)
+  write(23) -> Server: badhttpserver\r\n (no-py3 !)
+  write(37) -> Date: $HTTP_DATE$\r\n (no-py3 !)
+  write(41) -> Content-Type: application/mercurial-0.1\r\n (no-py3 !)
+  write(20) -> Content-Length: 42\r\n (no-py3 !)
+  write(2) -> \r\n (no-py3 !)
+  write(42) -> 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n; (no-py3 !)
   readline(* from 65537) -> (*) GET /?cmd=getbundle HTTP* (glob)
   read limit reached; closing socket
   readline(304 from 65537) -> (30) GET /?cmd=getbundle HTTP/1.1\r\n
-  readline(274 from -1) -> (27) Accept-Encoding: identity\r\n
-  readline(247 from -1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
-  readline(218 from -1) -> (218) x-hgarg-1: bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtag
+  readline(274 from *) -> (27) Accept-Encoding: identity\r\n (glob)
+  readline(247 from *) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n (glob)
+  readline(218 from *) -> (218) x-hgarg-1: bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtag (glob)
   read limit reached; closing socket
 
   $ rm -f error.log
@@ -207,41 +219,50 @@
 
   $ killdaemons.py $DAEMON_PIDS
 
-  $ cat error.log
+  $ cat error.log | "$PYTHON" $TESTDIR/filtertraceback.py
   readline(329 from 65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
-  readline(296 from -1) -> (27) Accept-Encoding: identity\r\n
-  readline(269 from -1) -> (35) accept: application/mercurial-0.1\r\n
-  readline(234 from -1) -> (2?) host: localhost:$HGPORT\r\n (glob)
-  readline(* from -1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
-  readline(* from -1) -> (2) \r\n (glob)
-  write(36) -> HTTP/1.1 200 Script output follows\r\n
-  write(23) -> Server: badhttpserver\r\n
-  write(37) -> Date: $HTTP_DATE$\r\n
-  write(41) -> Content-Type: application/mercurial-0.1\r\n
-  write(21) -> Content-Length: 463\r\n
-  write(2) -> \r\n
-  write(463) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx httppostargs known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash
+  readline(296 from *) -> (27) Accept-Encoding: identity\r\n (glob)
+  readline(269 from *) -> (35) accept: application/mercurial-0.1\r\n (glob)
+  readline(234 from *) -> (2?) host: localhost:$HGPORT\r\n (glob)
+  readline(* from *) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
+  readline(* from *) -> (2) \r\n (glob)
+  sendall(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 463\r\n\r\n (py36 !)
+  sendall(463) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx httppostargs known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (py36 !)
+  write(160) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 463\r\n\r\n (py3 no-py36 !)
+  write(463) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx httppostargs known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (py3 no-py36 !)
+  write(36) -> HTTP/1.1 200 Script output follows\r\n (no-py3 !)
+  write(23) -> Server: badhttpserver\r\n (no-py3 !)
+  write(37) -> Date: $HTTP_DATE$\r\n (no-py3 !)
+  write(41) -> Content-Type: application/mercurial-0.1\r\n (no-py3 !)
+  write(21) -> Content-Length: 463\r\n (no-py3 !)
+  write(2) -> \r\n (no-py3 !)
+  write(463) -> batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx httppostargs known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (no-py3 !)
   readline(1?? from 65537) -> (27) POST /?cmd=batch HTTP/1.1\r\n (glob)
-  readline(1?? from -1) -> (27) Accept-Encoding: identity\r\n (glob)
-  readline(1?? from -1) -> (41) content-type: application/mercurial-0.1\r\n (glob)
-  readline(6? from -1) -> (33) vary: X-HgArgs-Post,X-HgProto-1\r\n (glob)
-  readline(3? from -1) -> (19) x-hgargs-post: 28\r\n (glob)
-  readline(1? from -1) -> (1?) x-hgproto-1: * (glob)
+  readline(1?? from *) -> (27) Accept-Encoding: identity\r\n (glob)
+  readline(1?? from *) -> (41) content-type: application/mercurial-0.1\r\n (glob)
+  readline(6? from *) -> (33) vary: X-HgArgs-Post,X-HgProto-1\r\n (glob)
+  readline(3? from *) -> (19) x-hgargs-post: 28\r\n (glob)
+  readline(1? from *) -> (1?) x-hgproto-1: * (glob)
   read limit reached; closing socket
   readline(344 from 65537) -> (27) POST /?cmd=batch HTTP/1.1\r\n
-  readline(317 from -1) -> (27) Accept-Encoding: identity\r\n
-  readline(290 from -1) -> (41) content-type: application/mercurial-0.1\r\n
-  readline(249 from -1) -> (33) vary: X-HgArgs-Post,X-HgProto-1\r\n
-  readline(216 from -1) -> (19) x-hgargs-post: 28\r\n
-  readline(197 from -1) -> (61) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull\r\n
-  readline(136 from -1) -> (35) accept: application/mercurial-0.1\r\n
-  readline(101 from -1) -> (20) content-length: 28\r\n
-  readline(81 from -1) -> (*) host: localhost:$HGPORT\r\n (glob)
-  readline(* from -1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
-  readline(* from -1) -> (2) \r\n (glob)
+  readline(317 from *) -> (27) Accept-Encoding: identity\r\n (glob)
+  readline(290 from *) -> (41) content-type: application/mercurial-0.1\r\n (glob)
+  readline(249 from *) -> (33) vary: X-HgArgs-Post,X-HgProto-1\r\n (glob)
+  readline(216 from *) -> (19) x-hgargs-post: 28\r\n (glob)
+  readline(197 from *) -> (61) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull\r\n (glob)
+  readline(136 from *) -> (35) accept: application/mercurial-0.1\r\n (glob)
+  readline(101 from *) -> (20) content-length: 28\r\n (glob)
+  readline(81 from *) -> (*) host: localhost:$HGPORT\r\n (glob)
+  readline(* from *) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
+  readline(* from *) -> (2) \r\n (glob)
   read(* from 28) -> (*) cmds=* (glob)
   read limit reached, closing socket
-  write(36) -> HTTP/1.1 500 Internal Server Error\r\n
+  $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=batch': (glob)
+  Traceback (most recent call last):
+  Exception: connection closed after receiving N bytes
+  
+  write(126) -> HTTP/1.1 500 Internal Server Error\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nTransfer-Encoding: chunked\r\n\r\n (py3 no-py36 !)
+  write(36) -> HTTP/1.1 500 Internal Server Error\r\n (no-py3 !)
 
   $ rm -f error.log
 
@@ -258,16 +279,23 @@
 
   $ killdaemons.py $DAEMON_PIDS
 
-  $ cat error.log
+  $ cat error.log | "$PYTHON" $TESTDIR/filtertraceback.py
   readline(65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
-  readline(-1) -> (27) Accept-Encoding: identity\r\n
-  readline(-1) -> (35) accept: application/mercurial-0.1\r\n
-  readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
-  readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
-  readline(-1) -> (2) \r\n
-  write(1 from 36) -> (0) H
+  readline(*) -> (27) Accept-Encoding: identity\r\n (glob)
+  readline(*) -> (35) accept: application/mercurial-0.1\r\n (glob)
+  readline(*) -> (2?) host: localhost:$HGPORT\r\n (glob)
+  readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
+  readline(*) -> (2) \r\n (glob)
+  sendall(1 from 160) -> (0) H (py36 !)
+  write(1 from 160) -> (0) H (py3 no-py36 !)
+  write(1 from 36) -> (0) H (no-py3 !)
   write limit reached; closing socket
-  write(36) -> HTTP/1.1 500 Internal Server Error\r\n
+  $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=capabilities': (glob)
+  Traceback (most recent call last):
+  Exception: connection closed after sending N bytes
+  
+  write(286) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 450\r\n\r\nHTTP/1.1 500 Internal Server Error\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nTransfer-Encoding: chunked\r\n\r\n (py3 no-py36 !)
+  write(36) -> HTTP/1.1 500 Internal Server Error\r\n (no-py3 !)
 
   $ rm -f error.log
 
@@ -283,21 +311,29 @@
 
   $ killdaemons.py $DAEMON_PIDS
 
-  $ cat error.log
+  $ cat error.log | "$PYTHON" $TESTDIR/filtertraceback.py
   readline(65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
-  readline(-1) -> (27) Accept-Encoding: identity\r\n
-  readline(-1) -> (35) accept: application/mercurial-0.1\r\n
-  readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
-  readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
-  readline(-1) -> (2) \r\n
-  write(36 from 36) -> (144) HTTP/1.1 200 Script output follows\r\n
-  write(23 from 23) -> (121) Server: badhttpserver\r\n
-  write(37 from 37) -> (84) Date: $HTTP_DATE$\r\n
-  write(41 from 41) -> (43) Content-Type: application/mercurial-0.1\r\n
-  write(21 from 21) -> (22) Content-Length: 450\r\n
-  write(2 from 2) -> (20) \r\n
-  write(20 from 450) -> (0) batch branchmap bund
+  readline(*) -> (27) Accept-Encoding: identity\r\n (glob)
+  readline(*) -> (35) accept: application/mercurial-0.1\r\n (glob)
+  readline(*) -> (2?) host: localhost:$HGPORT\r\n (glob)
+  readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
+  readline(*) -> (2) \r\n (glob)
+  sendall(160 from 160) -> (20) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 450\r\n\r\n (py36 !)
+  sendall(20 from 450) -> (0) batch branchmap bund (py36 !)
+  write(160 from 160) -> (20) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 450\r\n\r\n (py3 no-py36 !)
+  write(20 from 450) -> (0) batch branchmap bund (py3 no-py36 !)
+  write(36 from 36) -> (144) HTTP/1.1 200 Script output follows\r\n (no-py3 !)
+  write(23 from 23) -> (121) Server: badhttpserver\r\n (no-py3 !)
+  write(37 from 37) -> (84) Date: $HTTP_DATE$\r\n (no-py3 !)
+  write(41 from 41) -> (43) Content-Type: application/mercurial-0.1\r\n (no-py3 !)
+  write(21 from 21) -> (22) Content-Length: 450\r\n (no-py3 !)
+  write(2 from 2) -> (20) \r\n (no-py3 !)
+  write(20 from 450) -> (0) batch branchmap bund (no-py3 !)
   write limit reached; closing socket
+  $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=capabilities': (glob)
+  Traceback (most recent call last):
+  Exception: connection closed after sending N bytes
+  
 
   $ rm -f error.log
 
@@ -318,35 +354,46 @@
 
   $ killdaemons.py $DAEMON_PIDS
 
-  $ cat error.log
+  $ cat error.log | "$PYTHON" $TESTDIR/filtertraceback.py
   readline(65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
-  readline(-1) -> (27) Accept-Encoding: identity\r\n
-  readline(-1) -> (35) accept: application/mercurial-0.1\r\n
-  readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
-  readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
-  readline(-1) -> (2) \r\n
-  write(36 from 36) -> (692) HTTP/1.1 200 Script output follows\r\n
-  write(23 from 23) -> (669) Server: badhttpserver\r\n
-  write(37 from 37) -> (632) Date: $HTTP_DATE$\r\n
-  write(41 from 41) -> (591) Content-Type: application/mercurial-0.1\r\n
-  write(21 from 21) -> (570) Content-Length: 450\r\n
-  write(2 from 2) -> (568) \r\n
-  write(450 from 450) -> (118) batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash
+  readline(*) -> (27) Accept-Encoding: identity\r\n (glob)
+  readline(*) -> (35) accept: application/mercurial-0.1\r\n (glob)
+  readline(*) -> (2?) host: localhost:$HGPORT\r\n (glob)
+  readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
+  readline(*) -> (2) \r\n (glob)
+  sendall(160 from 160) -> (568) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 450\r\n\r\n (py36 !)
+  sendall(450 from 450) -> (118) batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (py36 !)
+  write(160 from 160) -> (568) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 450\r\n\r\n (py3 no-py36 !)
+  write(450 from 450) -> (118) batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (py3 no-py36 !)
+  write(36 from 36) -> (692) HTTP/1.1 200 Script output follows\r\n (no-py3 !)
+  write(23 from 23) -> (669) Server: badhttpserver\r\n (no-py3 !)
+  write(37 from 37) -> (632) Date: $HTTP_DATE$\r\n (no-py3 !)
+  write(41 from 41) -> (591) Content-Type: application/mercurial-0.1\r\n (no-py3 !)
+  write(21 from 21) -> (570) Content-Length: 450\r\n (no-py3 !)
+  write(2 from 2) -> (568) \r\n (no-py3 !)
+  write(450 from 450) -> (118) batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (no-py3 !)
   readline(65537) -> (26) GET /?cmd=batch HTTP/1.1\r\n
-  readline(-1) -> (27) Accept-Encoding: identity\r\n
-  readline(-1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
-  readline(-1) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n
-  readline(-1) -> (61) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull\r\n
-  readline(-1) -> (35) accept: application/mercurial-0.1\r\n
-  readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
-  readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
-  readline(-1) -> (2) \r\n
-  write(36 from 36) -> (82) HTTP/1.1 200 Script output follows\r\n
-  write(23 from 23) -> (59) Server: badhttpserver\r\n
-  write(37 from 37) -> (22) Date: $HTTP_DATE$\r\n
-  write(22 from 41) -> (0) Content-Type: applicat
+  readline(*) -> (27) Accept-Encoding: identity\r\n (glob)
+  readline(*) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n (glob)
+  readline(*) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n (glob)
+  readline(*) -> (61) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull\r\n (glob)
+  readline(*) -> (35) accept: application/mercurial-0.1\r\n (glob)
+  readline(*) -> (2?) host: localhost:$HGPORT\r\n (glob)
+  readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
+  readline(*) -> (2) \r\n (glob)
+  sendall(118 from 159) -> (0) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: applicat (py36 !)
+  write(118 from 159) -> (0) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: applicat (py3 no-py36 !)
+  write(36 from 36) -> (82) HTTP/1.1 200 Script output follows\r\n (no-py3 !)
+  write(23 from 23) -> (59) Server: badhttpserver\r\n (no-py3 !)
+  write(37 from 37) -> (22) Date: $HTTP_DATE$\r\n (no-py3 !)
+  write(22 from 41) -> (0) Content-Type: applicat (no-py3 !)
   write limit reached; closing socket
-  write(36) -> HTTP/1.1 500 Internal Server Error\r\n
+  $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=batch': (glob)
+  Traceback (most recent call last):
+  Exception: connection closed after sending N bytes
+  
+  write(285) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 42\r\n\r\nHTTP/1.1 500 Internal Server Error\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nTransfer-Encoding: chunked\r\n\r\n (py3 no-py36 !)
+  write(36) -> HTTP/1.1 500 Internal Server Error\r\n (no-py3 !)
 
   $ rm -f error.log
 
@@ -366,37 +413,49 @@
 
   $ killdaemons.py $DAEMON_PIDS
 
-  $ cat error.log
+  $ cat error.log | "$PYTHON" $TESTDIR/filtertraceback.py
   readline(65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
-  readline(-1) -> (27) Accept-Encoding: identity\r\n
-  readline(-1) -> (35) accept: application/mercurial-0.1\r\n
-  readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
-  readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
-  readline(-1) -> (2) \r\n
-  write(36 from 36) -> (757) HTTP/1.1 200 Script output follows\r\n
-  write(23 from 23) -> (734) Server: badhttpserver\r\n
-  write(37 from 37) -> (697) Date: $HTTP_DATE$\r\n
-  write(41 from 41) -> (656) Content-Type: application/mercurial-0.1\r\n
-  write(21 from 21) -> (635) Content-Length: 450\r\n
-  write(2 from 2) -> (633) \r\n
-  write(450 from 450) -> (183) batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash
+  readline(*) -> (27) Accept-Encoding: identity\r\n (glob)
+  readline(*) -> (35) accept: application/mercurial-0.1\r\n (glob)
+  readline(*) -> (2?) host: localhost:$HGPORT\r\n (glob)
+  readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
+  readline(*) -> (2) \r\n (glob)
+  sendall(160 from 160) -> (633) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 450\r\n\r\n (py36 !)
+  sendall(450 from 450) -> (183) batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (py36 !)
+  write(160 from 160) -> (633) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 450\r\n\r\n (py3 no-py36 !)
+  write(450 from 450) -> (183) batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (py3 no-py36 !)
+  write(36 from 36) -> (757) HTTP/1.1 200 Script output follows\r\n (no-py3 !)
+  write(23 from 23) -> (734) Server: badhttpserver\r\n (no-py3 !)
+  write(37 from 37) -> (697) Date: $HTTP_DATE$\r\n (no-py3 !)
+  write(41 from 41) -> (656) Content-Type: application/mercurial-0.1\r\n (no-py3 !)
+  write(21 from 21) -> (635) Content-Length: 450\r\n (no-py3 !)
+  write(2 from 2) -> (633) \r\n (no-py3 !)
+  write(450 from 450) -> (183) batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (no-py3 !)
   readline(65537) -> (26) GET /?cmd=batch HTTP/1.1\r\n
-  readline(-1) -> (27) Accept-Encoding: identity\r\n
-  readline(-1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
-  readline(-1) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n
-  readline(-1) -> (61) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull\r\n
-  readline(-1) -> (35) accept: application/mercurial-0.1\r\n
-  readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
-  readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
-  readline(-1) -> (2) \r\n
-  write(36 from 36) -> (147) HTTP/1.1 200 Script output follows\r\n
-  write(23 from 23) -> (124) Server: badhttpserver\r\n
-  write(37 from 37) -> (87) Date: $HTTP_DATE$\r\n
-  write(41 from 41) -> (46) Content-Type: application/mercurial-0.1\r\n
-  write(20 from 20) -> (26) Content-Length: 42\r\n
-  write(2 from 2) -> (24) \r\n
-  write(24 from 42) -> (0) 96ee1d7354c4ad7372047672
+  readline(*) -> (27) Accept-Encoding: identity\r\n (glob)
+  readline(*) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n (glob)
+  readline(*) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n (glob)
+  readline(*) -> (61) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull\r\n (glob)
+  readline(*) -> (35) accept: application/mercurial-0.1\r\n (glob)
+  readline(*) -> (2?) host: localhost:$HGPORT\r\n (glob)
+  readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
+  readline(*) -> (2) \r\n (glob)
+  sendall(159 from 159) -> (24) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 42\r\n\r\n (py36 !)
+  sendall(24 from 42) -> (0) 96ee1d7354c4ad7372047672 (py36 !)
+  write(159 from 159) -> (24) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 42\r\n\r\n (py3 no-py36 !)
+  write(24 from 42) -> (0) 96ee1d7354c4ad7372047672 (py3 no-py36 !)
+  write(36 from 36) -> (147) HTTP/1.1 200 Script output follows\r\n (no-py3 !)
+  write(23 from 23) -> (124) Server: badhttpserver\r\n (no-py3 !)
+  write(37 from 37) -> (87) Date: $HTTP_DATE$\r\n (no-py3 !)
+  write(41 from 41) -> (46) Content-Type: application/mercurial-0.1\r\n (no-py3 !)
+  write(20 from 20) -> (26) Content-Length: 42\r\n (no-py3 !)
+  write(2 from 2) -> (24) \r\n (no-py3 !)
+  write(24 from 42) -> (0) 96ee1d7354c4ad7372047672 (no-py3 !)
   write limit reached; closing socket
+  $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=batch': (glob)
+  Traceback (most recent call last):
+  Exception: connection closed after sending N bytes
+  
 
   $ rm -f error.log
 
@@ -418,51 +477,66 @@
 
   $ killdaemons.py $DAEMON_PIDS
 
-  $ cat error.log
+  $ cat error.log | "$PYTHON" $TESTDIR/filtertraceback.py
   readline(65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
-  readline(-1) -> (27) Accept-Encoding: identity\r\n
-  readline(-1) -> (35) accept: application/mercurial-0.1\r\n
-  readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
-  readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
-  readline(-1) -> (2) \r\n
-  write(36 from 36) -> (904) HTTP/1.1 200 Script output follows\r\n
-  write(23 from 23) -> (881) Server: badhttpserver\r\n
-  write(37 from 37) -> (844) Date: $HTTP_DATE$\r\n
-  write(41 from 41) -> (803) Content-Type: application/mercurial-0.1\r\n
-  write(21 from 21) -> (782) Content-Length: 450\r\n
-  write(2 from 2) -> (780) \r\n
-  write(450 from 450) -> (330) batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash
+  readline(*) -> (27) Accept-Encoding: identity\r\n (glob)
+  readline(*) -> (35) accept: application/mercurial-0.1\r\n (glob)
+  readline(*) -> (2?) host: localhost:$HGPORT\r\n (glob)
+  readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
+  readline(*) -> (2) \r\n (glob)
+  sendall(160 from 160) -> (780) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 450\r\n\r\n (py36 !)
+  sendall(450 from 450) -> (330) batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (py36 !)
+  write(160 from 160) -> (780) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 450\r\n\r\n (py3 no-py36 !)
+  write(450 from 450) -> (330) batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (py3 no-py36 !)
+  write(36 from 36) -> (904) HTTP/1.1 200 Script output follows\r\n (no-py3 !)
+  write(23 from 23) -> (881) Server: badhttpserver\r\n (no-py3 !)
+  write(37 from 37) -> (844) Date: $HTTP_DATE$\r\n (no-py3 !)
+  write(41 from 41) -> (803) Content-Type: application/mercurial-0.1\r\n (no-py3 !)
+  write(21 from 21) -> (782) Content-Length: 450\r\n (no-py3 !)
+  write(2 from 2) -> (780) \r\n (no-py3 !)
+  write(450 from 450) -> (330) batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (no-py3 !)
   readline(65537) -> (26) GET /?cmd=batch HTTP/1.1\r\n
-  readline(-1) -> (27) Accept-Encoding: identity\r\n
-  readline(-1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
-  readline(-1) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n
-  readline(-1) -> (61) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull\r\n
-  readline(-1) -> (35) accept: application/mercurial-0.1\r\n
-  readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
-  readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
-  readline(-1) -> (2) \r\n
-  write(36 from 36) -> (294) HTTP/1.1 200 Script output follows\r\n
-  write(23 from 23) -> (271) Server: badhttpserver\r\n
-  write(37 from 37) -> (234) Date: $HTTP_DATE$\r\n
-  write(41 from 41) -> (193) Content-Type: application/mercurial-0.1\r\n
-  write(20 from 20) -> (173) Content-Length: 42\r\n
-  write(2 from 2) -> (171) \r\n
-  write(42 from 42) -> (129) 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n;
+  readline(*) -> (27) Accept-Encoding: identity\r\n (glob)
+  readline(*) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n (glob)
+  readline(*) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n (glob)
+  readline(*) -> (61) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull\r\n (glob)
+  readline(*) -> (35) accept: application/mercurial-0.1\r\n (glob)
+  readline(*) -> (2?) host: localhost:$HGPORT\r\n (glob)
+  readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
+  readline(*) -> (2) \r\n (glob)
+  sendall(159 from 159) -> (171) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 42\r\n\r\n (py36 !)
+  sendall(42 from 42) -> (129) 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n; (py36 !)
+  write(159 from 159) -> (171) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 42\r\n\r\n (py3 no-py36 !)
+  write(42 from 42) -> (129) 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n; (py3 no-py36 !)
+  write(36 from 36) -> (294) HTTP/1.1 200 Script output follows\r\n (no-py3 !)
+  write(23 from 23) -> (271) Server: badhttpserver\r\n (no-py3 !)
+  write(37 from 37) -> (234) Date: $HTTP_DATE$\r\n (no-py3 !)
+  write(41 from 41) -> (193) Content-Type: application/mercurial-0.1\r\n (no-py3 !)
+  write(20 from 20) -> (173) Content-Length: 42\r\n (no-py3 !)
+  write(2 from 2) -> (171) \r\n (no-py3 !)
+  write(42 from 42) -> (129) 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n; (no-py3 !)
   readline(65537) -> (30) GET /?cmd=getbundle HTTP/1.1\r\n
-  readline(-1) -> (27) Accept-Encoding: identity\r\n
-  readline(-1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
-  readline(-1) -> (461) x-hgarg-1: bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Arev-branch-cache%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=96ee1d7354c4ad7372047672c36a1f561e3a6a4c&listkeys=phases%2Cbookmarks\r\n
-  readline(-1) -> (61) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull\r\n
-  readline(-1) -> (35) accept: application/mercurial-0.1\r\n
-  readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
-  readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
-  readline(-1) -> (2) \r\n
-  write(36 from 36) -> (93) HTTP/1.1 200 Script output follows\r\n
-  write(23 from 23) -> (70) Server: badhttpserver\r\n
-  write(37 from 37) -> (33) Date: $HTTP_DATE$\r\n
-  write(33 from 41) -> (0) Content-Type: application/mercuri
+  readline(*) -> (27) Accept-Encoding: identity\r\n (glob)
+  readline(*) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n (glob)
+  readline(*) -> (461) x-hgarg-1: bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Arev-branch-cache%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=96ee1d7354c4ad7372047672c36a1f561e3a6a4c&listkeys=phases%2Cbookmarks\r\n (glob)
+  readline(*) -> (61) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull\r\n (glob)
+  readline(*) -> (35) accept: application/mercurial-0.1\r\n (glob)
+  readline(*) -> (2?) host: localhost:$HGPORT\r\n (glob)
+  readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
+  readline(*) -> (2) \r\n (glob)
+  sendall(129 from 167) -> (0) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercuri (py36 !)
+  write(129 from 167) -> (0) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercuri (py3 no-py36 !)
+  write(36 from 36) -> (93) HTTP/1.1 200 Script output follows\r\n (no-py3 !)
+  write(23 from 23) -> (70) Server: badhttpserver\r\n (no-py3 !)
+  write(37 from 37) -> (33) Date: $HTTP_DATE$\r\n (no-py3 !)
+  write(33 from 41) -> (0) Content-Type: application/mercuri (no-py3 !)
   write limit reached; closing socket
-  write(36) -> HTTP/1.1 500 Internal Server Error\r\n
+  $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob)
+  Traceback (most recent call last):
+  Exception: connection closed after sending N bytes
+  
+  write(293) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\nHTTP/1.1 500 Internal Server Error\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nTransfer-Encoding: chunked\r\n\r\n (py3 no-py36 !)
+  write(36) -> HTTP/1.1 500 Internal Server Error\r\n (no-py3 !)
 
   $ rm -f error.log
 
@@ -478,11 +552,20 @@
 
   $ killdaemons.py $DAEMON_PIDS
 
-  $ tail -4 error.log
-  write(41 from 41) -> (25) Content-Type: application/mercurial-0.2\r\n
-  write(25 from 28) -> (0) Transfer-Encoding: chunke
-  write limit reached; closing socket
-  write(36) -> HTTP/1.1 500 Internal Server Error\r\n
+#if py36
+  $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -3
+  Traceback (most recent call last):
+  Exception: connection closed after sending N bytes
+  
+
+#else
+  $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -4
+  Traceback (most recent call last):
+  Exception: connection closed after sending N bytes
+  
+  write(293) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\nHTTP/1.1 500 Internal Server Error\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nTransfer-Encoding: chunked\r\n\r\n (py3 !)
+  write(36) -> HTTP/1.1 500 Internal Server Error\r\n (no-py3 !)
+#endif
 
   $ rm -f error.log
 
@@ -499,53 +582,68 @@
 
   $ killdaemons.py $DAEMON_PIDS
 
-  $ cat error.log
+  $ cat error.log | "$PYTHON" $TESTDIR/filtertraceback.py
   readline(65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
-  readline(-1) -> (27) Accept-Encoding: identity\r\n
-  readline(-1) -> (35) accept: application/mercurial-0.1\r\n
-  readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
-  readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
-  readline(-1) -> (2) \r\n
-  write(36 from 36) -> (942) HTTP/1.1 200 Script output follows\r\n
-  write(23 from 23) -> (919) Server: badhttpserver\r\n
-  write(37 from 37) -> (882) Date: $HTTP_DATE$\r\n
-  write(41 from 41) -> (841) Content-Type: application/mercurial-0.1\r\n
-  write(21 from 21) -> (820) Content-Length: 450\r\n
-  write(2 from 2) -> (818) \r\n
-  write(450 from 450) -> (368) batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash
+  readline(*) -> (27) Accept-Encoding: identity\r\n (glob)
+  readline(*) -> (35) accept: application/mercurial-0.1\r\n (glob)
+  readline(*) -> (2?) host: localhost:$HGPORT\r\n (glob)
+  readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
+  readline(*) -> (2) \r\n (glob)
+  sendall(160 from 160) -> (818) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 450\r\n\r\n (py36 !)
+  sendall(450 from 450) -> (368) batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (py36 !)
+  write(160 from 160) -> (818) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 450\r\n\r\n (py3 no-py36 !)
+  write(450 from 450) -> (368) batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (py3 no-py36 !)
+  write(36 from 36) -> (942) HTTP/1.1 200 Script output follows\r\n (no-py3 !)
+  write(23 from 23) -> (919) Server: badhttpserver\r\n (no-py3 !)
+  write(37 from 37) -> (882) Date: $HTTP_DATE$\r\n (no-py3 !)
+  write(41 from 41) -> (841) Content-Type: application/mercurial-0.1\r\n (no-py3 !)
+  write(21 from 21) -> (820) Content-Length: 450\r\n (no-py3 !)
+  write(2 from 2) -> (818) \r\n (no-py3 !)
+  write(450 from 450) -> (368) batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (no-py3 !)
   readline(65537) -> (26) GET /?cmd=batch HTTP/1.1\r\n
-  readline(-1) -> (27) Accept-Encoding: identity\r\n
-  readline(-1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
-  readline(-1) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n
-  readline(-1) -> (61) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull\r\n
-  readline(-1) -> (35) accept: application/mercurial-0.1\r\n
-  readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
-  readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
-  readline(-1) -> (2) \r\n
-  write(36 from 36) -> (332) HTTP/1.1 200 Script output follows\r\n
-  write(23 from 23) -> (309) Server: badhttpserver\r\n
-  write(37 from 37) -> (272) Date: $HTTP_DATE$\r\n
-  write(41 from 41) -> (231) Content-Type: application/mercurial-0.1\r\n
-  write(20 from 20) -> (211) Content-Length: 42\r\n
-  write(2 from 2) -> (209) \r\n
-  write(42 from 42) -> (167) 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n;
+  readline(*) -> (27) Accept-Encoding: identity\r\n (glob)
+  readline(*) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n (glob)
+  readline(*) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n (glob)
+  readline(*) -> (61) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull\r\n (glob)
+  readline(*) -> (35) accept: application/mercurial-0.1\r\n (glob)
+  readline(*) -> (2?) host: localhost:$HGPORT\r\n (glob)
+  readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
+  readline(*) -> (2) \r\n (glob)
+  sendall(159 from 159) -> (209) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 42\r\n\r\n (py36 !)
+  sendall(42 from 42) -> (167) 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n; (py36 !)
+  write(159 from 159) -> (209) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 42\r\n\r\n (py3 no-py36 !)
+  write(42 from 42) -> (167) 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n; (py3 no-py36 !)
+  write(36 from 36) -> (332) HTTP/1.1 200 Script output follows\r\n (no-py3 !)
+  write(23 from 23) -> (309) Server: badhttpserver\r\n (no-py3 !)
+  write(37 from 37) -> (272) Date: $HTTP_DATE$\r\n (no-py3 !)
+  write(41 from 41) -> (231) Content-Type: application/mercurial-0.1\r\n (no-py3 !)
+  write(20 from 20) -> (211) Content-Length: 42\r\n (no-py3 !)
+  write(2 from 2) -> (209) \r\n (no-py3 !)
+  write(42 from 42) -> (167) 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n; (no-py3 !)
   readline(65537) -> (30) GET /?cmd=getbundle HTTP/1.1\r\n
-  readline(-1) -> (27) Accept-Encoding: identity\r\n
-  readline(-1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
-  readline(-1) -> (461) x-hgarg-1: bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Arev-branch-cache%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=96ee1d7354c4ad7372047672c36a1f561e3a6a4c&listkeys=phases%2Cbookmarks\r\n
-  readline(-1) -> (61) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull\r\n
-  readline(-1) -> (35) accept: application/mercurial-0.1\r\n
-  readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
-  readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
-  readline(-1) -> (2) \r\n
-  write(36 from 36) -> (131) HTTP/1.1 200 Script output follows\r\n
-  write(23 from 23) -> (108) Server: badhttpserver\r\n
-  write(37 from 37) -> (71) Date: $HTTP_DATE$\r\n
-  write(41 from 41) -> (30) Content-Type: application/mercurial-0.2\r\n
-  write(28 from 28) -> (2) Transfer-Encoding: chunked\r\n
-  write(2 from 2) -> (0) \r\n
+  readline(*) -> (27) Accept-Encoding: identity\r\n (glob)
+  readline(*) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n (glob)
+  readline(*) -> (461) x-hgarg-1: bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Arev-branch-cache%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=96ee1d7354c4ad7372047672c36a1f561e3a6a4c&listkeys=phases%2Cbookmarks\r\n (glob)
+  readline(*) -> (61) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull\r\n (glob)
+  readline(*) -> (35) accept: application/mercurial-0.1\r\n (glob)
+  readline(*) -> (2?) host: localhost:$HGPORT\r\n (glob)
+  readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
+  readline(*) -> (2) \r\n (glob)
+  sendall(167 from 167) -> (0) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n (py36 !)
+  write(167 from 167) -> (0) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n (py3 no-py36 !)
+  write(36 from 36) -> (131) HTTP/1.1 200 Script output follows\r\n (no-py3 !)
+  write(23 from 23) -> (108) Server: badhttpserver\r\n (no-py3 !)
+  write(37 from 37) -> (71) Date: $HTTP_DATE$\r\n (no-py3 !)
+  write(41 from 41) -> (30) Content-Type: application/mercurial-0.2\r\n (no-py3 !)
+  write(28 from 28) -> (2) Transfer-Encoding: chunked\r\n (no-py3 !)
+  write(2 from 2) -> (0) \r\n (no-py3 !)
   write limit reached; closing socket
-  write(36) -> HTTP/1.1 500 Internal Server Error\r\n
+  $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob)
+  Traceback (most recent call last):
+  Exception: connection closed after sending N bytes
+  
+  write(293) -> HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\nHTTP/1.1 500 Internal Server Error\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nTransfer-Encoding: chunked\r\n\r\n (py3 no-py36 !)
+  write(36) -> HTTP/1.1 500 Internal Server Error\r\n (no-py3 !)
 
   $ rm -f error.log
 
@@ -562,56 +660,72 @@
 
   $ killdaemons.py $DAEMON_PIDS
 
-  $ cat error.log
+  $ cat error.log | "$PYTHON" $TESTDIR/filtertraceback.py
   readline(65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
-  readline(-1) -> (27) Accept-Encoding: identity\r\n
-  readline(-1) -> (35) accept: application/mercurial-0.1\r\n
-  readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
-  readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
-  readline(-1) -> (2) \r\n
-  write(36 from 36) -> (966) HTTP/1.1 200 Script output follows\r\n
-  write(23 from 23) -> (943) Server: badhttpserver\r\n
-  write(37 from 37) -> (906) Date: $HTTP_DATE$\r\n
-  write(41 from 41) -> (865) Content-Type: application/mercurial-0.1\r\n
-  write(21 from 21) -> (844) Content-Length: 450\r\n
-  write(2 from 2) -> (842) \r\n
-  write(450 from 450) -> (392) batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash
+  readline(*) -> (27) Accept-Encoding: identity\r\n (glob)
+  readline(*) -> (35) accept: application/mercurial-0.1\r\n (glob)
+  readline(*) -> (2?) host: localhost:$HGPORT\r\n (glob)
+  readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
+  readline(*) -> (2) \r\n (glob)
+  sendall(160 from 160) -> (842) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 450\r\n\r\n (py36 !)
+  sendall(450 from 450) -> (392) batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (py36 !)
+  write(160 from 160) -> (842) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 450\r\n\r\n (py3 no-py36 !)
+  write(450 from 450) -> (392) batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (py3 no-py36 !)
+  write(36 from 36) -> (966) HTTP/1.1 200 Script output follows\r\n (no-py3 !)
+  write(23 from 23) -> (943) Server: badhttpserver\r\n (no-py3 !)
+  write(37 from 37) -> (906) Date: $HTTP_DATE$\r\n (no-py3 !)
+  write(41 from 41) -> (865) Content-Type: application/mercurial-0.1\r\n (no-py3 !)
+  write(21 from 21) -> (844) Content-Length: 450\r\n (no-py3 !)
+  write(2 from 2) -> (842) \r\n (no-py3 !)
+  write(450 from 450) -> (392) batch branchmap $USUAL_BUNDLE2_CAPS_NO_PHASES$ changegroupsubset compression=none getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1 unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash (no-py3 !)
   readline(65537) -> (26) GET /?cmd=batch HTTP/1.1\r\n
-  readline(-1) -> (27) Accept-Encoding: identity\r\n
-  readline(-1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
-  readline(-1) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n
-  readline(-1) -> (61) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull\r\n
-  readline(-1) -> (35) accept: application/mercurial-0.1\r\n
-  readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
-  readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
-  readline(-1) -> (2) \r\n
-  write(36 from 36) -> (356) HTTP/1.1 200 Script output follows\r\n
-  write(23 from 23) -> (333) Server: badhttpserver\r\n
-  write(37 from 37) -> (296) Date: $HTTP_DATE$\r\n
-  write(41 from 41) -> (255) Content-Type: application/mercurial-0.1\r\n
-  write(20 from 20) -> (235) Content-Length: 42\r\n
-  write(2 from 2) -> (233) \r\n
-  write(42 from 42) -> (191) 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n;
+  readline(*) -> (27) Accept-Encoding: identity\r\n (glob)
+  readline(*) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n (glob)
+  readline(*) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n (glob)
+  readline(*) -> (61) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull\r\n (glob)
+  readline(*) -> (35) accept: application/mercurial-0.1\r\n (glob)
+  readline(*) -> (2?) host: localhost:$HGPORT\r\n (glob)
+  readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
+  readline(*) -> (2) \r\n (glob)
+  sendall(159 from 159) -> (233) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 42\r\n\r\n (py36 !)
+  sendall(42 from 42) -> (191) 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n; (py36 !)
+  write(159 from 159) -> (233) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.1\r\nContent-Length: 42\r\n\r\n (py3 no-py36 !)
+  write(36 from 36) -> (356) HTTP/1.1 200 Script output follows\r\n (no-py3 !)
+  write(23 from 23) -> (333) Server: badhttpserver\r\n (no-py3 !)
+  write(37 from 37) -> (296) Date: $HTTP_DATE$\r\n (no-py3 !)
+  write(41 from 41) -> (255) Content-Type: application/mercurial-0.1\r\n (no-py3 !)
+  write(20 from 20) -> (235) Content-Length: 42\r\n (no-py3 !)
+  write(2 from 2) -> (233) \r\n (no-py3 !)
+  write(42 from 42) -> (191) 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n; (no-py3 !)
   readline(65537) -> (30) GET /?cmd=getbundle HTTP/1.1\r\n
-  readline(-1) -> (27) Accept-Encoding: identity\r\n
-  readline(-1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
-  readline(-1) -> (461) x-hgarg-1: bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Arev-branch-cache%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=96ee1d7354c4ad7372047672c36a1f561e3a6a4c&listkeys=phases%2Cbookmarks\r\n
-  readline(-1) -> (61) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull\r\n
-  readline(-1) -> (35) accept: application/mercurial-0.1\r\n
-  readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
-  readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
-  readline(-1) -> (2) \r\n
-  write(36 from 36) -> (155) HTTP/1.1 200 Script output follows\r\n
-  write(23 from 23) -> (132) Server: badhttpserver\r\n
-  write(37 from 37) -> (95) Date: $HTTP_DATE$\r\n
-  write(41 from 41) -> (54) Content-Type: application/mercurial-0.2\r\n
-  write(28 from 28) -> (26) Transfer-Encoding: chunked\r\n
-  write(2 from 2) -> (24) \r\n
-  write(6 from 6) -> (18) 1\\r\\n\x04\\r\\n (esc)
-  write(9 from 9) -> (9) 4\r\nnone\r\n
-  write(9 from 9) -> (0) 4\r\nHG20\r\n
+  readline(*) -> (27) Accept-Encoding: identity\r\n (glob)
+  readline(*) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n (glob)
+  readline(*) -> (461) x-hgarg-1: bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Arev-branch-cache%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=96ee1d7354c4ad7372047672c36a1f561e3a6a4c&listkeys=phases%2Cbookmarks\r\n (glob)
+  readline(*) -> (61) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull\r\n (glob)
+  readline(*) -> (35) accept: application/mercurial-0.1\r\n (glob)
+  readline(*) -> (2?) host: localhost:$HGPORT\r\n (glob)
+  readline(*) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
+  readline(*) -> (2) \r\n (glob)
+  sendall(167 from 167) -> (24) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n (py36 !)
+  sendall(6 from 6) -> (18) 1\\r\\n\x04\\r\\n (esc) (py36 !)
+  sendall(9 from 9) -> (9) 4\r\nnone\r\n (py36 !)
+  sendall(9 from 9) -> (0) 4\r\nHG20\r\n (py36 !)
+  write(167 from 167) -> (24) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n (py3 no-py36 !)
+  write(36 from 36) -> (155) HTTP/1.1 200 Script output follows\r\n (no-py3 !)
+  write(23 from 23) -> (132) Server: badhttpserver\r\n (no-py3 !)
+  write(37 from 37) -> (95) Date: $HTTP_DATE$\r\n (no-py3 !)
+  write(41 from 41) -> (54) Content-Type: application/mercurial-0.2\r\n (no-py3 !)
+  write(28 from 28) -> (26) Transfer-Encoding: chunked\r\n (no-py3 !)
+  write(2 from 2) -> (24) \r\n (no-py3 !)
+  write(6 from 6) -> (18) 1\\r\\n\x04\\r\\n (esc) (no-py3 !)
+  write(9 from 9) -> (9) 4\r\nnone\r\n (no-py3 !)
+  write(9 from 9) -> (0) 4\r\nHG20\r\n (no-py3 !)
   write limit reached; closing socket
-  write(27) -> 15\r\nInternal Server Error\r\n
+  $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob)
+  Traceback (most recent call last):
+  Exception: connection closed after sending N bytes
+  
+  write(27) -> 15\r\nInternal Server Error\r\n (no-py3 !)
 
   $ rm -f error.log
 
@@ -622,20 +736,41 @@
 
   $ hg clone http://localhost:$HGPORT/ clone
   requesting all changes
-  abort: HTTP request error (incomplete response; expected 4 bytes got 3)
+  abort: HTTP request error (incomplete response) (py3 !)
+  abort: HTTP request error (incomplete response; expected 4 bytes got 3) (no-py3 !)
   (this may be an intermittent network failure; if the error persists, consider contacting the network or server operator)
   [255]
 
   $ killdaemons.py $DAEMON_PIDS
 
-  $ tail -7 error.log
-  write(28 from 28) -> (23) Transfer-Encoding: chunked\r\n
-  write(2 from 2) -> (21) \r\n
+#if py36
+  $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -9
+  sendall(167 from 167) -> (21) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n
+  sendall(6 from 6) -> (15) 1\\r\\n\x04\\r\\n (esc)
+  sendall(9 from 9) -> (6) 4\r\nnone\r\n
+  sendall(6 from 9) -> (0) 4\r\nHG2
+  write limit reached; closing socket
+  $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob)
+  Traceback (most recent call last):
+  Exception: connection closed after sending N bytes
+  
+
+#else
+  $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -11
+  readline(65537) -> (2) \r\n (py3 !)
+  write(167 from 167) -> (21) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n (py3 !)
+  write(28 from 28) -> (23) Transfer-Encoding: chunked\r\n (no-py3 !)
+  write(2 from 2) -> (21) \r\n (no-py3 !)
   write(6 from 6) -> (15) 1\\r\\n\x04\\r\\n (esc)
   write(9 from 9) -> (6) 4\r\nnone\r\n
   write(6 from 9) -> (0) 4\r\nHG2
   write limit reached; closing socket
+  $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob)
+  Traceback (most recent call last):
+  Exception: connection closed after sending N bytes
+  
   write(27) -> 15\r\nInternal Server Error\r\n
+#endif
 
   $ rm -f error.log
 
@@ -646,21 +781,43 @@
 
   $ hg clone http://localhost:$HGPORT/ clone
   requesting all changes
-  abort: HTTP request error (incomplete response; expected 4 bytes got 3)
+  abort: HTTP request error (incomplete response) (py3 !)
+  abort: HTTP request error (incomplete response; expected 4 bytes got 3) (no-py3 !)
   (this may be an intermittent network failure; if the error persists, consider contacting the network or server operator)
   [255]
 
   $ killdaemons.py $DAEMON_PIDS
 
-  $ tail -8 error.log
-  write(28 from 28) -> (32) Transfer-Encoding: chunked\r\n
-  write(2 from 2) -> (30) \r\n
+#if py36
+  $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -10
+  sendall(167 from 167) -> (30) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n
+  sendall(6 from 6) -> (24) 1\\r\\n\x04\\r\\n (esc)
+  sendall(9 from 9) -> (15) 4\r\nnone\r\n
+  sendall(9 from 9) -> (6) 4\r\nHG20\r\n
+  sendall(6 from 9) -> (0) 4\\r\\n\x00\x00\x00 (esc)
+  write limit reached; closing socket
+  $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob)
+  Traceback (most recent call last):
+  Exception: connection closed after sending N bytes
+  
+
+#else
+  $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -12
+  readline(65537) -> (2) \r\n (py3 !)
+  write(167 from 167) -> (30) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n (py3 !)
+  write(28 from 28) -> (32) Transfer-Encoding: chunked\r\n (no-py3 !)
+  write(2 from 2) -> (30) \r\n (no-py3 !)
   write(6 from 6) -> (24) 1\\r\\n\x04\\r\\n (esc)
   write(9 from 9) -> (15) 4\r\nnone\r\n
   write(9 from 9) -> (6) 4\r\nHG20\r\n
   write(6 from 9) -> (0) 4\\r\\n\x00\x00\x00 (esc)
   write limit reached; closing socket
+  $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob)
+  Traceback (most recent call last):
+  Exception: connection closed after sending N bytes
+  
   write(27) -> 15\r\nInternal Server Error\r\n
+#endif
 
   $ rm -f error.log
 
@@ -677,15 +834,36 @@
 
   $ killdaemons.py $DAEMON_PIDS
 
-  $ tail -8 error.log
-  write(28 from 28) -> (35) Transfer-Encoding: chunked\r\n
-  write(2 from 2) -> (33) \r\n
+#if py36
+  $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -10
+  sendall(167 from 167) -> (33) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n
+  sendall(6 from 6) -> (27) 1\\r\\n\x04\\r\\n (esc)
+  sendall(9 from 9) -> (18) 4\r\nnone\r\n
+  sendall(9 from 9) -> (9) 4\r\nHG20\r\n
+  sendall(9 from 9) -> (0) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
+  write limit reached; closing socket
+  $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob)
+  Traceback (most recent call last):
+  Exception: connection closed after sending N bytes
+  
+
+#else
+  $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -12
+  readline(65537) -> (2) \r\n (py3 !)
+  write(167 from 167) -> (33) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n (py3 !)
+  write(28 from 28) -> (35) Transfer-Encoding: chunked\r\n (no-py3 !)
+  write(2 from 2) -> (33) \r\n (no-py3 !)
   write(6 from 6) -> (27) 1\\r\\n\x04\\r\\n (esc)
   write(9 from 9) -> (18) 4\r\nnone\r\n
   write(9 from 9) -> (9) 4\r\nHG20\r\n
   write(9 from 9) -> (0) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
   write limit reached; closing socket
+  $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob)
+  Traceback (most recent call last):
+  Exception: connection closed after sending N bytes
+  
   write(27) -> 15\r\nInternal Server Error\r\n
+#endif
 
   $ rm -f error.log
 
@@ -702,16 +880,39 @@
 
   $ killdaemons.py $DAEMON_PIDS
 
-  $ tail -9 error.log
-  write(28 from 28) -> (44) Transfer-Encoding: chunked\r\n
-  write(2 from 2) -> (42) \r\n
+#if py36
+  $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -11
+  sendall(167 from 167) -> (42) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n
+  sendall(6 from 6) -> (36) 1\\r\\n\x04\\r\\n (esc)
+  sendall(9 from 9) -> (27) 4\r\nnone\r\n
+  sendall(9 from 9) -> (18) 4\r\nHG20\r\n
+  sendall(9 from 9) -> (9) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
+  sendall(9 from 9) -> (0) 4\\r\\n\x00\x00\x00)\\r\\n (esc)
+  write limit reached; closing socket
+  $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob)
+  Traceback (most recent call last):
+  Exception: connection closed after sending N bytes
+  
+
+#else
+
+  $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -13
+  readline(65537) -> (2) \r\n (py3 !)
+  write(167 from 167) -> (42) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n (py3 !)
+  write(28 from 28) -> (44) Transfer-Encoding: chunked\r\n (no-py3 !)
+  write(2 from 2) -> (42) \r\n (no-py3 !)
   write(6 from 6) -> (36) 1\\r\\n\x04\\r\\n (esc)
   write(9 from 9) -> (27) 4\r\nnone\r\n
   write(9 from 9) -> (18) 4\r\nHG20\r\n
   write(9 from 9) -> (9) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
   write(9 from 9) -> (0) 4\\r\\n\x00\x00\x00)\\r\\n (esc)
   write limit reached; closing socket
+  $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob)
+  Traceback (most recent call last):
+  Exception: connection closed after sending N bytes
+  
   write(27) -> 15\r\nInternal Server Error\r\n
+#endif
 
   $ rm -f error.log
 
@@ -731,9 +932,27 @@
 
   $ killdaemons.py $DAEMON_PIDS
 
-  $ tail -10 error.log
-  write(28 from 28) -> (91) Transfer-Encoding: chunked\r\n
-  write(2 from 2) -> (89) \r\n
+#if py36
+  $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -12
+  sendall(167 from 167) -> (89) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n
+  sendall(6 from 6) -> (83) 1\\r\\n\x04\\r\\n (esc)
+  sendall(9 from 9) -> (74) 4\r\nnone\r\n
+  sendall(9 from 9) -> (65) 4\r\nHG20\r\n
+  sendall(9 from 9) -> (56) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
+  sendall(9 from 9) -> (47) 4\\r\\n\x00\x00\x00)\\r\\n (esc)
+  sendall(47 from 47) -> (0) 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02	\x01version02nbchanges1\\r\\n (esc)
+  write limit reached; closing socket
+  $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob)
+  Traceback (most recent call last):
+  Exception: connection closed after sending N bytes
+  
+
+#else
+  $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -14
+  readline(65537) -> (2) \r\n (py3 !)
+  write(167 from 167) -> (89) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n (py3 !)
+  write(28 from 28) -> (91) Transfer-Encoding: chunked\r\n (no-py3 !)
+  write(2 from 2) -> (89) \r\n (no-py3 !)
   write(6 from 6) -> (83) 1\\r\\n\x04\\r\\n (esc)
   write(9 from 9) -> (74) 4\r\nnone\r\n
   write(9 from 9) -> (65) 4\r\nHG20\r\n
@@ -741,7 +960,12 @@
   write(9 from 9) -> (47) 4\\r\\n\x00\x00\x00)\\r\\n (esc)
   write(47 from 47) -> (0) 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02	\x01version02nbchanges1\\r\\n (esc)
   write limit reached; closing socket
+  $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob)
+  Traceback (most recent call last):
+  Exception: connection closed after sending N bytes
+  
   write(27) -> 15\r\nInternal Server Error\r\n
+#endif
 
   $ rm -f error.log
 
@@ -755,14 +979,34 @@
   adding changesets
   transaction abort!
   rollback completed
-  abort: HTTP request error (incomplete response; expected 466 bytes got 7)
+  abort: HTTP request error (incomplete response) (py3 !)
+  abort: HTTP request error (incomplete response; expected 466 bytes got 7) (no-py3 !)
   (this may be an intermittent network failure; if the error persists, consider contacting the network or server operator)
   [255]
 
   $ killdaemons.py $DAEMON_PIDS
 
-  $ tail -11 error.log
-  write(2 from 2) -> (110) \r\n
+#if py36
+  $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -14
+  sendall(167 from 167) -> (110) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n
+  sendall(6 from 6) -> (104) 1\\r\\n\x04\\r\\n (esc)
+  sendall(9 from 9) -> (95) 4\r\nnone\r\n
+  sendall(9 from 9) -> (86) 4\r\nHG20\r\n
+  sendall(9 from 9) -> (77) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
+  sendall(9 from 9) -> (68) 4\\r\\n\x00\x00\x00)\\r\\n (esc)
+  sendall(47 from 47) -> (21) 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02	\x01version02nbchanges1\\r\\n (esc)
+  sendall(9 from 9) -> (12) 4\\r\\n\x00\x00\x01\xd2\\r\\n (esc)
+  sendall(12 from 473) -> (0) 1d2\\r\\n\x00\x00\x00\xb2\x96\xee\x1d (esc)
+  write limit reached; closing socket
+  $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob)
+  Traceback (most recent call last):
+  Exception: connection closed after sending N bytes
+  
+
+#else
+  $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -15
+  write(167 from 167) -> (110) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n (py3 !)
+  write(2 from 2) -> (110) \r\n (no-py3 !)
   write(6 from 6) -> (104) 1\\r\\n\x04\\r\\n (esc)
   write(9 from 9) -> (95) 4\r\nnone\r\n
   write(9 from 9) -> (86) 4\r\nHG20\r\n
@@ -772,7 +1016,12 @@
   write(9 from 9) -> (12) 4\\r\\n\x00\x00\x01\xd2\\r\\n (esc)
   write(12 from 473) -> (0) 1d2\\r\\n\x00\x00\x00\xb2\x96\xee\x1d (esc)
   write limit reached; closing socket
+  $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob)
+  Traceback (most recent call last):
+  Exception: connection closed after sending N bytes
+  
   write(27) -> 15\r\nInternal Server Error\r\n
+#endif
 
   $ rm -f error.log
 
@@ -792,9 +1041,29 @@
 
   $ killdaemons.py $DAEMON_PIDS
 
-  $ tail -12 error.log
-  write(28 from 28) -> (573) Transfer-Encoding: chunked\r\n
-  write(2 from 2) -> (571) \r\n
+#if py36
+  $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -14
+  sendall(167 from 167) -> (571) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n
+  sendall(6 from 6) -> (565) 1\\r\\n\x04\\r\\n (esc)
+  sendall(9 from 9) -> (556) 4\r\nnone\r\n
+  sendall(9 from 9) -> (547) 4\r\nHG20\r\n
+  sendall(9 from 9) -> (538) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
+  sendall(9 from 9) -> (529) 4\\r\\n\x00\x00\x00)\\r\\n (esc)
+  sendall(47 from 47) -> (482) 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02	\x01version02nbchanges1\\r\\n (esc)
+  sendall(9 from 9) -> (473) 4\\r\\n\x00\x00\x01\xd2\\r\\n (esc)
+  sendall(473 from 473) -> (0) 1d2\\r\\n\x00\x00\x00\xb2\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00>6a3df4de388f3c4f8e28f4f9a814299a3cbb5f50\\ntest\\n0 0\\nfoo\\n\\ninitial\x00\x00\x00\x00\x00\x00\x00\xa1j=\xf4\xde8\x8f<O\x8e(\xf4\xf9\xa8\x14)\x9a<\xbb_P\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00-foo\x00b80de5d138758541c5f05265ad144ab9fa86d1db\\n\x00\x00\x00\x00\x00\x00\x00\x07foo\x00\x00\x00h\xb8\\r\xe5\xd18u\x85A\xc5\xf0Re\xad\x14J\xb9\xfa\x86\xd1\xdb\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\\r\\n (esc)
+  write limit reached; closing socket
+  $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob)
+  Traceback (most recent call last):
+  Exception: connection closed after sending N bytes
+  
+
+#else
+  $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -16
+  readline(65537) -> (2) \r\n (py3 !)
+  write(167 from 167) -> (571) HTTP/1.1 200 Script output follows\r\nServer: badhttpserver\r\nDate: $HTTP_DATE$\r\nContent-Type: application/mercurial-0.2\r\nTransfer-Encoding: chunked\r\n\r\n (py3 !)
+  write(28 from 28) -> (573) Transfer-Encoding: chunked\r\n (no-py3 !)
+  write(2 from 2) -> (571) \r\n (no-py3 !)
   write(6 from 6) -> (565) 1\\r\\n\x04\\r\\n (esc)
   write(9 from 9) -> (556) 4\r\nnone\r\n
   write(9 from 9) -> (547) 4\r\nHG20\r\n
@@ -804,7 +1073,12 @@
   write(9 from 9) -> (473) 4\\r\\n\x00\x00\x01\xd2\\r\\n (esc)
   write(473 from 473) -> (0) 1d2\\r\\n\x00\x00\x00\xb2\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00>6a3df4de388f3c4f8e28f4f9a814299a3cbb5f50\\ntest\\n0 0\\nfoo\\n\\ninitial\x00\x00\x00\x00\x00\x00\x00\xa1j=\xf4\xde8\x8f<O\x8e(\xf4\xf9\xa8\x14)\x9a<\xbb_P\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00-foo\x00b80de5d138758541c5f05265ad144ab9fa86d1db\\n\x00\x00\x00\x00\x00\x00\x00\x07foo\x00\x00\x00h\xb8\\r\xe5\xd18u\x85A\xc5\xf0Re\xad\x14J\xb9\xfa\x86\xd1\xdb\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\\r\\n (esc)
   write limit reached; closing socket
+  $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob)
+  Traceback (most recent call last):
+  Exception: connection closed after sending N bytes
+  
   write(27) -> 15\r\nInternal Server Error\r\n
+#endif
 
   $ rm -f error.log
 
@@ -821,13 +1095,34 @@
   added 1 changesets with 1 changes to 1 files
   transaction abort!
   rollback completed
-  abort: HTTP request error (incomplete response; expected 32 bytes got 9)
+  abort: HTTP request error (incomplete response) (py3 !)
+  abort: HTTP request error (incomplete response; expected 32 bytes got 9) (no-py3 !)
   (this may be an intermittent network failure; if the error persists, consider contacting the network or server operator)
   [255]
 
   $ killdaemons.py $DAEMON_PIDS
 
-  $ tail -13 error.log
+#if py36
+  $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -16
+  sendall(6 from 6) -> (596) 1\\r\\n\x04\\r\\n (esc)
+  sendall(9 from 9) -> (587) 4\r\nnone\r\n
+  sendall(9 from 9) -> (578) 4\r\nHG20\r\n
+  sendall(9 from 9) -> (569) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
+  sendall(9 from 9) -> (560) 4\\r\\n\x00\x00\x00)\\r\\n (esc)
+  sendall(47 from 47) -> (513) 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02	\x01version02nbchanges1\\r\\n (esc)
+  sendall(9 from 9) -> (504) 4\\r\\n\x00\x00\x01\xd2\\r\\n (esc)
+  sendall(473 from 473) -> (31) 1d2\\r\\n\x00\x00\x00\xb2\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00>6a3df4de388f3c4f8e28f4f9a814299a3cbb5f50\\ntest\\n0 0\\nfoo\\n\\ninitial\x00\x00\x00\x00\x00\x00\x00\xa1j=\xf4\xde8\x8f<O\x8e(\xf4\xf9\xa8\x14)\x9a<\xbb_P\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00-foo\x00b80de5d138758541c5f05265ad144ab9fa86d1db\\n\x00\x00\x00\x00\x00\x00\x00\x07foo\x00\x00\x00h\xb8\\r\xe5\xd18u\x85A\xc5\xf0Re\xad\x14J\xb9\xfa\x86\xd1\xdb\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\\r\\n (esc)
+  sendall(9 from 9) -> (22) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
+  sendall(9 from 9) -> (13) 4\\r\\n\x00\x00\x00 \\r\\n (esc)
+  sendall(13 from 38) -> (0) 20\\r\\n\x08LISTKEYS (esc)
+  write limit reached; closing socket
+  $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob)
+  Traceback (most recent call last):
+  Exception: connection closed after sending N bytes
+  
+
+#else
+  $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -17
   write(6 from 6) -> (596) 1\\r\\n\x04\\r\\n (esc)
   write(9 from 9) -> (587) 4\r\nnone\r\n
   write(9 from 9) -> (578) 4\r\nHG20\r\n
@@ -840,7 +1135,12 @@
   write(9 from 9) -> (13) 4\\r\\n\x00\x00\x00 \\r\\n (esc)
   write(13 from 38) -> (0) 20\\r\\n\x08LISTKEYS (esc)
   write limit reached; closing socket
+  $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob)
+  Traceback (most recent call last):
+  Exception: connection closed after sending N bytes
+  
   write(27) -> 15\r\nInternal Server Error\r\n
+#endif
 
   $ rm -f error.log
 
@@ -863,7 +1163,36 @@
 
   $ killdaemons.py $DAEMON_PIDS
 
-  $ tail -22 error.log
+#if py36
+  $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -25
+  sendall(9 from 9) -> (851) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
+  sendall(9 from 9) -> (842) 4\\r\\n\x00\x00\x00)\\r\\n (esc)
+  sendall(47 from 47) -> (795) 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02	\x01version02nbchanges1\\r\\n (esc)
+  sendall(9 from 9) -> (786) 4\\r\\n\x00\x00\x01\xd2\\r\\n (esc)
+  sendall(473 from 473) -> (313) 1d2\\r\\n\x00\x00\x00\xb2\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00>6a3df4de388f3c4f8e28f4f9a814299a3cbb5f50\\ntest\\n0 0\\nfoo\\n\\ninitial\x00\x00\x00\x00\x00\x00\x00\xa1j=\xf4\xde8\x8f<O\x8e(\xf4\xf9\xa8\x14)\x9a<\xbb_P\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00-foo\x00b80de5d138758541c5f05265ad144ab9fa86d1db\\n\x00\x00\x00\x00\x00\x00\x00\x07foo\x00\x00\x00h\xb8\\r\xe5\xd18u\x85A\xc5\xf0Re\xad\x14J\xb9\xfa\x86\xd1\xdb\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\\r\\n (esc)
+  sendall(9 from 9) -> (304) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
+  sendall(9 from 9) -> (295) 4\\r\\n\x00\x00\x00 \\r\\n (esc)
+  sendall(38 from 38) -> (257) 20\\r\\n\x08LISTKEYS\x00\x00\x00\x01\x01\x00	\x06namespacephases\\r\\n (esc)
+  sendall(9 from 9) -> (248) 4\\r\\n\x00\x00\x00:\\r\\n (esc)
+  sendall(64 from 64) -> (184) 3a\r\n96ee1d7354c4ad7372047672c36a1f561e3a6a4c	1\npublishing	True\r\n
+  sendall(9 from 9) -> (175) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
+  sendall(9 from 9) -> (166) 4\\r\\n\x00\x00\x00#\\r\\n (esc)
+  sendall(41 from 41) -> (125) 23\\r\\n\x08LISTKEYS\x00\x00\x00\x02\x01\x00		namespacebookmarks\\r\\n (esc)
+  sendall(9 from 9) -> (116) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
+  sendall(9 from 9) -> (107) 4\\r\\n\x00\x00\x00\x1d\\r\\n (esc)
+  sendall(35 from 35) -> (72) 1d\\r\\n\x16cache:rev-branch-cache\x00\x00\x00\x03\x00\x00\\r\\n (esc)
+  sendall(9 from 9) -> (63) 4\\r\\n\x00\x00\x00'\\r\\n (esc)
+  sendall(45 from 45) -> (18) 27\\r\\n\x00\x00\x00\x07\x00\x00\x00\x01\x00\x00\x00\x00default\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\\r\\n (esc)
+  sendall(9 from 9) -> (9) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
+  sendall(9 from 9) -> (0) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
+  write limit reached; closing socket
+  $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob)
+  Traceback (most recent call last):
+  Exception: connection closed after sending N bytes
+  
+
+#else
+  $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -26
   write(9 from 9) -> (851) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
   write(9 from 9) -> (842) 4\\r\\n\x00\x00\x00)\\r\\n (esc)
   write(47 from 47) -> (795) 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02	\x01version02nbchanges1\\r\\n (esc)
@@ -885,7 +1214,12 @@
   write(9 from 9) -> (9) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
   write(9 from 9) -> (0) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
   write limit reached; closing socket
+  $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob)
+  Traceback (most recent call last):
+  Exception: connection closed after sending N bytes
+  
   write(27) -> 15\r\nInternal Server Error\r\n
+#endif
 
   $ rm -f error.log
   $ rm -rf clone
@@ -907,7 +1241,37 @@
 
   $ killdaemons.py $DAEMON_PIDS
 
-  $ tail -23 error.log
+#if py36
+  $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -26
+  sendall(9 from 9) -> (854) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
+  sendall(9 from 9) -> (845) 4\\r\\n\x00\x00\x00)\\r\\n (esc)
+  sendall(47 from 47) -> (798) 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02	\x01version02nbchanges1\\r\\n (esc)
+  sendall(9 from 9) -> (789) 4\\r\\n\x00\x00\x01\xd2\\r\\n (esc)
+  sendall(473 from 473) -> (316) 1d2\\r\\n\x00\x00\x00\xb2\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00>6a3df4de388f3c4f8e28f4f9a814299a3cbb5f50\\ntest\\n0 0\\nfoo\\n\\ninitial\x00\x00\x00\x00\x00\x00\x00\xa1j=\xf4\xde8\x8f<O\x8e(\xf4\xf9\xa8\x14)\x9a<\xbb_P\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00-foo\x00b80de5d138758541c5f05265ad144ab9fa86d1db\\n\x00\x00\x00\x00\x00\x00\x00\x07foo\x00\x00\x00h\xb8\\r\xe5\xd18u\x85A\xc5\xf0Re\xad\x14J\xb9\xfa\x86\xd1\xdb\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\\r\\n (esc)
+  sendall(9 from 9) -> (307) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
+  sendall(9 from 9) -> (298) 4\\r\\n\x00\x00\x00 \\r\\n (esc)
+  sendall(38 from 38) -> (260) 20\\r\\n\x08LISTKEYS\x00\x00\x00\x01\x01\x00	\x06namespacephases\\r\\n (esc)
+  sendall(9 from 9) -> (251) 4\\r\\n\x00\x00\x00:\\r\\n (esc)
+  sendall(64 from 64) -> (187) 3a\r\n96ee1d7354c4ad7372047672c36a1f561e3a6a4c	1\npublishing	True\r\n
+  sendall(9 from 9) -> (178) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
+  sendall(9 from 9) -> (169) 4\\r\\n\x00\x00\x00#\\r\\n (esc)
+  sendall(41 from 41) -> (128) 23\\r\\n\x08LISTKEYS\x00\x00\x00\x02\x01\x00		namespacebookmarks\\r\\n (esc)
+  sendall(9 from 9) -> (119) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
+  sendall(9 from 9) -> (110) 4\\r\\n\x00\x00\x00\x1d\\r\\n (esc)
+  sendall(35 from 35) -> (75) 1d\\r\\n\x16cache:rev-branch-cache\x00\x00\x00\x03\x00\x00\\r\\n (esc)
+  sendall(9 from 9) -> (66) 4\\r\\n\x00\x00\x00'\\r\\n (esc)
+  sendall(45 from 45) -> (21) 27\\r\\n\x00\x00\x00\x07\x00\x00\x00\x01\x00\x00\x00\x00default\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\\r\\n (esc)
+  sendall(9 from 9) -> (12) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
+  sendall(9 from 9) -> (3) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
+  sendall(3 from 5) -> (0) 0\r\n
+  write limit reached; closing socket
+  $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob)
+  Traceback (most recent call last):
+  Exception: connection closed after sending N bytes
+  
+
+#else
+  $ "$PYTHON" $TESTDIR/filtertraceback.py < error.log | tail -27
   write(9 from 9) -> (854) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
   write(9 from 9) -> (845) 4\\r\\n\x00\x00\x00)\\r\\n (esc)
   write(47 from 47) -> (798) 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02	\x01version02nbchanges1\\r\\n (esc)
@@ -930,7 +1294,12 @@
   write(9 from 9) -> (3) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
   write(3 from 5) -> (0) 0\r\n
   write limit reached; closing socket
+  $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/?cmd=getbundle': (glob)
+  Traceback (most recent call last):
+  Exception: connection closed after sending N bytes
+  
   write(27) -> 15\r\nInternal Server Error\r\n
+#endif
 
   $ rm -f error.log
   $ rm -rf clone
--- a/tests/test-http-bundle1.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-http-bundle1.t	Wed Apr 17 13:41:18 2019 -0400
@@ -151,7 +151,7 @@
   $ cd copy-pull
   $ cat >> .hg/hgrc <<EOF
   > [hooks]
-  > changegroup = sh -c "printenv.py changegroup"
+  > changegroup = sh -c "printenv.py --line changegroup"
   > EOF
   $ hg pull
   pulling from http://localhost:$HGPORT1/
@@ -161,7 +161,16 @@
   adding file changes
   added 1 changesets with 1 changes to 1 files
   new changesets 5fed3813f7f5
-  changegroup hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=5fed3813f7f5e1824344fdc9cf8f63bb662c292d HG_NODE_LAST=5fed3813f7f5e1824344fdc9cf8f63bb662c292d HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=http://localhost:$HGPORT1/
+  changegroup hook: HG_HOOKNAME=changegroup
+  HG_HOOKTYPE=changegroup
+  HG_NODE=5fed3813f7f5e1824344fdc9cf8f63bb662c292d
+  HG_NODE_LAST=5fed3813f7f5e1824344fdc9cf8f63bb662c292d
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  http://localhost:$HGPORT1/
+  HG_URL=http://localhost:$HGPORT1/
+  
   (run 'hg update' to get a working copy)
   $ cd ..
 
@@ -175,22 +184,9 @@
 + use the same server to test server side streaming preference
 
   $ cd test
-  $ cat << EOT > userpass.py
-  > import base64
-  > from mercurial.hgweb import common
-  > def perform_authentication(hgweb, req, op):
-  >     auth = req.headers.get(b'Authorization')
-  >     if not auth:
-  >         raise common.ErrorResponse(common.HTTP_UNAUTHORIZED, b'who',
-  >                 [(b'WWW-Authenticate', b'Basic Realm="mercurial"')])
-  >     if base64.b64decode(auth.split()[1]).split(b':', 1) != [b'user',
-  >                                                             b'pass']:
-  >         raise common.ErrorResponse(common.HTTP_FORBIDDEN, b'no')
-  > def extsetup(ui):
-  >     common.permhooks.insert(0, perform_authentication)
-  > EOT
-  $ hg serve --config extensions.x=userpass.py -p $HGPORT2 -d --pid-file=pid \
-  >    --config server.preferuncompressed=True \
+
+  $ hg serve --config extensions.x=$TESTDIR/httpserverauth.py -p $HGPORT2 -d \
+  >    --pid-file=pid --config server.preferuncompressed=True \
   >    --config web.push_ssl=False --config web.allow_push=* -A ../access.log
   $ cat pid >> $DAEMON_PIDS
 
--- a/tests/test-http-protocol.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-http-protocol.t	Wed Apr 17 13:41:18 2019 -0400
@@ -96,7 +96,7 @@
 
   $ get-with-headers.py --hgproto '0.2 comp=zstd' $LOCALIP:$HGPORT '?cmd=getbundle&heads=e93700bd72895c5addab234c56d4024b487a362f&common=0000000000000000000000000000000000000000' > resp
   $ f --size --hexdump --bytes 36 --sha1 resp
-  resp: size=248, sha1=4d8d8f87fb82bd542ce52881fdc94f850748
+  resp: size=248, sha1=f11b5c098c638068b3d5fe2f9e6241bf5228
   0000: 32 30 30 20 53 63 72 69 70 74 20 6f 75 74 70 75 |200 Script outpu|
   0010: 74 20 66 6f 6c 6c 6f 77 73 0a 0a 04 7a 73 74 64 |t follows...zstd|
   0020: 28 b5 2f fd                                     |(./.|
@@ -179,6 +179,7 @@
   > command listkeys
   >     namespace namespaces
   > EOF
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /?cmd=capabilities HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     accept: application/mercurial-0.1\r\n
@@ -194,6 +195,7 @@
   s>     \r\n
   s>     batch branchmap $USUAL_BUNDLE2_CAPS$ changegroupsubset compression=$BUNDLE2_COMPRESSIONS$ getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1,sparserevlog unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash
   sending listkeys command
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /?cmd=listkeys HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     vary: X-HgArg-1,X-HgProto-1\r\n
@@ -228,6 +230,7 @@
   >     x-hgarg-1: namespace=namespaces
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /?cmd=listkeys HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     user-agent: test\r\n
@@ -250,6 +253,7 @@
   $ hg --config experimental.httppeer.advertise-v2=true --verbose debugwireproto http://$LOCALIP:$HGPORT << EOF
   > command heads
   > EOF
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /?cmd=capabilities HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     vary: X-HgProto-1,X-HgUpgrade-1\r\n
@@ -268,6 +272,7 @@
   s>     \r\n
   s>     batch branchmap $USUAL_BUNDLE2_CAPS$ changegroupsubset compression=$BUNDLE2_COMPRESSIONS$ getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1,sparserevlog unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash
   sending heads command
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /?cmd=heads HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     vary: X-HgProto-1\r\n
@@ -299,6 +304,7 @@
   $ hg --config experimental.httppeer.advertise-v2=true --config experimental.httppeer.v2-encoder-order=identity --verbose debugwireproto http://$LOCALIP:$HGPORT << EOF
   > command heads
   > EOF
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /?cmd=capabilities HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     vary: X-HgProto-1,X-HgUpgrade-1\r\n
@@ -317,6 +323,7 @@
   s>     \r\n
   s>     \xa3GapibaseDapi/Dapis\xa1Pexp-http-v2-0003\xa4Hcommands\xacIbranchmap\xa2Dargs\xa0Kpermissions\x81DpullLcapabilities\xa2Dargs\xa0Kpermissions\x81DpullMchangesetdata\xa2Dargs\xa2Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x84IbookmarksGparentsEphaseHrevisionIrevisions\xa2Hrequired\xf5DtypeDlistKpermissions\x81DpullHfiledata\xa2Dargs\xa4Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x83HlinknodeGparentsHrevisionKhaveparents\xa3Gdefault\xf4Hrequired\xf4DtypeDboolEnodes\xa2Hrequired\xf5DtypeDlistDpath\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullIfilesdata\xa3Dargs\xa4Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x84NfirstchangesetHlinknodeGparentsHrevisionKhaveparents\xa3Gdefault\xf4Hrequired\xf4DtypeDboolJpathfilter\xa3Gdefault\xf6Hrequired\xf4DtypeDdictIrevisions\xa2Hrequired\xf5DtypeDlistKpermissions\x81DpullTrecommendedbatchsize\x19\xc3PEheads\xa2Dargs\xa1Jpubliconly\xa3Gdefault\xf4Hrequired\xf4DtypeDboolKpermissions\x81DpullEknown\xa2Dargs\xa1Enodes\xa3Gdefault\x80Hrequired\xf4DtypeDlistKpermissions\x81DpullHlistkeys\xa2Dargs\xa1Inamespace\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullFlookup\xa2Dargs\xa1Ckey\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullLmanifestdata\xa3Dargs\xa4Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x82GparentsHrevisionKhaveparents\xa3Gdefault\xf4Hrequired\xf4DtypeDboolEnodes\xa2Hrequired\xf5DtypeDlistDtree\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullTrecommendedbatchsize\x1a\x00\x01\x86\xa0Gpushkey\xa2Dargs\xa4Ckey\xa2Hrequired\xf5DtypeEbytesInamespace\xa2Hrequired\xf5DtypeEbytesCnew\xa2Hrequired\xf5DtypeEbytesCold\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpushPrawstorefiledata\xa2Dargs\xa2Efiles\xa2Hrequired\xf5DtypeDlistJpathfilter\xa3Gdefault\xf6Hrequired\xf4DtypeDlistKpermissions\x81DpullQframingmediatypes\x81X&application/mercurial-exp-framing-0006Rpathfilterprefixes\xd9\x01\x02\x82Epath:Lrootfilesin:Nrawrepoformats\x83LgeneraldeltaHrevlogv1LsparserevlogNv1capabilitiesY\x01\xe0batch branchmap $USUAL_BUNDLE2_CAPS$ changegroupsubset compression=$BUNDLE2_COMPRESSIONS$ getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1,sparserevlog unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash
   sending heads command
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     POST /api/exp-http-v2-0003/ro/heads HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     accept: application/mercurial-exp-framing-0006\r\n
@@ -337,23 +344,19 @@
   s>     \t\x00\x00\x01\x00\x02\x01\x92
   s>     Hidentity
   s>     \r\n
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
   s>     13\r\n
   s>     \x0b\x00\x00\x01\x00\x02\x041
   s>     \xa1FstatusBok
   s>     \r\n
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   s>     1e\r\n
   s>     \x16\x00\x00\x01\x00\x02\x041
   s>     \x81T\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
   s>     \r\n
-  received frame(size=22; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   s>     8\r\n
   s>     \x00\x00\x00\x01\x00\x02\x002
   s>     \r\n
   s>     0\r\n
   s>     \r\n
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   response: [
     b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
   ]
@@ -386,7 +389,7 @@
   >     relpath = path[len(b'/redirector'):]
   >     res.status = b'301 Redirect'
   >     newurl = b'%s/redirected%s' % (req.baseurl, relpath)
-  >     if not repo.ui.configbool('testing', 'redirectqs', True) and b'?' in newurl:
+  >     if not repo.ui.configbool(b'testing', b'redirectqs', True) and b'?' in newurl:
   >         newurl = newurl[0:newurl.index(b'?')]
   >     res.headers[b'Location'] = newurl
   >     res.headers[b'Content-Type'] = b'text/plain'
@@ -408,6 +411,7 @@
   >     user-agent: test
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /redirector?cmd=capabilities HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     user-agent: test\r\n
@@ -422,6 +426,7 @@
   s>     Content-Length: 10\r\n
   s>     \r\n
   s>     redirected
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /redirected?cmd=capabilities HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     user-agent: test\r\n
@@ -441,6 +446,7 @@
   $ hg --verbose debugwireproto http://$LOCALIP:$HGPORT/redirector << EOF
   > command heads
   > EOF
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /redirector?cmd=capabilities HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     accept: application/mercurial-0.1\r\n
@@ -456,6 +462,7 @@
   s>     Content-Length: 10\r\n
   s>     \r\n
   s>     redirected
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /redirected?cmd=capabilities HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     accept: application/mercurial-0.1\r\n
@@ -472,6 +479,7 @@
   real URL is http://$LOCALIP:$HGPORT/redirected (glob)
   s>     batch branchmap $USUAL_BUNDLE2_CAPS$ changegroupsubset compression=$BUNDLE2_COMPRESSIONS$ getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1,sparserevlog unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash
   sending heads command
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /redirected?cmd=heads HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     vary: X-HgProto-1\r\n
@@ -509,6 +517,7 @@
   >     user-agent: test
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /redirector?cmd=capabilities HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     user-agent: test\r\n
@@ -523,6 +532,7 @@
   s>     Content-Length: 10\r\n
   s>     \r\n
   s>     redirected
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /redirected HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     user-agent: test\r\n
@@ -664,6 +674,7 @@
   $ hg --verbose debugwireproto http://$LOCALIP:$HGPORT/redirector << EOF
   > command heads
   > EOF
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /redirector?cmd=capabilities HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     accept: application/mercurial-0.1\r\n
@@ -679,6 +690,7 @@
   s>     Content-Length: 10\r\n
   s>     \r\n
   s>     redirected
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /redirected HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     accept: application/mercurial-0.1\r\n
@@ -721,6 +733,7 @@
   s>     <li class="active">log</li>\n
   s>     <li><a href="/redirected/graph/tip">graph</a></li>\n
   s>     <li><a href="/redirected/tags">tags</a
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /redirected?cmd=capabilities HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     accept: application/mercurial-0.1\r\n
@@ -737,6 +750,7 @@
   real URL is http://$LOCALIP:$HGPORT/redirected (glob)
   s>     batch branchmap $USUAL_BUNDLE2_CAPS$ changegroupsubset compression=$BUNDLE2_COMPRESSIONS$ getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1,sparserevlog unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash
   sending heads command
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /redirected?cmd=heads HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     vary: X-HgProto-1\r\n
--- a/tests/test-http.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-http.t	Wed Apr 17 13:41:18 2019 -0400
@@ -156,6 +156,8 @@
   HG_NODE_LAST=5fed3813f7f5e1824344fdc9cf8f63bb662c292d
   HG_SOURCE=pull
   HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  http://localhost:$HGPORT1/
   HG_URL=http://localhost:$HGPORT1/
   
   (run 'hg update' to get a working copy)
@@ -171,21 +173,9 @@
 + use the same server to test server side streaming preference
 
   $ cd test
-  $ cat << EOT > userpass.py
-  > import base64
-  > from mercurial.hgweb import common
-  > def perform_authentication(hgweb, req, op):
-  >     auth = req.headers.get(b'Authorization')
-  >     if not auth:
-  >         raise common.ErrorResponse(common.HTTP_UNAUTHORIZED, b'who',
-  >                 [(b'WWW-Authenticate', b'Basic Realm="mercurial"')])
-  >     if base64.b64decode(auth.split()[1]).split(b':', 1) != [b'user', b'pass']:
-  >         raise common.ErrorResponse(common.HTTP_FORBIDDEN, b'no')
-  > def extsetup(ui):
-  >     common.permhooks.insert(0, perform_authentication)
-  > EOT
-  $ hg serve --config extensions.x=userpass.py -p $HGPORT2 -d --pid-file=pid \
-  >    --config server.preferuncompressed=True \
+
+  $ hg serve --config extensions.x=$TESTDIR/httpserverauth.py -p $HGPORT2 -d \
+  >    --pid-file=pid --config server.preferuncompressed=True -E ../errors2.log \
   >    --config web.push_ssl=False --config web.allow_push=* -A ../access.log
   $ cat pid >> $DAEMON_PIDS
 
@@ -221,6 +211,25 @@
   $ hg id http://user@localhost:$HGPORT2/
   5fed3813f7f5
 
+  $ cat > use_digests.py << EOF
+  > from mercurial import (
+  >     exthelper,
+  >     url,
+  > )
+  > 
+  > eh = exthelper.exthelper()
+  > uisetup = eh.finaluisetup
+  > 
+  > @eh.wrapfunction(url, 'opener')
+  > def urlopener(orig, *args, **kwargs):
+  >     opener = orig(*args, **kwargs)
+  >     opener.addheaders.append((r'X-HgTest-AuthType', r'Digest'))
+  >     return opener
+  > EOF
+
+  $ hg id http://localhost:$HGPORT2/ --config extensions.x=use_digests.py
+  5fed3813f7f5
+
 #if no-reposimplestore
   $ hg clone http://user:pass@localhost:$HGPORT2/ dest 2>&1
   streaming all changes
@@ -374,6 +383,14 @@
   "GET /?cmd=lookup HTTP/1.1" 200 - x-hgarg-1:key=tip x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull
   "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=namespaces x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull
   "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull
+  "GET /?cmd=capabilities HTTP/1.1" 401 - x-hgtest-authtype:Digest
+  "GET /?cmd=capabilities HTTP/1.1" 200 - x-hgtest-authtype:Digest
+  "GET /?cmd=lookup HTTP/1.1" 401 - x-hgarg-1:key=tip x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest
+  "GET /?cmd=lookup HTTP/1.1" 200 - x-hgarg-1:key=tip x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest
+  "GET /?cmd=listkeys HTTP/1.1" 401 - x-hgarg-1:namespace=namespaces x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest
+  "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=namespaces x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest
+  "GET /?cmd=listkeys HTTP/1.1" 401 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest
+  "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest
   "GET /?cmd=capabilities HTTP/1.1" 401 - (no-reposimplestore !)
   "GET /?cmd=capabilities HTTP/1.1" 200 - (no-reposimplestore !)
   "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (no-reposimplestore !)
@@ -443,6 +460,8 @@
 
   $ cat error.log
 
+  $ cat errors2.log
+
 check abort error reporting while pulling/cloning
 
   $ $RUNTESTDIR/killdaemons.py
--- a/tests/test-https.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-https.t	Wed Apr 17 13:41:18 2019 -0400
@@ -213,7 +213,7 @@
   $ cd copy-pull
   $ cat >> .hg/hgrc <<EOF
   > [hooks]
-  > changegroup = sh -c "printenv.py changegroup"
+  > changegroup = sh -c "printenv.py --line changegroup"
   > EOF
   $ hg pull $DISABLECACERTS
   pulling from https://localhost:$HGPORT/
@@ -232,7 +232,16 @@
   adding file changes
   added 1 changesets with 1 changes to 1 files
   new changesets 5fed3813f7f5
-  changegroup hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=5fed3813f7f5e1824344fdc9cf8f63bb662c292d HG_NODE_LAST=5fed3813f7f5e1824344fdc9cf8f63bb662c292d HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=https://localhost:$HGPORT/
+  changegroup hook: HG_HOOKNAME=changegroup
+  HG_HOOKTYPE=changegroup
+  HG_NODE=5fed3813f7f5e1824344fdc9cf8f63bb662c292d
+  HG_NODE_LAST=5fed3813f7f5e1824344fdc9cf8f63bb662c292d
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  https://localhost:$HGPORT/
+  HG_URL=https://localhost:$HGPORT/
+  
   (run 'hg update' to get a working copy)
   $ cd ..
 
--- a/tests/test-impexp-branch.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-impexp-branch.t	Wed Apr 17 13:41:18 2019 -0400
@@ -6,7 +6,7 @@
   > import re
   > import sys
   > 
-  > head_re = re.compile('^#(?:(?:\\s+([A-Za-z][A-Za-z0-9_]*)(?:\\s.*)?)|(?:\\s*))$')
+  > head_re = re.compile(r'^#(?:(?:\\s+([A-Za-z][A-Za-z0-9_]*)(?:\\s.*)?)|(?:\\s*))$')
   > 
   > for line in sys.stdin:
   >     hmatch = head_re.match(line)
--- a/tests/test-import-context.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-import-context.t	Wed Apr 17 13:41:18 2019 -0400
@@ -12,9 +12,9 @@
   >     count = int(pattern[0:-1])
   >     char = pattern[-1].encode('utf8') + b'\n'
   >     if not lasteol and i == len(patterns) - 1:
-  >         fp.write((char*count)[:-1])
+  >         fp.write((char * count)[:-1])
   >     else:
-  >         fp.write(char*count)
+  >         fp.write(char * count)
   > fp.close()
   > EOF
   $ cat > cat.py <<EOF
--- a/tests/test-import-eol.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-import-eol.t	Wed Apr 17 13:41:18 2019 -0400
@@ -17,9 +17,9 @@
   >    'empty:stripped-crlf': b'\r\n'}[sys.argv[1]])
   > w(b' d\n')
   > w(b'-e\n')
-  > w(b'\ No newline at end of file\n')
+  > w(b'\\\\ No newline at end of file\n')
   > w(b'+z\r\n')
-  > w(b'\ No newline at end of file\r\n')
+  > w(b'\\\\ No newline at end of file\r\n')
   > EOF
 
   $ hg init repo
--- a/tests/test-import-git.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-import-git.t	Wed Apr 17 13:41:18 2019 -0400
@@ -826,7 +826,7 @@
 
   $ hg revert -qa
   $ hg --encoding utf-8 import - <<EOF
-  > From: =?UTF-8?q?Rapha=C3=ABl=20Hertzog?= <hertzog@debian.org>
+  > From: =?utf-8?q?Rapha=C3=ABl_Hertzog_=3Chertzog=40debian=2Eorg=3E?=
   > Subject: [PATCH] =?UTF-8?q?=C5=A7=E2=82=AC=C3=9F=E1=B9=AA?=
   > 
   > diff --git a/a b/a
--- a/tests/test-inherit-mode.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-inherit-mode.t	Wed Apr 17 13:41:18 2019 -0400
@@ -71,7 +71,6 @@
   00600 ./.hg/00changelog.i
   00770 ./.hg/cache/
   00660 ./.hg/cache/branch2-served
-  00660 ./.hg/cache/manifestfulltextcache (reporevlogstore !)
   00660 ./.hg/cache/rbc-names-v1
   00660 ./.hg/cache/rbc-revs-v1
   00660 ./.hg/dirstate
@@ -105,6 +104,7 @@
   00711 ./.hg/wcache/checkisexec
   007.. ./.hg/wcache/checklink (re)
   00600 ./.hg/wcache/checklink-target
+  00660 ./.hg/wcache/manifestfulltextcache (reporevlogstore !)
   00700 ./dir/
   00600 ./dir/bar
   00600 ./foo
--- a/tests/test-install.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-install.t	Wed Apr 17 13:41:18 2019 -0400
@@ -161,6 +161,7 @@
   > import subprocess
   > import sys
   > import xml.etree.ElementTree as ET
+  > from mercurial import pycompat
   > 
   > # MSYS mangles the path if it expands $TESTDIR
   > testdir = os.environ['TESTDIR']
@@ -177,7 +178,7 @@
   >     files = node.findall('./{%(wix)s}Component/{%(wix)s}File' % ns)
   > 
   >     for f in files:
-  >         yield relpath + f.attrib['Name']
+  >         yield pycompat.sysbytes(relpath + f.attrib['Name'])
   > 
   > def hgdirectory(relpath):
   >     '''generator of tracked files, rooted at relpath'''
@@ -187,16 +188,15 @@
   >                             stderr=subprocess.PIPE)
   >     output = proc.communicate()[0]
   > 
-  >     slash = '/'
   >     for line in output.splitlines():
   >         if os.name == 'nt':
-  >             yield line.replace(os.sep, slash)
+  >             yield line.replace(pycompat.sysbytes(os.sep), b'/')
   >         else:
   >             yield line
   > 
   > tracked = [f for f in hgdirectory(sys.argv[1])]
   > 
-  > xml = ET.parse("%s/../contrib/wix/%s.wxs" % (testdir, sys.argv[1]))
+  > xml = ET.parse("%s/../contrib/packaging/wix/%s.wxs" % (testdir, sys.argv[1]))
   > root = xml.getroot()
   > dir = root.find('.//{%(wix)s}DirectoryRef' % ns)
   > 
@@ -204,11 +204,11 @@
   > 
   > print('Not installed:')
   > for f in sorted(set(tracked) - set(installed)):
-  >     print('  %s' % f)
+  >     print('  %s' % pycompat.sysstr(f))
   > 
   > print('Not tracked:')
   > for f in sorted(set(installed) - set(tracked)):
-  >     print('  %s' % f)
+  >     print('  %s' % pycompat.sysstr(f))
   > EOF
 
   $ ( testrepohgenv; "$PYTHON" wixxml.py help )
@@ -238,9 +238,11 @@
 the default for them.
   $ unset PYTHONPATH
   $ "$PYTHON" -m virtualenv --no-site-packages --never-download installenv >> pip.log
+  DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7. (?)
 Note: we use this weird path to run pip and hg to avoid platform differences,
 since it's bin on most platforms but Scripts on Windows.
   $ ./installenv/*/pip install --no-index $TESTDIR/.. >> pip.log
+  DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7. (?)
   $ ./installenv/*/hg debuginstall || cat pip.log
   checking encoding (ascii)...
   checking Python executable (*) (glob)
--- a/tests/test-issue1175.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-issue1175.t	Wed Apr 17 13:41:18 2019 -0400
@@ -14,8 +14,8 @@
   $ hg mv a a2
   $ hg up
   note: possible conflict - a was renamed multiple times to:
+   a1
    a2
-   a1
   1 files updated, 0 files merged, 0 files removed, 0 files unresolved
 
   $ hg ci -m2
--- a/tests/test-journal-exists.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-journal-exists.t	Wed Apr 17 13:41:18 2019 -0400
@@ -29,7 +29,7 @@
 
   $ hg -R foo unbundle repo.hg
   adding changesets
-  abort: Permission denied: $TESTTMP/foo/.hg/store/.00changelog.i-* (glob)
+  abort: Permission denied: '$TESTTMP/foo/.hg/store/.00changelog.i-*' (glob)
   [255]
 
   $ if test -f foo/.hg/store/journal; then echo 'journal exists :-('; fi
--- a/tests/test-keyword.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-keyword.t	Wed Apr 17 13:41:18 2019 -0400
@@ -383,13 +383,10 @@
   >>> open('a', 'wb').writelines(lines)
   $ hg record -d '10 1' -m rectest a<<EOF
   > y
-  > y
   > n
   > EOF
   diff --git a/a b/a
   2 hunks, 2 lines changed
-  examine changes to 'a'? [Ynesfdaq?] y
-  
   @@ -1,3 +1,4 @@
    expand $Id$
   +foo
@@ -448,8 +445,6 @@
   > EOF
   diff --git a/a b/a
   2 hunks, 2 lines changed
-  examine changes to 'a'? [Ynesfdaq?] y
-  
   @@ -1,3 +1,4 @@
    expand $Id$
   +foo
@@ -519,8 +514,6 @@
   > EOF
   diff --git a/r b/r
   new file mode 100644
-  examine changes to 'r'? [Ynesfdaq?] y
-  
   @@ -0,0 +1,1 @@
   +$Id$
   record this change to 'r'? [Ynesfdaq?] y
--- a/tests/test-largefiles-misc.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-largefiles-misc.t	Wed Apr 17 13:41:18 2019 -0400
@@ -578,7 +578,7 @@
   $ echo moremore >> anotherlarge
   $ hg revert anotherlarge -v --config 'ui.origbackuppath=.hg/origbackups'
   creating directory: $TESTTMP/addrm2/.hg/origbackups/.hglf/sub
-  saving current version of ../.hglf/sub/anotherlarge as $TESTTMP/addrm2/.hg/origbackups/.hglf/sub/anotherlarge
+  saving current version of ../.hglf/sub/anotherlarge as ../.hg/origbackups/.hglf/sub/anotherlarge
   reverting ../.hglf/sub/anotherlarge
   creating directory: $TESTTMP/addrm2/.hg/origbackups/sub
   found 90c622cf65cebe75c5842f9136c459333faf392e in store
--- a/tests/test-largefiles-small-disk.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-largefiles-small-disk.t	Wed Apr 17 13:41:18 2019 -0400
@@ -9,7 +9,7 @@
   > #
   > # this makes the original largefiles code abort:
   > _origcopyfileobj = shutil.copyfileobj
-  > def copyfileobj(fsrc, fdst, length=16*1024):
+  > def copyfileobj(fsrc, fdst, length=16 * 1024):
   >     # allow journal files (used by transaction) to be written
   >     if b'journal.' in fdst.name:
   >         return _origcopyfileobj(fsrc, fdst, length)
--- a/tests/test-largefiles-wireproto.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-largefiles-wireproto.t	Wed Apr 17 13:41:18 2019 -0400
@@ -420,20 +420,8 @@
   $ rm "${USERCACHE}"/*
 
   $ cd ..
-  $ cat << EOT > userpass.py
-  > import base64
-  > from mercurial.hgweb import common
-  > def perform_authentication(hgweb, req, op):
-  >     auth = req.headers.get(b'Authorization')
-  >     if not auth:
-  >         raise common.ErrorResponse(common.HTTP_UNAUTHORIZED, b'who',
-  >                 [(b'WWW-Authenticate', b'Basic Realm="mercurial"')])
-  >     if base64.b64decode(auth.split()[1]).split(b':', 1) != [b'user', b'pass']:
-  >         raise common.ErrorResponse(common.HTTP_FORBIDDEN, b'no')
-  > def extsetup(ui):
-  >     common.permhooks.insert(0, perform_authentication)
-  > EOT
-  $ hg serve --config extensions.x=userpass.py -R credentialmain \
+
+  $ hg serve --config extensions.x=$TESTDIR/httpserverauth.py -R credentialmain \
   >          -d -p $HGPORT --pid-file hg.pid -A access.log
   $ cat hg.pid >> $DAEMON_PIDS
   $ cat << EOF > get_pass.py
--- a/tests/test-lfs-serve-access.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-lfs-serve-access.t	Wed Apr 17 13:41:18 2019 -0400
@@ -227,9 +227,9 @@
   >             # One time simulation of a read error
   >             if _readerr:
   >                 _readerr = False
-  >                 raise IOError(errno.EIO, '%s: I/O error' % oid)
+  >                 raise IOError(errno.EIO, r'%s: I/O error' % oid.decode("utf-8"))
   >             # Simulate corrupt content on client download
-  >             blobstore._verify(oid, 'dummy content')
+  >             blobstore._verify(oid, b'dummy content')
   > 
   >         def verify(self, oid):
   >             '''Called in the server to populate the Batch API response,
@@ -240,7 +240,7 @@
   >             global _numverifies
   >             _numverifies += 1
   >             if _numverifies <= 2:
-  >                 raise IOError(errno.EIO, '%s: I/O error' % oid)
+  >                 raise IOError(errno.EIO, r'%s: I/O error' % oid.decode("utf-8"))
   >             return super(badstore, self).verify(oid)
   > 
   >     store.__class__ = badstore
@@ -340,14 +340,14 @@
   $LOCALIP - - [$ERRDATE$] HG error:  Exception happened while processing request '/.git/info/lfs/objects/batch': (glob)
   $LOCALIP - - [$ERRDATE$] HG error:  Traceback (most recent call last): (glob)
   $LOCALIP - - [$ERRDATE$] HG error:      verifies = store.verify(oid) (glob)
-  $LOCALIP - - [$ERRDATE$] HG error:      raise IOError(errno.EIO, '%s: I/O error' % oid) (glob)
-  $LOCALIP - - [$ERRDATE$] HG error:  IOError: [Errno 5] f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e: I/O error (glob)
+  $LOCALIP - - [$ERRDATE$] HG error:      raise IOError(errno.EIO, r'%s: I/O error' % oid.decode("utf-8")) (glob)
+  $LOCALIP - - [$ERRDATE$] HG error:  *Error: [Errno 5] f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e: I/O error (glob)
   $LOCALIP - - [$ERRDATE$] HG error:   (glob)
   $LOCALIP - - [$ERRDATE$] HG error:  Exception happened while processing request '/.git/info/lfs/objects/batch': (glob)
   $LOCALIP - - [$ERRDATE$] HG error:  Traceback (most recent call last): (glob)
   $LOCALIP - - [$ERRDATE$] HG error:      verifies = store.verify(oid) (glob)
-  $LOCALIP - - [$ERRDATE$] HG error:      raise IOError(errno.EIO, '%s: I/O error' % oid) (glob)
-  $LOCALIP - - [$ERRDATE$] HG error:  IOError: [Errno 5] b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c: I/O error (glob)
+  $LOCALIP - - [$ERRDATE$] HG error:      raise IOError(errno.EIO, r'%s: I/O error' % oid.decode("utf-8")) (glob)
+  $LOCALIP - - [$ERRDATE$] HG error:  *Error: [Errno 5] b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c: I/O error (glob)
   $LOCALIP - - [$ERRDATE$] HG error:   (glob)
   $LOCALIP - - [$ERRDATE$] HG error:  Exception happened while processing request '/.hg/lfs/objects/b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c': (glob)
   $LOCALIP - - [$ERRDATE$] HG error:  Traceback (most recent call last): (glob)
@@ -363,19 +363,19 @@
       for chunk in self.server.application(env, self._start_response):
       for r in self._runwsgi(req, res, repo):
       rctx, req, res, self.check_perm)
-      return func(*(args + a), **kw)
+      return func(*(args + a), **kw) (no-py3 !)
       lambda perm:
       res.setbodybytes(localstore.read(oid))
       blob = self._read(self.vfs, oid, verify)
-      raise IOError(errno.EIO, '%s: I/O error' % oid)
-  IOError: [Errno 5] 276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d: I/O error
+      raise IOError(errno.EIO, r'%s: I/O error' % oid.decode("utf-8"))
+  *Error: [Errno 5] 276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d: I/O error (glob)
   
   $LOCALIP - - [$ERRDATE$] HG error:  Exception happened while processing request '/.hg/lfs/objects/276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d': (glob)
   $LOCALIP - - [$ERRDATE$] HG error:  Traceback (most recent call last): (glob)
   $LOCALIP - - [$ERRDATE$] HG error:      res.setbodybytes(localstore.read(oid)) (glob)
   $LOCALIP - - [$ERRDATE$] HG error:      blob = self._read(self.vfs, oid, verify) (glob)
-  $LOCALIP - - [$ERRDATE$] HG error:      blobstore._verify(oid, 'dummy content') (glob)
-  $LOCALIP - - [$ERRDATE$] HG error:      hint=_('run hg verify')) (glob)
+  $LOCALIP - - [$ERRDATE$] HG error:      blobstore._verify(oid, b'dummy content') (glob)
+  $LOCALIP - - [$ERRDATE$] HG error:      hint=_(b'run hg verify')) (glob)
   $LOCALIP - - [$ERRDATE$] HG error:  LfsCorruptionError: detected corrupt lfs object: 276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d (glob)
   $LOCALIP - - [$ERRDATE$] HG error:   (glob)
 
@@ -394,22 +394,7 @@
   > l.password=pass
   > EOF
 
-  $ cat << EOF > userpass.py
-  > import base64
-  > from mercurial.hgweb import common
-  > def perform_authentication(hgweb, req, op):
-  >     auth = req.headers.get(b'Authorization')
-  >     if not auth:
-  >         raise common.ErrorResponse(common.HTTP_UNAUTHORIZED, b'who',
-  >                 [(b'WWW-Authenticate', b'Basic Realm="mercurial"')])
-  >     if base64.b64decode(auth.split()[1]).split(b':', 1) != [b'user',
-  >                                                             b'pass']:
-  >         raise common.ErrorResponse(common.HTTP_FORBIDDEN, b'no')
-  > def extsetup(ui):
-  >     common.permhooks.insert(0, perform_authentication)
-  > EOF
-
-  $ hg --config extensions.x=$TESTTMP/userpass.py \
+  $ hg --config extensions.x=$TESTDIR/httpserverauth.py \
   >    -R server serve -d -p $HGPORT1 --pid-file=hg.pid \
   >    -A $TESTTMP/access.log -E $TESTTMP/errors.log
   $ mv hg.pid $DAEMON_PIDS
@@ -437,6 +422,32 @@
 
   $ echo 'another blob' > auth_clone/lfs.blob
   $ hg -R auth_clone ci -Aqm 'add blob'
+
+  $ cat > use_digests.py << EOF
+  > from mercurial import (
+  >     exthelper,
+  >     url,
+  > )
+  > 
+  > eh = exthelper.exthelper()
+  > uisetup = eh.finaluisetup
+  > 
+  > @eh.wrapfunction(url, 'opener')
+  > def urlopener(orig, *args, **kwargs):
+  >     opener = orig(*args, **kwargs)
+  >     opener.addheaders.append((r'X-HgTest-AuthType', r'Digest'))
+  >     return opener
+  > EOF
+
+Test that Digest Auth fails gracefully before testing the successful Basic Auth
+
+  $ hg -R auth_clone push --config extensions.x=use_digests.py
+  pushing to http://localhost:$HGPORT1/
+  searching for changes
+  abort: LFS HTTP error: HTTP Error 401: the server must support Basic Authentication!
+  (api=http://localhost:$HGPORT1/.git/info/lfs/objects/batch, action=upload)
+  [255]
+
   $ hg -R auth_clone --debug push | egrep '^[{}]|  '
   {
     "objects": [
@@ -468,6 +479,19 @@
   $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 401 - (glob)
   $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
   $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d HTTP/1.1" 200 - (glob)
+  $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 401 - x-hgtest-authtype:Digest (glob)
+  $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - x-hgtest-authtype:Digest (glob)
+  $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 401 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D525251863cad618e55d483555f3d00a2ca99597e+4d9397055dc0c205f3132f331f36353ab1a525a3 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
+  $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D525251863cad618e55d483555f3d00a2ca99597e+4d9397055dc0c205f3132f331f36353ab1a525a3 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
+  $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 401 - x-hgarg-1:namespace=phases x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
+  $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=phases x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
+  $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 401 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
+  $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
+  $LOCALIP - - [$LOGDATE$] "GET /?cmd=branchmap HTTP/1.1" 401 - x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
+  $LOCALIP - - [$LOGDATE$] "GET /?cmd=branchmap HTTP/1.1" 200 - x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
+  $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 401 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
+  $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
+  $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 401 - x-hgtest-authtype:Digest (glob)
   $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 401 - (glob)
   $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
   $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D525251863cad618e55d483555f3d00a2ca99597e+4d9397055dc0c205f3132f331f36353ab1a525a3 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
--- a/tests/test-lfs-serve.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-lfs-serve.t	Wed Apr 17 13:41:18 2019 -0400
@@ -51,16 +51,15 @@
   >     opts[b'manifest'] = False
   >     opts[b'dir'] = False
   >     rl = cmdutil.openrevlog(repo, b'debugprocessors', file_, opts)
-  >     for flag, proc in rl._flagprocessors.iteritems():
+  >     for flag, proc in rl._flagprocessors.items():
   >         ui.status(b"registered processor '%#x'\n" % (flag))
   > EOF
 
 Skip the experimental.changegroup3=True config.  Failure to agree on this comes
-first, and causes a "ValueError: no common changegroup version" or "abort:
-HTTP Error 500: Internal Server Error", if the extension is only loaded on one
-side.  If that *is* enabled, the subsequent failure is "abort: missing processor
-for flag '0x2000'!" if the extension is only loaded on one side (possibly also
-masked by the Internal Server Error message).
+first, and causes an "abort: no common changegroup version" if the extension is
+only loaded on one side. If that *is* enabled, the subsequent failure is "abort:
+missing processor for flag '0x2000'!" if the extension is only loaded on one side
+(possibly also masked by the Internal Server Error message).
   $ cat >> $HGRCPATH <<EOF
   > [extensions]
   > debugprocessors = $TESTTMP/debugprocessors.py
@@ -110,14 +109,14 @@
   ... def diff(server):
   ...     readchannel(server)
   ...     # run an arbitrary command in the repo with the extension loaded
-  ...     runcommand(server, ['id', '-R', '../cmdservelfs'])
+  ...     runcommand(server, [b'id', b'-R', b'../cmdservelfs'])
   ...     # now run a command in a repo without the extension to ensure that
   ...     # files are added safely..
-  ...     runcommand(server, ['ci', '-Aqm', 'non-lfs'])
+  ...     runcommand(server, [b'ci', b'-Aqm', b'non-lfs'])
   ...     # .. and that scmutil.prefetchfiles() safely no-ops..
-  ...     runcommand(server, ['diff', '-r', '.~1'])
+  ...     runcommand(server, [b'diff', b'-r', b'.~1'])
   ...     # .. and that debugupgraderepo safely no-ops.
-  ...     runcommand(server, ['debugupgraderepo', '-q', '--run'])
+  ...     runcommand(server, [b'debugupgraderepo', b'-q', b'--run'])
   *** runcommand id -R ../cmdservelfs
   000000000000 tip
   *** runcommand ci -Aqm non-lfs
@@ -257,12 +256,12 @@
   ... def addrequirement(server):
   ...     readchannel(server)
   ...     # change the repo in a way that adds the lfs requirement
-  ...     runcommand(server, ['pull', '-qu'])
+  ...     runcommand(server, [b'pull', b'-qu'])
   ...     # Now cause the requirement adding hook to fire again, without going
   ...     # through reposetup() again.
   ...     with open('file.txt', 'wb') as fp:
-  ...         fp.write('data')
-  ...     runcommand(server, ['ci', '-Aqm', 'non-lfs'])
+  ...         fp.write(b'data')
+  ...     runcommand(server, [b'ci', b'-Aqm', b'non-lfs'])
   *** runcommand pull -qu
   *** runcommand ci -Aqm non-lfs
 
@@ -317,8 +316,11 @@
 TODO: fail more gracefully.
 
   $ hg init $TESTTMP/client4_pull
-  $ hg -R $TESTTMP/client4_pull pull -q http://localhost:$HGPORT
-  abort: HTTP Error 500: Internal Server Error
+  $ hg -R $TESTTMP/client4_pull pull http://localhost:$HGPORT
+  pulling from http://localhost:$HGPORT/
+  requesting all changes
+  remote: abort: no common changegroup version
+  abort: pull failed on remote
   [255]
   $ grep 'lfs' $TESTTMP/client4_pull/.hg/requires $SERVER_REQUIRES
   $TESTTMP/server/.hg/requires:lfs
@@ -359,22 +361,24 @@
   $ cp $HGRCPATH.orig $HGRCPATH
 
   >>> from __future__ import absolute_import
-  >>> from hgclient import check, readchannel, runcommand
+  >>> from hgclient import bprint, check, readchannel, runcommand, stdout
   >>> @check
   ... def checkflags(server):
   ...     readchannel(server)
-  ...     print('')
-  ...     print('# LFS required- both lfs and non-lfs revlogs have 0x2000 flag')
-  ...     runcommand(server, ['debugprocessors', 'lfs.bin', '-R',
-  ...                '../server'])
-  ...     runcommand(server, ['debugprocessors', 'nonlfs2.txt', '-R',
-  ...                '../server'])
-  ...     runcommand(server, ['config', 'extensions', '--cwd',
-  ...                '../server'])
+  ...     bprint(b'')
+  ...     bprint(b'# LFS required- both lfs and non-lfs revlogs have 0x2000 flag')
+  ...     stdout.flush()
+  ...     runcommand(server, [b'debugprocessors', b'lfs.bin', b'-R',
+  ...                b'../server'])
+  ...     runcommand(server, [b'debugprocessors', b'nonlfs2.txt', b'-R',
+  ...                b'../server'])
+  ...     runcommand(server, [b'config', b'extensions', b'--cwd',
+  ...                b'../server'])
   ... 
-  ...     print("\n# LFS not enabled- revlogs don't have 0x2000 flag")
-  ...     runcommand(server, ['debugprocessors', 'nonlfs3.txt'])
-  ...     runcommand(server, ['config', 'extensions'])
+  ...     bprint(b"\n# LFS not enabled- revlogs don't have 0x2000 flag")
+  ...     stdout.flush()
+  ...     runcommand(server, [b'debugprocessors', b'nonlfs3.txt'])
+  ...     runcommand(server, [b'config', b'extensions'])
   
   # LFS required- both lfs and non-lfs revlogs have 0x2000 flag
   *** runcommand debugprocessors lfs.bin -R ../server
@@ -403,28 +407,31 @@
   > EOF
 
   >>> from __future__ import absolute_import, print_function
-  >>> from hgclient import check, readchannel, runcommand
+  >>> from hgclient import bprint, check, readchannel, runcommand, stdout
   >>> @check
   ... def checkflags2(server):
   ...     readchannel(server)
-  ...     print('')
-  ...     print('# LFS enabled- both lfs and non-lfs revlogs have 0x2000 flag')
-  ...     runcommand(server, ['debugprocessors', 'lfs.bin', '-R',
-  ...                '../server'])
-  ...     runcommand(server, ['debugprocessors', 'nonlfs2.txt', '-R',
-  ...                '../server'])
-  ...     runcommand(server, ['config', 'extensions', '--cwd',
-  ...                '../server'])
+  ...     bprint(b'')
+  ...     bprint(b'# LFS enabled- both lfs and non-lfs revlogs have 0x2000 flag')
+  ...     stdout.flush()
+  ...     runcommand(server, [b'debugprocessors', b'lfs.bin', b'-R',
+  ...                b'../server'])
+  ...     runcommand(server, [b'debugprocessors', b'nonlfs2.txt', b'-R',
+  ...                b'../server'])
+  ...     runcommand(server, [b'config', b'extensions', b'--cwd',
+  ...                b'../server'])
   ... 
-  ...     print('\n# LFS enabled without requirement- revlogs have 0x2000 flag')
-  ...     runcommand(server, ['debugprocessors', 'nonlfs3.txt'])
-  ...     runcommand(server, ['config', 'extensions'])
+  ...     bprint(b'\n# LFS enabled without requirement- revlogs have 0x2000 flag')
+  ...     stdout.flush()
+  ...     runcommand(server, [b'debugprocessors', b'nonlfs3.txt'])
+  ...     runcommand(server, [b'config', b'extensions'])
   ... 
-  ...     print("\n# LFS disabled locally- revlogs don't have 0x2000 flag")
-  ...     runcommand(server, ['debugprocessors', 'nonlfs.txt', '-R',
-  ...                '../nonlfs'])
-  ...     runcommand(server, ['config', 'extensions', '--cwd',
-  ...                '../nonlfs'])
+  ...     bprint(b"\n# LFS disabled locally- revlogs don't have 0x2000 flag")
+  ...     stdout.flush()
+  ...     runcommand(server, [b'debugprocessors', b'nonlfs.txt', b'-R',
+  ...                b'../nonlfs'])
+  ...     runcommand(server, [b'config', b'extensions', b'--cwd',
+  ...                b'../nonlfs'])
   
   # LFS enabled- both lfs and non-lfs revlogs have 0x2000 flag
   *** runcommand debugprocessors lfs.bin -R ../server
@@ -657,10 +664,4 @@
 
   $ "$PYTHON" $TESTDIR/killdaemons.py $DAEMON_PIDS
 
-#if lfsremote-on
-  $ cat $TESTTMP/errors.log | grep '^[A-Z]'
-  Traceback (most recent call last):
-  ValueError: no common changegroup version
-#else
   $ cat $TESTTMP/errors.log
-#endif
--- a/tests/test-linelog.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-linelog.py	Wed Apr 17 13:41:18 2019 -0400
@@ -15,7 +15,6 @@
 def _genedits(seed, endrev):
     lines = []
     random.seed(seed)
-    rev = 0
     for rev in range(0, endrev):
         n = len(lines)
         a1 = random.randint(0, n)
--- a/tests/test-locate.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-locate.t	Wed Apr 17 13:41:18 2019 -0400
@@ -123,6 +123,24 @@
   ../t.h
   ../t/e.h
   ../t/x
+  $ hg files --config ui.relative-paths=yes
+  ../b
+  ../dir.h/foo
+  ../t.h
+  ../t/e.h
+  ../t/x
+  $ hg files --config ui.relative-paths=no
+  b
+  dir.h/foo
+  t.h
+  t/e.h
+  t/x
+  $ hg files --config ui.relative-paths=legacy
+  ../b
+  ../dir.h/foo
+  ../t.h
+  ../t/e.h
+  ../t/x
 
   $ hg locate b
   ../b
--- a/tests/test-lock.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-lock.py	Wed Apr 17 13:41:18 2019 -0400
@@ -141,7 +141,7 @@
         state.assertacquirecalled(True)
 
         # fake a fork
-        forklock = copy.deepcopy(lock)
+        forklock = copy.copy(lock)
         forklock._pidoffset = 1
         forklock.release()
         state.assertreleasecalled(False)
@@ -238,7 +238,7 @@
             childstate.assertacquirecalled(True)
 
             # fork the child lock
-            forkchildlock = copy.deepcopy(childlock)
+            forkchildlock = copy.copy(childlock)
             forkchildlock._pidoffset += 1
             forkchildlock.release()
             childstate.assertreleasecalled(False)
@@ -290,7 +290,7 @@
             self.fail("unexpected lock acquisition")
         except error.LockHeld as why:
             self.assertTrue(why.errno == errno.ETIMEDOUT)
-            self.assertTrue(why.locker == "")
+            self.assertTrue(why.locker == b"")
             state.assertlockexists(False)
 
 if __name__ == '__main__':
--- a/tests/test-manifest.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-manifest.py	Wed Apr 17 13:41:18 2019 -0400
@@ -289,8 +289,7 @@
         the resulting manifest.'''
         m = self.parsemanifest(A_HUGE_MANIFEST)
 
-        match = matchmod.match(b'/', b'',
-                [b'file1', b'file200', b'file300'], exact=True)
+        match = matchmod.exact([b'file1', b'file200', b'file300'])
         m2 = m.matches(match)
 
         w = (b'file1\0%sx\n'
@@ -304,10 +303,8 @@
         '''
         m = self.parsemanifest(A_DEEPER_MANIFEST)
 
-        match = matchmod.match(b'/', b'',
-                [b'a/b/c/bar.txt', b'a/b/d/qux.py',
-                 b'readme.txt', b'nonexistent'],
-                exact=True)
+        match = matchmod.exact([b'a/b/c/bar.txt', b'a/b/d/qux.py',
+                                b'readme.txt', b'nonexistent'])
         m2 = m.matches(match)
 
         self.assertEqual(
@@ -330,7 +327,7 @@
         m = self.parsemanifest(A_HUGE_MANIFEST)
 
         flist = m.keys()[80:300]
-        match = matchmod.match(b'/', b'', flist, exact=True)
+        match = matchmod.exact(flist)
         m2 = m.matches(match)
 
         self.assertEqual(flist, m2.keys())
@@ -364,7 +361,7 @@
         against a directory.'''
         m = self.parsemanifest(A_DEEPER_MANIFEST)
 
-        match = matchmod.match(b'/', b'', [b'a/b'], exact=True)
+        match = matchmod.exact([b'a/b'])
         m2 = m.matches(match)
 
         self.assertEqual([], m2.keys())
--- a/tests/test-manifest.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-manifest.t	Wed Apr 17 13:41:18 2019 -0400
@@ -93,3 +93,111 @@
   $ hg manifest -r tip tip
   abort: please specify just one revision
   [255]
+
+Testing the manifest full text cache utility
+--------------------------------------------
+
+Reminder of the manifest log content
+
+  $ hg log --debug | grep 'manifest:'
+  manifest:    1:1e01206b1d2f72bd55f2a33fa8ccad74144825b7
+  manifest:    0:fce2a30dedad1eef4da95ca1dc0004157aa527cf
+
+Showing the content of the cache after the above operations
+
+  $ hg debugmanifestfulltextcache
+  cache contains 1 manifest entries, in order of most to least recent:
+  id: 1e01206b1d2f72bd55f2a33fa8ccad74144825b7, size 133 bytes
+  total cache data size 157 bytes, on-disk 157 bytes
+
+(Clearing the cache in case it has any content)
+
+  $ hg debugmanifestfulltextcache --clear
+
+Adding a new persistent entry to the cache
+
+  $ hg debugmanifestfulltextcache --add 1e01206b1d2f72bd55f2a33fa8ccad74144825b7
+
+  $ hg debugmanifestfulltextcache
+  cache contains 1 manifest entries, in order of most to least recent:
+  id: 1e01206b1d2f72bd55f2a33fa8ccad74144825b7, size 133 bytes
+  total cache data size 157 bytes, on-disk 157 bytes
+
+Check we don't duplicate the entry (added from the debug command)
+
+  $ hg debugmanifestfulltextcache --add 1e01206b1d2f72bd55f2a33fa8ccad74144825b7
+  $ hg debugmanifestfulltextcache
+  cache contains 1 manifest entries, in order of most to least recent:
+  id: 1e01206b1d2f72bd55f2a33fa8ccad74144825b7, size 133 bytes
+  total cache data size 157 bytes, on-disk 157 bytes
+
+Adding a second entry
+
+  $ hg debugmanifestfulltextcache --add fce2a30dedad1eef4da95ca1dc0004157aa527cf
+  $ hg debugmanifestfulltextcache
+  cache contains 2 manifest entries, in order of most to least recent:
+  id: fce2a30dedad1eef4da95ca1dc0004157aa527cf, size 87 bytes
+  id: 1e01206b1d2f72bd55f2a33fa8ccad74144825b7, size 133 bytes
+  total cache data size 268 bytes, on-disk 268 bytes
+
+Accessing the initial entry again refreshes its order
+
+  $ hg debugmanifestfulltextcache --add 1e01206b1d2f72bd55f2a33fa8ccad74144825b7
+  $ hg debugmanifestfulltextcache
+  cache contains 2 manifest entries, in order of most to least recent:
+  id: 1e01206b1d2f72bd55f2a33fa8ccad74144825b7, size 133 bytes
+  id: fce2a30dedad1eef4da95ca1dc0004157aa527cf, size 87 bytes
+  total cache data size 268 bytes, on-disk 268 bytes
+
+Check cache clearing
+
+  $ hg debugmanifestfulltextcache --clear
+  $ hg debugmanifestfulltextcache
+  cache empty
+
+Check adding multiple entries in one go:
+
+  $ hg debugmanifestfulltextcache --add fce2a30dedad1eef4da95ca1dc0004157aa527cf  --add 1e01206b1d2f72bd55f2a33fa8ccad74144825b7
+  $ hg debugmanifestfulltextcache
+  cache contains 2 manifest entries, in order of most to least recent:
+  id: 1e01206b1d2f72bd55f2a33fa8ccad74144825b7, size 133 bytes
+  id: fce2a30dedad1eef4da95ca1dc0004157aa527cf, size 87 bytes
+  total cache data size 268 bytes, on-disk 268 bytes
+  $ hg debugmanifestfulltextcache --clear
+
+Test caching behavior on actual operation
+-----------------------------------------
+
+Make sure we start empty
+
+  $ hg debugmanifestfulltextcache
+  cache empty
+
+Commit should have the new node cached:
+
+  $ echo a >> b/a
+  $ hg commit -m 'foo'
+  $ hg debugmanifestfulltextcache
+  cache contains 2 manifest entries, in order of most to least recent:
+  id: 26b8653b67af8c1a0a0317c4ee8dac50a41fdb65, size 133 bytes
+  id: 1e01206b1d2f72bd55f2a33fa8ccad74144825b7, size 133 bytes
+  total cache data size 314 bytes, on-disk 314 bytes
+  $ hg log -r 'ancestors(., 1)' --debug | grep 'manifest:'
+  manifest:    1:1e01206b1d2f72bd55f2a33fa8ccad74144825b7
+  manifest:    2:26b8653b67af8c1a0a0317c4ee8dac50a41fdb65
+
+hg update should warm the cache too
+
+(force a dirstate check to avoid flakiness in manifest order)
+  $ hg debugrebuilddirstate
+
+  $ hg update 0
+  0 files updated, 0 files merged, 1 files removed, 0 files unresolved
+  $ hg debugmanifestfulltextcache
+  cache contains 3 manifest entries, in order of most to least recent:
+  id: fce2a30dedad1eef4da95ca1dc0004157aa527cf, size 87 bytes
+  id: 26b8653b67af8c1a0a0317c4ee8dac50a41fdb65, size 133 bytes
+  id: 1e01206b1d2f72bd55f2a33fa8ccad74144825b7, size 133 bytes
+  total cache data size 425 bytes, on-disk 425 bytes
+  $ hg log -r '0' --debug | grep 'manifest:'
+  manifest:    0:fce2a30dedad1eef4da95ca1dc0004157aa527cf
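The transcripts above show the cache acting as a recency-ordered store: re-adding an existing id moves it to the front without duplicating it, and entries are reported most-recent first. A toy model of that ordering in plain Python (for illustration only, not the actual cache implementation):

    from collections import OrderedDict

    class toycache(object):
        def __init__(self):
            self._data = OrderedDict()

        def add(self, key, value):
            self._data.pop(key, None)  # re-adding refreshes recency, no duplicate
            self._data[key] = value

        def entries(self):
            return list(reversed(self._data))  # most recent first

    c = toycache()
    c.add('fce2a30d', b'...')
    c.add('1e01206b', b'...')
    c.add('fce2a30d', b'...')  # access refreshes its position
    assert c.entries() == ['fce2a30d', '1e01206b']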
--- a/tests/test-match.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-match.py	Wed Apr 17 13:41:18 2019 -0400
@@ -12,36 +12,36 @@
 class BaseMatcherTests(unittest.TestCase):
 
     def testVisitdir(self):
-        m = matchmod.basematcher(b'', b'')
+        m = matchmod.basematcher()
         self.assertTrue(m.visitdir(b'.'))
         self.assertTrue(m.visitdir(b'dir'))
 
     def testVisitchildrenset(self):
-        m = matchmod.basematcher(b'', b'')
+        m = matchmod.basematcher()
         self.assertEqual(m.visitchildrenset(b'.'), b'this')
         self.assertEqual(m.visitchildrenset(b'dir'), b'this')
 
 class AlwaysMatcherTests(unittest.TestCase):
 
     def testVisitdir(self):
-        m = matchmod.alwaysmatcher(b'', b'')
+        m = matchmod.alwaysmatcher()
         self.assertEqual(m.visitdir(b'.'), b'all')
         self.assertEqual(m.visitdir(b'dir'), b'all')
 
     def testVisitchildrenset(self):
-        m = matchmod.alwaysmatcher(b'', b'')
+        m = matchmod.alwaysmatcher()
         self.assertEqual(m.visitchildrenset(b'.'), b'all')
         self.assertEqual(m.visitchildrenset(b'dir'), b'all')
 
 class NeverMatcherTests(unittest.TestCase):
 
     def testVisitdir(self):
-        m = matchmod.nevermatcher(b'', b'')
+        m = matchmod.nevermatcher()
         self.assertFalse(m.visitdir(b'.'))
         self.assertFalse(m.visitdir(b'dir'))
 
     def testVisitchildrenset(self):
-        m = matchmod.nevermatcher(b'', b'')
+        m = matchmod.nevermatcher()
         self.assertEqual(m.visitchildrenset(b'.'), set())
         self.assertEqual(m.visitchildrenset(b'dir'), set())
 
@@ -50,12 +50,12 @@
     # this is equivalent to BaseMatcherTests.
 
     def testVisitdir(self):
-        m = matchmod.predicatematcher(b'', b'', lambda *a: False)
+        m = matchmod.predicatematcher(lambda *a: False)
         self.assertTrue(m.visitdir(b'.'))
         self.assertTrue(m.visitdir(b'dir'))
 
     def testVisitchildrenset(self):
-        m = matchmod.predicatematcher(b'', b'', lambda *a: False)
+        m = matchmod.predicatematcher(lambda *a: False)
         self.assertEqual(m.visitchildrenset(b'.'), b'this')
         self.assertEqual(m.visitchildrenset(b'dir'), b'this')
 
@@ -185,8 +185,7 @@
 class ExactMatcherTests(unittest.TestCase):
 
     def testVisitdir(self):
-        m = matchmod.match(b'x', b'', patterns=[b'dir/subdir/foo.txt'],
-                           exact=True)
+        m = matchmod.exact(files=[b'dir/subdir/foo.txt'])
         assert isinstance(m, matchmod.exactmatcher)
         self.assertTrue(m.visitdir(b'.'))
         self.assertTrue(m.visitdir(b'dir'))
@@ -197,8 +196,7 @@
         self.assertFalse(m.visitdir(b'folder'))
 
     def testVisitchildrenset(self):
-        m = matchmod.match(b'x', b'', patterns=[b'dir/subdir/foo.txt'],
-                           exact=True)
+        m = matchmod.exact(files=[b'dir/subdir/foo.txt'])
         assert isinstance(m, matchmod.exactmatcher)
         self.assertEqual(m.visitchildrenset(b'.'), {b'dir'})
         self.assertEqual(m.visitchildrenset(b'dir'), {b'subdir'})
@@ -208,12 +206,11 @@
         self.assertEqual(m.visitchildrenset(b'folder'), set())
 
     def testVisitchildrensetFilesAndDirs(self):
-        m = matchmod.match(b'x', b'', patterns=[b'rootfile.txt',
-                                                b'a/file1.txt',
-                                                b'a/b/file2.txt',
-                                                # no file in a/b/c
-                                                b'a/b/c/d/file4.txt'],
-                           exact=True)
+        m = matchmod.exact(files=[b'rootfile.txt',
+                                  b'a/file1.txt',
+                                  b'a/b/file2.txt',
+                                  # no file in a/b/c
+                                  b'a/b/c/d/file4.txt'])
         assert isinstance(m, matchmod.exactmatcher)
         self.assertEqual(m.visitchildrenset(b'.'), {b'a', b'rootfile.txt'})
         self.assertEqual(m.visitchildrenset(b'a'), {b'b', b'file1.txt'})
@@ -226,8 +223,8 @@
 class DifferenceMatcherTests(unittest.TestCase):
 
     def testVisitdirM2always(self):
-        m1 = matchmod.alwaysmatcher(b'', b'')
-        m2 = matchmod.alwaysmatcher(b'', b'')
+        m1 = matchmod.alwaysmatcher()
+        m2 = matchmod.alwaysmatcher()
         dm = matchmod.differencematcher(m1, m2)
         # dm should be equivalent to a nevermatcher.
         self.assertFalse(dm.visitdir(b'.'))
@@ -239,8 +236,8 @@
         self.assertFalse(dm.visitdir(b'folder'))
 
     def testVisitchildrensetM2always(self):
-        m1 = matchmod.alwaysmatcher(b'', b'')
-        m2 = matchmod.alwaysmatcher(b'', b'')
+        m1 = matchmod.alwaysmatcher()
+        m2 = matchmod.alwaysmatcher()
         dm = matchmod.differencematcher(m1, m2)
         # dm should be equivalent to a nevermatcher.
         self.assertEqual(dm.visitchildrenset(b'.'), set())
@@ -252,27 +249,26 @@
         self.assertEqual(dm.visitchildrenset(b'folder'), set())
 
     def testVisitdirM2never(self):
-        m1 = matchmod.alwaysmatcher(b'', b'')
-        m2 = matchmod.nevermatcher(b'', b'')
+        m1 = matchmod.alwaysmatcher()
+        m2 = matchmod.nevermatcher()
         dm = matchmod.differencematcher(m1, m2)
-        # dm should be equivalent to a alwaysmatcher. OPT: if m2 is a
-        # nevermatcher, we could return 'all' for these.
+        # dm should be equivalent to an alwaysmatcher.
         #
         # We're testing Equal-to-True instead of just 'assertTrue' since
         # assertTrue does NOT verify that it's a bool, just that it's truthy.
         # While we may want to eventually make these return 'all', they should
         # not currently do so.
-        self.assertEqual(dm.visitdir(b'.'), True)
-        self.assertEqual(dm.visitdir(b'dir'), True)
-        self.assertEqual(dm.visitdir(b'dir/subdir'), True)
-        self.assertEqual(dm.visitdir(b'dir/subdir/z'), True)
-        self.assertEqual(dm.visitdir(b'dir/foo'), True)
-        self.assertEqual(dm.visitdir(b'dir/subdir/x'), True)
-        self.assertEqual(dm.visitdir(b'folder'), True)
+        self.assertEqual(dm.visitdir(b'.'), b'all')
+        self.assertEqual(dm.visitdir(b'dir'), b'all')
+        self.assertEqual(dm.visitdir(b'dir/subdir'), b'all')
+        self.assertEqual(dm.visitdir(b'dir/subdir/z'), b'all')
+        self.assertEqual(dm.visitdir(b'dir/foo'), b'all')
+        self.assertEqual(dm.visitdir(b'dir/subdir/x'), b'all')
+        self.assertEqual(dm.visitdir(b'folder'), b'all')
 
     def testVisitchildrensetM2never(self):
-        m1 = matchmod.alwaysmatcher(b'', b'')
-        m2 = matchmod.nevermatcher(b'', b'')
+        m1 = matchmod.alwaysmatcher()
+        m2 = matchmod.nevermatcher()
         dm = matchmod.differencematcher(m1, m2)
         # dm should be equivalent to an alwaysmatcher.
         self.assertEqual(dm.visitchildrenset(b'.'), b'all')
@@ -284,7 +280,7 @@
         self.assertEqual(dm.visitchildrenset(b'folder'), b'all')
 
     def testVisitdirM2SubdirPrefix(self):
-        m1 = matchmod.alwaysmatcher(b'', b'')
+        m1 = matchmod.alwaysmatcher()
         m2 = matchmod.match(b'', b'', patterns=[b'path:dir/subdir'])
         dm = matchmod.differencematcher(m1, m2)
         self.assertEqual(dm.visitdir(b'.'), True)
@@ -295,12 +291,11 @@
         # an 'all' pattern, just True.
         self.assertEqual(dm.visitdir(b'dir/subdir/z'), True)
         self.assertEqual(dm.visitdir(b'dir/subdir/x'), True)
-        # OPT: We could return 'all' for these.
-        self.assertEqual(dm.visitdir(b'dir/foo'), True)
-        self.assertEqual(dm.visitdir(b'folder'), True)
+        self.assertEqual(dm.visitdir(b'dir/foo'), b'all')
+        self.assertEqual(dm.visitdir(b'folder'), b'all')
 
     def testVisitchildrensetM2SubdirPrefix(self):
-        m1 = matchmod.alwaysmatcher(b'', b'')
+        m1 = matchmod.alwaysmatcher()
         m2 = matchmod.match(b'', b'', patterns=[b'path:dir/subdir'])
         dm = matchmod.differencematcher(m1, m2)
         self.assertEqual(dm.visitchildrenset(b'.'), b'this')
@@ -322,7 +317,7 @@
         dm = matchmod.differencematcher(m1, m2)
         self.assertEqual(dm.visitdir(b'.'), True)
         self.assertEqual(dm.visitdir(b'dir'), True)
-        self.assertEqual(dm.visitdir(b'dir/subdir'), True)
+        self.assertEqual(dm.visitdir(b'dir/subdir'), b'all')
         self.assertFalse(dm.visitdir(b'dir/foo'))
         self.assertFalse(dm.visitdir(b'folder'))
         # OPT: We should probably return False for these; we don't because
@@ -349,8 +344,8 @@
 class IntersectionMatcherTests(unittest.TestCase):
 
     def testVisitdirM2always(self):
-        m1 = matchmod.alwaysmatcher(b'', b'')
-        m2 = matchmod.alwaysmatcher(b'', b'')
+        m1 = matchmod.alwaysmatcher()
+        m2 = matchmod.alwaysmatcher()
         im = matchmod.intersectmatchers(m1, m2)
         # im should be equivalent to an alwaysmatcher.
         self.assertEqual(im.visitdir(b'.'), b'all')
@@ -362,8 +357,8 @@
         self.assertEqual(im.visitdir(b'folder'), b'all')
 
     def testVisitchildrensetM2always(self):
-        m1 = matchmod.alwaysmatcher(b'', b'')
-        m2 = matchmod.alwaysmatcher(b'', b'')
+        m1 = matchmod.alwaysmatcher()
+        m2 = matchmod.alwaysmatcher()
         im = matchmod.intersectmatchers(m1, m2)
         # im should be equivalent to an alwaysmatcher.
         self.assertEqual(im.visitchildrenset(b'.'), b'all')
@@ -375,8 +370,8 @@
         self.assertEqual(im.visitchildrenset(b'folder'), b'all')
 
     def testVisitdirM2never(self):
-        m1 = matchmod.alwaysmatcher(b'', b'')
-        m2 = matchmod.nevermatcher(b'', b'')
+        m1 = matchmod.alwaysmatcher()
+        m2 = matchmod.nevermatcher()
         im = matchmod.intersectmatchers(m1, m2)
         # im should be equivalent to a nevermatcher.
         self.assertFalse(im.visitdir(b'.'))
@@ -388,8 +383,8 @@
         self.assertFalse(im.visitdir(b'folder'))
 
     def testVisitchildrensetM2never(self):
-        m1 = matchmod.alwaysmatcher(b'', b'')
-        m2 = matchmod.nevermatcher(b'', b'')
+        m1 = matchmod.alwaysmatcher()
+        m2 = matchmod.nevermatcher()
         im = matchmod.intersectmatchers(m1, m2)
         # im should be equivalent to a nevermatcher.
         self.assertEqual(im.visitchildrenset(b'.'), set())
@@ -401,7 +396,7 @@
         self.assertEqual(im.visitchildrenset(b'folder'), set())
 
     def testVisitdirM2SubdirPrefix(self):
-        m1 = matchmod.alwaysmatcher(b'', b'')
+        m1 = matchmod.alwaysmatcher()
         m2 = matchmod.match(b'', b'', patterns=[b'path:dir/subdir'])
         im = matchmod.intersectmatchers(m1, m2)
         self.assertEqual(im.visitdir(b'.'), True)
@@ -416,7 +411,7 @@
         self.assertEqual(im.visitdir(b'dir/subdir/x'), True)
 
     def testVisitchildrensetM2SubdirPrefix(self):
-        m1 = matchmod.alwaysmatcher(b'', b'')
+        m1 = matchmod.alwaysmatcher()
         m2 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
         im = matchmod.intersectmatchers(m1, m2)
         self.assertEqual(im.visitchildrenset(b'.'), {b'dir'})
@@ -541,8 +536,8 @@
 class UnionMatcherTests(unittest.TestCase):
 
     def testVisitdirM2always(self):
-        m1 = matchmod.alwaysmatcher(b'', b'')
-        m2 = matchmod.alwaysmatcher(b'', b'')
+        m1 = matchmod.alwaysmatcher()
+        m2 = matchmod.alwaysmatcher()
         um = matchmod.unionmatcher([m1, m2])
         # um should be equivalent to an alwaysmatcher.
         self.assertEqual(um.visitdir(b'.'), b'all')
@@ -554,8 +549,8 @@
         self.assertEqual(um.visitdir(b'folder'), b'all')
 
     def testVisitchildrensetM2always(self):
-        m1 = matchmod.alwaysmatcher(b'', b'')
-        m2 = matchmod.alwaysmatcher(b'', b'')
+        m1 = matchmod.alwaysmatcher()
+        m2 = matchmod.alwaysmatcher()
         um = matchmod.unionmatcher([m1, m2])
         # um should be equivalent to an alwaysmatcher.
         self.assertEqual(um.visitchildrenset(b'.'), b'all')
@@ -567,8 +562,8 @@
         self.assertEqual(um.visitchildrenset(b'folder'), b'all')
 
     def testVisitdirM1never(self):
-        m1 = matchmod.nevermatcher(b'', b'')
-        m2 = matchmod.alwaysmatcher(b'', b'')
+        m1 = matchmod.nevermatcher()
+        m2 = matchmod.alwaysmatcher()
         um = matchmod.unionmatcher([m1, m2])
         # um should be equivalent to an alwaysmatcher.
         self.assertEqual(um.visitdir(b'.'), b'all')
@@ -580,8 +575,8 @@
         self.assertEqual(um.visitdir(b'folder'), b'all')
 
     def testVisitchildrensetM1never(self):
-        m1 = matchmod.nevermatcher(b'', b'')
-        m2 = matchmod.alwaysmatcher(b'', b'')
+        m1 = matchmod.nevermatcher()
+        m2 = matchmod.alwaysmatcher()
         um = matchmod.unionmatcher([m1, m2])
         # um should be equivalent to an alwaysmatcher.
         self.assertEqual(um.visitchildrenset(b'.'), b'all')
@@ -593,8 +588,8 @@
         self.assertEqual(um.visitchildrenset(b'folder'), b'all')
 
     def testVisitdirM2never(self):
-        m1 = matchmod.alwaysmatcher(b'', b'')
-        m2 = matchmod.nevermatcher(b'', b'')
+        m1 = matchmod.alwaysmatcher()
+        m2 = matchmod.nevermatcher()
         um = matchmod.unionmatcher([m1, m2])
         # um should be equivalent to an alwaysmatcher.
         self.assertEqual(um.visitdir(b'.'), b'all')
@@ -606,8 +601,8 @@
         self.assertEqual(um.visitdir(b'folder'), b'all')
 
     def testVisitchildrensetM2never(self):
-        m1 = matchmod.alwaysmatcher(b'', b'')
-        m2 = matchmod.nevermatcher(b'', b'')
+        m1 = matchmod.alwaysmatcher()
+        m2 = matchmod.nevermatcher()
         um = matchmod.unionmatcher([m1, m2])
         # um should be equivalent to an alwaysmatcher.
         self.assertEqual(um.visitchildrenset(b'.'), b'all')
@@ -619,7 +614,7 @@
         self.assertEqual(um.visitchildrenset(b'folder'), b'all')
 
     def testVisitdirM2SubdirPrefix(self):
-        m1 = matchmod.alwaysmatcher(b'', b'')
+        m1 = matchmod.alwaysmatcher()
         m2 = matchmod.match(b'', b'', patterns=[b'path:dir/subdir'])
         um = matchmod.unionmatcher([m1, m2])
         self.assertEqual(um.visitdir(b'.'), b'all')
@@ -631,7 +626,7 @@
         self.assertEqual(um.visitdir(b'dir/subdir/x'), b'all')
 
     def testVisitchildrensetM2SubdirPrefix(self):
-        m1 = matchmod.alwaysmatcher(b'', b'')
+        m1 = matchmod.alwaysmatcher()
         m2 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
         um = matchmod.unionmatcher([m1, m2])
         self.assertEqual(um.visitchildrenset(b'.'), b'all')
@@ -782,7 +777,7 @@
     def testVisitdir(self):
         m = matchmod.match(util.localpath(b'root/d'), b'e/f',
                 [b'../a.txt', b'b.txt'])
-        pm = matchmod.prefixdirmatcher(b'root', b'd/e/f', b'd', m)
+        pm = matchmod.prefixdirmatcher(b'd', m)
 
         # `m` elides 'd' because it's part of the root, and the rest of the
         # patterns are relative.
@@ -814,7 +809,7 @@
     def testVisitchildrenset(self):
         m = matchmod.match(util.localpath(b'root/d'), b'e/f',
                 [b'../a.txt', b'b.txt'])
-        pm = matchmod.prefixdirmatcher(b'root', b'd/e/f', b'd', m)
+        pm = matchmod.prefixdirmatcher(b'd', m)
 
         # OPT: visitchildrenset could possibly return {'e'} and {'f'} for these
         # next two, respectively; patternmatcher does not have this
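The assertions in this file pin down the visitdir() protocol: b'all' means every file under the directory matches (callers may stop consulting the matcher), a plain truthy value means the directory may contain matches, and a falsy value means the subtree can be skipped. A hedged sketch of how a walker would consume those values (`tree` and its layout are hypothetical, purely for illustration):

    def walk(matcher, tree, prefix=b'.'):
        # tree maps a directory path to a (subdirs, files) pair
        visit = matcher.visitdir(prefix)
        if not visit:
            return []  # prune the whole subtree
        subdirs, files = tree[prefix]
        out = [f for f in files if visit == b'all' or matcher(f)]
        for d in subdirs:
            out.extend(walk(matcher, tree, d))
        return out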
--- a/tests/test-merge10.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-merge10.t	Wed Apr 17 13:41:18 2019 -0400
@@ -37,8 +37,9 @@
   (run 'hg heads' to see heads, 'hg merge' to merge)
   $ hg up -C 2
   0 files updated, 0 files merged, 0 files removed, 0 files unresolved
-  $ hg merge
-  merging testdir/subdir/a and testdir/a to testdir/subdir/a
+Abuse this test to also verify that merge respects ui.relative-paths
+  $ hg --cwd testdir merge --config ui.relative-paths=yes
+  merging subdir/a and a to subdir/a
   0 files updated, 1 files merged, 0 files removed, 0 files unresolved
   (branch merge, don't forget to commit)
   $ hg stat
--- a/tests/test-missing-capability.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-missing-capability.t	Wed Apr 17 13:41:18 2019 -0400
@@ -15,7 +15,7 @@
   > from mercurial import extensions, wireprotov1server
   > def wcapabilities(orig, *args, **kwargs):
   >   cap = orig(*args, **kwargs)
-  >   cap.remove('$1')
+  >   cap.remove(b'$1')
   >   return cap
   > extensions.wrapfunction(wireprotov1server, '_capabilities', wcapabilities)
   > EOF
--- a/tests/test-mq-eol.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-mq-eol.t	Wed Apr 17 13:41:18 2019 -0400
@@ -23,17 +23,21 @@
   > w(b' c\r\n')
   > w(b' d\n')
   > w(b'-e\n')
-  > w(b'\ No newline at end of file\n')
+  > w(b'\\\\ No newline at end of file\n')
   > w(b'+z\r\n')
-  > w(b'\ No newline at end of file\r\n')
+  > w(b'\\\\ No newline at end of file\r\n')
   > EOF
 
   $ cat > cateol.py <<EOF
   > import sys
+  > try:
+  >     stdout = sys.stdout.buffer
+  > except AttributeError:
+  >     stdout = sys.stdout
   > for line in open(sys.argv[1], 'rb'):
   >     line = line.replace(b'\r', b'<CR>')
   >     line = line.replace(b'\n', b'<LF>')
-  >     print(line)
+  >     stdout.write(line + b'\n')
   > EOF
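The cateol.py fix matters because on Python 3 `sys.stdout` is text-mode and `print()` on bytes would emit `b'...'` reprs; raw bytes must go through `sys.stdout.buffer`, with a fallback for Python 2 where `sys.stdout` accepts bytes directly. The portable pattern in isolation:

    import sys

    # text-mode stdout rejects bytes on Python 3; use the binary buffer if present
    out = getattr(sys.stdout, 'buffer', sys.stdout)
    out.write(b'binary-safe output\n')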
 
   $ hg init repo
--- a/tests/test-mq-missingfiles.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-mq-missingfiles.t	Wed Apr 17 13:41:18 2019 -0400
@@ -5,16 +5,20 @@
 
   $ cat > writelines.py <<EOF
   > import sys
+  > if sys.version_info[0] >= 3:
+  >     encode = lambda x: x.encode('utf-8').decode('unicode_escape').encode('utf-8')
+  > else:
+  >     encode = lambda x: x.decode('string_escape')
   > path = sys.argv[1]
   > args = sys.argv[2:]
   > assert (len(args) % 2) == 0
   > 
   > f = open(path, 'wb')
   > for i in range(len(args) // 2):
-  >    count, s = args[2*i:2*i+2]
+  >    count, s = args[2 * i:2 * i + 2]
   >    count = int(count)
-  >    s = s.decode('string_escape')
-  >    f.write(s*count)
+  >    s = encode(s)
+  >    f.write(s * count)
   > f.close()
   > EOF
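The `unicode_escape` round-trip introduced above is the usual Python 3 stand-in for the removed `'string_escape'` codec; it is safe here because the test only feeds ASCII escape sequences. A standalone check of the equivalence:

    import sys

    def encode(s):
        # interpret backslash escapes in a native string, returning bytes,
        # like s.decode('string_escape') did on Python 2
        if sys.version_info[0] >= 3:
            return s.encode('utf-8').decode('unicode_escape').encode('utf-8')
        return s.decode('string_escape')

    assert encode('a\\nb') == b'a\nb'
    assert encode('x\\r\\n') == b'x\r\n'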
 
--- a/tests/test-mq-qimport.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-mq-qimport.t	Wed Apr 17 13:41:18 2019 -0400
@@ -1,15 +1,19 @@
   $ cat > writelines.py <<EOF
   > import sys
+  > if sys.version_info[0] >= 3:
+  >     encode = lambda x: x.encode('utf-8').decode('unicode_escape').encode('utf-8')
+  > else:
+  >     encode = lambda x: x.decode('string_escape')
   > path = sys.argv[1]
   > args = sys.argv[2:]
   > assert (len(args) % 2) == 0
   > 
   > f = open(path, 'wb')
-  > for i in range(len(args)//2):
-  >    count, s = args[2*i:2*i+2]
+  > for i in range(len(args) // 2):
+  >    count, s = args[2 * i:2 * i + 2]
   >    count = int(count)
-  >    s = s.decode('string_escape')
-  >    f.write(s*count)
+  >    s = encode(s)
+  >    f.write(s * count)
   > f.close()
   > 
   > EOF
--- a/tests/test-mq-qnew.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-mq-qnew.t	Wed Apr 17 13:41:18 2019 -0400
@@ -305,9 +305,9 @@
   HG: branch 'default'
   HG: no files changed
   ====
-  note: commit message saved in .hg/last-message.txt
   transaction abort!
   rollback completed
+  note: commit message saved in .hg/last-message.txt
   abort: pretxncommit.unexpectedabort hook exited with status 1
   [255]
   $ cat .hg/last-message.txt
--- a/tests/test-mq-subrepo-svn.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-mq-subrepo-svn.t	Wed Apr 17 13:41:18 2019 -0400
@@ -23,11 +23,7 @@
   $ svnadmin create svn-repo-2499
 
   $ SVNREPOPATH=`pwd`/svn-repo-2499/project
-#if windows
-  $ SVNREPOURL=file:///`"$PYTHON" -c "import urllib, sys; sys.stdout.write(urllib.quote(sys.argv[1]))" "$SVNREPOPATH"`
-#else
-  $ SVNREPOURL=file://`"$PYTHON" -c "import urllib, sys; sys.stdout.write(urllib.quote(sys.argv[1]))" "$SVNREPOPATH"`
-#endif
+  $ SVNREPOURL="`"$PYTHON" $TESTDIR/svnurlof.py \"$SVNREPOPATH\"`"
 
   $ mkdir -p svn-project-2499/trunk
   $ svn import -qm 'init project' svn-project-2499 "$SVNREPOURL"
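The replaced `#if windows` branches both boiled down to URL-quoting the repository path and choosing how many slashes follow `file:`; `$TESTDIR/svnurlof.py` (not shown in this hunk) now centralizes that. Conceptually the helper's job looks like this sketch (an illustration, not the actual script):

    import sys
    from urllib.parse import quote

    def svnurl(path):
        # POSIX paths already begin with '/', giving file:///tmp/repo;
        # Windows drive paths need the extra slash: file:///C:/repo
        path = path.replace('\\', '/')
        prefix = 'file://' if path.startswith('/') else 'file:///'
        return prefix + quote(path, safe='/:')

    print(svnurl(sys.argv[1]))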
--- a/tests/test-mq.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-mq.t	Wed Apr 17 13:41:18 2019 -0400
@@ -305,6 +305,7 @@
 working dir diff:
 
   $ hg diff --nodates -q
+  diff -r dde259bd5934 a
   --- a/a
   +++ b/a
   @@ -1,1 +1,2 @@
@@ -1406,7 +1407,7 @@
   $ hg qpush -f --verbose --config 'ui.origbackuppath=.hg/origbackups'
   applying empty
   creating directory: $TESTTMP/forcepush/.hg/origbackups
-  saving current version of hello.txt as $TESTTMP/forcepush/.hg/origbackups/hello.txt
+  saving current version of hello.txt as .hg/origbackups/hello.txt
   patching file hello.txt
   committing files:
   hello.txt
--- a/tests/test-narrow-trackedcmd.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-narrow-trackedcmd.t	Wed Apr 17 13:41:18 2019 -0400
@@ -218,3 +218,13 @@
   adding file changes
   added 3 changesets with 0 changes to 0 files
   new changesets *:* (glob)
+
+  $ cd ..
+
+Testing the tracked command on a non-narrow repo
+
+  $ hg init non-narrow
+  $ cd non-narrow
+  $ hg tracked --addinclude foobar
+  abort: the tracked command is only supported on repositories cloned with --narrow
+  [255]
--- a/tests/test-narrow-widen-no-ellipsis.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-narrow-widen-no-ellipsis.t	Wed Apr 17 13:41:18 2019 -0400
@@ -406,7 +406,7 @@
    * bookmark                  11:* (glob)
   $ hg unbundle .hg/strip-backup/*-widen.hg
   abort: .hg/strip-backup/*-widen.hg: $ENOTDIR$ (windows !)
-  abort: $ENOENT$: .hg/strip-backup/*-widen.hg (no-windows !)
+  abort: $ENOENT$: '.hg/strip-backup/*-widen.hg' (no-windows !)
   [255]
   $ hg log -T "{if(ellipsis, '...')}{rev}: {desc}\n"
   11: local
--- a/tests/test-newcgi.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-newcgi.t	Wed Apr 17 13:41:18 2019 -0400
@@ -18,7 +18,7 @@
   > from mercurial.hgweb.request import wsgiapplication
   > 
   > def make_web_app():
-  >     return hgweb("test", "Empty test repository")
+  >     return hgweb(b"test", b"Empty test repository")
   > 
   > wsgicgi.launch(wsgiapplication(make_web_app))
   > HGWEB
@@ -44,7 +44,7 @@
   > from mercurial.hgweb.request import wsgiapplication
   > 
   > def make_web_app():
-  >     return hgwebdir("hgweb.config")
+  >     return hgwebdir(b"hgweb.config")
   > 
   > wsgicgi.launch(wsgiapplication(make_web_app))
   > HGWEBDIR
--- a/tests/test-notify.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-notify.t	Wed Apr 17 13:41:18 2019 -0400
@@ -455,7 +455,7 @@
   > test = False
   > mbox = mbox
   > EOF
-  $ "$PYTHON" -c 'open("a/a", "ab").write("no" * 500 + "\xd1\x84" + "\n")'
+  $ "$PYTHON" -c 'open("a/a", "ab").write(b"no" * 500 + b"\xd1\x84" + b"\n")'
   $ hg --cwd a commit -A -m "long line"
   $ hg --traceback --cwd b pull ../a
   pulling from ../a
--- a/tests/test-obsmarker-template.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-obsmarker-template.t	Wed Apr 17 13:41:18 2019 -0400
@@ -2429,6 +2429,23 @@
      date:        Thu Jan 01 00:00:00 1970 +0000
      summary:     ROOT
   
+Check that {negrev} shows usable negative revisions despite hidden commits
+
+  $ hg log -G -T "{negrev}\n"
+  @  -3
+  |
+  o  -4
+  
+
+  $ hg log -G -T "{negrev}\n" --hidden
+  x  -1
+  |
+  | x  -2
+  |/
+  | @  -3
+  |/
+  o  -4
+  
 
 Test templates with splitted and pruned commit
 ==============================================
@@ -2639,3 +2656,10 @@
   |/     Obsfate: rewritten using amend as 2:718c0d00cee1 by test (at 1970-01-01 00:00 +0000);
   o  ea207398892e
   
+  $ hg log -G -T "{negrev}\n"
+  @  -1
+  |
+  o  -2
+  |
+  o  -5
+  
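Both graphs are consistent with {negrev} being computed as the revision number minus the total number of revisions, hidden ones included, which is why the values do not shift when --hidden is passed. A one-line check of that reading (inferred from the output above, not quoted from the implementation):

    def negrev(rev, repolen):
        # in a 4-revision repo, rev 3 renders as -1 and rev 0 as -4
        return rev - repolen

    assert [negrev(r, 4) for r in (3, 2, 1, 0)] == [-1, -2, -3, -4]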
--- a/tests/test-obsolete-distributed.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-obsolete-distributed.t	Wed Apr 17 13:41:18 2019 -0400
@@ -488,6 +488,37 @@
   d33b0a3a64647d79583526be8107802b1f9fedfa 5b5708a437f27665db42c5a261a539a1bcb2a8c2 0 (Thu Jan 01 00:00:00 1970 +0000) {'ef1': '1', 'operation': 'amend', 'user': 'bob'}
   ef908e42ce65ef57f970d799acaddde26f58a4cc 5ffb9e311b35f6ab6f76f667ca5d6e595645481b 0 (Thu Jan 01 00:00:00 1970 +0000) {'ef1': '4', 'operation': 'rebase', 'user': 'bob'}
 
+
+Same tests, but with --rev; this prevents regressing the case where `hg pull --rev
+X` has to process an X that is filtered locally.
+
+  $ hg rollback
+  repository tip rolled back to revision 4 (undo unbundle)
+  $ hg pull ../repo-Bob --rev 956063ac4557
+  pulling from ../repo-Bob
+  searching for changes
+  adding changesets
+  adding manifests
+  adding file changes
+  added 2 changesets with 0 changes to 2 files (+1 heads)
+  (2 other changesets obsolete on arrival)
+  (run 'hg heads' to see heads)
+
+With --update
+
+  $ hg rollback
+  repository tip rolled back to revision 4 (undo pull)
+  $ hg pull ../repo-Bob --rev 956063ac4557 --update
+  pulling from ../repo-Bob
+  searching for changes
+  adding changesets
+  adding manifests
+  adding file changes
+  added 2 changesets with 0 changes to 2 files (+1 heads)
+  (2 other changesets obsolete on arrival)
+  abort: cannot update to target: filtered revision '6'!
+  [255]
+
   $ cd ..
 
 Test pull report consistency
--- a/tests/test-oldcgi.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-oldcgi.t	Wed Apr 17 13:41:18 2019 -0400
@@ -55,7 +55,7 @@
   > # Alternatively you can pass a list of ('virtual/path', '/real/path') tuples
   > # or use a dictionary with entries like 'virtual/path': '/real/path'
   > 
-  > h = hgweb.hgwebdir("hgweb.config")
+  > h = hgweb.hgwebdir(b"hgweb.config")
   > h.run()
   > HGWEBDIR
 
--- a/tests/test-parseindex.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-parseindex.t	Wed Apr 17 13:41:18 2019 -0400
@@ -27,7 +27,7 @@
   
   $ cat >> test.py << EOF
   > from __future__ import print_function
-  > from mercurial import changelog, node, vfs
+  > from mercurial import changelog, node, pycompat, vfs
   > 
   > class singlebyteread(object):
   >     def __init__(self, real):
@@ -55,10 +55,10 @@
   >         return singlebyteread(f)
   >     return wrapper
   > 
-  > cl = changelog.changelog(opener('.hg/store'))
+  > cl = changelog.changelog(opener(b'.hg/store'))
   > print(len(cl), 'revisions:')
   > for r in cl:
-  >     print(node.short(cl.node(r)))
+  >     print(pycompat.sysstr(node.short(cl.node(r))))
   > EOF
   $ "$PYTHON" test.py
   2 revisions:
@@ -76,7 +76,7 @@
   $ "$PYTHON" <<EOF
   > from __future__ import print_function
   > from mercurial import changelog, vfs
-  > cl = changelog.changelog(vfs.vfs('.hg/store'))
+  > cl = changelog.changelog(vfs.vfs(b'.hg/store'))
   > print('good heads:')
   > for head in [0, len(cl) - 1, -1]:
   >     print('%s: %r' % (head, cl.reachableroots(0, [head], [0])))
@@ -112,7 +112,7 @@
   10000: head out of range
   -2: head out of range
   -10000: head out of range
-  None: an integer is required
+  None: an integer is required( .got type NoneType.)? (re)
   good roots:
   0: [0]
   1: [1]
@@ -123,7 +123,7 @@
   -2: []
   -10000: []
   bad roots:
-  None: an integer is required
+  None: an integer is required( .got type NoneType.)? (re)
 
   $ cd ..
 
@@ -178,8 +178,8 @@
   $ cat <<EOF > test.py
   > from __future__ import print_function
   > import sys
-  > from mercurial import changelog, vfs
-  > cl = changelog.changelog(vfs.vfs(sys.argv[1]))
+  > from mercurial import changelog, pycompat, vfs
+  > cl = changelog.changelog(vfs.vfs(pycompat.fsencode(sys.argv[1])))
   > n0, n1 = cl.node(0), cl.node(1)
   > ops = [
   >     ('reachableroots',
--- a/tests/test-patch-offset.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-patch-offset.t	Wed Apr 17 13:41:18 2019 -0400
@@ -9,7 +9,7 @@
   > for pattern in patterns:
   >     count = int(pattern[0:-1])
   >     char = pattern[-1].encode('utf8') + b'\n'
-  >     fp.write(char*count)
+  >     fp.write(char * count)
   > fp.close()
   > EOF
 
--- a/tests/test-permissions.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-permissions.t	Wed Apr 17 13:41:18 2019 -0400
@@ -22,7 +22,7 @@
   checking manifests
   crosschecking files in changesets and manifests
   checking files
-  abort: Permission denied: $TESTTMP/t/.hg/store/data/a.i
+  abort: Permission denied: '$TESTTMP/t/.hg/store/data/a.i'
   [255]
 
   $ chmod +r .hg/store/data/a.i
@@ -39,7 +39,7 @@
   $ echo barber > a
   $ hg commit -m "2"
   trouble committing a!
-  abort: Permission denied: $TESTTMP/t/.hg/store/data/a.i
+  abort: Permission denied: '$TESTTMP/t/.hg/store/data/a.i'
   [255]
 
   $ chmod -w .
--- a/tests/test-phabricator.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-phabricator.t	Wed Apr 17 13:41:18 2019 -0400
@@ -48,22 +48,24 @@
   >  --test-vcr "$VCR/accept-4564.json"
 
 Create a differential diff:
+  $ HGENCODING=utf-8; export HGENCODING
   $ echo alpha > alpha
-  $ hg ci --addremove -m 'create alpha for phabricator test'
+  $ hg ci --addremove -m 'create alpha for phabricator test €'
   adding alpha
   $ hg phabsend -r . --test-vcr "$VCR/phabsend-create-alpha.json"
-  D4596 - created - 5206a4fa1e6c: create alpha for phabricator test
-  saved backup bundle to $TESTTMP/repo/.hg/strip-backup/5206a4fa1e6c-dec9e777-phabsend.hg
+  D6054 - created - d386117f30e6: create alpha for phabricator test \xe2\x82\xac (esc)
+  saved backup bundle to $TESTTMP/repo/.hg/strip-backup/d386117f30e6-24ffe649-phabsend.hg
   $ echo more >> alpha
   $ HGEDITOR=true hg ci --amend
-  saved backup bundle to $TESTTMP/repo/.hg/strip-backup/d8f232f7d799-c573510a-amend.hg
+  saved backup bundle to $TESTTMP/repo/.hg/strip-backup/cb03845d6dd9-870f61a6-amend.hg
   $ echo beta > beta
   $ hg ci --addremove -m 'create beta for phabricator test'
   adding beta
   $ hg phabsend -r ".^::" --test-vcr "$VCR/phabsend-update-alpha-create-beta.json"
-  D4596 - updated - f70265671c65: create alpha for phabricator test
-  D4597 - created - 1a5640df7bbf: create beta for phabricator test
-  saved backup bundle to $TESTTMP/repo/.hg/strip-backup/1a5640df7bbf-6daf3e6e-phabsend.hg
+  D6054 - updated - 939d862f0318: create alpha for phabricator test \xe2\x82\xac (esc)
+  D6055 - created - f55f947ed0f8: create beta for phabricator test
+  saved backup bundle to $TESTTMP/repo/.hg/strip-backup/f55f947ed0f8-0d1e502e-phabsend.hg
+  $ unset HGENCODING
 
 The amend won't explode after posting a public commit.  The local tag is left
 behind to identify it.
@@ -74,13 +76,13 @@
   $ echo 'draft change' > alpha
   $ hg ci -m 'create draft change for phabricator testing'
   $ hg phabsend --amend -r '.^::' --test-vcr "$VCR/phabsend-create-public.json"
-  D5544 - created - 540a21d3fbeb: create public change for phabricator testing
-  D5545 - created - 6bca752686cd: create draft change for phabricator testing
-  warning: not updating public commit 2:540a21d3fbeb
-  saved backup bundle to $TESTTMP/repo/.hg/strip-backup/6bca752686cd-41faefb4-phabsend.hg
+  D5544 - created - a56e5ebd77e6: create public change for phabricator testing
+  D5545 - created - 6a0ade3e3ec2: create draft change for phabricator testing
+  warning: not updating public commit 2:a56e5ebd77e6
+  saved backup bundle to $TESTTMP/repo/.hg/strip-backup/6a0ade3e3ec2-aca7d23c-phabsend.hg
   $ hg tags -v
-  tip                                3:620a50fd6ed9
-  D5544                              2:540a21d3fbeb local
+  tip                                3:90532860b5e1
+  D5544                              2:a56e5ebd77e6 local
 
   $ hg debugcallconduit user.search --test-vcr "$VCR/phab-conduit.json" <<EOF
   > {
@@ -107,13 +109,13 @@
   $ hg log -T'{rev} {phabreview|json}\n'
   3 {"id": "D5545", "url": "https://phab.mercurial-scm.org/D5545"}
   2 {"id": "D5544", "url": "https://phab.mercurial-scm.org/D5544"}
-  1 {"id": "D4597", "url": "https://phab.mercurial-scm.org/D4597"}
-  0 {"id": "D4596", "url": "https://phab.mercurial-scm.org/D4596"}
+  1 {"id": "D6055", "url": "https://phab.mercurial-scm.org/D6055"}
+  0 {"id": "D6054", "url": "https://phab.mercurial-scm.org/D6054"}
 
   $ hg log -T'{rev} {if(phabreview, "{phabreview.url} {phabreview.id}")}\n'
   3 https://phab.mercurial-scm.org/D5545 D5545
   2 https://phab.mercurial-scm.org/D5544 D5544
-  1 https://phab.mercurial-scm.org/D4597 D4597
-  0 https://phab.mercurial-scm.org/D4596 D4596
+  1 https://phab.mercurial-scm.org/D6055 D6055
+  0 https://phab.mercurial-scm.org/D6054 D6054
 
   $ cd ..
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/tests/test-phase-archived.t	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,143 @@
+=========================================================
+Test features and behaviors related to the archived phase
+=========================================================
+
+  $ cat << EOF >> $HGRCPATH
+  > [format]
+  > internal-phase=yes
+  > [extensions]
+  > strip=
+  > [experimental]
+  > EOF
+
+  $ hg init repo
+  $ cd repo
+  $ echo  root > a
+  $ hg add a
+  $ hg ci -m 'root'
+
+Test that unbundling can unarchive a changeset
+------------------------------------------
+
+  $ echo foo >> a
+  $ hg st
+  M a
+  $ hg ci -m 'unbundletesting'
+  $ hg log -G
+  @  changeset:   1:883aadbbf309
+  |  tag:         tip
+  |  user:        test
+  |  date:        Thu Jan 01 00:00:00 1970 +0000
+  |  summary:     unbundletesting
+  |
+  o  changeset:   0:c1863a3840c6
+     user:        test
+     date:        Thu Jan 01 00:00:00 1970 +0000
+     summary:     root
+  
+  $ hg strip --soft --rev '.'
+  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
+  saved backup bundle to $TESTTMP/repo/.hg/strip-backup/883aadbbf309-efc55adc-backup.hg
+  $ hg log -G
+  @  changeset:   0:c1863a3840c6
+     tag:         tip
+     user:        test
+     date:        Thu Jan 01 00:00:00 1970 +0000
+     summary:     root
+  
+  $ hg log -G --hidden
+  o  changeset:   1:883aadbbf309
+  |  tag:         tip
+  |  user:        test
+  |  date:        Thu Jan 01 00:00:00 1970 +0000
+  |  summary:     unbundletesting
+  |
+  @  changeset:   0:c1863a3840c6
+     user:        test
+     date:        Thu Jan 01 00:00:00 1970 +0000
+     summary:     root
+  
+  $ hg unbundle .hg/strip-backup/883aadbbf309-efc55adc-backup.hg
+  adding changesets
+  adding manifests
+  adding file changes
+  added 0 changesets with 0 changes to 1 files
+  (run 'hg update' to get a working copy)
+  $ hg log -G
+  o  changeset:   1:883aadbbf309
+  |  tag:         tip
+  |  user:        test
+  |  date:        Thu Jan 01 00:00:00 1970 +0000
+  |  summary:     unbundletesting
+  |
+  @  changeset:   0:c1863a3840c6
+     user:        test
+     date:        Thu Jan 01 00:00:00 1970 +0000
+     summary:     root
+  
+
+Test that history rewriting commands can use the archived phase when allowed to
+------------------------------------------------------------------------------
+
+  $ hg up 'desc(unbundletesting)'
+  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
+  $ echo bar >> a
+  $ hg commit --amend --config experimental.cleanup-as-archived=yes
+  $ hg log -G
+  @  changeset:   2:d1e73e428f29
+  |  tag:         tip
+  |  parent:      0:c1863a3840c6
+  |  user:        test
+  |  date:        Thu Jan 01 00:00:00 1970 +0000
+  |  summary:     unbundletesting
+  |
+  o  changeset:   0:c1863a3840c6
+     user:        test
+     date:        Thu Jan 01 00:00:00 1970 +0000
+     summary:     root
+  
+  $ hg log -G --hidden
+  @  changeset:   2:d1e73e428f29
+  |  tag:         tip
+  |  parent:      0:c1863a3840c6
+  |  user:        test
+  |  date:        Thu Jan 01 00:00:00 1970 +0000
+  |  summary:     unbundletesting
+  |
+  | o  changeset:   1:883aadbbf309
+  |/   user:        test
+  |    date:        Thu Jan 01 00:00:00 1970 +0000
+  |    summary:     unbundletesting
+  |
+  o  changeset:   0:c1863a3840c6
+     user:        test
+     date:        Thu Jan 01 00:00:00 1970 +0000
+     summary:     root
+  
+  $ ls -1 .hg/strip-backup/
+  883aadbbf309-efc55adc-amend.hg
+  883aadbbf309-efc55adc-backup.hg
+  $ hg unbundle .hg/strip-backup/883aadbbf309*amend.hg
+  adding changesets
+  adding manifests
+  adding file changes
+  added 0 changesets with 0 changes to 1 files
+  (run 'hg update' to get a working copy)
+  $ hg log -G
+  @  changeset:   2:d1e73e428f29
+  |  tag:         tip
+  |  parent:      0:c1863a3840c6
+  |  user:        test
+  |  date:        Thu Jan 01 00:00:00 1970 +0000
+  |  summary:     unbundletesting
+  |
+  | o  changeset:   1:883aadbbf309
+  |/   user:        test
+  |    date:        Thu Jan 01 00:00:00 1970 +0000
+  |    summary:     unbundletesting
+  |
+  o  changeset:   0:c1863a3840c6
+     user:        test
+     date:        Thu Jan 01 00:00:00 1970 +0000
+     summary:     root
+  
--- a/tests/test-pull-bundle.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-pull-bundle.t	Wed Apr 17 13:41:18 2019 -0400
@@ -120,6 +120,38 @@
   * sending pullbundle "1.hg" (glob)
   $ rm repo/.hg/blackbox.log
 
+Test pullbundle functionality for incoming
+
+  $ cd repo
+  $ hg --config blackbox.track=debug --debug serve -p $HGPORT2 -d --pid-file=../repo.pid
+  listening at http://*:$HGPORT2/ (bound to $LOCALIP:$HGPORT2) (glob) (?)
+  $ cat ../repo.pid >> $DAEMON_PIDS
+  $ cd ..
+  $ hg clone http://localhost:$HGPORT2/ repo.pullbundle2a -r 0
+  adding changesets
+  adding manifests
+  adding file changes
+  added 1 changesets with 1 changes to 1 files
+  new changesets bbd179dfa0a7 (1 drafts)
+  updating to branch default
+  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
+  $ cd repo.pullbundle2a
+  $ hg incoming -r ed1b79f46b9a
+  comparing with http://localhost:$HGPORT2/
+  searching for changes
+  changeset:   1:ed1b79f46b9a
+  tag:         tip
+  user:        test
+  date:        Thu Jan 01 00:00:00 1970 +0000
+  summary:     change foo
+  
+  $ cd ..
+  $ killdaemons.py
+  $ grep 'sending pullbundle ' repo/.hg/blackbox.log
+  * sending pullbundle "0.hg" (glob)
+  * sending pullbundle "1.hg" (glob)
+  $ rm repo/.hg/blackbox.log
+
 Test recovery from misconfigured server sending no new data
 
   $ cd repo
--- a/tests/test-pull.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-pull.t	Wed Apr 17 13:41:18 2019 -0400
@@ -75,6 +75,12 @@
   abort: unknown revision 'xxxxxxxxxxxxxxxxxx y'!
   [255]
 
+Test pull of working copy revision
+  $ hg pull -r 'ffffffffffff'
+  pulling from http://foo@localhost:$HGPORT/
+  abort: unknown revision 'ffffffffffff'!
+  [255]
+
 Issue622: hg init && hg pull -u URL doesn't checkout default branch
 
   $ cd ..
--- a/tests/test-purge.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-purge.t	Wed Apr 17 13:41:18 2019 -0400
@@ -52,7 +52,7 @@
   $ "$PYTHON" <<EOF
   > import os
   > import stat
-  > f= 'untracked_file_readonly'
+  > f = 'untracked_file_readonly'
   > os.chmod(f, stat.S_IMODE(os.stat(f).st_mode) & ~stat.S_IWRITE)
   > EOF
   $ hg purge -p
--- a/tests/test-push-http.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-push-http.t	Wed Apr 17 13:41:18 2019 -0400
@@ -74,8 +74,8 @@
   $ cat >> .hg/hgrc <<EOF
   > allow_push = *
   > [hooks]
-  > changegroup = sh -c "printenv.py changegroup 0"
-  > pushkey = sh -c "printenv.py pushkey 0"
+  > changegroup = sh -c "printenv.py --line changegroup 0"
+  > pushkey = sh -c "printenv.py --line pushkey 0"
   > txnclose-phase.test = sh $TESTTMP/hook.sh 
   > EOF
   $ req "--debug --config extensions.blackbox="
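Every hunk below makes the same change: passing `--line` to the test suite's printenv.py helper so each `HG_*` variable is echoed on its own line instead of one long space-joined line, keeping the expected output readable and its globs stable. A sketch of the flag's effect (an illustration, not the actual helper):

    import os

    def printenv(name, line=False):
        # emit '<name> hook: ' plus the HG_* environment, space-joined by
        # default or one variable per line when line=True
        pairs = sorted(kv for kv in os.environ.items() if kv[0].startswith('HG_'))
        sep = '\n' if line else ' '
        print('%s hook: %s' % (name, sep.join('%s=%s' % kv for kv in pairs)))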
@@ -94,8 +94,17 @@
   remote: phase-move: cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b:  draft -> public
   remote: running hook txnclose-phase.test: sh $TESTTMP/hook.sh
   remote: phase-move: ba677d0156c1196c1a699fa53f390dcfc3ce3872:   -> public
-  remote: running hook changegroup: sh -c "printenv.py changegroup 0"
-  remote: changegroup hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=ba677d0156c1196c1a699fa53f390dcfc3ce3872 HG_NODE_LAST=ba677d0156c1196c1a699fa53f390dcfc3ce3872 HG_SOURCE=serve HG_TXNID=TXN:$ID$ HG_URL=remote:http:$LOCALIP: (glob)
+  remote: running hook changegroup: sh -c "printenv.py --line changegroup 0"
+  remote: changegroup hook: HG_HOOKNAME=changegroup
+  remote: HG_HOOKTYPE=changegroup
+  remote: HG_NODE=ba677d0156c1196c1a699fa53f390dcfc3ce3872
+  remote: HG_NODE_LAST=ba677d0156c1196c1a699fa53f390dcfc3ce3872
+  remote: HG_SOURCE=serve
+  remote: HG_TXNID=TXN:$ID$
+  remote: HG_TXNNAME=serve
+  remote: remote:http:$LOCALIP: (glob)
+  remote: HG_URL=remote:http:$LOCALIP: (glob)
+  remote: 
   % serve errors
   $ hg rollback
   repository tip rolled back to revision 0 (undo serve)
@@ -114,8 +123,17 @@
   remote: phase-move: cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b:  draft -> public
   remote: running hook txnclose-phase.test: sh $TESTTMP/hook.sh
   remote: phase-move: ba677d0156c1196c1a699fa53f390dcfc3ce3872:   -> public
-  remote: running hook changegroup: sh -c "printenv.py changegroup 0"
-  remote: changegroup hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=ba677d0156c1196c1a699fa53f390dcfc3ce3872 HG_NODE_LAST=ba677d0156c1196c1a699fa53f390dcfc3ce3872 HG_SOURCE=serve HG_TXNID=TXN:$ID$ HG_URL=remote:http:$LOCALIP: (glob)
+  remote: running hook changegroup: sh -c "printenv.py --line changegroup 0"
+  remote: changegroup hook: HG_HOOKNAME=changegroup
+  remote: HG_HOOKTYPE=changegroup
+  remote: HG_NODE=ba677d0156c1196c1a699fa53f390dcfc3ce3872
+  remote: HG_NODE_LAST=ba677d0156c1196c1a699fa53f390dcfc3ce3872
+  remote: HG_SOURCE=serve
+  remote: HG_TXNID=TXN:$ID$
+  remote: HG_TXNNAME=serve
+  remote: remote:http:$LOCALIP: (glob)
+  remote: HG_URL=remote:http:$LOCALIP: (glob)
+  remote: 
   % serve errors
   $ hg rollback
   repository tip rolled back to revision 0 (undo serve)
@@ -125,8 +143,8 @@
   $ cat >> .hg/hgrc <<EOF
   > allow_push = *
   > [hooks]
-  > changegroup = sh -c "printenv.py changegroup 0"
-  > pushkey = sh -c "printenv.py pushkey 0"
+  > changegroup = sh -c "printenv.py --line changegroup 0"
+  > pushkey = sh -c "printenv.py --line pushkey 0"
   > txnclose-phase.test = sh $TESTTMP/hook.sh 
   > EOF
   $ req
@@ -138,7 +156,16 @@
   remote: added 1 changesets with 1 changes to 1 files
   remote: phase-move: cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b:  draft -> public
   remote: phase-move: ba677d0156c1196c1a699fa53f390dcfc3ce3872:   -> public
-  remote: changegroup hook: HG_BUNDLE2=1 HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=ba677d0156c1196c1a699fa53f390dcfc3ce3872 HG_NODE_LAST=ba677d0156c1196c1a699fa53f390dcfc3ce3872 HG_SOURCE=serve HG_TXNID=TXN:$ID$ HG_URL=remote:http:$LOCALIP: (glob)
+  remote: changegroup hook: HG_BUNDLE2=1
+  remote: HG_HOOKNAME=changegroup
+  remote: HG_HOOKTYPE=changegroup
+  remote: HG_NODE=ba677d0156c1196c1a699fa53f390dcfc3ce3872
+  remote: HG_NODE_LAST=ba677d0156c1196c1a699fa53f390dcfc3ce3872
+  remote: HG_SOURCE=serve
+  remote: HG_TXNID=TXN:$ID$
+  remote: HG_TXNNAME=serve
+  remote: HG_URL=remote:http:$LOCALIP: (glob)
+  remote: 
   % serve errors
   $ hg rollback
   repository tip rolled back to revision 0 (undo serve)
@@ -157,8 +184,18 @@
   remote: added 1 changesets with 1 changes to 1 files
   remote: phase-move: cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b:  draft -> public
   remote: phase-move: ba677d0156c1196c1a699fa53f390dcfc3ce3872:   -> public
-  remote: changegroup hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=ba677d0156c1196c1a699fa53f390dcfc3ce3872 HG_NODE_LAST=ba677d0156c1196c1a699fa53f390dcfc3ce3872 HG_SOURCE=serve HG_TXNID=TXN:$ID$ HG_URL=remote:http:$LOCALIP: (glob) (bundle1 !)
-  remote: changegroup hook: HG_BUNDLE2=1 HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=ba677d0156c1196c1a699fa53f390dcfc3ce3872 HG_NODE_LAST=ba677d0156c1196c1a699fa53f390dcfc3ce3872 HG_SOURCE=serve HG_TXNID=TXN:$ID$ HG_URL=remote:http:$LOCALIP: (glob) (bundle2 !)
+  remote: changegroup hook: HG_HOOKNAME=changegroup (no-bundle2 !)
+  remote: changegroup hook: HG_BUNDLE2=1 (bundle2 !)
+  remote: HG_HOOKNAME=changegroup (bundle2 !)
+  remote: HG_HOOKTYPE=changegroup
+  remote: HG_NODE=ba677d0156c1196c1a699fa53f390dcfc3ce3872
+  remote: HG_NODE_LAST=ba677d0156c1196c1a699fa53f390dcfc3ce3872
+  remote: HG_SOURCE=serve
+  remote: HG_TXNID=TXN:$ID$
+  remote: HG_TXNNAME=serve
+  remote: remote:http:$LOCALIP: (glob) (no-bundle2 !)
+  remote: HG_URL=remote:http:$LOCALIP: (glob)
+  remote: 
   % serve errors
   $ hg rollback
   repository tip rolled back to revision 0 (undo serve)
@@ -176,8 +213,18 @@
   remote: added 1 changesets with 1 changes to 1 files
   remote: phase-move: cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b:  draft -> public
   remote: phase-move: ba677d0156c1196c1a699fa53f390dcfc3ce3872:   -> public
-  remote: changegroup hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=ba677d0156c1196c1a699fa53f390dcfc3ce3872 HG_NODE_LAST=ba677d0156c1196c1a699fa53f390dcfc3ce3872 HG_SOURCE=serve HG_TXNID=TXN:$ID$ HG_URL=remote:http:$LOCALIP: (glob) (bundle1 !)
-  remote: changegroup hook: HG_BUNDLE2=1 HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=ba677d0156c1196c1a699fa53f390dcfc3ce3872 HG_NODE_LAST=ba677d0156c1196c1a699fa53f390dcfc3ce3872 HG_SOURCE=serve HG_TXNID=TXN:$ID$ HG_URL=remote:http:$LOCALIP: (glob) (bundle2 !)
+  remote: changegroup hook: HG_HOOKNAME=changegroup (no-bundle2 !)
+  remote: changegroup hook: HG_BUNDLE2=1 (bundle2 !)
+  remote: HG_HOOKNAME=changegroup (bundle2 !)
+  remote: HG_HOOKTYPE=changegroup
+  remote: HG_NODE=ba677d0156c1196c1a699fa53f390dcfc3ce3872
+  remote: HG_NODE_LAST=ba677d0156c1196c1a699fa53f390dcfc3ce3872
+  remote: HG_SOURCE=serve
+  remote: HG_TXNID=TXN:$ID$
+  remote: HG_TXNNAME=serve
+  remote: remote:http:$LOCALIP: (glob) (no-bundle2 !)
+  remote: HG_URL=remote:http:$LOCALIP: (glob)
+  remote: 
   % serve errors
   $ hg rollback
   repository tip rolled back to revision 0 (undo serve)
@@ -209,6 +256,16 @@
   remote: phase-move: cb9a9f314b8b07ba71012fcdbc544b5a4d82ff5b:  draft -> public
   remote: phase-move: ba677d0156c1196c1a699fa53f390dcfc3ce3872:   -> public
   remote: changegroup hook: * (glob)
+  remote: HG_HOOKNAME=changegroup (bundle2 !)
+  remote: HG_HOOKTYPE=changegroup
+  remote: HG_NODE=ba677d0156c1196c1a699fa53f390dcfc3ce3872
+  remote: HG_NODE_LAST=ba677d0156c1196c1a699fa53f390dcfc3ce3872
+  remote: HG_SOURCE=serve
+  remote: HG_TXNID=TXN:$ID$
+  remote: HG_TXNNAME=serve
+  remote: remote:http:$LOCALIP: (glob) (no-bundle2 !)
+  remote: HG_URL=remote:http:$LOCALIP: (glob)
+  remote: 
   % serve errors
   $ hg rollback
   repository tip rolled back to revision 0 (undo serve)
@@ -221,7 +278,7 @@
   > push_ssl = false
   > allow_push = *
   > [hooks]
-  > prepushkey = sh -c "printenv.py prepushkey 1"
+  > prepushkey = sh -c "printenv.py --line prepushkey 1"
   > [devel]
   > legacy.exchange=phases
   > EOF
@@ -253,7 +310,22 @@
   remote: adding manifests
   remote: adding file changes
   remote: added 1 changesets with 1 changes to 1 files
-  remote: prepushkey hook: HG_BUNDLE2=1 HG_HOOKNAME=prepushkey HG_HOOKTYPE=prepushkey HG_KEY=ba677d0156c1196c1a699fa53f390dcfc3ce3872 HG_NAMESPACE=phases HG_NEW=0 HG_NODE=ba677d0156c1196c1a699fa53f390dcfc3ce3872 HG_NODE_LAST=ba677d0156c1196c1a699fa53f390dcfc3ce3872 HG_OLD=1 HG_PENDING=$TESTTMP/test HG_PHASES_MOVED=1 HG_SOURCE=serve HG_TXNID=TXN:$ID$ HG_URL=remote:http:$LOCALIP: (glob)
+  remote: prepushkey hook: HG_BUNDLE2=1
+  remote: HG_HOOKNAME=prepushkey
+  remote: HG_HOOKTYPE=prepushkey
+  remote: HG_KEY=ba677d0156c1196c1a699fa53f390dcfc3ce3872
+  remote: HG_NAMESPACE=phases
+  remote: HG_NEW=0
+  remote: HG_NODE=ba677d0156c1196c1a699fa53f390dcfc3ce3872
+  remote: HG_NODE_LAST=ba677d0156c1196c1a699fa53f390dcfc3ce3872
+  remote: HG_OLD=1
+  remote: HG_PENDING=$TESTTMP/test
+  remote: HG_PHASES_MOVED=1
+  remote: HG_SOURCE=serve
+  remote: HG_TXNID=TXN:$ID$
+  remote: HG_TXNNAME=serve
+  remote: HG_URL=remote:http:$LOCALIP: (glob)
+  remote: 
   remote: pushkey-abort: prepushkey hook exited with status 1
   remote: transaction abort!
   remote: rollback completed
@@ -267,7 +339,7 @@
 
   $ cat >> .hg/hgrc <<EOF
   > [hooks]
-  > prepushkey = sh -c "printenv.py prepushkey 0"
+  > prepushkey = sh -c "printenv.py --line prepushkey 0"
   > EOF
 
 We don't need to test bundle1 because it succeeded above.
@@ -280,7 +352,22 @@
   remote: adding manifests
   remote: adding file changes
   remote: added 1 changesets with 1 changes to 1 files
-  remote: prepushkey hook: HG_BUNDLE2=1 HG_HOOKNAME=prepushkey HG_HOOKTYPE=prepushkey HG_KEY=ba677d0156c1196c1a699fa53f390dcfc3ce3872 HG_NAMESPACE=phases HG_NEW=0 HG_NODE=ba677d0156c1196c1a699fa53f390dcfc3ce3872 HG_NODE_LAST=ba677d0156c1196c1a699fa53f390dcfc3ce3872 HG_OLD=1 HG_PENDING=$TESTTMP/test HG_PHASES_MOVED=1 HG_SOURCE=serve HG_TXNID=TXN:$ID$ HG_URL=remote:http:$LOCALIP: (glob)
+  remote: prepushkey hook: HG_BUNDLE2=1
+  remote: HG_HOOKNAME=prepushkey
+  remote: HG_HOOKTYPE=prepushkey
+  remote: HG_KEY=ba677d0156c1196c1a699fa53f390dcfc3ce3872
+  remote: HG_NAMESPACE=phases
+  remote: HG_NEW=0
+  remote: HG_NODE=ba677d0156c1196c1a699fa53f390dcfc3ce3872
+  remote: HG_NODE_LAST=ba677d0156c1196c1a699fa53f390dcfc3ce3872
+  remote: HG_OLD=1
+  remote: HG_PENDING=$TESTTMP/test
+  remote: HG_PHASES_MOVED=1
+  remote: HG_SOURCE=serve
+  remote: HG_TXNID=TXN:$ID$
+  remote: HG_TXNNAME=serve
+  remote: HG_URL=remote:http:$LOCALIP: (glob)
+  remote: 
   % serve errors
 #endif
 
@@ -293,7 +380,7 @@
   > [phases]
   > publish = false
   > [hooks]
-  > prepushkey = sh -c "printenv.py prepushkey 1"
+  > prepushkey = sh -c "printenv.py --line prepushkey 1"
   > EOF
 
 #if bundle1
@@ -304,7 +391,13 @@
   remote: adding manifests
   remote: adding file changes
   remote: added 1 changesets with 1 changes to 1 files
-  remote: prepushkey hook: HG_HOOKNAME=prepushkey HG_HOOKTYPE=prepushkey HG_KEY=ba677d0156c1196c1a699fa53f390dcfc3ce3872 HG_NAMESPACE=phases HG_NEW=0 HG_OLD=1
+  remote: prepushkey hook: HG_HOOKNAME=prepushkey
+  remote: HG_HOOKTYPE=prepushkey
+  remote: HG_KEY=ba677d0156c1196c1a699fa53f390dcfc3ce3872
+  remote: HG_NAMESPACE=phases
+  remote: HG_NEW=0
+  remote: HG_OLD=1
+  remote: 
   remote: pushkey-abort: prepushkey hook exited with status 1
   updating ba677d0156c1 to public failed!
   % serve errors
@@ -318,7 +411,22 @@
   remote: adding manifests
   remote: adding file changes
   remote: added 1 changesets with 1 changes to 1 files
-  remote: prepushkey hook: HG_BUNDLE2=1 HG_HOOKNAME=prepushkey HG_HOOKTYPE=prepushkey HG_KEY=ba677d0156c1196c1a699fa53f390dcfc3ce3872 HG_NAMESPACE=phases HG_NEW=0 HG_NODE=ba677d0156c1196c1a699fa53f390dcfc3ce3872 HG_NODE_LAST=ba677d0156c1196c1a699fa53f390dcfc3ce3872 HG_OLD=1 HG_PENDING=$TESTTMP/test HG_PHASES_MOVED=1 HG_SOURCE=serve HG_TXNID=TXN:$ID$ HG_URL=remote:http:$LOCALIP: (glob)
+  remote: prepushkey hook: HG_BUNDLE2=1
+  remote: HG_HOOKNAME=prepushkey
+  remote: HG_HOOKTYPE=prepushkey
+  remote: HG_KEY=ba677d0156c1196c1a699fa53f390dcfc3ce3872
+  remote: HG_NAMESPACE=phases
+  remote: HG_NEW=0
+  remote: HG_NODE=ba677d0156c1196c1a699fa53f390dcfc3ce3872
+  remote: HG_NODE_LAST=ba677d0156c1196c1a699fa53f390dcfc3ce3872
+  remote: HG_OLD=1
+  remote: HG_PENDING=$TESTTMP/test
+  remote: HG_PHASES_MOVED=1
+  remote: HG_SOURCE=serve
+  remote: HG_TXNID=TXN:$ID$
+  remote: HG_TXNNAME=serve
+  remote: HG_URL=remote:http:$LOCALIP: (glob)
+  remote: 
   remote: pushkey-abort: prepushkey hook exited with status 1
   remote: transaction abort!
   remote: rollback completed
@@ -331,7 +439,7 @@
 
   $ cat >> .hg/hgrc <<EOF
   > [hooks]
-  > prepushkey = sh -c "printenv.py prepushkey 0"
+  > prepushkey = sh -c "printenv.py --line prepushkey 0"
   > EOF
 
 #if bundle1
@@ -339,7 +447,13 @@
   pushing to http://localhost:$HGPORT/
   searching for changes
   no changes found
-  remote: prepushkey hook: HG_HOOKNAME=prepushkey HG_HOOKTYPE=prepushkey HG_KEY=ba677d0156c1196c1a699fa53f390dcfc3ce3872 HG_NAMESPACE=phases HG_NEW=0 HG_OLD=1
+  remote: prepushkey hook: HG_HOOKNAME=prepushkey
+  remote: HG_HOOKTYPE=prepushkey
+  remote: HG_KEY=ba677d0156c1196c1a699fa53f390dcfc3ce3872
+  remote: HG_NAMESPACE=phases
+  remote: HG_NEW=0
+  remote: HG_OLD=1
+  remote: 
   % serve errors
   [1]
 #endif
@@ -352,7 +466,22 @@
   remote: adding manifests
   remote: adding file changes
   remote: added 1 changesets with 1 changes to 1 files
-  remote: prepushkey hook: HG_BUNDLE2=1 HG_HOOKNAME=prepushkey HG_HOOKTYPE=prepushkey HG_KEY=ba677d0156c1196c1a699fa53f390dcfc3ce3872 HG_NAMESPACE=phases HG_NEW=0 HG_NODE=ba677d0156c1196c1a699fa53f390dcfc3ce3872 HG_NODE_LAST=ba677d0156c1196c1a699fa53f390dcfc3ce3872 HG_OLD=1 HG_PENDING=$TESTTMP/test HG_PHASES_MOVED=1 HG_SOURCE=serve HG_TXNID=TXN:$ID$ HG_URL=remote:http:$LOCALIP: (glob)
+  remote: prepushkey hook: HG_BUNDLE2=1
+  remote: HG_HOOKNAME=prepushkey
+  remote: HG_HOOKTYPE=prepushkey
+  remote: HG_KEY=ba677d0156c1196c1a699fa53f390dcfc3ce3872
+  remote: HG_NAMESPACE=phases
+  remote: HG_NEW=0
+  remote: HG_NODE=ba677d0156c1196c1a699fa53f390dcfc3ce3872
+  remote: HG_NODE_LAST=ba677d0156c1196c1a699fa53f390dcfc3ce3872
+  remote: HG_OLD=1
+  remote: HG_PENDING=$TESTTMP/test
+  remote: HG_PHASES_MOVED=1
+  remote: HG_SOURCE=serve
+  remote: HG_TXNID=TXN:$ID$
+  remote: HG_TXNNAME=serve
+  remote: HG_URL=remote:http:$LOCALIP: (glob)
+  remote: 
   % serve errors
 #endif
 
--- a/tests/test-qrecord.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-qrecord.t	Wed Apr 17 13:41:18 2019 -0400
@@ -422,3 +422,42 @@
   $ hg diff --nodates
 
   $ cd ..
+
+qrecord should throw an error when histedit is in progress
+
+  $ hg init issue5981
+  $ cd issue5981
+  $ cat >> $HGRCPATH <<EOF
+  > [extensions]
+  > histedit=
+  > mq=
+  > EOF
+  $ echo > a
+  $ hg ci -Am 'foo bar'
+  adding a
+  $ hg log
+  changeset:   0:ea55e2ae468f
+  tag:         tip
+  user:        test
+  date:        Thu Jan 01 00:00:00 1970 +0000
+  summary:     foo bar
+  
+  $ hg histedit tip --commands - 2>&1 <<EOF
+  > edit ea55e2ae468f foo bar
+  > EOF
+  0 files updated, 0 files merged, 1 files removed, 0 files unresolved
+  Editing (ea55e2ae468f), you may commit or record as needed now.
+  (hg histedit --continue to resume)
+  [1]
+  $ echo 'foo bar' > a
+  $ hg qrecord -d '0 0' -m aaa a.patch <<EOF
+  > y
+  > y
+  > n
+  > y
+  > y
+  > n
+  > EOF
+  abort: histedit in progress
+  (use 'hg histedit --continue' or 'hg histedit --abort')
+  [255]
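
The guard qrecord now honours boils down to checking for histedit's state
file before touching the working copy. A minimal sketch of that check,
assuming the standard ``.hg/histedit-state`` location::

   import os

   def checkunfinished(repo_root):
       # histedit leaves .hg/histedit-state behind until --continue/--abort
       if os.path.exists(os.path.join(repo_root, '.hg', 'histedit-state')):
           raise SystemExit("abort: histedit in progress")

   checkunfinished('.')  # no-op outside a mid-histedit repository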
--- a/tests/test-rebase-conflicts.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-rebase-conflicts.t	Wed Apr 17 13:41:18 2019 -0400
@@ -330,6 +330,7 @@
   bundle2-input-bundle: 2 parts total
   updating the branch cache
   invalid branchheads cache (served): tip differs
+  invalid branchheads cache (served.hidden): tip differs
   rebase completed
 
 Test minimization of merge conflicts
--- a/tests/test-rebase-dest.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-rebase-dest.t	Wed Apr 17 13:41:18 2019 -0400
@@ -206,6 +206,18 @@
   abort: source and destination form a cycle
   [255]
 
+BUG: cycles aren't flagged correctly when --dry-run is set:
+  $ rebasewithdag -s B -d 'SRC' --dry-run <<'EOS'
+  > C
+  > |
+  > B
+  > |
+  > Z
+  > EOS
+  abort: source and destination form a cycle
+  starting dry-run rebase; repository will not be changed
+  [255]
+
 Switch roots:
 
   $ rebasewithdag -s 'all() - roots(all())' -d 'roots(all()) - ::SRC' <<'EOS'
--- a/tests/test-rebase-inmemory.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-rebase-inmemory.t	Wed Apr 17 13:41:18 2019 -0400
@@ -240,19 +240,19 @@
   |/
   o  0: b173517d0057 'a'
   
-  $ mkdir c
-  $ echo c > c/c
-  $ hg add c/c
-  $ hg ci -m 'c/c'
+  $ mkdir -p c/subdir
+  $ echo c > c/subdir/file.txt
+  $ hg add c/subdir/file.txt
+  $ hg ci -m 'c/subdir/file.txt'
   $ hg rebase -r . -d 3 -n
   starting dry-run rebase; repository will not be changed
-  rebasing 8:755f0104af9b "c/c" (tip)
-  abort: error: 'c/c' conflicts with file 'c' in 3.
+  rebasing 8:e147e6e3c490 "c/subdir/file.txt" (tip)
+  abort: error: 'c/subdir/file.txt' conflicts with file 'c' in 3.
   [255]
   $ hg rebase -r 3 -d . -n
   starting dry-run rebase; repository will not be changed
   rebasing 3:844a7de3e617 "c"
-  abort: error: file 'c' cannot be written because  'c/' is a folder in 755f0104af9b (containing 1 entries: c/c)
+  abort: error: file 'c' cannot be written because  'c/' is a directory in e147e6e3c490 (containing 1 entries: c/subdir/file.txt)
   [255]
 
   $ cd ..
@@ -718,3 +718,45 @@
   diff --git a/foo.txt b/foo.txt
   old mode 100644
   new mode 100755
+
+Test rebasing a commit with copy information, but no content changes
+
+  $ cd ..
+  $ hg clone -q repo1 merge-and-rename
+  $ cd merge-and-rename
+  $ cat << EOF >> .hg/hgrc
+  > [experimental]
+  > evolution.createmarkers=True
+  > evolution.allowunstable=True
+  > EOF
+  $ hg co -q 1
+  $ hg mv d e
+  $ hg ci -qm 'rename d to e'
+  $ hg co -q 3
+  $ hg merge -q 4
+  $ hg ci -m 'merge'
+  $ hg co -q 2
+  $ mv d e
+  $ hg addremove -qs 0
+  $ hg ci -qm 'untracked rename of d to e'
+  $ hg debugobsolete -q `hg log -T '{node}' -r 4` `hg log -T '{node}' -r .`
+  1 new orphan changesets
+  $ hg tglog
+  @  6: 676538af172d 'untracked rename of d to e'
+  |
+  | *    5: 71cb43376053 'merge'
+  | |\
+  | | x  4: 2c8b5dad7956 'rename d to e'
+  | | |
+  | o |  3: ca58782ad1e4 'b'
+  |/ /
+  o /  2: 814f6bd05178 'c'
+  |/
+  o  1: 02952614a83d 'd'
+  |
+  o  0: b173517d0057 'a'
+  
+  $ hg rebase -b 5 -d tip
+  rebasing 3:ca58782ad1e4 "b"
+  rebasing 5:71cb43376053 "merge"
+  note: not rebasing 5:71cb43376053 "merge", its destination already has all its changes
--- a/tests/test-record.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-record.t	Wed Apr 17 13:41:18 2019 -0400
@@ -76,10 +76,8 @@
   > EOF
   diff --git a/empty-rw b/empty-rw
   new file mode 100644
-  examine changes to 'empty-rw'? [Ynesfdaq?] n
-  
-  no changes to record
-  [1]
+  abort: empty commit message
+  [255]
 
   $ hg tip -p
   changeset:   -1:000000000000
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/tests/test-remote-hidden.t	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,111 @@
+========================================================
+Test the ability to access a hidden revision on a server
+========================================================
+
+#require serve
+
+  $ . $TESTDIR/testlib/obsmarker-common.sh
+  $ cat >> $HGRCPATH << EOF
+  > [phases]
+  > # public changesets are not obsolete
+  > publish=false
+  > [experimental]
+  > evolution=all
+  > [ui]
+  > logtemplate='{rev}:{node|short} {desc} [{phase}]\n'
+  > EOF
+
+Set up a simple repository with some hidden revisions
+-----------------------------------------------------
+
+Testing the `served.hidden` view
+
+  $ hg init repo-with-hidden
+  $ cd repo-with-hidden
+
+  $ echo 0 > a
+  $ hg ci -qAm "c_Public"
+  $ hg phase --public
+  $ echo 1 > a
+  $ hg ci -m "c_Amend_Old"
+  $ echo 2 > a
+  $ hg ci -m "c_Amend_New" --amend
+  $ hg up ".^"
+  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
+  $ echo 3 > a
+  $ hg ci -m "c_Pruned"
+  created new head
+  $ hg debugobsolete --record-parents `getid 'desc("c_Pruned")'` -d '0 0'
+  obsoleted 1 changesets
+  $ hg up ".^"
+  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
+  $ echo 4 > a
+  $ hg ci -m "c_Secret" --secret
+  created new head
+  $ echo 5 > a
+  $ hg ci -m "c_Secret_Pruned" --secret
+  $ hg debugobsolete --record-parents `getid 'desc("c_Secret_Pruned")'` -d '0 0'
+  obsoleted 1 changesets
+  $ hg up null
+  0 files updated, 0 files merged, 1 files removed, 0 files unresolved
+
+  $ hg log -G -T '{rev}:{node|short} {desc} [{phase}]\n' --hidden
+  x  5:8d28cbe335f3 c_Secret_Pruned [secret]
+  |
+  o  4:1c6afd79eb66 c_Secret [secret]
+  |
+  | x  3:5d1575e42c25 c_Pruned [draft]
+  |/
+  | o  2:c33affeb3f6b c_Amend_New [draft]
+  |/
+  | x  1:be215fbb8c50 c_Amend_Old [draft]
+  |/
+  o  0:5f354f46e585 c_Public [public]
+  
+  $ hg debugobsolete
+  be215fbb8c5090028b00154c1fe877ad1b376c61 c33affeb3f6b4e9621d1839d6175ddc07708807c 0 (Thu Jan 01 00:00:00 1970 +0000) {'ef1': '9', 'operation': 'amend', 'user': 'test'}
+  5d1575e42c25b7f2db75cd4e0b881b1c35158fae 0 {5f354f46e5853535841ec7a128423e991ca4d59b} (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
+  8d28cbe335f311bc89332d7bbe8a07889b6914a0 0 {1c6afd79eb6663275bbe30097e162b1c24ced0f0} (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
+
+  $ cd ..
+
+Test the `served.hidden` feature
+================================
+
+Check cache pre-warm
+--------------------
+
+  $ ls -1 repo-with-hidden/.hg/cache
+  branch2
+  branch2-base
+  branch2-served
+  branch2-served.hidden
+  branch2-visible
+  rbc-names-v1
+  rbc-revs-v1
+  tags2
+  tags2-visible
+
+Check that the `served.hidden` repoview can be served over hgweb
+----------------------------------------------------------------
+
+  $ hg -R repo-with-hidden serve -p $HGPORT -d --pid-file hg.pid --config web.view=served.hidden
+  $ cat hg.pid >> $DAEMON_PIDS
+
+changesets in secret and higher phases are not visible through hgweb
+
+  $ hg -R repo-with-hidden log --template "revision:    {rev}\\n" --rev "reverse(not secret())"
+  revision:    2
+  revision:    0
+  $ hg -R repo-with-hidden log --template "revision:    {rev}\\n" --rev "reverse(not secret())" --hidden
+  revision:    3
+  revision:    2
+  revision:    1
+  revision:    0
+  $ get-with-headers.py localhost:$HGPORT 'log?style=raw' | grep revision:
+  revision:    3
+  revision:    2
+  revision:    1
+  revision:    0
+
+  $ killdaemons.py
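
For context, the ``get-with-headers.py`` query above exercises hgweb's raw
log style, which prints one ``revision:`` line per visible changeset. A
minimal client sketch, assuming a server is already listening on a
hypothetical port 8000::

   from urllib.request import urlopen

   def visible_revisions(base_url):
       # hgweb's raw log style emits one "revision:    N" line per changeset
       with urlopen(base_url + '/log?style=raw') as resp:
           body = resp.read().decode('utf-8', 'replace')
       return [line for line in body.splitlines()
               if line.startswith('revision:')]

   for line in visible_revisions('http://localhost:8000'):
       print(line)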
--- a/tests/test-remotefilelog-bgprefetch.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-remotefilelog-bgprefetch.t	Wed Apr 17 13:41:18 2019 -0400
@@ -105,6 +105,7 @@
   $ hg debugwaitonprefetch >/dev/null 2>%1
   $ sleep 0.5
   $ hg debugwaitonrepack >/dev/null 2>%1
+  $ sleep 0.5
   $ find $CACHEDIR -type f | sort
   $TESTTMP/hgcache/master/packs/6e8633deba6e544e5f8edbd7b996d6e31a2c42ae.histidx
   $TESTTMP/hgcache/master/packs/6e8633deba6e544e5f8edbd7b996d6e31a2c42ae.histpack
@@ -141,6 +142,7 @@
   $ hg debugwaitonprefetch >/dev/null 2>%1
   $ sleep 1
   $ hg debugwaitonrepack >/dev/null 2>%1
+  $ sleep 1
   $ find $CACHEDIR -type f | sort
   $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histidx
   $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histpack
@@ -193,6 +195,7 @@
   $ hg debugwaitonprefetch >/dev/null 2>%1
   $ sleep 1
   $ hg debugwaitonrepack >/dev/null 2>%1
+  $ sleep 1
   $ find $CACHEDIR -type f | sort
   $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histidx
   $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histpack
@@ -243,6 +246,7 @@
   $ hg debugwaitonprefetch >/dev/null 2>%1
   $ sleep 1
   $ hg debugwaitonrepack >/dev/null 2>%1
+  $ sleep 1
 
 # Ensure that file 'y' was prefetched - it was not part of the rebase operation and therefore
 # could only be downloaded by the background prefetch
@@ -284,6 +288,7 @@
 
   $ sleep 0.5
   $ hg debugwaitonrepack >/dev/null 2>%1
+  $ sleep 0.5
 
   $ find $CACHEDIR -type f | sort
   $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histidx
@@ -328,6 +333,7 @@
   * files fetched over 1 fetches - (* misses, 0.00% hit ratio) over *s (glob) (?)
   $ sleep 0.5
   $ hg debugwaitonrepack >/dev/null 2>%1
+  $ sleep 0.5
 
   $ find $CACHEDIR -type f | sort
   $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histidx
--- a/tests/test-remotefilelog-blame.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-remotefilelog-blame.t	Wed Apr 17 13:41:18 2019 -0400
@@ -30,3 +30,11 @@
   1: y
   2: z
   2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
+
+Test grepping the working directory.
+
+  $ hg grep --all-files x
+  x:x
+  $ echo foo >> x
+  $ hg grep --all-files x
+  x:x
--- a/tests/test-remotefilelog-cacheprocess.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-remotefilelog-cacheprocess.t	Wed Apr 17 13:41:18 2019 -0400
@@ -56,11 +56,11 @@
   >                 log('requested %r\n' % key)
   >             sys.stdout.flush()
   >         elif cmd == 'set':
-  >             assert False, 'todo writing'
+  >             raise Exception('todo writing')
   >         else:
-  >             assert False, 'unknown command! %r' % cmd
+  >             raise Exception('unknown command! %r' % cmd)
   > except Exception as e:
-  >     log('Exception! %r\n' % e)
+  >     log('Exception! %s\n' % e)
   >     raise
   > EOF
 
@@ -79,7 +79,7 @@
   requested 'master/39/5df8f7c51f007019cb30201c49e884b46b92fa/69a1b67522704ec122181c0890bd16e9d3e7516a'
   requested 'master/95/cb0bfd2977c761298d9624e4b4d4c72a39974a/076f5e2225b3ff0400b98c92aa6cdf403ee24cca'
   got command 'set'
-  Exception! AssertionError('todo writing',)
+  Exception! todo writing
 
 Test cache hits.
   $ mv hgcache oldhgcache
@@ -110,7 +110,7 @@
   requested 'y\x00master/95/cb0bfd2977c761298d9624e4b4d4c72a39974a/076f5e2225b3ff0400b98c92aa6cdf403ee24cca'
   requested 'z\x00master/39/5df8f7c51f007019cb30201c49e884b46b92fa/69a1b67522704ec122181c0890bd16e9d3e7516a'
   got command 'set'
-  Exception! AssertionError('todo writing',)
+  Exception! todo writing
 
 Test cache hits with includepath.
   $ mv hgcache oldhgcache
--- a/tests/test-remotefilelog-datapack.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-remotefilelog-datapack.py	Wed Apr 17 13:41:18 2019 -0400
@@ -40,7 +40,7 @@
             shutil.rmtree(d)
 
     def makeTempDir(self):
-        tempdir = tempfile.mkdtemp()
+        tempdir = pycompat.bytestr(tempfile.mkdtemp())
         self.tempdirs.append(tempdir)
         return tempdir
 
@@ -48,11 +48,12 @@
         return hashlib.sha1(content).digest()
 
     def getFakeHash(self):
-        return ''.join(chr(random.randint(0, 255)) for _ in range(20))
+        return b''.join(pycompat.bytechr(random.randint(0, 255))
+                        for _ in range(20))
 
     def createPack(self, revisions=None, packdir=None):
         if revisions is None:
-            revisions = [("filename", self.getFakeHash(), nullid, "content")]
+            revisions = [(b"filename", self.getFakeHash(), nullid, b"content")]
 
         if packdir is None:
             packdir = self.makeTempDir()
@@ -73,23 +74,23 @@
     def _testAddSingle(self, content):
         """Test putting a simple blob into a pack and reading it out.
         """
-        filename = "foo"
+        filename = b"foo"
         node = self.getHash(content)
 
         revisions = [(filename, node, nullid, content)]
         pack = self.createPack(revisions)
         if self.paramsavailable:
-            self.assertEquals(pack.params.fanoutprefix,
-                              basepack.SMALLFANOUTPREFIX)
+            self.assertEqual(pack.params.fanoutprefix,
+                             basepack.SMALLFANOUTPREFIX)
 
         chain = pack.getdeltachain(filename, node)
-        self.assertEquals(content, chain[0][4])
+        self.assertEqual(content, chain[0][4])
 
     def testAddSingle(self):
-        self._testAddSingle('')
+        self._testAddSingle(b'')
 
     def testAddSingleEmpty(self):
-        self._testAddSingle('abcdef')
+        self._testAddSingle(b'abcdef')
 
     def testAddMultiple(self):
         """Test putting multiple unrelated blobs into a pack and reading them
@@ -97,8 +98,8 @@
         """
         revisions = []
         for i in range(10):
-            filename = "foo%s" % i
-            content = "abcdef%s" % i
+            filename = b"foo%d" % i
+            content = b"abcdef%d" % i
             node = self.getHash(content)
             revisions.append((filename, node, self.getFakeHash(), content))
 
@@ -106,19 +107,19 @@
 
         for filename, node, base, content in revisions:
             entry = pack.getdelta(filename, node)
-            self.assertEquals((content, filename, base, {}), entry)
+            self.assertEqual((content, filename, base, {}), entry)
 
             chain = pack.getdeltachain(filename, node)
-            self.assertEquals(content, chain[0][4])
+            self.assertEqual(content, chain[0][4])
 
     def testAddDeltas(self):
         """Test putting multiple delta blobs into a pack and read the chain.
         """
         revisions = []
-        filename = "foo"
+        filename = b"foo"
         lastnode = nullid
         for i in range(10):
-            content = "abcdef%s" % i
+            content = b"abcdef%d" % i
             node = self.getHash(content)
             revisions.append((filename, node, lastnode, content))
             lastnode = node
@@ -127,13 +128,13 @@
 
         entry = pack.getdelta(filename, revisions[0][1])
         realvalue = (revisions[0][3], filename, revisions[0][2], {})
-        self.assertEquals(entry, realvalue)
+        self.assertEqual(entry, realvalue)
 
         # Test that the chain for the final entry has all the others
         chain = pack.getdeltachain(filename, node)
         for i in range(10):
-            content = "abcdef%s" % i
-            self.assertEquals(content, chain[-i - 1][4])
+            content = b"abcdef%d" % i
+            self.assertEqual(content, chain[-i - 1][4])
 
     def testPackMany(self):
         """Pack many related and unrelated objects.
@@ -143,10 +144,10 @@
         blobs = {}
         random.seed(0)
         for i in range(100):
-            filename = "filename-%s" % i
+            filename = b"filename-%d" % i
             filerevs = []
             for j in range(random.randint(1, 100)):
-                content = "content-%s" % j
+                content = b"content-%d" % j
                 node = self.getHash(content)
                 lastnode = nullid
                 if len(filerevs) > 0:
@@ -158,22 +159,22 @@
         pack = self.createPack(revisions)
 
         # Verify the pack contents
-        for (filename, node, lastnode), content in sorted(blobs.iteritems()):
+        for (filename, node, lastnode), content in sorted(blobs.items()):
             chain = pack.getdeltachain(filename, node)
             for entry in chain:
                 expectedcontent = blobs[(entry[0], entry[1], entry[3])]
-                self.assertEquals(entry[4], expectedcontent)
+                self.assertEqual(entry[4], expectedcontent)
 
     def testPackMetadata(self):
         revisions = []
         for i in range(100):
-            filename = '%s.txt' % i
-            content = 'put-something-here \n' * i
+            filename = b'%d.txt' % i
+            content = b'put-something-here \n' * i
             node = self.getHash(content)
             meta = {constants.METAKEYFLAG: i ** 4,
                     constants.METAKEYSIZE: len(content),
-                    'Z': 'random_string',
-                    '_': '\0' * i}
+                    b'Z': b'random_string',
+                    b'_': b'\0' * i}
             revisions.append((filename, node, nullid, content, meta))
         pack = self.createPack(revisions)
         for name, node, x, content, origmeta in revisions:
@@ -181,50 +182,51 @@
             # flag == 0 should be optimized out
             if origmeta[constants.METAKEYFLAG] == 0:
                 del origmeta[constants.METAKEYFLAG]
-            self.assertEquals(parsedmeta, origmeta)
+            self.assertEqual(parsedmeta, origmeta)
 
     def testGetMissing(self):
         """Test the getmissing() api.
         """
         revisions = []
-        filename = "foo"
+        filename = b"foo"
         lastnode = nullid
         for i in range(10):
-            content = "abcdef%s" % i
+            content = b"abcdef%d" % i
             node = self.getHash(content)
             revisions.append((filename, node, lastnode, content))
             lastnode = node
 
         pack = self.createPack(revisions)
 
-        missing = pack.getmissing([("foo", revisions[0][1])])
+        missing = pack.getmissing([(b"foo", revisions[0][1])])
         self.assertFalse(missing)
 
-        missing = pack.getmissing([("foo", revisions[0][1]),
-                                   ("foo", revisions[1][1])])
+        missing = pack.getmissing([(b"foo", revisions[0][1]),
+                                   (b"foo", revisions[1][1])])
         self.assertFalse(missing)
 
         fakenode = self.getFakeHash()
-        missing = pack.getmissing([("foo", revisions[0][1]), ("foo", fakenode)])
-        self.assertEquals(missing, [("foo", fakenode)])
+        missing = pack.getmissing([(b"foo", revisions[0][1]),
+                                   (b"foo", fakenode)])
+        self.assertEqual(missing, [(b"foo", fakenode)])
 
     def testAddThrows(self):
         pack = self.createPack()
 
         try:
-            pack.add('filename', nullid, 'contents')
+            pack.add(b'filename', nullid, b'contents')
             self.assertTrue(False, "datapack.add should throw")
         except RuntimeError:
             pass
 
     def testBadVersionThrows(self):
         pack = self.createPack()
-        path = pack.path + '.datapack'
-        with open(path) as f:
+        path = pack.path + b'.datapack'
+        with open(path, 'rb') as f:
             raw = f.read()
         raw = struct.pack('!B', 255) + raw[1:]
         os.chmod(path, os.stat(path).st_mode | stat.S_IWRITE)
-        with open(path, 'w+') as f:
+        with open(path, 'wb+') as f:
             f.write(raw)
 
         try:
@@ -235,10 +237,10 @@
 
     def testMissingDeltabase(self):
         fakenode = self.getFakeHash()
-        revisions = [("filename", fakenode, self.getFakeHash(), "content")]
+        revisions = [(b"filename", fakenode, self.getFakeHash(), b"content")]
         pack = self.createPack(revisions)
-        chain = pack.getdeltachain("filename", fakenode)
-        self.assertEquals(len(chain), 1)
+        chain = pack.getdeltachain(b"filename", fakenode)
+        self.assertEqual(len(chain), 1)
 
     def testLargePack(self):
         """Test creating and reading from a large pack with over X entries.
@@ -247,7 +249,7 @@
         blobs = {}
         total = basepack.SMALLFANOUTCUTOFF + 1
         for i in pycompat.xrange(total):
-            filename = "filename-%s" % i
+            filename = b"filename-%d" % i
             content = filename
             node = self.getHash(content)
             blobs[(filename, node)] = content
@@ -255,12 +257,12 @@
 
         pack = self.createPack(revisions)
         if self.paramsavailable:
-            self.assertEquals(pack.params.fanoutprefix,
-                              basepack.LARGEFANOUTPREFIX)
+            self.assertEqual(pack.params.fanoutprefix,
+                             basepack.LARGEFANOUTPREFIX)
 
-        for (filename, node), content in blobs.iteritems():
+        for (filename, node), content in blobs.items():
             actualcontent = pack.getdeltachain(filename, node)[0][4]
-            self.assertEquals(actualcontent, content)
+            self.assertEqual(actualcontent, content)
 
     def testPacksCache(self):
         """Test that we remember the most recent packs while fetching the delta
@@ -274,12 +276,12 @@
 
         for i in range(numpacks):
             chain = []
-            revision = (str(i), self.getFakeHash(), nullid, "content")
+            revision = (b'%d' % i, self.getFakeHash(), nullid, b"content")
 
             for _ in range(revisionsperpack):
                 chain.append(revision)
                 revision = (
-                    str(i),
+                    b'%d' % i,
                     self.getFakeHash(),
                     revision[1],
                     self.getFakeHash()
@@ -290,7 +292,7 @@
 
         class testdatapackstore(datapack.datapackstore):
             # Ensures that we are not keeping everything in the cache.
-            DEFAULTCACHESIZE = numpacks / 2
+            DEFAULTCACHESIZE = numpacks // 2
 
         store = testdatapackstore(uimod.ui(), packdir)
 
@@ -300,12 +302,12 @@
             chain = store.getdeltachain(revision[0], revision[1])
 
             mostrecentpack = next(iter(store.packs), None)
-            self.assertEquals(
+            self.assertEqual(
                 mostrecentpack.getdeltachain(revision[0], revision[1]),
                 chain
             )
 
-            self.assertEquals(randomchain.index(revision) + 1, len(chain))
+            self.assertEqual(randomchain.index(revision) + 1, len(chain))
 
     # perf test off by default since it's slow
     def _testIndexPerf(self):
@@ -330,8 +332,8 @@
         for packsize in packsizes:
             revisions = []
             for i in pycompat.xrange(packsize):
-                filename = "filename-%s" % i
-                content = "content-%s" % i
+                filename = b"filename-%d" % i
+                content = b"content-%d" % i
                 node = self.getHash(content)
                 revisions.append((filename, node, nullid, content))
 
@@ -350,9 +352,9 @@
                 start = time.time()
                 pack.getmissing(findnodes[:lookupsize])
                 elapsed = time.time() - start
-                print ("%s pack %s lookups = %0.04f" %
-                       (('%s' % packsize).rjust(7),
-                        ('%s' % lookupsize).rjust(7),
+                print ("%s pack %d lookups = %0.04f" %
+                       (('%d' % packsize).rjust(7),
+                        ('%d' % lookupsize).rjust(7),
                         elapsed))
 
             print("")
--- a/tests/test-remotefilelog-gc.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-remotefilelog-gc.t	Wed Apr 17 13:41:18 2019 -0400
@@ -107,6 +107,6 @@
 # Test that a warning is displayed when the repo path is malformed
 
   $ printf "asdas\0das" >> $CACHEDIR/repos
-  $ hg gc 2>&1 | head -n2
-  warning: malformed path: * (glob)
-  Traceback (most recent call last):
+  $ hg gc
+  abort: invalid path asdas\x00da: .*(null|NULL).* (re)
+  [255]
--- a/tests/test-remotefilelog-histpack.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-remotefilelog-histpack.py	Wed Apr 17 13:41:18 2019 -0400
@@ -52,7 +52,7 @@
         node, p1node, p2node, and linknode.
         """
         if revisions is None:
-            revisions = [("filename", self.getFakeHash(), nullid, nullid,
+            revisions = [(b"filename", self.getFakeHash(), nullid, nullid,
                           self.getFakeHash(), None)]
 
         packdir = pycompat.fsencode(self.makeTempDir())
@@ -68,7 +68,7 @@
     def testAddSingle(self):
         """Test putting a single entry into a pack and reading it out.
         """
-        filename = "foo"
+        filename = b"foo"
         node = self.getFakeHash()
         p1 = self.getFakeHash()
         p2 = self.getFakeHash()
@@ -78,9 +78,9 @@
         pack = self.createPack(revisions)
 
         actual = pack.getancestors(filename, node)[node]
-        self.assertEquals(p1, actual[0])
-        self.assertEquals(p2, actual[1])
-        self.assertEquals(linknode, actual[2])
+        self.assertEqual(p1, actual[0])
+        self.assertEqual(p2, actual[1])
+        self.assertEqual(linknode, actual[2])
 
     def testAddMultiple(self):
         """Test putting multiple unrelated revisions into a pack and reading
@@ -88,7 +88,7 @@
         """
         revisions = []
         for i in range(10):
-            filename = "foo-%s" % i
+            filename = b"foo-%d" % i
             node = self.getFakeHash()
             p1 = self.getFakeHash()
             p2 = self.getFakeHash()
@@ -99,10 +99,10 @@
 
         for filename, node, p1, p2, linknode, copyfrom in revisions:
             actual = pack.getancestors(filename, node)[node]
-            self.assertEquals(p1, actual[0])
-            self.assertEquals(p2, actual[1])
-            self.assertEquals(linknode, actual[2])
-            self.assertEquals(copyfrom, actual[3])
+            self.assertEqual(p1, actual[0])
+            self.assertEqual(p2, actual[1])
+            self.assertEqual(linknode, actual[2])
+            self.assertEqual(copyfrom, actual[3])
 
     def testAddAncestorChain(self):
         """Test putting multiple revisions in into a pack and read the ancestor
@@ -124,10 +124,10 @@
         ancestors = pack.getancestors(revisions[0][0], revisions[0][1])
         for filename, node, p1, p2, linknode, copyfrom in revisions:
             ap1, ap2, alinknode, acopyfrom = ancestors[node]
-            self.assertEquals(ap1, p1)
-            self.assertEquals(ap2, p2)
-            self.assertEquals(alinknode, linknode)
-            self.assertEquals(acopyfrom, copyfrom)
+            self.assertEqual(ap1, p1)
+            self.assertEqual(ap2, p2)
+            self.assertEqual(alinknode, linknode)
+            self.assertEqual(acopyfrom, copyfrom)
 
     def testPackMany(self):
         """Pack many related and unrelated ancestors.
@@ -161,16 +161,16 @@
         pack = self.createPack(revisions)
 
         # Verify the pack contents
-        for (filename, node), (p1, p2, lastnode) in allentries.items():
+        for (filename, node) in allentries:
             ancestors = pack.getancestors(filename, node)
-            self.assertEquals(ancestorcounts[(filename, node)],
-                              len(ancestors))
+            self.assertEqual(ancestorcounts[(filename, node)],
+                             len(ancestors))
             for anode, (ap1, ap2, alinknode, copyfrom) in ancestors.items():
                 ep1, ep2, elinknode = allentries[(filename, anode)]
-                self.assertEquals(ap1, ep1)
-                self.assertEquals(ap2, ep2)
-                self.assertEquals(alinknode, elinknode)
-                self.assertEquals(copyfrom, None)
+                self.assertEqual(ap1, ep1)
+                self.assertEqual(ap2, ep2)
+                self.assertEqual(alinknode, elinknode)
+                self.assertEqual(copyfrom, None)
 
     def testGetNodeInfo(self):
         revisions = []
@@ -186,10 +186,10 @@
         # Test that getnodeinfo returns the expected results
         for filename, node, p1, p2, linknode, copyfrom in revisions:
             ap1, ap2, alinknode, acopyfrom = pack.getnodeinfo(filename, node)
-            self.assertEquals(ap1, p1)
-            self.assertEquals(ap2, p2)
-            self.assertEquals(alinknode, linknode)
-            self.assertEquals(acopyfrom, copyfrom)
+            self.assertEqual(ap1, p1)
+            self.assertEqual(ap2, p2)
+            self.assertEqual(alinknode, linknode)
+            self.assertEqual(acopyfrom, copyfrom)
 
     def testGetMissing(self):
         """Test the getmissing() api.
@@ -215,11 +215,11 @@
         fakenode = self.getFakeHash()
         missing = pack.getmissing([(filename, revisions[0][1]),
                                    (filename, fakenode)])
-        self.assertEquals(missing, [(filename, fakenode)])
+        self.assertEqual(missing, [(filename, fakenode)])
 
         # Test getmissing on a non-existent filename
-        missing = pack.getmissing([("bar", fakenode)])
-        self.assertEquals(missing, [("bar", fakenode)])
+        missing = pack.getmissing([(b"bar", fakenode)])
+        self.assertEqual(missing, [(b"bar", fakenode)])
 
     def testAddThrows(self):
         pack = self.createPack()
@@ -232,12 +232,12 @@
 
     def testBadVersionThrows(self):
         pack = self.createPack()
-        path = pack.path + '.histpack'
-        with open(path) as f:
+        path = pack.path + b'.histpack'
+        with open(path, 'rb') as f:
             raw = f.read()
         raw = struct.pack('!B', 255) + raw[1:]
         os.chmod(path, os.stat(path).st_mode | stat.S_IWRITE)
-        with open(path, 'w+') as f:
+        with open(path, 'wb+') as f:
             f.write(raw)
 
         try:
@@ -260,14 +260,14 @@
             revisions.append((filename, node, p1, p2, linknode, None))
 
         pack = self.createPack(revisions)
-        self.assertEquals(pack.params.fanoutprefix, basepack.LARGEFANOUTPREFIX)
+        self.assertEqual(pack.params.fanoutprefix, basepack.LARGEFANOUTPREFIX)
 
         for filename, node, p1, p2, linknode, copyfrom in revisions:
             actual = pack.getancestors(filename, node)[node]
-            self.assertEquals(p1, actual[0])
-            self.assertEquals(p2, actual[1])
-            self.assertEquals(linknode, actual[2])
-            self.assertEquals(copyfrom, actual[3])
+            self.assertEqual(p1, actual[0])
+            self.assertEqual(p2, actual[1])
+            self.assertEqual(linknode, actual[2])
+            self.assertEqual(copyfrom, actual[3])
 # TODO:
 # histpack store:
 # - repack two packs into one
--- a/tests/test-remotefilelog-prefetch.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-remotefilelog-prefetch.t	Wed Apr 17 13:41:18 2019 -0400
@@ -197,6 +197,9 @@
   $ mv x x2
   $ mv y y2
   $ mv z z2
+  $ echo a > a
+  $ hg add a
+  $ rm a
   $ clearcache
   $ hg addremove -s 50 > /dev/null
   3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over * (glob)
--- a/tests/test-removeemptydirs.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-removeemptydirs.t	Wed Apr 17 13:41:18 2019 -0400
@@ -265,91 +265,3 @@
   0:d17db4b0303a add bar
 
   $ cd $TESTTMP
-
-Testing `hg split` being run from inside of a directory that was created in the
-commit being split:
-
-  $ hg init hgsplit
-  $ cd hgsplit
-  $ cat >> .hg/hgrc << EOF
-  > [ui]
-  > interactive = 1
-  > [extensions]
-  > split =
-  > EOF
-  $ echo anchor > anchor.txt
-  $ hg ci -qAm anchor
-
-Create a changeset with '/otherfile_in_root' and 'somedir/foo', then try to
-split it.
-  $ echo otherfile > otherfile_in_root
-  $ mkdir somedir
-  $ cd somedir
-  $ echo hi > foo
-  $ hg ci -qAm split_me
-(Note: need to make this file not in this directory, or else the bug doesn't
-reproduce; we're using a separate file due to concerns of portability on
-`echo -e`)
-  $ cat > ../split_commands << EOF
-  > n
-  > y
-  > y
-  > a
-  > EOF
-
-The split succeeds on no-rmcwd platforms, which alters the rest of the tests
-#if rmcwd
-  $ cat ../split_commands | hg split
-  current directory was removed
-  (consider changing to repo root: $TESTTMP/hgsplit)
-  diff --git a/otherfile_in_root b/otherfile_in_root
-  new file mode 100644
-  examine changes to 'otherfile_in_root'? [Ynesfdaq?] n
-  
-  diff --git a/somedir/foo b/somedir/foo
-  new file mode 100644
-  examine changes to 'somedir/foo'? [Ynesfdaq?] y
-  
-  @@ -0,0 +1,1 @@
-  +hi
-  record change 2/2 to 'somedir/foo'? [Ynesfdaq?] y
-  
-  abort: $ENOENT$
-  [255]
-#endif
-
-Let's try that again without the rmdir
-  $ cd $TESTTMP/hgsplit/somedir
-Show that the previous split didn't do anything
-  $ hg log -T '{rev}:{node|short} {desc}\n'
-  1:e26b22a4f0b7 split_me
-  0:7e53273730c0 anchor
-  $ hg status
-  ? split_commands
-Try again
-  $ cat ../split_commands | hg $NO_RM split
-  diff --git a/otherfile_in_root b/otherfile_in_root
-  new file mode 100644
-  examine changes to 'otherfile_in_root'? [Ynesfdaq?] n
-  
-  diff --git a/somedir/foo b/somedir/foo
-  new file mode 100644
-  examine changes to 'somedir/foo'? [Ynesfdaq?] y
-  
-  @@ -0,0 +1,1 @@
-  +hi
-  record change 2/2 to 'somedir/foo'? [Ynesfdaq?] y
-  
-  created new head
-  diff --git a/otherfile_in_root b/otherfile_in_root
-  new file mode 100644
-  examine changes to 'otherfile_in_root'? [Ynesfdaq?] a
-  
-  saved backup bundle to $TESTTMP/hgsplit/.hg/strip-backup/*-split.hg (glob)
-Show that this split did something
-  $ hg log -T '{rev}:{node|short} {desc}\n'
-  2:a440f24fca4f split_me
-  1:c994f20276ab split_me
-  0:7e53273730c0 anchor
-  $ hg status
-  ? split_commands
--- a/tests/test-rename-merge1.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-rename-merge1.t	Wed Apr 17 13:41:18 2019 -0400
@@ -37,8 +37,8 @@
    branchmerge: True, force: False, partial: False
    ancestor: af1939970a1c, local: 044f8520aeeb+, remote: 85c198ef2f6c
   note: possible conflict - a2 was renamed multiple times to:
+   b2
    c2
-   b2
    preserving a for resolve of b
   removing a
    b2: remote created -> g
--- a/tests/test-repair-strip.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-repair-strip.t	Wed Apr 17 13:41:18 2019 -0400
@@ -53,7 +53,7 @@
   rollback failed - please run hg recover
   (failure reason: [Errno 13] Permission denied .hg/store/data/b.i')
   strip failed, backup bundle
-  abort: Permission denied .hg/store/data/b.i
+  abort: Permission denied .hg/store/data/b.i'
   % after update 0, strip 2
   abandoned transaction found - run hg recover
   checking changesets
@@ -85,7 +85,7 @@
   date:        Thu Jan 01 00:00:00 1970 +0000
   summary:     a
   
-  abort: Permission denied .hg/store/data/b.i
+  abort: Permission denied .hg/store/data/b.i'
   % after update 0, strip 2
   checking changesets
   checking manifests
@@ -107,7 +107,7 @@
   rollback failed - please run hg recover
   (failure reason: [Errno 13] Permission denied .hg/store/00manifest.i')
   strip failed, backup bundle
-  abort: Permission denied .hg/store/00manifest.i
+  abort: Permission denied .hg/store/00manifest.i'
   % after update 0, strip 2
   abandoned transaction found - run hg recover
   checking changesets
--- a/tests/test-repo-compengines.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-repo-compengines.t	Wed Apr 17 13:41:18 2019 -0400
@@ -21,8 +21,8 @@
 
 Unknown compression engine to format.compression aborts
 
-  $ hg --config experimental.format.compression=unknown init unknown
-  abort: compression engine unknown defined by experimental.format.compression not available
+  $ hg --config format.revlog-compression=unknown init unknown
+  abort: compression engine unknown defined by format.revlog-compression not available
   (run "hg debuginstall" to list available compression engines)
   [255]
 
@@ -40,13 +40,13 @@
 
 #if zstd
 
-  $ hg --config experimental.format.compression=zstd init zstd
+  $ hg --config format.revlog-compression=zstd init zstd
   $ cd zstd
   $ cat .hg/requires
   dotencode
-  exp-compression-zstd
   fncache
   generaldelta
+  revlog-compression-zstd
   revlogv1
   sparserevlog
   store
@@ -66,7 +66,7 @@
 
   $ cd default
   $ touch bar
-  $ hg --config experimental.format.compression=zstd -q commit -A -m 'add bar with a lot of repeated repeated repeated text'
+  $ hg --config format.revlog-compression=zstd -q commit -A -m 'add bar with a lot of repeated repeated repeated text'
 
   $ cat .hg/requires
   dotencode
@@ -82,3 +82,114 @@
       0x78 (x)  : 199 (100.00%)
 
 #endif
+
+checking zlib options
+=====================
+
+  $ hg init zlib-level-default
+  $ hg init zlib-level-1
+  $ cat << EOF >> zlib-level-1/.hg/hgrc
+  > [storage]
+  > revlog.zlib.level=1
+  > EOF
+  $ hg init zlib-level-9
+  $ cat << EOF >> zlib-level-9/.hg/hgrc
+  > [storage]
+  > revlog.zlib.level=9
+  > EOF
+
+
+  $ commitone() {
+  >    repo=$1
+  >    cp $RUNTESTDIR/bundles/issue4438-r1.hg $repo/a
+  >    hg -R $repo add $repo/a
+  >    hg -R $repo commit -m some-commit
+  > }
+
+  $ for repo in zlib-level-default zlib-level-1 zlib-level-9; do
+  >     commitone $repo
+  > done
+
+  $ $RUNTESTDIR/f -s */.hg/store/data/*
+  zlib-level-1/.hg/store/data/a.i: size=4146
+  zlib-level-9/.hg/store/data/a.i: size=4138
+  zlib-level-default/.hg/store/data/a.i: size=4138
+
+Test error cases
+
+  $ hg init zlib-level-invalid
+  $ cat << EOF >> zlib-level-invalid/.hg/hgrc
+  > [storage]
+  > revlog.zlib.level=foobar
+  > EOF
+  $ commitone zlib-level-invalid
+  abort: storage.revlog.zlib.level is not a valid integer ('foobar')
+  abort: storage.revlog.zlib.level is not a valid integer ('foobar')
+  [255]
+
+  $ hg init zlib-level-out-of-range
+  $ cat << EOF >> zlib-level-out-of-range/.hg/hgrc
+  > [storage]
+  > revlog.zlib.level=42
+  > EOF
+
+  $ commitone zlib-level-out-of-range
+  abort: invalid value for `storage.revlog.zlib.level` config: 42
+  abort: invalid value for `storage.revlog.zlib.level` config: 42
+  [255]
+
+checking zstd options
+=====================
+
+  $ hg init zstd-level-default --config format.revlog-compression=zstd
+  $ hg init zstd-level-1 --config format.revlog-compression=zstd
+  $ cat << EOF >> zstd-level-1/.hg/hgrc
+  > [storage]
+  > revlog.zstd.level=1
+  > EOF
+  $ hg init zstd-level-22 --config format.revlog-compression=zstd
+  $ cat << EOF >> zstd-level-22/.hg/hgrc
+  > [storage]
+  > revlog.zstd.level=22
+  > EOF
+
+
+  $ commitone() {
+  >    repo=$1
+  >    cp $RUNTESTDIR/bundles/issue4438-r1.hg $repo/a
+  >    hg -R $repo add $repo/a
+  >    hg -R $repo commit -m some-commit
+  > }
+
+  $ for repo in zstd-level-default zstd-level-1 zstd-level-22; do
+  >     commitone $repo
+  > done
+
+  $ $RUNTESTDIR/f -s zstd-*/.hg/store/data/*
+  zstd-level-1/.hg/store/data/a.i: size=4097
+  zstd-level-22/.hg/store/data/a.i: size=4091
+  zstd-level-default/.hg/store/data/a.i: size=4094
+
+Test error cases
+
+  $ hg init zstd-level-invalid --config format.revlog-compression=zstd
+  $ cat << EOF >> zstd-level-invalid/.hg/hgrc
+  > [storage]
+  > revlog.zstd.level=foobar
+  > EOF
+  $ commitone zstd-level-invalid
+  abort: storage.revlog.zstd.level is not a valid integer ('foobar')
+  abort: storage.revlog.zstd.level is not a valid integer ('foobar')
+  [255]
+
+  $ hg init zstd-level-out-of-range --config format.revlog-compression=zstd
+  $ cat << EOF >> zstd-level-out-of-range/.hg/hgrc
+  > [storage]
+  > revlog.zstd.level=42
+  > EOF
+
+  $ commitone zstd-level-out-of-range
+  abort: invalid value for `storage.revlog.zstd.level` config: 42
+  abort: invalid value for `storage.revlog.zstd.level` config: 42
+  [255]
+
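The sizes recorded above follow directly from the configured compression
level. A rough stdlib-only illustration of that trade-off (zstd behaves
analogously, but compressing with it needs the python-zstandard package)::

   import zlib

   payload = b"some-commit " * 4096
   for level in (1, 6, 9):  # 6 is zlib's default level
       size = len(zlib.compress(payload, level))
       print('zlib level %d: %d bytes' % (level, size))

Higher levels trade CPU time for (usually slightly) smaller revlogs, which
matches the byte counts the test captures.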
--- a/tests/test-resolve.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-resolve.t	Wed Apr 17 13:41:18 2019 -0400
@@ -67,6 +67,9 @@
   $ hg resolve -l
   R file1
   U file2
+  $ hg resolve -l --config ui.relative-paths=yes
+  R ../file1
+  U ../file2
   $ hg resolve --re-merge filez file2
   arguments do not match paths that need resolving
   (try: hg resolve --re-merge path:filez path:file2)
--- a/tests/test-revert-interactive.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-revert-interactive.t	Wed Apr 17 13:41:18 2019 -0400
@@ -149,11 +149,9 @@
   g
 
 Test that a noop revert doesn't do an unnecessary backup
-  $ (echo y; echo n) | hg revert -i -r 2 folder1/g
+  $ (echo n) | hg revert -i -r 2 folder1/g
   diff --git a/folder1/g b/folder1/g
   1 hunks, 1 lines changed
-  examine changes to 'folder1/g'? [Ynesfdaq?] y
-  
   @@ -3,4 +3,3 @@
    3
    4
@@ -165,11 +163,9 @@
   g
 
 Test --no-backup
-  $ (echo y; echo y) | hg revert -i -C -r 2 folder1/g
+  $ (echo y) | hg revert -i -C -r 2 folder1/g
   diff --git a/folder1/g b/folder1/g
   1 hunks, 1 lines changed
-  examine changes to 'folder1/g'? [Ynesfdaq?] y
-  
   @@ -3,4 +3,3 @@
    3
    4
@@ -270,7 +266,6 @@
   M f
   M folder1/g
   $ hg revert --interactive f << EOF
-  > y
   > ?
   > y
   > n
@@ -278,8 +273,6 @@
   > EOF
   diff --git a/f b/f
   2 hunks, 2 lines changed
-  examine changes to 'f'? [Ynesfdaq?] y
-  
   @@ -1,6 +1,5 @@
   -a
    1
@@ -327,6 +320,25 @@
   4
   5
   $ rm f.orig
+
+Patterns
+
+  $ hg revert -i 'glob:f*' << EOF
+  > y
+  > n
+  > EOF
+  diff --git a/f b/f
+  1 hunks, 1 lines changed
+  examine changes to 'f'? [Ynesfdaq?] y
+  
+  @@ -4,4 +4,3 @@
+   3
+   4
+   5
+  -b
+  discard this change to 'f'? [Ynesfdaq?] n
+  
+
   $ hg update -C .
   2 files updated, 0 files merged, 0 files removed, 0 files unresolved
 
@@ -424,3 +436,72 @@
   b: no such file in rev b40d1912accf
 
   $ cd ..
+
+Prompt before undeleting a file (issue6008)
+  $ hg init repo
+  $ cd repo
+  $ echo a > a
+  $ hg ci -qAm a
+  $ hg rm a
+  $ hg revert -i<<EOF
+  > y
+  > EOF
+  add back removed file a (Yn)? y
+  undeleting a
+  $ ls
+  a
+  $ hg rm a
+  $ hg revert -i<<EOF
+  > n
+  > EOF
+  add back removed file a (Yn)? n
+  $ ls
+  $ hg revert -a
+  undeleting a
+  $ cd ..
+
+Test "keep" mode
+
+  $ cat <<EOF >> $HGRCPATH
+  > [experimental]
+  > revert.interactive.select-to-keep = true
+  > EOF
+
+  $ cd repo
+  $ printf "x\na\ny\n" > a
+  $ hg diff
+  diff -r cb9a9f314b8b a
+  --- a/a	Thu Jan 01 00:00:00 1970 +0000
+  +++ b/a	Thu Jan 01 00:00:00 1970 +0000
+  @@ -1,1 +1,3 @@
+  +x
+   a
+  +y
+  $ cat > $TESTTMP/editor.sh << '__EOF__'
+  > echo "+new line" >> "$1"
+  > __EOF__
+
+  $ HGEDITOR="\"sh\" \"${TESTTMP}/editor.sh\"" hg revert -i  <<EOF
+  > y
+  > n
+  > e
+  > EOF
+  diff --git a/a b/a
+  2 hunks, 2 lines changed
+  examine changes to 'a'? [Ynesfdaq?] y
+  
+  @@ -1,1 +1,2 @@
+  +x
+   a
+  keep change 1/2 to 'a'? [Ynesfdaq?] n
+  
+  @@ -1,1 +2,2 @@
+   a
+  +y
+  keep change 2/2 to 'a'? [Ynesfdaq?] e
+  
+  reverting a
+  $ cat a
+  a
+  y
+  new line
--- a/tests/test-revert.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-revert.t	Wed Apr 17 13:41:18 2019 -0400
@@ -92,7 +92,7 @@
   $ echo z > e
   $ hg revert --all -v --config 'ui.origbackuppath=.hg/origbackups'
   creating directory: $TESTTMP/repo/.hg/origbackups
-  saving current version of e as $TESTTMP/repo/.hg/origbackups/e
+  saving current version of e as .hg/origbackups/e
   reverting e
   $ rm -rf .hg/origbackups
 
@@ -289,6 +289,23 @@
   $ hg revert .
   reverting b/b
 
+respects ui.relative-paths
+--------------------------
+
+  $ echo foo > newdir/newfile
+  $ hg add newdir/newfile
+  $ hg revert --all --cwd newdir
+  forgetting newfile
+
+  $ echo foo > newdir/newfile
+  $ hg add newdir/newfile
+  $ hg revert --all --cwd newdir --config ui.relative-paths=True
+  forgetting newfile
+
+  $ echo foo > newdir/newfile
+  $ hg add newdir/newfile
+  $ hg revert --all --cwd newdir --config ui.relative-paths=False
+  forgetting newdir/newfile
 
 reverting a rename target should revert the source
 --------------------------------------------------
--- a/tests/test-revlog-raw.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-revlog-raw.py	Wed Apr 17 13:41:18 2019 -0400
@@ -417,7 +417,6 @@
         print('  got:      %s' % result15)
 
 def maintest():
-    expected = rl = None
     with newtransaction() as tr:
         rl = newrevlog(recreate=True)
         expected = writecases(rl, tr)
--- a/tests/test-revset.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-revset.t	Wed Apr 17 13:41:18 2019 -0400
@@ -12,9 +12,9 @@
   >     """
   >     if 3 not in subset:
   >        if 2 in subset:
-  >            return baseset([2,2])
+  >            return baseset([2, 2])
   >        return baseset()
-  >     return baseset([3,3,2,2])
+  >     return baseset([3, 3, 2, 2])
   > 
   > mercurial.revset.symbols[b'r3232'] = r3232
   > EOF
@@ -643,10 +643,13 @@
   [255]
 
   $ hg debugrevspec '.#generations[a]'
-  hg: parse error: relation subscript must be an integer
+  hg: parse error: relation subscript must be an integer or a range
   [255]
   $ hg debugrevspec '.#generations[1-2]'
-  hg: parse error: relation subscript must be an integer
+  hg: parse error: relation subscript must be an integer or a range
+  [255]
+  $ hg debugrevspec '.#generations[foo:bar]'
+  hg: parse error: relation subscript bounds must be integers
   [255]
 
 suggested relations
@@ -1274,6 +1277,31 @@
   $ log '.#g[(-1)]'
   8
 
+  $ log '6#generations[0:1]'
+  6
+  7
+  $ log '6#generations[-1:1]'
+  4
+  5
+  6
+  7
+  $ log '6#generations[0:]'
+  6
+  7
+  $ log '5#generations[:0]'
+  0
+  1
+  3
+  5
+  $ log '3#generations[:]'
+  0
+  1
+  3
+  5
+  6
+  7
+  $ log 'tip#generations[1:-1]'
+
   $ hg debugrevspec -p parsed 'roots(:)#g[2]'
   * parsed:
   (relsubscript
@@ -2950,3 +2978,63 @@
   * set:
   <baseset+ [0]>
   0
+
+abort if the revset doesn't have the expected size
+  $ log 'expectsize()'
+  hg: parse error: invalid set of arguments
+  [255]
+  $ log 'expectsize(0:2, a)'
+  hg: parse error: expectsize requires a size range or a positive integer
+  [255]
+  $ log 'expectsize(0:2, 3)'
+  0
+  1
+  2
+
+  $ log 'expectsize(2:0, 3)'
+  2
+  1
+  0
+  $ log 'expectsize(0:1, 1)'
+  abort: revset size mismatch. expected 1, got 2!
+  [255]
+  $ log 'expectsize(0:4, -1)'
+  hg: parse error: negative size
+  [255]
+  $ log 'expectsize(0:2, 2:4)'
+  0
+  1
+  2
+  $ log 'expectsize(0:1, 3:5)'
+  abort: revset size mismatch. expected between 3 and 5, got 2!
+  [255]
+  $ log 'expectsize(0:1, -1:2)'
+  hg: parse error: negative size
+  [255]
+  $ log 'expectsize(0:1, 1:-2)'
+  hg: parse error: negative size
+  [255]
+  $ log 'expectsize(0:2, a:4)'
+  hg: parse error: size range bounds must be integers
+  [255]
+  $ log 'expectsize(0:2, 2:b)'
+  hg: parse error: size range bounds must be integers
+  [255]
+  $ log 'expectsize(0:2, 2:)'
+  0
+  1
+  2
+  $ log 'expectsize(0:2, :5)'
+  0
+  1
+  2
+  $ log 'expectsize(0:2, :)'
+  0
+  1
+  2
+  $ log 'expectsize(0:2, 4:)'
+  abort: revset size mismatch. expected between 4 and 11, got 3!
+  [255]
+  $ log 'expectsize(0:2, :2)'
+  abort: revset size mismatch. expected between 0 and 2, got 3!
+  [255]
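
A sketch of the check ``expectsize()`` appears to perform, judging from the
cases above: a bare integer must match exactly, a ``min:max`` range (either
bound optional) brackets the allowed size, and negative bounds are rejected.
The helper name here is illustrative only::

   def checksize(actual, spec):
       # spec is either "N" (exact) or "lo:hi" with optional bounds
       if ':' not in spec:
           expected = int(spec)
           if expected < 0:
               raise ValueError('negative size')
           return actual == expected
       lo, _, hi = spec.partition(':')
       lo = int(lo) if lo else 0
       hi = int(hi) if hi else actual  # open upper bound accepts any size
       if lo < 0 or hi < 0:
           raise ValueError('negative size')
       return lo <= actual <= hi

   assert checksize(3, '3') and checksize(3, '2:4') and checksize(3, ':')
   assert not checksize(2, '3:5') and not checksize(3, '4:')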
--- a/tests/test-revset2.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-revset2.t	Wed Apr 17 13:41:18 2019 -0400
@@ -1525,8 +1525,8 @@
   $ hg init problematicencoding
   $ cd problematicencoding
 
-  $ "$PYTHON" > setup.sh <<EOF
-  > print(u'''
+  $ "$PYTHON" <<EOF
+  > open('setup.sh', 'wb').write(u'''
   > echo a > text
   > hg add text
   > hg --encoding utf-8 commit -u '\u30A2' -m none
@@ -1541,8 +1541,8 @@
   $ sh < setup.sh
 
 test in problematic encoding
-  $ "$PYTHON" > test.sh <<EOF
-  > print(u'''
+  $ "$PYTHON" <<EOF
+  > open('test.sh', 'wb').write(u'''
   > hg --encoding cp932 log --template '{rev}\\n' -r 'author(\u30A2)'
   > echo ====
   > hg --encoding cp932 log --template '{rev}\\n' -r 'author(\u30C2)'
@@ -1627,6 +1627,7 @@
   > printprevset = $TESTTMP/printprevset.py
   > EOF
 
+  $ unset P
   $ hg --config revsetalias.P=1 printprevset
   P=[1]
   $ P=3 hg --config revsetalias.P=2 printprevset
--- a/tests/test-rollback.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-rollback.t	Wed Apr 17 13:41:18 2019 -0400
@@ -113,9 +113,9 @@
   > echo "another precious commit message" > "$1"
   > __EOF__
   $ HGEDITOR="\"sh\" \"`pwd`/editor.sh\"" hg --config hooks.pretxncommit=false commit 2>&1
-  note: commit message saved in .hg/last-message.txt
   transaction abort!
   rollback completed
+  note: commit message saved in .hg/last-message.txt
   abort: pretxncommit hook exited with status * (glob)
   [255]
   $ cat .hg/last-message.txt
--- a/tests/test-run-tests.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-run-tests.py	Wed Apr 17 13:41:18 2019 -0400
@@ -37,8 +37,8 @@
     """
     assert (expected.endswith(b'\n')
             and output.endswith(b'\n')), 'missing newline'
-    assert not re.search(br'[^ \w\\/\r\n()*?]', expected + output), \
-           b'single backslash or unknown char'
+    assert not re.search(br'[^ \w\\/\r\n()*?]', expected + output), (
+           b'single backslash or unknown char')
     test = run_tests.TTest(b'test-run-test.t', b'.', b'.')
     match, exact = test.linematch(expected, output)
     if isinstance(match, str):
--- a/tests/test-run-tests.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-run-tests.t	Wed Apr 17 13:41:18 2019 -0400
@@ -324,8 +324,8 @@
   
   ERROR: test-failure-unicode.t output changed
   !
+  Failed test-failure-unicode.t: output changed
   Failed test-failure.t: output changed
-  Failed test-failure-unicode.t: output changed
   # Ran 3 tests, 0 skipped, 2 failed.
   python hash seed: * (glob)
   [1]
@@ -356,8 +356,8 @@
   
   ERROR: test-failure-unicode.t output changed
   !
+  Failed test-failure-unicode.t: output changed
   Failed test-failure.t: output changed
-  Failed test-failure-unicode.t: output changed
   # Ran 3 tests, 0 skipped, 2 failed.
   python hash seed: * (glob)
   [1]
@@ -393,8 +393,8 @@
   
   ERROR: test-failure-unicode.t output changed
   !
+  Failed test-failure-unicode.t: output changed
   Failed test-failure.t: output changed
-  Failed test-failure-unicode.t: output changed
   # Ran 3 tests, 0 skipped, 2 failed.
   python hash seed: * (glob)
   [1]
@@ -1174,31 +1174,31 @@
   $ cat report.json
   testreport ={
       "test-failure.t": [\{] (re)
-          "csys": "\s*[\d\.]{4,5}", ? (re)
-          "cuser": "\s*[\d\.]{4,5}", ? (re)
+          "csys": "\s*\d+\.\d{3,4}", ? (re)
+          "cuser": "\s*\d+\.\d{3,4}", ? (re)
           "diff": "---.+\+\+\+.+", ? (re)
-          "end": "\s*[\d\.]{4,5}", ? (re)
+          "end": "\s*\d+\.\d{3,4}", ? (re)
           "result": "failure", ? (re)
-          "start": "\s*[\d\.]{4,5}", ? (re)
-          "time": "\s*[\d\.]{4,5}" (re)
+          "start": "\s*\d+\.\d{3,4}", ? (re)
+          "time": "\s*\d+\.\d{3,4}" (re)
       }, ? (re)
       "test-skip.t": {
-          "csys": "\s*[\d\.]{4,5}", ? (re)
-          "cuser": "\s*[\d\.]{4,5}", ? (re)
+          "csys": "\s*\d+\.\d{3,4}", ? (re)
+          "cuser": "\s*\d+\.\d{3,4}", ? (re)
           "diff": "", ? (re)
-          "end": "\s*[\d\.]{4,5}", ? (re)
+          "end": "\s*\d+\.\d{3,4}", ? (re)
           "result": "skip", ? (re)
-          "start": "\s*[\d\.]{4,5}", ? (re)
-          "time": "\s*[\d\.]{4,5}" (re)
+          "start": "\s*\d+\.\d{3,4}", ? (re)
+          "time": "\s*\d+\.\d{3,4}" (re)
       }, ? (re)
       "test-success.t": [\{] (re)
-          "csys": "\s*[\d\.]{4,5}", ? (re)
-          "cuser": "\s*[\d\.]{4,5}", ? (re)
+          "csys": "\s*\d+\.\d{3,4}", ? (re)
+          "cuser": "\s*\d+\.\d{3,4}", ? (re)
           "diff": "", ? (re)
-          "end": "\s*[\d\.]{4,5}", ? (re)
+          "end": "\s*\d+\.\d{3,4}", ? (re)
           "result": "success", ? (re)
-          "start": "\s*[\d\.]{4,5}", ? (re)
-          "time": "\s*[\d\.]{4,5}" (re)
+          "start": "\s*\d+\.\d{3,4}", ? (re)
+          "time": "\s*\d+\.\d{3,4}" (re)
       }
   } (no-eol)
 --json with --outputdir
@@ -1231,31 +1231,31 @@
   $ cat output/report.json
   testreport ={
       "test-failure.t": [\{] (re)
-          "csys": "\s*[\d\.]{4,5}", ? (re)
-          "cuser": "\s*[\d\.]{4,5}", ? (re)
+          "csys": "\s*\d+\.\d{3,4}", ? (re)
+          "cuser": "\s*\d+\.\d{3,4}", ? (re)
           "diff": "---.+\+\+\+.+", ? (re)
-          "end": "\s*[\d\.]{4,5}", ? (re)
+          "end": "\s*\d+\.\d{3,4}", ? (re)
           "result": "failure", ? (re)
-          "start": "\s*[\d\.]{4,5}", ? (re)
-          "time": "\s*[\d\.]{4,5}" (re)
+          "start": "\s*\d+\.\d{3,4}", ? (re)
+          "time": "\s*\d+\.\d{3,4}" (re)
       }, ? (re)
       "test-skip.t": {
-          "csys": "\s*[\d\.]{4,5}", ? (re)
-          "cuser": "\s*[\d\.]{4,5}", ? (re)
+          "csys": "\s*\d+\.\d{3,4}", ? (re)
+          "cuser": "\s*\d+\.\d{3,4}", ? (re)
           "diff": "", ? (re)
-          "end": "\s*[\d\.]{4,5}", ? (re)
+          "end": "\s*\d+\.\d{3,4}", ? (re)
           "result": "skip", ? (re)
-          "start": "\s*[\d\.]{4,5}", ? (re)
-          "time": "\s*[\d\.]{4,5}" (re)
+          "start": "\s*\d+\.\d{3,4}", ? (re)
+          "time": "\s*\d+\.\d{3,4}" (re)
       }, ? (re)
       "test-success.t": [\{] (re)
-          "csys": "\s*[\d\.]{4,5}", ? (re)
-          "cuser": "\s*[\d\.]{4,5}", ? (re)
+          "csys": "\s*\d+\.\d{3,4}", ? (re)
+          "cuser": "\s*\d+\.\d{3,4}", ? (re)
           "diff": "", ? (re)
-          "end": "\s*[\d\.]{4,5}", ? (re)
+          "end": "\s*\d+\.\d{3,4}", ? (re)
           "result": "success", ? (re)
-          "start": "\s*[\d\.]{4,5}", ? (re)
-          "time": "\s*[\d\.]{4,5}" (re)
+          "start": "\s*\d+\.\d{3,4}", ? (re)
+          "time": "\s*\d+\.\d{3,4}" (re)
       }
   } (no-eol)
   $ ls -a output
@@ -1287,31 +1287,31 @@
   $ cat report.json
   testreport ={
       "test-failure.t": [\{] (re)
-          "csys": "\s*[\d\.]{4,5}", ? (re)
-          "cuser": "\s*[\d\.]{4,5}", ? (re)
+          "csys": "\s*\d+\.\d{3,4}", ? (re)
+          "cuser": "\s*\d+\.\d{3,4}", ? (re)
           "diff": "", ? (re)
-          "end": "\s*[\d\.]{4,5}", ? (re)
+          "end": "\s*\d+\.\d{3,4}", ? (re)
           "result": "success", ? (re)
-          "start": "\s*[\d\.]{4,5}", ? (re)
-          "time": "\s*[\d\.]{4,5}" (re)
+          "start": "\s*\d+\.\d{3,4}", ? (re)
+          "time": "\s*\d+\.\d{3,4}" (re)
       }, ? (re)
       "test-skip.t": {
-          "csys": "\s*[\d\.]{4,5}", ? (re)
-          "cuser": "\s*[\d\.]{4,5}", ? (re)
+          "csys": "\s*\d+\.\d{3,4}", ? (re)
+          "cuser": "\s*\d+\.\d{3,4}", ? (re)
           "diff": "", ? (re)
-          "end": "\s*[\d\.]{4,5}", ? (re)
+          "end": "\s*\d+\.\d{3,4}", ? (re)
           "result": "skip", ? (re)
-          "start": "\s*[\d\.]{4,5}", ? (re)
-          "time": "\s*[\d\.]{4,5}" (re)
+          "start": "\s*\d+\.\d{3,4}", ? (re)
+          "time": "\s*\d+\.\d{3,4}" (re)
       }, ? (re)
       "test-success.t": [\{] (re)
-          "csys": "\s*[\d\.]{4,5}", ? (re)
-          "cuser": "\s*[\d\.]{4,5}", ? (re)
+          "csys": "\s*\d+\.\d{3,4}", ? (re)
+          "cuser": "\s*\d+\.\d{3,4}", ? (re)
           "diff": "", ? (re)
-          "end": "\s*[\d\.]{4,5}", ? (re)
+          "end": "\s*\d+\.\d{3,4}", ? (re)
           "result": "success", ? (re)
-          "start": "\s*[\d\.]{4,5}", ? (re)
-          "time": "\s*[\d\.]{4,5}" (re)
+          "start": "\s*\d+\.\d{3,4}", ? (re)
+          "time": "\s*\d+\.\d{3,4}" (re)
       }
   } (no-eol)
   $ mv backup test-failure.t
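
The tightened timing patterns matter because the old character class
``[\d\.]{4,5}`` also accepted degenerate strings such as ``....`` or bare
``1234``, whereas ``\d+\.\d{3,4}`` insists on a real decimal number. A quick
comparison, anchored for the demo::

   import re

   old = re.compile(r'[\d\.]{4,5}$')
   new = re.compile(r'\d+\.\d{3,4}$')
   for sample in ('0.123', '12.3456', '....', '1234'):
       print('%-8s old=%-5s new=%s' % (sample,
                                       bool(old.match(sample)),
                                       bool(new.match(sample))))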
--- a/tests/test-rust-ancestor.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-rust-ancestor.py	Wed Apr 17 13:41:18 2019 -0400
@@ -19,6 +19,7 @@
         LazyAncestors,
         MissingAncestors,
     )
+    from mercurial.rustext import dagop
 
 try:
     from mercurial.cext import parsers as cparsers
@@ -165,6 +166,10 @@
         with self.assertRaises(error.WdirUnsupported):
             list(AncestorsIterator(idx, [node.wdirrev], -1, False))
 
+    def testheadrevs(self):
+        idx = self.parseindex()
+        self.assertEqual(dagop.headrevs(idx, [1, 2, 3]), {3})
+
 if __name__ == '__main__':
     import silenttestrunner
     silenttestrunner.main(__name__)
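
For reference, the new ``dagop.headrevs`` binding returns, among the given
revisions, those that no other revision in the set names as a parent; for
the linear index parsed by the test that is just the tip. A pure-Python
sketch with an illustrative parents mapping::

   def headrevs(parents, revs):
       revs = set(revs)
       nonheads = set()
       for rev in revs:
           for parent in parents.get(rev, ()):
               if parent in revs:
                   nonheads.add(parent)
       return revs - nonheads

   # linear history 1 <- 2 <- 3: only 3 is a head
   assert headrevs({2: (1,), 3: (2,)}, [1, 2, 3]) == {3}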
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/tests/test-server-view.t	Wed Apr 17 13:41:18 2019 -0400
@@ -0,0 +1,38 @@
+  $ hg init test
+  $ cd test
+  $ hg debugbuilddag '+2'
+  $ hg phase --public 0
+
+  $ hg serve -p $HGPORT -d --pid-file=hg.pid -E errors.log
+  $ cat hg.pid >> $DAEMON_PIDS
+  $ cd ..
+  $ hg init test2
+  $ cd test2
+  $ hg incoming http://foo:xyzzy@localhost:$HGPORT/
+  comparing with http://foo:***@localhost:$HGPORT/
+  changeset:   0:1ea73414a91b
+  user:        debugbuilddag
+  date:        Thu Jan 01 00:00:00 1970 +0000
+  summary:     r0
+  
+  changeset:   1:66f7d451a68b
+  tag:         tip
+  user:        debugbuilddag
+  date:        Thu Jan 01 00:00:01 1970 +0000
+  summary:     r1
+  
+  $ killdaemons.py
+
+  $ cd ..
+  $ hg -R test --config server.view=immutable serve -p $HGPORT -d --pid-file=hg.pid -E errors.log
+  $ cat hg.pid >> $DAEMON_PIDS
+  $ hg -R test2 incoming http://foo:xyzzy@localhost:$HGPORT/
+  comparing with http://foo:***@localhost:$HGPORT/
+  changeset:   0:1ea73414a91b
+  tag:         tip
+  user:        debugbuilddag
+  date:        Thu Jan 01 00:00:00 1970 +0000
+  summary:     r0
+  
+  $ cat errors.log
+  $ killdaemons.py
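
The second serve run exercises the new server.view option: the server
answers from a filtered repoview, and the `immutable` view exposes only
public changesets, which is why the draft r1 vanishes from the second
incoming run. A minimal sketch of the same filtering done in-process,
assuming it runs next to the `test` repo created above:

    from mercurial import hg, ui as uimod

    repo = hg.repository(uimod.ui.load(), b'test')
    # 'immutable' is a registered repoview filter hiding every
    # changeset whose phase is not public
    immutable = repo.filtered(b'immutable')
    print(len(repo.revs(b'all()')))       # 2: r0 (public) and r1 (draft)
    print(len(immutable.revs(b'all()')))  # 1: only the public r0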
--- a/tests/test-setdiscovery.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-setdiscovery.t	Wed Apr 17 13:41:18 2019 -0400
@@ -43,46 +43,125 @@
   comparing with b
   searching for changes
   unpruned common: 01241442b3c2 66f7d451a68b b5714e113bc0
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          2
+      also local heads:          2
+      also remote heads:         1
+    local heads:                 2
+      common:                    2
+      missing:                   0
+    remote heads:                3
+      common:                    1
+      unknown:                   2
+  local changesets:              7
+    common:                      7
+    missing:                     0
   common heads: 01241442b3c2 b5714e113bc0
-  local is subset
   
   % -- a -> b set
   comparing with b
   query 1; heads
   searching for changes
   all local heads known remotely
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          2
+      also local heads:          2
+      also remote heads:         1
+    local heads:                 2
+      common:                    2
+      missing:                   0
+    remote heads:                3
+      common:                    1
+      unknown:                   2
+  local changesets:              7
+    common:                      7
+    missing:                     0
   common heads: 01241442b3c2 b5714e113bc0
-  local is subset
   
   % -- a -> b set (tip only)
   comparing with b
   query 1; heads
   searching for changes
   all local heads known remotely
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          1
+      also remote heads:         0
+    local heads:                 2
+      common:                    1
+      missing:                   1
+    remote heads:                3
+      common:                    0
+      unknown:                   3
+  local changesets:              7
+    common:                      6
+    missing:                     1
   common heads: b5714e113bc0
   
   % -- b -> a tree
   comparing with a
   searching for changes
   unpruned common: 01241442b3c2 b5714e113bc0
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          2
+      also local heads:          1
+      also remote heads:         2
+    local heads:                 3
+      common:                    1
+      missing:                   2
+    remote heads:                2
+      common:                    2
+      unknown:                   0
+  local changesets:             15
+    common:                      7
+    missing:                     8
   common heads: 01241442b3c2 b5714e113bc0
-  remote is subset
   
   % -- b -> a set
   comparing with a
   query 1; heads
   searching for changes
   all remote heads known locally
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          2
+      also local heads:          1
+      also remote heads:         2
+    local heads:                 3
+      common:                    1
+      missing:                   2
+    remote heads:                2
+      common:                    2
+      unknown:                   0
+  local changesets:             15
+    common:                      7
+    missing:                     8
   common heads: 01241442b3c2 b5714e113bc0
-  remote is subset
   
   % -- b -> a set (tip only)
   comparing with a
   query 1; heads
   searching for changes
   all remote heads known locally
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          2
+      also local heads:          1
+      also remote heads:         2
+    local heads:                 3
+      common:                    1
+      missing:                   2
+    remote heads:                2
+      common:                    2
+      unknown:                   0
+  local changesets:             15
+    common:                      7
+    missing:                     8
   common heads: 01241442b3c2 b5714e113bc0
-  remote is subset
 
 
 Many new:
@@ -95,6 +174,20 @@
   comparing with b
   searching for changes
   unpruned common: bebd167eb94d
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          1
+      also remote heads:         0
+    local heads:                 2
+      common:                    1
+      missing:                   1
+    remote heads:                1
+      common:                    0
+      unknown:                   1
+  local changesets:             35
+    common:                      5
+    missing:                    30
   common heads: bebd167eb94d
   
   % -- a -> b set
@@ -105,6 +198,20 @@
   searching: 2 queries
   query 2; still undecided: 29, sample size is: 29
   2 total queries in *.????s (glob)
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          1
+      also remote heads:         0
+    local heads:                 2
+      common:                    1
+      missing:                   1
+    remote heads:                1
+      common:                    0
+      unknown:                   1
+  local changesets:             35
+    common:                      5
+    missing:                    30
   common heads: bebd167eb94d
   
   % -- a -> b set (tip only)
@@ -115,12 +222,40 @@
   searching: 2 queries
   query 2; still undecided: 31, sample size is: 31
   2 total queries in *.????s (glob)
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          0
+      also remote heads:         0
+    local heads:                 2
+      common:                    0
+      missing:                   2
+    remote heads:                1
+      common:                    0
+      unknown:                   1
+  local changesets:             35
+    common:                      2
+    missing:                    33
   common heads: 66f7d451a68b
   
   % -- b -> a tree
   comparing with a
   searching for changes
   unpruned common: 66f7d451a68b bebd167eb94d
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          0
+      also remote heads:         1
+    local heads:                 1
+      common:                    0
+      missing:                   1
+    remote heads:                2
+      common:                    1
+      unknown:                   1
+  local changesets:              8
+    common:                      5
+    missing:                     3
   common heads: bebd167eb94d
   
   % -- b -> a set
@@ -131,6 +266,20 @@
   searching: 2 queries
   query 2; still undecided: 2, sample size is: 2
   2 total queries in *.????s (glob)
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          0
+      also remote heads:         1
+    local heads:                 1
+      common:                    0
+      missing:                   1
+    remote heads:                2
+      common:                    1
+      unknown:                   1
+  local changesets:              8
+    common:                      5
+    missing:                     3
   common heads: bebd167eb94d
   
   % -- b -> a set (tip only)
@@ -141,6 +290,20 @@
   searching: 2 queries
   query 2; still undecided: 2, sample size is: 2
   2 total queries in *.????s (glob)
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          0
+      also remote heads:         1
+    local heads:                 1
+      common:                    0
+      missing:                   1
+    remote heads:                2
+      common:                    1
+      unknown:                   1
+  local changesets:              8
+    common:                      5
+    missing:                     3
   common heads: bebd167eb94d
 
 Both sides many new with stub:
@@ -153,6 +316,20 @@
   comparing with b
   searching for changes
   unpruned common: 2dc09a01254d
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          1
+      also remote heads:         0
+    local heads:                 2
+      common:                    1
+      missing:                   1
+    remote heads:                1
+      common:                    0
+      unknown:                   1
+  local changesets:             34
+    common:                      4
+    missing:                    30
   common heads: 2dc09a01254d
   
   % -- a -> b set
@@ -163,6 +340,20 @@
   searching: 2 queries
   query 2; still undecided: 29, sample size is: 29
   2 total queries in *.????s (glob)
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          1
+      also remote heads:         0
+    local heads:                 2
+      common:                    1
+      missing:                   1
+    remote heads:                1
+      common:                    0
+      unknown:                   1
+  local changesets:             34
+    common:                      4
+    missing:                    30
   common heads: 2dc09a01254d
   
   % -- a -> b set (tip only)
@@ -173,12 +364,40 @@
   searching: 2 queries
   query 2; still undecided: 31, sample size is: 31
   2 total queries in *.????s (glob)
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          0
+      also remote heads:         0
+    local heads:                 2
+      common:                    0
+      missing:                   2
+    remote heads:                1
+      common:                    0
+      unknown:                   1
+  local changesets:             34
+    common:                      2
+    missing:                    32
   common heads: 66f7d451a68b
   
   % -- b -> a tree
   comparing with a
   searching for changes
   unpruned common: 2dc09a01254d 66f7d451a68b
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          0
+      also remote heads:         1
+    local heads:                 1
+      common:                    0
+      missing:                   1
+    remote heads:                2
+      common:                    1
+      unknown:                   1
+  local changesets:             34
+    common:                      4
+    missing:                    30
   common heads: 2dc09a01254d
   
   % -- b -> a set
@@ -189,6 +408,20 @@
   searching: 2 queries
   query 2; still undecided: 29, sample size is: 29
   2 total queries in *.????s (glob)
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          0
+      also remote heads:         1
+    local heads:                 1
+      common:                    0
+      missing:                   1
+    remote heads:                2
+      common:                    1
+      unknown:                   1
+  local changesets:             34
+    common:                      4
+    missing:                    30
   common heads: 2dc09a01254d
   
   % -- b -> a set (tip only)
@@ -199,6 +432,20 @@
   searching: 2 queries
   query 2; still undecided: 29, sample size is: 29
   2 total queries in *.????s (glob)
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          0
+      also remote heads:         1
+    local heads:                 1
+      common:                    0
+      missing:                   1
+    remote heads:                2
+      common:                    1
+      unknown:                   1
+  local changesets:             34
+    common:                      4
+    missing:                    30
   common heads: 2dc09a01254d
 
 
@@ -212,6 +459,20 @@
   comparing with b
   searching for changes
   unpruned common: 66f7d451a68b
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          0
+      also remote heads:         0
+    local heads:                 1
+      common:                    0
+      missing:                   1
+    remote heads:                1
+      common:                    0
+      unknown:                   1
+  local changesets:             32
+    common:                      2
+    missing:                    30
   common heads: 66f7d451a68b
   
   % -- a -> b set
@@ -222,6 +483,20 @@
   searching: 2 queries
   query 2; still undecided: 31, sample size is: 31
   2 total queries in *.????s (glob)
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          0
+      also remote heads:         0
+    local heads:                 1
+      common:                    0
+      missing:                   1
+    remote heads:                1
+      common:                    0
+      unknown:                   1
+  local changesets:             32
+    common:                      2
+    missing:                    30
   common heads: 66f7d451a68b
   
   % -- a -> b set (tip only)
@@ -232,12 +507,40 @@
   searching: 2 queries
   query 2; still undecided: 31, sample size is: 31
   2 total queries in *.????s (glob)
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          0
+      also remote heads:         0
+    local heads:                 1
+      common:                    0
+      missing:                   1
+    remote heads:                1
+      common:                    0
+      unknown:                   1
+  local changesets:             32
+    common:                      2
+    missing:                    30
   common heads: 66f7d451a68b
   
   % -- b -> a tree
   comparing with a
   searching for changes
   unpruned common: 66f7d451a68b
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          0
+      also remote heads:         0
+    local heads:                 1
+      common:                    0
+      missing:                   1
+    remote heads:                1
+      common:                    0
+      unknown:                   1
+  local changesets:             32
+    common:                      2
+    missing:                    30
   common heads: 66f7d451a68b
   
   % -- b -> a set
@@ -248,6 +551,20 @@
   searching: 2 queries
   query 2; still undecided: 31, sample size is: 31
   2 total queries in *.????s (glob)
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          0
+      also remote heads:         0
+    local heads:                 1
+      common:                    0
+      missing:                   1
+    remote heads:                1
+      common:                    0
+      unknown:                   1
+  local changesets:             32
+    common:                      2
+    missing:                    30
   common heads: 66f7d451a68b
   
   % -- b -> a set (tip only)
@@ -258,6 +575,20 @@
   searching: 2 queries
   query 2; still undecided: 31, sample size is: 31
   2 total queries in *.????s (glob)
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          0
+      also remote heads:         0
+    local heads:                 1
+      common:                    0
+      missing:                   1
+    remote heads:                1
+      common:                    0
+      unknown:                   1
+  local changesets:             32
+    common:                      2
+    missing:                    30
   common heads: 66f7d451a68b
 
 
@@ -271,6 +602,20 @@
   comparing with b
   searching for changes
   unpruned common: 66f7d451a68b
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          0
+      also remote heads:         0
+    local heads:                 1
+      common:                    0
+      missing:                   1
+    remote heads:                1
+      common:                    0
+      unknown:                   1
+  local changesets:             52
+    common:                      2
+    missing:                    50
   common heads: 66f7d451a68b
   
   % -- a -> b set
@@ -281,6 +626,20 @@
   searching: 2 queries
   query 2; still undecided: 51, sample size is: 51
   2 total queries in *.????s (glob)
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          0
+      also remote heads:         0
+    local heads:                 1
+      common:                    0
+      missing:                   1
+    remote heads:                1
+      common:                    0
+      unknown:                   1
+  local changesets:             52
+    common:                      2
+    missing:                    50
   common heads: 66f7d451a68b
   
   % -- a -> b set (tip only)
@@ -291,12 +650,40 @@
   searching: 2 queries
   query 2; still undecided: 51, sample size is: 51
   2 total queries in *.????s (glob)
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          0
+      also remote heads:         0
+    local heads:                 1
+      common:                    0
+      missing:                   1
+    remote heads:                1
+      common:                    0
+      unknown:                   1
+  local changesets:             52
+    common:                      2
+    missing:                    50
   common heads: 66f7d451a68b
   
   % -- b -> a tree
   comparing with a
   searching for changes
   unpruned common: 66f7d451a68b
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          0
+      also remote heads:         0
+    local heads:                 1
+      common:                    0
+      missing:                   1
+    remote heads:                1
+      common:                    0
+      unknown:                   1
+  local changesets:             32
+    common:                      2
+    missing:                    30
   common heads: 66f7d451a68b
   
   % -- b -> a set
@@ -307,6 +694,20 @@
   searching: 2 queries
   query 2; still undecided: 31, sample size is: 31
   2 total queries in *.????s (glob)
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          0
+      also remote heads:         0
+    local heads:                 1
+      common:                    0
+      missing:                   1
+    remote heads:                1
+      common:                    0
+      unknown:                   1
+  local changesets:             32
+    common:                      2
+    missing:                    30
   common heads: 66f7d451a68b
   
   % -- b -> a set (tip only)
@@ -317,6 +718,20 @@
   searching: 2 queries
   query 2; still undecided: 31, sample size is: 31
   2 total queries in *.????s (glob)
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          0
+      also remote heads:         0
+    local heads:                 1
+      common:                    0
+      missing:                   1
+    remote heads:                1
+      common:                    0
+      unknown:                   1
+  local changesets:             32
+    common:                      2
+    missing:                    30
   common heads: 66f7d451a68b
 
 
@@ -330,6 +745,20 @@
   comparing with b
   searching for changes
   unpruned common: 7ead0cba2838
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          0
+      also remote heads:         0
+    local heads:                 1
+      common:                    0
+      missing:                   1
+    remote heads:                1
+      common:                    0
+      unknown:                   1
+  local changesets:           1050
+    common:                   1000
+    missing:                    50
   common heads: 7ead0cba2838
   
   % -- a -> b set
@@ -343,6 +772,20 @@
   searching: 3 queries
   query 3; still undecided: 31, sample size is: 31
   3 total queries in *.????s (glob)
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          0
+      also remote heads:         0
+    local heads:                 1
+      common:                    0
+      missing:                   1
+    remote heads:                1
+      common:                    0
+      unknown:                   1
+  local changesets:           1050
+    common:                   1000
+    missing:                    50
   common heads: 7ead0cba2838
   
   % -- a -> b set (tip only)
@@ -356,12 +799,40 @@
   searching: 3 queries
   query 3; still undecided: 31, sample size is: 31
   3 total queries in *.????s (glob)
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          0
+      also remote heads:         0
+    local heads:                 1
+      common:                    0
+      missing:                   1
+    remote heads:                1
+      common:                    0
+      unknown:                   1
+  local changesets:           1050
+    common:                   1000
+    missing:                    50
   common heads: 7ead0cba2838
   
   % -- b -> a tree
   comparing with a
   searching for changes
   unpruned common: 7ead0cba2838
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          0
+      also remote heads:         0
+    local heads:                 1
+      common:                    0
+      missing:                   1
+    remote heads:                1
+      common:                    0
+      unknown:                   1
+  local changesets:           1030
+    common:                   1000
+    missing:                    30
   common heads: 7ead0cba2838
   
   % -- b -> a set
@@ -375,6 +846,20 @@
   searching: 3 queries
   query 3; still undecided: 15, sample size is: 15
   3 total queries in *.????s (glob)
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          0
+      also remote heads:         0
+    local heads:                 1
+      common:                    0
+      missing:                   1
+    remote heads:                1
+      common:                    0
+      unknown:                   1
+  local changesets:           1030
+    common:                   1000
+    missing:                    30
   common heads: 7ead0cba2838
   
   % -- b -> a set (tip only)
@@ -388,6 +873,20 @@
   searching: 3 queries
   query 3; still undecided: 15, sample size is: 15
   3 total queries in *.????s (glob)
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          0
+      also remote heads:         0
+    local heads:                 1
+      common:                    0
+      missing:                   1
+    remote heads:                1
+      common:                    0
+      unknown:                   1
+  local changesets:           1030
+    common:                   1000
+    missing:                    30
   common heads: 7ead0cba2838
 
 
@@ -453,6 +952,20 @@
   searching: 6 queries
   query 6; still undecided: \d+, sample size is: \d+ (re)
   6 total queries in *.????s (glob)
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          0
+      also remote heads:         0
+    local heads:               260
+      common:                    0
+      missing:                 260
+    remote heads:                1
+      common:                    0
+      unknown:                   1
+  local changesets:           1340
+    common:                    300
+    missing:                  1040
   common heads: 3ee37d65064a
   $ hg -R a debugdiscovery b --debug --verbose --config progress.debug=true --rev tip
   comparing with b
@@ -465,6 +978,20 @@
   searching: 3 queries
   query 3; still undecided: 3, sample size is: 3
   3 total queries in *.????s (glob)
+  elapsed time:  * seconds (glob)
+  heads summary:
+    total common heads:          1
+      also local heads:          0
+      also remote heads:         0
+    local heads:               260
+      common:                    0
+      missing:                 260
+    remote heads:                1
+      common:                    0
+      unknown:                   1
+  local changesets:           1340
+    common:                    300
+    missing:                  1040
   common heads: 3ee37d65064a
 
 Test actual protocol when pulling one new head in addition to common heads
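
The "heads summary" blocks that debugdiscovery now prints are plain set
arithmetic over the discovery result. Roughly, and only as an illustration
(the real bookkeeping lives in the discovery code, not in this sketch):

    def heads_summary(localrevs, localheads, remoteheads, commonheads,
                      ancestors):
        # ancestors(heads) -> every local revision reachable from the heads
        commonset = ancestors(commonheads)
        return {
            'total common heads': len(commonheads),
            'also local heads': len(commonheads & localheads),
            'also remote heads': len(commonheads & remoteheads),
            'local heads, common': len(localheads & commonset),
            'local heads, missing': len(localheads - commonset),
            'remote heads, common': len(remoteheads & commonset),
            'remote heads, unknown': len(remoteheads - commonset),
            'local changesets, common': len(commonset),
            'local changesets, missing': len(localrevs - commonset),
        }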
--- a/tests/test-share.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-share.t	Wed Apr 17 13:41:18 2019 -0400
@@ -34,9 +34,9 @@
   checkisexec (execbit !)
   checklink (symlink !)
   checklink-target (symlink !)
+  manifestfulltextcache (reporevlogstore !)
   $ ls -1 ../repo1/.hg/cache
   branch2-served
-  manifestfulltextcache (reporevlogstore !)
   rbc-names-v1
   rbc-revs-v1
   tags2-visible
@@ -124,6 +124,15 @@
   -rw-r--r-- 2 b
   
   
+Cloning a shared repo via bundle2 results in a non-shared clone
+
+  $ cd ..
+  $ hg clone -q --stream --config ui.ssh="\"$PYTHON\" \"$TESTDIR/dummyssh\"" ssh://user@dummy/`pwd`/repo2 cloned-via-bundle2
+  $ cat ./cloned-via-bundle2/.hg/requires | grep "shared"
+  [1]
+  $ hg id --cwd cloned-via-bundle2 -r tip
+  c2e0ac586386 tip
+  $ cd repo2
 
 test unshare command
 
--- a/tests/test-shelve.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-shelve.t	Wed Apr 17 13:41:18 2019 -0400
@@ -76,6 +76,7 @@
       --date DATE           shelve with the specified commit date
    -d --delete              delete the named shelved change(s)
    -e --edit                invoke editor on commit messages
+   -k --keep                shelve, but keep changes in the working directory
    -l --list                list current shelves
    -m --message TEXT        use text as shelve message
    -n --name NAME           use the given name for the shelved commit
@@ -927,6 +928,29 @@
   Stream params: {Compression: BZ}
   changegroup -- {nbchanges: 1, version: 02} (mandatory: True)
       330882a04d2ce8487636b1fb292e5beea77fa1e3
+
+Test shelve --keep
+
+  $ hg unshelve
+  unshelving change 'default'
+  $ hg shelve --keep --list
+  abort: options '--list' and '--keep' may not be used together
+  [255]
+  $ hg shelve --keep --patch
+  abort: options '--patch' and '--keep' may not be used together
+  [255]
+  $ hg shelve --keep --delete
+  abort: options '--delete' and '--keep' may not be used together
+  [255]
+  $ hg shelve --keep
+  shelved as default
+  $ hg diff
+  diff --git a/jungle b/jungle
+  new file mode 100644
+  --- /dev/null
+  +++ b/jungle
+  @@ -0,0 +1,1 @@
+  +babar
   $ cd ..
 
 Test visibility of in-memory changes inside transaction to external hook
@@ -1087,3 +1111,49 @@
      test                      (4|13):33f7f61e6c5e (re)
 
   $ cd ..
+
+Abort unshelve while merging (issue5123)
+----------------------------------------
+
+  $ hg init issue5123
+  $ cd issue5123
+  $ echo > a
+  $ hg ci -Am a
+  adding a
+  $ hg co null
+  0 files updated, 0 files merged, 1 files removed, 0 files unresolved
+  $ echo > b
+  $ hg ci -Am b
+  adding b
+  created new head
+  $ echo > c
+  $ hg add c
+  $ hg shelve
+  shelved as default
+  0 files updated, 0 files merged, 1 files removed, 0 files unresolved
+  $ hg co 1
+  0 files updated, 0 files merged, 0 files removed, 0 files unresolved
+  $ hg merge 0
+  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
+  (branch merge, don't forget to commit)
+-- successful merge with two parents
+  $ hg log -G
+  @  changeset:   1:406bf70c274f
+     tag:         tip
+     parent:      -1:000000000000
+     user:        test
+     date:        Thu Jan 01 00:00:00 1970 +0000
+     summary:     b
+  
+  @  changeset:   0:ada8c9eb8252
+     user:        test
+     date:        Thu Jan 01 00:00:00 1970 +0000
+     summary:     a
+  
+-- trying to pull in the shelve bits
+-- unshelve should abort; otherwise, it'll eat my second parent.
+  $ hg unshelve
+  abort: cannot unshelve while merging
+  [255]
+
+  $ cd ..
--- a/tests/test-shelve2.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-shelve2.t	Wed Apr 17 13:41:18 2019 -0400
@@ -130,13 +130,28 @@
   e
   $ cat e.orig
   z
+  $ rm e.orig
 
+restores backup of unknown file to the right directory
+
+  $ hg shelve
+  shelved as default
+  0 files updated, 0 files merged, 2 files removed, 0 files unresolved
+  $ echo z > e
+  $ mkdir dir
+  $ hg unshelve --cwd dir
+  unshelving change 'default'
+  $ rmdir dir
+  $ cat e
+  e
+  $ cat e.orig
+  z
 
 unshelve and conflicts with tracked and untracked files
 
  preparing:
 
-  $ rm *.orig
+  $ rm -f *.orig
   $ hg ci -qm 'commit stuff'
   $ hg phase -p null:
 
--- a/tests/test-simplekeyvaluefile.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-simplekeyvaluefile.py	Wed Apr 17 13:41:18 2019 -0400
@@ -82,8 +82,8 @@
         dw = {b'key1': b'value1'}
         scmutil.simplekeyvaluefile(self.vfs, b'fl').write(dw, firstline=b'1.0')
         self.assertEqual(self.vfs.read(b'fl'), b'1.0\nkey1=value1\n')
-        dr = scmutil.simplekeyvaluefile(self.vfs, b'fl')\
-                    .read(firstlinenonkeyval=True)
+        dr = scmutil.simplekeyvaluefile(
+            self.vfs, b'fl').read(firstlinenonkeyval=True)
         self.assertEqual(dr, {b'__firstline': b'1.0', b'key1': b'value1'})
 
 if __name__ == "__main__":
--- a/tests/test-sparse-revlog.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-sparse-revlog.t	Wed Apr 17 13:41:18 2019 -0400
@@ -12,10 +12,22 @@
   $ bundlepath="$TESTDIR/artifacts/cache/big-file-churn.hg"
 
   $ expectedhash=`cat "$bundlepath".md5`
+
+#if slow
+
+  $ if [ ! -f "$bundlepath" ]; then
+  >     "$TESTDIR"/artifacts/scripts/generate-churning-bundle.py > /dev/null
+  > fi
+
+#else
+
   $ if [ ! -f "$bundlepath" ]; then
   >     echo 'skipped: missing artifact, run "'"$TESTDIR"'/artifacts/scripts/generate-churning-bundle.py"'
   >     exit 80
   > fi
+
+#endif
+
   $ currenthash=`f -M "$bundlepath" | cut -d = -f 2`
   $ if [ "$currenthash" != "$expectedhash" ]; then
   >     echo 'skipped: outdated artifact, md5 "'"$currenthash"'" expected "'"$expectedhash"'" run "'"$TESTDIR"'/artifacts/scripts/generate-churning-bundle.py"'
@@ -28,8 +40,7 @@
   > maxchainlen = 15
   > [storage]
   > revlog.optimize-delta-parent-choice = yes
-  > [format]
-  > generaldelta = yes
+  > revlog.reuse-external-delta = no
   > EOF
   $ hg init sparse-repo
   $ cd sparse-repo
--- a/tests/test-split.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-split.t	Wed Apr 17 13:41:18 2019 -0400
@@ -26,6 +26,8 @@
   > [diff]
   > git=1
   > unified=0
+  > [commands]
+  > commit.interactive.unified=0
   > [alias]
   > glog=log -G -T '{rev}:{node|short} {desc} {bookmarks}\n'
   > EOF
@@ -103,6 +105,12 @@
   abort: cannot split multiple revisions
   [255]
 
+This function splits a bit strangely, primarily to avoid changing the
+behavior of the test after a bug was fixed in how split/commit --interactive
+handled `commands.commit.interactive.unified=0`: when there were no context
+lines, it kept only the last diff hunk. When running split, this meant that
+runsplit always recorded three commits, one for each diff hunk, in reverse
+order (the base commit was the last diff hunk in the file).
   $ runsplit() {
   > cat > $TESTTMP/messages <<EOF
   > split 1
@@ -113,8 +121,11 @@
   > EOF
   > cat <<EOF | hg split "$@"
   > y
+  > n
+  > n
   > y
   > y
+  > n
   > y
   > y
   > y
@@ -123,13 +134,23 @@
 
   $ HGEDITOR=false runsplit
   diff --git a/a b/a
-  1 hunks, 1 lines changed
+  3 hunks, 3 lines changed
   examine changes to 'a'? [Ynesfdaq?] y
   
+  @@ -1,1 +1,1 @@
+  -1
+  +11
+  record change 1/3 to 'a'? [Ynesfdaq?] n
+  
+  @@ -3,1 +3,1 @@ 2
+  -3
+  +33
+  record change 2/3 to 'a'? [Ynesfdaq?] n
+  
   @@ -5,1 +5,1 @@ 4
   -5
   +55
-  record this change to 'a'? [Ynesfdaq?] y
+  record change 3/3 to 'a'? [Ynesfdaq?] y
   
   transaction abort!
   rollback completed
@@ -140,13 +161,23 @@
   $ HGEDITOR="\"$PYTHON\" $TESTTMP/editor.py"
   $ runsplit
   diff --git a/a b/a
-  1 hunks, 1 lines changed
+  3 hunks, 3 lines changed
   examine changes to 'a'? [Ynesfdaq?] y
   
+  @@ -1,1 +1,1 @@
+  -1
+  +11
+  record change 1/3 to 'a'? [Ynesfdaq?] n
+  
+  @@ -3,1 +3,1 @@ 2
+  -3
+  +33
+  record change 2/3 to 'a'? [Ynesfdaq?] n
+  
   @@ -5,1 +5,1 @@ 4
   -5
   +55
-  record this change to 'a'? [Ynesfdaq?] y
+  record change 3/3 to 'a'? [Ynesfdaq?] y
   
   EDITOR: HG: Splitting 1df0d5c5a3ab. Write commit message for the first split changeset.
   EDITOR: a2
@@ -160,13 +191,18 @@
   EDITOR: HG: changed a
   created new head
   diff --git a/a b/a
-  1 hunks, 1 lines changed
+  2 hunks, 2 lines changed
   examine changes to 'a'? [Ynesfdaq?] y
   
+  @@ -1,1 +1,1 @@
+  -1
+  +11
+  record change 1/2 to 'a'? [Ynesfdaq?] n
+  
   @@ -3,1 +3,1 @@ 2
   -3
   +33
-  record this change to 'a'? [Ynesfdaq?] y
+  record change 2/2 to 'a'? [Ynesfdaq?] y
   
   EDITOR: HG: Splitting 1df0d5c5a3ab. So far it has been split into:
   EDITOR: HG: - e704349bd21b: split 1
@@ -565,3 +601,169 @@
   a09ad58faae3 draft
   e704349bd21b draft
   a61bcde8c529 draft
+
+`hg split` with ignoreblanklines=1 does not infinite loop
+
+  $ mkdir $TESTTMP/f
+  $ hg init $TESTTMP/f/a
+  $ cd $TESTTMP/f/a
+  $ printf '1\n2\n3\n4\n5\n' > foo
+  $ cp foo bar
+  $ hg ci -qAm initial
+  $ printf '1\n\n2\n3\ntest\n4\n5\n' > bar
+  $ printf '1\n2\n3\ntest\n4\n5\n' > foo
+  $ hg ci -qm splitme
+  $ cat > $TESTTMP/messages <<EOF
+  > split 1
+  > --
+  > split 2
+  > EOF
+  $ printf 'f\nn\nf\n' | hg --config extensions.split= --config diff.ignoreblanklines=1 split
+  diff --git a/bar b/bar
+  2 hunks, 2 lines changed
+  examine changes to 'bar'? [Ynesfdaq?] f
+  
+  diff --git a/foo b/foo
+  1 hunks, 1 lines changed
+  examine changes to 'foo'? [Ynesfdaq?] n
+  
+  EDITOR: HG: Splitting dd3c45017cbf. Write commit message for the first split changeset.
+  EDITOR: splitme
+  EDITOR: 
+  EDITOR: 
+  EDITOR: HG: Enter commit message.  Lines beginning with 'HG:' are removed.
+  EDITOR: HG: Leave message empty to abort commit.
+  EDITOR: HG: --
+  EDITOR: HG: user: test
+  EDITOR: HG: branch 'default'
+  EDITOR: HG: changed bar
+  created new head
+  diff --git a/foo b/foo
+  1 hunks, 1 lines changed
+  examine changes to 'foo'? [Ynesfdaq?] f
+  
+  EDITOR: HG: Splitting dd3c45017cbf. So far it has been split into:
+  EDITOR: HG: - f205aea1c624: split 1
+  EDITOR: HG: Write commit message for the next split changeset.
+  EDITOR: splitme
+  EDITOR: 
+  EDITOR: 
+  EDITOR: HG: Enter commit message.  Lines beginning with 'HG:' are removed.
+  EDITOR: HG: Leave message empty to abort commit.
+  EDITOR: HG: --
+  EDITOR: HG: user: test
+  EDITOR: HG: branch 'default'
+  EDITOR: HG: changed foo
+  saved backup bundle to $TESTTMP/f/a/.hg/strip-backup/dd3c45017cbf-463441b5-split.hg (obsstore-off !)
+
+Let's try that again, with a slightly different set of patches, to ensure that
+the ignoreblanklines thing isn't somehow position dependent.
+
+  $ hg init $TESTTMP/f/b
+  $ cd $TESTTMP/f/b
+  $ printf '1\n2\n3\n4\n5\n' > foo
+  $ cp foo bar
+  $ hg ci -qAm initial
+  $ printf '1\n2\n3\ntest\n4\n5\n' > bar
+  $ printf '1\n2\n3\ntest\n4\n\n5\n' > foo
+  $ hg ci -qm splitme
+  $ cat > $TESTTMP/messages <<EOF
+  > split 1
+  > --
+  > split 2
+  > EOF
+  $ printf 'f\nn\nf\n' | hg --config extensions.split= --config diff.ignoreblanklines=1 split
+  diff --git a/bar b/bar
+  1 hunks, 1 lines changed
+  examine changes to 'bar'? [Ynesfdaq?] f
+  
+  diff --git a/foo b/foo
+  2 hunks, 2 lines changed
+  examine changes to 'foo'? [Ynesfdaq?] n
+  
+  EDITOR: HG: Splitting 904c80b40a4a. Write commit message for the first split changeset.
+  EDITOR: splitme
+  EDITOR: 
+  EDITOR: 
+  EDITOR: HG: Enter commit message.  Lines beginning with 'HG:' are removed.
+  EDITOR: HG: Leave message empty to abort commit.
+  EDITOR: HG: --
+  EDITOR: HG: user: test
+  EDITOR: HG: branch 'default'
+  EDITOR: HG: changed bar
+  created new head
+  diff --git a/foo b/foo
+  2 hunks, 2 lines changed
+  examine changes to 'foo'? [Ynesfdaq?] f
+  
+  EDITOR: HG: Splitting 904c80b40a4a. So far it has been split into:
+  EDITOR: HG: - ffecf40fa954: split 1
+  EDITOR: HG: Write commit message for the next split changeset.
+  EDITOR: splitme
+  EDITOR: 
+  EDITOR: 
+  EDITOR: HG: Enter commit message.  Lines beginning with 'HG:' are removed.
+  EDITOR: HG: Leave message empty to abort commit.
+  EDITOR: HG: --
+  EDITOR: HG: user: test
+  EDITOR: HG: branch 'default'
+  EDITOR: HG: changed foo
+  saved backup bundle to $TESTTMP/f/b/.hg/strip-backup/904c80b40a4a-47fb907f-split.hg (obsstore-off !)
+
+
+Testing the case in split when committing flag-only file changes (issue5864)
+---------------------------------------------------------------------------
+  $ hg init $TESTTMP/issue5864
+  $ cd $TESTTMP/issue5864
+  $ echo foo > foo
+  $ hg add foo
+  $ hg ci -m "initial"
+  $ hg import -q --bypass -m "make executable" - <<EOF
+  > diff --git a/foo b/foo
+  > old mode 100644
+  > new mode 100755
+  > EOF
+  $ hg up -q
+
+  $ hg glog
+  @  1:3a2125f0f4cb make executable
+  |
+  o  0:51f273a58d82 initial
+  
+
+#if no-windows
+  $ cat > $TESTTMP/messages <<EOF
+  > split 1
+  > EOF
+  $ printf 'y\n' | hg split
+  diff --git a/foo b/foo
+  old mode 100644
+  new mode 100755
+  examine changes to 'foo'? [Ynesfdaq?] y
+  
+  EDITOR: HG: Splitting 3a2125f0f4cb. Write commit message for the first split changeset.
+  EDITOR: make executable
+  EDITOR: 
+  EDITOR: 
+  EDITOR: HG: Enter commit message.  Lines beginning with 'HG:' are removed.
+  EDITOR: HG: Leave message empty to abort commit.
+  EDITOR: HG: --
+  EDITOR: HG: user: test
+  EDITOR: HG: branch 'default'
+  EDITOR: HG: changed foo
+  created new head
+  saved backup bundle to $TESTTMP/issue5864/.hg/strip-backup/3a2125f0f4cb-629e4432-split.hg (obsstore-off !)
+
+  $ hg log -G -T "{node|short} {desc}\n"
+  @  b154670c87da split 1
+  |
+  o  51f273a58d82 initial
+  
+#else
+
+TODO: Fix this on Windows. See issues 2020 and 5883
+
+  $ printf 'y\ny\ny\n' | hg split
+  abort: cannot split an empty revision
+  [255]
+#endif
--- a/tests/test-sqlitestore.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-sqlitestore.t	Wed Apr 17 13:41:18 2019 -0400
@@ -71,17 +71,17 @@
 
 That results in a row being inserted into various tables
 
-  $ sqlite3 .hg/store/db.sqlite << EOF
+  $ sqlite3 .hg/store/db.sqlite -init /dev/null << EOF
   > SELECT * FROM filepath;
   > EOF
   1|foo
 
-  $ sqlite3 .hg/store/db.sqlite << EOF
+  $ sqlite3 .hg/store/db.sqlite -init /dev/null << EOF
   > SELECT * FROM fileindex;
   > EOF
   1|1|0|-1|-1|0|0|1||6/\xef(L\xe2\xca\x02\xae\xcc\x8d\xe6\xd5\xe8\xa1\xc3\xaf\x05V\xfe (esc)
 
-  $ sqlite3 .hg/store/db.sqlite << EOF
+  $ sqlite3 .hg/store/db.sqlite -init /dev/null << EOF
   > SELECT * FROM delta;
   > EOF
   1|1|	\xd2\xaf\x8d\xd2"\x01\xdd\x8dH\xe5\xdc\xfc\xae\xd2\x81\xff\x94"\xc7|0 (esc)
@@ -93,7 +93,7 @@
   $ hg commit -A -m 'add bar'
   adding bar
 
-  $ sqlite3 .hg/store/db.sqlite << EOF
+  $ sqlite3 .hg/store/db.sqlite -init /dev/null << EOF
   > SELECT * FROM filedata ORDER BY id ASC;
   > EOF
   1|1|foo|0|6/\xef(L\xe2\xca\x02\xae\xcc\x8d\xe6\xd5\xe8\xa1\xc3\xaf\x05V\xfe|-1|-1|0|0|1| (esc)
@@ -104,7 +104,7 @@
   $ echo a >> foo
   $ hg commit -m 'modify foo'
 
-  $ sqlite3 .hg/store/db.sqlite << EOF
+  $ sqlite3 .hg/store/db.sqlite -init /dev/null << EOF
   > SELECT * FROM filedata ORDER BY id ASC;
   > EOF
   1|1|foo|0|6/\xef(L\xe2\xca\x02\xae\xcc\x8d\xe6\xd5\xe8\xa1\xc3\xaf\x05V\xfe|-1|-1|0|0|1| (esc)
--- a/tests/test-ssh-bundle1.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-ssh-bundle1.t	Wed Apr 17 13:41:18 2019 -0400
@@ -46,7 +46,7 @@
   > uncompressed = True
   > 
   > [hooks]
-  > changegroup = sh -c "printenv.py changegroup-in-remote 0 ../dummylog"
+  > changegroup = sh -c "printenv.py --line changegroup-in-remote 0 ../dummylog"
   > EOF
   $ cd $TESTTMP
 
@@ -131,7 +131,7 @@
   checked 3 changesets with 2 changes to 2 files
   $ cat >> .hg/hgrc <<EOF
   > [hooks]
-  > changegroup = sh -c "printenv.py changegroup-in-local 0 ../dummylog"
+  > changegroup = sh -c "printenv.py --line changegroup-in-local 0 ../dummylog"
   > EOF
 
 empty default pull
@@ -514,7 +514,16 @@
   Got arguments 1:user@dummy 2:hg -R local serve --stdio
   Got arguments 1:user@dummy 2:hg -R $TESTTMP/local serve --stdio
   Got arguments 1:user@dummy 2:hg -R remote serve --stdio
-  changegroup-in-remote hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=a28a9d1a809cab7d4e2fde4bee738a9ede948b60 HG_NODE_LAST=a28a9d1a809cab7d4e2fde4bee738a9ede948b60 HG_SOURCE=serve HG_TXNID=TXN:$ID$ HG_URL=remote:ssh:$LOCALIP
+  changegroup-in-remote hook: HG_HOOKNAME=changegroup
+  HG_HOOKTYPE=changegroup
+  HG_NODE=a28a9d1a809cab7d4e2fde4bee738a9ede948b60
+  HG_NODE_LAST=a28a9d1a809cab7d4e2fde4bee738a9ede948b60
+  HG_SOURCE=serve
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=serve
+  remote:ssh:$LOCALIP
+  HG_URL=remote:ssh:$LOCALIP
+  
   Got arguments 1:user@dummy 2:hg -R remote serve --stdio
   Got arguments 1:user@dummy 2:hg -R remote serve --stdio
   Got arguments 1:user@dummy 2:hg -R remote serve --stdio
@@ -524,7 +533,16 @@
   Got arguments 1:user@dummy 2:hg -R remote serve --stdio
   Got arguments 1:user@dummy 2:hg -R remote serve --stdio
   Got arguments 1:user@dummy 2:hg -R remote serve --stdio
-  changegroup-in-remote hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=1383141674ec756a6056f6a9097618482fe0f4a6 HG_NODE_LAST=1383141674ec756a6056f6a9097618482fe0f4a6 HG_SOURCE=serve HG_TXNID=TXN:$ID$ HG_URL=remote:ssh:$LOCALIP
+  changegroup-in-remote hook: HG_HOOKNAME=changegroup
+  HG_HOOKTYPE=changegroup
+  HG_NODE=1383141674ec756a6056f6a9097618482fe0f4a6
+  HG_NODE_LAST=1383141674ec756a6056f6a9097618482fe0f4a6
+  HG_SOURCE=serve
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=serve
+  remote:ssh:$LOCALIP
+  HG_URL=remote:ssh:$LOCALIP
+  
   Got arguments 1:user@dummy 2:hg -R remote serve --stdio
   Got arguments 1:user@dummy 2:hg init 'a repo'
   Got arguments 1:user@dummy 2:hg -R 'a repo' serve --stdio
@@ -532,7 +550,16 @@
   Got arguments 1:user@dummy 2:hg -R 'a repo' serve --stdio
   Got arguments 1:user@dummy 2:hg -R 'a repo' serve --stdio
   Got arguments 1:user@dummy 2:hg -R remote serve --stdio
-  changegroup-in-remote hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=65c38f4125f9602c8db4af56530cc221d93b8ef8 HG_NODE_LAST=65c38f4125f9602c8db4af56530cc221d93b8ef8 HG_SOURCE=serve HG_TXNID=TXN:$ID$ HG_URL=remote:ssh:$LOCALIP
+  changegroup-in-remote hook: HG_HOOKNAME=changegroup
+  HG_HOOKTYPE=changegroup
+  HG_NODE=65c38f4125f9602c8db4af56530cc221d93b8ef8
+  HG_NODE_LAST=65c38f4125f9602c8db4af56530cc221d93b8ef8
+  HG_SOURCE=serve
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=serve
+  remote:ssh:$LOCALIP
+  HG_URL=remote:ssh:$LOCALIP
+  
   Got arguments 1:user@dummy 2:hg -R remote serve --stdio
 
 remote hook failure is attributed to remote
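
The printenv.py --line switch used in the hgrc snippets above is what turns
the old single-line `changegroup-in-remote hook: HG_FOO=... HG_BAR=...`
records into one variable per line, keeping multi-line values such as
HG_TXNNAME (which embeds the URL) legible. A toy sketch of the two output
modes, illustrative only and not the actual tests/printenv.py helper:

    import os

    def print_hook_env(hookname, line_mode=True):
        pairs = ['%s=%s' % (k, v)
                 for k, v in sorted(os.environ.items())
                 if k.startswith('HG_')]
        sep = '\n' if line_mode else ' '   # --line vs. legacy formatting
        print('%s hook: %s' % (hookname, sep.join(pairs)))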
--- a/tests/test-ssh-repoerror.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-ssh-repoerror.t	Wed Apr 17 13:41:18 2019 -0400
@@ -34,7 +34,7 @@
   > done
 
   $ hg id ssh://user@dummy/other
-  remote: abort: Permission denied: $TESTTMP/other/.hg/requires
+  remote: abort: Permission denied: '$TESTTMP/other/.hg/requires'
   abort: no suitable response from remote hg!
   [255]
 
--- a/tests/test-ssh.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-ssh.t	Wed Apr 17 13:41:18 2019 -0400
@@ -36,7 +36,7 @@
   > uncompressed = True
   > 
   > [hooks]
-  > changegroup = sh -c "printenv.py changegroup-in-remote 0 ../dummylog"
+  > changegroup = sh -c "printenv.py --line changegroup-in-remote 0 ../dummylog"
   > EOF
   $ cd $TESTTMP
 
@@ -563,7 +563,16 @@
   Got arguments 1:user@dummy 2:hg -R local serve --stdio
   Got arguments 1:user@dummy 2:hg -R $TESTTMP/local serve --stdio
   Got arguments 1:user@dummy 2:hg -R remote serve --stdio
-  changegroup-in-remote hook: HG_BUNDLE2=1 HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=a28a9d1a809cab7d4e2fde4bee738a9ede948b60 HG_NODE_LAST=a28a9d1a809cab7d4e2fde4bee738a9ede948b60 HG_SOURCE=serve HG_TXNID=TXN:$ID$ HG_URL=remote:ssh:$LOCALIP
+  changegroup-in-remote hook: HG_BUNDLE2=1
+  HG_HOOKNAME=changegroup
+  HG_HOOKTYPE=changegroup
+  HG_NODE=a28a9d1a809cab7d4e2fde4bee738a9ede948b60
+  HG_NODE_LAST=a28a9d1a809cab7d4e2fde4bee738a9ede948b60
+  HG_SOURCE=serve
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=serve
+  HG_URL=remote:ssh:$LOCALIP
+  
   Got arguments 1:user@dummy 2:hg -R remote serve --stdio
   Got arguments 1:user@dummy 2:hg -R remote serve --stdio
   Got arguments 1:user@dummy 2:hg -R remote serve --stdio
@@ -573,9 +582,27 @@
   Got arguments 1:user@dummy 2:hg -R remote serve --stdio
   Got arguments 1:user@dummy 2:hg -R remote serve --stdio
   Got arguments 1:user@dummy 2:hg -R remote serve --stdio
-  changegroup-in-remote hook: HG_BUNDLE2=1 HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=1383141674ec756a6056f6a9097618482fe0f4a6 HG_NODE_LAST=1383141674ec756a6056f6a9097618482fe0f4a6 HG_SOURCE=serve HG_TXNID=TXN:$ID$ HG_URL=remote:ssh:$LOCALIP
+  changegroup-in-remote hook: HG_BUNDLE2=1
+  HG_HOOKNAME=changegroup
+  HG_HOOKTYPE=changegroup
+  HG_NODE=1383141674ec756a6056f6a9097618482fe0f4a6
+  HG_NODE_LAST=1383141674ec756a6056f6a9097618482fe0f4a6
+  HG_SOURCE=serve
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=serve
+  HG_URL=remote:ssh:$LOCALIP
+  
   Got arguments 1:user@dummy 2:chg -R remote serve --stdio (chg !)
-  changegroup-in-remote hook: HG_BUNDLE2=1 HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=1383141674ec756a6056f6a9097618482fe0f4a6 HG_NODE_LAST=1383141674ec756a6056f6a9097618482fe0f4a6 HG_SOURCE=serve HG_TXNID=TXN:$ID$ HG_URL=remote:ssh:$LOCALIP (chg !)
+  changegroup-in-remote hook: HG_BUNDLE2=1 (chg !)
+  HG_HOOKNAME=changegroup (chg !)
+  HG_HOOKTYPE=changegroup (chg !)
+  HG_NODE=1383141674ec756a6056f6a9097618482fe0f4a6 (chg !)
+  HG_NODE_LAST=1383141674ec756a6056f6a9097618482fe0f4a6 (chg !)
+  HG_SOURCE=serve (chg !)
+  HG_TXNID=TXN:$ID$ (chg !)
+  HG_TXNNAME=serve (chg !)
+  HG_URL=remote:ssh:$LOCALIP (chg !)
+   (chg !)
   Got arguments 1:user@dummy 2:hg -R remote serve --stdio
   Got arguments 1:user@dummy 2:hg init 'a repo'
   Got arguments 1:user@dummy 2:hg -R 'a repo' serve --stdio
@@ -583,9 +610,19 @@
   Got arguments 1:user@dummy 2:hg -R 'a repo' serve --stdio
   Got arguments 1:user@dummy 2:hg -R 'a repo' serve --stdio
   Got arguments 1:user@dummy 2:hg -R remote serve --stdio
-  changegroup-in-remote hook: HG_BUNDLE2=1 HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=65c38f4125f9602c8db4af56530cc221d93b8ef8 HG_NODE_LAST=65c38f4125f9602c8db4af56530cc221d93b8ef8 HG_SOURCE=serve HG_TXNID=TXN:$ID$ HG_URL=remote:ssh:$LOCALIP
+  changegroup-in-remote hook: HG_BUNDLE2=1
+  HG_HOOKNAME=changegroup
+  HG_HOOKTYPE=changegroup
+  HG_NODE=65c38f4125f9602c8db4af56530cc221d93b8ef8
+  HG_NODE_LAST=65c38f4125f9602c8db4af56530cc221d93b8ef8
+  HG_SOURCE=serve
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=serve
+  HG_URL=remote:ssh:$LOCALIP
+  
   Got arguments 1:user@dummy 2:hg -R remote serve --stdio
 
+
 remote hook failure is attributed to remote
 
   $ cat > $TESTTMP/failhook << EOF
--- a/tests/test-static-http.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-static-http.t	Wed Apr 17 13:41:18 2019 -0400
@@ -57,7 +57,7 @@
   $ cd ../local
   $ cat >> .hg/hgrc <<EOF
   > [hooks]
-  > changegroup = sh -c "printenv.py changegroup"
+  > changegroup = sh -c "printenv.py --line changegroup"
   > EOF
   $ hg pull
   pulling from static-http://localhost:$HGPORT/remote
@@ -67,7 +67,16 @@
   adding file changes
   added 1 changesets with 1 changes to 1 files
   new changesets 4ac2e3648604
-  changegroup hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=4ac2e3648604439c580c69b09ec9d93a88d93432 HG_NODE_LAST=4ac2e3648604439c580c69b09ec9d93a88d93432 HG_SOURCE=pull HG_TXNID=TXN:$ID$ HG_URL=http://localhost:$HGPORT/remote
+  changegroup hook: HG_HOOKNAME=changegroup
+  HG_HOOKTYPE=changegroup
+  HG_NODE=4ac2e3648604439c580c69b09ec9d93a88d93432
+  HG_NODE_LAST=4ac2e3648604439c580c69b09ec9d93a88d93432
+  HG_SOURCE=pull
+  HG_TXNID=TXN:$ID$
+  HG_TXNNAME=pull
+  http://localhost:$HGPORT/remote
+  HG_URL=http://localhost:$HGPORT/remote
+  
   (run 'hg update' to get a working copy)
 
 trying to push
@@ -227,9 +236,11 @@
   /.hg/requires
   /.hg/store/00changelog.i
   /.hg/store/00manifest.i
-  /.hg/store/data/%7E2ehgsub.i
-  /.hg/store/data/%7E2ehgsubstate.i
+  /.hg/store/data/%7E2ehgsub.i (no-py37 !)
+  /.hg/store/data/%7E2ehgsubstate.i (no-py37 !)
   /.hg/store/data/a.i
+  /.hg/store/data/~2ehgsub.i (py37 !)
+  /.hg/store/data/~2ehgsubstate.i (py37 !)
   /notarepo/.hg/00changelog.i
   /notarepo/.hg/requires
   /remote-with-names/.hg/bookmarks
@@ -243,8 +254,9 @@
   /remote-with-names/.hg/requires
   /remote-with-names/.hg/store/00changelog.i
   /remote-with-names/.hg/store/00manifest.i
-  /remote-with-names/.hg/store/data/%7E2ehgtags.i
+  /remote-with-names/.hg/store/data/%7E2ehgtags.i (no-py37 !)
   /remote-with-names/.hg/store/data/foo.i
+  /remote-with-names/.hg/store/data/~2ehgtags.i (py37 !)
   /remote/.hg/bookmarks
   /remote/.hg/bookmarks.current
   /remote/.hg/cache/branch2-base
@@ -258,10 +270,12 @@
   /remote/.hg/requires
   /remote/.hg/store/00changelog.i
   /remote/.hg/store/00manifest.i
-  /remote/.hg/store/data/%7E2edotfile%20with%20spaces.i
-  /remote/.hg/store/data/%7E2ehgtags.i
+  /remote/.hg/store/data/%7E2edotfile%20with%20spaces.i (no-py37 !)
+  /remote/.hg/store/data/%7E2ehgtags.i (no-py37 !)
   /remote/.hg/store/data/bar.i
   /remote/.hg/store/data/quux.i
+  /remote/.hg/store/data/~2edotfile%20with%20spaces.i (py37 !)
+  /remote/.hg/store/data/~2ehgtags.i (py37 !)
   /remotempty/.hg/bookmarks
   /remotempty/.hg/bookmarks.current
   /remotempty/.hg/requires
@@ -275,5 +289,6 @@
   /sub/.hg/requires
   /sub/.hg/store/00changelog.i
   /sub/.hg/store/00manifest.i
-  /sub/.hg/store/data/%7E2ehgtags.i
+  /sub/.hg/store/data/%7E2ehgtags.i (no-py37 !)
   /sub/.hg/store/data/test.i
+  /sub/.hg/store/data/~2ehgtags.i (py37 !)
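
The (py37 !) / (no-py37 !) pairs above track a CPython change: since Python
3.7, urllib's quote() follows RFC 3986 and treats '~' as unreserved, so the
store-encoded '~2ehgtags' is no longer escaped to '%7E2ehgtags' in the
request log. A quick check:

    from urllib.parse import quote

    # Mercurial's store encoding turns the leading '.' of '.hgtags' into
    # '~2e'; whether the '~' survives URL quoting depends on the version.
    print(quote('~2ehgtags'))  # Python >= 3.7: '~2ehgtags'
                               # Python <  3.7: '%7E2ehgtags'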
--- a/tests/test-status.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-status.t	Wed Apr 17 13:41:18 2019 -0400
@@ -132,7 +132,26 @@
 
 relative paths can be requested
 
+  $ hg status --cwd a --config ui.relative-paths=yes
+  ? 1/in_a_1
+  ? in_a
+  ? ../b/1/in_b_1
+  ? ../b/2/in_b_2
+  ? ../b/in_b
+  ? ../in_root
+
+  $ hg status --cwd a . --config ui.relative-paths=legacy
+  ? 1/in_a_1
+  ? in_a
+  $ hg status --cwd a . --config ui.relative-paths=no
+  ? a/1/in_a_1
+  ? a/in_a
+
+commands.status.relative overrides ui.relative-paths
+
   $ cat >> $HGRCPATH <<EOF
+  > [ui]
+  > relative-paths = False
   > [commands]
   > status.relative = True
   > EOF
@@ -271,7 +290,8 @@
 
   $ hg status -A -Tpickle > pickle
   >>> from __future__ import print_function
-  >>> import pickle
+  >>> from mercurial import util
+  >>> pickle = util.pickle
   >>> data = sorted((x[b'status'].decode(), x[b'path'].decode()) for x in pickle.load(open("pickle", r"rb")))
   >>> for s, p in data: print("%s %s" % (s, p))
   ! deleted
--- a/tests/test-subrepo-git.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-subrepo-git.t	Wed Apr 17 13:41:18 2019 -0400
@@ -924,9 +924,9 @@
   $ echo 'bloop' > s/foobar
   $ hg revert --all --verbose --config 'ui.origbackuppath=.hg/origbackups'
   reverting subrepo ../gitroot
-  creating directory: $TESTTMP/tc/.hg/origbackups
-  saving current version of foobar as $TESTTMP/tc/.hg/origbackups/foobar
-  $ ls .hg/origbackups
+  creating directory: $TESTTMP/tc/.hg/origbackups/s
+  saving current version of foobar as .hg/origbackups/s/foobar
+  $ ls .hg/origbackups/s
   foobar
   $ rm -rf .hg/origbackups
 
--- a/tests/test-subrepo-svn.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-subrepo-svn.t	Wed Apr 17 13:41:18 2019 -0400
@@ -1,11 +1,7 @@
 #require svn15
 
   $ SVNREPOPATH=`pwd`/svn-repo
-#if windows
-  $ SVNREPOURL=file:///`"$PYTHON" -c "import urllib, sys; sys.stdout.write(urllib.quote(sys.argv[1]))" "$SVNREPOPATH"`
-#else
-  $ SVNREPOURL=file://`"$PYTHON" -c "import urllib, sys; sys.stdout.write(urllib.quote(sys.argv[1]))" "$SVNREPOPATH"`
-#endif
+  $ SVNREPOURL="`"$PYTHON" $TESTDIR/svnurlof.py \"$SVNREPOPATH\"`"
 
   $ filter_svn_output () {
   >     egrep -v 'Committing|Transmitting|Updating|(^$)' || true
--- a/tests/test-subrepo.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-subrepo.t	Wed Apr 17 13:41:18 2019 -0400
@@ -31,6 +31,13 @@
   a
   s/a
 
+`hg files` respects ui.relative-paths
+BROKEN: shows subrepo paths relative to the subrepo
+  $ hg files -S --config ui.relative-paths=no
+  .hgsub
+  a
+  s/a
+
   $ hg -R s ci -Ams0
   $ hg sum
   parent: 0:f7b1eb17ad24 tip
@@ -1257,6 +1264,7 @@
   ../shared/subrepo-2/.hg/wcache/checkisexec (execbit !)
   ../shared/subrepo-2/.hg/wcache/checklink (symlink !)
   ../shared/subrepo-2/.hg/wcache/checklink-target (symlink !)
+  ../shared/subrepo-2/.hg/wcache/manifestfulltextcache (reporevlogstore !)
   ../shared/subrepo-2/file
   $ hg -R ../shared in
   abort: repository default not found!
@@ -1867,6 +1875,19 @@
   @@ -0,0 +1,1 @@
   +bar
 
+  $ hg diff -X '.hgsub*' --nodates s
+  diff -r 000000000000 s/a
+  --- /dev/null
+  +++ b/s/a
+  @@ -0,0 +1,1 @@
+  +a
+  $ hg diff -X '.hgsub*' --nodates s/a
+  diff -r 000000000000 s/a
+  --- /dev/null
+  +++ b/s/a
+  @@ -0,0 +1,1 @@
+  +a
+
   $ cd ..
 
 test for ssh exploit 2017-07-25
--- a/tests/test-tag.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-tag.t	Wed Apr 17 13:41:18 2019 -0400
@@ -320,9 +320,9 @@
   HG: branch 'tag-and-branch-same-name'
   HG: changed .hgtags
   ====
-  note: commit message saved in .hg/last-message.txt
   transaction abort!
   rollback completed
+  note: commit message saved in .hg/last-message.txt
   abort: pretxncommit.unexpectedabort hook exited with status 1
   [255]
   $ cat .hg/last-message.txt
--- a/tests/test-tags.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-tags.t	Wed Apr 17 13:41:18 2019 -0400
@@ -759,3 +759,69 @@
   2 files updated, 0 files merged, 0 files removed, 0 files unresolved
   $ (cd tags-local-clone/.hg/cache/; ls -1 tag*)
   tags2-visible
+
+Avoid writing logs when trying to delete an already deleted tag
+  $ hg init issue5752
+  $ cd issue5752
+  $ echo > a
+  $ hg commit -Am 'add a'
+  adding a
+  $ hg tag a
+  $ hg tags
+  tip                                1:bd7ee4f3939b
+  a                                  0:a8a82d372bb3
+  $ hg log
+  changeset:   1:bd7ee4f3939b
+  tag:         tip
+  user:        test
+  date:        Thu Jan 01 00:00:00 1970 +0000
+  summary:     Added tag a for changeset a8a82d372bb3
+  
+  changeset:   0:a8a82d372bb3
+  tag:         a
+  user:        test
+  date:        Thu Jan 01 00:00:00 1970 +0000
+  summary:     add a
+  
+  $ hg tag --remove a
+  $ hg log
+  changeset:   2:e7feacc7ec9e
+  tag:         tip
+  user:        test
+  date:        Thu Jan 01 00:00:00 1970 +0000
+  summary:     Removed tag a
+  
+  changeset:   1:bd7ee4f3939b
+  user:        test
+  date:        Thu Jan 01 00:00:00 1970 +0000
+  summary:     Added tag a for changeset a8a82d372bb3
+  
+  changeset:   0:a8a82d372bb3
+  user:        test
+  date:        Thu Jan 01 00:00:00 1970 +0000
+  summary:     add a
+  
+  $ hg tag --remove a
+  abort: tag 'a' is already removed
+  [255]
+  $ hg log
+  changeset:   2:e7feacc7ec9e
+  tag:         tip
+  user:        test
+  date:        Thu Jan 01 00:00:00 1970 +0000
+  summary:     Removed tag a
+  
+  changeset:   1:bd7ee4f3939b
+  user:        test
+  date:        Thu Jan 01 00:00:00 1970 +0000
+  summary:     Added tag a for changeset a8a82d372bb3
+  
+  changeset:   0:a8a82d372bb3
+  user:        test
+  date:        Thu Jan 01 00:00:00 1970 +0000
+  summary:     add a
+  
+  $ cat .hgtags
+  a8a82d372bb35b42ff736e74f07c23bcd99c371f a
+  a8a82d372bb35b42ff736e74f07c23bcd99c371f a
+  0000000000000000000000000000000000000000 a
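
The final .hgtags content above shows how tag deletion is recorded: entries
are only ever appended, and an entry pointing at the null node marks the tag
as removed; readers resolve the file last-entry-wins. A small sketch of that
reading convention, illustrative only and not Mercurial's actual tags code:

    def readtags(text):
        NULLID = '0' * 40
        tags = {}
        for line in text.splitlines():
            node, tag = line.split(' ', 1)
            if node == NULLID:
                tags.pop(tag, None)   # null node means "deleted"
            else:
                tags[tag] = node      # later entries override earlier ones
        return tags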
--- a/tests/test-template-functions.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-template-functions.t	Wed Apr 17 13:41:18 2019 -0400
@@ -1495,6 +1495,36 @@
      1200000.00
      1300000.00
 
+Test cbor filter:
+
+  $ cat <<'EOF' > "$TESTTMP/decodecbor.py"
+  > from __future__ import absolute_import
+  > from mercurial import (
+  >     dispatch,
+  >     pycompat,
+  > )
+  > from mercurial.utils import (
+  >     cborutil,
+  >     stringutil,
+  > )
+  > dispatch.initstdio()
+  > items = cborutil.decodeall(pycompat.stdin.read())
+  > pycompat.stdout.write(stringutil.pprint(items, indent=1) + b'\n')
+  > EOF
+
+  $ hg log -T "{rev|cbor}" -R a -l2 | "$PYTHON" "$TESTTMP/decodecbor.py"
+  [
+   10,
+   9
+  ]
+
+  $ hg log -T "{extras|cbor}" -R a -l1 | "$PYTHON" "$TESTTMP/decodecbor.py"
+  [
+   {
+    'branch': 'default'
+   }
+  ]
+
 json filter should escape HTML tags so that the output can be embedded in hgweb:
 
   $ hg log -T "{'<foo@example.org>'|json}\n" -R a -l1
@@ -1549,4 +1579,31 @@
   $ HGENCODING=utf-8 hg debugtemplate "{pad('`cat utf-8`', 2, '-')}\n"
   \xc3\xa9- (esc)
 
+read config options:
+
+  $ hg log -T "{config('templateconfig', 'knob', 'foo')}\n"
+  foo
+  $ hg log -T "{config('templateconfig', 'knob', 'foo')}\n" \
+  > --config templateconfig.knob=bar
+  bar
+  $ hg log -T "{configbool('templateconfig', 'knob', True)}\n"
+  True
+  $ hg log -T "{configbool('templateconfig', 'knob', True)}\n" \
+  > --config templateconfig.knob=0
+  False
+  $ hg log -T "{configint('templateconfig', 'knob', 123)}\n"
+  123
+  $ hg log -T "{configint('templateconfig', 'knob', 123)}\n" \
+  > --config templateconfig.knob=456
+  456
+  $ hg log -T "{config('templateconfig', 'knob')}\n"
+  devel-warn: config item requires an explicit default value: 'templateconfig.knob' at: * (glob)
+  
+  $ hg log -T "{configbool('ui', 'interactive')}\n"
+  False
+  $ hg log -T "{configbool('ui', 'interactive')}\n" --config ui.interactive=1
+  True
+  $ hg log -T "{config('templateconfig', 'knob', if(true, 'foo', 'bar'))}\n"
+  foo
+
   $ cd ..
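
The config() tests above follow a simple lookup order: a value set via
--config wins, otherwise the default passed to the template function is used,
and configbool()/configint() additionally coerce the result. A minimal model
of that order (an illustration, not Mercurial's code)::

   def template_config(overrides, section, name, default=None):
       # overrides plays the role of --config section.name=value
       return overrides.get((section, name), default)

   assert template_config({}, 'templateconfig', 'knob', 'foo') == 'foo'
   assert template_config({('templateconfig', 'knob'): 'bar'},
                          'templateconfig', 'knob', 'foo') == 'bar'
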
--- a/tests/test-template-keywords.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-template-keywords.t	Wed Apr 17 13:41:18 2019 -0400
@@ -76,6 +76,12 @@
   $ hg log -r 'wdir()' -T '{manifest}\n'
   2147483647:ffffffffffff
 
+However, for negrev we refuse to output anything, both for wdir() and for null
+
+  $ hg log -r 'wdir() + null' -T 'bla{negrev}nk\n'
+  blank
+  blank
+
 Changectx-derived keywords are disabled within {manifest} as {node} changes:
 
   $ hg log -r0 -T 'outer:{p1node} {manifest % "inner:{p1node}"}\n'
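
A sketch of the {negrev} behavior asserted above, assuming negrev counts back
from the repository length (so tip is -1) and that the synthetic wdir() and
null revisions render as the empty string::

   WDIRREV = 2147483647   # wdir()'s revision number, shown above

   def negrev(rev, repolen):
       if rev is None or rev < 0 or rev == WDIRREV:
           return ''      # refuse to output anything
       return '%d' % (rev - repolen)
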
--- a/tests/test-template-map.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-template-map.t	Wed Apr 17 13:41:18 2019 -0400
@@ -669,6 +669,74 @@
   </log>
 
 
+test CBOR style:
+
+  $ cat <<'EOF' > "$TESTTMP/decodecborarray.py"
+  > from __future__ import absolute_import
+  > from mercurial import (
+  >     dispatch,
+  >     pycompat,
+  > )
+  > from mercurial.utils import (
+  >     cborutil,
+  >     stringutil,
+  > )
+  > dispatch.initstdio()
+  > data = pycompat.stdin.read()
+  > # our CBOR decoder doesn't support parsing indefinite-length arrays,
+  > # but the log output is an indefinite stream by nature.
+  > assert data[:1] == cborutil.BEGIN_INDEFINITE_ARRAY
+  > assert data[-1:] == cborutil.BREAK
+  > items = cborutil.decodeall(data[1:-1])
+  > pycompat.stdout.write(stringutil.pprint(items, indent=1) + b'\n')
+  > EOF
+
+  $ hg log -k nosuch -Tcbor | "$PYTHON" "$TESTTMP/decodecborarray.py"
+  []
+
+  $ hg log -qr0:1 -Tcbor | "$PYTHON" "$TESTTMP/decodecborarray.py"
+  [
+   {
+    'node': '1e4e1b8f71e05681d422154f5421e385fec3454f',
+    'rev': 0
+   },
+   {
+    'node': 'b608e9d1a3f0273ccf70fb85fd6866b3482bf965',
+    'rev': 1
+   }
+  ]
+
+  $ hg log -vpr . -Tcbor --stat | "$PYTHON" "$TESTTMP/decodecborarray.py"
+  [
+   {
+    'bookmarks': [],
+    'branch': 'default',
+    'date': [
+     1577872860,
+     0
+    ],
+    'desc': 'third',
+    'diff': 'diff -r 29114dbae42b -r 95c24699272e fourth\n--- /dev/null\tThu Jan 01 00:00:00 1970 +0000\n+++ b/fourth\tWed Jan 01 10:01:00 2020 +0000\n@@ -0,0 +1,1 @@\n+second\ndiff -r 29114dbae42b -r 95c24699272e second\n--- a/second\tMon Jan 12 13:46:40 1970 +0000\n+++ /dev/null\tThu Jan 01 00:00:00 1970 +0000\n@@ -1,1 +0,0 @@\n-second\ndiff -r 29114dbae42b -r 95c24699272e third\n--- /dev/null\tThu Jan 01 00:00:00 1970 +0000\n+++ b/third\tWed Jan 01 10:01:00 2020 +0000\n@@ -0,0 +1,1 @@\n+third\n',
+    'diffstat': ' fourth |  1 +\n second |  1 -\n third  |  1 +\n 3 files changed, 2 insertions(+), 1 deletions(-)\n',
+    'files': [
+     'fourth',
+     'second',
+     'third'
+    ],
+    'node': '95c24699272ef57d062b8bccc32c878bf841784a',
+    'parents': [
+     '29114dbae42b9f078cf2714dbe3a86bba8ec7453'
+    ],
+    'phase': 'draft',
+    'rev': 8,
+    'tags': [
+     'tip'
+    ],
+    'user': 'test'
+   }
+  ]
+
+
 Test JSON style:
 
   $ hg log -k nosuch -Tjson
@@ -1039,7 +1107,7 @@
   $ touch q
   $ chmod 0 q
   $ hg log --style ./q
-  abort: Permission denied: ./q
+  abort: Permission denied: './q'
   [255]
 #endif
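
The framing the decodecborarray.py helper strips is standard CBOR: an
indefinite-length array starts with the 0x9f initial byte and ends with the
0xff break byte, which is what cborutil.BEGIN_INDEFINITE_ARRAY and
cborutil.BREAK name. A standalone sketch of the same stripping::

   BEGIN_INDEFINITE_ARRAY = b'\x9f'   # CBOR major type 4, indefinite length
   BREAK = b'\xff'

   def strip_indefinite_array(data):
       assert data[:1] == BEGIN_INDEFINITE_ARRAY
       assert data[-1:] == BREAK
       return data[1:-1]   # concatenated items, decodable one at a time
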
 
--- a/tests/test-transplant.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-transplant.t	Wed Apr 17 13:41:18 2019 -0400
@@ -39,12 +39,12 @@
   1 files updated, 0 files merged, 0 files removed, 0 files unresolved
   (branch merge, don't forget to commit)
   $ hg transplant 1
-  abort: outstanding uncommitted merges
+  abort: outstanding uncommitted merge
   [255]
   $ hg up -qC tip
   $ echo b0 > b1
   $ hg transplant 1
-  abort: outstanding local changes
+  abort: uncommitted changes
   [255]
   $ hg up -qC tip
   $ echo b2 > b2
@@ -599,6 +599,7 @@
   > EOF
   0:17ab29e464c6
   apply changeset? [ynmpcq?]: p
+  diff -r 000000000000 -r 17ab29e464c6 r1
   --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
   +++ b/r1	Thu Jan 01 00:00:00 1970 +0000
   @@ -0,0 +1,1 @@
--- a/tests/test-trusted.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-trusted.py	Wed Apr 17 13:41:18 2019 -0400
@@ -5,19 +5,34 @@
 from __future__ import absolute_import, print_function
 
 import os
+import sys
+
 from mercurial import (
     error,
+    pycompat,
     ui as uimod,
     util,
 )
+from mercurial.utils import stringutil
 
 hgrc = os.environ['HGRCPATH']
-f = open(hgrc)
+f = open(hgrc, 'rb')
 basehgrc = f.read()
 f.close()
 
-def testui(user='foo', group='bar', tusers=(), tgroups=(),
-           cuser='foo', cgroup='bar', debug=False, silent=False,
+def _maybesysstr(v):
+    if isinstance(v, bytes):
+        return pycompat.sysstr(v)
+    return pycompat.sysstr(stringutil.pprint(v))
+
+def bprint(*args, **kwargs):
+    print(*[_maybesysstr(a) for a in args],
+          **{k: _maybesysstr(v) for k, v in kwargs.items()})
+    # avoid awkward interleaving with ui object's output
+    sys.stdout.flush()
+
+def testui(user=b'foo', group=b'bar', tusers=(), tgroups=(),
+           cuser=b'foo', cgroup=b'bar', debug=False, silent=False,
            report=True):
     # user, group => owners of the file
     # tusers, tgroups => trusted users/groups
@@ -25,17 +40,17 @@
 
     # write a global hgrc with the list of trusted users/groups and
     # some setting so that we can be sure it was read
-    f = open(hgrc, 'w')
+    f = open(hgrc, 'wb')
     f.write(basehgrc)
-    f.write('\n[paths]\n')
-    f.write('global = /some/path\n\n')
+    f.write(b'\n[paths]\n')
+    f.write(b'global = /some/path\n\n')
 
     if tusers or tgroups:
-        f.write('[trusted]\n')
+        f.write(b'[trusted]\n')
         if tusers:
-            f.write('users = %s\n' % ', '.join(tusers))
+            f.write(b'users = %s\n' % b', '.join(tusers))
         if tgroups:
-            f.write('groups = %s\n' % ', '.join(tgroups))
+            f.write(b'groups = %s\n' % b', '.join(tgroups))
     f.close()
 
     # override the functions that give names to uids and gids
@@ -47,7 +62,7 @@
 
     def groupname(gid=None):
         if gid is None:
-            return 'bar'
+            return b'bar'
         return group
     util.groupname = groupname
 
@@ -58,13 +73,14 @@
     # try to read everything
     #print '# File belongs to user %s, group %s' % (user, group)
     #print '# trusted users = %s; trusted groups = %s' % (tusers, tgroups)
-    kind = ('different', 'same')
-    who = ('', 'user', 'group', 'user and the group')
+    kind = (b'different', b'same')
+    who = (b'', b'user', b'group', b'user and the group')
     trusted = who[(user in tusers) + 2*(group in tgroups)]
     if trusted:
-        trusted = ', but we trust the ' + trusted
-    print('# %s user, %s group%s' % (kind[user == cuser], kind[group == cgroup],
-                                     trusted))
+        trusted = b', but we trust the ' + trusted
+    bprint(b'# %s user, %s group%s' % (kind[user == cuser],
+                                       kind[group == cgroup],
+                                       trusted))
 
     u = uimod.ui.load()
     # disable the configuration registration warning
@@ -72,33 +88,33 @@
     # the purpose of this test is to check the old behavior, not to
     # validate the behavior of registered items, so we silence warnings
     # related to unregistered config.
-    u.setconfig('devel', 'warn-config-unknown', False, 'test')
-    u.setconfig('devel', 'all-warnings', False, 'test')
-    u.setconfig('ui', 'debug', str(bool(debug)))
-    u.setconfig('ui', 'report_untrusted', str(bool(report)))
-    u.readconfig('.hg/hgrc')
+    u.setconfig(b'devel', b'warn-config-unknown', False, b'test')
+    u.setconfig(b'devel', b'all-warnings', False, b'test')
+    u.setconfig(b'ui', b'debug', pycompat.bytestr(bool(debug)))
+    u.setconfig(b'ui', b'report_untrusted', pycompat.bytestr(bool(report)))
+    u.readconfig(b'.hg/hgrc')
     if silent:
         return u
-    print('trusted')
-    for name, path in u.configitems('paths'):
-        print('   ', name, '=', util.pconvert(path))
-    print('untrusted')
-    for name, path in u.configitems('paths', untrusted=True):
-        print('.', end=' ')
-        u.config('paths', name) # warning with debug=True
-        print('.', end=' ')
-        u.config('paths', name, untrusted=True) # no warnings
-        print(name, '=', util.pconvert(path))
+    bprint(b'trusted')
+    for name, path in u.configitems(b'paths'):
+        bprint(b'   ', name, b'=', util.pconvert(path))
+    bprint(b'untrusted')
+    for name, path in u.configitems(b'paths', untrusted=True):
+        bprint(b'.', end=b' ')
+        u.config(b'paths', name) # warning with debug=True
+        bprint(b'.', end=b' ')
+        u.config(b'paths', name, untrusted=True) # no warnings
+        bprint(name, b'=', util.pconvert(path))
     print()
 
     return u
 
-os.mkdir('repo')
-os.chdir('repo')
-os.mkdir('.hg')
-f = open('.hg/hgrc', 'w')
-f.write('[paths]\n')
-f.write('local = /another/path\n\n')
+os.mkdir(b'repo')
+os.chdir(b'repo')
+os.mkdir(b'.hg')
+f = open(b'.hg/hgrc', 'wb')
+f.write(b'[paths]\n')
+f.write(b'local = /another/path\n\n')
 f.close()
 
 #print '# Everything is run by user foo, group bar\n'
@@ -106,120 +122,130 @@
 # same user, same group
 testui()
 # same user, different group
-testui(group='def')
+testui(group=b'def')
 # different user, same group
-testui(user='abc')
+testui(user=b'abc')
 # ... but we trust the group
-testui(user='abc', tgroups=['bar'])
+testui(user=b'abc', tgroups=[b'bar'])
 # different user, different group
-testui(user='abc', group='def')
+testui(user=b'abc', group=b'def')
 # ... but we trust the user
-testui(user='abc', group='def', tusers=['abc'])
+testui(user=b'abc', group=b'def', tusers=[b'abc'])
 # ... but we trust the group
-testui(user='abc', group='def', tgroups=['def'])
+testui(user=b'abc', group=b'def', tgroups=[b'def'])
 # ... but we trust the user and the group
-testui(user='abc', group='def', tusers=['abc'], tgroups=['def'])
+testui(user=b'abc', group=b'def', tusers=[b'abc'], tgroups=[b'def'])
 # ... but we trust all users
-print('# we trust all users')
-testui(user='abc', group='def', tusers=['*'])
+bprint(b'# we trust all users')
+testui(user=b'abc', group=b'def', tusers=[b'*'])
 # ... but we trust all groups
-print('# we trust all groups')
-testui(user='abc', group='def', tgroups=['*'])
+bprint(b'# we trust all groups')
+testui(user=b'abc', group=b'def', tgroups=[b'*'])
 # ... but we trust the whole universe
-print('# we trust all users and groups')
-testui(user='abc', group='def', tusers=['*'], tgroups=['*'])
+bprint(b'# we trust all users and groups')
+testui(user=b'abc', group=b'def', tusers=[b'*'], tgroups=[b'*'])
 # ... check that users and groups are in different namespaces
-print("# we don't get confused by users and groups with the same name")
-testui(user='abc', group='def', tusers=['def'], tgroups=['abc'])
+bprint(b"# we don't get confused by users and groups with the same name")
+testui(user=b'abc', group=b'def', tusers=[b'def'], tgroups=[b'abc'])
 # ... lists of user names work
-print("# list of user names")
-testui(user='abc', group='def', tusers=['foo', 'xyz', 'abc', 'bleh'],
-       tgroups=['bar', 'baz', 'qux'])
+bprint(b"# list of user names")
+testui(user=b'abc', group=b'def', tusers=[b'foo', b'xyz', b'abc', b'bleh'],
+       tgroups=[b'bar', b'baz', b'qux'])
 # ... lists of group names work
-print("# list of group names")
-testui(user='abc', group='def', tusers=['foo', 'xyz', 'bleh'],
-       tgroups=['bar', 'def', 'baz', 'qux'])
+bprint(b"# list of group names")
+testui(user=b'abc', group=b'def', tusers=[b'foo', b'xyz', b'bleh'],
+       tgroups=[b'bar', b'def', b'baz', b'qux'])
 
-print("# Can't figure out the name of the user running this process")
-testui(user='abc', group='def', cuser=None)
+bprint(b"# Can't figure out the name of the user running this process")
+testui(user=b'abc', group=b'def', cuser=None)
 
-print("# prints debug warnings")
-u = testui(user='abc', group='def', cuser='foo', debug=True)
+bprint(b"# prints debug warnings")
+u = testui(user=b'abc', group=b'def', cuser=b'foo', debug=True)
 
-print("# report_untrusted enabled without debug hides warnings")
-u = testui(user='abc', group='def', cuser='foo', report=False)
+bprint(b"# report_untrusted enabled without debug hides warnings")
+u = testui(user=b'abc', group=b'def', cuser=b'foo', report=False)
 
-print("# report_untrusted enabled with debug shows warnings")
-u = testui(user='abc', group='def', cuser='foo', debug=True, report=False)
+bprint(b"# report_untrusted enabled with debug shows warnings")
+u = testui(user=b'abc', group=b'def', cuser=b'foo', debug=True, report=False)
 
-print("# ui.readconfig sections")
-filename = 'foobar'
-f = open(filename, 'w')
-f.write('[foobar]\n')
-f.write('baz = quux\n')
+bprint(b"# ui.readconfig sections")
+filename = b'foobar'
+f = open(filename, 'wb')
+f.write(b'[foobar]\n')
+f.write(b'baz = quux\n')
 f.close()
-u.readconfig(filename, sections=['foobar'])
-print(u.config('foobar', 'baz'))
+u.readconfig(filename, sections=[b'foobar'])
+bprint(u.config(b'foobar', b'baz'))
 
 print()
-print("# read trusted, untrusted, new ui, trusted")
+bprint(b"# read trusted, untrusted, new ui, trusted")
 u = uimod.ui.load()
 # disable the configuration registration warning
 #
 # the purpose of this test is to check the old behavior, not to validate
 # the behavior of registered items, so we silence warnings related to
 # unregistered config.
-u.setconfig('devel', 'warn-config-unknown', False, 'test')
-u.setconfig('devel', 'all-warnings', False, 'test')
-u.setconfig('ui', 'debug', 'on')
+u.setconfig(b'devel', b'warn-config-unknown', False, b'test')
+u.setconfig(b'devel', b'all-warnings', False, b'test')
+u.setconfig(b'ui', b'debug', b'on')
 u.readconfig(filename)
 u2 = u.copy()
 def username(uid=None):
-    return 'foo'
+    return b'foo'
 util.username = username
-u2.readconfig('.hg/hgrc')
-print('trusted:')
-print(u2.config('foobar', 'baz'))
-print('untrusted:')
-print(u2.config('foobar', 'baz', untrusted=True))
+u2.readconfig(b'.hg/hgrc')
+bprint(b'trusted:')
+bprint(u2.config(b'foobar', b'baz'))
+bprint(b'untrusted:')
+bprint(u2.config(b'foobar', b'baz', untrusted=True))
 
 print()
-print("# error handling")
+bprint(b"# error handling")
 
 def assertraises(f, exc=error.Abort):
     try:
         f()
     except exc as inst:
-        print('raised', inst.__class__.__name__)
+        bprint(b'raised', inst.__class__.__name__)
     else:
-        print('no exception?!')
+        bprint(b'no exception?!')
 
-print("# file doesn't exist")
-os.unlink('.hg/hgrc')
-assert not os.path.exists('.hg/hgrc')
+bprint(b"# file doesn't exist")
+os.unlink(b'.hg/hgrc')
+assert not os.path.exists(b'.hg/hgrc')
 testui(debug=True, silent=True)
-testui(user='abc', group='def', debug=True, silent=True)
+testui(user=b'abc', group=b'def', debug=True, silent=True)
 
 print()
-print("# parse error")
-f = open('.hg/hgrc', 'w')
-f.write('foo')
+bprint(b"# parse error")
+f = open(b'.hg/hgrc', 'wb')
+f.write(b'foo')
 f.close()
 
+# This is a hack to remove b'' prefixes from ParseError.__bytes__ on
+# Python 3.
+def normalizeparseerror(e):
+    if pycompat.ispy3:
+        args = [a.decode('utf-8') for a in e.args]
+    else:
+        args = e.args
+
+    return error.ParseError(*args)
+
 try:
-    testui(user='abc', group='def', silent=True)
+    testui(user=b'abc', group=b'def', silent=True)
 except error.ParseError as inst:
-    print(inst)
+    bprint(normalizeparseerror(inst))
 
 try:
     testui(debug=True, silent=True)
 except error.ParseError as inst:
-    print(inst)
+    bprint(normalizeparseerror(inst))
 
 print()
-print('# access typed information')
-with open('.hg/hgrc', 'w') as f:
-    f.write('''\
+bprint(b'# access typed information')
+with open(b'.hg/hgrc', 'wb') as f:
+    f.write(b'''\
 [foo]
 sub=main
 sub:one=one
@@ -230,32 +256,33 @@
 bytes=81mb
 list=spam,ham,eggs
 ''')
-u = testui(user='abc', group='def', cuser='foo', silent=True)
+u = testui(user=b'abc', group=b'def', cuser=b'foo', silent=True)
 def configpath(section, name, default=None, untrusted=False):
     path = u.configpath(section, name, default, untrusted)
     if path is None:
         return None
     return util.pconvert(path)
 
-print('# suboptions, trusted and untrusted')
-trusted = u.configsuboptions('foo', 'sub')
-untrusted = u.configsuboptions('foo', 'sub', untrusted=True)
-print(
+bprint(b'# suboptions, trusted and untrusted')
+trusted = u.configsuboptions(b'foo', b'sub')
+untrusted = u.configsuboptions(b'foo', b'sub', untrusted=True)
+bprint(
     (trusted[0], sorted(trusted[1].items())),
     (untrusted[0], sorted(untrusted[1].items())))
-print('# path, trusted and untrusted')
-print(configpath('foo', 'path'), configpath('foo', 'path', untrusted=True))
-print('# bool, trusted and untrusted')
-print(u.configbool('foo', 'bool'), u.configbool('foo', 'bool', untrusted=True))
-print('# int, trusted and untrusted')
-print(
-    u.configint('foo', 'int', 0),
-    u.configint('foo', 'int', 0, untrusted=True))
-print('# bytes, trusted and untrusted')
-print(
-    u.configbytes('foo', 'bytes', 0),
-    u.configbytes('foo', 'bytes', 0, untrusted=True))
-print('# list, trusted and untrusted')
-print(
-    u.configlist('foo', 'list', []),
-    u.configlist('foo', 'list', [], untrusted=True))
+bprint(b'# path, trusted and untrusted')
+bprint(configpath(b'foo', b'path'), configpath(b'foo', b'path', untrusted=True))
+bprint(b'# bool, trusted and untrusted')
+bprint(u.configbool(b'foo', b'bool'),
+       u.configbool(b'foo', b'bool', untrusted=True))
+bprint(b'# int, trusted and untrusted')
+bprint(
+    u.configint(b'foo', b'int', 0),
+    u.configint(b'foo', b'int', 0, untrusted=True))
+bprint(b'# bytes, trusted and untrusted')
+bprint(
+    u.configbytes(b'foo', b'bytes', 0),
+    u.configbytes(b'foo', b'bytes', 0, untrusted=True))
+bprint(b'# list, trusted and untrusted')
+bprint(
+    u.configlist(b'foo', b'list', []),
+    u.configlist(b'foo', b'list', [], untrusted=True))
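
The porting pattern running through this file: configuration data and test
fixtures become bytes, and bprint() converts values back to native str before
printing. A standalone approximation of that conversion (the latin-1
round-trip is an assumption mirroring pycompat.sysstr's byte-preserving
behavior)::

   import sys

   def maybesysstr(v):
       if isinstance(v, bytes):
           # bytes -> native str; on Python 2, bytes already is str
           return v.decode('latin-1') if sys.version_info[0] >= 3 else v
       return str(v)

   print(maybesysstr(b'trusted'))   # -> trusted
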
--- a/tests/test-trusted.py.out	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-trusted.py.out	Wed Apr 17 13:41:18 2019 -0400
@@ -174,9 +174,9 @@
 # parse error
 # different user, different group
 not trusting file .hg/hgrc from untrusted user abc, group def
-('foo', '.hg/hgrc:1')
+ParseError('foo', '.hg/hgrc:1')
 # same user, same group
-('foo', '.hg/hgrc:1')
+ParseError('foo', '.hg/hgrc:1')
 
 # access typed information
 # different user, different group
--- a/tests/test-unamend.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-unamend.t	Wed Apr 17 13:41:18 2019 -0400
@@ -232,6 +232,7 @@
 
   $ hg revert --all
   forgetting bar
+  $ rm bar
 
 Unamending in middle of a stack
 
@@ -302,7 +303,6 @@
 Testing whether unamend retains copies or not
 
   $ hg status
-  ? bar
 
   $ hg mv a foo
 
@@ -370,3 +370,42 @@
   diff --git a/c b/wat
   rename from c
   rename to wat
+  $ hg revert -qa
+  $ rm foobar wat
+
+Rename a->b, then amend b->c. After unamend, should look like b->c.
+
+  $ hg co -q 0
+  $ hg mv a b
+  $ hg ci -qm 'move a to b'
+  $ hg mv b c
+  $ hg amend
+  $ hg unamend
+  $ hg st --copies --change .
+  A b
+    a
+  R a
+  $ hg st --copies
+  A c
+    b
+  R b
+  $ hg revert -qa
+  $ rm c
+
+Rename a->b, then amend b->c, then rename c->d in the working copy. After unamend, should look like b->d
+
+  $ hg co -q 0
+  $ hg mv a b
+  $ hg ci -qm 'move a to b'
+  $ hg mv b c
+  $ hg amend
+  $ hg mv c d
+  $ hg unamend
+  $ hg st --copies --change .
+  A b
+    a
+  R a
+  $ hg st --copies
+  A d
+    b
+  R b
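
A small model of the copy bookkeeping these rename tests assert (an
illustration, not hg's implementation): after unamend, the restored commit
keeps its own copy record while the working copy records the amend-time
rename relative to that commit, so composing the two recovers what the
amended commit had folded in::

   restored_commit = {'b': 'a'}   # dest -> source, per `hg st --copies --change .`
   working_copy = {'c': 'b'}      # per `hg st --copies`

   net = {dst: restored_commit.get(src, src)
          for dst, src in working_copy.items()}
   assert net == {'c': 'a'}       # the rename the amend had recorded
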
--- a/tests/test-uncommit.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-uncommit.t	Wed Apr 17 13:41:18 2019 -0400
@@ -34,9 +34,10 @@
   
   options ([+] can be repeated):
   
-      --keep                allow an empty commit after uncommiting
-   -I --include PATTERN [+] include names matching the given patterns
-   -X --exclude PATTERN [+] exclude names matching the given patterns
+      --keep                     allow an empty commit after uncommiting
+      --allow-dirty-working-copy allow uncommit with outstanding changes
+   -I --include PATTERN [+]      include names matching the given patterns
+   -X --exclude PATTERN [+]      exclude names matching the given patterns
   
   (some details hidden, use --verbose to show complete help)
 
@@ -101,14 +102,16 @@
   $ hg heads -T '{rev}:{node} {desc}'
   5:0c07a3ccda771b25f1cb1edbd02e683723344ef1 new change abcde (no-eol)
 
-Uncommit of non-existent and unchanged files has no effect
+Uncommit of non-existent and unchanged files aborts
   $ hg uncommit nothinghere
-  nothing to uncommit
-  [1]
+  abort: cannot uncommit "nothinghere"
+  (file does not exist)
+  [255]
   $ hg status
   $ hg uncommit file-abc
-  nothing to uncommit
-  [1]
+  abort: cannot uncommit "file-abc"
+  (file was not changed in working directory parent)
+  [255]
   $ hg status
 
 Try partial uncommit, also moves bookmark
@@ -156,8 +159,12 @@
   M files
   $ hg uncommit
   abort: uncommitted changes
+  (requires --allow-dirty-working-copy to uncommit)
   [255]
   $ hg uncommit files
+  abort: uncommitted changes
+  (requires --allow-dirty-working-copy to uncommit)
+  [255]
   $ cat files
   abcde
   foo
@@ -168,6 +175,7 @@
   $ echo "bar" >> files
   $ hg uncommit
   abort: uncommitted changes
+  (requires --allow-dirty-working-copy to uncommit)
   [255]
   $ hg uncommit --config experimental.uncommitondirtywdir=True
   $ hg commit -m "files abcde + foo"
@@ -191,16 +199,16 @@
   +abc
   
   $ hg bookmark
-     foo                       10:48e5bd7cd583
+     foo                       9:48e5bd7cd583
   $ hg uncommit
   3 new orphan changesets
   $ hg status
   M files
   A file-abc
   $ hg heads -T '{rev}:{node} {desc}'
-  10:48e5bd7cd583eb24164ef8b89185819c84c96ed7 files abcde + foo (no-eol)
+  9:48e5bd7cd583eb24164ef8b89185819c84c96ed7 files abcde + foo (no-eol)
   $ hg bookmark
-     foo                       10:48e5bd7cd583
+     foo                       9:48e5bd7cd583
   $ hg commit -m 'new abc'
   created new head
 
@@ -222,38 +230,36 @@
   +ab
   
   $ hg bookmark
-     foo                       10:48e5bd7cd583
+     foo                       9:48e5bd7cd583
   $ hg uncommit file-ab
   1 new orphan changesets
   $ hg status
   A file-ab
 
   $ hg heads -T '{rev}:{node} {desc}\n'
-  12:8eb87968f2edb7f27f27fe676316e179de65fff6 added file-ab
-  11:5dc89ca4486f8a88716c5797fa9f498d13d7c2e1 new abc
-  10:48e5bd7cd583eb24164ef8b89185819c84c96ed7 files abcde + foo
+  11:8eb87968f2edb7f27f27fe676316e179de65fff6 added file-ab
+  10:5dc89ca4486f8a88716c5797fa9f498d13d7c2e1 new abc
+  9:48e5bd7cd583eb24164ef8b89185819c84c96ed7 files abcde + foo
 
   $ hg bookmark
-     foo                       10:48e5bd7cd583
+     foo                       9:48e5bd7cd583
   $ hg commit -m 'update ab'
   $ hg status
   $ hg heads -T '{rev}:{node} {desc}\n'
-  13:f21039c59242b085491bb58f591afc4ed1c04c09 update ab
-  11:5dc89ca4486f8a88716c5797fa9f498d13d7c2e1 new abc
-  10:48e5bd7cd583eb24164ef8b89185819c84c96ed7 files abcde + foo
+  12:f21039c59242b085491bb58f591afc4ed1c04c09 update ab
+  10:5dc89ca4486f8a88716c5797fa9f498d13d7c2e1 new abc
+  9:48e5bd7cd583eb24164ef8b89185819c84c96ed7 files abcde + foo
 
   $ hg log -G -T '{rev}:{node} {desc}' --hidden
-  @  13:f21039c59242b085491bb58f591afc4ed1c04c09 update ab
+  @  12:f21039c59242b085491bb58f591afc4ed1c04c09 update ab
   |
-  o  12:8eb87968f2edb7f27f27fe676316e179de65fff6 added file-ab
+  o  11:8eb87968f2edb7f27f27fe676316e179de65fff6 added file-ab
   |
-  | *  11:5dc89ca4486f8a88716c5797fa9f498d13d7c2e1 new abc
+  | *  10:5dc89ca4486f8a88716c5797fa9f498d13d7c2e1 new abc
   | |
-  | | *  10:48e5bd7cd583eb24164ef8b89185819c84c96ed7 files abcde + foo
+  | | *  9:48e5bd7cd583eb24164ef8b89185819c84c96ed7 files abcde + foo
   | | |
-  | | | x  9:8a6b58c173ca6a2e3745d8bd86698718d664bc6c files abcde + foo
-  | | |/
-  | | | x  8:39ad452c7f684a55d161c574340c5766c4569278 update files for abcde
+  | | | x  8:84beeba0ac30e19521c036e4d2dd3a5fa02586ff files abcde + foo
   | | |/
   | | | x  7:0977fa602c2fd7d8427ed4e7ee15ea13b84c9173 update files for abcde
   | | |/
@@ -275,14 +281,15 @@
 
   $ hg uncommit
   $ hg phase -r .
-  12: draft
+  11: draft
   $ hg commit -m 'update ab again'
 
 Phase is preserved
 
   $ hg uncommit --keep --config phases.new-commit=secret
+  note: keeping empty commit
   $ hg phase -r .
-  15: draft
+  14: draft
   $ hg commit --amend -m 'update ab again'
 
 Uncommit with public parent
@@ -290,7 +297,7 @@
   $ hg phase -p "::.^"
   $ hg uncommit
   $ hg phase -r .
-  12: public
+  11: public
 
 Partial uncommit with public parent
 
@@ -301,11 +308,11 @@
   $ hg status
   A xyz
   $ hg phase -r .
-  18: draft
+  17: draft
   $ hg phase -r ".^"
-  12: public
+  11: public
 
-Uncommit leaving an empty changeset
+Uncommit with --keep or experimental.uncommit.keep leaves an empty changeset
 
   $ cd $TESTTMP
   $ hg init repo1
@@ -317,6 +324,21 @@
   > EOS
   $ hg up Q -q
   $ hg uncommit --keep
+  note: keeping empty commit
+  $ hg log -G -T '{desc} FILES: {files}'
+  @  Q FILES:
+  |
+  | x  Q FILES: Q
+  |/
+  o  P FILES: P
+  
+  $ cat >> .hg/hgrc <<EOF
+  > [experimental]
+  > uncommit.keep=True
+  > EOF
+  $ hg ci --amend
+  $ hg uncommit
+  note: keeping empty commit
   $ hg log -G -T '{desc} FILES: {files}'
   @  Q FILES:
   |
@@ -326,7 +348,15 @@
   
   $ hg status
   A Q
-
+  $ hg ci --amend
+  $ hg uncommit --no-keep
+  $ hg log -G -T '{desc} FILES: {files}'
+  x  Q FILES: Q
+  |
+  @  P FILES: P
+  
+  $ hg status
+  A Q
   $ cd ..
   $ rm -rf repo1
 
@@ -368,6 +398,7 @@
 
   $ hg uncommit
   abort: outstanding uncommitted merge
+  (requires --allow-dirty-working-copy to uncommit)
   [255]
 
   $ hg uncommit --config experimental.uncommitondirtywdir=True
@@ -398,3 +429,143 @@
   |/
   o  0:ea4e33293d4d274a2ba73150733c2612231f398c a 1
   
+
+Rename a->b, then remove b in working copy. Result should remove a.
+
+  $ hg co -q 0
+  $ hg mv a b
+  $ hg ci -qm 'move a to b'
+  $ hg rm b
+  $ hg uncommit --config experimental.uncommitondirtywdir=True
+  $ hg st --copies
+  R a
+  $ hg revert a
+
+Rename a->b, then rename b->c in working copy. Result should rename a->c.
+
+  $ hg co -q 0
+  $ hg mv a b
+  $ hg ci -qm 'move a to b'
+  $ hg mv b c
+  $ hg uncommit --config experimental.uncommitondirtywdir=True
+  $ hg st --copies
+  A c
+    a
+  R a
+  $ hg revert a
+  $ hg forget c
+  $ rm c
+
+Copy a->b1 and a->b2, then rename b1->c in working copy. Result should copy a->b2 and a->c.
+
+  $ hg co -q 0
+  $ hg cp a b1
+  $ hg cp a b2
+  $ hg ci -qm 'move a to b1 and b2'
+  $ hg mv b1 c
+  $ hg uncommit --config experimental.uncommitondirtywdir=True
+  $ hg st --copies
+  A b2
+    a
+  A c
+    a
+  $ cd ..
+
+--allow-dirty-working-copy should also work when the named PATH itself is dirty
+
+  $ hg init issue5977
+  $ cd issue5977
+  $ echo 'super critical info!' > a
+  $ hg ci -Am 'add a'
+  adding a
+  $ echo 'foo' > b
+  $ hg add b
+  $ hg status
+  A b
+  $ hg unc a
+  note: keeping empty commit
+  $ cat a
+  super critical info!
+  $ hg log
+  changeset:   1:656ba143d384
+  tag:         tip
+  parent:      -1:000000000000
+  user:        test
+  date:        Thu Jan 01 00:00:00 1970 +0000
+  summary:     add a
+  
+  $ hg ci -Am 'add b'
+  $ echo 'foo bar' > b
+  $ hg unc b
+  abort: uncommitted changes
+  (requires --allow-dirty-working-copy to uncommit)
+  [255]
+  $ hg unc --allow-dirty-working-copy b
+  $ hg log
+  changeset:   3:30fa958635b2
+  tag:         tip
+  parent:      1:656ba143d384
+  user:        test
+  date:        Thu Jan 01 00:00:00 1970 +0000
+  summary:     add b
+  
+  changeset:   1:656ba143d384
+  parent:      -1:000000000000
+  user:        test
+  date:        Thu Jan 01 00:00:00 1970 +0000
+  summary:     add a
+  
+Removes can be uncommitted
+
+  $ hg ci -m 'modified b'
+  $ hg rm b
+  $ hg ci -m 'remove b'
+  $ hg uncommit b
+  note: keeping empty commit
+  $ hg status
+  R b
+
+Uncommitting a directory won't run afoul of the checks that an explicitly
+named file must satisfy to be uncommitted.
+
+  $ mkdir dir
+  $ echo 1 > dir/file.txt
+  $ hg ci -Aqm 'add file in directory'
+  $ hg uncommit dir
+  $ hg status
+  A dir/file.txt
+
+`uncommit <dir>` and `cd <dir> && uncommit .` behave the same...
+
+  $ hg rollback -q --config ui.rollback=True
+  $ echo 2 > dir/file2.txt
+  $ hg ci -Aqm 'add file2 in directory'
+  $ hg uncommit dir
+  note: keeping empty commit
+  $ hg status
+  A dir/file2.txt
+
+  $ hg rollback -q --config ui.rollback=True
+  $ cd dir
+  $ hg uncommit .
+  note: keeping empty commit
+  $ hg status
+  A dir/file2.txt
+  $ cd ..
+
+... and errors out the same way when nothing can be uncommitted
+
+  $ hg rollback -q --config ui.rollback=True
+  $ mkdir emptydir
+  $ hg uncommit emptydir
+  abort: cannot uncommit "emptydir"
+  (file was untracked in working directory parent)
+  [255]
+
+  $ cd emptydir
+  $ hg uncommit .
+  abort: cannot uncommit "emptydir"
+  (file was untracked in working directory parent)
+  [255]
+  $ hg status
+  $ cd ..
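
A sketch of the per-path validation the aborts above exercise (an assumed
shape, not hg's code): an explicitly named file must exist in the working
directory parent and have been changed by it, while a directory only needs to
match at least one changed file::

   class Abort(Exception):
       def __init__(self, message, hint=None):
           super(Abort, self).__init__(message)
           self.hint = hint

   def check_uncommit_path(path, isdir, tracked, changed):
       if isdir:
           if not any(f.startswith(path + '/') for f in changed):
               raise Abort('cannot uncommit "%s"' % path,
                           hint='file was untracked in working directory parent')
           return
       if path not in tracked:
           raise Abort('cannot uncommit "%s"' % path,
                       hint='file does not exist')
       if path not in changed:
           raise Abort('cannot uncommit "%s"' % path,
                       hint='file was not changed in working directory parent')
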
--- a/tests/test-update-atomic.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-update-atomic.t	Wed Apr 17 13:41:18 2019 -0400
@@ -4,13 +4,14 @@
 
   $ cat > $TESTTMP/show_mode.py <<EOF
   > from __future__ import print_function
+  > import os
+  > import stat
   > import sys
-  > import os
-  > from stat import ST_MODE
+  > ST_MODE = stat.ST_MODE
   > 
   > for file_path in sys.argv[1:]:
   >     file_stat = os.stat(file_path)
-  >     octal_mode = oct(file_stat[ST_MODE] & 0o777)
+  >     octal_mode = oct(file_stat[ST_MODE] & 0o777).replace('o', '')
   >     print("%s:%s" % (file_path, octal_mode))
   > 
   > EOF
@@ -19,11 +20,15 @@
   $ cd repo
 
   $ cat > .hg/showwrites.py <<EOF
+  > from __future__ import print_function
+  > from mercurial import pycompat
+  > from mercurial.utils import stringutil
   > def uisetup(ui):
   >   from mercurial import vfs
   >   class newvfs(vfs.vfs):
   >     def __call__(self, *args, **kwargs):
-  >       print('vfs open', args, sorted(list(kwargs.items())))
+  >       print(pycompat.sysstr(stringutil.pprint(
+  >           ('vfs open', args, sorted(list(kwargs.items()))))))
   >       return super(newvfs, self).__call__(*args, **kwargs)
   >   vfs.vfs = newvfs
   > EOF
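
The .replace('o', '') added above papers over a Python 2/3 difference:
oct(0o755) yields '0755' on Python 2 but '0o755' on Python 3. A
version-independent way to produce the same output::

   import os

   def octal_mode(path):
       mode = os.stat(path).st_mode & 0o777
       return '%04o' % mode   # '0755' on both major versions
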
--- a/tests/test-upgrade-repo.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-upgrade-repo.t	Wed Apr 17 13:41:18 2019 -0400
@@ -52,37 +52,41 @@
   $ hg init empty
   $ cd empty
   $ hg debugformat
-  format-variant repo
-  fncache:        yes
-  dotencode:      yes
-  generaldelta:   yes
-  sparserevlog:   yes
-  plain-cl-delta: yes
-  compression:    zlib
+  format-variant    repo
+  fncache:           yes
+  dotencode:         yes
+  generaldelta:      yes
+  sparserevlog:      yes
+  plain-cl-delta:    yes
+  compression:       zlib
+  compression-level: default
   $ hg debugformat --verbose
-  format-variant repo config default
-  fncache:        yes    yes     yes
-  dotencode:      yes    yes     yes
-  generaldelta:   yes    yes     yes
-  sparserevlog:   yes    yes     yes
-  plain-cl-delta: yes    yes     yes
-  compression:    zlib   zlib    zlib
+  format-variant    repo config default
+  fncache:           yes    yes     yes
+  dotencode:         yes    yes     yes
+  generaldelta:      yes    yes     yes
+  sparserevlog:      yes    yes     yes
+  plain-cl-delta:    yes    yes     yes
+  compression:       zlib   zlib    zlib
+  compression-level: default default default
   $ hg debugformat --verbose --config format.usefncache=no
-  format-variant repo config default
-  fncache:        yes     no     yes
-  dotencode:      yes     no     yes
-  generaldelta:   yes    yes     yes
-  sparserevlog:   yes    yes     yes
-  plain-cl-delta: yes    yes     yes
-  compression:    zlib   zlib    zlib
+  format-variant    repo config default
+  fncache:           yes     no     yes
+  dotencode:         yes     no     yes
+  generaldelta:      yes    yes     yes
+  sparserevlog:      yes    yes     yes
+  plain-cl-delta:    yes    yes     yes
+  compression:       zlib   zlib    zlib
+  compression-level: default default default
   $ hg debugformat --verbose --config format.usefncache=no --color=debug
-  format-variant repo config default
-  [formatvariant.name.mismatchconfig|fncache:       ][formatvariant.repo.mismatchconfig| yes][formatvariant.config.special|     no][formatvariant.default|     yes]
-  [formatvariant.name.mismatchconfig|dotencode:     ][formatvariant.repo.mismatchconfig| yes][formatvariant.config.special|     no][formatvariant.default|     yes]
-  [formatvariant.name.uptodate|generaldelta:  ][formatvariant.repo.uptodate| yes][formatvariant.config.default|    yes][formatvariant.default|     yes]
-  [formatvariant.name.uptodate|sparserevlog:  ][formatvariant.repo.uptodate| yes][formatvariant.config.default|    yes][formatvariant.default|     yes]
-  [formatvariant.name.uptodate|plain-cl-delta:][formatvariant.repo.uptodate| yes][formatvariant.config.default|    yes][formatvariant.default|     yes]
-  [formatvariant.name.uptodate|compression:   ][formatvariant.repo.uptodate| zlib][formatvariant.config.default|   zlib][formatvariant.default|    zlib]
+  format-variant    repo config default
+  [formatvariant.name.mismatchconfig|fncache:          ][formatvariant.repo.mismatchconfig| yes][formatvariant.config.special|     no][formatvariant.default|     yes]
+  [formatvariant.name.mismatchconfig|dotencode:        ][formatvariant.repo.mismatchconfig| yes][formatvariant.config.special|     no][formatvariant.default|     yes]
+  [formatvariant.name.uptodate|generaldelta:     ][formatvariant.repo.uptodate| yes][formatvariant.config.default|    yes][formatvariant.default|     yes]
+  [formatvariant.name.uptodate|sparserevlog:     ][formatvariant.repo.uptodate| yes][formatvariant.config.default|    yes][formatvariant.default|     yes]
+  [formatvariant.name.uptodate|plain-cl-delta:   ][formatvariant.repo.uptodate| yes][formatvariant.config.default|    yes][formatvariant.default|     yes]
+  [formatvariant.name.uptodate|compression:      ][formatvariant.repo.uptodate| zlib][formatvariant.config.default|   zlib][formatvariant.default|    zlib]
+  [formatvariant.name.uptodate|compression-level:][formatvariant.repo.uptodate| default][formatvariant.config.default| default][formatvariant.default| default]
   $ hg debugformat -Tjson
   [
    {
@@ -120,6 +124,12 @@
     "default": "zlib",
     "name": "compression",
     "repo": "zlib"
+   },
+   {
+    "config": "default",
+    "default": "default",
+    "name": "compression-level",
+    "repo": "default"
    }
   ]
   $ hg debugupgraderepo
@@ -207,37 +217,41 @@
   > EOF
 
   $ hg debugformat
-  format-variant repo
-  fncache:         no
-  dotencode:       no
-  generaldelta:    no
-  sparserevlog:    no
-  plain-cl-delta: yes
-  compression:    zlib
+  format-variant    repo
+  fncache:            no
+  dotencode:          no
+  generaldelta:       no
+  sparserevlog:       no
+  plain-cl-delta:    yes
+  compression:       zlib
+  compression-level: default
   $ hg debugformat --verbose
-  format-variant repo config default
-  fncache:         no    yes     yes
-  dotencode:       no    yes     yes
-  generaldelta:    no    yes     yes
-  sparserevlog:    no    yes     yes
-  plain-cl-delta: yes    yes     yes
-  compression:    zlib   zlib    zlib
+  format-variant    repo config default
+  fncache:            no    yes     yes
+  dotencode:          no    yes     yes
+  generaldelta:       no    yes     yes
+  sparserevlog:       no    yes     yes
+  plain-cl-delta:    yes    yes     yes
+  compression:       zlib   zlib    zlib
+  compression-level: default default default
   $ hg debugformat --verbose --config format.usegeneraldelta=no
-  format-variant repo config default
-  fncache:         no    yes     yes
-  dotencode:       no    yes     yes
-  generaldelta:    no     no     yes
-  sparserevlog:    no     no     yes
-  plain-cl-delta: yes    yes     yes
-  compression:    zlib   zlib    zlib
+  format-variant    repo config default
+  fncache:            no    yes     yes
+  dotencode:          no    yes     yes
+  generaldelta:       no     no     yes
+  sparserevlog:       no     no     yes
+  plain-cl-delta:    yes    yes     yes
+  compression:       zlib   zlib    zlib
+  compression-level: default default default
   $ hg debugformat --verbose --config format.usegeneraldelta=no --color=debug
-  format-variant repo config default
-  [formatvariant.name.mismatchconfig|fncache:       ][formatvariant.repo.mismatchconfig|  no][formatvariant.config.default|    yes][formatvariant.default|     yes]
-  [formatvariant.name.mismatchconfig|dotencode:     ][formatvariant.repo.mismatchconfig|  no][formatvariant.config.default|    yes][formatvariant.default|     yes]
-  [formatvariant.name.mismatchdefault|generaldelta:  ][formatvariant.repo.mismatchdefault|  no][formatvariant.config.special|     no][formatvariant.default|     yes]
-  [formatvariant.name.mismatchdefault|sparserevlog:  ][formatvariant.repo.mismatchdefault|  no][formatvariant.config.special|     no][formatvariant.default|     yes]
-  [formatvariant.name.uptodate|plain-cl-delta:][formatvariant.repo.uptodate| yes][formatvariant.config.default|    yes][formatvariant.default|     yes]
-  [formatvariant.name.uptodate|compression:   ][formatvariant.repo.uptodate| zlib][formatvariant.config.default|   zlib][formatvariant.default|    zlib]
+  format-variant    repo config default
+  [formatvariant.name.mismatchconfig|fncache:          ][formatvariant.repo.mismatchconfig|  no][formatvariant.config.default|    yes][formatvariant.default|     yes]
+  [formatvariant.name.mismatchconfig|dotencode:        ][formatvariant.repo.mismatchconfig|  no][formatvariant.config.default|    yes][formatvariant.default|     yes]
+  [formatvariant.name.mismatchdefault|generaldelta:     ][formatvariant.repo.mismatchdefault|  no][formatvariant.config.special|     no][formatvariant.default|     yes]
+  [formatvariant.name.mismatchdefault|sparserevlog:     ][formatvariant.repo.mismatchdefault|  no][formatvariant.config.special|     no][formatvariant.default|     yes]
+  [formatvariant.name.uptodate|plain-cl-delta:   ][formatvariant.repo.uptodate| yes][formatvariant.config.default|    yes][formatvariant.default|     yes]
+  [formatvariant.name.uptodate|compression:      ][formatvariant.repo.uptodate| zlib][formatvariant.config.default|   zlib][formatvariant.default|    zlib]
+  [formatvariant.name.uptodate|compression-level:][formatvariant.repo.uptodate| default][formatvariant.config.default| default][formatvariant.default| default]
   $ hg debugupgraderepo
   repository lacks features recommended by current config options:
   
@@ -498,7 +512,7 @@
   starting in-place swap of repository data
   replaced files will be backed up at $TESTTMP/upgradegd/.hg/upgradebackup.* (glob)
   replacing store...
-  store replacement complete; repository was inconsistent for 0.0s
+  store replacement complete; repository was inconsistent for * (glob)
   finalizing requirements file and making repository readable again
   removing old repository content$TESTTMP/upgradegd/.hg/upgradebackup.* (glob)
   removing temporary repository $TESTTMP/upgradegd/.hg/upgrade.* (glob)
@@ -840,4 +854,78 @@
   generaldelta
   revlogv1
   store
+
+#if zstd
+
+Check upgrading to a zstd revlog
+--------------------------------
+
+upgrade
+
+  $ hg --config format.revlog-compression=zstd debugupgraderepo --run  --no-backup >/dev/null
+  $ hg debugformat -v
+  format-variant    repo config default
+  fncache:           yes    yes     yes
+  dotencode:         yes    yes     yes
+  generaldelta:      yes    yes     yes
+  sparserevlog:      yes    yes     yes
+  plain-cl-delta:    yes    yes     yes
+  compression:       zstd   zlib    zlib
+  compression-level: default default default
+  $ cat .hg/requires
+  dotencode
+  fncache
+  generaldelta
+  revlog-compression-zstd
+  revlogv1
+  sparserevlog
+  store
+
+downgrade
+
+  $ hg debugupgraderepo --run --no-backup > /dev/null
+  $ hg debugformat -v
+  format-variant    repo config default
+  fncache:           yes    yes     yes
+  dotencode:         yes    yes     yes
+  generaldelta:      yes    yes     yes
+  sparserevlog:      yes    yes     yes
+  plain-cl-delta:    yes    yes     yes
+  compression:       zlib   zlib    zlib
+  compression-level: default default default
+  $ cat .hg/requires
+  dotencode
+  fncache
+  generaldelta
+  revlogv1
+  sparserevlog
+  store
+
+upgrade from hgrc
+
+  $ cat >> .hg/hgrc << EOF
+  > [format]
+  > revlog-compression=zstd
+  > EOF
+  $ hg debugupgraderepo --run --no-backup > /dev/null
+  $ hg debugformat -v
+  format-variant    repo config default
+  fncache:           yes    yes     yes
+  dotencode:         yes    yes     yes
+  generaldelta:      yes    yes     yes
+  sparserevlog:      yes    yes     yes
+  plain-cl-delta:    yes    yes     yes
+  compression:       zstd   zstd    zlib
+  compression-level: default default default
+  $ cat .hg/requires
+  dotencode
+  fncache
+  generaldelta
+  revlog-compression-zstd
+  revlogv1
+  sparserevlog
+  store
+
   $ cd ..
+
+#endif
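
As the `cat .hg/requires` output above shows, the zstd upgrade is recorded as
a 'revlog-compression-zstd' requirement, so a repository's store compression
can be inferred from that file. A sketch under that assumption::

   import os

   def store_compression(repo_root):
       with open(os.path.join(repo_root, '.hg', 'requires'), 'rb') as f:
           reqs = set(f.read().splitlines())
       return 'zstd' if b'revlog-compression-zstd' in reqs else 'zlib'
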
--- a/tests/test-wireproto-command-capabilities.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-wireproto-command-capabilities.t	Wed Apr 17 13:41:18 2019 -0400
@@ -22,6 +22,7 @@
   >     user-agent: test
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /?cmd=capabilities HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     user-agent: test\r\n
@@ -45,6 +46,7 @@
   >    x-hgproto-1: cbor
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /?cmd=capabilities HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     user-agent: test\r\n
@@ -82,6 +84,7 @@
   >    x-hgupgrade-1: foo bar
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /?cmd=capabilities HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     user-agent: test\r\n
@@ -106,6 +109,7 @@
   >    x-hgproto-1: some value
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /?cmd=capabilities HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     user-agent: test\r\n
@@ -131,6 +135,7 @@
   >    x-hgproto-1: cbor
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /?cmd=capabilities HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     user-agent: test\r\n
@@ -170,6 +175,7 @@
   >    x-hgproto-1: cbor
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /?cmd=capabilities HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     user-agent: test\r\n
@@ -202,6 +208,7 @@
   >    x-hgproto-1: cbor
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /?cmd=capabilities HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     user-agent: test\r\n
@@ -460,6 +467,7 @@
   > command capabilities
   > EOF
   creating http peer for wire protocol version 2
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /?cmd=capabilities HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     vary: X-HgProto-1,X-HgUpgrade-1\r\n
@@ -478,6 +486,7 @@
   s>     \r\n
   s>     \xa3GapibaseDapi/Dapis\xa1Pexp-http-v2-0003\xa4Hcommands\xacIbranchmap\xa2Dargs\xa0Kpermissions\x81DpullLcapabilities\xa2Dargs\xa0Kpermissions\x81DpullMchangesetdata\xa2Dargs\xa2Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x84IbookmarksGparentsEphaseHrevisionIrevisions\xa2Hrequired\xf5DtypeDlistKpermissions\x81DpullHfiledata\xa2Dargs\xa4Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x83HlinknodeGparentsHrevisionKhaveparents\xa3Gdefault\xf4Hrequired\xf4DtypeDboolEnodes\xa2Hrequired\xf5DtypeDlistDpath\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullIfilesdata\xa3Dargs\xa4Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x84NfirstchangesetHlinknodeGparentsHrevisionKhaveparents\xa3Gdefault\xf4Hrequired\xf4DtypeDboolJpathfilter\xa3Gdefault\xf6Hrequired\xf4DtypeDdictIrevisions\xa2Hrequired\xf5DtypeDlistKpermissions\x81DpullTrecommendedbatchsize\x19\xc3PEheads\xa2Dargs\xa1Jpubliconly\xa3Gdefault\xf4Hrequired\xf4DtypeDboolKpermissions\x81DpullEknown\xa2Dargs\xa1Enodes\xa3Gdefault\x80Hrequired\xf4DtypeDlistKpermissions\x81DpullHlistkeys\xa2Dargs\xa1Inamespace\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullFlookup\xa2Dargs\xa1Ckey\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullLmanifestdata\xa3Dargs\xa4Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x82GparentsHrevisionKhaveparents\xa3Gdefault\xf4Hrequired\xf4DtypeDboolEnodes\xa2Hrequired\xf5DtypeDlistDtree\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullTrecommendedbatchsize\x1a\x00\x01\x86\xa0Gpushkey\xa2Dargs\xa4Ckey\xa2Hrequired\xf5DtypeEbytesInamespace\xa2Hrequired\xf5DtypeEbytesCnew\xa2Hrequired\xf5DtypeEbytesCold\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpushPrawstorefiledata\xa2Dargs\xa2Efiles\xa2Hrequired\xf5DtypeDlistJpathfilter\xa3Gdefault\xf6Hrequired\xf4DtypeDlistKpermissions\x81DpullQframingmediatypes\x81X&application/mercurial-exp-framing-0006Rpathfilterprefixes\xd9\x01\x02\x82Epath:Lrootfilesin:Nrawrepoformats\x83LgeneraldeltaHrevlogv1LsparserevlogNv1capabilitiesY\x01\xe0batch branchmap $USUAL_BUNDLE2_CAPS$ changegroupsubset compression=$BUNDLE2_COMPRESSIONS$ getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1,sparserevlog unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash
   sending capabilities command
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     POST /api/exp-http-v2-0003/ro/capabilities HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     accept: application/mercurial-exp-framing-0006\r\n
@@ -498,23 +507,19 @@
   s>     \t\x00\x00\x01\x00\x02\x01\x92
   s>     Hidentity
   s>     \r\n
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
   s>     13\r\n
   s>     \x0b\x00\x00\x01\x00\x02\x041
   s>     \xa1FstatusBok
   s>     \r\n
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   s>     65e\r\n
   s>     V\x06\x00\x01\x00\x02\x041
   s>     \xa4Hcommands\xacIbranchmap\xa2Dargs\xa0Kpermissions\x81DpullLcapabilities\xa2Dargs\xa0Kpermissions\x81DpullMchangesetdata\xa2Dargs\xa2Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x84IbookmarksGparentsEphaseHrevisionIrevisions\xa2Hrequired\xf5DtypeDlistKpermissions\x81DpullHfiledata\xa2Dargs\xa4Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x83HlinknodeGparentsHrevisionKhaveparents\xa3Gdefault\xf4Hrequired\xf4DtypeDboolEnodes\xa2Hrequired\xf5DtypeDlistDpath\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullIfilesdata\xa3Dargs\xa4Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x84NfirstchangesetHlinknodeGparentsHrevisionKhaveparents\xa3Gdefault\xf4Hrequired\xf4DtypeDboolJpathfilter\xa3Gdefault\xf6Hrequired\xf4DtypeDdictIrevisions\xa2Hrequired\xf5DtypeDlistKpermissions\x81DpullTrecommendedbatchsize\x19\xc3PEheads\xa2Dargs\xa1Jpubliconly\xa3Gdefault\xf4Hrequired\xf4DtypeDboolKpermissions\x81DpullEknown\xa2Dargs\xa1Enodes\xa3Gdefault\x80Hrequired\xf4DtypeDlistKpermissions\x81DpullHlistkeys\xa2Dargs\xa1Inamespace\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullFlookup\xa2Dargs\xa1Ckey\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullLmanifestdata\xa3Dargs\xa4Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x82GparentsHrevisionKhaveparents\xa3Gdefault\xf4Hrequired\xf4DtypeDboolEnodes\xa2Hrequired\xf5DtypeDlistDtree\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullTrecommendedbatchsize\x1a\x00\x01\x86\xa0Gpushkey\xa2Dargs\xa4Ckey\xa2Hrequired\xf5DtypeEbytesInamespace\xa2Hrequired\xf5DtypeEbytesCnew\xa2Hrequired\xf5DtypeEbytesCold\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpushPrawstorefiledata\xa2Dargs\xa2Efiles\xa2Hrequired\xf5DtypeDlistJpathfilter\xa3Gdefault\xf6Hrequired\xf4DtypeDlistKpermissions\x81DpullQframingmediatypes\x81X&application/mercurial-exp-framing-0006Rpathfilterprefixes\xd9\x01\x02\x82Epath:Lrootfilesin:Nrawrepoformats\x83LgeneraldeltaHrevlogv1Lsparserevlog
   s>     \r\n
-  received frame(size=1622; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   s>     8\r\n
   s>     \x00\x00\x00\x01\x00\x02\x002
   s>     \r\n
   s>     0\r\n
   s>     \r\n
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   response: gen[
     {
       b'commands': {
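
A reading of the new `s> setsockopt(6, 1, 1) -> None (?)` lines (the trailing
`(?)` marks the line as optional test output): on common platforms level 6 is
IPPROTO_TCP and option 1 is TCP_NODELAY, i.e. the peer disables Nagle's
algorithm before speaking HTTP. The equivalent explicit call::

   import socket

   def disable_nagle(sock):
       # socket.IPPROTO_TCP == 6, socket.TCP_NODELAY == 1 on Linux
       sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
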
--- a/tests/test-wireproto-content-redirects.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-wireproto-content-redirects.t	Wed Apr 17 13:41:18 2019 -0400
@@ -51,6 +51,7 @@
   > command capabilities
   > EOF
   creating http peer for wire protocol version 2
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /?cmd=capabilities HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     vary: X-HgProto-1,X-HgUpgrade-1\r\n
@@ -71,6 +72,7 @@
   (remote redirect target target-a is compatible) (tls1.2 !)
   (remote redirect target target-a requires unsupported TLS versions: 1.2, 1.3) (no-tls1.2 !)
   sending capabilities command
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     POST /api/exp-http-v2-0003/ro/capabilities HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     accept: application/mercurial-exp-framing-0006\r\n
@@ -93,23 +95,19 @@
   s>     \t\x00\x00\x01\x00\x02\x01\x92
   s>     Hidentity
   s>     \r\n
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
   s>     13\r\n
   s>     \x0b\x00\x00\x01\x00\x02\x041
   s>     \xa1FstatusBok
   s>     \r\n
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   s>     6de\r\n
   s>     \xd6\x06\x00\x01\x00\x02\x041
   s>     \xa5Hcommands\xacIbranchmap\xa2Dargs\xa0Kpermissions\x81DpullLcapabilities\xa2Dargs\xa0Kpermissions\x81DpullMchangesetdata\xa2Dargs\xa2Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x84IbookmarksGparentsEphaseHrevisionIrevisions\xa2Hrequired\xf5DtypeDlistKpermissions\x81DpullHfiledata\xa2Dargs\xa4Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x83HlinknodeGparentsHrevisionKhaveparents\xa3Gdefault\xf4Hrequired\xf4DtypeDboolEnodes\xa2Hrequired\xf5DtypeDlistDpath\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullIfilesdata\xa3Dargs\xa4Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x84NfirstchangesetHlinknodeGparentsHrevisionKhaveparents\xa3Gdefault\xf4Hrequired\xf4DtypeDboolJpathfilter\xa3Gdefault\xf6Hrequired\xf4DtypeDdictIrevisions\xa2Hrequired\xf5DtypeDlistKpermissions\x81DpullTrecommendedbatchsize\x19\xc3PEheads\xa2Dargs\xa1Jpubliconly\xa3Gdefault\xf4Hrequired\xf4DtypeDboolKpermissions\x81DpullEknown\xa2Dargs\xa1Enodes\xa3Gdefault\x80Hrequired\xf4DtypeDlistKpermissions\x81DpullHlistkeys\xa2Dargs\xa1Inamespace\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullFlookup\xa2Dargs\xa1Ckey\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullLmanifestdata\xa3Dargs\xa4Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x82GparentsHrevisionKhaveparents\xa3Gdefault\xf4Hrequired\xf4DtypeDboolEnodes\xa2Hrequired\xf5DtypeDlistDtree\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullTrecommendedbatchsize\x1a\x00\x01\x86\xa0Gpushkey\xa2Dargs\xa4Ckey\xa2Hrequired\xf5DtypeEbytesInamespace\xa2Hrequired\xf5DtypeEbytesCnew\xa2Hrequired\xf5DtypeEbytesCold\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpushPrawstorefiledata\xa2Dargs\xa2Efiles\xa2Hrequired\xf5DtypeDlistJpathfilter\xa3Gdefault\xf6Hrequired\xf4DtypeDlistKpermissions\x81DpullQframingmediatypes\x81X&application/mercurial-exp-framing-0006Rpathfilterprefixes\xd9\x01\x02\x82Epath:Lrootfilesin:Nrawrepoformats\x83LgeneraldeltaHrevlogv1LsparserevlogHredirect\xa2Fhashes\x82Fsha256Dsha1Gtargets\x81\xa5DnameHtarget-aHprotocolDhttpKsnirequired\xf4Ktlsversions\x82C1.2C1.3Duris\x81Shttp://example.com/
   s>     \r\n
-  received frame(size=1750; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   s>     8\r\n
   s>     \x00\x00\x00\x01\x00\x02\x002
   s>     \r\n
   s>     0\r\n
   s>     \r\n
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   response: gen[
     {
       b'commands': {
@@ -383,6 +381,7 @@
   > command capabilities
   > EOF
   creating http peer for wire protocol version 2
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /?cmd=capabilities HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     vary: X-HgProto-1,X-HgUpgrade-1\r\n
@@ -403,6 +402,7 @@
   (remote redirect target target-a is compatible)
   (remote redirect target target-b uses unsupported protocol: unknown)
   sending capabilities command
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     POST /api/exp-http-v2-0003/ro/capabilities HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     accept: application/mercurial-exp-framing-0006\r\n
@@ -423,23 +423,19 @@
   s>     \t\x00\x00\x01\x00\x02\x01\x92
   s>     Hidentity
   s>     \r\n
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
   s>     13\r\n
   s>     \x0b\x00\x00\x01\x00\x02\x041
   s>     \xa1FstatusBok
   s>     \r\n
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   s>     6f9\r\n
   s>     \xf1\x06\x00\x01\x00\x02\x041
   s>     \xa5Hcommands\xacIbranchmap\xa2Dargs\xa0Kpermissions\x81DpullLcapabilities\xa2Dargs\xa0Kpermissions\x81DpullMchangesetdata\xa2Dargs\xa2Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x84IbookmarksGparentsEphaseHrevisionIrevisions\xa2Hrequired\xf5DtypeDlistKpermissions\x81DpullHfiledata\xa2Dargs\xa4Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x83HlinknodeGparentsHrevisionKhaveparents\xa3Gdefault\xf4Hrequired\xf4DtypeDboolEnodes\xa2Hrequired\xf5DtypeDlistDpath\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullIfilesdata\xa3Dargs\xa4Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x84NfirstchangesetHlinknodeGparentsHrevisionKhaveparents\xa3Gdefault\xf4Hrequired\xf4DtypeDboolJpathfilter\xa3Gdefault\xf6Hrequired\xf4DtypeDdictIrevisions\xa2Hrequired\xf5DtypeDlistKpermissions\x81DpullTrecommendedbatchsize\x19\xc3PEheads\xa2Dargs\xa1Jpubliconly\xa3Gdefault\xf4Hrequired\xf4DtypeDboolKpermissions\x81DpullEknown\xa2Dargs\xa1Enodes\xa3Gdefault\x80Hrequired\xf4DtypeDlistKpermissions\x81DpullHlistkeys\xa2Dargs\xa1Inamespace\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullFlookup\xa2Dargs\xa1Ckey\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullLmanifestdata\xa3Dargs\xa4Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x82GparentsHrevisionKhaveparents\xa3Gdefault\xf4Hrequired\xf4DtypeDboolEnodes\xa2Hrequired\xf5DtypeDlistDtree\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullTrecommendedbatchsize\x1a\x00\x01\x86\xa0Gpushkey\xa2Dargs\xa4Ckey\xa2Hrequired\xf5DtypeEbytesInamespace\xa2Hrequired\xf5DtypeEbytesCnew\xa2Hrequired\xf5DtypeEbytesCold\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpushPrawstorefiledata\xa2Dargs\xa2Efiles\xa2Hrequired\xf5DtypeDlistJpathfilter\xa3Gdefault\xf6Hrequired\xf4DtypeDlistKpermissions\x81DpullQframingmediatypes\x81X&application/mercurial-exp-framing-0006Rpathfilterprefixes\xd9\x01\x02\x82Epath:Lrootfilesin:Nrawrepoformats\x83LgeneraldeltaHrevlogv1LsparserevlogHredirect\xa2Fhashes\x82Fsha256Dsha1Gtargets\x82\xa3DnameHtarget-aHprotocolDhttpDuris\x81Shttp://example.com/\xa3DnameHtarget-bHprotocolGunknownDuris\x81Vunknown://example.com/
   s>     \r\n
-  received frame(size=1777; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   s>     8\r\n
   s>     \x00\x00\x00\x01\x00\x02\x002
   s>     \r\n
   s>     0\r\n
   s>     \r\n
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   response: gen[
     {
       b'commands': {
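
The deleted `received frame` expectations can be cross-checked against the raw bytes that remain in the transcript: each frame carries an 8-octet header consisting of a 3-octet little-endian payload length, a 2-octet little-endian request ID, a 1-octet stream ID, a 1-octet stream flags field, and a final octet holding the frame type in the high nibble and frame flags in the low nibble. As a sketch (assuming Python 3 and the test harness's `$PYTHON`), decoding the `\t\x00\x00\x01\x00\x02\x01\x92` header seen above reproduces the removed `size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos` annotation:

  $ "$PYTHON" << 'EOF'
  > h = b'\t\x00\x00\x01\x00\x02\x01\x92'
  > print('size', int.from_bytes(h[0:3], 'little'))     # 3-byte LE payload length
  > print('request', int.from_bytes(h[3:5], 'little'))  # 2-byte LE request id
  > print('stream', h[5], 'streamflags', h[6])          # stream id; 0x01 = stream-begin
  > print('type', h[7] >> 4, 'flags', h[7] & 0x0f)      # type 9 = stream-settings, flag 2 = eos
  > EOF
  size 9
  request 1
  stream 2 streamflags 1
  type 9 flags 2
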
@@ -720,6 +716,7 @@
   > command capabilities
   > EOF
   creating http peer for wire protocol version 2
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /?cmd=capabilities HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     vary: X-HgProto-1,X-HgUpgrade-1\r\n
@@ -739,6 +736,7 @@
   s>     \xa3GapibaseDapi/Dapis\xa1Pexp-http-v2-0003\xa5Hcommands\xacIbranchmap\xa2Dargs\xa0Kpermissions\x81DpullLcapabilities\xa2Dargs\xa0Kpermissions\x81DpullMchangesetdata\xa2Dargs\xa2Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x84IbookmarksGparentsEphaseHrevisionIrevisions\xa2Hrequired\xf5DtypeDlistKpermissions\x81DpullHfiledata\xa2Dargs\xa4Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x83HlinknodeGparentsHrevisionKhaveparents\xa3Gdefault\xf4Hrequired\xf4DtypeDboolEnodes\xa2Hrequired\xf5DtypeDlistDpath\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullIfilesdata\xa3Dargs\xa4Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x84NfirstchangesetHlinknodeGparentsHrevisionKhaveparents\xa3Gdefault\xf4Hrequired\xf4DtypeDboolJpathfilter\xa3Gdefault\xf6Hrequired\xf4DtypeDdictIrevisions\xa2Hrequired\xf5DtypeDlistKpermissions\x81DpullTrecommendedbatchsize\x19\xc3PEheads\xa2Dargs\xa1Jpubliconly\xa3Gdefault\xf4Hrequired\xf4DtypeDboolKpermissions\x81DpullEknown\xa2Dargs\xa1Enodes\xa3Gdefault\x80Hrequired\xf4DtypeDlistKpermissions\x81DpullHlistkeys\xa2Dargs\xa1Inamespace\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullFlookup\xa2Dargs\xa1Ckey\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullLmanifestdata\xa3Dargs\xa4Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x82GparentsHrevisionKhaveparents\xa3Gdefault\xf4Hrequired\xf4DtypeDboolEnodes\xa2Hrequired\xf5DtypeDlistDtree\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullTrecommendedbatchsize\x1a\x00\x01\x86\xa0Gpushkey\xa2Dargs\xa4Ckey\xa2Hrequired\xf5DtypeEbytesInamespace\xa2Hrequired\xf5DtypeEbytesCnew\xa2Hrequired\xf5DtypeEbytesCold\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpushPrawstorefiledata\xa2Dargs\xa2Efiles\xa2Hrequired\xf5DtypeDlistJpathfilter\xa3Gdefault\xf6Hrequired\xf4DtypeDlistKpermissions\x81DpullQframingmediatypes\x81X&application/mercurial-exp-framing-0006Rpathfilterprefixes\xd9\x01\x02\x82Epath:Lrootfilesin:Nrawrepoformats\x83LgeneraldeltaHrevlogv1LsparserevlogHredirect\xa2Fhashes\x82Fsha256Dsha1Gtargets\x81\xa4DnameNtarget-bad-tlsHprotocolEhttpsKsnirequired\xf5Duris\x81Thttps://example.com/Nv1capabilitiesY\x01\xe0batch branchmap $USUAL_BUNDLE2_CAPS$ changegroupsubset compression=$BUNDLE2_COMPRESSIONS$ getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1,sparserevlog unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash
   (redirect target target-bad-tls requires SNI, which is unsupported)
   sending capabilities command
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     POST /api/exp-http-v2-0003/ro/capabilities HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     accept: application/mercurial-exp-framing-0006\r\n
@@ -759,23 +757,19 @@
   s>     \t\x00\x00\x01\x00\x02\x01\x92
   s>     Hidentity
   s>     \r\n
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
   s>     13\r\n
   s>     \x0b\x00\x00\x01\x00\x02\x041
   s>     \xa1FstatusBok
   s>     \r\n
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   s>     6d1\r\n
   s>     \xc9\x06\x00\x01\x00\x02\x041
   s>     \xa5Hcommands\xacIbranchmap\xa2Dargs\xa0Kpermissions\x81DpullLcapabilities\xa2Dargs\xa0Kpermissions\x81DpullMchangesetdata\xa2Dargs\xa2Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x84IbookmarksGparentsEphaseHrevisionIrevisions\xa2Hrequired\xf5DtypeDlistKpermissions\x81DpullHfiledata\xa2Dargs\xa4Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x83HlinknodeGparentsHrevisionKhaveparents\xa3Gdefault\xf4Hrequired\xf4DtypeDboolEnodes\xa2Hrequired\xf5DtypeDlistDpath\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullIfilesdata\xa3Dargs\xa4Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x84NfirstchangesetHlinknodeGparentsHrevisionKhaveparents\xa3Gdefault\xf4Hrequired\xf4DtypeDboolJpathfilter\xa3Gdefault\xf6Hrequired\xf4DtypeDdictIrevisions\xa2Hrequired\xf5DtypeDlistKpermissions\x81DpullTrecommendedbatchsize\x19\xc3PEheads\xa2Dargs\xa1Jpubliconly\xa3Gdefault\xf4Hrequired\xf4DtypeDboolKpermissions\x81DpullEknown\xa2Dargs\xa1Enodes\xa3Gdefault\x80Hrequired\xf4DtypeDlistKpermissions\x81DpullHlistkeys\xa2Dargs\xa1Inamespace\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullFlookup\xa2Dargs\xa1Ckey\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullLmanifestdata\xa3Dargs\xa4Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x82GparentsHrevisionKhaveparents\xa3Gdefault\xf4Hrequired\xf4DtypeDboolEnodes\xa2Hrequired\xf5DtypeDlistDtree\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullTrecommendedbatchsize\x1a\x00\x01\x86\xa0Gpushkey\xa2Dargs\xa4Ckey\xa2Hrequired\xf5DtypeEbytesInamespace\xa2Hrequired\xf5DtypeEbytesCnew\xa2Hrequired\xf5DtypeEbytesCold\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpushPrawstorefiledata\xa2Dargs\xa2Efiles\xa2Hrequired\xf5DtypeDlistJpathfilter\xa3Gdefault\xf6Hrequired\xf4DtypeDlistKpermissions\x81DpullQframingmediatypes\x81X&application/mercurial-exp-framing-0006Rpathfilterprefixes\xd9\x01\x02\x82Epath:Lrootfilesin:Nrawrepoformats\x83LgeneraldeltaHrevlogv1LsparserevlogHredirect\xa2Fhashes\x82Fsha256Dsha1Gtargets\x81\xa4DnameNtarget-bad-tlsHprotocolEhttpsKsnirequired\xf5Duris\x81Thttps://example.com/
   s>     \r\n
-  received frame(size=1737; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   s>     8\r\n
   s>     \x00\x00\x00\x01\x00\x02\x002
   s>     \r\n
   s>     0\r\n
   s>     \r\n
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   response: gen[
     {
       b'commands': {
@@ -1046,6 +1040,7 @@
   > command capabilities
   > EOF
   creating http peer for wire protocol version 2
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /?cmd=capabilities HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     vary: X-HgProto-1,X-HgUpgrade-1\r\n
@@ -1065,6 +1060,7 @@
   s>     \xa3GapibaseDapi/Dapis\xa1Pexp-http-v2-0003\xa5Hcommands\xacIbranchmap\xa2Dargs\xa0Kpermissions\x81DpullLcapabilities\xa2Dargs\xa0Kpermissions\x81DpullMchangesetdata\xa2Dargs\xa2Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x84IbookmarksGparentsEphaseHrevisionIrevisions\xa2Hrequired\xf5DtypeDlistKpermissions\x81DpullHfiledata\xa2Dargs\xa4Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x83HlinknodeGparentsHrevisionKhaveparents\xa3Gdefault\xf4Hrequired\xf4DtypeDboolEnodes\xa2Hrequired\xf5DtypeDlistDpath\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullIfilesdata\xa3Dargs\xa4Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x84NfirstchangesetHlinknodeGparentsHrevisionKhaveparents\xa3Gdefault\xf4Hrequired\xf4DtypeDboolJpathfilter\xa3Gdefault\xf6Hrequired\xf4DtypeDdictIrevisions\xa2Hrequired\xf5DtypeDlistKpermissions\x81DpullTrecommendedbatchsize\x19\xc3PEheads\xa2Dargs\xa1Jpubliconly\xa3Gdefault\xf4Hrequired\xf4DtypeDboolKpermissions\x81DpullEknown\xa2Dargs\xa1Enodes\xa3Gdefault\x80Hrequired\xf4DtypeDlistKpermissions\x81DpullHlistkeys\xa2Dargs\xa1Inamespace\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullFlookup\xa2Dargs\xa1Ckey\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullLmanifestdata\xa3Dargs\xa4Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x82GparentsHrevisionKhaveparents\xa3Gdefault\xf4Hrequired\xf4DtypeDboolEnodes\xa2Hrequired\xf5DtypeDlistDtree\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullTrecommendedbatchsize\x1a\x00\x01\x86\xa0Gpushkey\xa2Dargs\xa4Ckey\xa2Hrequired\xf5DtypeEbytesInamespace\xa2Hrequired\xf5DtypeEbytesCnew\xa2Hrequired\xf5DtypeEbytesCold\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpushPrawstorefiledata\xa2Dargs\xa2Efiles\xa2Hrequired\xf5DtypeDlistJpathfilter\xa3Gdefault\xf6Hrequired\xf4DtypeDlistKpermissions\x81DpullQframingmediatypes\x81X&application/mercurial-exp-framing-0006Rpathfilterprefixes\xd9\x01\x02\x82Epath:Lrootfilesin:Nrawrepoformats\x83LgeneraldeltaHrevlogv1LsparserevlogHredirect\xa2Fhashes\x82Fsha256Dsha1Gtargets\x81\xa4DnameNtarget-bad-tlsHprotocolEhttpsKtlsversions\x82B42B39Duris\x81Thttps://example.com/Nv1capabilitiesY\x01\xe0batch branchmap $USUAL_BUNDLE2_CAPS$ changegroupsubset compression=$BUNDLE2_COMPRESSIONS$ getbundle httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx known lookup pushkey streamreqs=generaldelta,revlogv1,sparserevlog unbundle=HG10GZ,HG10BZ,HG10UN unbundlehash
   (remote redirect target target-bad-tls requires unsupported TLS versions: 39, 42)
   sending capabilities command
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     POST /api/exp-http-v2-0003/ro/capabilities HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     accept: application/mercurial-exp-framing-0006\r\n
@@ -1085,23 +1081,19 @@
   s>     \t\x00\x00\x01\x00\x02\x01\x92
   s>     Hidentity
   s>     \r\n
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
   s>     13\r\n
   s>     \x0b\x00\x00\x01\x00\x02\x041
   s>     \xa1FstatusBok
   s>     \r\n
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   s>     6d7\r\n
   s>     \xcf\x06\x00\x01\x00\x02\x041
   s>     \xa5Hcommands\xacIbranchmap\xa2Dargs\xa0Kpermissions\x81DpullLcapabilities\xa2Dargs\xa0Kpermissions\x81DpullMchangesetdata\xa2Dargs\xa2Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x84IbookmarksGparentsEphaseHrevisionIrevisions\xa2Hrequired\xf5DtypeDlistKpermissions\x81DpullHfiledata\xa2Dargs\xa4Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x83HlinknodeGparentsHrevisionKhaveparents\xa3Gdefault\xf4Hrequired\xf4DtypeDboolEnodes\xa2Hrequired\xf5DtypeDlistDpath\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullIfilesdata\xa3Dargs\xa4Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x84NfirstchangesetHlinknodeGparentsHrevisionKhaveparents\xa3Gdefault\xf4Hrequired\xf4DtypeDboolJpathfilter\xa3Gdefault\xf6Hrequired\xf4DtypeDdictIrevisions\xa2Hrequired\xf5DtypeDlistKpermissions\x81DpullTrecommendedbatchsize\x19\xc3PEheads\xa2Dargs\xa1Jpubliconly\xa3Gdefault\xf4Hrequired\xf4DtypeDboolKpermissions\x81DpullEknown\xa2Dargs\xa1Enodes\xa3Gdefault\x80Hrequired\xf4DtypeDlistKpermissions\x81DpullHlistkeys\xa2Dargs\xa1Inamespace\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullFlookup\xa2Dargs\xa1Ckey\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullLmanifestdata\xa3Dargs\xa4Ffields\xa4Gdefault\xd9\x01\x02\x80Hrequired\xf4DtypeCsetKvalidvalues\xd9\x01\x02\x82GparentsHrevisionKhaveparents\xa3Gdefault\xf4Hrequired\xf4DtypeDboolEnodes\xa2Hrequired\xf5DtypeDlistDtree\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpullTrecommendedbatchsize\x1a\x00\x01\x86\xa0Gpushkey\xa2Dargs\xa4Ckey\xa2Hrequired\xf5DtypeEbytesInamespace\xa2Hrequired\xf5DtypeEbytesCnew\xa2Hrequired\xf5DtypeEbytesCold\xa2Hrequired\xf5DtypeEbytesKpermissions\x81DpushPrawstorefiledata\xa2Dargs\xa2Efiles\xa2Hrequired\xf5DtypeDlistJpathfilter\xa3Gdefault\xf6Hrequired\xf4DtypeDlistKpermissions\x81DpullQframingmediatypes\x81X&application/mercurial-exp-framing-0006Rpathfilterprefixes\xd9\x01\x02\x82Epath:Lrootfilesin:Nrawrepoformats\x83LgeneraldeltaHrevlogv1LsparserevlogHredirect\xa2Fhashes\x82Fsha256Dsha1Gtargets\x81\xa4DnameNtarget-bad-tlsHprotocolEhttpsKtlsversions\x82B42B39Duris\x81Thttps://example.com/
   s>     \r\n
-  received frame(size=1743; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   s>     8\r\n
   s>     \x00\x00\x00\x01\x00\x02\x002
   s>     \r\n
   s>     0\r\n
   s>     \r\n
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   response: gen[
     {
       b'commands': {
@@ -1372,6 +1364,7 @@
   >     user-agent: test
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /api/simplecache/missingkey HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     user-agent: test\r\n
@@ -1416,6 +1409,7 @@
   >     user-agent: test
   > EOF
   using raw connection to peer
+  s> setsockopt(6, 1, 1) -> None (?)
   s>     GET /api/simplecache/47abb8efa5f01b8964d74917793ad2464db0fa2c HTTP/1.1\r\n
   s>     Accept-Encoding: identity\r\n
   s>     user-agent: test\r\n
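
The `s> setsockopt(6, 1, 1) -> None (?)` lines added throughout are consistent with the HTTP client now setting TCP_NODELAY on its socket before sending requests; the trailing `(?)` marks the line as optional test output, since not every platform emits it. Assuming Linux-style socket constants, the three numbers decode as level IPPROTO_TCP (6), option TCP_NODELAY (1), and value 1, which a one-liner can confirm:

  $ "$PYTHON" -c 'import socket; print(socket.IPPROTO_TCP, socket.TCP_NODELAY)'
  6 1
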
--- a/tests/test-wireproto-exchangev2.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-wireproto-exchangev2.t	Wed Apr 17 13:41:18 2019 -0400
@@ -36,7 +36,10 @@
 
 Test basic clone
 
-  $ hg --debug clone -U http://localhost:$HGPORT client-simple
+Output is flaky; save it in a file and check each part independently
+  $ hg --debug clone -U http://localhost:$HGPORT client-simple > clone-output
+
+  $ cat clone-output | grep -v "received frame"
   using http://localhost:$HGPORT/
   sending capabilities command
   query 1; heads
@@ -45,13 +48,6 @@
   sending command known: {
     'nodes': []
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=43; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
-  received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=1; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
   sending 1 commands
   sending command changesetdata: {
     'fields': set([
@@ -71,10 +67,6 @@
       }
     ]
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=941; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   add changeset 3390ef850073
   add changeset 4432d83626e8
   add changeset cd2534766bec
@@ -97,10 +89,6 @@
     ],
     'tree': ''
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=992; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   sending 1 commands
   sending command filesdata: {
     'fields': set([
@@ -121,13 +109,32 @@
       }
     ]
   }
+  updating the branch cache
+  new changesets 3390ef850073:caa2a465451d (3 drafts)
+  (sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
+
+  $ cat clone-output | grep "received frame"
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=43; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=1; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=941; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=992; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
   received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   received frame(size=901; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
-  updating the branch cache
-  new changesets 3390ef850073:caa2a465451d (3 drafts)
-  (sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
+
+  $ rm clone-output
 
 All changesets should have been transferred
 
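
The restructuring above is the template repeated for every clone and pull below: `received frame` lines interleave nondeterministically with the rest of the --debug output, so the transcript is captured once and its deterministic and flaky halves are asserted separately. Distilled into a sketch (repository URL and file name illustrative):

  $ hg --debug clone -U http://example.com/repo workdir > output
  $ grep -v 'received frame' output  # stable half: commands sent, changesets added
  $ grep 'received frame' output     # frame log: contents still checked, position ignored
  $ rm output
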
@@ -163,30 +170,22 @@
 
 Cloning only a specific revision works
 
-  $ hg --debug clone -U -r 4432d83626e8 http://localhost:$HGPORT client-singlehead
+Output is flaky; save it in a file and check each part independently
+  $ hg --debug clone -U -r 4432d83626e8 http://localhost:$HGPORT client-singlehead > clone-output
+
+  $ cat clone-output | grep -v "received frame"
   using http://localhost:$HGPORT/
   sending capabilities command
   sending 1 commands
   sending command lookup: {
     'key': '4432d83626e8'
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=21; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   query 1; heads
   sending 2 commands
   sending command heads: {}
   sending command known: {
     'nodes': []
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=43; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
-  received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=1; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
   sending 1 commands
   sending command changesetdata: {
     'fields': set([
@@ -205,10 +204,6 @@
       }
     ]
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=381; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   add changeset 3390ef850073
   add changeset 4432d83626e8
   checking for updated bookmarks
@@ -225,10 +220,6 @@
     ],
     'tree': ''
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=404; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   sending 1 commands
   sending command filesdata: {
     'fields': set([
@@ -246,13 +237,36 @@
       }
     ]
   }
+  updating the branch cache
+  new changesets 3390ef850073:4432d83626e8
+  (sent 6 HTTP requests and * bytes; received * bytes in responses) (glob)
+
+  $ cat clone-output | grep "received frame"
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=21; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=43; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=1; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=381; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=404; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
   received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   received frame(size=439; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
-  updating the branch cache
-  new changesets 3390ef850073:4432d83626e8
-  (sent 6 HTTP requests and * bytes; received * bytes in responses) (glob)
+
+  $ rm clone-output
 
   $ cd client-singlehead
 
@@ -269,7 +283,10 @@
 
 Incremental pull works
 
-  $ hg --debug pull
+Output is flaky; save it in a file and check each part independently
+  $ hg --debug pull > pull-output
+
+  $ cat pull-output | grep -v "received frame"
   pulling from http://localhost:$HGPORT/
   using http://localhost:$HGPORT/
   sending capabilities command
@@ -281,13 +298,6 @@
       'D2\xd86&\xe8\xa9\x86U\xf0b\xec\x1f*C\xb0\x7f\x7f\xbb\xb0'
     ]
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=43; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
-  received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=2; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
   searching for changes
   all local heads known remotely
   sending 1 commands
@@ -311,10 +321,6 @@
       }
     ]
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=573; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   add changeset cd2534766bec
   add changeset e96ae20f4188
   add changeset caa2a465451d
@@ -333,10 +339,6 @@
     ],
     'tree': ''
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=601; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   sending 1 commands
   sending command filesdata: {
     'fields': set([
@@ -355,14 +357,33 @@
       }
     ]
   }
+  updating the branch cache
+  new changesets cd2534766bec:caa2a465451d (3 drafts)
+  (run 'hg update' to get a working copy)
+  (sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
+
+  $ cat pull-output | grep "received frame"
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=43; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=2; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=573; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=601; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
   received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   received frame(size=527; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
-  updating the branch cache
-  new changesets cd2534766bec:caa2a465451d (3 drafts)
-  (run 'hg update' to get a working copy)
-  (sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
+
+  $ rm pull-output
 
   $ hg log -G -T '{rev} {node} {phase}\n'
   o  4 caa2a465451dd1facda0f5b12312c355584188a1 draft
@@ -459,7 +480,10 @@
   $ hg -R server-simple bookmark -r 3390ef850073fbc2f0dfff2244342c8e9229013a book-1
   $ hg -R server-simple bookmark -r cd2534766bece138c7c1afdc6825302f0f62d81f book-2
 
-  $ hg --debug clone -U http://localhost:$HGPORT/ client-bookmarks
+Output is flaky; save it in a file and check each part independently
+  $ hg --debug clone -U http://localhost:$HGPORT/ client-bookmarks > clone-output
+
+  $ cat clone-output | grep -v "received frame"
   using http://localhost:$HGPORT/
   sending capabilities command
   query 1; heads
@@ -468,13 +492,6 @@
   sending command known: {
     'nodes': []
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=43; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
-  received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=1; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
   sending 1 commands
   sending command changesetdata: {
     'fields': set([
@@ -494,10 +511,6 @@
       }
     ]
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=979; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   add changeset 3390ef850073
   add changeset 4432d83626e8
   add changeset cd2534766bec
@@ -522,10 +535,6 @@
     ],
     'tree': ''
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=992; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   sending 1 commands
   sending command filesdata: {
     'fields': set([
@@ -546,13 +555,32 @@
       }
     ]
   }
+  updating the branch cache
+  new changesets 3390ef850073:caa2a465451d (1 drafts)
+  (sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
+
+  $ cat clone-output | grep "received frame"
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=43; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=1; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=979; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=992; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
   received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   received frame(size=901; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
-  updating the branch cache
-  new changesets 3390ef850073:caa2a465451d (1 drafts)
-  (sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
+
+  $ rm clone-output
 
   $ hg -R client-bookmarks bookmarks
      book-1                    0:3390ef850073
@@ -563,7 +591,10 @@
   $ hg -R server-simple bookmark -r cd2534766bece138c7c1afdc6825302f0f62d81f book-1
   moving bookmark 'book-1' forward from 3390ef850073
 
-  $ hg -R client-bookmarks --debug pull
+Output is flaky; save it in a file and check each part independently
+  $ hg -R client-bookmarks --debug pull > pull-output
+
+  $ cat pull-output | grep -v "received frame"
   pulling from http://localhost:$HGPORT/
   using http://localhost:$HGPORT/
   sending capabilities command
@@ -576,13 +607,6 @@
       '\xca\xa2\xa4eE\x1d\xd1\xfa\xcd\xa0\xf5\xb1#\x12\xc3UXA\x88\xa1'
     ]
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=43; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
-  received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=3; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
   searching for changes
   all remote heads known locally
   sending 1 commands
@@ -607,14 +631,25 @@
       }
     ]
   }
+  checking for updated bookmarks
+  updating bookmark book-1
+  (run 'hg update' to get a working copy)
+  (sent 3 HTTP requests and * bytes; received * bytes in responses) (glob)
+
+  $ cat pull-output | grep "received frame"
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=43; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=3; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
   received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
   received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   received frame(size=65; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
-  checking for updated bookmarks
-  updating bookmark book-1
-  (run 'hg update' to get a working copy)
-  (sent 3 HTTP requests and * bytes; received * bytes in responses) (glob)
+
+  $ rm pull-output
 
   $ hg -R client-bookmarks bookmarks
      book-1                    2:cd2534766bec
@@ -647,7 +682,10 @@
 
 Narrow clone only fetches some files
 
-  $ hg --config extensions.pullext=$TESTDIR/pullext.py --debug clone -U --include dir0/ http://localhost:$HGPORT/ client-narrow-0
+Output is flaky; save it in a file and check each part independently
+  $ hg --config extensions.pullext=$TESTDIR/pullext.py --debug clone -U --include dir0/ http://localhost:$HGPORT/ client-narrow-0 > clone-output
+
+  $ cat clone-output | grep -v "received frame"
   using http://localhost:$HGPORT/
   sending capabilities command
   query 1; heads
@@ -656,13 +694,6 @@
   sending command known: {
     'nodes': []
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=22; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
-  received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=1; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
   sending 1 commands
   sending command changesetdata: {
     'fields': set([
@@ -681,10 +712,6 @@
       }
     ]
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=783; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   add changeset 3390ef850073
   add changeset b709380892b1
   add changeset 47fe012ab237
@@ -705,10 +732,6 @@
     ],
     'tree': ''
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=967; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   sending 1 commands
   sending command filesdata: {
     'fields': set([
@@ -733,13 +756,32 @@
       }
     ]
   }
+  updating the branch cache
+  new changesets 3390ef850073:97765fc3cd62
+  (sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
+
+  $ cat clone-output | grep "received frame"
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=22; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=1; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=783; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=967; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
   received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   received frame(size=449; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
-  updating the branch cache
-  new changesets 3390ef850073:97765fc3cd62
-  (sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
+
+  $ rm clone-output
 
 #if reporevlogstore
   $ find client-narrow-0/.hg/store -type f -name '*.i' | sort
@@ -751,7 +793,10 @@
 
 --exclude by itself works
 
-  $ hg --config extensions.pullext=$TESTDIR/pullext.py --debug clone -U --exclude dir0/ http://localhost:$HGPORT/ client-narrow-1
+Output is flaky; save it in a file and check each part independently
+  $ hg --config extensions.pullext=$TESTDIR/pullext.py --debug clone -U --exclude dir0/ http://localhost:$HGPORT/ client-narrow-1 > clone-output
+
+  $ cat clone-output | grep -v "received frame"
   using http://localhost:$HGPORT/
   sending capabilities command
   query 1; heads
@@ -760,13 +805,6 @@
   sending command known: {
     'nodes': []
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=22; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
-  received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=1; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
   sending 1 commands
   sending command changesetdata: {
     'fields': set([
@@ -785,10 +823,6 @@
       }
     ]
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=783; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   add changeset 3390ef850073
   add changeset b709380892b1
   add changeset 47fe012ab237
@@ -809,10 +843,6 @@
     ],
     'tree': ''
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=967; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   sending 1 commands
   sending command filesdata: {
     'fields': set([
@@ -840,13 +870,32 @@
       }
     ]
   }
+  updating the branch cache
+  new changesets 3390ef850073:97765fc3cd62
+  (sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
+
+  $ cat clone-output | grep "received frame"
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=22; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=1; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=783; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=967; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
   received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   received frame(size=709; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
-  updating the branch cache
-  new changesets 3390ef850073:97765fc3cd62
-  (sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
+
+  $ rm clone-output
 
 #if reporevlogstore
   $ find client-narrow-1/.hg/store -type f -name '*.i' | sort
@@ -860,7 +909,10 @@
 
 Mixing --include and --exclude works
 
-  $ hg --config extensions.pullext=$TESTDIR/pullext.py --debug clone -U --include dir0/ --exclude dir0/c http://localhost:$HGPORT/ client-narrow-2
+Output is flaky; save it in a file and check each part independently
+  $ hg --config extensions.pullext=$TESTDIR/pullext.py --debug clone -U --include dir0/ --exclude dir0/c http://localhost:$HGPORT/ client-narrow-2 > clone-output
+
+  $ cat clone-output | grep -v "received frame"
   using http://localhost:$HGPORT/
   sending capabilities command
   query 1; heads
@@ -869,13 +921,6 @@
   sending command known: {
     'nodes': []
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=22; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
-  received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=1; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
   sending 1 commands
   sending command changesetdata: {
     'fields': set([
@@ -894,10 +939,6 @@
       }
     ]
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=783; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   add changeset 3390ef850073
   add changeset b709380892b1
   add changeset 47fe012ab237
@@ -918,10 +959,6 @@
     ],
     'tree': ''
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=967; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   sending 1 commands
   sending command filesdata: {
     'fields': set([
@@ -949,13 +986,32 @@
       }
     ]
   }
+  updating the branch cache
+  new changesets 3390ef850073:97765fc3cd62
+  (sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
+
+  $ cat clone-output | grep "received frame"
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=22; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=1; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=783; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=967; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
   received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   received frame(size=160; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
-  updating the branch cache
-  new changesets 3390ef850073:97765fc3cd62
-  (sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
+
+  $ rm clone-output
 
 #if reporevlogstore
   $ find client-narrow-2/.hg/store -type f -name '*.i' | sort
@@ -967,7 +1023,10 @@
 --stream will use rawfiledata to transfer changelog and manifestlog, then
 fall through to get files data
 
-  $ hg --debug clone --stream -U http://localhost:$HGPORT client-stream-0
+Output is flaky; save it in a file and check each part independently
+  $ hg --debug clone --stream -U http://localhost:$HGPORT client-stream-0 > clone-output
+
+  $ cat clone-output | grep -v "received frame"
   using http://localhost:$HGPORT/
   sending capabilities command
   sending 1 commands
@@ -977,10 +1036,6 @@
       'manifestlog'
     ]
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=1275; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   updating the branch cache
   query 1; heads
   sending 2 commands
@@ -990,13 +1045,6 @@
       '\x97v_\xc3\xcdbO\xd1\xfa\x01v\x93,!\xff\xd1j\xdfC.'
     ]
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=22; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
-  received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=2; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
   searching for changes
   all remote heads known locally
   sending 1 commands
@@ -1019,10 +1067,6 @@
       }
     ]
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=13; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   checking for updated bookmarks
   sending 1 commands
   sending command filesdata: {
@@ -1043,15 +1087,37 @@
       }
     ]
   }
+  (sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
+
+  $ cat clone-output | grep "received frame"
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=1275; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=22; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=2; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=13; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
   received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   received frame(size=1133; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
-  (sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
+
+  $ rm clone-output
 
 --stream + --include/--exclude will only obtain some files
 
-  $ hg --debug --config extensions.pullext=$TESTDIR/pullext.py clone --stream --include dir0/ -U http://localhost:$HGPORT client-stream-2
+Output is flaky; save it in a file and check each part independently
+  $ hg --debug --config extensions.pullext=$TESTDIR/pullext.py clone --stream --include dir0/ -U http://localhost:$HGPORT client-stream-2 > clone-output
+
+  $ cat clone-output | grep -v "received frame"
   using http://localhost:$HGPORT/
   sending capabilities command
   sending 1 commands
@@ -1061,10 +1127,6 @@
       'manifestlog'
     ]
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=1275; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   updating the branch cache
   query 1; heads
   sending 2 commands
@@ -1074,13 +1136,6 @@
       '\x97v_\xc3\xcdbO\xd1\xfa\x01v\x93,!\xff\xd1j\xdfC.'
     ]
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=22; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
-  received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=2; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
   searching for changes
   all remote heads known locally
   sending 1 commands
@@ -1103,10 +1158,6 @@
       }
     ]
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=13; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   checking for updated bookmarks
   sending 1 commands
   sending command filesdata: {
@@ -1132,11 +1183,30 @@
       }
     ]
   }
+  (sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
+
+  $ cat clone-output | grep "received frame"
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=1275; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=22; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=2; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=13; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
   received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   received frame(size=449; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
-  (sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
+
+  $ rm clone-output
 
 #if reporevlogstore
   $ find client-stream-2/.hg/store -type f -name '*.i' | sort
@@ -1148,7 +1218,14 @@
 
 Shallow clone doesn't work with revlogs
 
-  $ hg --debug --config extensions.pullext=$TESTDIR/pullext.py clone --depth 1 -U http://localhost:$HGPORT client-shallow-revlogs
+Output is flaky; save it in a file and check each part independently
+  $ hg --debug --config extensions.pullext=$TESTDIR/pullext.py clone --depth 1 -U http://localhost:$HGPORT client-shallow-revlogs > clone-output
+  transaction abort!
+  rollback completed
+  abort: revlog storage does not support missing parents write mode
+  [255]
+
+  $ cat clone-output | grep -v "received frame"
   using http://localhost:$HGPORT/
   sending capabilities command
   query 1; heads
@@ -1157,13 +1234,6 @@
   sending command known: {
     'nodes': []
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=22; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
-  received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=1; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
   sending 1 commands
   sending command changesetdata: {
     'fields': set([
@@ -1182,10 +1252,6 @@
       }
     ]
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=783; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   add changeset 3390ef850073
   add changeset b709380892b1
   add changeset 47fe012ab237
@@ -1206,10 +1272,6 @@
     ],
     'tree': ''
   }
-  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
-  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=967; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
-  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   sending 1 commands
   sending command filesdata: {
     'fields': set([
@@ -1227,15 +1289,30 @@
       }
     ]
   }
+  (sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
+
+  $ cat clone-output | grep "received frame"
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=22; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=1; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=783; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
+  received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
+  received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=967; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
+  received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
   received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
   received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   received frame(size=1005; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
   received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
-  transaction abort!
-  rollback completed
-  (sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
-  abort: revlog storage does not support missing parents write mode
-  [255]
+
+  $ rm clone-output
 
   $ killdaemons.py
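
The .t hunks above all apply the same stabilization pattern: the order in
which "received frame" lines interleave with the rest of the --debug output
is nondeterministic, so each test now captures the whole output in a file
and verifies the two halves separately (grep -v for the stable command log,
grep for the frame traffic). A minimal Python sketch of the equivalent
split, assuming a captured log named clone-output as in the tests:

    # Hypothetical sketch; the tests themselves do this with grep/grep -v.
    with open('clone-output') as f:
        lines = f.read().splitlines()

    # Stable part: the command/debug log, order-deterministic.
    log = [l for l in lines if 'received frame' not in l]
    # Flaky part: the frame lines, validated as one block at the end.
    frames = [l for l in lines if 'received frame' in l]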
 
--- a/tests/test-wireproto.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-wireproto.py	Wed Apr 17 13:41:18 2019 -0400
@@ -78,6 +78,9 @@
         yield unmangle(f.value)
 
 class serverrepo(object):
+    def __init__(self, ui):
+        self.ui = ui
+
     def greet(self, name):
         return b"Hello, " + name
 
@@ -94,7 +97,7 @@
 
 wireprotov1server.commands[b'greet'] = (greet, b'name')
 
-srv = serverrepo()
+srv = serverrepo(uimod.ui())
 clt = clientpeer(srv, uimod.ui())
 
 def printb(data, end=b'\n'):
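
The serverrepo stub now takes a ui because wireproto server-side command
handlers generally expect to find one at repo.ui (for configuration reads
and output). A small hedged sketch of a handler relying on that; the
b'wireprototest' config key is hypothetical, not part of Mercurial:

    def greet(repo, proto, name):
        # Before the change above, the stub had no .ui attribute and this
        # lookup would raise AttributeError on the server side.
        if repo.ui.configbool(b'wireprototest', b'shout', False):
            name = name.upper()
        return b"Hello, " + name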
--- a/tests/test-worker.t	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/test-worker.t	Wed Apr 17 13:41:18 2019 -0400
@@ -83,8 +83,10 @@
   [255]
 
   $ hg --config "extensions.t=$abspath" --config 'worker.numcpus=8' \
-  > test 100000.0 abort --traceback 2>&1 | egrep '^(SystemExit|Abort)'
-  Abort: known exception
+  > test 100000.0 abort --traceback 2>&1 | egrep '(SystemExit|Abort)'
+      raise error.Abort(b'known exception')
+  mercurial.error.Abort: b'known exception' (py3 !)
+  Abort: known exception (no-py3 !)
   SystemExit: 255
 
 Traceback must be printed for unknown exceptions
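
The split (py3 !)/(no-py3 !) expectations above come from a core Python
difference: stringifying an exception constructed from bytes includes the
b'...' repr on Python 3 but not on Python 2. A self-contained sketch of just
that behavior, using a plain Exception rather than mercurial.error.Abort:

    e = Exception(b'known exception')
    print('%s: %s' % (type(e).__name__, e))
    # Python 3 prints: Exception: b'known exception'
    # Python 2 prints: Exception: known exception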
--- a/tests/tinyproxy.py	Tue Mar 19 09:23:35 2019 -0400
+++ b/tests/tinyproxy.py	Wed Apr 17 13:41:18 2019 -0400
@@ -20,7 +20,10 @@
 import socket
 import sys
 
-from mercurial import util
+from mercurial import (
+    pycompat,
+    util,
+)
 
 httpserver = util.httpserver
 socketserver = util.socketserver
@@ -77,10 +80,11 @@
         try:
             if self._connect_to(self.path, soc):
                 self.log_request(200)
-                self.wfile.write(self.protocol_version +
-                                 " 200 Connection established\r\n")
-                self.wfile.write("Proxy-agent: %s\r\n" % self.version_string())
-                self.wfile.write("\r\n")
+                self.wfile.write(pycompat.bytestr(self.protocol_version) +
+                                 b" 200 Connection established\r\n")
+                self.wfile.write(b"Proxy-agent: %s\r\n" %
+                                 pycompat.bytestr(self.version_string()))
+                self.wfile.write(b"\r\n")
                 self._read_write(soc, 300)
         finally:
             print("\t" "bye")
@@ -97,15 +101,17 @@
         try:
             if self._connect_to(netloc, soc):
                 self.log_request()
-                soc.send("%s %s %s\r\n" % (
-                    self.command,
-                    urlreq.urlunparse(('', '', path, params, query, '')),
-                    self.request_version))
+                url = urlreq.urlunparse(('', '', path, params, query, ''))
+                soc.send(b"%s %s %s\r\n" % (
+                    pycompat.bytestr(self.command),
+                    pycompat.bytestr(url),
+                    pycompat.bytestr(self.request_version)))
                 self.headers['Connection'] = 'close'
                 del self.headers['Proxy-Connection']
-                for key_val in self.headers.items():
-                    soc.send("%s: %s\r\n" % key_val)
-                soc.send("\r\n")
+                for key, val in self.headers.items():
+                    soc.send(b"%s: %s\r\n" % (pycompat.bytestr(key),
+                                              pycompat.bytestr(val)))
+                soc.send(b"\r\n")
                 self._read_write(soc)
         finally:
             print("\t" "bye")