Wed, 30 Sep 2015 21:47:27 -0700 debugmergestate: add support for printing out merge driver
Siddharth Agarwal <sid0@fb.com> [Wed, 30 Sep 2015 21:47:27 -0700] rev 26652
debugmergestate: add support for printing out merge driver
Wed, 30 Sep 2015 19:43:51 -0700 merge.mergedriver: don't try resolving files marked driver-resolved
Siddharth Agarwal <sid0@fb.com> [Wed, 30 Sep 2015 19:43:51 -0700] rev 26651
merge.mergedriver: don't try resolving files marked driver-resolved The driver is expected to take care of these.
Mon, 28 Sep 2015 18:34:06 -0700 merge.mergestate: add support for persisting driver-resolved files
Siddharth Agarwal <sid0@fb.com> [Mon, 28 Sep 2015 18:34:06 -0700] rev 26650
merge.mergestate: add support for persisting driver-resolved files A driver-resolved file is a file that's handled specially by the driver. A common use case for this state would be autogenerated files, the generation of which should happen only after all source conflicts are resolved. This is done with an uppercase letter because older versions of Mercurial will not know how to treat such files at all.
Wed, 30 Sep 2015 21:42:52 -0700 merge.mergestate: add support for persisting a custom merge driver
Siddharth Agarwal <sid0@fb.com> [Wed, 30 Sep 2015 21:42:52 -0700] rev 26649
merge.mergestate: add support for persisting a custom merge driver A 'merge driver' is a coordinator for the overall merge process. It will be able to control:

- tools for individual files, much like the merge-patterns configuration does today
- tools that can work across groups of files
- the ordering of file resolution
- resolution of automatically generated files
- adding and removing additional files to and from the dirstate

Since it is a critical part of the merge process, it really is part of the merge state. This is a lowercase character (i.e. optional) because ignoring this is fine for older versions of Mercurial -- however, if there are any files that are specially treated by the driver, we should abort. That will happen in upcoming patches. There is a potential security issue with storing the merge driver in the merge state. See the inline comments for more details.
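To illustrate the uppercase/lowercase record convention used in these two patches, here is a minimal, hypothetical sketch; the record letters, the SUPPORTED set and the reader function are invented for illustration and are not the actual mergestate code:

    # Hypothetical sketch: unknown lowercase records are optional and may be
    # skipped; unknown uppercase records are mandatory and force an abort.
    class UnsupportedMergeRecords(Exception):
        pass

    SUPPORTED = {'F', 'L', 'O'}   # record types an older reader understands

    def readrecords(records):
        kept = []
        for rtype, data in records:
            if rtype in SUPPORTED:
                kept.append((rtype, data))
            elif rtype.islower():
                # e.g. a merge driver record: advisory, safe to ignore
                continue
            else:
                # e.g. a driver-resolved file: cannot be handled safely
                raise UnsupportedMergeRecords(rtype)
        return kept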
Tue, 13 Oct 2015 12:30:39 -0700 exchange: support sorting URLs by client-side preferences
Gregory Szorc <gregory.szorc@gmail.com> [Tue, 13 Oct 2015 12:30:39 -0700] rev 26648
exchange: support sorting URLs by client-side preferences Not all bundles are appropriate for all clients. For example, someone with a slow Internet connection may want to prefer bz2 bundles over gzip bundles because they are smaller and don't take as long to transfer. This is information that a server cannot know on its own. So, we invent a mechanism for "preferring" server-advertised URLs based on their attributes. We could invent a negotiation between client and server where the client sends its preferences and the sorting/filtering is done server-side. However, this feels complex. We can avoid complicating the wire protocol and exposing ourselves to backwards compatibility concerns by performing the sorting locally. This patch defines a new config option for expressing preferred attributes in server-advertised bundles. At Mozilla, we leverage this feature so clients in fast data centers prefer uncompressed bundles. (We advertise gzip bundles first because that is a reasonable default.) I consider this an advanced feature. I'm on the fence as to whether it should be documented in `hg help config`.
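As a rough sketch of the client-side sorting this describes, assuming manifest entries are dicts of attributes and preferences are an ordered list of 'KEY=VALUE' strings (names and shapes here are illustrative, not the exact implementation):

    # Illustrative: entries matching earlier preferences sort first.
    def sortbundleentries(entries, prefers):
        prefs = [p.split('=', 1) for p in prefers]

        def score(entry):
            s = 0
            for i, (key, value) in enumerate(prefs):
                if entry.get(key) == value:
                    s += 2 ** (len(prefs) - i)   # earlier preference weighs more
            return -s                            # higher score sorts first

        return sorted(entries, key=score)

    entries = [
        {'URL': 'https://cdn/full.hg.gz', 'COMPRESSION': 'gzip'},
        {'URL': 'https://cdn/full.hg', 'COMPRESSION': 'none'},
    ]
    print(sortbundleentries(entries, ['COMPRESSION=none']))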
Tue, 13 Oct 2015 12:31:19 -0700 exchange: extract bundle specification components into own attributes
Gregory Szorc <gregory.szorc@gmail.com> [Tue, 13 Oct 2015 12:31:19 -0700] rev 26647
exchange: extract bundle specification components into own attributes An upcoming patch will enable clients to prefer certain bundles over others. The idea is that we define values of attributes from manifests that are desirable. The BUNDLESPEC attribute is a complex value consisting of multiple parts. Clients may wish to only prefer one of these parts. Having to specify every combination of BUNDLESPEC would be annoying. So, we extract the components of BUNDLESPEC into their own attributes so clients can easily filter on a sub-component.
Tue, 13 Oct 2015 12:29:50 -0700 exchange: support preserving external names when parsing bundle specs
Gregory Szorc <gregory.szorc@gmail.com> [Tue, 13 Oct 2015 12:29:50 -0700] rev 26646
exchange: support preserving external names when parsing bundle specs This will be needed to make client-side preferences easier to implement.
Tue, 13 Oct 2015 10:59:41 -0700 clonebundles: filter on SNI requirement
Gregory Szorc <gregory.szorc@gmail.com> [Tue, 13 Oct 2015 10:59:41 -0700] rev 26645
clonebundles: filter on SNI requirement Server Name Indication (SNI) is commonly used in CDNs and other hosted environments. Unfortunately, Python <2.7.9 does not support SNI, and when these older Python versions attempt to negotiate TLS with an SNI server, they raise an opaque error like "_ssl.c:507: error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure." We introduce a manifest attribute to denote that the URL requires SNI and have clients without SNI support filter out these entries.
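A minimal sketch of the client-side check this implies; the REQUIRESNI attribute name follows the description above, but the helper itself is illustrative:

    # Illustrative: drop entries requiring SNI when the running Python can't
    # negotiate it (ssl.HAS_SNI is absent on Python < 2.7.9).
    import ssl

    def filtersni(entries):
        havesni = getattr(ssl, 'HAS_SNI', False)
        return [e for e in entries
                if not (e.get('REQUIRESNI') == 'true' and not havesni)]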
Tue, 13 Oct 2015 11:45:30 -0700 clonebundles: filter on bundle specification
Gregory Szorc <gregory.szorc@gmail.com> [Tue, 13 Oct 2015 11:45:30 -0700] rev 26644
clonebundles: filter on bundle specification Not all clients are capable of reading every bundle. Currently, content negotiation to ensure a server sends a client a compatible bundle format is performed at request time. The response bundle is dynamically generated at request time, so this works fine. Clone bundles are statically generated *before* the request. This means that a modern server could produce bundles that a legacy client isn't capable of reading. Without some kind of "type hint" in the clone bundles manifest, a client may attempt to download an incompatible bundle. Furthermore, a client may not realize a bundle is incompatible until it has processed part of the bundle (imagine consuming a 1 GB changegroup bundle2 part only to discover the bundle2 part afterwards is incompatible). This would waste time and resources. And it isn't very user friendly. Clone bundle manifests thus need to advertise the *exact* format of the hosted bundles so clients may filter out entries that they don't know how to read. This patch introduces that mechanism. We introduce the BUNDLESPEC attribute to declare the "bundle specification" of the entry. Bundle specifications are parsed using exchange.parsebundlespecification, which uses the same strings as the "--type" argument to `hg bundle`. The supported bundle specifications are well defined and backwards compatible. When a client encounters a BUNDLESPEC that is invalid or unsupported, it silently ignores the entry.
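A hedged sketch of that filtering behaviour, with placeholder exception classes and a stand-in parser; the real parsing lives in exchange.py and understands more specifications than the hard-coded set shown here:

    # Illustrative: silently skip entries whose BUNDLESPEC is invalid or
    # unsupported; entries without a BUNDLESPEC are kept (no type hint).
    class InvalidBundleSpecification(Exception):
        pass

    class UnsupportedBundleSpecification(Exception):
        pass

    KNOWNSPECS = {'gzip-v1', 'bzip2-v1', 'none-v2', 'gzip-v2', 'bzip2-v2'}

    def parsebundlespec(spec):
        if '-' not in spec:
            raise InvalidBundleSpecification(spec)
        if spec not in KNOWNSPECS:
            raise UnsupportedBundleSpecification(spec)
        return tuple(spec.split('-', 1))         # (compression, version)

    def filterbundlespecs(entries):
        kept = []
        for entry in entries:
            spec = entry.get('BUNDLESPEC')
            if spec:
                try:
                    parsebundlespec(spec)
                except (InvalidBundleSpecification,
                        UnsupportedBundleSpecification):
                    continue                     # ignore entries we can't read
            kept.append(entry)
        return kept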
Tue, 13 Oct 2015 10:41:54 -0700 clonebundle: support bundle2
Gregory Szorc <gregory.szorc@gmail.com> [Tue, 13 Oct 2015 10:41:54 -0700] rev 26643
clonebundle: support bundle2 exchange.readbundle() can return 2 different types. We weren't handling the bundle2 case. Handle it. At some point we'll likely want a generic API for applying a bundle from a file handle. For now, create another one-off until we figure out what the unified bundle API should look like (addressing this is a can of worms I don't want to open right now).
Mon, 05 Oct 2015 21:31:32 -0700 update: also use 'destupdate' for pull and unbundle
Pierre-Yves David <pierre-yves.david@fb.com> [Mon, 05 Oct 2015 21:31:32 -0700] rev 26642
update: also use 'destupdate' for pull and unbundle Update can also be performed by 'hg pull --update' and 'hg unbundle'. We use the destupdate function in these cases too.
Tue, 29 Sep 2015 01:03:26 -0700 destupdate: also include bookmark related logic
Pierre-Yves David <pierre-yves.david@fb.com> [Tue, 29 Sep 2015 01:03:26 -0700] rev 26641
destupdate: also include bookmark related logic For the same reason, we move the bookmark related update logic into the 'destupdate' function. This requires extending the function's return values to include the bookmark that needs to move (more or less) and the bookmark to activate at the end of the update. See the function documentation for details on these return values.
Tue, 13 Oct 2015 10:57:54 -0700 exchange: refactor bundle specification parsing
Gregory Szorc <gregory.szorc@gmail.com> [Tue, 13 Oct 2015 10:57:54 -0700] rev 26640
exchange: refactor bundle specification parsing The old code was tailored to `hg bundle` usage and not appropriate for use as a general API, which clone bundles will require. The code has been rewritten to make it more generally suitable. We introduce dedicated error types to represent invalid and unsupported bundle specifications. The reason we need dedicated error types (rather than error.Abort) is that clone bundles will want to catch these exceptions as part of filtering entries. We don't want to swallow error.Abort on principle.
Tue, 13 Oct 2015 11:43:21 -0700 exchange: move bundle specification parsing from cmdutil
Gregory Szorc <gregory.szorc@gmail.com> [Tue, 13 Oct 2015 11:43:21 -0700] rev 26639
exchange: move bundle specification parsing from cmdutil Clone bundles require a well-defined string to specify the type of bundle that is listed so clients can filter compatible file types. The `hg bundle` command and cmdutil.parsebundletype() already establish the beginnings of a bundle specification format. As part of formalizing this format specification so it can be used by clone bundles, we move the specification parsing bits verbatim to exchange.py, which is a more suitable place than cmdutil.py. A subsequent patch will refactor this code to make it more appropriate as a general API.
Tue, 24 Mar 2015 00:28:28 +0900 revset: add optional offset argument to limit() predicate
Yuya Nishihara <yuya@tcha.org> [Tue, 24 Mar 2015 00:28:28 +0900] rev 26638
revset: add optional offset argument to limit() predicate It's common for GUI or web frontends to fetch revisions in chunks of a given batch size. Previously this was possible only if revisions were sorted by revision number:

    $ hg log -r 'limit({revspec} & :{last_known}, 101)'

So this patch introduces a general way to retrieve a chunk of revisions after skipping 'offset' revisions:

    $ hg log -r 'limit({revspec}, 100, {last_count})'

This is a dumb implementation. We can optimize it for baseset and spanset later.
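A simplified sketch of what limit() with an offset does, over a plain iterable standing in for a revset (not the actual revset implementation, which works lazily over baseset/spanset):

    # Illustrative: yield at most 'n' revisions after skipping 'offset' of them.
    from itertools import islice

    def limit(revs, n, offset=0):
        return list(islice(revs, offset, offset + n))

    # paging through a revset 100 revisions at a time, e.g. the third page:
    #   hg log -r 'limit({revspec}, 100, 200)'
    print(limit(range(1000), 100, 200))   # revisions 200..299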
Mon, 12 Oct 2015 17:19:22 +0900 revset: port limit() to support keyword arguments
Yuya Nishihara <yuya@tcha.org> [Mon, 12 Oct 2015 17:19:22 +0900] rev 26637
revset: port limit() to support keyword arguments The next patch will introduce a third 'offset' argument. This allows us to specify 'offset' without the 'n' argument.
Mon, 12 Oct 2015 17:14:47 +0900 revset: eliminate temporary reference to subset in limit() and last()
Yuya Nishihara <yuya@tcha.org> [Mon, 12 Oct 2015 17:14:47 +0900] rev 26636
revset: eliminate temporary reference to subset in limit() and last()
Wed, 14 Oct 2015 02:49:17 +0900 dirstate: read from pending file under HG_PENDING mode if it exists
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Wed, 14 Oct 2015 02:49:17 +0900] rev 26635
dirstate: read from pending file under HG_PENDING mode if it exists The True/False value of '_pendingmode' indicates whether 'dirstate.pending' was used to initialize its own '_map' and so on. When it is None, neither 'dirstate' nor 'dirstate.pending' has been read in yet. This is used to keep a consistent view between '_pl()' and '_read()'. Once '_pendingmode' is determined by reading either 'dirstate' or 'dirstate.pending', '_pendingmode' is kept even if 'invalidate()' is invoked. This should be reasonable, because:

- an effective 'invalidate()' invocation should occur only in wlock scope, and
- wlock can't be acquired under HG_PENDING mode

'_trypending()' is defined as a normal function so that similar code paths (in bookmarks and phases) can easily be factored out in the future.
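A rough sketch of the '_trypending()' idea, with simplified arguments; the real helper also records '_pendingmode' so later reads stay consistent, and the exact signature here is an assumption:

    # Illustrative: prefer 'dirstate.pending' over 'dirstate' while HG_PENDING
    # points at this repository root; fall back if the pending file is absent.
    import os

    def _trypending(root, vfs, filename):
        if root == os.environ.get('HG_PENDING'):
            try:
                return (vfs('%s.pending' % filename), True)
            except IOError:
                pass    # no pending file written yet
        return (vfs(filename), False)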
Wed, 14 Oct 2015 02:49:17 +0900 dirstate: make writing in-memory changes aware of transaction activity
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Wed, 14 Oct 2015 02:49:17 +0900] rev 26634
dirstate: make writing in-memory changes aware of transaction activity This patch delays writing in-memory changes out if a transaction is running. '_getfsnow()' is defined as a function so it can easily be hooked for ambiguous timestamp tests (see also fakedirstatewritetime.py). The 'if tr:' code path in this patch is still disabled at this revision, because there is no client invoking 'dirstate.write()' with a repo object. BTW, this patch changes 'dirstate.invalidate()' semantics around 'dirstate.write()' in a transaction scope:

before:

    with repo.transaction():
        dirstate.CHANGE('A')
        dirstate.write()      # change for A is written out here
        dirstate.CHANGE('B')
        dirstate.invalidate() # discards only change for B

after:

    with repo.transaction():
        dirstate.CHANGE('A')
        dirstate.write()      # change for A is still kept in memory
        dirstate.CHANGE('B')
        dirstate.invalidate() # discards changes for A and B

Fortunately, there is no code path expecting the former, at least in Mercurial itself, because 'dirstateguard' was introduced to remove such 'dirstate.invalidate()'.
Wed, 14 Oct 2015 02:49:17 +0900 dirstate: make functions for backup aware of transaction activity
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Wed, 14 Oct 2015 02:49:17 +0900] rev 26633
dirstate: make functions for backup aware of transaction activity Some comments in this patch assume that a subsequent patch changes 'dirstate.write()' as below:

    def write(self, repo):
        if not self._dirty:
            return
        tr = repo.currenttransaction()
        if tr:
            tr.addfilegenerator('dirstate', (self._filename,),
                                self._writedirstate, location='plain')
            return
        # omit actual writing out
        st = self._opener('dirstate', "w", atomictemp=True)
        self._writedirstate(st)

This patch makes '_savebackup()' write in-memory changes out, which clears 'self._dirty'. If the dirstate isn't changed after '_savebackup()', a subsequent 'dirstate.write()' never invokes 'tr.addfilegenerator()' because 'not self._dirty' is true. Then, 'tr.writepending()' unintentionally returns False if there are no other (e.g. changelog) changes pending, even though dirstate changes were already written out at '_savebackup()'. To avoid such a situation, this patch makes '_savebackup()' explicitly invoke 'tr.addfilegenerator()' if a transaction is running. '_savebackup()' should become aware of the transaction before 'write()' does, because the former depends on the behavior of the latter before this patch.
Wed, 14 Oct 2015 02:49:17 +0900 dirstate: move code paths for backup from dirstateguard to dirstate
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Wed, 14 Oct 2015 02:49:17 +0900] rev 26632
dirstate: move code paths for backup from dirstateguard to dirstate This centralizes into dirstate the logic to write in-memory changes out correctly according to transaction activity. Passing the 'repo' object to the newly added functions is needed to examine current transaction activity in subsequent patches, because 'dirstate' itself doesn't have a direct reference to it.
Tue, 13 Oct 2015 12:25:43 -0700 localrepo: restore dirstate to one before rollbacking if not parent-gone
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Tue, 13 Oct 2015 12:25:43 -0700] rev 26631
localrepo: restore dirstate to one before rollbacking if not parent-gone 'localrepository.rollback()' explicitly restores the dirstate, but only if at least one of the current parents of the working directory is removed by the rollback (a.k.a. "parent-gone"). After DirstateTransactionPlan, 'dirstate.write()' will mark '.hg/dirstate' as a file to be restored at rollback. https://mercurial.selenic.com/wiki/DirstateTransactionPlan Then, 'transaction.rollback()' restores '.hg/dirstate' regardless of the parents of the working directory at that time, and this causes unexpected dirstate changes if not "parent-gone" (e.g. "hg update" to another branch after "hg commit" or so, then "hg rollback"). To avoid such a situation, this patch restores the dirstate to the one before the rollback if not "parent-gone".

before:

    b1. restore dirstate explicitly, if "parent-gone"

after:

    a1. save dirstate before actual rollbacking via dirstateguard
    a2. restore dirstate via 'transaction.rollback()'
    a3. if "parent-gone":
        - discard backup (a1)
        - restore dirstate from 'undo.dirstate'
    a4. otherwise, restore dirstate from backup (a1)

Even though restoring the dirstate at (a3) after (a2) seems redundant, this patch keeps this existing code path, because:

- it isn't ensured that 'dirstate.write()' was invoked at least once while the transaction was running. If not, '.hg/dirstate' isn't restored at (a2). In addition, a rude 3rd party extension invoking 'dirstate.write()' without 'repo' while a transaction is running (see subsequent patches for detail) may break the consistency of a file backed up by the transaction.
- this patch mainly focuses on changes for DirstateTransactionPlan. Restoring the dirstate at (a3) should be cheap enough compared to the rollback itself. The redundancy will be removed in a next step.

The newly added test is almost meaningless at this point. It will be used to detect regressions while implementing delayed dirstate write-out.
Wed, 14 Oct 2015 02:40:04 +0900 parsers: make pack_dirstate take now in integer for consistency
FUJIWARA Katsunori <foozy@lares.dti.ne.jp> [Wed, 14 Oct 2015 02:40:04 +0900] rev 26630
parsers: make pack_dirstate take now in integer for consistency On recent OSes, 'stat.st_mtime' is a double precision floating point value meant to represent nanoseconds, but it is not wide enough for actual file timestamps: nowadays, only 52 - 32 = 20 bits are available for the fractional part of the seconds. Therefore, casting it to 'int' may cause an unexpected result. See also changeset 13272104bb07 fixing issue4836 for detail. For example, changed file A may be treated as "clean" unexpectedly in the steps below. "rounded now" is the value gotten by rounding via 'int(st.st_mtime)' or so.

    ---------------------+--------------------+------------------------
     "now"               |                    | timestamp of A (time_t)
    float  rounded time_t| action             | FS       dirstate
    ------ ------- ------+--------------------+--------  --------------
    N+.nnn N       N     |                    | ---      ---
                         | update file A      | N
                         | dirstate.normal(A) | N        N
    N+.999 N+1     N     |                    |
                         | dirstate.write()   | N        (*1)
                         |   :                |
                         | change file A      | N
                         |   :                |
    N+1.00 N+1     N+1   |                    |
                         | "hg status" (*2)   | N        N
    ------ ------- ------+--------------------+--------  --------------

Timestamp N of A in the dirstate isn't dropped at (*1), because "rounded now" is N+1 at that time, even if 'st_mtime' in 'time_t' is still N. Then, file A is unexpectedly treated as "clean" at (*2) in this case. For consistent handling of 'stat.st_mtime', this patch makes 'pack_dirstate()' take the 'now' argument not as a floating point value but as an integer. This patch makes 'PyArg_ParseTuple()' in 'pack_dirstate()' use format 'i' (= checking type mismatch or overflow), even though it is ensured that 'now' is in the range of a 32bit signed integer by masking with '_rangemask' (= 0x7fffffff) on the caller side. It should be cheap enough compared to packing itself, and useful to detect legacy code invoking 'pack_dirstate()' with 'now' as a floating point value.
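On the caller side, the change described here amounts to something like the following sketch (the '_rangemask' value matches the one dirstate already uses; the commented call is indicative only):

    # Illustrative: clamp 'now' to 32-bit signed range and pass an integer, so
    # pack_dirstate (now parsing with format 'i') can catch a stray float or
    # an overflowing value instead of silently truncating it.
    import time

    _rangemask = 0x7fffffff

    now = int(time.time()) & _rangemask      # integer seconds, never a float
    # parsers.pack_dirstate(dmap, copymap, parents, now)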
Tue, 29 Sep 2015 00:18:49 -0700 destupdate: include the 'check' logic
Pierre-Yves David <pierre-yves.david@fb.com> [Tue, 29 Sep 2015 00:18:49 -0700] rev 26629
destupdate: include the 'check' logic After moving logic from 'merge.update' into 'destutil.destupdate', we are now moving logic from 'command.update' into 'destutil.destupdate'. This will make the function actually useful in predicting (and altering) the update behavior.
Mon, 05 Oct 2015 03:50:47 -0700 destupdate: move the check related to the "clean" logic in the function
Pierre-Yves David <pierre-yves.david@fb.com> [Mon, 05 Oct 2015 03:50:47 -0700] rev 26628
destupdate: move the check related to the "clean" logic in the function We want this function to exactly predict the behavior of update. Moreover, we would like to remove all high-level behavior logic from the merge module, so this is a step forward. Now that the 'destupdate' function both computes and validates the destination, we can directly use it at the command level, ensuring that the 'hg update' command never calls 'merge.update' without a defined destination. This is a first (but significant) step toward having 'merge.update' always fed with a properly validated destination and free of high-level logic.
Mon, 12 Oct 2015 19:22:34 +0200 largefiles: better handling of merge of largefiles that are not available
Mads Kiilerich <madski@unity3d.com> [Mon, 12 Oct 2015 19:22:34 +0200] rev 26627
largefiles: better handling of merge of largefiles that are not available Before, when merging revisions with missing largefiles, the missing largefiles would be fetched as a part of the merge. If that failed (for example because the main repository temporarily was unavailable), the largefile would be left missing. However, the next commit would abort and (seemingly) fail when markcommitted tried to mark the standin file as normal and thus had to hash the largefile that didn't exist. (Actually, the commit would succeed but the largefile update that follows right after the commit transaction would abort - quite confusing.) To fix that, make sure that synclfdirstate only marks files as normal if they actually exist.
Sun, 11 Oct 2015 22:13:03 -0700 patchbomb: check that targets exist at the publicurl
Pierre-Yves David <pierre-yves.david@fb.com> [Sun, 11 Oct 2015 22:13:03 -0700] rev 26626
patchbomb: check that targets exist at the publicurl Advertising that the patches are available to be pulled requires that to actually be true. So we check revision availability on the remote before sending any email.
Mon, 12 Oct 2015 20:13:12 +0200 windows: read all global config files, not just the first (issue4491) (BC)
Mads Kiilerich <madski@unity3d.com> [Mon, 12 Oct 2015 20:13:12 +0200] rev 26625
windows: read all global config files, not just the first (issue4491) (BC) On Windows, hgrc.d/*.rc would not be read if mercurial.ini was found. That was far from obvious from the documentation and different from the behavior on posix systems. As a consequence of this, TortoiseHg cacert configuration placed in hgrc.d would not be read if an old global mercurial.ini still existed. "hg config -g" could also crash when no global configuration files could be found. Instead, make Windows behave like posix and read all global configuration files. The documentation was in a way right that individual config settings in the global Mercurial.ini would override settings from, for example, .hgrc.d\*.rc, but only because the .d files would not be read at all if a Mercurial.ini was found. The ordering in the documentation is thus changed to match the code.
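A minimal sketch of "read all, not just the first"; the install-directory layout and the precedence order shown are assumptions for illustration (the real code also consults the registry):

    # Illustrative: return every global rc file instead of stopping at the
    # first one found; files later in the list win on conflicting settings.
    import glob
    import os

    def systemrcpath(installdir):
        paths = []
        rcdir = os.path.join(installdir, 'hgrc.d')
        if os.path.isdir(rcdir):
            paths.extend(sorted(glob.glob(os.path.join(rcdir, '*.rc'))))
        ini = os.path.join(installdir, 'Mercurial.ini')
        if os.path.isfile(ini):
            paths.append(ini)
        return paths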
Fri, 09 Oct 2015 14:48:59 -0700 strip: factor out revset calculation for strip -B
Ryan McElroy <rmcelroy@fb.com> [Fri, 09 Oct 2015 14:48:59 -0700] rev 26624
strip: factor out revset calculation for strip -B This will allow reusing it in evolve and overriding it in other extensions.
Fri, 09 Oct 2015 11:22:01 -0700 clonebundles: support for seeding clones from pre-generated bundles
Gregory Szorc <gregory.szorc@gmail.com> [Fri, 09 Oct 2015 11:22:01 -0700] rev 26623
clonebundles: support for seeding clones from pre-generated bundles Cloning can be an expensive operation for servers because the server generates a bundle from existing repository data at request time. For a large repository like mozilla-central, this consumes 4+ minutes of CPU time on the server. It also results in significant network utilization. Multiply that by hundreds or even thousands of clients and the ensuing load can result in difficulties scaling the Mercurial server. Despite generation of bundles being deterministic until the next changeset is added, the generation of bundles to service a clone request is not cached. Each clone thus performs redundant work. This is wasteful. This patch introduces the "clonebundles" extension and related client-side functionality to help alleviate this deficiency. The client-side feature is behind an experimental flag and is not enabled by default. It works as follows:

1) Server operator generates a bundle and makes it available on a server (likely HTTP).
2) Server operator defines the URL of a bundle file in a .hg/clonebundles.manifest file.
3) Client `hg clone`ing sees the server is advertising bundle URLs.
4) Client fetches and applies the advertised bundle.
5) Client performs the equivalent of `hg pull` to fetch changes made since the bundle was created.

Essentially, the server performs the expensive work of generating a bundle once and all subsequent clones fetch a static file from somewhere. Scaling static file serving is a much more manageable problem than scaling a Python application like Mercurial. Assuming your repository grows less than 1% per day, the end result is that 99+% of the CPU and network load from clones is eliminated, allowing Mercurial servers to scale more easily. Serving static files also means data can be transferred to clients as fast as they can consume it, rather than as fast as servers can generate it. This makes clones faster. Mozilla has implemented functionality similar to this patch on hg.mozilla.org using a custom extension. We are hosting bundle files in Amazon S3 and CloudFront (a CDN) and have successfully offloaded >1 TB/day in data transfer from hg.mozilla.org, freeing up significant bandwidth and CPU resources. The positive impact has been stellar and I believe it has proved its value to be included in Mercurial core. I feel it is important for the client-side support to be enabled in core by default because it means that clients will get faster, more reliable clones and will enable server operators to reduce load without requiring any client-side configuration changes (assuming clients are up to date, of course). The scope of this feature is narrowly and specifically tailored to cloning, despite "serve pulls from pre-generated bundles" being a valid and useful feature. I would eventually like for Mercurial servers to support transferring *all* repository data via statically hosted files. You could imagine a server that siphons all pushed data to bundle files and instructs clients to apply a stream of bundles to reconstruct all repository data. This feature, while useful and powerful, is significantly more work to implement because it requires the server component to have awareness of discovery and a mapping of which changesets are in which files. Full clone bundles, by contrast, are much simpler.
The wire protocol command is named "clonebundles" instead of something more generic like "staticbundles" to leave the door open for a new, more powerful and more generic server-side component with minimal backwards compatibility implications. The name "bundleclone" is used by Mozilla's extension and would cause problems since there are subtle differences in Mozilla's extension. Mozilla's experience with this idea has taught us that some form of "content negotiation" is required. Not all clients will support all bundle formats or even URLs (advanced TLS requirements, etc). To ensure the highest uptake possible, a server needs to advertise multiple versions of bundles and clients need to be able to choose the most appropriate one from that list. The "attributes" in each server-advertised entry facilitate this filtering and sorting. Their use will become apparent in subsequent patches. Initial inspiration and credit for the idea of cloning from static files belongs to Augie Fackler and his "lookaside clone" extension proof of concept.
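By way of illustration, the end-to-end flow might look like the sketch below; the manifest contents, URLs and helper names are invented, though the attributes are the ones discussed in this series:

    # Illustrative client-side flow over an invented clonebundles manifest.
    MANIFEST = '''\
    https://cdn.example.com/full.hg.gz BUNDLESPEC=gzip-v1 REQUIRESNI=true
    https://mirror.example.com/full.hg BUNDLESPEC=none-v2
    '''

    def parsemanifest(text):
        entries = []
        for line in text.splitlines():
            fields = line.split()
            if not fields:
                continue
            entry = {'URL': fields[0]}
            for attr in fields[1:]:
                key, _, value = attr.partition('=')
                entry[key] = value
            entries.append(entry)
        return entries

    for entry in parsemanifest(MANIFEST):
        print(entry['URL'], entry.get('BUNDLESPEC'))
    # a real client would filter and sort these entries (see the sketches
    # above), apply the chosen bundle, then finish with the equivalent of
    # 'hg pull' to catch up to the server.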