Taipei Forcing Club

Computer science and contract bridge

Renaming bidding systems

I’ve been with my girlfriend for 2+ months and finally found the red and blue mascots for my bridge bidding systems.

Oshawott is my girlfriend’s favorite Pokémon. She thinks that Litten is mine since my fursona is an orange cat. These two Pokémon fit the color stickers from the WBF systems policy and also the Red Oni, Blue Oni trope.

By the way, I recently reached 1800+ Elo on Cuebids. I took a screenshot not only because I was proud of it but also because I doubted it would last long.

[Chen-Pang He reaching 1801 rated on Cuebids]

Rewriting Natsuki in Rust

I’m rewriting Natsuki (3.0) in Rust for several reasons. You can track my progress at this branch.

  1. Hosting: Free hosting services tend to assume that Node.js projects are websites. They put the app to sleep when idle, which effectively takes down a Discord bot. I’m now self-hosting Natsuki, but I’m not a fan of that. Rewriting in another language solves this issue.
  2. Economy commands: Shuttle provides free hosting along with a 500 MB database. This means we can build stateful features in the future, such as economy and games.
  3. Also, please note that I’m considering removing the NSFW functions. Discord has grown increasingly hostile to NSFW features. Making Natsuki SFW can spread her further.

Rebranding with a new icon

New icon

I made a new icon for the Taipei Forcing Club and this site. This icon honors both my hobbies: contract bridge and Japanese mahjong.

I used Inkscape to create the icon. I’m not a graphic designer, but I’m happy with the result. This icon combines lietxia’s 🀙 (1p) mahjong tile and ♣ in Noto Sans TC.

My bidding systems tested with Cuebids

I tested my bidding systems on Cuebids, a web app for pairs to practice bidding. All three accounts using my bidding systems made it to the top 10 this week. The following is a screenshot of the top 10 list just as Mijumaru Blue Club (then known as Bluepill (Canapé) Club) entered it. My own account and Irene, playing only Litten Polish Club, were the top two.

We are still in the top 5 as of this writing. We will try to stay in the top 10 but out of the top 3 to keep the environment competitive.

My collection of bidding systems is available on GitHub. It contains the aforementioned systems and a defensive system. Litten Polish Club is almost complete, Mijumaru Blue Club is still evolving, and the defensive system is still in its infancy.

This collection is free software. You can use it for any purpose, including commercial purposes, at your own risk. I would appreciate it if you could share your experience with me.

NLTC, a good single hand evaluator

Inspired by Thomas Andrews’ article on single hand evaluators, I want to find out how good one such evaluator, NLTC, is. Since LTCs need an established trump fit, we only consider suit offense tricks, i.e. the most tricks a partnership can take in any suit contract.

Thomas built his data with Matt Ginsberg’s (GIB) double dummy library containing 700K+ deals. However, available sources of that library are long gone, so to achieve similar statistical power, I had to solve about 1M deals myself. Thanks to DDS, the well-known double dummy solver in C++, along with modern computer architecture, we can solve 1M deals within a day.

I generated all data in this article with my bridge utility. It took 8 hours to solve 1M deals for only suit contracts. The following is the correlation coefficient matrix of various evaluations.

         Tricks  HCP+      BUM-RAP+  LTC        NLTC       ALTC
Tricks   1       0.508270  0.512822  -0.482577  -0.521692  -0.516951
HCP+             1         0.987782  -0.861016  -0.927226  -0.903264
BUM-RAP+                   1         -0.831184  -0.943001  -0.915663
LTC                                  1           0.919761   0.935646
NLTC                                             1          0.979818
ALTC                                                        1

The plus sign stands for short-suit points as in GIB bid descriptions. Since BUM-RAP gives fractional points, I made the adjustment more rigorous.

  • S = (void = 3, singleton = 2, doubleton = 1)
  • X+ = max(X, S, X + S − 1) for each suit

As for how this adjustment is slightly better than max(X, S) and X + S, there will be a separate article.
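As a sketch, the adjustment is easy to code; the function names are mine, and x stands for a suit’s honor points in HCP or BUM-RAP:

```c
/* Short-suit points: void = 3, singleton = 2, doubleton = 1 */
static double shortness(int length)
{
    return length < 3 ? 3 - length : 0;
}

/* X+ = max(X, S, X + S - 1) for one suit */
static double adjust(double x, int length)
{
    double s = shortness(length);
    double best = x > s ? x : s;
    double combined = x + s - 1;
    return combined > best ? combined : best;
}
```

For example, a small singleton counts max(0, 2, 1) = 2 points, while a singleton ace counts max(4, 2, 5) = 5.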

ALTC is what Jeff Rubens suggested: adjust −0.5 losers for each held ace and +0.5 for each guarded queen. NLTC bears this in mind but adjusts for missing aces and queens instead.

Things are different when we add up evaluations in each partnership. LTCs are less additive than HCP+. I think this phenomenon arises from counting values twice. A classic example is that a long suit in one hand and the corresponding doubleton in another are both counted as values a priori.

         Tricks  HCP+      BUM-RAP+  LTC        NLTC       ALTC
Tricks   1       0.861012  0.870637  -0.749171  -0.839970  -0.813800
HCP+             1         0.987937  -0.832577  -0.910715  -0.880367
BUM-RAP+                   1         -0.804589  -0.924311  -0.890025
LTC                                  1           0.922495   0.940156
NLTC                                             1          0.974347
ALTC                                                        1

Using NLTC in bidding

NLTC is more a single hand evaluator than an additive one. It is good for suit-oriented initial actions like preemptive openings and overcalls. Consider additive point counts to assess support instead.

The exact ranges of NLTC for preemptive openings are up to partnership agreement. I’d still like to point out some problems if you try to migrate from LTC to NLTC.

Don’t directly apply the rule of 2, 3, 4

With the plain LTC, we estimate the minimum playing tricks as 13 − LTC. However, NLTC counts more losers than LTC, especially in single suiters. NLTC counts x and xx as 1.5 and 2.5 losers respectively, i.e. 0.5 more each than LTC. Single suiters are rich in singletons and doubletons, and preempting with a void is discouraged besides. In general, NLTC intrinsically counts 1–1.5 more losers than LTC for a single suiter. I am still not sure how to map NLTC to preemptive bids; there will be an update if I finally figure it out.
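For reference, the per-suit loser counts behind these numbers can be sketched as follows; the weights follow the standard NLTC definition (1.5, 1, and 0.5 losers for a missing ace, king, and queen, each counted only if the suit is long enough), and the function name is mine:

```c
/* NLTC losers in one suit of the given length */
static double nltc_suit(int length, int has_ace, int has_king, int has_queen)
{
    double losers = 0;

    if (length >= 1 && !has_ace)
        losers += 1.5;
    if (length >= 2 && !has_king)
        losers += 1.0;
    if (length >= 3 && !has_queen)
        losers += 0.5;

    return losers;
}
```

A small singleton (x) counts 1.5 losers and a small doubleton (xx) 2.5, each 0.5 more than the plain LTC.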

My defense to strong 1♣

My favorite defense to strong 1♣ is Psycho Suction. I filled in 2NT and the 1-level as per the useful space principle. This defense is so flexible that it also applies to small 1♣, strong 1♦, and bidding over the forced responder, e.g. (1♣)-(1♦).

X Takeout double or stolen bid
1x Natural or lead-directing
1NT Non-touching pairs (♠♦ or ♥♣)
2♣ Clubs or red suits
2♦ Diamonds or majors
2♥ Hearts or black suits
2♠ Spades or minors (supplement to 2NT)
2NT Minors

Psycho Suction is an extension of the natural defense: it is a superset of the weak twos. The two suiters increase the probabilities of the 2-bids and unanchor them. Psycho Suction establishes a pass-or-correct system where the 2x overcalls are also pass-or-correct. The word “psycho” reflects the risk of playing an undoubled misfit, as with psychic bids.

Paradox advances

The paradox principle in a pass-or-correct system is to bypass underbids. For example, hearing (1♣)-2♦, we hold

♠ J98
♥ KJ1032
♦ Q98
♣ Q2

Option Advance
Diamonds 3♦
Majors 4♥

Bid 3♦. Besides diamond support, this bid also shows that we can raise either major. Underbidding with 2♥ conveys too little information and takes away too little space.

The paradox lies in that we do not bid our favorite strain: we make the cheapest call among the conjectured advances. As a result, we usually bid for the worst case.

Comparison with defenses to 1NT

Similarities

The most common subtype of strong 1♣ is a strong notrump. The opener is so strong that we are unlikely to have a game, so our main goal is to compete with partscores. Our cuebid strain is notrump because opponents have shown strength but no suit.

A common way to devise a defense to 1♣ is reusing our defense to 1NT. Suction is originally a defense to 1NT. Such a defense to 1NT with no anchor suit is banned in the ACBL Basic+ chart.1 However, artificial defenses to an artificial opening are generally allowed.2 Therefore, we can derive a defense to 1♣ from such an exotic defense to 1NT.

Differences

An obvious difference is that we have access to 1-level overcalls. Beware of the lowest two overcalls, as they give away bidding space.

X Gives 2 steps (P, XX)
1♦ Gives 1 step (P)
1♥ Neither gives nor takes
1♠ Takes 1 step
1NT Takes 2 steps

X and 1♦ are better made constructive so that the bidding space is useful to everyone. Natural 1♥ and 1♠ are fine. Try to increase the probability of 2-level overcalls, which take considerable space.

Another difference is that 1NT is limited and descriptive. The opener does not have much to say even if given a second chance to bid. The probability that the opener has a major is also lower. These reasons make major-oriented overcalls effective, e.g. Multi-Landy.

On the other hand, a strong artificial opening is usually unlimited. Forcing overcalls are less effective since they let the opener pass at ease. Thus, a natural defense to 1♣ is stronger than to 1NT. Besides, Inverted Psycho Suction may work for 1NT, but Psycho Suction works better for 1♣.

Discussions

5=4 two suiters

I include some 5=4 two suiters to increase the probabilities of the Psycho Suction bids. The rest can easily be expressed with a simple major overcall, where the 4-card suit is lower than the 5-card major.

1♥ 5+ hearts
1♠ 5+ spades
1NT 4+ spades 5+ diamonds or 4+ hearts 5+ clubs
2♣ 6+ clubs or 4+ hearts 5+ diamonds
2♦ 6+ diamonds or 4+ spades 5+ hearts
2♥ 6+ hearts or 4+ spades 5+ clubs
2♠ 6+ spades or 4+ diamonds 5+ clubs
2NT 5+ diamonds 4+ clubs

I believe 5=4 is the sweet spot for two suiters. The original 5-5 is too infrequent to make these overcalls different from weak twos. The 5-4 pattern is as frequent as single suiters, putting the pass at higher risk. Moreover, 5-4 bids are better guided with an extra step showing equal preference. For example, Landy shows 5-4 in the majors, and the 2♦ advance indicates equal preference. Nevertheless, this additional step bears a suit we deny, so it lets opponents come in cheaply.

Strength of X and 1♦

The strength to double is opening values. The double is a two-way call that is either a takeout double or a stolen opening bid with 5+ cards.

The 1♦ overcall is slightly stronger than 1♥ and 1♠ because it gives space yet shows no major. I suggest near-average strength, i.e. 10+ total points including 8+ HCP. I even recommend this approach after a natural 1♣, since 1♦ leaks information without taking space.

Overcalls with 16+ HCP

Congratulations on holding yet another strong club! A comeback after passing the first round definitely reveals 16+ HCP. There are also situations where an immediate double is better. To decide on the best overcall, we have to weigh their pros and cons.

The pass is better when coming back is easy. The following qualities suggest a pass.

  • Single suiter
  • 5-card major
  • Balanced

The takeout double is made for three suiters. It lets our partner decide the strain. If our partner bids our shortness, we can bid 1NT to provide choices again.

Two suiters fall between these scenarios. First, try to bid 1NT and 2NT since they are forcing. Next, hide a 4-card minor with a pass because introducing the longest suit is often enough. Then, we are left with the following two suiters to double.

  • Majors
  • 4=5 or more in same-colored suits

When a fit is not found yet, we can easily run to the cheaper suit. This strategy happens to spare 2♣, our rebid to show a regular opener with clubs.

Advance  Majors  Black  Red
1♦       1♥      1♠
1♥               1♠
1♠                      2♦

  1. Overcalls higher than 2♣ must be anchored except that 2♦ can be Multi.

  2. Except for fert bids, described as purely destructive in the ACBL convention charts

How to program mathematical functions

It is achievable to write fast and precise mathematical functions. There are cutting-edge implementations in computers, phones, calculators, and game consoles. Some of them are open source, like glibc and musl, from which we can learn. As a shameless plug, I have also been working on the mathematical functions in metallic.

Mathematical functions may seem to be hardware instructions, but we usually code them in software instead. After decades of evolution, the trend is to have the hardware deal with only basic operations. We can implement mathematics with only the operations required by IEEE 754, plus integer operations via type punning.

Target system

Instruction set

Which instructions does the target system have? C operators are probably supported. Other operations are less available even if IEEE 754 requires them. For example, fmod rarely compiles to a single instruction. It is usually done by long division, which then translates to a series of integer operations or partial-remainder instructions. This C function is also an example of how operators in other programming languages can be more expensive than expected, like % in JavaScript or Python.

Programming language

I suggest programming mathematical functions in C. It is fast and precise to evaluate complicated expressions with floating-point contraction. On supported platforms, a * b + c can compile to a fused multiply–add.

// Nearest integer for a nonnegative argument
float nearest(float x)
{
    const float rectifier = 0x1p23f;
    float y = x + rectifier;
    return y - rectifier;

    // Wrong: can be optimized to (x + 0)
    // return x + rectifier - rectifier;
}

Rounding

Round half to even is the default rounding in IEEE 754. The roundoff error of a series of n operations under this unbiased rounding grows only in O(√n). This will be the implied rounding method throughout this article unless otherwise specified.

Double rounding

The default rounding does not compose: rounding in two steps can differ from rounding directly. If we round 1.49 to the nearest tenth and then to the nearest integer, it becomes 2. Rounding to midpoints must be avoided in intermediate roundings.

Round to odd is a helpful intermediate rounding for binary numbers. It rounds irrepresentable numbers to the nearest representation with an odd significand. Only numbers with even significands can be midpoints or exact in a coarser precision. Therefore, round to odd does not interfere with subsequent roundings. Round to odd is also associative like directed roundings as it rounds all values between representations to either of them.

Exact addition

We can obtain the exact error of an addition under the default rounding. This technique is useful for storing precise intermediates to stop error from propagating. The idea is to find s + e = a + b, where s is the rounded sum and e is the error term. The error term is defined whenever s does not overflow.

Compensated summation produces the exact error term given two preconditions: the base of the floating-point system (FLT_RADIX) is at most 3, and logb(a) ≥ logb(b).

s = a + b;
e = a - s + b;

We can generalize this algorithm with a comparison, since |a| ≥ |b| implies logb(a) ≥ logb(b). Branching is nevertheless inefficient. There is a branchless algorithm that works most of the time: it overflows only if an operand is the largest finite representation or its negative.

s = a + b;
x = s - a;
y = s - x;
e = (a - y) + (b - x);
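Wrapped into a self-contained function (the name is mine), the branchless algorithm can be checked on a sum that rounds away its low part:

```c
/* Branchless 2Sum: on return, *s + *e == a + b exactly (unless *s overflows) */
static void two_sum(double a, double b, double *s, double *e)
{
    double x, y;

    *s = a + b;
    x = *s - a;        /* the part of b that made it into the sum */
    y = *s - x;        /* the part of a that made it into the sum */
    *e = (a - y) + (b - x);
}
```

For example, adding 2^-60 to 1 rounds to 1, and the error term recovers the lost 2^-60.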

Exact multiplication

First, a double is large enough to store the exact product of any two floats. Therefore, it is preferred to cast and multiply. This method is straightforward and fast, and double-precision multiplication is widely supported.

double multiply(float a, float b)
{
    return (double)a * b;
}

Then, we try to find s + e = ab. If the FMA instructions are available, use them. Probe this feature with FP_FAST_FMA[FL]? macros.

s = a * b;
e = fma(a, b, -s);

Next, without all the hardware witchcraft, we can still count on schoolbook multiplication. With an even number of significant digits, equally split them and obtain the higher part with a bitwise AND.

Even if the number of significant bits is odd, we can split a binary significand with the default rounding. Take IEEE 754 double precision as an example, which has 53 significant bits. Its magic multiplier is 2^27 + 1, where 27 = (53 + 1) / 2.

double split(double x)
{
    double s = (0x1p27 + 1) * x;
    double c = x - s;

    return s + c;
}

The above function returns the higher half of the argument. We can extract the lower half by subtracting the higher half from the argument. Each half is guaranteed to have at most 26 significant bits. Since the halves can have opposite signs, the seemingly lost bit is recovered.
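Combining the split with the schoolbook partial products gives Dekker’s product, sketched below; the function names are mine, and overflow handling is omitted:

```c
/* Veltkamp split: the higher half of x, with at most 26 significant bits */
static double split(double x)
{
    double s = (0x1p27 + 1) * x;
    double c = x - s;

    return s + c;
}

/* Dekker's product: on return, *s + *e == a * b exactly */
static void two_product(double a, double b, double *s, double *e)
{
    double ah = split(a), al = a - ah;
    double bh = split(b), bl = b - bh;

    *s = a * b;
    /* Partial products of the halves are exact in double precision */
    *e = ((ah * bh - *s) + ah * bl + al * bh) + al * bl;
}
```

For example, (1 + 2^-30)^2 = 1 + 2^-29 + 2^-60: the rounded product drops the last term, and the error term recovers it.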

Table maker’s dilemma

The cost of a correctly rounded transcendental function is unknown unless probed with brute force. Faithfully rounded versions are much faster and generally preferable, though they may round either up or down. No matter how precise intermediate results are, they can be close to a turning point of the given rounding. For example, suppose x is a mathematical result equal to x* + 0.49 ulp, where x* is an exact floating-point number. An exquisite approximation gives x* + 0.51 ulp, which is only 0.02 ulp off. Nevertheless, it becomes x* + 1 ulp after the default rounding, which is 1 ulp off from the correctly rounded x*.

We can correctly round an algebraic function by solving its polynomial equation at the turning point and comparing the results. However, this extra cost is unwelcome if faithful rounding is enough: it is unlikely that a correctly rounded program solves a real-world problem that a faithfully rounded one does not. IEEE 754 does not require correct rounding for the cube root, so I made cbrt faithfully rounded in metallic. Its error can be even larger in glibc and other C libraries.

Approximation

Eventually, we break down mathematical functions to basic arithmetics. This section covers how to turn mathematics into source code.

Argument reduction

Sometimes we can shrink the domain to a short interval with an identity. For example, to compute exp for a binary format, we can divide its argument x by ln 2.

\begin{align*} x &= n \ln 2 + r \\ \exp x &= 2^n \exp r \end{align*}

Where n is an integer, bit twiddling takes care of the multiplication by 2^n. If we pick n as the integer nearest to x / ln 2, we simultaneously restrict r to [-0.5 ln 2, 0.5 ln 2].

The approximation to exp r is fast because [-0.5 ln 2, 0.5 ln 2] is a short interval: we can reach the desired precision with only a few terms.

It is also precise because r is small. Computations involving small numbers are accurate: floating-point numbers are dense near the origin, since they are essentially scientific notation. In IEEE 754 binary formats, there are as many representations in (0, 1.5) as in (1.5, ∞). Therefore, it is wise to shift the domain close to 0.

Transformations

Most mathematical functions we compute are well-behaved. We can save computations by taking advantage of continuity, differentiability, or symmetry.

When a function f passes through the origin and is monotone there, divide it by the identity function and approximate the quotient g instead.

\begin{align*} f(0) &= 0 \\ f(x) &= x g(x) \end{align*}

This transformation explicitly omits the constant term and reduces the overall relative error. The value of g(0) can be any finite number. We define g as a continuous extension for rigor.

\[ g(x) = \begin{cases} \displaystyle \frac{f(x)}{x} & \mbox{if } x \ne 0 \\ \displaystyle \lim_{t \to 0} \frac{f(t)}{t} & \mbox{if } x = 0 \end{cases} \]

Given an approximant ĝ of g, the overall absolute error |x| |ĝ(x) − g(x)| also tends to 0 as x approaches 0. This transformation enables approximating g without a weight function and simplifies calculation.

When f is an even function, view it as another function g of a squared variable.

\begin{align*} f(x) &= f (-x) \\ f(x) &= g \left( x^2 \right) \\ g(x) &= f \left( \sqrt x \right) \end{align*}

This transformation explicitly omits odd terms and halves the degree of the approximant.

An odd function is a combination of the above two. It is a product of the identity function and an even function.

\begin{align*} f(x) &= -f (-x) \\ f(x) &= x g \left( x^2 \right) \\ g(x) &= \frac{f \left( \sqrt x \right)}{\sqrt x} \end{align*}

The value of g(0) does not affect the approximation of g as it creates no hole in the domain. In practice, set the lower bound of the interval to a tiny positive number like 2^-200, and everything is fine.
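As a sketch of the odd-function transformation, a truncated Taylor series for sin becomes x times a short polynomial in x²; these coefficients are for illustration, not a tuned minimax fit:

```c
/* sin x = x g(x^2) near the origin, with g approximated by a polynomial */
static double sin_sketch(double x)
{
    double y = x * x;
    double g = 1 + y * (-1.0 / 6 + y * (1.0 / 120 + y * (-1.0 / 5040)));

    return x * g;
}
```

The oddness is exact by construction: negating x negates the result bit for bit.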

Remez algorithm

The Remez exchange algorithm is an iterative minimax algorithm: it minimizes the maximum error of a rational approximation to a function. The best explanation of this algorithm I found is in the Boost libraries. I recommend Remez.jl, a public module in the Julia language; it works out of the box after installation.

For example, the following snippet finds a cubic approximation of cos(√·) in [0, (π/4)^2] with minimax absolute error. The last argument is 0 because we want a polynomial, whose denominator is a constant (of degree 0) when regarded as a rational function.

import Remez

N, D, E, X = Remez.ratfn_minimax(x -> cos(√x), (0, (big(π) / 4)^2), 3, 0)

The variables N, D, E, X are the numerator, the denominator, the maximum error, and coordinates of the extrema, respectively. In this case, we are interested in N and E only. If we run the snippet in the REPL, it is straightforward to inspect variables.

julia> N
4-element Array{BigFloat,1}:
  0.9999999724233229210670051040057597041917874465747537951681676248240168483719746
 -0.4999985669584884771720232450657038606385147149244782395789475085368551172067715
  0.04165502688425152443762347668780274316867072837392713367475023020736799395672903
 -0.001358590851011329858521158876238716265345398772374942259275377959127201806930143

julia> E
2.757667707893299489599424029580821255342524620485822487536621483700643103529491e-08

The resulting coefficients are in ascending order. For example, the first element of N is the constant term.

Polynomial evaluation

The best polynomial evaluation method depends on the system. For example, pipelining influences execution time. Luckily, there are well-known evaluation schemes that provide decent performance and reasonable errors.

Horner’s scheme uses the fewest operations. It is the fastest if the argument is already a vector, and it is usually the most accurate method for a polynomial approximant of a well-conditioned function. However, its dependency chain is also the longest: every operation but one depends on the result of another. It underuses the pipeline, so it is less than ideal on pipelined, superscalar processors.

On the other hand, Estrin’s scheme tries to be as parallel as possible. It groups terms in a binary fashion to achieve the shallowest dependency tree at the expense of O(log n) extra squarings.

There are also other evaluation schemes with different pros and cons. Benchmark to find the most suited one if their difference is critical.
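For a degree-3 polynomial, the two schemes compare as below; with exact inputs they agree, while Estrin’s two inner products are independent and can overlap in a pipeline:

```c
/* a[0] + a[1] x + a[2] x^2 + a[3] x^3, two ways */
static double horner(const double a[4], double x)
{
    return a[0] + x * (a[1] + x * (a[2] + x * a[3]));
}

static double estrin(const double a[4], double x)
{
    double y = x * x;  /* one extra squaring */
    return (a[0] + a[1] * x) + y * (a[2] + a[3] * x);
}
```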

Responder's direct cuebid

Responder’s direct cuebid is a disputed and under-discussed topic. There are two popular usages of this bid:

  • Limit raise or better
  • Generic game force

I suggest different approaches to major and minor openings.

After a major opening

The cuebid is a limit raise or better. By the pigeonhole principle, you must have at least one of these:

  • 3-card support
  • 5-card unbid suit
  • 4-4 unbid suits
  • 4-card adverse suit

The promise of a fit clears the way for finding a game. Other calls better describe game-forcing hands without 3-card support.

  • 5-card unbid suit: free bid
  • 4-4 unbid: negative double
  • 4-card adverse suit: 3NT

After a minor opening

We make the cuebid a versatile tool to combine the advantages of popular treatments.

According to the previous section, you can hold an embarrassingly strong hand with no 4-card support, no biddable side suit, and no stopper in the adverse suit. Imagine holding the following hand after 1♣-(1♠)-?

♠ xxx
♥ Axx
♦ Axxx
♣ Axx

You could have bid 3NT if RHO did not overcall, but you cannot now because there is no spade stopper. You are wary of passing because 3NT is still playable if your partner has a spade stopper. Ask for one with 2♠.

Besides, we can keep the limit raise. Your partner is eager to show a stopper, as notrump games score more than minor-suit games. Including the limit raise in the cuebid does not affect the bidding structure.

Don't implement CMPLX with +

For brevity, let’s forget that CMPLX is a macro. The following code seems to be a cross-platform solution.

inline double _Complex CMPLX(double x, double y)
{
	return x + I * y;
}

It is not. Whether I is complex or imaginary, I * y evaluates to (+0, y), which is then added to the real number x. If x happens to be -0, its sign is not preserved because -0 + (+0) = +0.

I think we can only stick with compiler-specific constructs for now. :(

How to set charset of all text responses on nginx

All text files on a site usually share the same character encoding, and UTF-8 in particular is the modern de facto standard. However, the default charset_types does not contain text/css, let alone other non-plain text types like text/markdown.

The default charset_types should be text/* because most text types are parsed as ASCII (us-ascii) by default for backward compatibility. A text/xml response is parsed as ASCII even if the BOM and the XML declaration tell otherwise. Therefore, we should use application/xml for XML responses now.

Nevertheless, the charset_types directive checks for complete matches only. Luckily, the map directive understands regexes, and charset_types accepts a variable.

map $sent_http_content_type $charset {
    ~^text/   utf-8;
}

charset       $charset;
charset_types *;

This setting makes nginx specify UTF-8 for all text responses, e.g. Content-Type: text/css; charset=utf-8.