We just covered instantiations, and learned that it is often possible to infer generic type arguments. Now we'll examine how that mechanism works. This part gets a bit into the weeds, so you'll be forgiven if you choose to skip over it.
Type inference
A use of a generic function may omit some or all type arguments if they can be inferred from the context within which the function is used, including the constraints of the function’s type parameters. Type inference succeeds if it can infer the missing type arguments and instantiation succeeds with the inferred type arguments. Otherwise, type inference fails and the program is invalid.
Type inference uses the type relationships between pairs of types for inference: For instance, a function argument must be assignable to its respective function parameter; this establishes a relationship between the type of the argument and the type of the parameter. If either of these two types contains type parameters, type inference looks for the type arguments to substitute the type parameters with such that the assignability relationship is satisfied. Similarly, type inference uses the fact that a type argument must satisfy the constraint of its respective type parameter.
So let’s try to make this relationship a bit more concrete.
If we have a function `sum(a, b int) int`, and a function call `sum(1, 2)`, the relationship described is the one between the types of the arguments `1` and `2` (untyped numeric constants) and the types of the function parameters `a` and `b` (`int`). The untyped numeric constants are assignable to the respective parameters of type `int`, so that constraint is satisfied.
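Here's a minimal, runnable sketch of that non-generic case (the `main` wrapper and the printed output are just for illustration):

```go
package main

import "fmt"

// sum is the non-generic version: both parameters have type int,
// so the untyped constants 1 and 2 are assignable to them directly.
func sum(a, b int) int {
	return a + b
}

func main() {
	fmt.Println(sum(1, 2)) // 3
}
```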
The same relationship between the types of the arguments and the types of the parameters can be used in the case of a generic function, to try to infer the type arguments.
Given the generic function `sum[T ~int | ~float64](a, b T) T` and the function call `sum(1, 2)`, we can infer that `T` is `int`, because the arguments are `int`s. Actually, they're not `int`s; they're untyped numeric constants, which happen to default to `int`.
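A runnable sketch of that generic version, with the equivalent explicit instantiation shown alongside the inferred call:

```go
package main

import "fmt"

// sum works for any type whose underlying type is int or float64.
func sum[T ~int | ~float64](a, b T) T {
	return a + b
}

func main() {
	// Both untyped constants default to int, so T is inferred as int.
	fmt.Println(sum(1, 2)) // 3

	// The same call with the instantiation written out explicitly.
	fmt.Println(sum[int](1, 2)) // 3
}
```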
But if we change the call to `sum(1, 2.0)`, the second argument's default type becomes `float64`, and we get a different inferred type. See it in the playground.
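The same sketch with only the call changed illustrates the different inference result:

```go
package main

import "fmt"

func sum[T ~int | ~float64](a, b T) T {
	return a + b
}

func main() {
	// 2.0 is an untyped floating-point constant with default type float64,
	// so T is inferred as float64 (and 1 becomes a float64 as well).
	fmt.Println(sum(1, 2.0)) // 3
}
```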
Quotes from The Go Programming Language Specification Version of August 2, 2023