The errgroup package we’ve been looking at lately has one more trick up its sleeve: limiting the number of running goroutines.
Let’s imagine we’re building an HTTP handler that responds with a list of all customer orders. Doing so requires making a number of calls to the ‘orders’ microservice. Some customers may have made hundreds or thousands of orders. We don’t want to look them all up simultaneously, so we’ll limit our client to 10 concurrent requests. But as in previous examples, any error should abort the entire process. Let’s see what that might look like:
func findAllOrders(ctx context.Context, orderIDs []int) ([]*Order, error) {
    g, ctx := errgroup.WithContext(ctx)
    // Allow at most 10 goroutines to run at once.
    g.SetLimit(10)
    orders := make([]*Order, len(orderIDs))
    for i, orderID := range orderIDs {
        g.Go(func() error {
            order, err := fetchSingleOrder(ctx, orderID)
            if err != nil {
                return err
            }
            orders[i] = order
            return nil
        })
    }
    if err := g.Wait(); err != nil {
        return nil, err
    }
    return orders, nil
}
In this example, we tell our errgroup instance to limit the number of running goroutines to 10. This means that even if orderIDs contains thousands of elements, this function will make at most 10 concurrent calls to fetchSingleOrder. If other instances of findAllOrders are running, each will have its own limit of 10 concurrent calls. If we need a global concurrency limit, we need a different tool.
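As an aside, fetchSingleOrder isn’t shown here. For concreteness, here’s a rough sketch of what it might look like, assuming the ‘orders’ service exposes a JSON endpoint; the URL and response handling are placeholders rather than part of the real example, and only the standard context, encoding/json, fmt, and net/http packages are needed:

func fetchSingleOrder(ctx context.Context, orderID int) (*Order, error) {
    // Hypothetical endpoint; substitute your orders service's real URL.
    url := fmt.Sprintf("https://orders.internal/orders/%d", orderID)
    req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
    if err != nil {
        return nil, err
    }
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        return nil, fmt.Errorf("order %d: unexpected status %s", orderID, resp.Status)
    }
    var order Order
    if err := json.NewDecoder(resp.Body).Decode(&order); err != nil {
        return nil, err
    }
    return &order, nil
}

Because the request is built with the group’s ctx, the first error in the group cancels the remaining in-flight requests, giving us the abort-everything behavior described above.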
Each time g.Go is called, if the limit of 10 has already been reached, the call blocks until a slot opens up.
If you prefer not to run a goroutine at all when the limit is reached, you can instead use TryGo, which always returns immediately with a bool value indicating whether the goroutine was started or not.
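For contrast, here’s a sketch of what the loop from findAllOrders might look like using TryGo instead of Go. What to do with orders that couldn’t be started is up to you; this version simply records their IDs so they can be handled some other way (retried later, fetched synchronously, and so on):

var skipped []int
for i, orderID := range orderIDs {
    ok := g.TryGo(func() error {
        order, err := fetchSingleOrder(ctx, orderID)
        if err != nil {
            return err
        }
        orders[i] = order
        return nil
    })
    if !ok {
        // The limit of 10 was already reached, so no goroutine was started.
        skipped = append(skipped, orderID)
    }
}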
In all honesty, I’ve never used this feature of errgroup, and I think it would be very rare that I might want to. Far more frequently I want a global limit. Which I’ll talk about next time!