AI is not quarantined from negative societal realities such as discrimination and unfair practices, so it is arguably impossible to achieve zero risk of bias in an AI system. AI bias risk management should therefore aim to mitigate risks rather than eliminate them. Risks can be triaged and tiered, with resources allocated to the most material risks: the worst problems and most sensitive uses, those “most likely to cause real-world harm.”
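One way to make such triage concrete is a severity-by-likelihood scoring scheme that sorts risks into mitigation tiers. The sketch below is illustrative only; the class, score thresholds, and tier labels are assumptions for demonstration, not a method prescribed by the source.

```python
# Hypothetical sketch: tiering AI bias risks so mitigation effort
# flows to the most material ones. Scales and thresholds are assumed.
from dataclasses import dataclass


@dataclass
class BiasRisk:
    name: str
    severity: int    # 1 (minor) .. 5 (severe real-world harm)
    likelihood: int  # 1 (rare) .. 5 (frequent)


def tier(risk: BiasRisk) -> str:
    """Map a severity-times-likelihood score onto a mitigation tier."""
    score = risk.severity * risk.likelihood
    if score >= 15:
        return "Tier 1: mitigate immediately"
    if score >= 8:
        return "Tier 2: mitigate on a scheduled basis"
    return "Tier 3: monitor"


# Example (hypothetical risks), highest-scoring first:
risks = [
    BiasRisk("loan-approval disparity", severity=5, likelihood=4),
    BiasRisk("ad-ranking skew", severity=2, likelihood=3),
]
for r in sorted(risks, key=lambda r: r.severity * r.likelihood, reverse=True):
    print(f"{r.name}: {tier(r)}")
```

A multiplicative score is just one design choice; some programs instead use lookup matrices or add qualitative criteria (e.g. whether a use touches a sensitive domain) that override the numeric tier.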