The AI they are talking about is most likely completely different from ChatGPT.
They are probably labeling people “at risk” with some very reliable old-school ML algorithm such as XGBoost.
Biases are clearly a problem, but they are more manageable than human biases: a model’s mathematical form makes them easier to find and remove. This is why EU regulations, for instance, require mathematical models in many areas to replace “human intuition”. Mathematical models end up being better for customers.
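To make the point concrete, here is a minimal sketch of that kind of pipeline: a gradient-boosted “at risk” classifier plus a simple demographic-parity check. All of it is illustrative — the data is synthetic, the group attribute is made up, and scikit-learn’s GradientBoostingClassifier stands in for XGBoost. The takeaway is that with a model you can *compute* the bias (the gap in flag rates between groups), which you cannot do with a caseworker’s gut feeling.

```python
# Illustrative sketch: boosted-tree "risk" classifier + demographic-parity check.
# Synthetic data; GradientBoostingClassifier as a stand-in for XGBoost.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))                  # five made-up case features
group = rng.integers(0, 2, size=n)           # protected attribute (held out of X)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = GradientBoostingClassifier(n_estimators=100, random_state=0)
model.fit(X, y)
pred = model.predict(X)                      # 1 = flagged "at risk"

# Demographic parity difference: gap in flag rates between the two groups.
# Because this is a formula over model outputs, it is auditable and testable.
rate0 = pred[group == 0].mean()
rate1 = pred[group == 1].mean()
dp_diff = abs(rate0 - rate1)
print(f"flag rate group 0: {rate0:.3f}, group 1: {rate1:.3f}, gap: {dp_diff:.3f}")
```

In a real deployment you would run this check on held-out data and alongside other metrics (equalized odds, calibration per group); the point is only that such checks are mechanical once the decision is a model rather than a human.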
They aren’t doing anything new, they’re just calling it AI.