Curiosity-driven algorithms make observations about the world and then try to predict what will come next. When what actually happens differs from the prediction, the AI treats that prediction error as a reward. As its predictions improve, it has to seek out new situations whose outcomes it cannot yet predict. For those curious about it, there is a more formal description here:

https://pathak22.github.io/noreward-rl/

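To make the idea concrete, here is a minimal sketch of that loop in Python. This is not the ICM architecture from the linked paper; it is a toy linear forward model, and all the names and shapes are illustrative assumptions. The point is just the core mechanic: the reward is how wrong the model's prediction was, and the model keeps learning, so only genuinely novel situations stay rewarding.

```python
import numpy as np

class CuriosityReward:
    """Toy curiosity signal: reward = prediction error of a forward model.

    A simple linear model for illustration only, not the method from the
    paper linked above.
    """

    def __init__(self, obs_dim, act_dim, lr=0.01):
        # Forward model: predicts the next observation from (obs, action).
        self.W = np.zeros((obs_dim, obs_dim + act_dim))
        self.lr = lr

    def intrinsic_reward(self, obs, action, next_obs):
        x = np.concatenate([obs, action])
        pred = self.W @ x                    # the model's guess at next_obs
        error = next_obs - pred
        reward = float(np.mean(error ** 2))  # surprise = prediction error

        # Update the model so familiar transitions stop being rewarding,
        # pushing the agent toward situations it cannot yet predict.
        self.W += self.lr * np.outer(error, x)
        return reward
```

In practice this curiosity bonus would be added to (or used instead of) the task reward, so the agent is paid for visiting states its model does not yet understand.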
What would happen if we behaved more like a curiosity-driven AI? What would happen if the reward were finding the unknown? From the time we are children, we live in a society that values finding the answer (exams, anyone?), so we stop asking questions early. I find this problematic, and I would like to figure out more ways in which we can encourage each other to ask more questions, to be driven more by curiosity, perhaps.