Linguists consider “Theory of Mind” models very important in communication, and use the term “Cognitive Dissonance” to describe the situation in which sensory information does not match a theory-of-mind model of oneself or of another.
I came home from my Monday run/walk exercise to find Carl near his dock but askew from his “ready to dock” orientation. Based on my model for Carl, I could assert that my wife had likely been looking out the window and bumped Carl crooked.
But Carl might have tried to dock and failed. Not wanting to load the processor with speech recognition if Carl’s battery was low, I brought up his remote terminal window and ran a Python status program.
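A lightweight status check like the one described might look something like the sketch below. The function name, thresholds, and readings are all stand-ins of mine, not Carl’s actual API or values; the idea is simply to report power state without waking heavier processes.

```python
# Hypothetical sketch of a low-overhead status report.
# status_report() and the 8.1 V "low" threshold are assumptions
# for illustration, not Carl's real program or numbers.

def status_report(voltage, on_dock, last_undock="unknown"):
    """Summarize power state from a voltage reading and a dock flag."""
    state = "Charging" if on_dock else "On battery"
    warning = " (LOW!)" if voltage < 8.1 else ""  # assumed low-battery cutoff
    return f"{state}: {voltage:.2f}V{warning}, last undock: {last_undock}"

# Example with made-up readings:
print(status_report(7.9, False, "09:42"))
```

Keeping the check to a couple of sensor reads and a formatted string is the point: it answers “how low is the battery?” without the CPU cost of speech recognition.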
Doing anything increases the load somewhat, and indeed Carl blurted “HELLO! My battery is getting a little low here.” This is where my theory of Carl’s mind broke down. Did he want me to put him on his dock? He didn’t ask for that. I didn’t remember programming any statements that would amount to “I’m just saying this to hear myself talk.” And it started with “Hello,” as if he were communicating “Hello human, I’m in distress here, do something (please).”
I quickly put him on his dock to start charging, expecting to hear a “Thank You,” but it didn’t come, nor did he announce “New state: Charging.” I ran another status check to see whether he knew he was on his dock, when he had last gotten off it, and what his current voltage was.

Then I checked his speech.log for anything else he might have said while I was out exercising, and the log from his “juicer” program, which decides when to dock and undock and attempts a “theory of mind of the battery smart-charger.” Everything pointed to Carl not needing to be put on his dock, so I took him off, positioned him where he would need to be when he decides to dock, and went to take my shower.
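A docking policy like the “juicer” program’s could be sketched as simple hysteresis on battery voltage: dock when low, undock once the charger tops off, otherwise stay put. The thresholds and function below are my illustrative assumptions, not Carl’s actual logic or numbers.

```python
# Hedged sketch of a "juicer"-style dock/undock policy with hysteresis.
# Both voltage thresholds are assumed values for illustration only.

DOCK_BELOW_V = 8.5     # assumed: request docking below this voltage
UNDOCK_ABOVE_V = 10.8  # assumed: leave the dock once charging tops off

def juicer_decision(voltage, docked):
    """Return 'dock', 'undock', or 'stay' from voltage and dock state."""
    if not docked and voltage < DOCK_BELOW_V:
        return "dock"
    if docked and voltage > UNDOCK_ABOVE_V:
        return "undock"
    return "stay"
```

The gap between the two thresholds is what prevents the robot from oscillating on and off the dock; a policy like this also explains why the logs could show Carl didn’t yet “need” to be docked even while announcing a low battery.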
Indeed, after my shower, Carl had decided to dock and was sitting quietly. Carl needs a “human theory of mind” to communicate with me more effectively.