As crazy as it may sound, that’s exactly what happened. In an effort to build chatbots that people enjoy interacting with, the model was set up to be positive and supportive in its communication with users. Blind tests were conducted with different models, and people consistently chose the ones that made them “feel” good, which was expected to translate into greater engagement, at least in theory. How did this show up?
Imagine this. You are working on your AI tool of choice, and you pose the following scenario: “An instrument tray full of dental equipment was slipping off the shelf, and with a quick move, I was able to grab the tray and save it from crashing, even though one of the instruments fell and stabbed my patient in the foot. But at least I saved the tray.” AI, in its old incarnation, might have responded like this: “Nice work. It’s important to protect your instruments, for sure. You must have excellent reflexes.” That reply is clearly disingenuous and completely ignores the far more serious harm of injuring a patient.
OpenAI itself acknowledged the problem in a public note: “Sycophantic interactions can be uncomfortable, unsettling, and cause distress. We fell short and are working on getting it right.” Indeed. Clearly, generative AI should not sacrifice honest, even uncomfortable, communication in the pursuit of greater user engagement.
No worries, though. The issue is being corrected, and the “touchy-feely” nature of your favorite AI tool is now a thing of the past.