I was recently watching an MIT OpenCourseWare video on YouTube titled “Introduction to ‘The Society of Mind’”, part of a series of lectures (or, as the author refers to them, “seminars”) by Marvin Minsky. In the first episode of the course, the professor puts forth an interesting theory about what grants humans the capability to handle a variety of problems while machines remain limited in their capacity to compute solutions to problems generically. He suggests that humans’ “resourcefulness” is what grants us this capability, which, to paraphrase, is our ability to leverage many different paths to identify many different solutions to the same problem. Any of these can then be applied across a variety of situations to develop a solution to the generic problem at hand. While describing this theory, he made an offhand comment about the choice of the word “resourcefulness”, wondering whether there was a shorter word for the concept.
This got me thinking about how to describe the concept with linguistic precision, and I came across a very fulfilling suggestion on Stack Exchange that does just that. One answer proposed the word “equifinality”, which is incredibly precise but also a bit pompous for a general audience — albeit great for the audience he was addressing. The second suggestion, however, sent me down a tangent of thought that I find very enticing. “Convergent” is a word commonly used to describe this idea in everyday language, and, more importantly, it can be paired with “wisdom” to name a new concept. I’m choosing to define “convergent wisdom” as utilizing the knowledge gained from studying multiple solutions that approach the same outcome in different ways in order to choose the appropriate solution for the problem at hand.
What’s interesting about convergent wisdom is that it suitably describes the feedback loop that humans exploit to gain the capability of generalizable problem solving. For example, in chemical synthesis, understanding the pathway for creating an exotic compound is nearly as important as the compound itself, because the pathway can affect the feasibility of mass-producing it. Similarly, in manufacturing there are numerous instances of major discoveries (battery technology is the one that comes to mind first) that then fall short when it comes time to manufacture the product. In both cases, the ability to understand the chosen path is nearly as important as the solution itself.
So why does this matter, and why define the concept? Because it seems incredibly important to the ability to build generally intelligent machines. Today, much of the artificial intelligence field focuses primarily on the outcome while treating the process as a hidden and unimportant afterthought — right up until the algorithm starts producing ethically dubious outcomes.
By studying not only the inputs and outputs but also the pathway by which the outcome is achieved, I believe the same feedback loop could be formed to produce generalizable computing in machines. Unfortunately, I’m no expert in this space and have tons of reading to do on the topic. So, now that I’ve been able to describe and define the concept, can anyone point me to the area of study or academic literature that focuses on this aspect of AI?