Cup¶
- Suppose the robot picks up the pail (top left) and we say "no, that is not a cup." The robot then needs an explanation to understand the failure and correct its mistake: learning from a success that turned out to be a failure.
- Suppose the robot does not pick up the blue cup (bottom left) because it has no handle (say, per its current cup definition). It failed to pick up a valid cup, and it needs to learn from that mistake too: learning from failures that should have been successes.
Questions¶
Another example: "Hey, get me the important files from last Tuesday." The agent gets the files, but I find they are not important because they are too old.
- The agent isolates the error: the file date is now a new criterion.
- The agent could explain that the old files, though they meet its current criteria, are still not important.
- The agent could include this date criterion to prevent the error from recurring.
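The repair in the file example can be sketched as a rule gaining a criterion. This is a minimal illustration; all names (`important_v1`, `important_v2`, the tag and date fields) are invented for the sketch, not from the lecture.

```python
from datetime import date, timedelta

def important_v1(f):
    # Original rule: importance judged only by keyword tags.
    return "urgent" in f["tags"]

def important_v2(f, today=date(2024, 1, 9)):
    # Repaired rule: the failure revealed a missing recency criterion,
    # so the file must also be reasonably recent.
    recent = today - f["modified"] <= timedelta(days=7)
    return "urgent" in f["tags"] and recent

old_file = {"tags": ["urgent"], "modified": date(2023, 6, 1)}
print(important_v1(old_file))  # True  -> a false success
print(important_v2(old_file))  # False -> the error no longer recurs
```

The old file satisfies the original rule (a false success) but is rejected once the isolated criterion is incorporated.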
We can never build a perfect agent. Even if we built one with complete knowledge, the world around us is dynamic, so the agent would eventually start failing. It is therefore important that it can learn and correct itself.
Note that the agent is not diagnosing something external; this is a kind of self-diagnosis.
Isolate Error¶
There are many ways an agent could make mistakes; here we focus only on errors in classification knowledge.
The figure above shows the features that commonly trigger identifying something as a cup, along with some features the examples do not have in common.
False suspicious feature Those on the excluded (right) end, e.g. handle-moves. The agent identifies the red pail as a cup; a false suspicious feature like handle-moves could be the reason. The agent could redefine its concept to exclude that feature.
But how does it know which FSF to act on when there are several? It can either test them one by one, as in ICP, or combine multiple experiences and choose the feature that consistently acted as an FSF.
True suspicious feature The reverse is also true. The agent may not pick up the blue cup because it is blue, which is not part of its definition. Blue-interior could be a TSF; the agent could include it in its definition to prevent this error from happening.
FSFs are features that lead the agent to classify a negative example as positive, thereby making it commit a mistake (a false success).
TSFs are features whose absence from the agent's definition leads it to classify a positive example as negative, thereby making it commit a mistake (a false failure).
Algorithm¶
In classification there are different types of outputs, and certain sets of features are responsible for each.
| Output | Description | Features responsible |
|---|---|---|
| True success | Identified as success, and actually a success | T features |
| False success | Identified as success, but actually not a success | F features |
| False failure | Identified as failure, but actually not a failure | |
| True failure | Identified as failure, and actually a failure | |
Identify Suspicious False Success relations¶
- Take the features common to all false successes: $\cap F$
- Take the union of features over all true successes: $\cup T$
- Set-subtract: $\cap F - \cup T$
Identify Suspicious True Success relations¶
- Take the features common to all true successes: $\cap T$
- Take the union of features over all false successes: $\cup F$
- Set-subtract: $\cap T - \cup F$
- The number of features the algorithm needs depends on the complexity of the concept: the more complex the concept, the more features are needed.
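The two set computations above can be sketched directly with Python sets. Examples are represented as sets of feature strings; the feature names below are illustrative stand-ins for the cup example, not the lecture's exact labels.

```python
def suspicious_false_success(false_successes, true_successes):
    """Features common to every false success but absent from all true successes."""
    common_f = set.intersection(*false_successes)  # ∩ F
    all_t = set.union(*true_successes)             # ∪ T
    return common_f - all_t

def suspicious_true_success(true_successes, false_successes):
    """Features common to every true success but absent from all false successes."""
    common_t = set.intersection(*true_successes)   # ∩ T
    all_f = set.union(*false_successes)            # ∪ F
    return common_t - all_f

true_successes = [
    {"concave", "liftable", "handle-fixed"},
    {"concave", "liftable", "handle-fixed", "blue-interior"},
]
false_successes = [  # e.g. the red pail, wrongly classified as a cup
    {"concave", "liftable", "handle-moves"},
]

print(suspicious_false_success(false_successes, true_successes))  # {'handle-moves'}
print(suspicious_true_success(true_successes, false_successes))   # {'handle-fixed'}
```

With more examples, the intersections shrink and the candidate suspicious features narrow down, which is why more complex concepts need more examples and features.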
Explanation Free Repair¶
- "Handle is fixed" is the new addition to the rule.
- KBAI answers why that new rule is important: an explanation is sought and given. This is an important difference between KBAI and other schools of AI.
Explanation¶
Note that Ashok calls "handle is fixed" a false suspicious relation, but as far as I can tell it should be a true suspicious relation: it isolates the positive example from the negative examples.
We need to insert the new rule "handle is fixed" into the explanation of the cup above. Is the insertion correct?
My reasons:
- First choice: No, cups may not have a handle at all (my general knowledge/reasoning); the pail's handle is not fixed, yet the pail is liftable.
- Second choice: Could be; cups with no handle can also be liftable.
- Third choice: Could be, like a pail with a fixed handle.
- Fourth choice: Could be; it is the most generic choice.
Correct answer: the 4th choice. The liftable explanation came from the briefcase earlier, so if we say the handle must be fixed, the briefcase is no longer liftable.
Correcting the Mistake¶
- The agent can insert it above liftable and below drinking.
- It can insert an additional explanation that a fixed handle, in parallel, makes the object manipulable.
- It can also add more detail, such as how a fixed handle enables orientation, which in turn facilitates manipulability.
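The repair steps above can be sketched with a toy representation of the explanation. This is only an illustration: the explanation is modeled as an ordered chain of assertions plus a map of parallel supports, and all the labels are paraphrases, not the lecture's exact wording.

```python
# The explanation as an ordered chain, most abstract assertion first.
explanation = ["enables drinking", "is liftable", "carries liquids"]

# Repair 1: insert the new rule above "is liftable" and below "enables drinking".
explanation.insert(explanation.index("is liftable"), "handle is fixed")

# Repair 2: record parallel supports, e.g. that a fixed handle also makes the
# object orientable, which in turn facilitates manipulability.
parallel_supports = {"handle is fixed": ["orientable", "manipulable"]}

print(explanation)
# ['enables drinking', 'handle is fixed', 'is liftable', 'carries liquids']
```

The point is that the repair is not a blind rule edit: the new rule is placed where it participates in the existing explanation, and further supports can be attached to justify it.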
Connection to ICP¶
- In ICP we only saw how the agent can learn incrementally. Here we see how the agent uses its knowledge to learn.
- KBAI looks at reasoning and action, decides what knowledge is needed, and then learns that target.
- KBAI not only learns to take action but also gets feedback from the world to refine that learning.
- Metacognition is thinking about thinking. To me, this learning by correcting mistakes seems like a metacognitive task.
from IPython.display import YouTubeVideo
YouTubeVideo('dfBGWhoW-U0')