Quiz

2019-06-16_18h11_23.png

The one above is a foo. The generalized concept is that any 2 blocks not touching, acting as pillars, make a foo, as per the examples in the question.

Quiz

2019-06-16_18h16_09.png

It is also a foo, but we are not quite sure, because now it is not a brick on top, but a wedge. Then why did we conclude so? Because for the vertical pillars we saw that different shapes could be used and it could still be a foo, so here too, though the top is not a brick, we finalize it as a foo.

  • Learning is incremental - one example at a time
  • Often the example is labelled
  • There could be an order to the examples
  • Different from case-based reasoning (there we stored cases in raw format; here we abstract)
  • The number of examples is very small here
  • When we abstract, what do we abstract? That is a tough question.

Incremental Concept Learning

2019-06-16_19h24_49.png

In [21]:
import graphviz as gz

G = '''
digraph {
    a[label="Am I told this is a cat?"]
    a1[label="Is this a dog?"]
    b[label="Is this cat black? \n(Because my definition says it should)"]
    c[label="Does this have 4 legs? \n(then I would think it's a dog)"]
    d[label="Do nothing! \nThen it's a cat"]
    e[label="Generalize! \nA cat need not be black always"]
    f[label="Specialize! \nExclude cat and dog \nthough both have 4 legs"]
    g[label="Do nothing! \nIt's not a dog anyway"]
    
    a -> b[label="yes"];
    
    b -> d[label="yes"];
    b -> e[label="no"];    
    
    
    a1 -> c[label="no"];
    c -> f[label="yes"];
    c -> g[label="no"];
} 
'''

gz.Source(G)
Out[21]:
[Rendered decision graph from the digraph source above]

Drop-Link heuristic

When the 2nd example has a lot of overlap with the first, generalize by dropping the parts of the first positive example that the new example lacks. A heuristic is a rule of thumb.

2019-06-16_19h45_46.png
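The drop-link heuristic can be sketched in a few lines of Python, assuming (as a simplification) that a concept is just a set of feature strings:

```python
def drop_link(concept, positive_example):
    """Generalize: keep only the parts of the concept that the
    new positive example shares."""
    return concept & positive_example

# the first arch had a brick on top; the new positive example has a wedge
concept = {"two pillars", "pillars not touching", "brick on top"}
example = {"two pillars", "pillars not touching", "wedge on top"}
print(drop_link(concept, example))
```

The intersection drops "brick on top" from the concept, so the top shape is no longer part of the definition.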

Require-Link heuristic

Given a negative example (a near miss), make the pieces that are present in the current concept but missing from the negative example mandatory. This new concept is a specialized one.

2019-06-16_19h48_26.png

Forbid-Link heuristic

Given a negative example (a near miss), make the extra pieces present in the negative example but not in the current concept forbidden. This new concept is a specialized one.

2019-06-16_19h51_05.png
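Under the same simplified feature-set representation, both specialization heuristics reduce to set differences (a sketch, not the course's exact frame notation):

```python
def require_link(concept_features, required, near_miss):
    """Specialize: features in the concept that the near miss lacks
    become mandatory - without them, an example is not an instance."""
    return required | (concept_features - near_miss)

def forbid_link(concept_features, forbidden, near_miss):
    """Specialize: the extra features that made the near miss
    negative become forbidden."""
    return forbidden | (near_miss - concept_features)

concept = {"two pillars", "block on top"}
near_miss = {"two pillars", "block on top", "pillars touching"}
print(forbid_link(concept, set(), near_miss))  # {'pillars touching'}
```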

Enlarge-Set heuristic

Given a positive example, add the additional pieces it contains to the current concept's set of allowed values. This gives a more generalized concept.

2019-06-16_19h54_44.png
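Enlarge-set is just a union on the set of values a slot may take (again a simplified sketch):

```python
def enlarge_set(allowed, new_positive_value):
    """Generalize: a value seen in a positive example joins the
    set of allowed values for that slot."""
    return allowed | {new_positive_value}

top_shapes = {"brick"}
top_shapes = enlarge_set(top_shapes, "wedge")
print(top_shapes)
```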

Climb Tree heuristic

If the agent knows that brick and wedge are both blocks, we could generalize further. Note below that a cylinder, which is also a block (if the agent knows it as such), can now take the place of a block and still be recognized as an arch.

2019-06-16_19h56_50.png
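Climb-tree needs a type hierarchy to climb. A minimal sketch, with a hypothetical child-to-parent table standing in for the agent's background knowledge:

```python
# hypothetical hierarchy: brick, wedge and cylinder are all blocks
PARENT = {"brick": "block", "wedge": "block", "cylinder": "block",
          "block": "object"}

def chain(node):
    """The node followed by its ancestors, up to the root."""
    out = [node]
    while node in PARENT:
        node = PARENT[node]
        out.append(node)
    return out

def climb_tree(a, b):
    """Generalize two specific values to their nearest common ancestor."""
    b_ancestors = set(chain(b))
    for node in chain(a):
        if node in b_ancestors:
            return node
    return None

print(climb_tree("brick", "wedge"))  # block
```

Once "brick on top" is generalized to "block on top", a cylinder-topped example also matches.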

Close-Interval heuristic

A child has seen only small dogs, and has that as its definition. When it sees a bigger dog, it expands the size interval to include bigger ones as dogs too.
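Close-interval widens a numeric range to cover a new positive example. A sketch with made-up size numbers:

```python
def close_interval(interval, observed):
    """Generalize: stretch the interval so the new positive
    example's value falls inside it."""
    low, high = interval
    return (min(low, observed), max(high, observed))

dog_size_cm = (20, 40)                        # only small dogs seen so far
dog_size_cm = close_interval(dog_size_cm, 80)  # a big dog appears
print(dog_size_cm)  # (20, 80)
```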

19. Quiz

This one is tricky, so it needs an explanation.

2019-06-16_20h12_30.png

In [22]:
G = '''
digraph {

    a1[label="Is this a foo?"]
    c[label="Does this fit our foo model?"]
    f[label="Specialize! \nLearn and update our foo model"]
    g[label="Do nothing! \nIt's not a foo anyway"]
       
    
    a1 -> c[label="no"];
    c -> f[label="yes"];
    c -> g[label="no"];
} 
'''

gz.Source(G)
Out[22]:
[Rendered decision graph from the digraph source above]

So we chose "do nothing" because it did not fit our model either.

Remember

  • If given example is positive, and fits our concept - do nothing
  • If given example is positive, and does not fit our concept - learn (drop link, enlarge-set, climb tree)
  • If given example is negative, and does fit our concept - learn (require link, forbid link)
  • If given example is negative and does not fit our concept - do nothing
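The four cases above can be collapsed into a single update step. A minimal sketch, assuming a concept is a pair of feature sets (must-have and forbidden) and each example is a feature set:

```python
def update(concept, example, positive):
    """One step of incremental concept learning.

    concept: {"must": required features, "forbid": forbidden features}
    example: set of features, labelled positive or negative.
    """
    fits = concept["must"] <= example and not (concept["forbid"] & example)
    if positive and not fits:
        # generalize (drop-link style): stop requiring what the
        # positive example lacks, stop forbidding what it has
        concept["must"] &= example
        concept["forbid"] -= example
    elif not positive and fits:
        # specialize (forbid-link style): ban the near miss's extras
        concept["forbid"] |= example - concept["must"]
    # positive-and-fits or negative-and-doesn't-fit: do nothing
    return concept

c = {"must": {"two pillars", "brick on top"}, "forbid": set()}
c = update(c, {"two pillars", "wedge on top"}, positive=True)       # generalize
c = update(c, {"two pillars", "pillars touching"}, positive=False)  # specialize
print(c)  # must: {'two pillars'}, forbid: {'pillars touching'}
```

One example at a time, the concept is nudged: broadened by positives it misses, narrowed by negatives it wrongly admits.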