While AI developers model the outputs of human understanding as solutions to problems expressed in computer code, this does not mean the code itself has understanding.
Will we achieve Strong AI (or 'quality' superintelligence) without first achieving machine understanding?
What are some necessary ingredients a system must include for it to actually 'understand' a problem it is pointed at?
Discussion of CYC and symbolic GOFAI attempts to create AI, and the usefulness of philosophical investigation in asking the right questions and framing the right research, so as to ultimately a) know what you're aiming for and b) know how to get there.
Kevin explains what the 'Frame Problem' is and why symbolic approaches will never solve it. He also discusses Bayesian primitives in reference to predicting the edges of a system's competence, and to gracefully degrading, coping, and learning at those edges without catastrophic failure.