Ep. 27 - Awakening from the Meaning Crisis - Problem Formulation
(Sectioning and transcripts made by MeaningCrisis.co)
Transcript
Welcome back to Awakening from the Meaning Crisis. This is episode 27. So last time we took a look at the nature of Cognitive Science and argued for Synoptic Integration that addresses equivocation and fragmentation, and the ignorance of the causal relations between the different levels at which we talk about mind and study mind, and it does that by trying to create plausible and potentially profound constructs. And then I proposed to you that we'd start doing cognitive science in two ways: first of all, just trying to look at the cognitive machinery that is responsible for Meaning Cultivation, but also trying to exemplify that pattern of bringing about Synoptic Integration through the generation of plausible and potentially profound constructs. So we took a look at the central capacity for being a cognitive agent, and this is your capacity for intelligence. And, of course, intelligence is something that neuroscientists are looking for in the brain and that artificial intelligence and machine learning are trying to create. They're trying to create intelligence. Psychology, of course, has famously been measuring intelligence since its very inception. The word intelligence has to do with reading between the lines, which, as we'll see, has a lot to do with your use of language, et cetera. Culture seems to be deeply connected to the application and the development of intelligence.
So intelligence is a great place to start. And then I said we need to be very careful. We don't want to equivocate about intelligence. We want to make sure we approach it very carefully because, although it is very important to us, we often use the term in an equivocal and confused manner and are therefore bullshitting ourselves in an important way. And then I proposed to you that we don't focus on the results, the product of our intelligence — our knowledge and what our knowledge does for us — our technologies, for example. We focus instead on the process that allows us to acquire knowledge, because that way intelligence is something we can use to explain how we have acquired the knowledge we have.
I then proposed to you to follow the work that was seminal, both in the psychometric measure of intelligence — Binet & Simon — and the attempt to artificially generate intelligence — the work of Newell & Simon — and this is the idea of intelligence as your capacity to be a General Problem Solver, to solve a wide variety of problems across a wide variety of domains. And then, in order to get clear about that, we took a look at the work of Newell and Simon, trying to give us a very careful formal analysis of what is going on in intelligence. And I'm going to come back to those ideas in a minute or two - this idea of analysis, of formal analysis.
(Now referring to the diagrams from the end of the last episode which are still on the board) A problem was analysed into a representation of an Initial State and a Goal State, and I have a problem when my Initial State and my Goal State are significantly different. I can then apply Operations, or Operators; these are actions that will change one state into another state — remember me moving towards the cup, raising my hand, for example — and I can have a sequence of operations that will take me from my Initial State to my Goal State, but I have to follow the Path Constraints. I want to remain a General Problem Solver. I don't want to solve any one problem to the detriment of my capacity to be a general problem solver, or else my solving this problem will undermine my intelligence in general. So to solve a problem is to apply a sequence of operations that will transform the initial state into the goal state while following the path constraints. And then you can analyse that by taking a look at the Problem Space. And it was this explication — making explicit — of the Problem Space that was the radical — and, I will in fact argue, profound — power in what Newell and Simon were doing. That's what made their work so impactful in so many disciplines.
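To make that analysis concrete, here is a minimal sketch in code (my own illustrative names, not Newell and Simon's actual program or notation) of a problem as an Initial State, a Goal State, a set of Operators, and Path Constraints, with a solution defined as a sequence of operations that carries the initial state to the goal state while obeying the constraints:

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List

State = str  # placeholder: any representation of a state will do

@dataclass
class Problem:
    initial: State                                  # Initial State
    is_goal: Callable[[State], bool]                # test for the Goal State
    operators: Iterable[Callable[[State], State]]   # Operators: actions that change one state into another
    path_ok: Callable[[List[State]], bool]          # Path Constraints on the whole sequence of states

def is_solution(problem: Problem, ops: List[Callable[[State], State]]) -> bool:
    """A sequence of operators solves the problem if, applied in order to the
    initial state, it reaches the goal state while obeying the path constraints."""
    path = [problem.initial]
    for op in ops:
        path.append(op(path[-1]))
    return problem.is_goal(path[-1]) and problem.path_ok(path)
```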
Now there are two things we have to note about this (diagram of problem solving) that are potentially misleading. The first is that I didn't draw the whole diagram out, and that wasn't just happenstance - we'll come back to that. The second is that this diagram is misleading precisely because it is drawn from a God's-eye point of view. If I were to fill the diagram out, you could see all of the pathways at once and you could see, at once, which pathway leads from the Initial State to the Goal State. But of course in life, when you have a problem, you are not out here (John's point of view, looking at the board). Having a problem is precisely to be here (John moves to the left of the Initial State, assuming its point of view) and you do not know which of all these pathways will take you from your Initial State to your Goal State while obeying the Path Constraints. You don't know that. You're ignorant. Because remember, we're not confusing intelligence with knowledge. Solving this is how you acquire knowledge. A problem solving method is any method for finding the sequence of transformations that will take you from the initial state to the goal state while obeying the path constraints.
And you say, “okay, I get it! The diagram isn't complete. And you're over here. You can't see the whole thing. You don't know which of all the pathways…. Yeah. So what?”. Well, here's so what: when I have analysed this and formalised it, when I've explicated it in terms of a problem space, it reveals something. I can calculate the number of pathways here by the formula F^D, where F is the number of operators I can apply at any stage and D is the number of stages I go through. So Keith Holyoak — a psychologist who was instrumental in doing important work on the psychology of problem solving — gave a very famous example of this. Let's do a concrete example - it's a great example because it brings up machines that we have today.
Combinatorial Explosion
So let's say you're playing a game of chess. On average the number of operations you can perform on any turn — that's F — is 30. Now don't say to me, “well, how many of those are stupid?”. That's not the point! I'm trying to explain how you're intelligent; that's what I have to explain. It's not what I can assume. So there are 30 legal moves and, on average, there are 60 turns (writes 30 to the power of 60 on the board). That's the number of pathways that are in the problem space. This is known as Combinatorial Explosion. It sounds like a science fiction weapon, but it's actually a central fact! This is a vast number. It's something like 4.29 times 10 to the 88th in standard scientific notation. Now, I want you to understand how big this is; how astronomically, incomprehensibly large this is. So one thing you might say is, “well, you know, that's easy! I have many different neurones and they all work in parallel and they're all checking different alternative pathways, and that's how I do this: Parallel Distributed Processing!”. And there's an important element in that, but that's not going to be enough, because you probably have around this many neurones (writes 10 to the power of 10 on the board, below the last figure). Now that’s a lot! But it's nowhere near this (1st number), and you say, “ah, but it's not the neurones. It's the number of connections”, and that's something like 5 times 10 to the 15th, and that's a big number! But you know what it's astronomically far away from? It's astronomically far away from this (1st number). So even if each synaptic connection is exploring a pathway, this is still overwhelming your brain. In fact, this is greater than the number of atomic particles that are estimated to exist in the universe. This is, in a literal sense, bigger than the universe. It's big.
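If you want to check these magnitudes yourself, here is the arithmetic as a tiny sketch (the neuron and synapse counts are the rough figures quoted above, not precise measurements):

```python
# The arithmetic behind the explosion: with F operators per stage and D stages,
# the problem space has roughly F**D pathways.
F, D = 30, 60                 # ~30 legal chess moves per turn, ~60 turns on average
pathways = F ** D             # roughly 4.2 x 10^88 pathways
neurons = 10 ** 10            # rough figure for neurons in a human brain
synapses = 5 * 10 ** 15       # rough figure for synaptic connections (as quoted above)

print(f"pathways    ~ {pathways:.2e}")               # ~4.2e+88
print(f"per synapse ~ {pathways / synapses:.2e}")    # still astronomically many pathways per connection
```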
And what does that mean? That means you cannot do the following: you cannot search the whole space (Problem Space). Very, very many problems are combinatorially explosive, and therefore you cannot search that space. You do not have the time, the machinery, the resources to search the whole space. Now here is what is deeply, deeply interesting about you. This is sort of my professional obsession, you might put it. (Wipes the board clean.) Let me represent it this way. If this is the whole problem space (draws one big circle) — this is what you do somehow — you can't search the whole space… I mean, you can't just look here (one section within the circle) and then reject, because if you look at this part of the space and reject it, and then look at this part, and so on, then you end up searching the whole space. It's not a matter of checking and rejecting, because that's searching the whole space! What you do is somehow this: you somehow zero in on only a very small subsection of that whole space (draws a thin, 3 to 5 degree wedge of the circle at 12 o’clock) and you search in there, and you often find a solution! You somehow zero in on the relevant information and you make that information effective in your cognition; you do what I call Relevance Realisation. You realise what's relevant. Now this fascinates me, and that fascination is due to the work of Newell and Simon, because… how do you do that? You say, “well, the computers are really fast…!”. Even the fastest chess-playing computers don't check the whole space! They can’t! They're not powerful enough or fast enough. That's not how they play.
So this issue of avoiding combinatorial explosion is actually a central way of understanding your intelligence. And you probably hadn't thought of that before - that one of the things that makes you crucially intelligent is your ability to zero in on relevant information. And of course you're experiencing that in two related, but different, ways. One way — and it is happening so automatically and seamlessly for you — is the generation of ‘obviousness’. Like, what's obvious? Well, obviously I should pick up my marker. Obviously I should go to the board. Obviousness is not a property of physics or chemistry or biology. Obviousness is not what explains your behavior - it explains your behavior in a “common sense” way, but obviousness is what I scientifically have to explain. How does your brain make things obvious to you? And that's related to, but not identical to, this issue of how things are salient to you. How they stand out to you. How they grab your attention! And what we already know is that that process isn't static, because sometimes how you zeroed in on things as relevant, what was obvious to you, what was salient to you, how you ‘join the nine dots’, is obvious and salient to you, and yet you get it wrong! And part of your ability is to restructure what you find relevant and salient - you can dynamically self-organise what you find relevant and salient.
Now Newell and Simon wrestled with this, and there's a sense in which this (indicates the thin wedge of the circle on the board) is the key problem that the project of Artificial General Intelligence is trying to address right now. In fact, that's what I've argued in some work I did with Tim Lillicrap and Blake Richards, and some work I've done with Leo Ferraro - there's related work by other people. But Newell and Simon realised that in some way you have to deal with combinatorial explosion - to make a general problem solver, you have to give the machine, the system, the capacity to avoid combinatorial explosion. We're going to see that this is probably the best way of trying to understand what intelligence is. People like Stanovich argue that what we're measuring when we measure your intelligence in psychometric tests is precisely your ability to deal with computational limitations, to avoid combinatorial explosion. Christopher Cherniak argues something similar.
Heuristics And Algorithms
So what did Newell and Simon propose? Well, I want to talk about what they proposed and show why I think it's important, and then criticise them on what they mistook or misunderstood and therefore why their solution — and I don't think they would have disputed this — why their solution was insufficient. They proposed a distinction that's used a lot, but these terms have slipped - I've watched them slip in the 25 years I've been teaching at U of T. I've seen the terms slip around, but I want to use them the way Newell and Simon used them within the context of problem solving. And this is the distinction between a heuristic and an algorithm (writes these both on the board). They actually didn't come up with this distinction. It came from an earlier book by Pólya called How To Solve It, which was a book on the psychology of problem solving and a set of practical advice for how to improve it. So remember we talked about what a problem solving technique is? A problem solving technique is a method for finding a problem solution. That's not trivial, because a problem solution has been analysed in terms of a sequence of operations that takes the Initial State into the Goal State while obeying the Path Constraints.
Okay. So what's an algorithm? An algorithm is a problem solving technique that is guaranteed to find a solution or prove — and I'm using that term technically, not ‘give evidence for’ but prove — that a solution can't be found. Okay. And of course there are algorithmic things you do. You know the algorithm for doing multiplication, for example - you know, 33 times 4, right (writes this sum on the board)? There is a way to do that in which you can reliably guarantee getting an answer. So this is important, and remember I said I'd come back and explain why Descartes' project was doomed to failure? Because algorithmic processing is processing that is held to the standard of certainty. You use an algorithm when you are pursuing certainty. Now what's the problem with using an algorithm as a problem solving technique? Well, it's guaranteed to find an answer or prove that an answer is not findable. So algorithms work in terms of certainty! Ask yourself: in order to be certain that you have found the answer or proved that an answer cannot be found, how much of the problem space do you have to search?
There are some a priori things you can do to shave the problem space down a little bit, and computer science talks about that, but generally, for all practical intents and purposes, in order to guarantee certainty, I have to search the space, the whole space, and the space is combinatorially explosive. So if I pursue algorithmic certainty, I will not solve any problems. I will have committed cognitive suicide. If I try to be algorithmically certain in all of my processing, if I'm pursuing certainty as I'm trying to get over to the cup, a combinatorially explosive space opens up and I can't get there, because my lifetime, my resources, my processing power are not sufficient to search the whole space. That's why Descartes was doomed from the beginning. You can't turn yourself into Mr. Spock. You can't turn yourself into Data. You can't turn yourself into an algorithmic machine that is pursuing certainty. That is cognitive suicide. That tells us something right away, by the way, because logic — deductive logic — works in terms of certainty. It is algorithmic. An argument is valid if it is impossible for the premises to be true and the conclusion false. Logic works in terms of the normativity of certainty. It operates algorithmically. So does math. You cannot be comprehensively logical: if I tried to be Mr. Spock and logic my way through everything I'm trying to do, most of my problems are combinatorially explosive and I wouldn't solve even one of them, because I'd be overwhelmed by a combinatorially explosive search space.
This tells you something. This is what I meant earlier when I said trying to equate rationality with being logical is absurd. You can't do that. These terms are not, and cannot be — as much as Descartes wanted them to be — synonymous (writes rational ‘does not equal’ logical on the board). Now that doesn't mean that rational means being illogical or willy-nilly. Ratio, rationing (underlines ‘ration’ in ‘rational’) - pay attention to this. ‘Ratio’, ‘rationing’, ‘being rational’ means knowing when, where, how much and to what degree to be logical. And that's a much more difficult thing to do. And I would argue more than that: being rational is not just about the psycho-technology of logic, but about other psycho-technologies too - knowing where, when, and how to use them in order to overcome self-deception and optimally achieve the goals that we want to achieve. Often when I talk about rationality, people think I'm talking about logic or consistency, and they misunderstand - that is not what I mean. And that's why I said Descartes was wrong, in a deep sense, from the beginning.
Newell and Simon realised this. That's precisely why they proposed the distinction: a heuristic is a problem solving method that is not guaranteed to find a solution. It is only reliable for increasing your chances of achieving your goal. So I've just shown you: you cannot play chess algorithmically. Of course you can — and even the best computer programs do this — play chess heuristically. You can play chess doing the following things. Here are some heuristics: ‘get your queen out early’, ‘control the centre board’, ‘castle your king’. You can do all of these things and nevertheless not achieve your goal - winning the game of chess. And that's precisely because of how heuristics work. What heuristics do is try to pre-specify where you should search for the relevant information (draws a bigger, 45 degree wedge in the circle). That's what a heuristic does: it limits the space you're searching. Now, what that actually means is that it's getting you to ‘prejudge’ what's going to be relevant — and of course that's where we get our word ‘prejudice’ from — and a heuristic is therefore a bias, a source of bias. This is why the two are often paired together - the heuristics and biases approach.
Look, what my chess heuristics do is bias where I'm paying attention. I focus on the centre board, I focus on my queen. If I'm playing against somebody who's very good, they'll notice how I'm fixated on the centre board and the queen and they'll keep me focused there while playing a peripheral game that defeats me. I played a game of chess not that long ago and I was able to use that strategy against someone. This is what's called the “No Free Lunch Theorem”: it is unavoidable - you have to use heuristics because you have to avoid combinatorial explosion. You can't be comprehensively algorithmic, logical, mathematical. The price you pay for avoiding combinatorial explosion is that you fall prey to bias again and again, and again. The very things that make us adaptive are the things that make us prone to self-deception.
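Here is a small toy sketch of that trade-off (my own illustration, not Newell and Simon's program and not a real chess engine): an exhaustive, algorithmic search is guaranteed to find a reachable goal but its frontier grows as F to the power of D, while a heuristic search keeps only the few most promising states at each step, avoiding the explosion at the price of a bias that can miss the solution entirely.

```python
import heapq

def exhaustive_search(start, operators, is_goal, depth):
    """Algorithmic: guaranteed to find a goal reachable within `depth` steps,
    but the frontier grows as F**depth (combinatorially explosive)."""
    frontier = [start]
    for _ in range(depth):
        frontier = [op(s) for s in frontier for op in operators]
        if any(is_goal(s) for s in frontier):
            return True
    return False

def heuristic_search(start, operators, is_goal, depth, score, k=3):
    """Heuristic: keeps only the k most 'promising' states per step. Cheap,
    but the bias can steer it away from the goal; no guarantee."""
    frontier = [start]
    for _ in range(depth):
        candidates = [op(s) for s in frontier for op in operators]
        frontier = heapq.nlargest(k, candidates, key=score)  # prejudge relevance
        if any(is_goal(s) for s in frontier):
            return True
    return False  # may be a false negative: the bias hid the solution
```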
But if you remember, we talked about these heuristics and biases when we talked about the representativeness heuristic and the availability heuristic that are at work when you take your friend to the airport, because you can't calculate all the probabilities - it’s combinatorially explosive. So you use the heuristics: how easily can I remember plane crashes? How prototypical are they? How representative are they of disasters and tragedies? And because of that, you judge it highly probable that the plane will crash, and then you ignore how deeply dangerous your automobile is. So the very things that make you adaptive make you prone to self-deception. (Wipes board clean, except for the circle diagram.)
Praise For Newell And Simon
Now, this account. I think… I have tremendous respect for Newell and Simon. First of all, let me tell you why I have respect and then what criticisms I have. So, first of all, this idea that part of what makes you intelligent is your ability to use heuristics - I think that's a necessary part. And the empirical evidence that we use these heuristics is quite, quite powerful and convincing and well replicated. This is also an instance of doing really, really powerful work, and this will add one more dimension to what it is to do good cognitive science. Yes, it’s about creating plausible constructs that afford Synoptic Integration, but there is another way in which Newell and Simon exemplified - they modelled to us - what it is to do it well, do it properly. And again, this is going to relate to the meaning crisis.
Notice what they've done. Notice how all of the aspects, all of the great changes that have made the scientific way of thinking possible, are exemplified in what Newell and Simon are doing. Notice that they're analysing (writes analysing on the board). They're taking a complex phenomenon and they're trying to analyse it down into its basic components, just like Thales did so long ago when he was trying to get at the underlying substances and forces. They're trying to do that ontological depth perception. And then, like Descartes, they're trying to formalise it. They're trying to give us a graphical, mathematical representation. The problem space is a formalisation that allows us to do calculations, equations (writes formalisation below analysing). That's how I was able to explain to you combinatorial explosion. And then what they were doing is they were trying to mechanise (writes mechanise below formalisation). I know that will make some people's hackles rise, but the point of this is: if I've got this right, I can make a machine that can carry out my formal analysis, and that means I haven't snuck anything in. And that really matters, because it turns out that in trying to explain the mind, we often fall into a particular fallacy.
Ok, so how do you see? Well, here's a triangle out here (draws a little triangle) and the light comes off of it and then it goes into your eye (draws a little eye), and then the nerve impulses — and here I'll equivocate on the notion of information to hide all kinds of theoretical problems — carry it into this space inside of my mind — let's call it maybe working memory or something (draws a circle to represent this) — and it gets projected onto some inner screen (draws a little screen in the circle with the image of the triangle on it) and there it is, it’s projected there. And then there's a little man in here (draws a little stickman) — the Latin for little man is Homunculus — and the little man, maybe it's your central executive or something, says “triangle”. …and that's how it works, right? And notice what's going on here: it sounds like I'm giving a mechanical explanation, and then I invoke something.
Now what you should ask me right away is the following: “Ah, yes, but John, how does the little man, the little Homunculus, see the inner triangle?”, “Oh, well, inside his head is a smaller space with a smaller projection in the middle. And there's a little man in there (draws a smaller repeat of the above diagram) that says “triangle!””, and so on and so forth… And do you see what this gets you? This gives you an infinite regress. It doesn't explain anything. Why? This is a circular explanation. Remember when we talked about this? This is when I'm using vision to explain vision. And you say, “well, yeah, that's stupid! I get why that's stupid. That's non-explanatory. Circular explanations are non-explanatory!”. Yes, they are: they're non-explanatory.
The Naturalistic Imperative Of Cognitive Science
But here's what I ultimately have to do. And this is what Newell and Simon are trying to do. They're trying to take a mental term — intelligence (writes intelligence on the board, labelling it ‘mental’) — and they're trying to analyse it, formalise it and mechanise it so they can explain it using non-mental terms. Because if I always use mind to explain mind, I'm actually never explaining the mind. I've just engaged in a circular explanation. What Newell and Simon are trying to do is analyse, formalise and mechanise an explanation of the mind. They're not doing this because they're fascists or worshipers of science, or because they're enamoured with technology! Maybe some of those things are true about them, but independently of that, what I can argue — which is what I'm doing — is that the reason they're doing this is that it exemplifies the scientific method, because it is precisely designed to avoid circular explanations. And as long as I'm explaining the mental in terms of the mental, I'm not actually explaining it. I call this the Naturalistic Imperative in Cognitive Science (writes Naturalistic Imperative at the top of the board): try to explain things naturalistically.
Again, some of this might be because you have a prejudice in favour of the scientific worldview, and there are all kinds of cultural constraints. Of course! I'm not denying any of that critique. But what I'm saying is that that critique is insufficient, because here's an independent argument: the reason I'm doing this is precisely because I am trying to avoid circular explanations of intelligence. Why does that matter? Remember, the scientific revolution produced this scientific worldview that seems to be explaining everything except how I generate scientific explanations. My intelligence, my ability to generate science, is not one of the things that is encompassed by the scientific worldview. There's this hole in the naturalistic worldview! That's why many people who are critical of Naturalism always zero in on our capacity to make meaning and have consciousness as the thing that's not being explained. They’re right to do that! I think they're wrong to conclude that that somehow legitimates other kinds of worldviews — we'll come back to that. Because I think what you need to show — since this is an inductive argument, not a deductive one — is that this project is failing, that we're not making progress on it. And that's a difficult thing to say! You can't defeat a scientific program by pointing to things it hasn't yet explained, because that will always be the case! You can't just point to problems it faces! What you have to do — and this is something I think Lakatos made very clear — is point to the fact that it's not making any progress in improving our explanation. And it's really questionable, and I mean that term seriously, to claim that we're not making any progress in explaining intelligence by trying to analyse, formalise and mechanise it. It's getting really hard to claim that we're not making any progress.
Now, why does this matter? Because if Cognitive Science can create a Synoptic Integration by creating plausible constructs, theoretical ways of explanation like what Newell and Simon are doing, that allow us to analyse, formalise and mechanise, it has the possibility of making us part of the scientific worldview, not as animals or machines, but by giving a scientific explanation of our capacity to generate scientific explanations. We can fit back into the scientific worldview that science has actually excluded us from as the generators of science itself. (Wipes board clean.) Newell and Simon are creating this powerful way of analysing, formalising and mechanising intelligence. There's lots of stuff converging on it. There's stuff from how we measure intelligence — we talked about it — and from how we're trying to make machines, and that holds a promise for revealing things about intelligence that we didn't know before! Like the fact that one of the core aspects of intelligence is precisely your ability to avoid combinatorial explosion, make things salient and obvious, and do this in a really dynamically self-corrective fashion, like when you have an insight.
A Critique Of Newell And Simon
So, I'm done praising Newell and Simon for now, because now I want to criticise them. Newell and Simon's notion of heuristics — itself a valuable part of their multi-apt, new explanatory way of dealing with and thinking about our intelligence — while necessary, is insufficient, because Newell and Simon were failing to pay attention to other ways in which we constrain the Problem Space and zero in on relevant information, and do that in a dynamically self-organising fashion. Well, what were they failing to notice? They were failing to notice that they had an assumption in their attempt to come up with a theoretical construct for explaining general problem solving. They assumed that all problems are essentially the same. This is kind of ironic! We have a heuristic - one, as you remember, challenged a long time ago by Ockham. We have a heuristic of Essentialism (writes Essentialism on the board, under Heuristic) - this is also a term that has been taken up and, I think, often applied loosely within political controversy and discourse.
The idea of Essentialism is that when I group a bunch of things together with a term — remember Ockham’s idea about how we group things together just by the words we use for them; that's Nominalism — when I group a bunch of things together, they must all share some core properties. They must share an essence. Remember, that's the Aristotelian idea of a set of necessary and sufficient conditions for being something, right? It is of course the case that some things clearly fall into that category. Triangles have an essence: all triangles, no matter what their shape or size, have three straight lines, three angles, and the angles add up to 180 degrees. And if you have each and every one of those, you are a triangle. And there are also natural kinds. One of the things that science does, and we'll see why this is important later, is discover those groupings that have an essence. So all gold things have a set of properties that are essential to being gold. And we'll talk about why that's the case later.
Not everything we group together — and this was famously pointed out by Wittgenstein — has an essence. For example, we call many things ‘games’. Now, what set of necessary and sufficient conditions do all and only games possess? “Well… Oh! They involve competition!”, “Well, there are many things that involve competition that aren't games — war — and there are games that don't seem to involve competition, like catch!”, “Oh, well at least they involve other people”, “Solitaire?”, “Oh, um, well, they have to involve imagination!”, “Solitaire?”, “But they have to involve pretence…”, “Catch?? What are you pretending to do?!” And this is the point: you won't find a definition that includes all and only games. And this is the case for many things, like chair, table, et cetera. Remember, this was all part of what Ockham was pointing to, I think!
So the idea is that we come with a heuristic: we treat any category as if it has an essence. But many categories don't have essences. We're going to come back to that shortly, in a few minutes, when we talk about categorisation. Why do we use this heuristic? Because it makes us look for essences. Why do we want to look for essences? Because this allows us to generalise and make very good predictions. Yes, I can over-generalise. But I can also under-generalise! That's also a mistake. So we use this heuristic because it's adaptive. It's not algorithmic, because there are many categories that don't have essences. Newell and Simon thought this category (problems) had an essence - that all problems are essentially the same - and therefore that they could come up with one basic strategy: if all problems are essentially the same, then to make a general problem solver, I basically need one problem solving strategy. I just have to find the one essential strategy - I may have to make variations on it, but I have to find the one essential problem solving strategy. And because of this, how you formulate a problem, how you set it up to try and apply your strategy, how you represent the Initial State, the Goal State, the Operators, the Path Constraints - that's trivial, right? That's not important, because if all problems are essentially the same, you're going to be applying basically the same problem solving strategy. Both of those assumptions were in fact being driven by a psychological heuristic of essentialism. Essentialism isn't a bad thing - at least when we're talking about it as a cognitive heuristic. It shouldn't be treated algorithmically, but we shouldn't pretend that we can do without it.
Well- And Ill-Defined Problems
Now, if Newell and Simon were right about this, then of course these aren't problematic assumptions (that problems are of one essential kind and that problem formulation is trivial). But they're actually wrong about it, because there are — and many people have converged on this at different times and using different terms — fundamentally different kinds of problems, and there are different ways in which there are different kinds of problems. I just want to talk about a central one that's really important to your day to day life. This is the distinction between well-defined problems and ill-defined problems.
In a well-defined problem I have a good and effective guiding representation of the Initial State, the Goal State and the Operators, so that I can solve my problem. So I take it that for many of you, that problem I gave earlier (writes 33 x 4 on the board again) — and there is a relationship between something being well-defined and being algorithmic; they are not identical, but there is a relationship — for many of you, that should be a well-defined problem. You can tell me your Initial State: this is a multiplication problem. And that gives you useful guiding information. You know a lot of things by knowing your initial state. You know what the Goal State should look like: this should be a number when I'm done, and that number (the answer) should be bigger than these two numbers. The most beautiful picture of all time of a platypus does not count as an answer. You know what the operations are - singing and dancing are irrelevant to this. This is well-defined, and a lot of your education was getting you to practice making whole sets of problems well-defined, and part of what psycho-technologies do is make problems well-defined for us - like literacy and numeracy, mathematics. And because of that power, and because of their prevalence in our education, we tend to get blinded and we tend to think that that's what most problems are like. And that means we don't pay attention to how we formulate the problem, because the problem is well formulated for us precisely because it is a well-defined problem. But most of your problems are ill-defined problems. In most of your problems, you don't know what the relevant information about the Initial State is, you don't know what the relevant information about the Goal State is, you don't know what the relevant Operators are. You don't even know what the relevant path constraints are.
You're sitting in lecture, perhaps at the university, and you've got this problem: “Take good notes”. Okay, what's the initial state? “Well, I don't have good notes!”, “And?”, “Uh, well, um, yeah. Okay. Okay……!” So what should I do? And all you'll do is give me synonyms for relevance realisation: “I should pay attention to the relevant information, the crucial information, the important information…”, “And how do you do that?”, “Uh, well, you know, it's obvious to me, or it stands out to me!”, “Great! But how? How would I make a machine be able to do that? What are the operations?”, “Oh, I write stuff down!” Do I just write stuff down? Or do I draw, do I make arrows? Do I write everything down? “Well, no, I don't write everything down and I don't just…” What are the operations? Does that mean everybody's notes will look the same? No, when I do this in class everybody's notes look remarkably different! So what are the Operations, and what does the Goal State look like? “Well, good notes!”, “Great! What are the properties of good notes?”, “Well, they're useful!”, “Why are they useful?”, “Well, because… Oh, because they contain the relevant information connected in the relevant way that makes sense to me, so that I can use it to…”, “Yeah, right… I get it…”!
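To make the contrast concrete, here is a toy sketch (my own illustration, not from the lecture or the experiments) of why 33 times 4 is well-defined: the initial state, the goal test and the single relevant operator can all be written down explicitly, which is exactly what you cannot do for “take good notes”.

```python
# A well-defined problem: everything needed to solve it is explicitly given.
initial_state = (33, 4)               # "this is a multiplication problem"
apply_op = lambda a, b: a * b         # the one relevant operation

def goal_test(n):
    # the answer should be a number bigger than both of the given numbers
    return isinstance(n, int) and n > 33 and n > 4

answer = apply_op(*initial_state)
assert goal_test(answer) and answer == 132

# An ill-defined problem: "take good notes".
# We cannot write down the initial state, the operators, or the goal test
# without already doing relevance realisation; that is the whole point.
```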
What's actually missing in an ill-defined problem is how to formulate the problem: how to zero in on the relevant information and thereby constrain the problem so you can solve it. So what's missing, and what's needed to deal with your ill-defined problems and turn them into something like well-defined problems for you, is good problem formulation, which involves, again, this process of zeroing in on relevant information: Relevance Realisation. And you see, if they had noticed this, if they had noticed that this bias made them trivialise formulation, they would have realised that problems aren't all essentially the same, and they would have realised the important work being done by problem formulation. And that would have been important, because that would have given them another way of dealing with the issue of combinatorial explosion.
Let me show you. So we already see that the Relevance Realisation that's at work in problem formulation is crucial for dealing with real world problems. Taking good notes - that's an ill-defined problem. Following a conversation - that's another ill-defined problem: “Well, I should say things!”, “What things?”, “Well, oh…”, “When?”, “Well, when it's appropriate…”, “How often?”, “Well, sort of…”. Tell a joke. Go on a successful first date. All of these are ill-defined problems. Most of your real world problems are ill-defined problems. So you need the Relevance Realisation within good Problem Formulation to help you deal with most real world problems; already, Problem Formulation is crucial! But here's something that Newell and Simon could have used, and in fact Simon comes back and realises it later, in an experiment he does with Kaplan in 1990 (writes Kaplan and Simon 1990 on the board). And I want to show you this experiment, because I want to show you precisely the power of problem formulation with respect to constraining the problem space and avoiding combinatorial explosion. I need to be able to deal with ill-defined problems to be genuinely intelligent. I also, as we've already seen, have to be able to avoid combinatorial explosion. That has something to do with relevance realisation, and that has a lot to do, as we've already seen, with Problem Formulation.
The Mutilated Chess Board
Let me give you the problem that they used in the experiment. This is called the Mutilated Chessboard example, right? There are eight columns and eight rows, and so we know that there are 64 squares. Now, because these are all squares, if I have a domino and it covers two squares — it'll cover two squares equally whether I put it horizontally or vertically — how many dominoes do I need to cover the chess board? Well, that's easy: 2 goes into 64… I need 32. 32 dominoes will cover this without overhang or overlap. Now I'm going to mutilate the board. I'm going to remove this piece and this piece (two diagonally opposite corner pieces). How many squares are left? 62. There are 62 squares left. So I've now mutilated the chessboard. Here's the problem: can I cover this with 31 dominoes without overhang or overlap? And you have to be able to prove — deductively demonstrate — that your answer is correct.
Many people find this a hard problem. They find it a hard problem — perhaps you're finding it hard right now — because they formulate it as a covering problem. They're trying to imagine a chess board and they're trying to imagine possible configurations of dominoes on the board. So they adopt a covering formulation of the problem, a covering strategy, and they try to imagine it. That strategy is combinatorially explosive. So, famously, there was somebody — one of the participants in one of the experiments — who was trained in mathematics and was doing this topographical calculation, and they worked on it for 16 to 18 hours and filled 81 pages of a notebook. And they didn't come up with a solution! Why? Because if you formulate this with a covering strategy, you hit combinatorial explosion. The problem space explodes and you can't move through it. And that's what happened to that participant. It's not because they lacked logical or mathematical abilities. In fact, it was precisely because of their logical and mathematical abilities that they came to grief. Now, you should know by now that I am not advocating for romanticism… “Oh, just give up logic and rationality…” That's ridiculous. You've seen why I'm critical of that as well. But what I'm trying to show you, again, is that you cannot be comprehensively algorithmic.
Okay. So if you formulate this as a covering problem, you can't solve it. Let's reformulate it. And you can't quite see this on the diagram (the drawing on the board), but you'll be able to see it clearly in the panel that comes up (clear, onscreen picture of a chessboard). These squares (the two diagonally opposite corners that were removed) are always the same colour on a chess board. In fact, that's not hidden in the diagram used in the actual experiment - it's clearly visible: these squares are always the same colour. You say “so what?” Right! That's the point! You can see them, but they're not salient to you in a way that makes a solution obvious to you. They’re not salient to you! They're there, but they're not standing out to you in a way that makes a solution obvious to you.
Let's try this: if I put this domino on the board, whether I put it horizontally or vertically, I will always cover a black and a white square. Always. There is no way of putting it on the board that will not cover a black and a white square. So in order to cover the board with dominoes, I need an equal number of black and white squares. I must have an equal number of black and white squares. That must be the case, but these squares (the two removed) are the same colour! Is there now an equal number of black and white squares? No! Because these removed ones are the same colour. There's not an equal number of black and white squares.
I must have an equal number of black and white squares. I know for sure — because these two are the same colour — that I do not have an equal number of black and white squares. Therefore I can prove to you that it is impossible to cover the board with the dominoes.
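Here is a minimal sketch of the parity formulation in code (my own illustration, not the materials from Kaplan and Simon's experiment): instead of enumerating an astronomical number of domino placements, it just counts the colours and the impossibility drops out immediately.

```python
def equal_colour_counts(removed=((0, 0), (7, 7))):
    """Parity check: each domino always covers one black and one white square,
    so a full cover requires equal numbers of each colour. The two removed,
    diagonally opposite corners are the same colour, so the counts cannot match."""
    black = sum(1 for r in range(8) for c in range(8)
                if (r + c) % 2 == 0 and (r, c) not in removed)
    white = 64 - len(removed) - black
    return black == white

print(equal_colour_counts())  # False: covering the mutilated board is impossible
```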
If I go from formulating this problem with a covering strategy, which is combinatorially explosive, to using a parity strategy, in which the fact that these two squares are the same colour is salient to me, such that now a solution is obvious — now it's obvious that it's impossible — I go from not being able to solve the problem, because it's combinatorially explosive, to a search space that collapses (clicks fingers) and I solve the problem. This is why the phenomenon we've been talking about when we talked about flow and different aspects of higher states of consciousness is so relevant. This capacity to come up with good problem formulation — problem formulation that turns ill-defined problems into well-defined problems for you; problem formulation that takes you from a self-defeating strategy, because of combinatorial explosion, to a formulation that allows you to solve your problem — that's insight. That's insight. That's why the title of this experiment is “In Search of Insight”. That's exactly what insight is. It is the process by which bad problem formulation is converted into good problem formulation.
That's why insight, in addition to logic, is central to rationality. And in addition to any logical techniques that improve my inference, I have to have other kinds of psycho-technologies that improve my capacity for insight. And we've already seen that that might have to do with things like mindfulness - because of mindfulness’ capacity to give you the ability to restructure your salience landscape. So we're starting to see how Problem Formulation and Relevance Realisation are actually central to what it is for you to be a real-world problem solver: avoiding combinatorial explosion, avoiding ill-definedness. We're going to continue this next time as we continue to investigate the role of Relevance Realisation in intelligence and related intelligent behaviours like categorisation, action and communication.
Thank you very much for your time and attention.
- END -
Episode 27 Notes
Keith Holyoak
Keith James Holyoak is a Canadian-American researcher in cognitive psychology and cognitive science, working on human thinking and reasoning. Holyoak's work focuses on the role of analogy in thinking.
Tim Lillicrap
Timothy P. Lillicrap is a Canadian neuroscientist and AI researcher, adjunct professor at University College London, and staff research scientist at Google DeepMind, where he has been involved in the AlphaGo and AlphaZero projects mastering the games of Go, Chess and Shogi
Blake Richards
Blake Richards is an Assistant Professor in the Montreal Neurological Institute and the School of Computer Science at McGill University and a Core Faculty Member at the Quebec Artificial Intelligence Institute.
Stanovich
Keith E. Stanovich is Emeritus Professor of Applied Psychology and Human Development, University of Toronto and former Canada Research Chair of Applied Cognitive Science. His research areas are the psychology of reasoning and the psychology of reading.
Christopher Cherniak
Christopher Cherniak is an American neuroscientist, a member of the University of Maryland Philosophy Department. Cherniak’s research trajectory started in theory of knowledge and led into computational neuroanatomy and genomics.
Pólya
George Pólya was a Hungarian mathematician. He was a professor of mathematics from 1914 to 1940 at ETH Zürich and from 1940 to 1953 at Stanford University. He made fundamental contributions to combinatorics, number theory, numerical analysis and probability theory.
How To Solve It
How to Solve It (1945) is a small volume by mathematician George Pólya describing methods of problem solving
Mr. Spock
Spock is a fictional character in the Star Trek media franchise. Spock, who was originally played by Leonard Nimoy, first appeared in the original Star Trek series serving aboard the starship Enterprise as science officer and first officer, and later as commanding officer of two iterations of the vessel.
Data
Data is a character in the fictional Star Trek franchise. He appears in the television series Star Trek: The Next Generation and Star Trek: Picard; and the feature films Star Trek Generations, Star Trek: First Contact, Star Trek: Insurrection, and Star Trek: Nemesis. Data is portrayed by actor Brent Spiner.
“Data is in many ways a successor to the original Star Trek's Spock (Leonard Nimoy), in that the character offers an "outsider's" perspective on humanity.”
No Free Lunch Theorem
In computational complexity and optimization, the no free lunch theorem is a result that states that for certain types of mathematical problems, the computational cost of finding a solution, averaged over all problems in the class, is the same for any solution method. No solution method therefore offers a "short cut".
Lakatos
Imre Lakatos was a Hungarian philosopher of mathematics and science, known for his thesis of the fallibility of mathematics and its 'methodology of proofs and refutations' in its pre-axiomatic stages of development, and also for introducing the concept of the 'research programme' in his methodology of scientific research programmes.
Wittgenstein
Ludwig Josef Johann Wittgenstein was an Austrian-British philosopher who worked primarily in logic, the philosophy of mathematics, the philosophy of mind, and the philosophy of language. From 1929 to 1947, Wittgenstein taught at the University of Cambridge.
Craig A Kaplan
In Search of Insight - Kaplan and Simon 1990
Other helpful resources about this episode:
Notes on Bevry
Additional Notes on Bevry