However, once I've solved the puzzle, the daily dose of frustration is reading Wordle Bot's analysis of how good my approach to solving it was.
To provide the small bit of necessary context if you do not play Wordle: you have six chances to guess a five-letter word. When you enter a guess, you receive feedback on each letter: green means you have the right letter in the right place, yellow that the letter appears in the word but not at that place, and grey that the letter does not appear in the word at all.
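For readers who like to see the rule written down, that per-letter feedback can be sketched in a few lines of Python. The function name and labels here are my own inventions, not Wordle's actual code; the two-pass structure reflects how the real game handles repeated letters (a guess can't earn more yellows for a letter than the answer contains):

```python
# A sketch of Wordle's green/yellow/grey feedback (hypothetical names,
# not Wordle's actual implementation).
def score_guess(guess: str, answer: str) -> list:
    result = ["grey"] * 5
    leftover = []  # answer letters not matched green, available for yellows

    # First pass: greens (right letter, right place).
    for i in range(5):
        if guess[i] == answer[i]:
            result[i] = "green"
        else:
            leftover.append(answer[i])

    # Second pass: yellows (right letter, wrong place), each answer
    # letter usable at most once.
    for i in range(5):
        if result[i] != "green" and guess[i] in leftover:
            result[i] = "yellow"
            leftover.remove(guess[i])
    return result

print(score_guess("crane", "cocoa"))  # → ['green', 'grey', 'yellow', 'grey', 'grey']
```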
As you guess, you eliminate possibilities and gain clues: the word does not contain an N; it does contain an A, but not in the place you guessed it; and so on.
From a human point of view, I do what seems like the most systematic thing: I try to establish which vowels are in the word first, and I also try to start with the most common consonants. As I find out which letters are in the word, and where they are (or are not) in the five slots available, I'm able to narrow down my guesses.
Once I know things like "the word contains an A in the second place out of five, and it contains an S but not in the first slot," I can shape my guesses and zero in on the word.
In theory, the Wordle Bot, which I can use after completing the puzzle to analyze how well I solved it, provides feedback on the process. But in fact, it mostly underlines how differently humans and computers solve problems.
Wordle Bot analyzes the problem based on how many words you have eliminated with each guess.
So based on your second turn, it might look at a situation in which you've established that A, N, I, M, P, T, and S are not in the word, O is in the second place, and E is in the word, but not in the middle or the last place, and tell you that you've eliminated all but 58 possible words.
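That kind of elimination is easy to state directly in code. Here is a minimal sketch of the brute-force count under exactly those constraints; the tiny word list and all the names (`still_possible` and so on) are illustrative inventions, not Wordle Bot's actual dictionary or implementation:

```python
# A sketch of the kind of elimination Wordle Bot performs. WORDS is a
# stand-in word list; the real dictionary is far larger.
WORDS = ["hover", "stone", "ombre", "dozed"]

absent = set("animpts")   # letters shown grey: A, N, I, M, P, T, S
greens = {1: "o"}         # O confirmed in the second slot (index 1)
yellows = {"e": {2, 4}}   # E is in the word, but not middle or last

def still_possible(word: str) -> bool:
    # Grey letters must not appear anywhere in the word.
    if any(letter in word for letter in absent):
        return False
    # Green letters must sit exactly where they were confirmed.
    if any(word[i] != letter for i, letter in greens.items()):
        return False
    # Yellow letters must appear, but not in their banned slots.
    for letter, banned in yellows.items():
        if letter not in word or any(word[i] == letter for i in banned):
            return False
    return True

candidates = [w for w in WORDS if still_possible(w)]
print(len(candidates), candidates)  # → 2 ['hover', 'dozed']
```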
Now, to defend Wordle Bot, this is a decent metric for measuring how your puzzle solving is going. S is a much more commonly used letter than Z, so establishing that S is not in the word tells you much more about the word than establishing that Z is not in it. The "possible words remaining" statistic will reflect that.
But for a human, that "how many words are left" metric is just that, a metric. You're not thinking, "If I guess STONE, I will eliminate more words than if I guess DROSS," though you may be thinking, "If I can eliminate S, T, N, and E, I will know more than if I just eliminate D, R, and S."
Wordle Bot's machine approach to the problem is brute-force analysis: it has calculated the total number of five-letter words in its dictionary, and with each possible guess it calculates the number of words still possible.
There may be some human somewhere who can calculate the number of five-letter words that don't include A, N, I, M, P, T, or S, have O in the second place, and have E but not in the middle or last place, but that person is so quantitative as to seem borderline insane to the rest of us clothed primates.
The human approach to solving this kind of puzzle is more intuitive. We know some quantitative things, such as that S, T and R are very common consonants and that vowels give a clear shape to a word as well as being very common. But with that information we are solving the problem more intuitively. We're taking possibilities off the table, and then at some point we have the "aha" moment where knowing something like "The word is _OVEL and that first letter isn't N" makes the answer jump out to us.
Some of the neural-network-style machine models behind modern LLMs, image recognition, and other forms of "artificial intelligence" are based on machine versions of this more intuitive approach. But as of now even these are fairly alien to the nested layers of learned intuition which power human thinking. Image recognition programs have become very good, but some of the odd mistakes we still run into at times arise because machine models are in some sense brute-forcing all types of questions at once: Is this image a cat? Is it a stop sign? Is it a bicycle? Is it the letter J?
I suppose you could say that wide open question is addressed by humans so quickly that we're unconscious of it. But another way would be to say that we start much further down the decision tree, and yet with less ability to count the possibilities.
If you show a picture of an animal to a child and ask, "Is this a cat?" the child (at least over a certain age) doesn't start out by eliminating the possibility that the picture is actually a fire hydrant. Nor does the child have an encyclopedic knowledge of how many animals or things the image could show, eliminating them one by one ("does not have upright posture and huge back feet, so it's not a kangaroo"). Instead, the child leaps pretty quickly to an idea of what a cat looks like, compares a few features, and reaches a judgement.
Machine problem solving tends to be focused on the things that machines are really good at: working through lists of things very rapidly and performing logical tests.
Human problem solving tends to be focused on what we as creatures have become very good at: pattern recognition and intuition.
Even as we make more progress in building "thinking machines," I would imagine there will always be this difference in flavor between the two approaches.