DUAL CODE SYSTEMS

People have an attentional bottleneck and limited cognitive resources. In a perfect world, we would notice every stimulus, process it, and catalog (remember) it accordingly. Unfortunately, we often just don't have the time to give things as much thought as they require. The upside is that a lot of what happens in life doesn't need that much attention. We can get away with easy, rote answers to problems similar to ones we've solved numerous times before.

There are two ways of measuring memory: recall and recognition. Recognition is a two-level process.

Recognition: The first level is quick and easy. We rely on a heuristic to decide how to proceed. This is often the availability heuristic (we sample what is recent, prototypical, and emotionally charged). We look at an object and ask: is this familiar? We get a quick, gut reaction. If it's not familiar, we don't recognize it. On the other hand, a mere sense of "familiarity" can lead to false recognitions. A ticket taker points to a man he thinks robbed him because the man looks familiar; it turns out the man is just a soldier who has bought tickets from him in the past.

The problem with the first level of recognition arises when nothing stands out as "familiar." While taking a multiple-choice test, if all the answers seem equally good, one has to go to the second level of processing: slow, methodical recall.

Recall is better if something is overlearned and has many connections/memories attached to it. It's easier to re-learn something the second time: if the connections are there, one is not starting from scratch.

In the feature model, concepts have a list of features: defining features (features an object must have to belong to the category) and characteristic features (features an object could have but that are not required). The theory is that it's easier to put something in a category if it has all of the category's defining features. For example, we compare a ROBIN, which has feathers, a beak, and two legs, with what we know of a typical BIRD, which also has feathers, a beak, and two legs. The defining, essential features are feathers and a beak; having two legs is a characteristic (non-essential) feature.

In order to answer a question like "Is a robin a bird?" we scan the two sets of features and look for overlap. This is a stage 1 process: quick and parallel, and it generates a fast answer. When you're faced with an intermediate case whose overlap doesn't clearly settle the question, you have to kick into the second stage: slow, serial processing. "Is a penguin a bird?" should be a slower yes. A robin shares most of the features we associate with a typical bird; a penguin shares far fewer, so the quick comparison isn't enough.
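To make the two stages concrete, here is a minimal sketch in Python. The feature lists and threshold values are made-up assumptions for illustration, not numbers from the psychological model itself: a fast check of overall feature overlap answers the easy cases, and only the ambiguous ones fall through to a slower check of the defining features.

```python
# Toy sketch of the two-stage feature comparison; features and thresholds are
# illustrative assumptions, not values from the actual psychological model.

BIRD = {
    "defining": {"feathers", "beak"},
    "characteristic": {"two legs", "flies", "small"},
}

ROBIN = {"feathers", "beak", "two legs", "flies", "small"}
PENGUIN = {"feathers", "beak", "two legs", "swims"}

def is_member(instance, category, high=0.8, low=0.4):
    all_features = category["defining"] | category["characteristic"]
    overlap = len(instance & all_features) / len(all_features)

    # Stage 1: overall similarity is clearly high or clearly low -> fast answer.
    if overlap >= high:
        return True, "fast yes (stage 1)"
    if overlap <= low:
        return False, "fast no (stage 1)"

    # Stage 2: intermediate overlap -> slow, serial check of defining features only.
    return category["defining"] <= instance, "slow answer (stage 2)"

print(is_member(ROBIN, BIRD))    # (True, 'fast yes (stage 1)')
print(is_member(PENGUIN, BIRD))  # (True, 'slow answer (stage 2)')
```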

There is a dual code theory of imagery. The theory was devised to explain the phenomenon that concrete words (e.g., "flag") are learned more easily than abstract words (e.g., "democracy"). When people try to imagine "democracy," they imagine something concrete like a flag. Concrete words have a cognitive advantage over abstract words because:
  • Concrete words are represented both in the language system and in the image system (e.g., "brick").
  • Abstract words are represented only in the language system (e.g., "love," "hate"). They're difficult to define, but they're communicable.

Abstract words are words we use because of context, not because we've memorized a definition. After brain damage, people are more likely to retain concrete words than abstract words. Concrete words are learned younger, at a basic level. If we pair images with words and words with images, this helps with memory retrieval in the future: we have two paths to get to the same concept.

There are two subsystems of human cognitive processing that work simultaneously: one dealing with verbal material (linguistic information) and one dealing with visual stimuli (images and pictures). Although they're processed independently, they're connected in memory. For example, when we watch a narrated nature program on television, we can look at and process the pictures while we're hearing and processing the words. These happen simultaneously and don't interfere with one another; in fact, they help future memory retrieval. The narration will trigger the pictures, and the pictures will trigger the narration.

In this dual code system, the best cue for retrieval is the pairing of a concrete word with the image that represents it. This leads to the quickest reaction times, because each input (word or image) acts as a trigger for the other. An abstract word that is not paired with a concrete image, perhaps because it was learned by rote rehearsal, will have slower reaction times, because the person has to search memory for a definition of the word. There is no image that symbolizes the abstract word.
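One way to picture the two connected subsystems is as two indexes, one verbal and one imaginal, both pointing at the same underlying concept, so that either cue can retrieve it. The sketch below is only an illustration under that assumption; the class name and example entries are invented.

```python
# Illustrative sketch of dual coding: a concept reachable through either
# a verbal cue or an image cue. All names and entries here are made up.

class DualCodeMemory:
    def __init__(self):
        self.by_word = {}   # verbal system: word -> concept
        self.by_image = {}  # image system: mental picture -> concept

    def store(self, concept, word, image=None):
        self.by_word[word] = concept
        if image is not None:        # abstract words may have no image code
            self.by_image[image] = concept

    def recall(self, cue):
        # Either code is a retrieval path to the same concept.
        return self.by_word.get(cue) or self.by_image.get(cue)

memory = DualCodeMemory()
memory.store("a block of baked clay", word="brick", image="mental picture of a brick")
memory.store("rule by the people", word="democracy")   # verbal code only

print(memory.recall("mental picture of a brick"))  # image cue reaches the concept
print(memory.recall("democracy"))                  # only the verbal path exists
```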

Rote rehearsal is repeating something over and over, shallowly, in order to keep it in short-term memory. However, as soon as it's allowed to leave short-term memory, it's gone; it hasn't been processed into long-term memory. This is fast and easy for short-term purposes, such as keeping a phone number in your head just long enough to write it down. If one wants to remember something for the long haul, he or she needs to give the input context and connect it to meaning. This means processing it deeply, thinking about it, and connecting it to earlier memories. For example, if I ever lose my phone and my keys, I have one emergency phone number of a friend to call. Of course, I have to memorize this number because I will be without my pre-programmed phone. I went through his phone number segment by segment and gave all the numbers meanings. I can recite it now and will probably be able to recite it for the rest of my life! Obviously, the latter of the two approaches is preferred here, but it took time and energy to infuse his phone number with that much connecting information. I don't have the time or interest to do that with every phone number in my phone; I choose to spend my cognitive energy elsewhere.

Kahneman and Tversky came up with a two-stage account of judgment and decision-making. The first stage is System 1 processing: it's quick and easy and relies on heuristics (shortcut strategies). One of the most "popular" heuristics is the availability heuristic: we judge the world by what happened to us most recently, what has affected us the most emotionally, and what we encounter most often. Most of the time we rely on this system because it requires few cognitive resources and it does the job just fine. Unfortunately, that efficiency comes at the cost of accuracy.
System 2 is slow and methodical, and it is guaranteed to lead to the right solution if one gives it enough time; sometimes the right amount of time is years or decades. We can see this with problem solvers who are tackling a large, complex problem. Instead of making a snap decision, they apply an algorithmic solution, which is slow, serial, and methodical.
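As a toy contrast between the two systems, the sketch below answers the same question two ways: a System 1 guess that leans on whatever examples come to mind most easily, and a System 2 pass that consults the full records before answering. The headlines, counts, and function names are invented purely for illustration.

```python
# Toy contrast between heuristic (System 1) and exhaustive (System 2) judgment.
# The headlines and yearly counts below are invented for illustration.

recent_headlines = ["shark attack", "shark attack", "plane crash"]            # vivid, recent
yearly_counts = {"shark attack": 5, "car accident": 38_000, "plane crash": 350}

def system1_more_common(a, b):
    # Availability heuristic: whatever comes to mind more easily feels more common.
    return a if recent_headlines.count(a) >= recent_headlines.count(b) else b

def system2_more_common(a, b):
    # Slow and methodical: consult all the records before answering.
    return a if yearly_counts[a] >= yearly_counts[b] else b

print(system1_more_common("shark attack", "car accident"))  # fast but wrong: 'shark attack'
print(system2_more_common("shark attack", "car accident"))  # slower, correct: 'car accident'
```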

Grandmasters do a lot of processing with System 1. They can glance at the pattern of pieces on a chessboard and memorize it; most people can reproduce only a small part of the setup. But grandmasters can do this only if the pieces are in typical game patterns; if the pieces are out of context or placed randomly, they can't memorize them. It depends on the context and the meaning of the placement. Grandmasters have learned from previous games which moves are possible; they know how games are resolved and have been resolved in the past. Grandmasters also do deep processing: they try to look ahead several moves. They don't try to do this more than three or four moves ahead, which is about the number of units, or "chunks," we can keep in short-term memory.

An algorithm may seem like a good choice, since it always leads to the correct answer eventually, but it isn't a great strategy when one is pressed for time. For example, if you're playing chess, you wouldn't use a "brute force" approach, going through the entire game and exploring every possible line of play in detail. As soon as one player moved a piece, the possible continuations would multiply enormously. It's in times like this that you need to seek patterns instead.
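A quick back-of-the-envelope calculation shows why brute force breaks down. A chess position typically allows on the order of 30 to 40 legal moves (the figure of about 35 is a commonly cited estimate, used here only for illustration), so the number of lines of play grows exponentially with each additional move of look-ahead.

```python
# Rough illustration of the combinatorial explosion in exhaustive chess search.
# Assumes ~35 legal moves per position, a commonly cited average.

branching_factor = 35

for depth in (1, 2, 4, 8):
    positions = branching_factor ** depth
    print(f"looking {depth} move(s) ahead: about {positions:,} positions")

# Eight moves ahead is already over two trillion positions, which is why
# players look for familiar patterns instead of searching everything.
```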

When people act unconsciously, they free up cognitive resources to think about and process other things. For example, they've done their bedtime routine so many times that they can brush their teeth and put the cat out all while thinking about an upcoming final exam. However, these routines have to be very systematic. As soon as something shoves one out of the routine (e.g., running out of toothpaste), she's jarred out of it and has to rely on conscious thinking until she solves the problem and can go back to autopilot. Also, with unconscious processing there's the problem of action slips: I'm so engaged in my thoughts that I forget to take my glasses off before applying a huge handful of face cream. That sets me up for a little System 2 problem solving!

Most if not all System 1 processes are quick, easy, and efficient. They free up the mind for higher-level calculation and planning. However, one gives up precision and exactness, and precision and exactness come at the price of time, which sometimes one doesn't have. It's a delicate balance.

According to Kahneman and Tversky, we are not perfect utility maximizers always doing what's best. We think we're rational human beings who will always do the correct thing, but we're swayed by faulty logic. We're swayed by recent events ("a woman got killed on an elevator, so I'm not setting foot on one!"), by emotion (the doctor's visit includes one momentary finger prick, so I conclude that going to the doctor hurts), and by how much something resembles an ideal prototype (it looks like a duck and quacks like a duck: it must be a duck!). We're affected by framing effects (how things are worded and presented) and "man-who" arguments (samples of one). All of these are examples of System 1 processing. They're fast and easy and right most of the time.

One of the best ways to combat default System 1 processing seems to be education, especially in statistics. Once people have more statistical knowledge, they're more aware of the difference between probabilities and mere possibilities.
