(P1 (WANTS JOAN APPLE))
The parts:
1. P1: This is the proposition number. Propositions
can be embedded in propositions, as in (P2 (TIME:IN P1 YESTERDAY)),
where P2 embeds P1 by its number. The numbering system acts as an
embedding shorthand
2. WANTS: A relation. The word is in all caps to
indicate that it's a concept, not a word. So, propositions are supposed
to be in the "language of thought", not tied to a particular language.
WANTS could just as easily be AS1295D
3. JOAN APPLE: Arguments. These are what the relation
is about. Some relations need two arguments, some need just one
An example of a text and its propositional representation is on the
overhead at the end of the notes
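The notation above can be sketched in code. Here is a minimal illustration (the tuple-and-dictionary representation is my assumption, not the model's actual data structure) of propositions with embedding by proposition number:

```python
# A hedged sketch of propositional notation: each proposition is a
# labeled tuple of (RELATION, argument...). Relations like WANTS are
# concepts, not words, so any unique token would serve. Embedding works
# by referring to another proposition's number.

propositions = {
    "P1": ("WANTS", "JOAN", "APPLE"),       # (P1 (WANTS JOAN APPLE))
    "P2": ("TIME:IN", "P1", "YESTERDAY"),   # P2 embeds P1 by its number
}

def expand(label, props):
    """Recursively replace embedded proposition numbers with their content."""
    relation, *args = props[label]
    return (relation,) + tuple(
        expand(a, props) if a in props else a for a in args
    )

print(expand("P2", propositions))
# → ('TIME:IN', ('WANTS', 'JOAN', 'APPLE'), 'YESTERDAY')
```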
C. Make a text base: The propositions are parsed into chunks.
Chunks are units of text that are roughly about the same thing. Usually,
they correspond to the sentences. This process takes a chunk and
tries to connect up the propositions into a single structure that looks
like a tree. There's an example in the back of the notes. This
process has several subprocesses:
1. Read in a chunk
2. Try to connect it to what's in memory. If it's the first
chunk, you can start fresh, otherwise, you try to tie new information in
with existing information
3. If you can't make a connection, do a memory search to find
a related proposition. This way, you're still making a connection
4. If you can't find anything in memory, make an inference to
connect the two chunks
5. Once you've got a start, relate all of the propositions in
the chunk to one another. Because of the way chunks are made, you're
always going to be able to link up all of the propositions in the chunk
6. Choose what to retain in memory. As in all models, short
term memory is limited, and you can't keep everything. Usually, you
have to choose the top 2 or 3 propositions. This is done using what's
called a "leading edge" strategy. The steps:
a. If any proposition you're keeping has an embedded proposition,
keep that too. This is to prevent situations where you know that Joan
wants something, but you don't know what
b. Starting at the top, take the most recent proposition from
each level until you run out of capacity (it takes the front edge of the
tree)
7. Start with a new chunk (connecting it to the propositions
you kept from the previous chunk)
8. Repeat until you're out of text
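The leading-edge retention in step 6 can be sketched concretely. This is my reconstruction, assuming each proposition carries a tree level and a recency index; the model's actual bookkeeping is richer:

```python
# A sketch of the "leading edge" retention strategy (step 6), under the
# assumption that each proposition is a (name, level, recency) tuple and
# `embedded` maps a proposition name to the propositions it embeds.

def leading_edge(props, embedded, capacity):
    """Keep the most recent proposition at each level, starting at the
    top of the tree, plus anything a kept proposition embeds (step 6a)."""
    kept = []
    for level in sorted({p[1] for p in props}):        # top level first
        at_level = [p for p in props if p[1] == level]
        newest = max(at_level, key=lambda p: p[2])     # most recent (step 6b)
        for p in [newest] + embedded.get(newest[0], []):
            if p not in kept:
                kept.append(p)
        if len(kept) >= capacity:                      # out of capacity
            break
    return kept[:capacity]

props = [("P1", 0, 1), ("P2", 1, 2), ("P3", 1, 3), ("P4", 2, 4)]
print(leading_edge(props, {}, 3))
# → [('P1', 0, 1), ('P3', 1, 3), ('P4', 2, 4)]
```

Note how the result traces the "front edge" of the tree: the newest proposition at each level, top down.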
D. Form global concepts: You have to know what proposition
to use in building a structure from a chunk. The global concept is
the best one to use as the root because everything will eventually relate
to it. This process identifies these concepts
E. Make the connections between the chunks. You fill in
the inferences or tie the chunks together at the points of overlap to produce
a unified representation of the whole text
F. Some additional notes:
1. The more times a proposition gets carried over in working
memory, the easier it will be to remember it later. This is nice
because the topics usually get carried over, and they're also easiest to
remember. That's consistent with people's memory of text
2. Readability is characterized by properties of the text and
properties of the reader. For example, if the text is well constructed,
but your memory is low, then you'll have a hard time building structures.
Or, if the text is poor, but memory is high, you'll still struggle
3. How do they test this model? They present the passages
to the model and look at its memory. The model's recall of a text
is based on how often a proposition was held over in working memory, how
related propositions are to one another, how easy it was to form structures,
etc.
They also present these passages to human subjects and look at their
recall. If people tend to remember similar propositions in a similar
order, that's evidence that the model is using a similar process
4. This model has all the parts:
a. Levels of representation (we talked about this in the last
unit)
b. Limited working memory capacity (coming up)
c. Strategies to choose what to remember (coming up)
d. Influences of readers' knowledge (coming up)
VIII. Working memory revisited. As you've probably
noticed, working memory plays a big role in almost every model of comprehension,
at every level. In the model we outlined last time, it was where
the local structures were built. Some new stuff:
A. Measuring it: Everyone "knows" that the capacity is
7±2 items. But, there's a problem with this estimate:
It only looks at storage. As you can clearly see, WM is a lot more
than storage. Processing also takes place there. A better way
to measure capacity is to use reading span. In this task, you read
sentences out loud, and you also try to remember the last word in each
sentence. You start with a set of two sentences. If you can
handle that, you go to three, etc. I can demonstrate that if we have
a victim
Using the new method, span usually tests out between three and four
items (although my materials are probably a bit different)
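The scoring logic of the span task can be sketched as follows (`recall_ok` is a hypothetical stand-in for administering one set of sentences):

```python
# A hedged sketch of reading-span scoring: start with sets of two
# sentences and increase the set size until the reader can no longer
# recall every sentence-final word. `recall_ok` is a hypothetical
# stand-in for running one trial.

def reading_span(recall_ok, start=2, max_size=8):
    span = 0
    for set_size in range(start, max_size + 1):
        if recall_ok(set_size):
            span = set_size        # handled this set size
        else:
            break                  # first failure ends the test
    return span

# A reader who handles sets of 2 and 3 but fails at 4:
print(reading_span(lambda n: n <= 3))   # → 3
```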
B. What happens when items are lost from WM? Originally,
people thought they decayed (sort of like rusting, it just happens).
The latest version has a more active explanation for how things leave WM.
When you get an item that you want to include, you have to gather some
activation (akin to "mental energy") that you can use to represent it.
Putting something in WM takes mental effort. If you
don't work at it, it won't happen. Unfortunately, the amount
of mental energy that you have at your disposal is fixed (when we talk
about WM capacity, that's the capacity)
Let's say you're processing this sample of text:
The plate is on the table.
The spoon is left of the plate.
The fork is behind the spoon.
The cup is right of the fork.
Let's further assume that your WM capacity is three things (plus processing
load). (This will make more sense if you follow along with the attached
overhead.) When you get the first proposition, you can give it all
of your storage capacity. When the next proposition comes in, you
have to steal some activation to represent it. So, they both get
in, but they're each half as strong as the first one was. Then the
third proposition comes in. You steal some more activation, and put
it in
When the fourth proposition comes along, you're out of juice.
Now, when you steal activation, something has to go. Usually, what
goes is the oldest piece of information. In the Kintsch model, this
happened when the local structure was built and then you had to cut it
down. The process of deleting nodes was akin to stealing enough activation
to do the processing on the next chunk of text. The leading edge
strategy was to take the most recent information at each level (after embedded
propositions were selected). There are other ways that this can be
done
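The plate/spoon/fork/cup walk-through can be sketched as code. The equal-sharing rule and the displacement of the oldest item are simplifying assumptions on my part; the numbers are illustrative:

```python
# A toy sketch of the activation-stealing account: a fixed pool of
# activation is shared among the items in WM, and when capacity is
# exceeded, stealing activation displaces the oldest item.

def add_to_wm(items, new_item, capacity=3):
    items = items + [new_item]
    if len(items) > capacity:
        items = items[1:]           # the oldest proposition is displaced
    strength = 1.0 / len(items)     # the fixed pool is shared equally
    return items, strength

wm = []
for prop in ["PLATE-ON-TABLE", "SPOON-LEFT-OF-PLATE",
             "FORK-BEHIND-SPOON", "CUP-RIGHT-OF-FORK"]:
    wm, strength = add_to_wm(wm, prop)

print(wm)        # the first proposition has been displaced
print(strength)  # each survivor gets a third of the pool
```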
IX. Strategies for choosing information. Fletcher
(1986) looked at a number of strategies that readers might use to choose
propositions to retain. He did this with a think-aloud method.
In other words, as a person read a text (trying to remember it), they were
asked to say out loud everything that they were thinking. Here's
a text like the ones he used; can I get a volunteer to try to remember
it? (See overhead for text.) We'll also need a scribe to take
notes on their responses
What do we mean by strategies? What you choose is governed by
a complex process that takes into account prior knowledge of an area, the
goals for reading, decoding ability, memory capacity, etc. Under
different conditions, you choose different kinds of information.
To see this in action, let's get our volunteer to read another sample of
text (again, record thoughts)
As you might have noticed, different things are done. Fletcher
identified four local strategies and four global strategies. I'm
planning to focus on local ones (building local structures):
A. Recency: A person might retain in their tree as many
of the most recent propositions as they have room for
B. Sentence topic: A topic is:
1. First person or object mentioned
2. Referred to using a pronoun, a definite article, or a proper
name
3. What the sentence is about
The basic strategy is to hold the topic and propositions related to
it
C. Leading edge: Hold the most recent proposition at each
level in the tree. If you look at a graph of a tree, it's like taking
the right edge of the graph (hence the name)
D. Frequency: Take all of the most frequent propositions
that you can hold
Fletcher found from looking at what people choose to rehearse that
sentence topic = frequency > leading edge > recency. This tells us
that people are sensitive to high level information when choosing what
to represent
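Two of the four local strategies are easy to sketch. The (name, recency, frequency) tuple format is my assumption, and Fletcher's (1986) actual scoring is more involved:

```python
# Hedged sketches of two of Fletcher's local retention strategies,
# assuming each proposition is a (name, recency, frequency) tuple.

def recency_strategy(props, capacity):
    """Retain as many of the most recent propositions as fit."""
    return sorted(props, key=lambda p: p[1], reverse=True)[:capacity]

def frequency_strategy(props, capacity):
    """Retain the most frequently mentioned propositions that fit."""
    return sorted(props, key=lambda p: p[2], reverse=True)[:capacity]

props = [("P1", 1, 5), ("P2", 2, 1), ("P3", 3, 2), ("P4", 4, 1)]
print(recency_strategy(props, 2))    # → [('P4', 4, 1), ('P3', 3, 2)]
print(frequency_strategy(props, 2))  # → [('P1', 1, 5), ('P3', 3, 2)]
```

Note that the two strategies keep different propositions from the same input, which is why think-aloud rehearsal data can tell them apart.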
X. Readers' knowledge. You have a lot of world knowledge
that you can bring to bear. One type of knowledge is script knowledge.
For a lot of overlearned event sequences, you know what typically happens.
Imagine going to the doctor's office. Write down what happens when
you visit the doctor. (Some people read theirs so we can compare.)
When you're trying to understand a story, one thing you're doing is matching
it to a script. If you know a lot about doctors, you can fill in
details from the story to improve comprehension. You can also use
the script to help you remember the story later
A. A demonstration of the influence of top-down knowledge on
comprehension: Hocked gems passage
B. A problem is: If all you're doing is remembering the
script, how do you tell individual events apart? An easy answer is
that you usually don't. What did you have for breakfast a year ago
today? Most people have no idea how to answer this question.
In experiments, people will frequently "remember" things that are typical
of a script, but not in the text they read. However, even though
people make mistakes, they can usually remember some details. McKoon
and Ratcliff propose Memory Organization Packets (MOPs) to explain this
1. What is a MOP? It's a dynamically constructed set of
scenes directed towards a goal. A scene is something like ordering
in a restaurant. Each MOP is constructed on the fly during reading
2. Where do the scenes come from? They're based on scripts.
The idea is that you borrow from the script whatever is available.
You remember deviations by marking them in the script event and then adding
additional propositions. For example, if you are reading about eating
at a restaurant and you want to pay with a check, you tag the "pay after
you eat" part of the script. The tag is a pointer to a proposition
like (PAY WITH CHECK). The process is like opening a file on an event
and adding to it as the event goes on. When you're through, you close
the file, and it goes into episodic memory
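The file-opening metaphor can be sketched as code. The data structures here are illustrative, not McKoon and Ratcliff's notation:

```python
# A toy sketch of the MOP idea: scenes are borrowed from a script, and
# deviations are recorded as tags pointing at extra propositions.

RESTAURANT_SCRIPT = ["enter", "order", "eat", "pay-after-eating", "leave"]

def build_mop(script, deviations):
    """Open a 'file' on the event: copy script scenes, tag deviations."""
    mop = []
    for scene in script:
        mop.append({"scene": scene, "tag": deviations.get(scene)})
    return mop   # closing the file: this goes into episodic memory

# Reading about paying with a check tags the "pay" scene:
mop = build_mop(RESTAURANT_SCRIPT,
                {"pay-after-eating": ("PAY", "WITH", "CHECK")})
print(mop[3])
# → {'scene': 'pay-after-eating', 'tag': ('PAY', 'WITH', 'CHECK')}
```

Untagged scenes carry no extra propositions, which is why script-typical details are so easily "remembered" whether or not they were in the text.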
3. Why don't you remember them after a delay? It's the
same problem as anything else in episodic memory. You get too many
instances of similar things coded in a very weak way. As you keep
piling on experiences with a situation, it gets harder and harder to tell
them apart. The main factor in the forgetting is interference from
other experiences, not time itself
4. Let's assume you're trying to test this theory, what effects
should MOPs have on comprehension? What can we detect?
a. Events from two stories with the same MOP should prime one
another if they're consistent with the script actions. So, if two
stories are about going to the beach, and in one a person rubs lotion on
her skin while in the other she splashes oil on her skin, the two events
will be primed equally
b. Events from two stories with the same MOP will not prime one
another if they're not consistent with the script actions.
If two people go to the beach and one looks up at the swallows nesting
in the cliffs, that event shouldn't be primed by the other story and it's
not
C. It's not just scripts that can provide the necessary knowledge
to make a text comprehensible. Read this passage: (overhead)
Now read it again with this picture (overhead)
XI. Mental models. So far, our readers are extracting
propositions, building local structures, identifying topics, and making
global structures. Is there anything else going on? Yes.
In addition to all of this, readers are forming mental models of the events
in a text. A mental model is a representation of what the text is
about, not the text itself. So, it goes beyond propositions (a representation
of the text)
A. Where is it? Remember that we talked about divisions in
working memory, with two main branches: the articulatory loop and the
visuo-spatial sketchpad. The loop holds auditory information
(either from hearing speech or from recoding written text). The sketchpad
is for images, processing pictures, and doing spatial things like moving
your eyes across the page and processing visual features of text.
It's not so much a place as a kind of mental energy devoted to spatial
processing. All of the stuff we've discussed so far goes on in the
loop; the mental model is in the sketchpad
B. What do I mean by a model? Read these sentences about
turtles and logs. Here are the situations. If you read the
first one and I give you the second one to verify, what will happen?
Now, look at these two. Here are the situations. If I give
you the first one and ask you to verify the second one, what will happen?
The difference in terms of propositions is very slight. In fact,
the difference that distinguishes the sentences in the two situations is
the same. So, there must be some additional level of representation
that explains this. That's the mental model
C. Can we tie it all together in a single model of processing?
Let's think about that for a minute