Langston, Psychology of Language, Notes 8 -- Semantics and Discourse
 
I.  Goals.
A.  Four levels of meaning.
B.  Literal.
C.  Inferences.
D.  Figurative Language.
E.  Pragmatics.
F.  Outline of a basic discourse model.
G.  Working memory revisited.
H.  Strategies for choosing information.
I.  Readers' knowledge.
J.  Mental models.
 
II.  Four levels of meaning.  We're leaving behind syntax (that tells you the grammatical role of words) and moving to semantics (the meanings).  When we talk about the "meaning" of a sentence, it can actually be represented at four different levels.  What we'll do in this unit is look at the kinds of information represented at each level.  We'll start with literal.  Then we'll talk about going a little beyond literal by adding information to the representation.  Then we'll go way beyond literal by looking at figurative language (the meaning is different from the words).  Finally, we'll talk about pragmatics (the meaning isn't in the words at all).
 
III.  Literal meaning.  This is what the sentence is about in the strictest sense.  This kind of meaning can be broken down as well.
A.  Verbatim meaning:  Basically, memorize the text.  You don't process it or try to understand it, just put it in.  This type of meaning representation is very poor.  In some contexts (like indeterminate descriptions) this is all you have to rely on, so you will see people form verbatim representations.
B.  Deep structure:  It's a fact that people remember the gist of what they read better than the exact words.  One idea for what the gist might be is to remember deep structures as in transformational grammar.  So, if you hear "Alice plays the tuba" you remember something like "Alice play tuba," and if you hear "The tuba was played by Alice" you remember "Alice play tuba + passive".  The "+ passive" part refers to the fact that the sentence you heard was in the passive voice; otherwise, you couldn't tell that from the representation.  There's evidence that when you have people recall sentences, they generally forget transformations more readily than meaning.  Unfortunately, a lot of this work has confounds (like sentence length), and the theory of transformational grammar has undergone a lot of changes, so this representation isn't discussed anymore.
C.  What's replaced the notion of deep structure representation is propositional representation.  Propositions are like the idea units in a text.  For example, the structure of "The professor delivers the boring lecture" would include the following (there's a code sketch of this notation at the end of this section):
P1:  Exists(professor)
P2:  Exists(lecture)
P3:  Boring(P2)
P4:  Deliver(P1, P2)
Evidence:
1.  That you go beyond verbatim representation:
Sachs (1967) presented a story about telescopes.  During reading, if you test immediately after a sentence, people recognize changes in the surface structure of the sentences as readily as changes in meaning.  After 80 syllables, however, people recognize changes in meaning, but not changes in surface structure.
2.  That you have propositions:
Many studies show that what people forget from sentences tends to be whole propositions.  Furthermore, varying the number of propositions per sentence is a much more effective way of increasing the difficulty of a text than varying the number of words.
D.  Images/models:  There's additional evidence that images play a role in comprehension.  For example, lists of concrete nouns are easier to memorize than lists of abstract nouns.  Also, "ease of imaging" is an important determinant of comprehension.
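Here's the code sketch promised above.  If it helps to see the propositional notation concretely, a proposition is just a relation plus arguments, where an argument can itself be a proposition (that's the embedding).  The class name and layout below are mine, purely for illustration, not a standard from the literature:

from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class Proposition:
    # A relation applied to one or more arguments; an argument is
    # either a concept (a string) or another proposition (embedding).
    relation: str
    arguments: Tuple[Union[str, "Proposition"], ...]

# "The professor delivers the boring lecture" from above:
p1 = Proposition("EXISTS", ("PROFESSOR",))
p2 = Proposition("EXISTS", ("LECTURE",))
p3 = Proposition("BORING", (p2,))        # P3 embeds P2
p4 = Proposition("DELIVER", (p1, p2))    # P4 embeds P1 and P2

Nothing here depends on English; the relation names are arbitrary symbols standing for concepts.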
 
IV.  Inferences.  Inferences go beyond literal meaning by adding something to the representation of the text that wasn't in the text.  I'll present a scheme for classifying inferences along four dimensions.
A.  Logical vs. pragmatic inferences:  Some inferences are guaranteed to be correct (if you form them).  So, if you hear "Todd has six apples; he gave three to Susan," it's safe to infer that Todd has three apples left (logical inference).  On the other hand, some inferences are likely, but not guaranteed (pragmatic).  So if you hear "Todd dropped the egg," you don't know for sure that it broke (but it probably did).
B.  Forward vs. backward:  Some inferences are made in advance.  For example, if you hear "John pounded the nail" and infer "John used a hammer," it's forward.  Other inferences are made about past things.  For example, if you read "John pounded the nail.  The handle broke and he smashed his thumb," you need to infer "hammer" to figure out what handle broke.  Backward inferences are generally called bridging because they build a bridge between two parts of the text to explain how you get from one to the other.  Forward inferences are usually elaborative because they elaborate on the text but aren't strictly necessary.
C.  Type of inference:  There are five types here:
1.  Case-filling:  In your case-role grammar you can infer parts that are missing.  For example, if you hear "Father carved the turkey" you can fill in the instrument (like knife).
2.  Event-structure:  If you read "The actress fell from the 14th floor balcony" you might infer the consequence (she died).  Or, you might infer a cause (she slipped).  These are things that flesh out the structure of an event.
3.  Parts:  If I say "Carol entered the room.  The X was dirty," you might infer "the room has an X."  Usually, these are required to make sense of a text.  For example, if you read "He poured the tea and burned his hand on the handle" you need to infer a teapot to make sense of it.
4.  Script:  People have scripts for prototypical event-sequences (like going to a restaurant).  Script inferences are when they fill in missing events with items from the script.
5.  Spatial/temporal:  You can infer relationships between items in a text.  For example, if I say "B is to the left of A," you might infer "A is to the right of B."
D.  Implicational probability:  How strongly the inference is implied by the text.  Some things are much more likely than others.  For example, floors are more likely in rooms than chandeliers.
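Since the scheme classifies every inference along the same four dimensions, you can picture each inference as a record with four fields.  A minimal sketch; the field names and the probability value are mine, purely for illustration:

from dataclasses import dataclass

@dataclass
class Inference:
    content: str      # what gets added to the representation
    logical: bool     # True = guaranteed (logical), False = pragmatic
    direction: str    # "forward" (elaborative) or "backward" (bridging)
    kind: str         # case-filling, event-structure, parts, script, or spatial/temporal
    strength: float   # implicational probability: how strongly the text implies it

# "Todd dropped the egg" -> the egg probably broke:
broke = Inference(content="the egg broke", logical=False,
                  direction="forward", kind="event-structure", strength=0.9)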
 
V.  Figurative language.  The meaning is completely different from the words used to convey it.  I'll just list some of the more common types.
A.  Metaphor:  "John is a pig."  You know John isn't actually a pig; the meaning somehow relates features of pigs to features of John.
B.  Idioms:  "Spill the beans."  It's like a conventionalized metaphor.  In terms of comprehension, it's treated like a big word (maybe).  For example, "spill the beans" = "tell a secret."
C.  Metonymy:  "Washington and the Kremlin are finally talking."  Let some aspect stand for the whole.  For example, we let the fact that the US government is in Washington stand for the whole government.
D.  Colloquial tautologies:  "Boys will be boys."  It's a kind of metonymy where some feature is highlighted.  The evaluation is usually negative (as in "business is business"), but it can also be indulgent; it has to be an evaluation the hearer will accept, so you'll hear "boys will be boys," but not "rapists will be rapists."
E.  Irony/sarcasm:  The words are used to express a situation that's actually the opposite of the words.  For example, if someone's lounging on the couch and you come in and say "Boy, you're working hard."
 
VI.  Pragmatics.  Speaker and hearer's background beliefs, understanding of the context, and knowledge of the way language is used to communicate.  Note that none of this stuff is literally in the message.  Consider:
 
"The councilors refused the marchers a parade permit because they feared violence."  (who fears?)
vs.
"The councilors refused the marchers a parade permit because they advocated violence."  (who advocates?)
 
The information about who fears and who advocates comes from your knowledge about councilors and marchers more than from the sentence itself.
I'm going to lump a lot of diverse language activities under this umbrella for want of a better place to put them.  The thing they have in common is that the meaning is derived as much from external factors as from the message.
A.  Presuppositions:  An assumption or belief is implied by the choice of a particular word.  Consider:
 
"Have you stopped exercising regularly?"
vs.
"Have you tried exercising regularly?"
 
"Stopped" implies that you used to exercise, "tried" implies that you don't exercise.  It's possible that in some contexts this will even lead to a person being insulted.
B.  Speech acts:  The effect of the message is different from its literal content.  There are 3 parts:
1.  The locutionary act:  The utterance.
2.  The illocutionary act:  What's intended by the speaker.
3.  The perlocutionary act:  The effect.
Consider when I said "Can you open the window?":  L = "Are you able to open the window?," I = "Please open the window," P = someone opens the window.
Speech acts can take numerous forms:
1.  Statement:  "There's a bear behind you."
2.  Command:  "Run!"
3.  Yes/No question:  "Did you know there's a bear behind you?"
4.  Wh- question:  "What's that bear doing in here?"
The form can have an impact on the perlocutionary act.
C.  Conversation:  Most conversation is governed by pragmatic information.  I'll break it down into components.
1.  Structure of conversation:
a.  Opening:  Usually it's a stock remark or question (like "Nice weather we're having") that has a stock reply.  This sets up turn-taking and gets the ball rolling.
b.  Turn-taking:  Three rules (applied in order):
1)  The current speaker can select the next speaker.  This usually amounts to asking someone a question.
2)  If no one is nominated, you can speak up.  This keeps it going, because there's an incentive to jump in.
3)  The speaker can continue (no obligation).  Usually, if there's a gap, someone will get nominated.
c.  Nonverbal turn-yielding behavior:  There are some cues that will signal the end of a turn:
1)  Drawl last syllable.
2)  Terminate hand gestures.
3)  Use stereotyped expressions ("you know," "or something," "but, uh").
The more of these cues you have, the more likely someone is to jump in:  with none, someone speaks 10% of the time; with three, 33%; and with six, 50%.
There are also nonverbal cues that you don't want to stop.  When these are present, there are no attempts to speak.
1)  Keep using hand gestures.
2)  Look away.
2.  Basic rules of conversation:
a.  Quantity:  Make your contribution informative, but not more informative than required.
b.  Quality:  Make your contribution truthful, avoid saying things you know to be false.
c.  Relation:  Your contribution should be related to the topic.
d.  Manner:  Be clear, avoid obscurity, wordiness, ambiguity.
Consider the following excerpt:
 
M:  "Did you hear that Wilfred's seeing a woman tonight?"
F:  "No, does his wife know?"
M:  "Of course.  That's who he's seeing."
 
Which rules are violated to produce the confusion?  How about this:
 
"Harold was in an accident last night.  He had been drinking."
 
What had Harold been drinking?  Why do you come to that conclusion?
3.  Establishing coherence:  Here are some responses to "I just bought a new hat."
 
"Fred likes hamburgers."
"I just bought a new car."
"There's supposed to be a recession."
"My hat's in good shape."
"What color?"
 
Some are acceptable and some aren't.  What governs this?  Your response should intersect a proposition in the topic (there's a code sketch of this idea at the end of this section).  If the sentence is:
 
"John bought a red car in Baltimore yesterday."
 
You can talk about red ("That's a terrible color"), Baltimore ("Oh, I love Baltimore"), buying cars ("My brother just bought a car"), etc.  My response becomes a new topic and a person can respond to a proposition in it.  If there's a lull, someone will usually say something noncommittal to move things along ("Yes, buying a car can be tough").
4.  Social aspects:
a.  Roles:  Different roles govern the language used.  In a lot of languages (like Romanian), the form of "you" is different for formal and informal conversations.  The selection of pronouns generally follows a power differential.  For example, bosses get addressed formally while they address others informally.
There's also a solidarity semantic that can take over.  If you discover you have a common tie, then you might toss the roles.
b.  Situations:  I might address you differently during the lecture vs. in an informal chat after lecture vs. in my office.  The situation will govern the nature of the conversation.
c.  Code-switching:  Changing your language or style depending on the audience.  If you've had a professor who struggles to be hip in lecture, that's code-switching (but it probably doesn't work when you're trying).  Usually, this refers to in-group vs. out-group conversations.  For example, in some parts of Norway people speak a different dialect at home than in public.
We'll compare the information above to the conversations you bring in.
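Here's the code sketch of the "intersect a proposition" idea promised above.  Treat the topic sentence and the response as bundles of propositions, and call the response coherent if it shares at least one concept with the topic.  This is my toy formalization, not a worked-out theory:

def concepts(propositions):
    # Every concept mentioned in a list of (relation, arguments) pairs.
    return {arg for _, args in propositions for arg in args}

def coherent(topic, response):
    # Acceptable if the response touches some concept in the topic.
    return bool(concepts(topic) & concepts(response))

# "John bought a red car in Baltimore yesterday."
topic = [("BUY", ("JOHN", "CAR")), ("RED", ("CAR",)), ("IN", ("BUY", "BALTIMORE"))]

print(coherent(topic, [("LOVE", ("I", "BALTIMORE"))]))      # True
print(coherent(topic, [("LIKE", ("FRED", "HAMBURGERS"))]))  # False

By this test, "Oh, I love Baltimore" is an acceptable reply and "Fred likes hamburgers" isn't, which matches the intuition above.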
 
VII.  Outline of a basic discourse model.  The basic problem in text comprehension is that the meaning of a text is more than the meanings of its sentences.  Somehow, you have to connect information in the sentences into a coherent structure that is the "meaning" of the text.  The Kintsch and van Dijk (1978) model illustrates all of the basic parts of this process.
A.  Overview:  There are four main steps in comprehending texts.  They are:
1.  Turn the text into propositions.
2.  Arrange the propositions into a text base:  An organized representation of the text (but only a local representation, meaning it covers relationships between ideas that are close together).
3.  Use world knowledge to form global concepts (akin to identifying the main ideas).
4.  Form a macrostructure:  The relationships between the units in the text base.
We'll go over the processes that happen in each of these steps.
B.  Turn the text into propositions:  There's a system that is used to do this.  It's consistent, but its basis in reality is suspect.  A proposition can be thought of as an idea unit.  Loosely speaking, it contains a single idea from the text.  For this system, a proposition looks like this:

 (P1  (WANTS JOAN APPLE))

The parts:
1.  P1:  This is the proposition number.  Propositions can be embedded in propositions, as in (P2  (TIME:IN P1 YESTERDAY)).  The numbering system acts as an embedding shorthand.
2.  WANTS:  A relation.  The word is in all caps to indicate that it's a concept, not a word.  So, propositions are supposed to be in the "language of thought," not tied to a particular language.  WANTS could just as easily be AS1295D.
3.  JOAN APPLE:  Arguments.  These are what the relation is about.  Some relations need two arguments, some need just one.
An example of a text and its propositional representation is on the overhead at the end of the notes.
C.  Make a text base:  The propositions are parsed into chunks.  Chunks are units of text that are roughly about the same thing.  Usually, they correspond to the sentences.  This process takes a chunk and tries to connect up the propositions into a single structure that looks like a tree.  There's an example in the back of the notes.  This process has several subprocesses:
1.  Read in a chunk.
2.  Try to connect it to what's in memory.  If it's the first chunk, you can start fresh; otherwise, you try to tie new information in with existing information.
3.  If you can't make a connection, do a memory search to find a related proposition.  This way, you're still making a connection.
4.  If you can't find anything in memory, make an inference to connect the two chunks.
5.  Once you've got a start, relate all of the propositions in the chunk to one another.  Because of the way chunks are made, you're always going to be able to link up all of the propositions in the chunk.
6.  Choose what to retain in memory.  As in all models, short-term memory is limited, and you can't keep everything; usually, you can keep only the top two or three propositions.  This is done using what's called a "leading edge" strategy (there's a code sketch of it at the end of this section).  The steps:
a.  If any proposition you're keeping has an embedded proposition, keep that too.  This prevents situations where you know that Joan wants something, but you don't know what.
b.  Starting at the top, take the most recent proposition from each level until you run out of capacity (it takes the front edge of the tree).
7.  Start with a new chunk (connecting it to the propositions you kept from the previous chunk).
8.  Repeat until you're out of text.
D.  Form global concepts:  You have to know what proposition to use in building a structure from a chunk.  The global concept is the best one to use as the root because everything will eventually relate to it.  This process identifies these concepts.
E.  Make the connections between the chunks.  You fill in the inferences or tie the chunks together at the points of overlap to produce a unified representation of the whole text.
F.  Some additional notes:
1.  The more times a proposition gets carried over in working memory, the easier it will be to remember it later.  This is nice because the topics usually get carried over, and they're also easiest to remember.  That's consistent with people's memory of text.
2.  Readability is characterized by properties of the text and properties of the reader.  For example, if the text is well constructed, but your memory is low, then you'll have a hard time building structures.  Or, if the text is poor, but memory is high, you'll still struggle.
3.  How do they test this model?  They present the passages to the model and look at its memory.  The model's recall of a text is based on how often a proposition was held over in working memory, how related propositions are to one another, how easy it was to form structures, etc.
They also present these passages to human subjects and look at their recall.  If people tend to remember similar propositions in a similar order, that's evidence that the model is using a similar process.
4.  This model has all the parts:
a.  Levels of representation (we talked about this in the last unit).
b.  Limited working memory capacity (coming up).
c.  Strategies to choose what to remember (coming up).
d.  Influences of readers' knowledge (coming up).
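Here's the code sketch of the leading edge strategy promised in step C6.  A chunk's tree is a list of (proposition number, level) pairs, with level 0 at the root, and "embedded" records which propositions are embedded in which.  This layout is mine, not Kintsch and van Dijk's notation:

def leading_edge(tree, embedded, capacity):
    # Keep the most recent proposition at each level, working down from
    # the root (rule b), plus anything embedded in a kept proposition
    # (rule a), until working memory capacity runs out.
    kept = []
    for level in sorted({lvl for _, lvl in tree}):
        recent = max(pid for pid, lvl in tree if lvl == level)  # front edge
        for pid in [recent] + embedded.get(recent, []):
            if pid not in kept and len(kept) < capacity:
                kept.append(pid)
        if len(kept) == capacity:
            break
    return kept

# Toy tree: P4 is the root; P1 and P3 hang off it; P3 embeds P2.
tree = [(4, 0), (1, 1), (3, 1), (2, 2)]
embedded = {3: [2]}
print(leading_edge(tree, embedded, capacity=3))   # [4, 3, 2]

Notice that P1 gets dropped:  P3 is more recent at its level, and the embedded P2 comes along with it.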
 
VIII.  Working memory revisited.  As you've probably noticed, working memory plays a big role in almost every model of comprehension, at every level.  In the model we outlined last time, it was where the local structures were built.  Some new stuff:
A.  Measuring it:  Everyone "knows" that the capacity is 7±2 items.  But there's a problem with this estimate:  it only looks at storage.  WM is a lot more than storage; processing also takes place there.  A better way to measure capacity is to use reading span.  In this task, you read sentences out loud, and you also try to remember the last word in each sentence.  You start with a set of two sentences.  If you can handle that, you go to three, etc.  I can demonstrate that if we have a victim.
Using the new method, span usually tests out between three and four items (although my materials are probably a bit different).
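As a procedure, reading span is just a staircase:  present sets of sentences, score the final-word recall, and increase the set size until the reader fails.  A toy sketch, where recalled_correctly stands in for the human part of the task (everything here is illustrative):

def reading_span(sentence_sets, recalled_correctly):
    # sentence_sets: {set_size: list of sentences}.  Span is the largest
    # set size at which the reader can read aloud AND recall every
    # sentence-final word.
    span = 0
    for size in sorted(sentence_sets):
        if recalled_correctly(sentence_sets[size]):
            span = size      # handled this set size; try a larger one
        else:
            break            # failed; the previous size stands
    return span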
B.  What happens when items are lost from WM?  Originally, people thought they decayed (sort of like rusting, it just happens).  The latest version has a more active explanation for how things leave WM.  When you get an item that you want to include, you have to gather some activation (akin to "mental energy") that you can use to represent it.  Putting something in WM takes mental effort; if you don't work at it, it won't happen.  Unfortunately, the amount of mental energy that you have at your disposal is fixed (when we talk about WM capacity, that's the capacity).
Let's say you're processing this sample of text:
The plate is on the table.
The spoon is left of the plate.
The fork is behind the spoon.
The cup is right of the fork.
Let's further assume that your WM capacity is three things (plus processing load).  (This will make more sense if you follow along with the attached overhead.)  When you get the first proposition, you can give it all of your storage capacity.  When the next proposition comes in, you have to steal some activation to represent it.  So, they both get in, but they're each half as strong as the first one was.  Then the third proposition comes in.  You steal some more activation, and put it in.
When the fourth proposition comes along, you're out of juice.  Now, when you steal activation, something has to go.  Usually, what goes is the oldest piece of information.  In the Kintsch model, this happened when the local structure was built and then you had to cut it down.  The process of deleting nodes was akin to stealing enough activation to do the processing on the next chunk of text.  The leading edge strategy was to take the most recent information at each level (after embedded propositions were selected).  There are other ways that this can be done.
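Here's a toy simulation of that fixed-activation story with the plate/spoon/fork/cup sentences.  The equal-split and oldest-out rules are simplifications I'm making for illustration:

def process(propositions, capacity=3, total_activation=1.0):
    # A fixed pool of activation is shared by at most `capacity` items;
    # adding an item past capacity evicts the oldest one.
    memory = []                        # oldest first
    for prop in propositions:
        if len(memory) == capacity:
            memory.pop(0)              # stealing activation: oldest is lost
        memory.append(prop)
        share = total_activation / len(memory)
        print({p: round(share, 2) for p in memory})

process(["plate on table", "spoon left of plate",
         "fork behind spoon", "cup right of fork"])

The first proposition gets the whole pool; by the time the cup shows up, the plate is gone.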
 
IX.  Strategies for choosing information.  Fletcher (1986) looked at a number of strategies that readers might use to choose propositions to retain.  He did this with a think-aloud method.  In other words, as a person read a text (trying to remember it), they were asked to say out loud everything that they were thinking.  Here's a text like the ones he used; can I get a volunteer to try to remember it?  (See overhead for text.)  We'll also need a scribe to take notes on their responses.
What do we mean by strategies?  What you choose is governed by a complex process that takes into account prior knowledge of an area, the goals for reading, decoding ability, memory capacity, etc.  Under different conditions, you choose different kinds of information.  To see this in action, let's get our volunteer to read another sample of text (again, recording their thoughts).
As you might have noticed, different things are done.  Fletcher identified four local strategies and four global strategies.  I'm planning to focus on the local ones (building local structures):
A.  Recency:  A person might retain in their tree as many of the most recent propositions as they have room for.
B.  Sentence topic:  A topic is:
1.  First person or object mentioned.
2.  Referred to using a pronoun, a definite article, or a proper name.
3.  What the sentence is about.
The basic strategy is to hold the topic and propositions related to it.
C.  Leading edge:  Hold the most recent proposition at each level in the tree.  If you look at a graph of a tree, it's like taking the right edge of the graph (hence the name).
D.  Frequency:  Take all of the most frequent propositions that you can hold.
Fletcher found from looking at what people choose to rehearse that sentence topic = frequency > leading edge > recency.  This tells us that people are sensitive to high-level information when choosing what to represent.
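For comparison, here are toy versions of the recency and frequency strategies over a list of propositions (a leading edge sketch is back in the discourse-model section).  The tuple format is mine; Fletcher's readers worked with real texts:

from collections import Counter

def recency(props, capacity):
    # Keep as many of the most recently read propositions as fit.
    return props[-capacity:]

def frequency(props, capacity):
    # Keep the propositions whose concepts are mentioned most often.
    counts = Counter(arg for _, args in props for arg in args)
    return sorted(props, key=lambda p: sum(counts[a] for a in p[1]),
                  reverse=True)[:capacity]

props = [("EXISTS", ("PROFESSOR",)), ("EXISTS", ("LECTURE",)),
         ("BORING", ("LECTURE",)), ("DELIVER", ("PROFESSOR", "LECTURE"))]
print(recency(props, 2))     # the last two propositions read
print(frequency(props, 2))   # the propositions about the busiest concepts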
 
X.  Readers' knowledge.  You have a lot of world knowledge that you can bring to bear.  One type of knowledge is script knowledge.  For a lot of overlearned event sequences, you know what typically happens.  Imagine going to the doctor's office.  Write down what happens when you visit the doctor.  (Some people will read theirs so we can compare.)  When you're trying to understand a story, one thing you're doing is matching it to a script.  If you know a lot about doctors, you can fill in details the story leaves out to improve comprehension.  You can also use the script to help you remember the story later.
A.  A demonstration of the influence of top-down knowledge on comprehension:  the "hocked gems" passage.
B.  A problem is:  If all you're doing is remembering the script, how do you tell individual events apart?  An easy answer is that you usually don't.  What did you have for breakfast a year ago today?  Most people have no idea how to answer this question.  In experiments, people will frequently "remember" things that are typical of a script, but not in the text they read.  However, even though people make mistakes, they can usually remember some details.  McKoon and Ratcliff propose Memory Organization Packets (MOPs) to explain this.
1.  What is a MOP?  It's a dynamically constructed set of scenes directed towards a goal.  A scene is something like ordering in a restaurant.  Each MOP is constructed on the fly during reading.
2.  Where do the scenes come from?  They're based on scripts.  The idea is that you borrow from the script whatever is available.  You remember deviations by marking them in the script event and then adding additional propositions.  For example, if you are reading about eating at a restaurant and you want to pay with a check, you tag the "pay after you eat" part of the script.  The tag is a pointer to a proposition like (PAY WITH CHECK).  The process is like opening a file on an event and adding to it as the event goes on.  When you're through, you close the file, and it goes into episodic memory.  (There's a code sketch of this tagging idea after this list.)
3.  Why don't you remember them after a delay?  It's the same problem as anything else in episodic memory.  You get too many instances of similar things coded in a very weak way.  As you keep piling on experiences with a situation, it gets harder and harder to tell them apart.  The main factor in the forgetting is interference from other experiences, not time itself.
4.  Let's assume you're trying to test this theory; what effects should MOPs have on comprehension?  What can we detect?
a.  Events from two stories with the same MOP should prime one another if they're consistent with the script actions.  So, if two stories are about going to the beach, and in one a person rubs lotion on her skin while in the other she splashes oil on her skin, the two events will be primed equally.
b.  Events from two stories with the same MOP will not prime one another if they're not consistent with the script actions.  If two people go to the beach and one looks up at the swallows nesting in the cliffs, that event shouldn't be primed by the other story, and it's not.
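Here's the tagging sketch promised above.  A MOP borrows scenes from a script and records deviations as tags pointing at extra propositions; the class and names are mine, purely illustrative:

restaurant_script = ["enter", "order", "eat", "pay after you eat", "leave"]

class MOP:
    # A dynamically constructed set of scenes, borrowed from a script.
    def __init__(self, script):
        self.scenes = list(script)
        self.tags = {}               # script event -> deviating propositions

    def tag(self, event, proposition):
        # Mark a script event and point it at the deviation.
        self.tags.setdefault(event, []).append(proposition)

dinner = MOP(restaurant_script)
dinner.tag("pay after you eat", ("PAY", ("WITH", "CHECK")))
# Closing the file: the finished MOP goes into episodic memory.
print(dinner.tags)   # {'pay after you eat': [('PAY', ('WITH', 'CHECK'))]}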
C.  It's not just scripts that can provide the necessary knowledge to make a text comprehensible.  Read this passage:  (overhead)
Now read it again with this picture:  (overhead)
 
XI.  Mental models.  So far, our readers are extracting propositions, building local structures, identifying topics, and making global structures.  Is there anything else going on?  Yes.  In addition to all of this, readers are forming mental models of the events in a text.  A mental model is a representation of what the text is about, not of the text itself.  So, it goes beyond propositions (a representation of the text).
A.  Where is it?  Remember that we talked about divisions in working memory, and two main branches.  You have an articulatory loop and a visuo-spatial sketchpad.  The loop holds auditory information (either from hearing speech or from recoding written text).  The sketchpad is for images, processing pictures, and doing spatial things like moving your eyes across the page and processing visual features of text.  It's not so much a place as a kind of mental energy devoted to spatial processing.  All of the stuff we've discussed so far goes on in the loop; the mental model is in the sketchpad.
B.  What do I mean by a model?  Read these sentences about turtles and logs.  Here are the situations.  If you read the first one and I give you the second one to verify, what will happen?  Now, look at these two.  Here are the situations.  If I give you the first one and ask you to verify the second one, what will happen?
The difference in terms of propositions is very slight.  In fact, the difference that distinguishes the sentences in the two situations is the same.  So, there must be some additional level of representation that explains this.  That's the mental model.
C.  Can we tie it all together in a single model of processing?  Let's think about that for a minute.
 