Schacter, D. L., Chiu, C.-Y. P., & Ochsner, K. N. (1993). Implicit memory: A selective review. Annual Review of Neuroscience, 16, 159–182.
Memory is not a unitary phenomenon. Some memory processes are driven by conscious effort to bring to mind previously encountered information (declarative memory). Similarly, we might consciously try to recreate previous experiences, recalling where we were, what we were feeling, what we were sensing, etc. (episodic memory). These are explicit memory processes, which I think of as memory of. (I have a memory of an episode in my life, or an explicit memory of a list that I learned.)
Other times, though, memory is apparent without conscious effort, and without any deliberate attempt at either encoding or retrieval. This is memory for: our knowledge of how to do things, in contrast with the propositional knowledge above (“knowledge that”).
Visual word priming. Tasks include: Stem completion (for____), fragment completion (a__a__in), identification of words at very brief (35 ms) exposure, and lexical decision (word vs. nonword).
In normal subjects, there is evidence of dissociation at memory encoding: processes that strengthen explicit memory (e.g., depth of processing) have no effect on implicit memory.
This may be one indication that priming effects are limited to surface features. Further evidence: priming effects are more pronounced after tasks that ask participants to rate printed words on the readability of the font rather than on the pleasantness of the word. Priming effects are longer lived when the identical stimulus is presented at later testing dates. And priming does not appear cross-modally (this makes me a little sad as a gesture researcher).
Further dissociation may be seen in the brain regions involved: implicit memory may be related to a decrease in activation in the right extrastriate cortex, while explicit memory involves the hippocampus quite specifically.
In memory-impaired populations, there is more evidence of dissociation. K.C. has essentially no explicit memory, but still shows priming effects. Such priming effects may extend to novel as well as previously learned targets, though the evidence is muddled. In particular, differences between normal and impaired populations (or indeed between young and old) may be less attributable to implicit memory deficits in the impaired (or aged) population, and more to the use of explicit memory strategies by normal (or young) participants responding to task demands.
Visual object priming. Tasks include: picture naming, picture fragment completion, object decision (objects vs. non-objects), and dot pattern identification.
In normal populations, specificity is again an issue: priming effects are strongest for identical stimulus presentations, weaker for a different view of the same stimulus, and weaker still for different exemplars that would be labeled with the same word. Presenting a category label does not serve as a prime for images of objects within that category.
More dissociation: scaling the size of an object, or reflecting its visual representation, degrades explicit memory for the object, but implicit memory appears insensitive to these manipulations (though it does suffer when rotation is changed). Additionally, explicit memory of visual stimuli benefits from generating semantic elaborations; implicit memory does not.
Auditory word priming. Tasks include: perceptual identification (identifying words in noise) and auditory stem completion. Again, we see little evidence of cross-modal priming. Dissociation between memory systems is supported by evidence that semantic encoding of auditory stimuli helps explicit memory, but not implicit memory.
Does dissociable performance on implicit and explicit memory tasks point to separate memory systems, or could a unified-process theory account for both types of phenomena? The authors argue for different neurological systems, particularly on the evidence that organic damage to the hippocampus disrupts access to semantic knowledge (necessary for explicit memory) but not knowledge of form and structure (which contributes to priming effects).
They propose a presemantic perceptual representation system (PRS) with domain-specific subsystems: a “visual word form subsystem” for printed words (in extrastriate cortex), a “structural description system” for part-object relations (inferior temporal regions), and an “auditory word form system” for phonological information (perisylvian cortex).
Skill learning and implicit memory
Skill learning is another form of implicit memory, and evidence from Alzheimer’s disease and Huntington’s disease patients might indicate that the two are dissociable: AD patients show intact motor skill learning but impaired priming, while HD patients show the opposite pattern.
Some of my questions
- “Squire et al. (1992) found that priming was associated with a reduction of blood flow in the right extrastriate cortex. They also found some evidence for hippocampal activation during priming, but suggested that this effect was attributable to test awareness and subjects’ use of explicit retrieval processes under the specific conditions of this experiment. Explicit memory performance (cued recall), however, was associated with marked activation in the right hippocampus.” What do we know about the function of the right extrastriate cortex that might explain why a reduction in blood flow here is associated with priming?
- When I make judgments about the object or non-object status of a stimulus, or when I am encoding a pattern of dots, how much of this is truly non-verbal? Is part of my recognition of an object aided by my having a lexical entry (or two or twelve) for that object? When I encode a pattern of dots, am I encoding the physical sensation of blue-green-blue-blue, or a little phonological script of “blue-green-blue-blue”?
- Is the “presemantic perceptual representation system” necessarily “pre”-semantic? Why not “peri-semantic”? “Pre” seems to imply a serial process from features to meaning. Could these things be dissociable enough that they operate in parallel? This may seem like a silly question, but I ask it earnestly: do I have to have recognized something as an object (or a typed word, or a bit of speech), as opposed to a nonsense object (or string of letters, or nonsense syllables), before I can even begin to imbue it with meaning? Is the answer different for encountering words/objects/whatever on a computer screen in a stark and empty laboratory than in richer, more typical, more meaningful environments?