Psicothema, 1999. Vol. 11 (nº 4), 747-767
Memory in touch
Susanna Millar
Oxford University
Models of short-term memory have to take into account that touch is not a tightly organised single modality. Touch, without vision or other external cues, depends on information from touch, movement and from body-centred (posture) cues. These inputs vary with the size and types of object, and with task demands. It is argued that the convergence and overlap of inputs from different sources is crucial to parsimonious organisation for memory and recall. «Modality-specific» input conditions thus form an integral part of the available information, which changes, and is changed by longer-term information. Three general principles apply: (i) Parsimony of coding modality-specific inputs for recognition and recall; (ii) links with output systems which can «rehearse» information; and (iii) longer-term familiarity with procedures and types of coding. The introduction and first section discuss these points in relation to models for hearing and vision. The third section cites findings on modality-specific tactual memory, and explains tactual memory spans in terms of paucity versus redundancy of reference information to organise inputs spatially. Movements are considered next as inputs and as spatially organised outputs that can provide haptic rehearsal. The final section argues that intersensory modality-specific processes and longer-term memory need to be included as interrelated systems in STM models in order to account for memory in touch.
Does the modality in which information is presented influence how it is remembered? Information in human memory tends to be organised predominantly in either verbal or spatial form. Interestingly, each is connected with a particular sense modality: space with vision, language with hearing. Touch does not have such a well-defined link with a particular form of knowledge. Feelings of texture, temperature, pressure and pain, in which touch is specialised, belong more to the «qualia» of experience - as do colour in vision, or pitch in hearing - than to specifically linguistic or spatial forms of organising information. The links between vision and space, hearing and language are not, of course, exclusive. Visual icons, scripts, signs and gestures can carry verbal information. Distance, direction, and even shape can be remembered from hearing sound patterns and verbal descriptions. Touch can convey both forms of information.
Is there, therefore, any need to include the perceptual modality of inputs in models of memory? I shall argue that there is. The «medium» is not the «message». But the perceptual conditions in which inputs occur, and the links between perception and output systems, are an integral part of the type and amount of information that is available, and how that information is processed and remembered.
The general framework I prefer for describing information processing and memory uses the metaphor of dynamic networks of interrelated and converging processes (Millar, 1994, 1997). It focuses on active processing of the information that is available to the organism from external and internal sources. The contrast is with metaphors that imply a static architecture. Such metaphors underplay the continual, reciprocal influence of previous and current information in processing, though that is obvious even in order effects in experimental presentations. Architectural metaphors suggest a rigid hierarchical («bottom-up» or «top-down») organisation. Separate inputs from the modalities are either integrated at the top level by a separate translation mechanism, or simply subserve a top-level abstract description that obtains for all inputs. Metaphors that imply active networks of interacting and converging processes accord better with current findings on neural connections in the constantly active brain. Thus inputs from multisensory sources seem to converge in varying combinations in a number of brain areas. Above all, the metaphor of active, converging processes is needed for touch.
Touch is not a single modality, nor even a single perceptual system. What we call «touch» refers to combinations of a number of different, converging sources of information that include inputs from some skin receptors to greater or lesser extents. The notion of perceptual systems that receive inputs from several sources is true also for other perceptual systems (Gibson, 1966). But in touch, the combinations of inputs vary with the type, size, meaning and familiarity of objects, as well as with current information, and importantly also, with task demands. The questions we ask about memory in touch thus have to be seen in the context of the combinations of disparate inputs that need to converge for recognition and short-term recall in different haptic conditions.
Because differences in informational conditions are particularly important in touch, the present paper focuses on short-term memory for raised line configurations, rather than on object recognition. The neural pathways in identifying objects (what?) and locations (where?) are not precisely the same (e.g. Anderson, 1987; Schneider, 1967; Weiskrantz, 1986). Recognising familiar manipulable objects that have well-known names taps into a wide range of semantic knowledge and meaningful verbal descriptions of their everyday use and even their visual appearance for subjects who have the experience. Prior knowledge has to be considered in examining short-term memory. But it is more circumscribed, and somewhat differently focused, in short-term memory tasks that demand recognition or recall of relatively unfamiliar tactual raised line and dot patterns. The present paper centres on the information that is available to subjects in such haptic conditions.
The paper is divided into several sections. The first section briefly describes some relevant theoretical models of short-term memory. Since they are predominantly based on findings with visual and verbal materials, these are considered first to establish the context in which findings on memory for touch must be examined. The next two sections survey findings from studies on memory for what is generally called «touch» although it concerns information from a number of different sources that include touch. The final section considers the findings in relation to intersensory information, and looks at implications of the findings for theoretical descriptions.
Some Theoretical Considerations
Models of memory have often relegated the modality of inputs to a very minor role. Some equate modality-specific information solely with brief persistence of stimulation after its offset, which has no function in memory except that of initial registration. Processing is carried out by a short-term memory system that is strictly limited in attentional capacity. The number of serial items to which it can attend depends crucially on recoding the inputs into more economical forms (Miller, 1956). A favoured description for many years was of fast-fading traces of the stimulation that must be translated quickly into verbal form, in order to be maintained in limited-capacity short-term memory, prior to being stored in longer-term memory (e.g. Atkinson & Shiffrin, 1968).
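As a concrete illustration of what «recoding into more economical forms» means, the sketch below is my own example in the spirit of Miller's classic binary-to-octal demonstration, not code or data from the paper: a string of eighteen binary digits can be held as six octal «chunks» without any loss of information.

```python
# Illustrative sketch of Miller-style recoding (not from the paper): a long
# binary string is grouped into 3-digit chunks and each chunk is renamed as a
# single octal digit, so the same information occupies far fewer memory items.
def recode_binary_to_octal(bits: str) -> str:
    padded = bits.zfill(-(-len(bits) // 3) * 3)          # left-pad to a multiple of 3
    return "".join(str(int(padded[i:i + 3], 2)) for i in range(0, len(padded), 3))

raw = "101000100111001110"                  # 18 items if held digit by digit
chunked = recode_binary_to_octal(raw)       # "504716": the same content as 6 items
print(raw, "->", chunked, f"({len(raw)} items -> {len(chunked)} items)")
```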
The most influential recent model is a «working memory» system which has several components (Baddeley, 1990). It assumes a central executive, a decision-making process that allocates attentional resources. A speech-based articulatory loop refreshes fast-fading phonological (heard) inputs through rehearsal (Baddeley, 1986, 1990; Baddeley & Hitch, 1974). A «visuo-spatial scratchpad» accounts for findings which show that memory for spatial location, direction and shape is disrupted more by conflicting visuo-spatial than by verbal information, and for evidence that such non-verbal information can be maintained actively in memory over short delays (e.g. Atwood, 1971; Brooks, 1968; Hitch & Halliday, 1983; Hitch et al, 1988; Kosslyn, 1980; Kosslyn et al, 1978; Millar, 1972a; Segal & Fusella, 1970; Shepard & Cooper, 1982; Shepard & Feng, 1972).
It is noteworthy that the main burden of maintaining phonological information in temporary memory is assigned to speech-based output activities, rather than to input coding (e.g. Baddeley, 1986; 1990; Broadbent, 1984; Conrad, 1964; Monsell, 1987). There is ample evidence that concomitant speech, including mouthing of an irrelevant syllable, disrupts recall for heard speech. It has been much more difficult to demonstrate temporary memory for modality-specific input sounds. Better recall of the last item in a heard than viewed series (e.g. Waugh & Norman, 1965) is sometimes attributed to the persistence of the last sound in uncoded («raw») «echoic» form. The notion of pre-categorical acoustic storage subsumes suffix effects in which an irrelevant speech sound at the end of an auditory sequence affects recall of the whole series. However, phonological effects in memory have also been found when speech output is made difficult, or is impossible (e.g. Besner, 1987; Vallar & Baddeley, 1982). Articulation rate is related to the size of immediate memory spans (e.g. Baddeley, 1990; Hitch & Halliday, 1983; Hitch et al, 1988, Hulme & Tordoff, 1989). But age differences in span are not eliminated by equating articulation rates, as would be expected if speech output were the only factor (Henry & Millar, 1991, 1993). Familiarity with input sounds has significant effects on the size of immediate memory spans with age (Henry & Millar, 1991). Very young children tend not to use verbal rehearsal strategies (e.g. Conrad, 1971), but their memory span for sounds increases nevertheless to some extent. The contribution of longer-term memory to input coding for temporary memory is, therefore, a further factor that has to be included in theoretical models (Henry & Millar, 1993). Familiarity with inputs, as well as links with output systems thus need to be considered in examining findings on touch.
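A simple way to see the force of the articulation-rate argument above is to write the rehearsal account as a formula. The sketch below is purely illustrative; the linear form, the two-second rehearsal window and the numbers are my assumptions, not estimates from the cited studies. If rehearsal rate were the only factor, equating articulation rate should equate spans, whereas an additional contribution from long-term familiarity with the items predicts residual differences of the kind reported.

```python
# Toy model, not fitted to any data: span as rehearsal rate times a rehearsal
# window, plus a separate contribution from long-term familiarity with the items.
def predicted_span(rate_items_per_s: float,
                   rehearsal_window_s: float = 2.0,
                   familiarity_items: float = 0.0) -> float:
    return rate_items_per_s * rehearsal_window_s + familiarity_items

# Two hypothetical groups equated on articulation rate (2 items/s) but differing
# in familiarity with the item names still differ in predicted span.
print(predicted_span(2.0, familiarity_items=1.0))   # 5.0 items
print(predicted_span(2.0, familiarity_items=0.0))   # 4.0 items
```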
Experimental evidence for temporary memory for visual inputs has been even more difficult to establish than for sounds. Vision does not seem to have such an obvious closely-knit output system as vocalisation for hearing, though gestures in pointing, reaching, drawing and locomotion constitute such output systems. However, the «perception-action» models, deriving from Gibson (1979), which stress input-output links, use these to explain perceptual organisation without recourse to memory processes. In studies of visual memory, on the other hand, the main controversy centred on the question whether visual memory is visual, or spatial and «abstract» in character. The visual character of introspectively vivid visuo-spatial imagery seems obvious to those who experience it. Moreover, errors in «reading off» from very vivid or «eidetic» imagery show that it involves memory rather than persistence of «raw» sensory stimulation (Haber & Haber, 1964). Experimental findings have also been interpreted in analogy with a transitory quasi-«pictorial» visual buffer that functions as if in «a medium» of coordinated space (e.g. Kosslyn, 1980, 1981). But experiments have not always distinguished between visual and spatial aspects of the materials. Discrepant visual inputs that interfere with memory for visual presentations may do so because of their spatial form, and not because they are visual in character (e.g. Logie, 1986; Logie, Zucco & Baddeley, 1990). Thus, evidence that simple flashes of light have relatively little effect on memory for visuo-spatial information has suggested that memory depends on «abstract» spatial representations. The notion implies that representations in memory are either not specifically visual in character (introspective reports are mistaken), or that their visual character is functionally irrelevant, or both.
Nevertheless, the total burden of current evidence suggests that temporary memory for modality-specific visual aspects, as well as for their spatial organisation, may be assumed. The fact that electrical stimulation of visual areas of the brain produces visual images (e.g. Dobelle, Mladejovsky & Girvin, 1974), and cognate evidence for the involvement of the visual cortex in attempts to form visual images (Goldenberg et al, 1990), suggests a physiological basis for «visual» aspects of introspectively reported imagery. Whether or not that is sufficient to establish that the visual character of reported visual imagery is biologically useful, is a moot point. But further experimental studies with children and adults also strengthen evidence for temporary memory for specifically visual aspects, as well as for spatial information (e.g. Logie & Baddeley, 1990; Logie & Pearson, 1997). Thus, recognition as well as recall for visual patterns was better than memory for sequences of movements to spatial targets, and the difference varied with age (Logie & Pearson, 1997). Taken together, the findings suggest a somewhat more complex picture of «working memory» than the original notion of short-term memory as a single system with one limited attention bottleneck.
An alternative has been to assume completely distributed storage systems with multiple, separate «processing modules», each with its own temporary storage capacity (e.g. Monsell, 1984). In principle, modality-specific aspects of memory could easily be accommodated in such a system, though that was not the reason for the notion. The extreme version assumes quite separate «modules» for different cognitive processes (Fodor, 1983). It was originally taken up with great alacrity, because the considerable functional specialisation of different areas of the brain seemed to accord better with descriptions in terms of separate «modules» than with the idea of one over-arching cognitive system. The evidence for regional cerebral specialisation of functions is not in doubt. However, the physiological and psychophysiological evidence neither requires, nor is actually compatible with the assumption that there are only very limited interactions between different brain regions (e.g. Squire, 1987).
It can be argued, on the contrary, that the findings on specialisation of functions are better described in terms of the convergence, in different combinations for different regions, of processing paths or activities arising from specialised analyses. A more dynamic picture of this kind seems to be required particularly to account for evidence on memory processes in touch.
Touch has been comparatively neglected in studies of psychology. The issues that have been considered here so far were raised predominantly by findings with visual and verbal materials. How do findings on touch fit in with these? Useful general theoretical models should account for findings on touch as well as on vision and hearing. But it is not sufficient to give theoretical descriptions only at the most general level. The question is what forms of information can be assumed to be available for processing in tactual conditions, and what, if any, effects these have on memory.
The point is that «touch» is really shorthand for intersensory processing of information from a number of different sources. The skin receptors convey information about texture, pressure, temperature and pain, as well as light touch, and the inputs occur in various combinations (e.g. Katz, 1925). Moreover, even passive touch can depend on concomitant input from proprioceptive sources. Single unit recording has shown that discrimination of round and rectangular objects held in the monkey's hand depends on converging inputs from touch receptors in the palm and proprioceptive inputs from the posture of the finger joints (Sakata & Iwamura, 1978). The organism quickly adapts to passive touch or pressure. Even passive perception usually needs intermittent, necessarily sequential, stimulation. Movement is crucial for active tactual exploration to gain information about objects and environments (e.g. Gibson, 1962; Revesz, 1950). The term «haptics» was used to indicate that proprioceptive and kinaesthetic inputs function together with touch in exploring objects (Revesz, 1950). Gibson (1966) made the important point that the modalities of vision and hearing, as well as touch, are perceptual systems which depend on several sources of information.
However, in touch or haptics the combination of inputs from different sources does not seem to operate as an inherently tightly organised single perceptual system. Exploratory skill, experience with the stimulus objects and with tasks, as well as physical characteristics of stimulus objects determine haptic movements and performance (e.g. Appelle, 1991; Berla & Butterfield, 1977; Davidson, 1972). The combinations of converging information, from touch, finger, hand and arm movements, but also proprioceptive inputs from hand and/or body postures, differ not only with task demands, but also with the size, depth and composition of objects in touch (Millar, 1981b, 1994, 1997).
Evidence on whether, and if so how, the various modality-specific aspects of haptic task conditions affect memory is considered in the next section in relation to temporary or «working» memory.
Short-term tactual memory
Evidence for modality-specific tactile short-term memory was found in a study which demonstrated a tactile suffix effect (Watkins & Watkins, 1974). Recall of the serial position in which fingers of both hands had been touched was disrupted by a stroke across two fingers at the end of a series. The finding was interpreted as evidence for capacity-limited tactile memory, analogous to suffix effects in auditory memory (see earlier). But the finding raises similar questions as demonstrations for visuo-spatial inputs. Thus, it is possible to argue that recall of the sequence in which fingers had been touched depended on coding the spatial location of the fingers that had been touched. In principle, finger locations can be coded spatially by reference to body-centred frames, in the absence of vision, and even if covert visuo-spatial strategies were not being used. Memory may, therefore, have depended on the spatial organisation of the tactual inputs, rather than on memory for «feels».
Millar (1975 a) showed that modality-specific tactual effects could be separated experimentally from verbal recoding of the inputs. Recall spans were compared for lists of braille patterns which were either similar tactually, or had similar-sounding names, or were dissimilar on both counts. Subjects were congenitally totally blind children at different stages of learning braille. They felt the patterns sequentially in each list. The task was to point to the serial position that a subsequently presented test pattern had occupied in the memory series. Results for the tactually similar lists were completely the opposite of those for phonologically similar lists. As expected from Conrad’s (e.g. 1964, 1971) work with nameable visual items, Millar (1975 a) found that recall spans for patterns with phonologically confusable names were significantly impaired compared to patterns with dissimilar names, showing that the items had been recoded verbally. The difference related significantly, not only to the size of memory spans on control lists, but also to pre-test naming speeds for the items. Subjects with faster pre-test naming latencies showed larger recall spans and were more impaired by phonological similarity. By contrast, tactual similarity effects were associated with small recall spans, and related inversely to pre-test naming. Tactual similarity impaired recall more for subjects with smaller recall spans who showed little or no phonological similarity effects, and were either slow on pre-test naming or could not name the items on pre-test.
Evidence that tactual similarity affects the size of recall spans suggests temporary memory for tactual aspects of the items. The study has been described in some detail again, because it is possible to rule out explanations in terms of visual, as well as phonological strategies, since the subjects were congenitally totally blind. But that does not mean that tactual memory effects are confined to blind children. Blindfolded sighted young children were tested on recall spans for lists of tactually presented objects. The object series were again either similar in feel or had similar names (Millar, 1975 b). As for the blind, tactual similarity, but not phonological similarity reduced the recall spans of children with small spans in control conditions, while subjects who achieved large spans in control conditions were significantly affected by phonological and not by tactual similarity in the memory lists. Several subsequent studies also showed that the findings could not be attributed to individual difference factors, such as age (Millar; 1978 a,b). Thus, the same children who had recall spans of six or more tactual items that they could name, and improved further when the items were grouped, produced recall spans of only two to three items for tactual nonsense shapes. If anything, grouping tactual nonsense shapes had the opposite effect. The findings thus suggest that modality-specific tactual memory must be assumed, but it typically consists of only two to three tactual items.
Prima facie, there seems to be no reason why memory spans for tactual patterns should be worse than for the same patterns in vision, if the tactual patterns are coded spatially as global shapes. Evidence for spatial coding comes from studies of brain insults in specialised cerebral regions that are known to be important in visuo-spatial organisation (e.g. right hemisphere, post-parietal area). Such infarcts also disrupt temporary memory for nonverbal tactual inputs in spatial tasks, although it is not always clear whether the loss is primarily sensory or attentional (e.g. Stein, 1991). But there is good evidence that spatial tasks involve a number of cortical and subcortical regions of the brain. The involvement of different regions (e.g. hippocampal, sensori-motor, somatosensory, pre-frontal, post-parietal, cerebellar, occipital and temporal) seems to depend on the combination of demands that tasks make on spatial, verbal and cognitive and memory skills, and on biomechanical constraints that particular conditions involve. Spatial tasks generally show greater involvement of the right hemisphere, whether the stimulus materials are visual or tactile. However, it is not possible to make the reverse inference, and to conclude that a left-hand advantage necessarily involves right hemisphere/spatial processing. Similarly, without additional evidence and adequate task analysis, a right hand advantage does not necessarily imply left hemisphere/verbal processing. Thus, braille studies have reported left hand advantages, right hand advantages, and equal performance by both hands. Not surprisingly, the studies differ widely in the type of braille task that was being used, and in the experience and skill of the subjects who took part. Moreover, some apparently spatial tasks, such as aiming, have produced right hand advantages (Millar, 1994, for review). To make sense, stimulus conditions, task demands, and subjects’ experience with exploratory strategies and materials have to be taken into account (e.g. Wilkinson & Carr, 1987).
There has long been evidence that memory for unfamiliar tactual shapes is much less efficient than memory for the same unfamiliar shapes in vision (e.g. Berla & Butterfield, 1977; Gilson & Baddeley, 1969; Goodnow, 1971; Millar, 1971a, 1977a, 1978b, 1990, 1991). Such assertions are often misunderstood. It should be stressed at once, therefore, that they do not, of course, call into question that shape «can» be perceived by touch. That has long been established (e.g. Gibson, 1962; Katz, 1925), and requires no further research. Nor should evidence of poor tactual recognition be taken to mean that memory must necessarily be poor when the inputs are tactual. For instance, tactual recognition of familiar three-dimensional objects is often very good (e.g. Hatwell, 1978; Katz, 1925; Klatzky, Lederman & Metzger, 1985; Weber, 1834, transl. 1978). Levels of efficiency, whether high or low, are of theoretical interest only because they raise questions about the factors that produce differences. The question here is thus about the informational conditions that underlie relatively small spans and poor memory for tactual inputs.
Thus, it is relevant that poorer tactual recognition is more often found for unfamiliar two-dimensional raised line and raised-dot patterns, early in learning (e.g. Goodnow, 1971; Hatwell, 1978; Lederman, Klatzky, Chataway & Summers, 1990; Millar, 1975a,b, 1977b, 1978a, 1985a, 1990, 1991), than for familiar objects (see earlier) or for three-dimensional shapes (Davidson, Barnes & Mullen, 1974; Klatzky et al, 1985; Millar, 1974; Shimizu, Saida, & Shimura, 1993).
The difference is not merely that solid forms, as such, necessarily provide more general information than outline shapes, as has sometimes been assumed. Many studies of 3-D shapes use familiar objects. For the recognition of such objects, shape (e .g. long versus round) is merely one among many possible cues, from temperature to resilience to pressure, to the use, name and contextual meaning of the object (see earlier). The important advantage for recognising the shape of unfamiliar 3-D versus 2-D forms is that 3-D shapes potentially afford more reference cues for spatial coding. Thus both hands are usually used to manipulate small three-dimensional forms. The two hands can act as spatial anchors and reference frames in relation to each other to locate component features. For large, stationary three-dimensional objects that are explored by hand and arm movements, contour features can be located and spatially organised by reference to body-centred reference frames. Two-dimensional raised line configurations are usually explored by one finger, and, in particular if they are small, are difficult to relate to body-centred frames.
We actually have sufficient evidence from past findings to be able to specify the main types of information that can vary in tasks that demand tactual memory for shape or form. Previous experience, both with and without vision, affects performance significantly. Modes of exploration and movements vary with familiarity, and with the type, size, depth and composition of stimuli. These, and task conditions are among the most important variables in haptic tasks (Davidson, 1972, 1974; Davidson et al, 1974; Locher & Simmons, 1978; Millar, 1981, 1988a; Simmons & Locher, 1979). A crucial factor is the type, amount and convergence of reference information that is available from internal and external sources, since it determines the spatial organisation of shapes in haptic conditions (Berthoz, 1991; Millar 1981a, 1988, 1994; Paillard, 1991).
Thus, the informational conditions in memory tasks for unfamiliar tactual raised-line and raised-dot patterns are typically characterised by a paucity of the reference cues that are needed for the patterns to be spatially organised as global shapes (Millar, 1978 a, 1988a, 1994, 1997).
Braille patterns are an important example. The patterns lack salient features, because all characters derive from a single, small (6.2 x 2 mm) matrix of six raised dots. The small size produces problems of acuity. But the principal difficulty does not lie in distinguishing patterns from each other. Discriminating characters is fairly easy even for the inexperienced (Katz, 1925; Millar, 1977a,b). The main problem lies in the paucity of reference cues for coding the patterns as global shapes. The lack of redundancy means that there are no salient features in the patterns that relate easily to each other. That makes it difficult to code the pattern as a distinctive global spatial configuration. It is also difficult to use self-referent frames to code the pattern as a global configuration, because the components are also too small to be related to body-centred reference cues to determine their position in the pattern. In principle, the tip of the exploring finger could act as a frame in relation to which dot locations could be determined. However, that requires some experience. Inexperienced people tend to «rub» over the dots too unsystematically to benefit from that. External reference cues are typically lacking in blind conditions, unless they are deliberately sought. This is in contrast to visual conditions, which typically provide concomitant reference, anchor, and updating cues which organise visual patterns, including braille shapes, automatically as distinctive global configurations. The efficiency of visual shape recognition is reduced to the level of touch when the field of view is restricted so that simultaneous attention to the components of a pattern is no longer possible (e.g. Loomis & Klatzky, 1991). In touch, reference cues, for coding patterns as global shapes, require exploration and movement. Effective exploratory strategies are learned with experience (e.g. Berla & Butterfield, 1977).
It is thus possible to explain the evidence for small tactual memory spans and poor recognition of unfamiliar tactual shapes, as a direct consequence of the paucity of reference information in tactual conditions, because that makes it difficult for inputs to be spatially organised in terms of global configurations (Millar, 1978 a). Global shape can be regarded as an extremely economical organisation of inputs to the system. Coding in terms of shape can thus perform the important function of information reduction. The explanation harks back to Miller’s (1956) contention that immediate memory spans are limited, and that inputs require recoding in economical form, if capacity is not to be exceeded.
Evidence that, early in learning, small raised dot patterns tend to be coded by texture or dot-density cues, rather than in terms of their global outline shape, has been reviewed extensively before (e.g. Millar, 1978a, 1981a, 1997). Briefly, the findings come from a number of studies that used convergent methods to test for shape coding in touch. Thus, generalising braille characters to enlarged forms was found to depend on experience with the characters. Young beginners needed training to recognise enlarged forms of braille characters that they had learned to name. Children with more experience generalised to enlarged forms without error, although it took them longer to recognise letters in enlarged form than in the original format (Millar, 1977 a). Errors consisting of missed dots are the most prevalent (Nolan & Kederis, 1969). Outline shape is a poor cue for braille letter recognition since several letters share the same outline. Priming patterns with connected outline shapes produced poorer rather than better recognition, and errors in recognising and in reproducing patterns were found to depend on failures to locate the position of constituent dots accurately, rather than on confusing outline shapes (Millar, 1978a, 1985a). Blindfolded sighted children found it very easy to discriminate between braille characters that they had never felt before. But their drawings of four patterns that they learned subsequently showed that they had no idea of the shapes of the characters, suggesting that the global shapes of the patterns had not been the basis for their accuracy of discrimination (Millar, 1977 a). Matching successive patterns by dot-density differences was superior to matching by the spatial location of the dots, and dot-density differences produced higher levels of performance than matching by symmetry versus asymmetry in the shapes (Millar, 1978 a). Differences in dot numerosity were also a better cue than outline shape for judging the odd one out of four braille words (Millar, 1984a). Dot-density differences were also superior to differences in outline shape for matching small dot patterns other than braille (Millar, 1985 a). Recognition of braille letters was significantly impaired when the characters differed from each other in texture (dot-gap intervals), despite the fact that they were identical in shape, as well as in name (Millar, 1977b).
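The point that outline shape underdetermines these patterns can be made concrete by simple enumeration. The sketch below is my own illustration, and the row/column «outline» measure is an assumption introduced for the example, not a measure used in the cited studies: treating the braille cell as a 3 x 2 matrix of possible dots, it counts how many distinct dot patterns collapse onto the same crude outline.

```python
from itertools import product
from collections import Counter

# Illustrative enumeration (not an analysis from the cited studies): the braille
# cell is a 3-row x 2-column matrix of possible dots, giving 2**6 - 1 = 63
# non-empty patterns. Using the set of occupied rows and columns as a crude
# proxy for global outline, many distinct patterns share a single outline.
outlines = Counter()
for cell in product((0, 1), repeat=6):            # dots indexed row by row
    if not any(cell):
        continue
    rows = tuple(sorted({i // 2 for i, dot in enumerate(cell) if dot}))
    cols = tuple(sorted({i % 2 for i, dot in enumerate(cell) if dot}))
    outlines[(rows, cols)] += 1

print(sum(outlines.values()))     # 63 distinct dot patterns
print(len(outlines))              # far fewer distinct outlines (21)
print(max(outlines.values()))     # the most crowded outline covers 25 patterns
```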
There is, of course, no question that even the small braille patterns that lack redundancy can be coded as spatial configurations also by touch. That usually requires prior experience. Direct evidence comes from filming the movements of the reading finger from underneath transparent surfaces (Millar, 1988 b, 1997)). Deliberate exploration of characters for shape information is seen typically in very slow, letter-by-letter readers, particularly by former print readers with extensive visuo-spatial experience, who learned braille relatively late in life (Millar, 1997). They move the exploring finger in up/down, zigzag and circular fashion over a character in attempting to construct the total pattern. Such movements become much faster and stereotyped with experience, without losing the typical character of scanning the form. For experienced readers, the manner of coding depends on task demands. Fluent braillists who have learned braille from the start, show evidence of coding letter shapes mainly in tasks that demand search for single characters. By contrast, fast reading for meaning is based on cues from dynamic shear patterns across the moving fingerpad (Millar, 1987). With experience, scanning movements by the two hands are organised, relative to each other, to provide the spatial information about the location of words and lines that is needed for reading texts, as well as the tactual (shear pattern) information which is translated rapidly into verbal (phonological and semantic) form for comprehension and recall.
Taken together, the findings support the hypothesis that the small (two to three item) tactual memory spans for unfamiliar patterns (see earlier) can be explained by the paucity of reference information for coding inputs spatially in these conditions. Touch seems to be particularly good at feeling texture differences. These can be used to enhance discrimination (Millar, 1986 a ; Schiff & Isikow, 1966), and may underlie the modality-specific tactual coding that was demonstrated. Relying on texture differences in memory when reference information is too sparse for spatial organisation would certainly be useful. But it would also explain why non-verbal tactual memory spans were confined to two to three items. That was in contrast to large spans for inputs that are quickly re-coded verbally via long-term familiar names that can be rehearsed in the short term (see earlier).
The hypothesis that spatial coding depends on the amount and redundancy of available reference information can also account for better recognition of 3-D forms that are not necessarily nameable. Such conditions exist, for instance, for experienced subjects who use exploratory strategies that actively seek environmental reference cues, or use the two hands relative to the body midline to act as self-referent frames in relation to which tactual «feels» can be localised.
It should be noted that the term «frames of reference» is used here as an operationally defined term. The relevant information can be experimentally manipulated. Blindfolding eliminates current information about environmental frames in relation to which the location of a stimulus can be determined. Such manipulation leaves body-centred reference frames intact. But self-referent frames can also be enhanced or disturbed. Self-reference that is centred on the body midline can be disrupted by changing the posture or orientation of the body, or the orientation of displays (e.g. Millar, 1975c, 1985 b). Similarly, stimuli can be aligned to facilitate self-referent coding. We showed that positioning the two exploring fingers directly above the stimuli, in alignment with the body midaxis, produced a similar advantage for symmetric over asymmetric (raised line) patterns to that which is found in vision (Ballesteros, Millar & Reales, 1998; Millar, Ballesteros & Reales, 1994). By contrast, shape symmetry is not an effective cue in haptic conditions that afford few reference cues for spatial coding (Millar, 1978 a). Exploring unfamiliar shapes blind with one finger, without special alignment to body-centred frames, provides few reference cues. In such conditions, symmetry has no advantage (Ballesteros, Millar & Reales, 1998; Millar, Ballesteros & Reales, 1994; Millar, 1978 a).
Reference frames in relation to which the location of tactual features can be determined are needed for spatial coding. The hypothesis here is that spatially organised configurations reduce inputs, without loss of information, to a form in which the information is manageable in memory over the short term. It predicts good memory for spatially organised tactual inputs in contrast to relying on relatively unorganised texture aspects of tactual input.
Studies concerned with nonverbal tactual stimuli have mainly used single stimulus configurations that are larger than braille patterns (e.g. Warm & Foulke, 1968; Locher & Simmons, 1978 ). The next section focuses more specifically on factors in memory for the somewhat larger movements that such layouts require.
Short-term memory for movements
Movements have mainly been studied as output systems. The focus has been on the kind of control (feedback, feed-forward, open-loop or closed-loop) that accounts for controls in sensorimotor and motor skills. However, recent work has emphasised the importance, as well as the diversity, of reference frames in relation to which movements are organised spatially (e.g. Berthoz, 1991; Jeannerod, 1988, 1991; Paillard, 1991). Most of the work has concerned movements in visual environments in which the goal or target and other external cues were visible, either throughout the tests, or initially and at various points before blindfolding. A good deal of the evidence on spatial coding, for instance, of reaching movements, comes from studies with blindfolded sighted subjects who were initially allowed sight of the target. The information subjects have at the beginning of the task was thus of the target location that could, in principle, be determined in relation to surrounding external visuo-spatial frame cues, as well as relative to body-centred cues. When external frame cues are excluded by blindfolding subjects, the postural, body-centred frames that guide reaching and aiming movements sustain memory for the target location. In totally blind conditions, external environmental targets can be signalled initially by auditory or olfactory cues to which postures are adjusted.
Short-term memory for blind movements within personal space (reach) has also been studied in the context of attentional or capacity limitations (Laabs & Simmons, 1981). Laabs (1973) distinguished between coding location and movement or kinaesthetic information by requiring recall of either the location or the extent of a positioning movement, from a different position than in presentation. In such paradigms, the endlocation is recalled much more accurately than the extent of the movement. The findings were widely interpreted to suggest that movements and spatial location are coded differently in short-term memory (Kelso & Wallace, 1978; Kelso & Clarke, 1982; Laabs & Simmons, 1981; Marteniuk, 1978; Russell, 1976). Thus the endlocations of blind movements can be coded spatially by reference to body-centred spatial frames, while recall of movement extents depends on kinaesthetic inputs when these are not organised relative to spatial anchors or frames. There has long been evidence that even unorganised movements survive in short-term recall. Thus, interpolating different, irrelevant movements in delay periods distorts recall of the target movement significantly (Adams & Dijkstra, 1966).
Recall of endlocations and of movement extents of positioning movements is not completely independent in paradigms in which recall starts from a different position (e.g. Walsh & Russell, 1979, 1980). Thus, subjects undershoot endlocations if the start in recall is further away, and overshoot if the start location is nearer than the start in presentation. Precisely the opposite pattern of over- and undershooting is found in recall of movement extent or distance from positions that are further away or nearer than the original starting point of the movement (Imanaka & Abernethy, 1992 a). It has been suggested that subjects use a location strategy when location information is reliable, but use other means, such as trying to time movements by counting, when location is unreliable (e.g. Diewert & Roy, 1978). Explicit information about starting and endpositions of a movement enabled subjects to use a «location» strategy which abolished the typical over- and undershooting patterns. The typical pattern reappeared when that location was made unreliable and subjects were asked to use counting (timing) or to envisage the distance from a changed position about which they had no information (Imanaka & Abernethy, 1992 b). The findings can be explained in terms of coding endlocation spatially, and relying on memory for kinaesthetic cues, depending on the reliability of the spatial information.
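One way to make the mirrored error pattern concrete is a toy model, offered here as my own illustration rather than as an analysis proposed by the cited authors: suppose the reproduced response blends a remembered endlocation with a remembered movement extent, with the weighting depending on which code the task and the available reference information favour. Shifting the recall start then yields undershooting of endlocations together with overshooting of extents, or vice versa, exactly as described above.

```python
# Toy model (my own illustration, not the cited authors' analysis): the response
# blends a remembered endlocation with a remembered movement extent. Positions
# are arbitrary units along a single track.
def reproduce(start_recall, start_presented, end_presented, w_location):
    extent = end_presented - start_presented
    location_code = end_presented            # "stop where the movement ended"
    extent_code = start_recall + extent      # "repeat the same distance"
    return w_location * location_code + (1 - w_location) * extent_code

END = 30.0   # presented movement: start 10, end 30, extent 20

# Location instruction (location code dominates): a recall start further from the
# end gives undershooting of the endlocation, a nearer start gives overshooting.
print(reproduce(5.0, 10.0, END, w_location=0.8))    # 29.0, undershoots 30
print(reproduce(15.0, 10.0, END, w_location=0.8))   # 31.0, overshoots 30

# Extent instruction (extent code dominates): the same shifts give the opposite
# pattern for the reproduced distance (presented extent was 20).
print(reproduce(5.0, 10.0, END, w_location=0.2) - 5.0)    # 21.0, overshoots 20
print(reproduce(15.0, 10.0, END, w_location=0.2) - 15.0)  # 19.0, undershoots 20
```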
External frame information can also be provided, and its use encouraged by instructions, in blind conditions. Thus deliberately providing an external frame, and instructing blindfolded sighted children to use it to remember the endlocations of positioning movements from different starting points, produced greater accuracy, and significantly reduced interference from changes in the length of movements (Millar, 1985 b). Deliberately instructing the subjects to use their body midline as a spatial anchor had similar, although less dramatic effects.
It has been shown that reliable spatial information does not need to be visual in origin. Coding locations spatially relative to egocentric frames has been demonstrated with young congenitally totally blind children. Indeed, young congenitally totally blind children tend to rely more on body-centred spatial coding (Millar, 1981 b, 1985 b). Such spatial information is, of course, reliable also in the absence of vision. When self-referent cues are unreliable and there are no salient (or well-known) external frame cues, they tend to use memory for movements (Millar, 1975c, 1979). Using a cognitive interference task (counting backwards) in conjunction with the Laabs (1973) paradigm, we found that the difficult cognitive task during delays significantly interfered with memory for locations that could be coded reliably by reference to body-centred frames, but did not interfere with memory for movements that could not be coded reliably in terms of spatial locations (Millar, 1994, p. 148).
The findings thus demonstrate short-term memory for movements also in the total absence of visual information. But they suggest also that memory for movement extents, coded in terms of the kinaesthetic inputs, tends to be less accurate and more variable than movement information about the endlocation if these are coded spatially, for instance, with reference to body-centred frames.
Spatial coding is not the only form of organising movement information. Blind children are taught to estimate distances by counting the number of steps, or sounds of a turning wheel. Counting during delays has been found to impair kinaesthetic memory (Williams, Beaver, Spence & Rundell, 1969). But the degree of practice is also important. Constant repetition of a given movement distance interfered significantly with memory for endlocations of the movement when the starting position of these varied (Millar, 1985 b). Thus, with extensive practice, memory for movement extents can become extremely robust also. The whole question of effects of familiarity on short-term memory spans and on working memory is extremely important, not least for the recall and recognition of movement information, and will be considered in more detail later.
The point that needs to be stressed is that short-term memory for kinaesthetic inputs can be demonstrated even when these inputs are not spatially organised, although memory for unorganised kinaesthetic inputs (e.g. movement extent) is less accurate and less stable. We have shown modality-specific motor memory in blind conditions from which any influence from longer-term visual knowledge can be excluded (Millar & Ittyerah, 1991). Recall of a criterion movement by congenitally totally blind children showed significant overshooting when an irrelevant larger movement was interpolated during the delay period, and significant undershooting of the criterion in recall when the interpolated irrelevant movement was shorter than the criterion. Blindfolded sighted children showed the same effects. But in the case of the blind children it was also possible to exclude any possibility that the modality-specific effects on short-term memory could have been mediated by longer-term visual knowledge.
Perhaps more important still, merely imagining the irrelevant movements during the delay period, without actually executing them, had similar effects on short-term recall of a criterion movement as actually executing the irrelevant movement (Millar & Ittyerah, 1991). Such movement imagery was shown also in conditions of total blindness. Congenitally totally blind children, as well as blindfolded sighted children, were instructed to imagine («in their heads») executing an irrelevant shorter or longer movement during the delay before recalling the criterion, but not to make the movements overtly. Significant undershooting and overshooting in recall, depending on the type of irrelevant movement that had been imagined, was shown by the congenitally totally blind as well as by blindfolded sighted children.
The finding shows movement representation in short-term memory. This is an important finding, because it suggests a basis for mental rehearsal of movements even in conditions that exclude past as well as present visual information totally. Mental rehearsal of movements has long been shown to improve performance by sighted adults (e.g. Johnson, 1982), and is used in sports training. The effects of imagining irrelevant movements with blind children thus suggest that such strategies are, in principle, also available in totally blind conditions. Instructions to mentally rehearse the criterion movement («imagine repeating the movement in your head») during delays significantly improved recall by the blindfolded sighted children. The improvement was in the same direction for the congenitally totally blind. But it did not reach significance level, despite the fact that discrepant movements during delays significantly interfered with their recall (see earlier). The lack of significant improvement is probably explained by differences in informational conditions. All movements in the study had been designed to cross the body midline, in order to encourage movement coding by reducing spatial coding in terms of the body midline (Millar & Ittyerah, 1991). Young blind children tend to rely on memory for movements when coding in terms of self-referent spatial frames is made difficult or disrupted (Millar, 1979, 1981b, 1985b, 1994). Memory for the criterion movement would be liable to disruption by the discrepant interpolations in delays if the criterion was coded in terms of kinaesthetic cues. But such coding may not be sufficiently economical to sustain active mental rehearsal to improve recall. Rehearsal strategies are likely to be more efficient with codes that organise inputs economically (see earlier). Thus the blindfolded sighted may be able to use additional reference frames, derived from visuo-spatial experience, to organise kinaesthetic inputs when self-referent spatial coding is made difficult. Effective mental rehearsal of kinaesthetic cues may depend on using more economical codes that reduce the memory load of kinaesthetic inputs. Spatial coding, either in terms of body-centred or external frames, would produce better recall for that reason.
Mental rehearsal of movements can be shown to have physiological correlates. Thus physiological studies of motor imagery have shown that patterns of activity in brain areas, including the motor cortex, are similar to patterns for actually executed movements (Jeannerod & Decety, 1995). It may be important that subjects in most movement studies are sighted people who are tested blindfolded. In some cases, testing occurs in the dark, but an external orienting cue remains visible throughout. It seems possible that in conditions of total blindness, efficient organisation of movements, which permits effective mental rehearsal, requires more practice, familiarity and consequently more influence of longer-term knowledge for mental rehearsal of movements to produce the same level of facilitation in short-term memory. That requires further study.
The whole question of longer-term memory involvement in short-term and working memory is extremely important, not least for the recall and recognition of movement information. There is little doubt that immediate memory spans are larger for familiar than for new words, and for tactual patterns for which names can be retrieved easily and fast (e.g. Henry & Millar, 1991, 1993; Millar, 1975a). It seems likely that this is true also for immediate memory for movements. Many everyday movements are, of course, so well practiced that they run off automatically with great precision. A discussion of the feedback and feed-forward processes by which movement accuracy is achieved is beyond the present brief. The point is rather that short-term motor memory in blind conditions can be based on kinaesthetic inputs, but that efficient mental rehearsal of movements, which facilitates recall, involves longer-term memory for efficient means of organising the input information so as to reduce its memory load.
The longer-term information that I have particularly in mind, as one basis of short-term memory for movement inputs, is practiced exploratory movements. The importance of input-output links for mental rehearsal and the size of immediate memory spans has been mentioned before in connection with memory for sounds (see Henry & Millar, 1991, 1993). But there is also evidence suggesting that familiarity with well-organised exploratory movements forms an important basis of tactual recognition memory.
Thus, a surprising finding in getting congenitally totally blind children to draw was that, even though they had never drawn before, the older subjects at least drew figures that differed little in general schema from those of their sighted cohorts (Millar, 1975 c). More surprising still, the blind children were much better at producing recognisable raised line drawings of the human figure than at recognising such figures (Millar, 1986 b, 1990, 1991 a). For the sighted, the reverse is the case. Young sighted children can recognise drawings long before they can attempt to reproduce them. The finding for the blind should not, of course, be taken to mean that they never recognise drawings immediately, let alone that such recognition is impossible. But it does require more familiarity with exploratory strategies than is needed in vision.
The reversal in difficulty between recognition and production focuses attention on the importance of both movement output and familiarity with exploratory strategies as factors in short-term motor memory. The findings suggest strongly that familiarity with efficiently organised exploratory output movements can serve as at least one important basis for haptic recognition and short-term haptic memory in the total absence of sight (Millar, 1991).
Intersensory coding and haptic memory
I have suggested that memory for tactual information has to be examined in the context of intersensory processing, in which converging inputs vary with task demands. In blind conditions, inputs for two-dimensional stimulus patterns come from touch, movement, and posture cues, in varying conjunctions with longer-term procedural and organizational knowledge. The findings that have been reviewed here suggest how information from touch and movement may be coded in short-term memory, and what informational conditions produce different levels of efficiency in recognition and recall spans.
Not surprisingly, general principles that apply to short-term memory for inputs from other modalities also apply to memory for haptic inputs. One of the most important is the principle of economy of coding (Miller, 1956). Briefly, the more parsimonious the organisation of inputs, the greater is the probability of recall. The principle implies that there are limits to the amount of information that the organism can handle at any one time. The fact that such «capacity limits» are not rigid is shown by the almost universal effect of familiarity and repetition. Familiarity effects are also (though not only) important in the modes of recoding, or reorganizing inputs more parsimoniously, that are available to a subject. The second principle is, therefore, that tasks, which require temporary memory, also involve longer-term memory to greater or lesser extents.
The third principle, advocated here, is that limits on short-term functioning depend on the amount of overlap and redundancy of converging information from different sources. Neither the metaphor of a single limited-capacity channel, nor the notion of numbers of quite separate (floppy-disk-like) modules, is adequate. The alternative metaphor is needed particularly in the light of increasing evidence on the relation between the modalities. Findings suggest that high degrees of specialization, on the one hand, but also high degrees of informational overlap, on the other hand, are needed to organise inputs spatially in terms of reference frames.
Multisensory information is indeed the norm for humans in most conditions. It is inconceivable, on general grounds, that multiple specialized mechanisms would have evolved if each provided precisely the same higher order abstract information. Specialized series of analyses of inputs from different external and internal sources converge in varying combinations to provide multisensory information. Thus, the physiological evidence shows that a number of areas of the brain that subserve spatial tasks receive multisensory information (e.g. Stein, 1991). Single unit recording has found bimodal neurons, which are activated both by visual and tactual stimuli, in a number of brain areas which are involved in reaching movements, suggesting how spatial organisation of reaching movements may be encoded relative to extrapersonal space (Graziano & Gross, 1994, 1995). Multisensory information and overlap seems to be particularly important for the spatial organisation of inputs.
Experimental studies of crossmodal, visual/tactual performance have mainly used paradigms which dissociate the contributing modalities in spatial tasks. There is ample evidence from these that viewing the hand through distorting lenses alters subjects’ haptic perception of where their hands are (e.g. Howard & Templeton, 1966). Crossmodal coding of 3-D shapes has been demonstrated with young children (e.g. Rudel & Teuber, 1964). But it involves more than one factor. Task conditions, even the order in which intramodal and crossmodal conditions are presented, have significant effects (e.g. Millar, 1975 c). Fewer studies actually use simultaneous bimodal inputs. But it is fairly clear from the findings that concomitant inputs from different sources facilitate performance, particularly in conditions in which the available task information is degraded or insufficient. Thus, adding information from touch to vision contributes little to improving visual shape recognition, but adding visual information aids recognition of unfamiliar tactual shapes (Millar, 1971; Heller, 1982). Similarly, concordant inputs from texture and shape cues improve tactual performance, while discordant inputs disrupt recognition (Millar, 1986 c). The tasks that are of special interest here concern recognition and memory of relatively unfamiliar 2-D shapes and displays. Such tasks are rarely a problem in normal visual conditions, even for very young children (e.g. Ballesteros et al., 1998; Millar, 1971, 1972a; Millar et al., 1994). Visual conditions typically provide the concomitant cues from different surfaces and features that act as reference frames for spatial coding from the start. Moreover, they usually overlap with convergent proprioceptive reference cues.
The same tasks for which vision provides the most salient concomitant reference cues can also be performed with purely haptic information. But the automatic overlap and redundancy of concomitant current reference cues from different sources is severely reduced in the absence of sight. It is possible, in principle, to achieve the same levels of efficiency in haptic as in visual tasks that require memory for 2-D shapes, distances, directions, or locations, provided that alternative means of reference are available for spatial coding. At the same time, information conditions are not precisely the same for blindfolded sighted subjects as for the congenitally totally blind, who have no long-term visual experience (Millar, 1979, 1988 a, 1994). Body-centred reference frames, prior procedural experience in searching for external frame cues, and cue redundancy from alternative (e.g. hearing) sources become more important for spatial coding in these conditions.
Memory for information from touch and movement has been demonstrated both under short-term blindfolding and in the total absence of prior visual experience. The principle of parsimony applies. Short-term spans for unfamiliar inputs are small. But they show effects of coding texture (shear pattern) in the case of shape tasks, of kinaesthetic coding in the case of unfamiliar movements, or of both. Such memory coding is «modality-specific» in the sense that coding is derived from relatively specific aspects of the input.
More robust short-term haptic memory is found when the same haptic inputs are spatially organised. Such coding depends on accessible reference frame cues, as does spatial coding in visual conditions. The same general principle thus applies. Nevertheless, the informational conditions on which such coding depends differ from those in which visual cues are also present. Thus there is greater need to rely on body-centred reference information, or prior procedural knowledge, or to use alternative external (e.g. sound) location cues when that is possible. Alternative means of more parsimonious coding can involve naming (see earlier), or counting. Moreover, the findings show that short-term haptic memory can involve mental rehearsal of movements. This differs from the articulatory coding that increases verbal memory spans. Memory even for the more organised haptic inputs thus includes information that derives from the haptic input conditions. Memory in touch thus includes «modality-specific» aspects of the input information.
To fit these findings into some form of «working memory» model, one might add a haptic-movement loop system, by analogy with articulatory coding. But to work effectively, the system would also need access to longer-term visuo-spatial and/or egocentric reference information, as well as to longer-term procedural knowledge. Such access would need to be quite flexible, especially in response to differences in task demands. It is possible that the assumption of a «central executive» in the system (Baddeley, 1990) might be sufficient to fulfil that role. But it is not quite clear how the model incorporates the continual changes in longer-term memory that must be supposed with development and further experience, and which clearly play an important role, particularly in haptic short-term memory.
References
Adams, J.A. & Dijkstra, S. (1966). Short-term memory for motor responses. Journal of Experimental Psychology, 71, 314-318.
Appelle, S. (1991). Haptic perception of form: Activity and stimulus attributes. In M.A. Heller & W. Schiff (Eds) The Psychology of Touch (pp. 169-188). Hillsdale, N.J.: Erlbaum.
Andersen, R.A. (1987). Inferior parietal lobule function in spatial perception and visuomotor integration. In F. Plum & V.B. Mountcastle (Eds) Handbook of Physiology. Rockville, Maryland: American Physiological Society.
Atkinson, R.C. & Shiffrin, R.M. (1968). Human Memory: A Proposed System and its Control Processes. In K. Spence & J.T. Spence (Eds), The Psychology of Learning and Motivation, Vol.2. London: Academic Press.
Atwood, G. (1971). An experimental study of visual imagination and memory. Cognitive Psychology, 2, 290-299.
Baddeley, A.D. (1986). Working Memory. Oxford: Clarendon Press.
Baddeley, A.D. (1990). Human Memory: Theory and Practice. Hove: Lawrence Erlbaum Associates.
Baddeley, A.D. & Hitch, G. (1974). Working memory. In G. Bower (Ed). The Psychology of Learning and Motivation. Vol. VIII, pp. 47-89. New York: Academic Press.
Ballesteros, S., Millar, S. & Reales, J.M. (1998). Symmetry in haptic and in visual perception. Perception & Psychophysics, 60, 389-404.
Berla, E.P. & Butterfield, L.H.Jr. (1977). Tactual distinctive feature analysis: Training blind students in shape recognition and in locating shapes on a map. The Journal of Special Education, 11, 336-346.
Berthoz, A. (1991). Reference frames for the perception and control of movement. In J. Paillard (Ed). Brain and Space. Oxford: Oxford University Press.
Besner, D. (1987). Phonology, lexical access in reading, and articulatory suppression: A critical review. Quarterly Journal of Experimental Psychology, 41A, 91-105.
Broadbent, D. (1984). The Maltese cross: A new simplistic model for memory. Behavioral and Brain Sciences, 7, 55-94.
Brooks, L.R. (1968). Spatial and verbal components of the act of recall. Canadian Journal of Psychology, 22, 349-368.
Colley, A. & Colley, M. (1981). Reproduction of endlocation and distance of movement in early and later blind subjects. Journal of Motor Behavior, 13, 102-109.
Conrad, R. (1964). Acoustic confusions in immediate memory. British Journal of Psychology, 55, 75-84.
Conrad, R. (1971). The chronology of the development of covert speech in children. Developmental Psychology, 5, 398-405.
Davidson, P.W. (1972). The role of exploratory activity in haptic perception: Some issues, data and hypotheses. Research Bulletin, American Foundation for the Blind, 24, 21-28.
Davidson, P.W. (1974). Some functions of active handling: Studies with blinded humans. New Outlook for the Blind, 70 (5), 198-202.
Davidson, P.W., Barnes, J.K. & Mullen, G. (1974). Differential effects of task memory demands on haptic matching of shape by blind and sighted humans. Neuropsychologia, 12, 395-397.
Diewert, G.L. & Roy, E.A. (1978). Coding strategy for movement and extent information. Journal of Experimental Psychology: Human Learning & Memory, 4, 666-675.
Dobelle, W.H., Mladejovsky, J.P. & Girvin, J.P. (1974). Artificial vision for the blind: Electrical stimulation of the visual cortex offers hope for a functional prosthesis. Science, 183, 440-444.
Fodor, J. (1983). Modularity of Mind. Cambridge, Mass.: MIT Press.
Gibson, J.J. (1962). Observations on active touch. Psychological Review, 69, 477-491.
Gibson, J.J. (1966). The Senses Considered as Perceptual Systems. Boston: Houghton Mifflin.
Gibson, J.J. (1979). The Ecological Approach to Visual Perception. Boston: Houghton Mifflin.
Gilson, E.Q. & Baddeley, A.D. (1969). Tactile short-term memory. Quarterly Journal of Experimental Psychology, 21, 180-189.
Goldenberg, G., Podreka, I., & Steiner, M. (1990). The cerebral localization of visual imagery: evidence from emission computerized tomography of cerebral blood flow. In P.J. Hampson, D.F. Marks & J.T.E. Richardson (Eds) Imagery: Current Developments. London: Routledge.
Goodnow, J.J. (1971). Eye and hand: differential memory and its effect on matching. Neuropsychologia, 9, 89-95.
Graziano, M.S.A. & Gross, C.G. (1994). The representation of extrapersonal space: A possible role for bimodal visual-tactile neurons. In M.S. Gazzaniga (Ed) The Cognitive Neurosciences (pp. 1021-1034). Cambridge, Mass: MIT Press.
Graziano, M.S.A. & Gross, C.G. (1995). Mapping space with neurons. Current Directions in Psychological Science, 3 (5), 164-167.
Haber, R.N. & Haber, R.B. (1964). Eidetic imagery: I. Frequency. Perceptual & Motor Skills, 19, 131-138.
Hatwell, Y. (1978). Form perception and related issues in blind humans. In R. Held, H.W. Leibowitz & H.L. Teuber (Eds) Handbook of Sensory Physiology. Berlin: Springer Verlag.
Henry, L. & Millar, S. (1991). Memory span increase with age: A test of two hypotheses. Journal of Experimental Child Psychology, 51, 458-484.
Henry, L.A. & Millar, S. (1993). Why does memory span improve with age? A review of the evidence for two current hypotheses. European Journal of Cognitive Psychology, 5, 241-287.
Heller, M. A. (1982). Visual and tactual texture perception: Intersensory cooperation. Perception & Psychophysics, 31, 339-344.
Hitch, G. J. & Halliday, M.S. (1983). Working memory in children. Philosophical Transactions of the Royal Society of London, B302, 325-340.
Hitch, G.J., Halliday, M.S. Schaafstal, A.M. & Schraagen, J.M.L. (1988). Visual working memory in young children. Memory & Cognition, 16, 120-132.
Howard, I.P. & Templeton, W.B. (1966). Human Spatial Orientation. New York: John Wiley.
Hulme, C. & Tordoff, V. (1989). Working memory development: The effects of speech rate, word length, and acoustic similarity on serial recall. Journal of Experimental Child Psychology, 47, 72-87.
Imanaka, K. & Abernethy, B. (1992 a). Interference between location and distance information in short-term motor memory: The respective roles of kinaesthetic signals and abstract codes. Journal of Motor Behavior, 24, 274-280.
Imanaka, K. & Abernethy, B. (1992 b). Cognitive strategies and short-term memory for movement distance and location. Quarterly Journal of Experimental Psychology, 45A (4), 669-700.
Jeannerod, M. (1988). The Neural and Behavioural Organization of Goal-directed Movements. Oxford: Clarendon Press.
Jeannerod, M. (1991). A neurophysiological model for the directional coding of reaching movements. In J. Paillard (ed.) Brain and Space. Oxford: Oxford University Press.
Jeannerod, M. & Decety, J. (1995). Mental motor imagery: a window into the representational stages of action. Current Opinion in Neurobiology, 5, 727-732.
Johnson, P. (1982). Functional equivalence of imagery and movement. Quarterly Journal of Experimental Psychology, 34A, 349-365.
Katz, D. (1925). Der Aufbau der Tastwelt. Leipzig: Barth.
Klatzky, R.L., Lederman, S.J. & Metzger, V.A. (1985). Identifying objects by touch. Perception & Psychophysics, 37, 299-302.
Kelso, J.A.S. & Wallace, S.A. (1978). Conscious mechanisms in movement. In G.E. Stelmach (Ed) Information Processing in Motor Control and Learning. New York: Academic Press.
Kelso, J.A.S. & Clarke, J.E. (1982). The Development of Movement Control. New York: John Wiley.
Kosslyn, S.M. (1980). Image and Mind. Cambridge, Mass.: Harvard University Press.
Kosslyn, S.M. (1981). The medium and the message in mental imagery: A theory. Psychological Review, 88, 46-66.
Kosslyn, S.M., Ball, T.M., & Reiser, B.J. (1978). Visual images preserve metric spatial information: Evidence from studies of image scanning. Journal of Experimental Psychology: Human Perception & Performance, 4, 47-60.
Laabs, G.L. (1973). Retention characteristics of different reproduction cues in short-term memory. Journal of Experimental Psychology, 100, 168-177.
Laabs, G.L. & Simmons, R.W. (1981). Motor memory. In D. Holding (Ed) Human Skills. New York: Wiley.
Locher, P.J. & Simmons, R.W. (1978). Influence of stimulus symmetry and complexity upon haptic scanning strategies during detection, learning and recognition tasks. Perception & Psychophysics, 23, 110-116.
Logie, R.H. (1986). Visuo-spatial processing in working memory. Quarterly Journal of Experimental Psychology, 38A, 229-247.
Logie, R.H. & Baddeley, A.D. (1990). Imagery and working memory. In P.J. Hampson, D.F. Marks & J.T.E. Richardson (Eds) Imagery: Current Developments. London: Routledge.
Logie, R.H. & Pearson, D.G. (1997). The inner eye and the inner scribe of visuo-spatial working memory: Evidence from developmental fractionation. European Journal of Cognitive Psychology, 9 (3), 241-257.
Logie, R.H., Zucco, G.M. & Baddeley, A.D. (1990). Interference with visual short-term memory. Acta Psychologica, 75, 55-74.
Loomis, J.M. & Klatzky, R.L. (1991). Similarity of tactual and visual picture recognition with limited field of view. Perception, 20, 167-177.
Marteniuk, R.G. (1978). The role of eye and head position in slow movement execution. In G.E. Stelmach (Ed) Information Processing in Motor Control and Learning. New York: Academic Press.
Millar, S. (1971). Visual and haptic cue utilization by preschool children: The recognition of visual and haptic stimuli presented separately and together. Journal of Experimental Child Psychology, 12, 88-94.
(1972a). Effects of instructions to visualise stimuli during delay on visual recognition by preschool children. Child Development, 43, 1073-1075.
(1974). Tactile short-term memory by blind and sighted children. British Journal of Psychology, 65, 253-263.
(1975a). Effects of tactual and phonological similarity on the recall of Braille letters by blind children. British Journal of Psychology, 66, 193-201.
(1975b). Effects of phonological and tactual similarity on serial object recall by blind and sighted children. Cortex , 11, 170-180.
(1975 c). Spatial memory by blind and sighted children. British Journal of Psychology, 66, 449-459.
(1975 d). Translation rules or visual experience? Drawing the human figure by blind and sighted children. Perception, 4, 363-371.
(1975 e). Effects of input variables on visual and kinaesthetic matching by children within and across modalities. Journal of Experimental Child Psychology, 19, 63-78.
(1977 a). Early stages of tactual matching. Perception, 6, 333-343.
(1977 b). Tactual and name matching by blind children. British Journal of Psychology, 68, 377- 387.
(1978 a). Aspects of information from touch and movement. In G. Gordon (ed.) Active Touch. London: Pergamon Press.
(1978 b). Short-term serial tactual recall: Effects of grouping tactually probed recall of Braille letters and nonsense shapes by blind children. British Journal of Psychology, 69, 17-24.
(1979). Utilization of shape and movement cues in simple spatial tasks by blind and sighted children. Perception, 8, 11-20.
(1981 a). Crossmodal and intersensory perception and the blind. In R.D. Walk & H.L. Pick (Eds) Intersensory Perception and Sensory Integration. New York: Plenum Press.
(1981 b). Self-referent and movement cues in coding spatial location by blind and sighted children. Perception, 10, 255-264.
(1984). Strategy choices by young Braille readers. Perception, 13, 567-579.
(1985 a). The perception of complex patterns by touch. Perception, 14, 293-303.
(1985 b). Movement cues and body orientation in recall of location by blind and sighted children. Quarterly Journal of Experimental Psychology, 37, 257-279.
(1986 a). Aspects of size, shape and texture in touch: Redundancy and interference in children’s discrimination of raised dot patterns. Journal of Child Psychology and Psychiatry, 27, 367-381.
(1986 b). Drawing as Image and Representation in Blind Children. In D.G. Russell, D.F. Marks & J.T.E. Richardson (Eds) Image 2. Dunedin, New Zealand:
(1987 a). Perceptual and task factors in fluent braille. Perception, 16, 521-536.
(1988 a). Models of Sensory Deprivation: The nature/nurture dichotomy and spatial representation in the blind. International Journal of Behavioural Development, 11, 69-87.
(1988 b). An apparatus for recording hand movements. British Journal of Visual Impairment and Blindness, VI, 87-90.
(1990). Imagery and Blindness. In P.J. Hampson, D.F. Marks & J.T.E. Richardson (Eds) Imagery: Current Developments. London: Routledge & Kegan Paul.
(1991). A Reversed Lag in the Recognition and Production of Tactual Drawings: Theoretical Implications for Haptic Coding. In M.A. Heller & W. Schiff (Eds) The Psychology of Touch. Hillsdale, N.J.: Lawrence Erlbaum Associates.
(1994). Understanding and Representing Space: Theory and Evidence from Studies with Blind and Sighted Children. Oxford: Clarendon Press.
(1997). Reading by Touch. London & N.Y.: Routledge.
Millar, S., Ballesteros, S. & Reales, J.M. (1994). Influence of symmetry in haptic and visual perception. Paper presented at the 35th Annual Meeting of the Psychonomic Society, St. Louis, Mo., November 11-13.
Millar, S. & Ittyerah, M. (1991). Mental practice without visuo-spatial information. International Journal of Behavioral Development, 15, 125-146.
Miller, G.A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97.
Monsell, S. (1984). Components of working memory underlying verbal skills: A «distributed capacities» view. In H. Bouma & D. Bouwhuis (Eds) International Symposium on Attention and Performance, X (327-350). Hillsdale, N.J.: Erlbaum.
Monsell, S. (1987). On the relation between lexical input and output pathways for speech. In A. Allport, D.G.Mackay, W.Prinz, & E. Scheerer (Eds) Language Perception and Production: Relations between Listening, Speaking, Reading and Writing. London: Academic Press.
Nolan, C.Y. & Kederis, C.J. (1969). Perceptual Factors in Braille Recognition. American Foundation for the Blind Research Series, No. 23. New York: American Foundation for the Blind.
Paillard, J. (1991). Motor and representational framing of space. In J. Paillard (Ed) Brain and Space. Oxford: Oxford University Press.
Revesz, G. (1950). Psychology and Art of the Blind. London: Longmans.
Rudel, R.G. & Teuber, H.L. (1964). Crossmodal transfer of shape information by children. Neuropsychologia, 2, 1-8.
Russell, D.G. (1976). Spatial location cues and movement production. In G. Stelmach (Ed) Motor Control: Issues and Trends. New York: Academic Press.
Sakata, H. & Iwamura, Y. (1978). Cortical processing of tactile information in the first somato-sensory and parietal association areas in the monkey. In G. Gordon (Ed) Active Touch. New York: Pergamon Press.
Schiff, W. & Isikow, H. (1966). Stimulus redundancy in the tactile perception of histograms. International Journal of Education for the Blind, 16, 1-10.
Schneider, G.E. (1967). Contrasting visuo-motor functions of tectum and cortex in the golden hamster. Psychologische Forschungen, 31, 52-62.
Segal, S.J. & Fusella, V. (1970). Influence of imagined pictures and sounds on detection of visual and auditory signals. Journal of Experimental Psychology, 83, 458-464.
Shepard, R.N. & Cooper, L.A. (1982). Mental Images and their Transformations. Cambridge, Mass: MIT Press.
Shepard, R.N. & Feng, C. (1972). A chronometric study of mental paper folding. Cognitive Psychology, 3, 228-243.
Shimizu, Y., Saida, Sh. & Shimura, H. (1993). Tactile pattern recognition by graphic display: Importance of 3-D information for haptic perception of familiar objects. Perception & Psychophysics, 53 (1), 43-48.
Simmons, R. & Locher, P. (1979). Role of extended haptic experience upon perception of nonrepresentational shapes. Perceptual & Motor Skills, 48, 987-991.
Squire, L.R. (1987). Memory and Brain. Oxford: Oxford University Press.
Stein, J. F. (1991). Space and the parietal association areas. In J. Paillard (Ed) Brain and Space. Oxford: Oxford University Press.
Vallar, G. & Baddeley, A.D. (1982). Short-term forgetting and the articulatory loop. Quarterly Journal of Experimental Psychology, 34A, 530-560.
Walsh, W.D. & Russell, D.G. (1979). Memory for movement location and distance: Starting position and retention interval effects. Journal of Human Movement Studies, 5, 68-76.
Walsh, W.D. & Russell, D.G. (1980). Memory for preselected slow movements: Evidence for integration of location and distance. Journal of Human Movement Studies, 6, 95-105.
Warm, J. & Foulke, E. (1968). Effects of orientation and redundancy on the tactual perception of forms. Perceptual & Motor Skills, 27, 83-89.
Watkins, M.J. & Watkins, O.C. (1974). A tactile suffix effect. Memory & Cognition, 5, 529-534.
Waugh, N. & Norman, D.A. (1965). Primary memory. Psychological Review, 72, 89-104.
Weber, E.H. (1834). De Tactu (translated 1978 by H.E. Ross). London: Academic Press.
Weiskrantz, L. (1986). Blindsight: A Case Study and its Implications. Oxford: Oxford University Press.
Williams, H.L., Beaver, W.S., Spence, M.T. & Rundell, Q. (1969). Digital and kinaesthetic memory with interpolated information processing. Journal of Experimental Psychology, 80, 530-536.
Wilkinson, J.M. & Carr, T.H. (1987). Strategic hand use preferences and hemispheric specialisation in tactual reading. Brain & Language, 32(1) 97-123.
Accepted 11 March 1999