What term describes what occurs when information interacts with sensory receptors?

S.-W. Wu, ... L.T. Maloney, in Brain Mapping, 2015

Dynamic Integration of Noisy Sensory Information for Action Selection

Analyzing incoming sensory information is essential to forming visually guided motor decisions. Consider the following example. You are about to decide whether to swing at a baseball. As you watch the ball, you continue to receive updates about its position and trajectory. Rather than relying on a snapshot taken at a single time point, such a decision likely depends on the dynamic integration of sensory information over time. Hence, the key questions include the following: How does the brain dynamically integrate incoming sensory information over time? What are the neural mechanisms for such integration? And which circuit(s) in the brain perform these integration computations?

The major advance in our understanding of how the brain integrates sensory information over time came from the work of Newsome, Shadlen, and colleagues (Gold & Shadlen, 2007; Newsome, Britten, & Movshon, 1989; Shadlen & Newsome, 1996, 2001). In a perceptual judgment task, nonhuman primates were presented with a noisy visual stimulus consisting of randomly moving dots (Figure 3(a)). Among them, a proportion of dots moved coherently in one direction. The task was to judge the direction of dot motion, with correct responses leading to juice rewards. Because of the dynamic nature of the stimulus, integrating and evaluating sensory information about the stimulus over time is critical to forming the judgment on motion direction. This feature makes the task a natural candidate for studying the computations underlying dynamic integration.

Figure 3. The random-dot motion (RDM) discrimination task. (a) Trial sequence of the RDM task. The subjects were presented with a noisy sensory stimulus consisting of randomly moving dots. Among them, a proportion of dots (the motion coherence level) moved coherently in one of two possible directions. Subjects had to indicate the direction of dot motion with a movement. (b) Activity of neurons in the lateral intraparietal area (LIP) reported in Roitman and Shadlen (2002). Approximately 170 ms after stimulus onset, the dynamics of LIP neurons changed as a function of motion coherence level: activity increased more sharply as the coherence level increased but plateaued at approximately the same level.

(Adapted from Gold, J. I. & Shadlen, M. N. (2007). The neural basis of decision making. Annual Review of Neuroscience, 30, 535–574.)

Critically, in this task the animals had to use a saccadic eye movement to indicate their decision on motion direction. This engages specific neural systems involved in the preparation of saccadic eye movements, including the lateral intraparietal area (LIP), the frontal eye field (FEF), and the superior colliculus.

Shadlen and colleagues found that neuronal activity in LIP and FEF likely reflects the accumulation of sensory evidence over time (Gold & Shadlen, 2000; Huk & Shadlen, 2005; Mante, Sussillo, Shenoy, & Newsome, 2013; Roitman & Shadlen, 2002). This conclusion was drawn from the observation that the motion coherence level of the stimulus, that is, the proportion of dots moving in the same direction, systematically affects the dynamics of neuronal activity. As shown in Figure 3(b), activity in area LIP increased more sharply when the coherence level of the stimulus was higher. Such patterns can be captured by a process in which a decision-making system evaluates sensory evidence at each moment in time and integrates it over time, eventually forming a decision. The key intuition is that the moment-by-moment sensory evidence is stronger for stimuli with a higher coherence level, and as a result, the rate of accumulation of sensory evidence is steeper. See Smith and Ratcliff (2004) and Gold and Shadlen (2007) for more detail on the theoretical treatment of the problem.
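
This accumulation-to-bound intuition is often formalized as a drift-diffusion (sequential sampling) process. The sketch below is a minimal, illustrative simulation, not a model fitted to any of the studies cited: the drift gain, noise level, and bound are arbitrary assumptions, chosen only to show that higher coherence produces steeper accumulation and faster bound crossings.

```python
import numpy as np

def simulate_accumulation(coherence, drift_gain=0.5, noise_sd=1.0,
                          bound=30.0, dt=1.0, max_steps=2000, rng=None):
    """Accumulate noisy momentary evidence until a decision bound is reached.

    coherence (0..1) scales the mean of the momentary evidence; the drift
    gain, noise level, and bound are illustrative assumptions.
    Returns the number of steps taken to reach either bound.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = 0.0
    for t in range(1, max_steps + 1):
        # Momentary evidence: a coherence-dependent signal plus Gaussian noise.
        x += drift_gain * coherence * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        if abs(x) >= bound:  # crossing either bound terminates the decision
            return t
    return max_steps

rng = np.random.default_rng(0)
for coh in (0.05, 0.2, 0.5):
    times = [simulate_accumulation(coh, rng=rng) for _ in range(200)]
    print(f"coherence {coh:.2f}: mean time to bound {np.mean(times):.0f} steps")
```

Running the sketch shows the qualitative pattern in Figure 3(b): the stronger the momentary evidence, the steeper the accumulation trajectory and the sooner the bound is reached.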

Following the logic of evidence accumulation over time, Yang and Shadlen (2007) designed a probabilistic inference task in which information about the probability of reward associated with different actions was revealed sequentially. They found that the activity of neurons in area LIP represents the sequential updates of the relative likelihood of reward whenever a new piece of information is revealed. In human fMRI studies, similar results were obtained in LIP and extended to other regions. For example, Liu and Pleskac (2011) showed that the FEF and the intraparietal sulcus (IPS) track the accumulation of sensory evidence in a random-dot motion task. Heekeren, Marrett, Ruff, Bandettini, and Ungerleider (2006) found that the dorsolateral prefrontal cortex discriminates between high- and low-coherence stimuli in the same task. Heekeren, Marrett, Bandettini, and Ungerleider (2004) further revealed that the representation of noisy sensory evidence appears to be encoded in part of the ventral visual system and is category-specific. Philiastides, Biele, and Heekeren (2010) showed evidence for the representation of integrated reward probability in the same ventral pathway in a task similar to that of Yang and Shadlen (2007), except that all information was revealed at the same time. Taken together, these results suggest that the posterior parietal cortex and the lateral prefrontal cortex are part of a network involved in representing the continuous updates of information relevant to decision-making.
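
In a task of this kind, each newly revealed cue can be thought of as adding its weight of evidence to a running log likelihood ratio favoring one target over the other. The sketch below is a toy illustration of that additive update; the cue labels and weights are hypothetical, not the actual stimuli or weights used in Yang and Shadlen (2007).

```python
import numpy as np

# Hypothetical weights of evidence (log10 likelihood ratios) carried by
# four cues, loosely in the spirit of a sequential inference task; the
# real stimulus set and weights differ.
cue_weights = {"A": 0.9, "B": 0.4, "C": -0.4, "D": -0.9}

def update_log_odds(cues, prior_log_odds=0.0):
    """Accumulate the log odds favoring target 1 as cues are revealed."""
    log_odds = prior_log_odds
    history = [log_odds]
    for cue in cues:
        log_odds += cue_weights[cue]  # Bayesian updating is additive in log odds
        history.append(log_odds)
    return history

for log_odds in update_log_odds(["A", "C", "A", "B"]):
    p = 10**log_odds / (1 + 10**log_odds)  # convert back to a probability
    print(f"log odds {log_odds:+.1f} -> P(target 1) = {p:.2f}")
```

The point of the toy example is simply that sequential Bayesian updating reduces to summing weights of evidence, which is the kind of quantity the LIP activity in these studies appears to track.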

URL: https://www.sciencedirect.com/science/article/pii/B9780123970251002633

The Brain and Perception

Lois Isenman, in Understanding Intuition, 2018

Perception and Complexity in the Brain

The brain must filter out unimportant incoming sensory information at the same time that it intensifies what is important. Freeman became interested in what perception, rather than sensory recording alone, might mean at the neural level, as captured by the electroencephalogram (EEG). He initially studied olfaction in small, awake mammals such as cats and rabbits to learn, as the title of one of his books puts it, How Brains Make Up Their Minds.13

In neural tissue, the axon of one neuron and the dendrite of another are separated by small gaps called synapses. When an axon fires, neurotransmitter molecules rapidly diffuse across the synapse and bind to receptors on the opposing neuron’s dendrite. This initiates a wave of electrochemical excitation that travels along the dendrite to the opposing neuron’s cell body. In turn, this generates a weak electrical current in the extracellular fluid outside the dendritic membrane. The currents in the extracellular fluid from different cells add together. When summed, they can greatly enhance the possibility that other neurons in the neighborhood will also fire. The brain waves recorded by EEG primarily measure the combined extracellular current generated by the dendrites of a large number of cells.13

Extracellular currents become particularly important in the presence of neuromodulator molecules that signify biological “value.” These values correspond to the animal’s intrinsic needs and goals, and they are progressively fine-tuned by experience. Neuromodulators work by modifying neural firing to encourage behaviors that support survival and well-being. They are released in response to signals from the limbic system, the group of brain structures largely responsible for innate seeking and avoidance behaviors and for emotional responses. Some neuromodulators such as dopamine and acetylcholine (sometimes called nonspecific neuromodulators or neurotransmitters) are released throughout large regions of the brain. Others are deployed more strategically in both the brain and body at the behest of the emotion and motivation systems. Oxytocin, for example, is released in more specific regions of the brain and body during childbirth, sexuality, and bonding.19

In the presence of neuromodulators, nearby neurons that fire together readily form strongly interconnected groups called cell assemblies. When one neuron in a cell assembly fires, others in the assembly are more likely to fire as well. The large extracellular current that results when a cell assembly fires enhances the probability that neighboring neurons will also fire.13 Moreover, because of positive feedback, the more cells that fire, the more likely it is that other cells in the layer will also fire (up to a point). Together they can spark a response of the tissue layer as a whole. The individual cells undergo a state or phase transition: they begin to function as a coordinated population, which can be detected by EEG.
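
The positive-feedback recruitment described above can be caricatured as a branching cascade in which each newly firing cell recruits, on average, some number of quiescent neighbors. The sketch below is a toy illustration of the qualitative point only: the cell counts and branching ratios are invented, and real cortical dynamics are far richer. When the effective branching ratio (which stronger synaptic coupling, for example under neuromodulation, could be imagined to raise) exceeds one, the same small seed of activity ignites much of the layer, a phase-transition-like jump.

```python
import numpy as np

def cascade(n_cells=10000, n_seed=50, branching=1.2, max_rounds=100, rng=None):
    """Toy firing cascade: each newly active cell recruits, on average,
    `branching` quiescent cells in the next round (illustrative assumption).
    Returns the final fraction of the layer that was recruited."""
    rng = np.random.default_rng() if rng is None else rng
    active = n_seed
    total = n_seed
    for _ in range(max_rounds):
        # Recruitment saturates as the pool of quiescent cells empties.
        quiescent = n_cells - total
        expected = active * branching * quiescent / n_cells
        new = min(quiescent, rng.poisson(expected))
        if new == 0:
            break
        active = new
        total += new
    return total / n_cells

rng = np.random.default_rng(1)
# Below a branching ratio of 1 the activity fizzles; above it, the same
# seed recruits a large fraction of the layer.
for b in (0.8, 1.0, 1.2, 1.5):
    print(f"branching {b:.1f}: recruited fraction {cascade(branching=b, rng=rng):.2f}")
```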

URL: https://www.sciencedirect.com/science/article/pii/B978012814108300004X

Autonomic Nervous System

Maria G. Cersosimo, Eduardo E. Benarroch, in Handbook of Clinical Neurology, 2013

Amygdala

The amygdala assigns affective or emotional value to incoming sensory information (LeDoux, 2007). The amygdala is structurally complex and has multiple downstream targets that participate in the autonomic and neuroendocrine response to stress (Ulrich-Lai and Herman, 2009). The amygdala nuclear complex consists of two major divisions: the basolateral complex and the extended amygdala. The basolateral amygdala is closely affiliated with the cerebral cortex and includes the lateral, basal, and accessory nuclei. The extended amygdala is a continuum that includes the central nucleus of the amygdala (CeA), the lateral bed nucleus of the stria terminalis (BNST), and associated regions of the sublenticular substantia innominata. There is a general pattern of connectivity within the amygdala. Inputs from the cerebral cortex and thalamus reach primarily the basolateral subdivision. The basolateral complex projects back to the cerebral cortex, thalamus, and striatum to modulate sensory processing, memory, decision making, and behavioral drive. In addition, the basolateral complex sends intra-amygdaloid inputs to the extended amygdala, which projects to the hypothalamus and brainstem. The CeA is the main effector nucleus of the extended amygdala and has a major role in the integration of the stress response, particularly fear responses (Davis, 1992). The CeA initiates behavioral and autonomic responses via widespread connections with the hypothalamus, particularly the paraventricular nucleus and lateral hypothalamic area, and the brainstem, particularly the periaqueductal gray and the rostral ventrolateral and ventromedial medullary reticular formation (Loewy, 1982). The CeA is differentially activated by disruption of homeostasis and by systemic stressors (such as cytokines) and contributes to the integration of autonomic responses to psychological stress (Ulrich-Lai and Herman, 2009).

URL: https://www.sciencedirect.com/science/article/pii/B9780444534910000055

Spatiotemporal characteristics of perceptual decision making in the human brain

Marios G. Philiastides, Hauke R. Heekeren, in Handbook of Reward and Decision Making, 2009

8.1 Introduction

Perceptual decision making is the process by which incoming sensory information is combined and used to influence how we behave in the world. The neural correlates of perceptual decision making in the human brain are currently under intense investigation in systems and cognitive neuroscience. Fortunately, animal neurophysiology has already laid the foundation upon which critical new hypotheses about human decision making can be based. Specifically, single- and multi-unit recordings in primates suggest that decision making involves three main processing stages: representation of the sensory evidence, integration of the available sensory information across time, and comparison of the accumulated evidence to a decision threshold [1]. Furthermore, some psychological theories suggest that these stages of decision formation occur in a serial fashion [2,3].

Though the overall simplicity of this hierarchical model is admittedly appealing, perceptual decision making in the human brain is likely to involve a more complex, non-serial cascade of events that includes sensory processing, attention, prior information, reward, evidence accumulation, and motor response networks [4]. An alternative model, as outlined in Fig. 8.1A, involves at least four complementary and partially overlapping systems which interact in a heterarchical manner, with some of the processes occurring in parallel.

Figure 8.1. A theoretical model for human perceptual decision making. Integrating EEG with fMRI can help reveal the spatiotemporal characteristics of this model. (A) A four-compartment model of perceptual decision making in the human brain. In contrast to traditional hierarchical models of decision making [2,3], the main processes of some of these compartments can happen in parallel. The model includes a system for representing the early sensory evidence and a system for post-sensory, decision-related processing, including comparison and accumulation of sensory evidence and computation of decision variables. Because decisions are usually expressed through action, the model also includes a system of motor and premotor structures. In addition, it incorporates a system for detecting perceptual uncertainty or difficulty, which signals for the recruitment of additional attentional resources when task demands increase, and a system for performance monitoring, which detects when errors occur and when decision strategies need to be adjusted to improve performance. (B) Identifying the spatiotemporal characteristics of the model presented in (A) requires imaging the brain at both high spatial and high temporal resolution. fMRI can provide the desired spatial resolution, while single-trial EEG can identify temporally well-localized features of this model. Developing new techniques to integrate EEG with fMRI can ultimately enable the recovery of the spatiotemporal characteristics of the network processes underlying human perceptual decision making.

In addition to the main processing modules of the simple hierarchical architecture, this four-compartment model includes a system that detects perceptual uncertainty or task difficulty as well as a performance monitoring system. In this chapter we will discuss recent findings from human neuroimaging studies, which use new data analysis techniques to identify the spatiotemporal characteristics of these different systems, to provide support for the extended model proposed here.

The majority of human studies that have addressed this problem use functional magnetic resonance imaging (fMRI) to identify the cortical regions that participate in decision making [5–8]. The low temporal resolution of fMRI, however, imposes limitations on inferring causation, as little can be said about the sequence of neural activation across these regions, which is also needed to ultimately infer the true underlying neural network. A different approach to deciphering the temporal characteristics of perceptual decision making is provided by non-invasive measurements of the human electro- and magneto-encephalograms (EEG/MEG). Though the spatial resolution of these imaging modalities is rather low, they possess temporal resolution on the order of milliseconds and, in conjunction with advanced single-trial analysis techniques, can be used to map out temporally distinct components related to different events during decision formation [9–11].

Though significant progress has already been made using each of these modalities in isolation, the localization restrictions of EEG and MEG and the temporal precision constraints of fMRI suggest that only a combination of these approaches can ultimately enable the recovery of the spatiotemporal characteristics of the network processes underlying perceptual decision making in humans. This can potentially be achieved by simultaneous EEG/fMRI measurements or by EEG-informed fMRI analysis techniques in which EEG-derived regressors are used to model the fMRI data [12–14]. Because the across-trial and across-condition variability seen in the identified EEG components may carry important information regarding the underlying neural processes, correlating EEG component activity with the blood-oxygenation-level-dependent (BOLD) fMRI signal could provide images of the source of this variability with high spatial resolution. Figure 8.1B illustrates the benefits of combining EEG and fMRI in inferring the spatiotemporal characteristics of human decision making.
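
A common way to implement the EEG-informed approach is to place each trial's single-trial EEG component amplitude at that trial's onset, convolve the resulting stick function with a canonical hemodynamic response function, and enter the result as a parametric regressor in a general linear model of the BOLD signal. The sketch below illustrates that pipeline with invented onsets and amplitudes; the double-gamma HRF is the conventional form used in SPM-style analyses, and none of the numbers come from the chapter.

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr=2.0, duration=32.0):
    """Double-gamma canonical HRF sampled at the fMRI TR (conventional
    parameter values: positive peak near 6 s, undershoot near 16 s)."""
    t = np.arange(0, duration, tr)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return hrf / hrf.max()

def eeg_informed_regressor(onsets_s, eeg_amplitudes, n_scans, tr=2.0):
    """Build a parametric fMRI regressor from single-trial EEG amplitudes.

    onsets_s       -- trial onset times in seconds (hypothetical here)
    eeg_amplitudes -- one EEG component amplitude per trial, mean-centred so
                      the regressor captures across-trial variability only
    """
    stick = np.zeros(n_scans)
    amps = np.asarray(eeg_amplitudes, dtype=float)
    amps -= amps.mean()
    for onset, amp in zip(onsets_s, amps):
        stick[int(round(onset / tr))] += amp
    return np.convolve(stick, canonical_hrf(tr))[:n_scans]

# Made-up example: 10 trials, 20 s apart, with varying component amplitudes.
onsets = np.arange(10.0, 210.0, 20.0)
amplitudes = np.random.default_rng(2).normal(1.0, 0.3, size=onsets.size)
regressor = eeg_informed_regressor(onsets, amplitudes, n_scans=120)
print(regressor[:10].round(3))
```

Voxels whose BOLD time course loads on this regressor, over and above a standard task regressor, are candidates for the source of the trial-to-trial variability seen in the EEG component.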

This chapter is organized as follows. We start by briefly reviewing the major contributions of monkey neurophysiology to our current knowledge of perceptual decision making. We then discuss how the concepts derived from this animal work also apply to human decision making by reviewing evidence from a number of recent fMRI, EEG, and MEG studies. Where appropriate, we use these findings to support the model outlined in Fig. 8.1A. The latter part of the chapter focuses on how best to integrate EEG and fMRI and provides an example of how the EEG-informed fMRI analysis approach can be a valuable tool for achieving high spatiotemporal characterization of the neural correlates of perceptual decision making in humans. We conclude by outlining how this framework for perceptual decision making, in combination with the techniques presented here, can be extended to study reward- and value-based decision making in humans.

URL: https://www.sciencedirect.com/science/article/pii/B978012374620700008X

The Regulatory Disordered Infant and Child

Sharon L. Johnson, in Therapist's Guide to Pediatric Affect and Behavior Regulation, 2013

Abnormal Processes

When an infant's or child's brain does not process incoming sensory information normally, their ability to self-regulate is diminished, and they may not be able to maintain an appropriate level of awareness to meet environmental demands. The types of information shared by distressed parents about the difficulties experienced by their children fall into two basic categories:

1. Symptoms of hypersensitivity, defensiveness, or sensory avoiding, demonstrating abnormal responding to sensory input

a. demonstrates limited interests/fatigues easily and appears unmotivated

b. fear of movement or heights (may get sick in association with movement or heights)

c. overly cautious and unwilling to try new things or take risks/taking part in unsafe activities (e.g., climbing too high)

d. withdrawing or responding aggressively when touched

e. demonstrates a negative response to sounds

f. selective “picky” eater, overly sensitive to food smells or textures

g. intolerance to clothing tags, fabric textures, or will only wear certain kinds of clothes

2. Regulatory difficulties

a. difficulty tolerating or adjusting to changes, even minor changes, in routine

b. attentional difficulties, easily distracted or becomes fixated on one activity with difficulty changing focus to another activity

c. impaired ability to interact with others

d. disrupted sleep-wake cycles

e. feels uncomfortable in environments with high levels of activity, such as athletic events, shopping malls, celebratory events, etc.

Associated with these regulatory difficulties is an impaired ability to maintain an appropriate level of awareness to meet environmental demands:

Hides face, isolates from people/activity to avoid interaction and sensory input, may appear to be asleep

Nervous, anxious, difficulty attending/easily distracted

Requires cues or direct assistance to remain involved

May actually go to sleep to avoid sensory input.

As previously stated, the trajectory of normal development for self-regulation exists within a range rather than being linear. Clarification of dysfunction is aided by a conceptual classification of this range, which illuminates the progression and degree of impact of the regulatory difficulties experienced (Kostuik & Fouts, 2002; Schore, 2002; Bell & Deater-Deckard, 2007; Wolraich et al., 2008).

Mild

the child is able to adequately maintain while at daycare/preschool, but when they get home or stress escalates they decompensate and “lose it”

they feel like they are different from others but are not able to express specifically why, and may experience general unhappiness

they maintain strict routines and tend to be compulsive

demonstrate a narrow range of interests and activities

Moderate

all areas of life are affected by lack of self-regulation ability (home, school, activities, leisure)

avoidant of situations or sensory input which is experienced as aversive

negative influence reaches beyond the child to those around them

Severe

requires the assistance of others to participate or complete basic self-care needs

obvious discomfort/distress for the majority of the day

may engage in stereotypic, aggressive or self-destructive/injurious behavior.

When babies and toddlers are in distress, all of their regulatory resources are devoted to trying to organize and regain equilibrium. For a baby, stress is anything that pulls them out of attunement and into a negative emotional state (fear, anxiety, and sadness create stress). Stress is not exclusively associated with negatively charged events; it can be mere exposure to something new and different. When babies are in emotional balance they are emotionally regulated, and they rely on the relationship with the mother/caregivers to keep them regulated. For example, a mother sets her baby down to answer the phone and the baby begins to cry, signaling the need for the mother to return and “re-attune” them in order to prevent them from becoming overwhelmed by sadness. If the mother neglects or ignores the cry, the crying intensifies and leads to a series of internal reactions at the primary (survival) level, forcing the child to direct all resources to basic functioning and forfeiting the opportunity for growth and for the integration or internalization of self-soothing. As a child continues to grow and develop these needs evolve, but their reliance on the attachment system endures. Healthy attachment resulting from healthy attunement is the key to emotionally healthy babies and toddlers. The child attaches to the regulating mother, who facilitates maximal opportunity for positive emotions and security and minimizes negative emotions, thus creating optimal health, security, and resilience. A positive note on periods of mis-attunement: as long as they are brief and not chronic, they allow the child to learn self-regulation. Brief periods of mis-attunement followed by re-attunement teach or engender resilience and may also lay the foundation for developing empathy.

The assumptions underlying developmental psychology, the influence of the dyadic relationship, and the demonstrated impact upon primary areas of function aid in understanding the dynamics of regulation.

URL: https://www.sciencedirect.com/science/article/pii/B978012386884800001X

Learning Theory and Behaviour

E.A. Kensinger, S. Corkin, in Learning and Memory: A Comprehensive Reference, 2008

1.14.4.1 Point: The Perirhinal Cortex Supports Visual Perception

Perception refers to the process of acquiring, interpreting, and representing incoming sensory information. Memory, in contrast, refers to the retention and retrieval of these representations in the absence of the sensory information. A deficit restricted to memory, therefore, should be observed only when an individual is required to distinguish alternatives after they have been withdrawn from view. In contrast, a deficit in perception should be apparent when an individual is asked to distinguish objects in plain sight.

Some of the first evidence that perirhinal cortex may have a role in perception came from Eacott et al. (1994). They tested macaque monkeys on a matching-to-sample task. In this task, monkeys first learn that a particular object (e.g., a red square) is associated with a reward (the sample phase). They then are shown the rewarded object and a second object (e.g., a red square and a green circle), and the monkeys must choose the initially rewarded object (the match phase). Critically, Eacott et al. (1994) included simultaneous match-to-sample and delayed match-to-sample conditions. While deficits in the delayed condition could result either from perceptual or mnemonic impairments, deficits in the simultaneous condition should reflect perceptual difficulties (because the objects remain in view). They found that when there was a large perceptual load in the task (i.e., when they used many different types of stimuli across all of the trials), the monkeys with perirhinal lesions showed impairments even in the simultaneous condition. These results support the interpretation that perirhinal cortex mediates visual perception.

Further delineating the exact role of perirhinal cortex in visual perception, however, has proved challenging. For example, Eacott and colleagues’ (1994) study further showed that perirhinal cortex is not needed for all forms of object perception. When perceptual load was relatively low (i.e., when only a small set of items was used across all of the trials), monkeys with perirhinal lesions showed a delay-dependent deficit: They were impaired relative to control monkeys at the long delay but performed normally at the shortest delay. This pattern of performance is consistent with a mnemonic, and not a perceptual, deficit.

Several additional studies, using discrimination of pairs of visual stimuli, showed inconsistent results. Damage to the perirhinal cortex can lead to impairments in visual discrimination under some conditions, but it does not lead to a pervasive deficit across all assessments of visual discrimination. Thus, Buckley and Gaffan (1997), like Eacott et al. (1994), reported that monkeys with perirhinal cortex lesions performed as well as control monkeys when learning a small number of concurrent visual discriminations (see also Aggleton et al., 1997; Thornton et al., 1997; Buffalo et al., 1999; Baxter and Murray, 2001) but were impaired when required to learn a large number of discriminations (see also Buckley and Gaffan, 1998).

At a broad level, these results suggest that activity in perirhinal cortex may enhance the specificity of an object’s representation (i.e., the level or amount of visual detail). When the task includes relatively few stimuli, the representation for each stimulus does not have to be precise or specific for discrimination to be successful. In contrast, when the stimuli are numerous and potentially confusable, each one must be represented precisely to avoid stimulus confusion (see Murray and Bussey, 1999; Bussey and Saksida, 2002, for further discussion). By this view, perirhinal lesions selectively impair object discrimination on tasks that employ large stimulus sets.

A study by Eacott et al. (2001), however, suggested that the perirhinal cortex is not critical for representing all fine-grained discriminations between objects. In their study, Eacott and colleagues required rats to distinguish between squares and rectangles, and they modulated the difference in side length between the two shapes such that the rats had to maintain a precise representation of the shapes to distinguish the square from the rectangle. Even with perirhinal cortex lesions, the rats were able to perform the task successfully. Critically, successful performance of this task could be based on representation of a single object feature: edge length. It is likely that this single-feature representation can be supported by areas earlier in the visual processing stream, such as inferotemporal cortex (see Buckley et al., 1997), and does not require engagement of perirhinal cortex (see also Buckley et al., 1997, 2001). Rather, perirhinal cortex may become necessary only when the representation of feature conjunctions is required.

In an influential model (the perceptual-mnemonic/feature-conjunction model), Bussey, Murray, and Saksida (e.g., Murray and Bussey, 1999; Bussey and Saksida, 2002) have proposed that perirhinal cortex serves as the final processor in the ventral visual processing stream (Desimone and Ungerleider, 1986; Ungerleider and Haxby, 1994), coding for complex visual representations. Within this framework, the results described above can be understood by considering the degree of feature ambiguity present across the different stimuli. The greater the number of object pairs to be discriminated, the greater the likelihood that a particular object feature will be rewarded when it is part of one object but not when it is part of another object (i.e., the greater the feature ambiguity). Because of the increased probability that a feature will be present among multiple stimuli, the representation of conjunctions of complex features in perirhinal cortex will be critical for successful task performance. The preservation of function in the study by Eacott et al. (2001) can be explained by reliance on a precise representation of a single feature (edge length) rather than on a need to integrate multiple intraitem features. Thus, the role of perirhinal cortex may be best described as representing configural relations among features, or conjunctions of intraobject features.
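
The feature-ambiguity idea can be illustrated with a toy discrimination problem: when objects are represented only as bags of single features, a rewarded feature that also appears in an unrewarded object carries no diagnostic signal, whereas a representation over feature conjunctions keeps the objects separable. The objects and features below are invented for illustration and are not stimuli from the studies cited.

```python
from itertools import combinations

# Toy stimuli: each object is a set of features. Reward is assigned so that
# every single feature appears in both a rewarded and an unrewarded object
# (maximal feature ambiguity) -- an invented example.
rewarded   = [frozenset({"red", "square"}), frozenset({"green", "circle"})]
unrewarded = [frozenset({"red", "circle"}), frozenset({"green", "square"})]

def single_feature_scores(objects_pos, objects_neg):
    """Score each feature by how well it alone predicts reward."""
    feats = set().union(*objects_pos, *objects_neg)
    scores = {}
    for f in feats:
        in_pos = sum(f in o for o in objects_pos)
        in_neg = sum(f in o for o in objects_neg)
        scores[f] = in_pos - in_neg  # 0 means the feature is uninformative
    return scores

def conjunction_scores(objects_pos, objects_neg):
    """Score feature pairs (conjunctions) in the same way."""
    pairs = set()
    for o in objects_pos + objects_neg:
        pairs.update(combinations(sorted(o), 2))
    scores = {}
    for p in pairs:
        in_pos = sum(set(p) <= o for o in objects_pos)
        in_neg = sum(set(p) <= o for o in objects_neg)
        scores[p] = in_pos - in_neg
    return scores

print(single_feature_scores(rewarded, unrewarded))  # all zeros: ambiguous
print(conjunction_scores(rewarded, unrewarded))     # conjunctions separate them
```

Every single-feature score comes out zero, while the conjunction scores cleanly separate rewarded from unrewarded objects, which is the computational sense in which conjunctive coding (attributed by the model to perirhinal cortex) resolves feature ambiguity.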

To test this hypothesis directly, Eacott et al. (2001) designed a visual discrimination task in which combinations of features, rather than any single object feature, signaled the rewarded object. Performance on this task was compared to performance on a task in which visual discrimination among stimuli could occur based on single features. Rats with perirhinal cortex lesions were impaired only when successful performance required discrimination based on the configuration of features; when performance could rely on representation of single features, the perirhinal-lesioned rats performed normally. Bussey et al. (2002, 2003) have found similar results when assessing visual discrimination in nonhuman primates: Monkeys with perirhinal lesions were impaired on tasks that required disambiguating shared features, but not on tasks that could be solved by discrimination of single object features. These results are consistent with the conclusion that perirhinal cortex functions as part of the ventral visual processing stream and plays a critical role in feature integration (see Bussey et al., 2005).

Further corroborating evidence has come from a study demonstrating that the requirement for feature integration can exacerbate the mnemonic deficits demonstrated after perirhinal damage. In a spontaneous recognition task, Norman and Eacott (2004) used rats’ natural tendencies to explore novel items as a means to assess their ability to remember which objects had been encountered previously. The features of the novel objects were manipulated so that some differed from familiar objects in single features, whereas others differed from familiar objects in the conjunction of features. The critical finding was that perirhinal-lesioned animals showed an exaggerated memory deficit for the feature-ambiguous stimuli compared to the feature-unique stimuli. In fact, even after relatively short delays, the perirhinal-lesioned animals were at chance in distinguishing between the novel and familiar feature-ambiguous objects (Norman and Eacott, 2004).

These findings have led to the proposal that perirhinal cortex represents the association between intraitem features (Gaffan, 1994) or the gestalt representation of a whole object (Murray and Bussey, 1999). These putative functions of perirhinal cortex are consistent with its location among high-order processing regions in the ventral visual stream. These regions tend to respond to the whole object rather than to the individual features that comprise the object (Baker et al., 2002). Two open questions are whether perirhinal cortex plays a role in configural learning of all intraitem features and whether its role is limited to intraitem feature integration or also extends to the association of features that are spatially or temporally separable (see Alvarado and Bachevalier, 2005, for evidence of dissociable roles of the perirhinal and parahippocampal cortices in object vs. temporal configural memory; see Lee et al., 2005a,b, for evidence of dissociable roles of the hippocampus and MTL cortices in spatial vs. object perception; see Shaw et al., 1990, for evidence that perirhinal cortex may be important for cross-modal object processing). These questions are closely tied to the debate, discussed in the first section of this chapter, regarding the extent to which the roles of perirhinal cortex and hippocampus can be dissociated (e.g., with perirhinal cortex representing intraitem feature integration and the hippocampus representing item–item and item–context relations).

URL: https://www.sciencedirect.com/science/article/pii/B9780123705099000486

Odor Memory and Perception

Alexia Nunez-Parra, ... Diego Restrepo, in Progress in Brain Research, 2014

6 Neural Representation of Input to OB Can Be Shaped by Sniff

Olfactory sampling can be actively modified by changes in respiration frequency. This active control over incoming sensory information allows animals to generate critical context-dependent odorant representations. Interestingly, activation of OSNs and concomitant input transmission to the GL can occur during the sniff even in the absence of an odor, suggesting that the sniff itself could somehow activate OSNs. It has been proposed that this sniff-mediated effect is likely mediated by mechanical receptors located on the OSNs, activated by the sniff air pressure (Grosmaitre et al., 2007), although it could also be caused by stimuli present in ambient air. Consistent with these findings, studies using fluorescent calcium-sensitive dyes expressed presynaptically in OSN terminals found that in the absence of odor stimulation about 50% of glomerular activation was driven by the sniff (Carey et al., 2009). When OSNs are activated by odorants delivered by sniff, the number of glomeruli showing inhalation (sniff)-locked response patterns increases considerably. Interestingly, this type of odor-evoked, sniff-locked response pattern depends on sniff frequency: while low-frequency sampling (< 4 Hz) evokes sniff-locked responses that return rapidly to baseline, high-frequency sniffing (4–8 Hz) evokes sustained responses showing no clear coupling to inhalation. The attenuation of OSN inputs during high-frequency sampling might serve to selectively suppress OSN activation by background odors during exploratory sniffing (Wachowiak, 2011). This OSN decoupling phenomenon could be mediated by low-level processes and may not depend on centrifugal modulation by behavioral state (Verhagen et al., 2007). Therefore, the sniff can shape the activity pattern of OSN input to glomeruli, both at baseline and during odor-evoked responses (Carey et al., 2009; Verhagen et al., 2007).

Another interesting point that can be addressed by imaging studies performed in awake animals is the simultaneous comparison of neural responses and behavioral choices (Verhagen et al., 2007; Wesson et al., 2008b). In awake head-fixed animals engaged in an olfactory discrimination task, the behavioral output shows that the time required to correctly discriminate between different odorants can be as short as 140 ms, a period that lies within a fraction of the sniff cycle. However, OSN inputs to the bulb arrive about 100–150 ms after inhalation begins, leaving only 50–100 ms for central processing and response initiation. Yet odor discrimination can occur before the full development of OSN activation and input arrival to the OB (Wesson et al., 2008a), indicating that although the sniff can shape receptor input to the glomeruli, it plays no significant role when olfactory discrimination is performed quickly (Wesson et al., 2009). These results contrast with other studies performed in freely moving animals, which found that odor information coded within the sniff is critical for odor discrimination (Kepecs et al., 2007). It is worth mentioning that these studies used different behavioral tasks (two-choice vs. passive discrimination) and states (free-moving vs. head-fixed; Wesson et al., 2009 vs. Kepecs et al., 2007, respectively), which may account for the differences observed.

Overall, at the very early glomerular input level to the OB, the sniff dramatically affects neural input activity, including patterns of spontaneous and odor-evoked glomerular activity, response intensity, and even spatial patterns of activation. However, the contribution of these changes in neural activity to behavioral output remains debated, and more work will be necessary.

URL: https://www.sciencedirect.com/science/article/pii/B9780444633507000085

Full circle autism care includes family relationships and dynamics: practical application

Ruby Moye Salazar, Stacy Sue Rosello, in Emerging Programs for Autism Spectrum Disorder, 2021

The sensory system

Sensory Integration Theory, developed by Dr. A. Jean Ayres, can be simply conceptualized: incoming sensory information from the body and the world is continually processed in the brain. When the information is processed well, organized, and in sync, the resulting behavior is regulated, coordinated, and accurately experienced as sensation and emotion (Ayres, 1989).

There are eight senses that inform us about our world: taste, touch, smell, vision, hearing, body, movement, and the emotional “feelings” associated with each (see Table 21.2). As sensations are processed simultaneously, or integrated, they give rise to perceptions (Ayres, 1989). When the brain organizes sensations into a collective whole, meaning emerges, dependent on developmental capacity.

Table 21.2. The eight senses.

The eight senses
Touch
Taste
Smell
Vision
Hearing
Body/muscles and joints
Movement/balance and motion
“Feelings”/ internal body senses and embodied affect

In utero, the sensory systems are developing, and they become more organized as a child grows. The rich early-life experiences of infancy, toddlerhood, preschool, and beyond are fueled as the brain wires experiences within the sensory system. A child interacts with and learns about the world first through the sensory systems, which in turn drives motor development. When a child is able to coordinate a look toward a caregiver with a smile and coo, we know that multisensory integration and affectual meaning are taking place. At the same time, emotionally laden sensations or “feelings” are emerging, inviting caregiver interest and connection. With synchronous sensory processing, robust relational exchanges occur and this developmental integration process unfolds with relative ease.

However, with asynchronous processing of sensory information, the resulting behavior can be dysregulated, disorganized, or inefficiently modulated. When there are sensory processing challenges, information is like a “traffic jam” within the brain (Ayres, 1989). For example, a child may exhibit behaviors such as flapping hands, spinning, withdrawing from light touch, or appearing uninterested in interaction. Because the processing is out of sync, the child reacts in an unexpected way that influences the relational and developmental process.

These behaviors can be confusing. Yet they are a child’s way of sharing their difficulties processing internal and external experiences or conveying their unique sensory preferences. If met with attentiveness and awareness, what appears foreign actually begins to make sense. If unconventional responses are met by caregiver and practitioner with calm interest in making meaning together, awareness of the inherent developmental and neurological processes can develop so that the reciprocal response matches the child. For example, the child may be flapping in excitement, spinning because the body needs movement to regulate, withdrawing from touch and crying because of discomfort, or not responding to a social bid because their sensory system requires more input. Understanding the child’s signals will support and respectfully define engagement, disengagement, or reciprocity. New opportunities and practice organize information, interaction, and meaning to connect the body and the mind within a relational lens, thus fostering developmental integration.

Over 90% of children on the autism spectrum have sensory processing differences (Chang et al., 2014). Sensory Response Patterns are part of those differences (Miller, 2007; Profectum, n.d.a, n.d.b, n.d.c). Overresponsivity often leads to a strong reaction to sensory information. A child may exhibit withdrawal, discomfort, caution, fearfulness, or body disorganization and restlessness. Underresponsivity is represented in a child who takes more effort to engage or tends to be “in his or her own world.” A sensory craving child appears to have an insatiable desire for continual sensory input such as chewing, running, jumping, crashing, touching, spinning, visually exploring, or humming. A child with a mixed profile, which is often the case, exhibits any mixture of these response patterns. Practitioners support families when they can begin to make sense of the child’s sensory response patterns and how they may be interpreted, and sensitively and meaningfully met as adaptations (see Table 21.3).

Table 21.3. Sensory response patterns and outward visible behaviors.

Sensory response pattern | Outward visible behaviors linked to sensory differences
Overresponsivity: a stronger, often negative, reaction than expected to sensory input | Seemingly benign sensation is perceived as noxious (e.g., tickles, shirt tags); strong food preferences; gags at smells; fearful of or resistant to movement; places hands over ears; pulls away; anxious, fearful, or easily irritated
Underresponsivity: a weaker response than expected to sensory input; requires more sensory input for an acknowledged response | Slow to respond to sounds; decreased response to pain; visually unaware of others or the environment; delayed reaction to movement; may not notice smells; sedentary, quiet, self-absorbed, or withdrawn
Sensory craving: a seemingly unsatiated desire for sensation | Constant moving/incredibly active; crashing, spinning, chewing, smelling, touching, making noises/humming; visually captivated; intense presence and/or emotions; often lacks safety awareness
Mixed | Any combination of the above

Source: Based on Profectum (n.d.a, n.d.b, n.d.c), Miller (2007), Parham and Ecker (2007).

URL: https://www.sciencedirect.com/science/article/pii/B9780323850315000141

Imaging the Addicted Brain

C.A. Hanlon, ... J.L. Jones, in International Review of Neurobiology, 2016

2.2.6 Thalamus

The thalamus is one of the most highly interconnected brain regions, serving as a hub for the majority of incoming sensory information, and a relay between the striatum, pallidum, and prefrontal cortical areas. Structural studies suggest that thalamic gray matter volume is lower among chronic cocaine users than nonusing controls (Sim et al., 2007). BOLD activity in the thalamus has been associated with the euphoric state in acute cocaine intoxication (Breiter et al., 1997; Risinger et al., 2005) and during cocaine cue exposure (Garavan et al., 2000). Similarly, cocaine users have been found to have decreased BOLD signal in the thalamus during visual attention and memory tasks (Moeller et al., 2010; Tomasi et al., 2007a). Furthermore, deactivation of the thalamus during a working memory task has been associated with decreased abstinence during treatment (Moeller et al., 2010).

URL: https://www.sciencedirect.com/science/article/pii/S0074774216301131

Attention

C. Spence, in Encyclopedia of Human Behavior (Second Edition), 2012

The Theory of Perceptual Load

According to the theory of perceptual load, our attentional resources are always fully engaged with the processing of any incoming sensory information (i.e., we cannot store them up for use at a later point in time). Thus, under those conditions in which a person's primary task is not overly demanding, any spare attentional resources they have will be available for the processing of other stimuli (e.g., the irrelevant stream of speech presented in the ‘unattended’ ear in a dichotic listening study). Lavie has argued that under such ‘low-load’ conditions, late selection can be observed. However, under those conditions in which the load of a participant's primary task is increased (such as when the complexity, or rate of presentation, of the to-be-shadowed message increases, or when the number of stimuli presented simultaneously in a visual display goes up), a participant will need to devote more resources to processing it. Hence, there will be fewer resources left for the processing of any other incoming information. Lavie argues that under such ‘high-load’ conditions, attentional selection will, on the contrary, likely occur relatively early in information processing.

Perceptual load theory has provided a popular explanation for why attentional selection sometimes occurs early in information processing, whereas at other times (and/or in other studies) it occurs much later. As such, this intuitive theory appears to offer a means of resolving the long-standing (and, for many years, seemingly intractable) debate amongst researchers over early and late attentional selection. In the years since Lavie and Tsal first proposed their theory, a large and ever-growing number of studies have been published in support of the theory's main tenets. Empirical support has come from a number of sources, including both behavioral and neuroimaging research. What is more, relatively few results have been reported that cannot be accounted for by the theory. That said, one limitation of the theory is that no objective measure of ‘load’ has yet been provided.

Thus far, the majority of studies of perceptual load have tended to restrict themselves to investigating those conditions in which participants are presented with (and hence have to select between) the information presented in just one sensory modality. Most researchers have focused on conditions of unimodal visual selection, though studies demonstrating the applicability of the theory to those conditions requiring unimodal auditory selection have now started to appear in the literature. As yet, there has been relatively little research investigating the validity of perceptual load theory in accounting for participants’ behavior under conditions requiring unimodal tactile selection. However, that said, there is no obvious reason to believe that selective attention should operate any differently within the tactile modality than it does within either of the other spatial modalities (i.e., vision or audition).

Having demonstrated attentional selection under a variety of different unimodal conditions, the next question to be addressed by researchers was whether the attentional resources that lie at the heart of perceptual load theory are unitary, or whether there are instead separate resources for the processing of stimuli in each sensory modality (as originally suggested by Christopher Wickens’ ‘multiple resource theory,’ back in the early 1980s). While the results of the early crossmodal perceptual load studies came to the conclusion that resources were indeed modality-specific, the story has, in recent years, become much less clear. While some researchers have demonstrated that varying the perceptual load in one sensory modality (e.g., vision) affects the level of processing of stimuli presented simultaneously in another sensory modality (e.g., audition), other researchers have failed to observe any such crossmodal effect.

URL: https://www.sciencedirect.com/science/article/pii/B9780123750006000379

What occurs when information interacts with sensory receptors?

Sensory receptors are specialized neurons that respond to specific types of stimuli. When sensory information is detected by a sensory receptor, sensation has occurred.

What occurs when information interacts with sensory receptors (the eyes, ears, tongue, nostrils, and skin)?

Sensation occurs when information interacts with sensory receptors—the eyes, ears, tongue, nostrils, and skin. Perception is how our brain interprets the sensation.

What happens when information is detected by a sensory receptor?

When sensory information is detected by a sensory receptor, sensation has occurred. For example, light that enters the eye causes chemical changes in cells that line the back of the eye. The sensitivity of a given sensory system to the relevant stimuli can be expressed as an absolute threshold.

What are the four types of information transmitted by sensory receptors?

Four aspects of sensory information are encoded by sensory systems: the type of stimulus, the location of the stimulus in the receptive field, the duration of the stimulus, and the relative intensity of the stimulus.