Which of the following is an example of a self-report measure?

How to measure outcomes and individual differences in meditation

Yi-Yuan Tang, Rongxiang Tang, in The Neuroscience of Meditation, 2020

Self-report questionnaires

Self-report is one of the most direct options for inquiring about psychological well-being from a first-person perspective. Although the subjective nature of self-report questionnaires is not without criticism, their convenience and utility in tapping into psychological health have been validated. For clinical diagnoses of psychiatric disorders, interviews and questionnaires are routinely used to evaluate the psychological state of the patient. In general, well-validated and reliable questionnaires are highly useful if the targeted outcomes relate to different aspects of psychological well-being. For the clinical population, questionnaires assessing psychological symptoms are readily available to researchers who would like to investigate the effects of meditation on improving psychological health. Similarly, for the healthy population, the same questionnaires can be effective in measuring subclinical symptoms and in testing whether meditation is beneficial in enhancing these aspects of psychological health. However, self-report questionnaires are unlikely to be accurate for cognition-related constructs such as self-reported capacity for attention control and cognitive control. People tend to either underestimate or overestimate these abilities because such self-evaluations are not as straightforward as expressing opinions or describing how they feel emotionally.

For reporting practice time and experience, self-report questionnaires are also the most straightforward instruments, as there is no other way to noninvasively tap into the amount of practice an individual engages in. These data should nevertheless be treated with caution: they are estimations of past experience that must be reconstructed from memory, especially for those with long-term meditation experience. Another caveat of self-reported meditation practice is that the quality and quantity of practice outside formal intervention settings are difficult to control and are subject to unforeseen noise that is unlikely to occur in formal practice. Experience sampling methods could be promising for collecting data on informal practices: participants are prompted from time to time through mobile devices and asked simple questions about their current experience, allowing researchers to collect immediate, real-time responses. This approach can be particularly useful for meditation research, as it can capture both the quantity and the quality of meditation practice.
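
To make the experience-sampling idea concrete, the sketch below is an illustration only, not code from the chapter; the function names (schedule_prompts, record_response), the prompt wording, and the parameter values are hypothetical. It shows one way random within-day prompts about informal practice might be scheduled and the momentary responses stored.

```python
# Minimal sketch (not from the chapter) of a signal-contingent experience-sampling
# schedule: a few prompts per day at random times within waking hours, each asking
# two short questions about informal practice. All names and values are illustrative.
import random
from datetime import date, datetime, timedelta

def schedule_prompts(day: date, n_prompts: int = 4,
                     start_hour: int = 9, end_hour: int = 21) -> list[datetime]:
    """Return n_prompts random prompt times between start_hour and end_hour."""
    window = (end_hour - start_hour) * 60            # minutes in the sampling window
    offsets = sorted(random.sample(range(window), n_prompts))
    start = datetime(day.year, day.month, day.day, start_hour)
    return [start + timedelta(minutes=m) for m in offsets]

def record_response(prompt_time: datetime, practiced: bool, quality: int) -> dict:
    """Store one momentary report: did you practice since the last prompt,
    and how focused was the practice (1 = not at all, 7 = very)?"""
    return {"time": prompt_time.isoformat(), "practiced": practiced, "quality": quality}

if __name__ == "__main__":
    for t in schedule_prompts(date.today()):
        print("Prompt at", t.strftime("%H:%M"))
```

In an actual study the prompts would be delivered through a mobile app and the responses written to a secure data store rather than printed; the point of the sketch is only the random, in-the-moment sampling logic described above.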

URL: https://www.sciencedirect.com/science/article/pii/B9780128182666000095

Pediatric epilepsy

Aimee W. Smith, ... Avani C. Modi, in Adherence and Self-Management in Pediatric Populations, 2020

Self-report

Although self-report measures are easy to obtain and inexpensive, they are notoriously inflated and inaccurate, unless a validated and reliable questionnaire is used (e.g., Chronic Disease Compliance Instrument; Kyngas, Skaar-Chandler, & Duffy, 2000). Epilepsy-specific self-management and adherence self-report measures are valuable, as they include more nuanced disease-specific information (e.g., items related to seizure freedom). The Pediatric Epilepsy Medication Self-management Questionnaire (Modi et al., 2010) is a useful self- and caregiver-proxy measure of self-management in children and adolescents with epilepsy (Carbone et al., 2013). Given the inflated nature of self-report, a correction factor has been proposed to improve accuracy and equate it to electronically monitored adherence rates (Modi, Guilfoyle, et al., 2011).

URL: https://www.sciencedirect.com/science/article/pii/B9780128160008000098

Margaret Kathleen Pichora-Fuller, in Music and the Aging Brain, 2020

Self-report measures of hearing

Self-report measures can be used to assess functioning in everyday life. One of the most common questionnaires used to assess the psychosocial effects of hearing impairment on older adults is the 10-item Hearing Handicap Inventory for the Elderly–Screening (HHIE-S) (Ventry & Weinstein, 1983). An analysis of factors contributing to scores on the HHIE-S found that the pure-tone average (PTA) explained between one-half and two-thirds of the systematic variance (Humes, Pichora-Fuller, & Hickson, in press). One of the most widely used questionnaires designed to assess speech understanding is the Speech, Spatial, and Qualities of Hearing Scale (SSQ), which includes four questions about music (Gatehouse & Noble, 2004). Significant age-related differences have been found on the SSQ even when the older adults had normal audiometric thresholds below 4000 Hz, including significant age-related differences on three of the four items about music (Banh, Singh, & Pichora-Fuller, 2012). Questionnaires assessing music listening in older adults with hearing impairment have been used in a few studies examining benefit from hearing aids. One study reported that 30% of a sample of older adults who used hearing aids had diminished enjoyment of music, with about half complaining that music was too loud or too soft (Leek, Molis, Kubli, & Tufts, 2008). In another study, 76% of hearing aid users reported benefit for listening to recorded music and 62% reported benefit for live music (Madsen & Moore, 2014). Overall, self-report measures provide insights into the effects of auditory aging on psychosocial functioning, speech understanding, and music listening that are not fully accounted for by the PTA and not fully addressed by amplification.

URL: https://www.sciencedirect.com/science/article/pii/B9780128174227000031

Assessment of Social Anxiety and Social Phobia

James D. Herbert, ... Lynn L. Brandsma, in Social Anxiety (Second Edition), 2010

Social Skills Questionnaires

Three questionnaire measures of social skills have been developed. The Social Skills Questionnaire–Parent (SSQ-P) (Spence, 1995) is a 30-item scale, rated on a 3-point Likert scale, that assesses parents' perceptions of their child's social skills. The SSQ-P has good internal consistency and split-half reliability (Spence, 1995). The Teenage Inventory of Social Skills (TISS) (Inderbitzen & Foster, 1992) was designed to identify adolescents in grades 7 through 12 with problematic peer relationships and to help target specific problematic behaviors for intervention. It is a 40-item self-report scale with initial reports demonstrating good test–retest reliability and convergent and discriminant validity (Inderbitzen & Foster, 1992). The Matson Evaluation of Social Skills with Youngsters (MESSY) (Matson, Rotatori, & Helsel, 1983) is another self-report measure of social skills in children. It is a 62-item questionnaire consisting of five factors: overconfident, impulsive/recalcitrant, jealousy/withdrawal, inappropriate assertiveness, and appropriate social skills.

URL: https://www.sciencedirect.com/science/article/pii/B978012375096900002X

Pain in older adults

Ann L. Horgas, Amanda F. Elliott, in Handbook of the Psychology of Aging (Ninth Edition), 2021

Self-reported pain

Self-report is the primary means of measuring pain. In clinical settings, pain is assessed on a 0–10 numerical rating scale, with 0 indicating no pain and 10 indicating the worst pain imaginable. In research settings, a 0–100 visual analog scale is the most common measure of pain intensity. Among older adults, however, the verbal descriptor scale (VDS) is recommended. The VDS measures pain intensity using word anchors (e.g., no pain to worst pain imaginable), is a reliable and valid measure of pain intensity, and is the most preferred by older adults (Herr, Spratt, Mobily, & Richardson, 2004). However, these scales assess only the presence and intensity of pain and thus do not address the multidimensional aspects of pain.

Beyond these unidimensional measures, there are many disease-specific tools, such as those designed to measure pain associated with cancer or osteoarthritis, but these are not specific to older adults. These measures are generally broader in scope and assess the impact of pain on everyday functioning (e.g., sleep, movement, relationships). For instance, the Western Ontario and McMaster Universities Osteoarthritis Index measures pain associated with osteoarthritis, and the Brief Pain Inventory measures cancer-related pain (Bellamy, Buchanan, Goldsmith, Campbell, & Stitt, 1988; Cleeland, 1989). One of the most widely used pain tools is the McGill Pain Questionnaire. This multidimensional tool measures pain affect and evaluation (based on 78 word descriptors), pain location (using a body map), and pain intensity (based on the Present Pain Intensity subscale, a single question rating subjective pain on a six-point scale) (Melzack, 1975).

URL: https://www.sciencedirect.com/science/article/pii/B978012816094700012X

A roadmap for developing team trust metrics for human-autonomy teams

Kristin E. Schaefer, ... Jason S. Metcalfe, in Trust in Human-Robot Interaction, 2021

Self-report measures

Self-report measures are the most commonly used method of assessing trust across domains (e.g., interpersonal, technological); however, there is limited consistency across measurement tools. Items range from a single question ("How much do you trust X?") to scales assessing a specific type of trust and scales originally developed for human teams. While there are similarities and lessons to be learned from each, there are important differences. First, current human-robot trust researchers suggest that, because of the human component in the human-robot team, a propensity-to-trust scale should be used in conjunction with a human-autonomy trust measure (see Yagoda, 2011). This is because an individual's propensity to trust machines can directly affect the use of a specific machine, and, importantly, does so across different contexts (Merritt & Ilgen, 2008). However, little work has included propensity to trust in human-technology domains, even though propensity to trust can be understood as an inherent, trait-like characteristic of the person (i.e., an individual difference). The Interpersonal Trust Scale (ITS; Rotter, 1967) is the most common scale used to assess an individual's propensity to trust, though a newer scale has been proposed (Evans & Revelle, 2008) and is rapidly gaining adoption.

For state-based trust in human-autonomy teams, the most widely cited trust scale is the Checklist for Trust in Automation (Jian, Bisantz, Drury, & Llinas, 1998). While this scale is established as the main metric of trust for automated or preprogrammed automation systems and has been validated for use with an automated signaling system (Spain, Bustamante, & Bliss, 2008), it has not been validated for autonomy-enabled systems that are required to participate in independent decision-making and interdependent teaming. To address some of these limitations, different approaches have been offered. For example, Chen and Barnes (2012) modified the Checklist to create what they called the Usability and Trust Survey. This new scale revised the Checklist items to refer to impressions of a specific robot and added several usability questions to produce a more robust scale. An alternative approach was to develop and validate a new scale to meet the specific needs of human-autonomy teams. Schaefer developed the Trust Perception Scale-HRI (TPS-HRI; Schaefer, 2013, 2016), which is specific to intelligent, embodied autonomy (e.g., robots) and has been validated across multiple contexts. Echoing the comparative analysis of the Checklist and the TPS-HRI conducted as part of the TPS-HRI validation process (see Schaefer, 2013), an independent analysis by Kessler, Larios, Yerdon, Walker, and Hancock (2017) found that, compared with the Checklist, the TPS-HRI was more applicable and provided a richer understanding of the trust relationship when collaborating with a single human-intelligent robot dyad. Beyond these examples, other researchers have focused on a specific type of trust, such as cognitive or affect-based trust, and used a metric chosen on definitional grounds even when the scale was actually developed for interpersonal trust.

While research has pushed for the development of more appropriate team trust scales for human-autonomy teaming, it is also important to note that self-report measures are biased by perception and limited in scope (see also Gutzwiller et al., 2019). The scientific community has therefore pushed to identify more objective metrics of trust, including both behavioral and biological ("psychophysiological"; see below) indicators. Both research areas are still developing but have produced promising early findings for advancing human-autonomy team trust metrics.

URL: https://www.sciencedirect.com/science/article/pii/B9780128194720000125

Amy E. Pinkham, Johanna C. Badcock, in A Clinical Introduction to Psychosis, 2020

Alternative Methods of Assessing Cognitive Bias

Self-report measures of cognitive bias in psychosis are widely used in research, but scales suitable for clinical practice have only recently been developed and are, therefore, still being evaluated. Some offer the advantage of assessing multiple cognitive and social cognitive biases concurrently. For example, the Cognitive Biases Questionnaire for Psychosis (CBQp: Peters et al., 2014) assesses jumping to conclusions, intentionalising, catastrophising, dichotomous thinking, and emotional reasoning biases. Initial evidence suggests it has good reliability and concurrent validity. One weakness is that individual biases are highly correlated; so, the CBQp may index a general thinking bias rather than distinct cognitive biases. In addition, CBQp scales are poorly correlated with experimental measures of each construct, raising doubts about the validity of subjective methods to assess cognitive bias (Bastiaens, Claes, Smits, Vanwalleghem, & De Hert, 2018).

Subjective and objective methods of assessing cognitive bias are often poorly correlated.

The Davos Assessment of Cognitive Biases Scale (DACOBS; van der Gaag et al., 2013) was designed to measure four cognitive biases (jumping to conclusions, belief inflexibility, selective attention to threat, and external attribution) believed to be specific to the positive symptoms of psychosis. Recent evidence suggests that these biases are in fact equally present in patients with psychotic and nonpsychotic disorders (Bastiaens et al., 2018). Such findings could indicate that the suite of biases covered in the DACOBS is necessary but not sufficient for the development of psychosis.

Dysfunctional beliefs and expectancies are also considered important in the development and maintenance of negative symptoms of psychosis (see Beck & Rector, 2005). A systematic review of the evidence suggests a significant, though small, association between ‘defeatist performance beliefs’ (e.g. ‘If you cannot do something well, there is little point in doing it at all’) and worse negative symptoms and functional outcomes in people with schizophrenia (Campellone, Sanchez, & Kring, 2016). Defined as overgeneralised negative thoughts about one's ability to successfully perform goal-directed behaviour, such beliefs can contribute to reduced interest and motivation to persist with social and employment opportunities (Reddy et al., 2017). Interventions targeting negative beliefs and expectancies could therefore help to improve everyday functioning and recovery in people with negative symptoms.

Defeatist performance beliefs are defined as overgeneralised negative thoughts about one's ability to successfully perform goal-directed behaviour.

One of the most widely used measures of cognitive bias in psychosis is the Beck Cognitive Insight Scale (BCIS; Beck & Rector, 2005). It provides measures of biases in self-certainty and self-reflectiveness, which are combined in a composite index of ‘cognitive insight’ (see critique in Van Camp, Sabbe, & Oldenburg, 2017). Importantly, changes in cognitive insight appear to precede changes in neuropsychological performance, which suggests that improving cognitive insight could bolster cognitive functioning (Bredemeier, Beck, & Grant, 2018). However, higher cognitive insight may also be linked to higher levels of depression and lower levels of self-reported quality of life (Lysaker, Pattison, Leonhardt, Phelps, & Vohs, 2018). Case conceptualisation with your clients will therefore benefit from careful consideration of the complexity of these effects.

Cognitive insight refers to the awareness of, and ability to re-evaluate and correct, distorted thoughts and beliefs.

Self-certainty refers to the degree of confidence or certainty in one's (mis)interpretations.

Self-reflectiveness refers to the capacity for self-awareness and willingness to re-evaluate one's thoughts and beliefs.

URL: https://www.sciencedirect.com/science/article/pii/B9780128150122000080

Different ways of measuring emotions cross-culturally

Yulia E. Chentsova Dutton, Samuel H. Lyons, in Emotion Measurement (Second Edition), 2021

29.3.1.1 Self-report surveys

Self-report surveys of emotions are cheap and easy to administer to large groups of people across many different cultural contexts. These questionnaires might ask participants to rate the intensity and/or frequency of their past, present, future, hypothetical, ideal, or average emotional experiences, or their use of emotion regulation strategies. They can be administered at a single time point or longitudinally, whether online, through the mail, or in person. A researcher can assess subjective experiences (e.g., "What emotions do you feel/value?") or intersubjective beliefs (e.g., "What emotions do others feel/value?"). One area of cross-cultural emotion research that has made excellent use of survey methods is the study of subjective well-being. Diener, Diener, and Diener (1995) examined predictors of subjective well-being across 55 nations using data from large-scale survey studies. They observed that country-level economic and social variables, such as GDP per capita and emphasis on civil rights, were positively associated with subjective well-being. Additionally, culture-driven differences emerged in the frequency of reported positive affect. For example, East Asians reported experiencing happiness less often than European Americans. Other survey studies have also observed this pattern (Eid & Diener, 2001; Kitayama, Markus, & Kurokawa, 2000; Schkade & Kahneman, 1998; Veenhoven & Ehrhardt, 1995), in line with cultural differences in the emphasis on the experience and expression of happiness (Bellah, Sullivan, Tipton, Swidler, & Madsen, 1985; Russell & Yik, 1996). Taking this research further, studies have shown that positive emotions such as happiness mean different things to people from different cultural contexts. In the 20th century, the meaning of happiness in the US shifted from one emphasizing luck to one emphasizing internal feelings (Oishi, Graham, Kesebir, & Galinha, 2013), whereas in a number of other cultural contexts it continued to center on luck. The cultural meanings of happiness also reflect individualistic versus collectivistic goals. Comparisons of Americans and Japanese (Kitayama et al., 2000; Uchida & Kitayama, 2009) reveal that whereas feeling good and happy is associated with personal achievement and with corresponding interpersonally disengaged positive feelings (e.g., pride) for Americans, it is associated with social harmony and with corresponding interpersonally engaged positive feelings (e.g., feelings of respect) for the Japanese.

Another pattern that emerges from the work using self-report questionnaires is that East Asian cultural contexts foster experiences of mixed emotions, or occasions when a person feels several different, and potentially contradictory, emotions at once (Leu et al., 2010; Shiota, Campos, Gonzaga, Keltner, & Peng, 2010). For example, positive and negative emotions are negatively correlated in the United States, but positively correlated in Japan (Kitayama et al., 2000). East Asian cultural emphasis on interdependence, or the tendency to conceptualize one's self in terms of social relationships, is thought to contribute to this pattern (Grossmann, Huynh, & Ellsworth, 2016). Attending to the needs and desires of many social actors may help people recognize many possible appraisals of emotional situations, leading to more complex emotional experiences. In East Asian cultural contexts, people tend to anticipate that positive emotions like happiness and pride may engender negative consequences, such as interpersonal tension or avoidance of harsh reality (Uchida & Kitayama, 2009), or vice versa. One implication for commercial emotion researchers is that liking and disliking of products cannot be assumed to reflect the same dimension across cultural contexts and may need to be assessed separately. Latent class analysis can be used to discover these dimensions or subtypes of emotions within and across cultures, for example to reveal variation in norms and values regarding emotional experience (Eid & Diener, 2001) or to discover emotion subcategories like benign envy versus malicious envy (van de Ven, Zeelenberg, & Pieters, 2009; but for more conservative approaches, see Falcon, 2015, on taxometric analysis, and Hoemann et al., 2020, on Gaussian mixture modeling).
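
As a rough illustration of the mixture-model route mentioned above (this is not an analysis from the chapter; the ratings matrix, item labels, and sample size are invented), the following Python sketch fits Gaussian mixtures to self-reported emotion ratings and uses the BIC to choose how many latent classes to retain.

```python
# Illustrative sketch (not from the chapter): a Gaussian mixture model, in the spirit
# of the latent-class approaches cited above, used to look for subgroups of respondents
# in self-reported emotion ratings. The data here are random placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Pretend ratings: 200 respondents x 4 items (e.g., happiness, pride, calm, envy), 1-7 Likert
ratings = rng.integers(1, 8, size=(200, 4)).astype(float)

# Fit mixtures with 1-5 components and keep the solution with the lowest BIC
models = [GaussianMixture(n_components=k, random_state=0).fit(ratings) for k in range(1, 6)]
best = min(models, key=lambda m: m.bic(ratings))

print("Components chosen by BIC:", best.n_components)
print("Respondents per latent class:", np.bincount(best.predict(ratings)))
```

With real survey data, the recovered classes could then be inspected for interpretable patterns (for instance, respondents who endorse both positive and negative items versus those who endorse only one valence).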

These studies showcase the promise of self-report measures in cultural psychology. By using them, researchers are able to assess a variety of different aspects of people’s emotional experience and beliefs about emotions. Yet, this method is not without its limitations. One challenge that emerges when comparing reports of emotional experience across cultural contexts is that it can be difficult to interpret people’s ratings of their own emotions. We know that in some domains, people tend to rate themselves relative to their social context (a reference-group effect; see Heine, Lehman, Peng, & Greenholtz, 2002). If people in different cultural contexts actually experience emotions with different frequency or intensity, they can collectively establish different reference norms. Paradoxically, this can mean that self-report ratings can disguise actual differences in affective experience. For instance, mild anger may be rated as more intense in contexts that discourage anger or vice versa.

In addition to the reference-group effect, other types of reporting styles can present challenges. These include tendencies to habitually agree or disagree with presented items (acquiescence and disacquiescence response styles, respectively; see Tellis & Chandrasekaran, 2010, for work on these biases in consumer ratings) and a tendency to use mid-point values on a scale (midpoint responding) rather than more extreme values (extreme response style). These response styles are known to differ across cultural contexts in both emotion research (Baumgartner & Steenkamp, 2001; Chen, Lee, & Stevenson, 1995; Gilman et al., 2008; Van Herk, Poortinga, & Verhallen, 2004) and product-liking work (De Beuckelaer, Zeeman, & Van Trijp, 2015; Yeh et al., 1998). For example, East Asians are more likely to show the midpoint response style than North Americans (Chen et al., 1995; Yeh et al., 1998). Because these differences have been detected in domains that allow researchers to use clear benchmarks to examine reporting styles (e.g., comparisons with actual behavior; Van Herk et al., 2004), it is clear that they reflect different uses of response scales rather than actual differences in preferences. A researcher who anticipates that her data could be affected by reporting biases can statistically adjust for them by standardizing participants' reports (Fischer, 2004), use Likert scales with more specific labels (e.g., specific time frames for frequency) and test whether this resolves the issue, or provide participants with guidance on rating themselves relative to a specific reference group, such as a typical student. Consumer rating research suggests that methods such as paired comparisons are less affected by rating biases than scaled items (De Beuckelaer, Kampen, & Van Trijp, 2013). Another, more time-intensive strategy involves calibrating participants' ratings using standardized sets of emotional stimuli or vignettes (e.g., King & Wand, 2007) and providing a description of the emotional characteristics that correspond to each anchor of the scale. Researchers who need to ascertain that the reported levels of affect (rather than their correlations) are valid may consider turning to such methods.
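
As one concrete example of the standardization option, the sketch below shows simple within-person standardization in the spirit of the adjustments Fischer (2004) discusses; it is a minimal illustration rather than the chapter's own procedure, and the function name and toy data are invented. Each respondent's ratings are centered and scaled by that respondent's own mean and standard deviation before cross-cultural comparison.

```python
# Minimal sketch (illustration only): within-person standardization of Likert ratings
# to reduce acquiescent, extreme, and midpoint response styles before comparison.
import numpy as np

def within_person_standardize(ratings: np.ndarray) -> np.ndarray:
    """ratings: respondents x items matrix of raw Likert scores."""
    means = ratings.mean(axis=1, keepdims=True)        # each person's average use of the scale
    sds = ratings.std(axis=1, ddof=1, keepdims=True)   # each person's spread across items
    sds[sds == 0] = 1.0                                # guard against respondents with no variance
    return (ratings - means) / sds

raw = np.array([[5, 6, 7, 6],    # an "extreme/acquiescent" respondent
                [3, 4, 4, 3]],   # a "midpoint" respondent
               dtype=float)
print(within_person_standardize(raw))
```

This removes overall differences in scale use but also discards information about absolute levels of affect, which is why, as noted above, researchers who care about reported levels rather than correlations may prefer calibration with standardized stimuli or vignettes.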

Complementing and building on this work, other research has examined reports of normative and ideal emotions. These studies have demonstrated that ideas about emotions are associated with, but not redundant with, reports of experienced emotions. For example, one study has shown that the more individualistic cultural contexts of the United States and Australia appear to promote much more uniform norms regarding positive emotions than the more collectivistic cultural contexts of Taiwan and China (Eid & Diener, 2001). Participants from the former set of cultural contexts were more likely to report that all positive emotions were desirable and appropriate than participants from the latter, who were more likely to view pride, a self-focused emotion, as less desirable and appropriate than other positive emotions. Although the desirability of positive emotions was associated with their frequency, the patterns observed for the two sets of ratings (ideal vs. experienced) were not identical.

Adding to the complexity, emerging work on people’s ideas about ideal emotional states demonstrates that although across cultural contexts people report that positive emotions are ideal, those in the United States are more likely to view high arousal positive emotions, such as euphoria, as ideal and less likely to view low arousal positive emotions, such as feeling calm, as ideal than those in East Asian cultural contexts (Tsai, 2017; Tsai, Knutson, & Fung, 2006). This pattern of preferences is observed for the reports of ideal emotions and is not mirrored by experienced emotions, perhaps because people have limited mastery of regulating their emotions to their ideal states. Yet, knowing more about ideal emotions in different cultural contexts is important because these beliefs guide choices of activities and favorite products that are thought to maximize desirable emotions (e.g., Tsai, Chim, & Sims, 2015). This means that people in different cultures may look for different emotional effects from their favorite products. Cues of culturally desired emotions also influence important decisions, such as investment behavior (Park, Genevsky, Knutson & Tsai, 2019). Taken together, these data suggest that researchers interested in emotional norms need to assess them directly rather than infer them from the emotional experiences (and vice versa).

Collectively, these studies illustrate that survey methods are a powerful tool for making cross-cultural comparisons of emotions, as they allow for the quick, convenient, and cost-effective collection of data within and across different cultural contexts. This work also signals the need to differentiate between measuring emotions and beliefs about emotions. Of course, one methodological concern here is that asking people to rate their emotions over a long period of time or in general creates a risk that participants may not remember enough of their actual emotions to provide an accurate response. To compensate, they may fall back on their general knowledge about emotions rather than actual emotional experiences (Kim-Prieto, Diener, Tamir, Scollon, & Diener, 2005; Robinson & Clore, 2002a). One way to address this problem is by measuring emotions in the moment.

URL: https://www.sciencedirect.com/science/article/pii/B9780128211243000296

Theoretical approaches to emotion and its measurement

Géraldine Coppin, David Sander, in Emotion Measurement (Second Edition), 2021

Measuring feeling

Many self-reports inspired by dimensional theories have been developed (see e.g., Ekkekakis, 2013, Chapter 9 of this volume). If feeling is a categorized blend of valence and arousal, one may think that measuring these two dimensions independently before combining them may be the best approach. However, this is also not a simple task. For instance, regarding arousal measurement, Duffy (1957, p. 265) already warned many decades ago that “the terms ‘activation’ and ‘arousal’ … refer to variations in the arousal or excitation of the individual as a whole, as indicated roughly by any one of a number of physiological measures (e.g., skin resistance, muscle tension, EEG, cardiovascular measures and others). The degree of arousal appears to be best indicated by a combination of measures”.

Feeling has only been assessed using self-report measures, and, to the best of our knowledge, no physiological or brain responses have been specifically associated with specific feelings. Of course, in physiological and brain imaging experiments, participants do have feelings that are measured, but, to the best of our knowledge, no specific signature of feeling, as a specific component of emotion, has been found. Note that self-reports can be used in experiments where participants are placed in an emotion-elicitation situation, but also where participants are asked to imagine emotional events or to recall past emotional events, offering some flexibility. Note, however, that they tend to be more valid when they measure currently experienced emotions (Mauss & Robinson, 2009). Although self-reports have several important problems (e.g., not everyone accurately reports his or her current emotional states), they can also be a valuable approach (e.g., Keefer, 2014), particularly when combined with other measures.

For instance, the Geneva Emotion Wheel (Sacharin, Schlegel, & Scherer, 2012) was developed for reporting feeling, inspired by an appraisal perspective. This wheel presents discrete emotion terms (arranged in emotion families; for instance, disgust and repulsion) organized in a circular graphical structure. The valence and control dimensions underlie this grouping. Moreover, different levels of intensity are offered for each emotion family. New tools to measure self-reported discrete emotions are continuously being developed (e.g., Harmon-Jones, Bastian, & Harmon-Jones, 2016), and it is beyond the scope of this chapter to cover them all (but see also Chapter 10).

Scales measuring emotions in specific sensory modalities also exist. For instance, scales measuring emotions elicited by olfactory stimuli have been developed (Chrea et al., 2009) and extended to different cultures (Ferdenzi et al., 2011) (see Chapter 21). These studies suggest that feelings induced by odors are structured around a few dimensions, some common across cultures (e.g., disgust-irritation) and some culture-specific (e.g., spirituality in Singapore).

URL: https://www.sciencedirect.com/science/article/pii/B9780128211243000016

HIV/AIDS

Sylvie Naar, ... Salome Nicole Cockern, in Adherence and Self-Management in Pediatric Populations, 2020

Assessment of antiretroviral treatment adherence

Unlike in many chronic conditions, the relationship between adherence measures and health outcomes in HIV is quite strong (Simoni et al., 2006). Thus, viral suppression is often considered a good proxy for adherence (Kim, Gerver, Fidler, & Ward, 2014). Clinical adherence measures include pill counts, pharmacy refill records, and various self-report methods. Although pill counts are a more robust predictor of viral load than self-report in pediatric HIV (Farley et al., 2008), they are difficult to obtain in clinical settings. Unannounced pill counts conducted by phone provide a rigorous and practical measure of adherence in adults (Kalichman et al., 2007) and have shown some success in youth who were perinatally infected with HIV (Raymond et al., 2017). However, this method may be impractical for behaviorally infected youth, who may have more psychosocial, housing, and financial instability (Pennar et al., 2019).

Self-report measures

Common self-report measures include recall procedures and visual analogue scales. The Visual Analogue Scale (Giordano, Guzman, Clark, Charlebois, & Bangsberg, 2004; Kalichman et al., 2009) for medication adherence is a single-item visual analogue rating scale that asks participants to estimate, along a continuum, the percentage of medication doses taken in a given time period. The scale is anchored with 0% indicating that no medication was taken, 50% indicating that half was taken, and 100% indicating that all medication was taken. The Visual Analogue Scale has moderate to strong correlations with other adherence measures and with viral load in adults and youth (Finitsis, Pellowski, Huedo-Medina, Fox, & Kalichman, 2016; Naar-King, Montepiedra, et al., 2013; Naar-King, Parsons, et al., 2009) and appears to be more strongly associated with viral load in youth than 7-day estimates of missed doses (MacDonell, Naar-King, Huszti, & Belzer, 2013). Daily phone diary methods have been used to assess adherence in younger children with HIV (Marhefka, Tepper, Farley, Sleasman, & Mellins, 2006). A recent study assessed the feasibility of technology-based diaries targeting HIV risk behaviors in youth with HIV (aged 16 to 24 years) and found that Internet-based diaries were preferred and showed higher retention than phone diaries (Cherenack, Wilson, Kreuzman, & Price, 2016).

Assays

Hair specimen assays may have utility for assessing medication adherence (e.g., indinavir, lopinavir/ritonavir) in adults on antiretroviral treatment in the United States and Africa (Ameli et al., 2009; Gandhi et al., 2009; Tabb et al., 2017). In adult studies, hair specimens were stronger predictors of therapy success than self-report measures.

Blood plasma for viral load testing as a proxy for adherence measurement requires complex specimen collection, storage, and transport. In resource-limited settings, dried blood spots, collected using capillary blood from a finger prick, are used instead and have shown adequate feasibility and reliability (Johannessen, Trøseid, & Calmy, 2009). A limitation is that the lower limits of virus detectability are higher than for plasma and may be inadequate for detecting subtle changes in adherence. Given that home-based and self-collected sampling with adults is feasible (van Loo, Dukers-Muijrers, Heuts, van der Sande, & Hoebe, 2017), future research may demonstrate the feasibility of this method for adherence and viral load monitoring in youth living with HIV.

URL: https://www.sciencedirect.com/science/article/pii/B9780128160008000128

Which of the following is a self-report measure?

Self-report measures are measures in which respondents are asked to report directly on their own behaviors, beliefs, attitudes, or intentions. For example, many common measures of attitudes, such as Thurstone scales, Likert scales, and semantic differentials, are self-report measures.

Which of the following is the most accurate with regard to self-report happiness measures?

Which of the following is the most accurate with regard to self-report happiness measures? Self-report can be used to measure happiness, but because these assessments are flawed, it is helpful to use them along with other types of measures.

What is a type of self-report test?

An empirically keyed test is a type of self-report test created by first identifying two groups that are known to be different. The most widely used and researched empirically keyed self-report personality test is used to assess personality and to predict outcomes; the scale features 567 items and provides information on a variety of personality characteristics.

What is a significant limitation of self-report measures of personality?

What is a significant limitation of self-report measures of personality? If a person is unaware of the psychological processes that underlie her or his motivations, behaviors, or feelings, those motivations, behaviors, and feelings can't accurately be reported.