Does a 7-day restriction on the use of social media improve cognitive functioning and emotional well-being? Results from a randomized controlled trial. Marloes M. C. van Wezel, Elger L. Abrahamse, Mariek M. P. Vanden Abeele. Addictive Behaviors Reports, June 15, 2021, 100365. https://doi.org/10.1016/j.abrep.2021.100365
Highlights
• We compared a 10% vs. 50% reduction in social media screen time in an RCT.
• The intervention had no effect on multiple indicators of attention and well-being.
• Self-control, impulsivity and FoMO did not moderate the relationships.
• Participants reported improved attention, but behavioral attention did not improve.
• Overall, a more severe screen time reduction intervention does not appear more beneficial.
Abstract
Introduction: Screen time apps that allow smartphone users to manage their screen time are assumed to combat negative effects of smartphone use. This study explores whether a social media restriction, implemented via screen time apps, has a positive effect on emotional well-being and sustained attention performance.
Methods: A randomized controlled trial (N = 76) was performed, exploring whether a week-long 50% reduction in time spent on mobile Facebook, Instagram, Snapchat and YouTube is beneficial to attentional performance and well-being as compared to a 10% reduction.
Results: Unexpectedly, several participants in the control group proactively reduced their screen time well beyond the intended 10%, undermining our intended screen time manipulation. Hence, we analyzed both the effect of the original manipulation (i.e., treatment-as-intended) and the effect of participants’ relative reduction in screen time irrespective of their condition (i.e., treatment-as-is). Neither analysis revealed an effect on the outcome measures. We also found no support for a moderating role of self-control, impulsivity or Fear of Missing Out. Interestingly, across all participants, behavioral performance on sustained attention tasks remained stable over time, while perceived attentional performance improved. Participants also self-reported a decrease in negative emotions, but no increase in positive emotions.
Conclusion: We discuss the implications of our findings in light of recent debates about the impact of screen time and formulate suggestions for future research based on important limitations of the current study, revolving, among other things, around appropriate control groups and the combined use of subjective and objective (i.e., behavioral) measures.
Keywords: screen time; screen time intervention; sustained attention; cognitive performance; emotional well-being; self-report bias
4. Discussion
In the past decade, we have witnessed an increase in studies focusing on the complex associations between the use of the smartphone and its (mobile) social media apps on the one hand, and attentional functioning (Rosen et al., 2013, Judd, 2014, Kushlev et al., 2016, Ward et al., 2017, Wei et al., 2012, Fitz et al., 2019, Marty-Dugas et al., 2018) as well as emotional well-being (Twenge and Campbell, 2019, Twenge et al., 2018, Twenge and Campbell, 2018, Escobar-Viera et al., 2018, Brailovskaia et al., 2020, Tromholt, 2016, Stieger and Lewetz, 2018, Aalbers et al., 2019, Frison and Eggermont, 2017) on the other hand. While research in this field is not without criticism, among other things for its over-reliance on self-report data and cross-sectional survey methodologies, the concerns over the potential harm of mobile social media use have nonetheless given impetus to the development of screen time apps that help people protect themselves by restricting their social media use. The current study explored the effects of such a social media screen time restriction on sustained attention and emotional well-being.
The findings show, first of all, that the intervention did not have the intended effect. Specifically, we implemented a 50% restriction in social media screen time for an experimental group and compared this to a control group with a 10% restriction. Yet, this screen time manipulation failed, mostly because participants in the control group reduced their social media app use by 38% on average, far more than the intended 10%. We deliberately opted not to include a 0% reduction control group in our design in order to avoid Hawthorne(-like) effects (cf. Taylor, 2004, McCambridge et al., 2014), that is, to give the control group participants, too, a full sense of being involved in an experiment. The finding that a non-zero reduction for a control group may trigger side effects that are potentially more problematic than the Hawthorne(-like) effects we aimed to prevent is interesting in itself. It provides clear suggestions for the optimal implementation of control groups in intervention studies of this type and deserves follow-up as a target of investigation in its own right. Indeed, some participants indicated that they felt uncomfortable when encountering a time limit. It is conceivable that participants reduced their screen time more than required in order to avoid that situation. Alternatively, the failed manipulation may be due to a placebo effect (cf. Stewart-Williams & Podd, 2004). In this case, the mere expectation of receiving a social media reduction may have sufficed to promote behavior change in the form of reduced social media use. Similar placebo effects have been found in marketing research (Irmak, Block, & Fitzsimons, 2005).
To deal with the failed screen time manipulation, we provided analyses both for treatment-as-intended and treatment-as-is, with the latter set of analyses disregarding the intervention conditions and instead exploring linear associations between the degree of relative screen time reduction and the outcome measures, based on the data we obtained. Interestingly, neither analysis revealed a noticeable effect on the outcome measures. This finding suggests an alternative explanation for the lack of findings, namely that there may not be any negative association between social media screen time and the outcome measures to begin with. Indeed, the pre-test data, which are unaffected by the failed screen time manipulation, did not show any of the hypothesized correlations between social media screen time, emotional well-being and attentional performance. On the contrary, the only relationships found between social media screen time and the outcome measures ran counter to what one might expect: heavier social media users reported experiencing fewer attentional lapses and fewer negative emotions. The lack of any negative association between social media screen time and the outcome measures may explain why reducing this screen time has no causal impact: if social media screen time does not affect these outcomes much, altering it is unlikely to cause much change in them.
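To make the treatment-as-is logic concrete, the sketch below illustrates one way such an analysis could be set up. It is a minimal sketch on simulated data: the variable names and values are hypothetical and do not reproduce the authors’ actual analysis. Each participant’s relative screen time reduction is computed from baseline and intervention-week minutes, and a post-minus-pre change score on an outcome is regressed on it, ignoring condition.

```python
# Hypothetical sketch of a "treatment-as-is" analysis: regress the change in an
# outcome on each participant's relative screen time reduction, ignoring condition.
# All data below are simulated for illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 76                                                       # sample size reported in the study
baseline_min = rng.gamma(4, 30, n)                           # simulated baseline social media minutes/day
intervention_min = baseline_min * rng.uniform(0.4, 0.9, n)   # simulated intervention-week use
relative_reduction = (baseline_min - intervention_min) / baseline_min

outcome_change = rng.normal(0, 1, n)                         # simulated post minus pre outcome score

X = sm.add_constant(relative_reduction)                      # intercept + predictor
model = sm.OLS(outcome_change, X).fit()
print(model.summary())                                       # the slope tests the treatment-as-is association
```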
This finding is interesting in light of recent debates in the field over the validity of screen time studies. A recurring concern voiced in these debates is that self-report measures of screen time are flawed to such an extent that their use can lead to biased interpretations (Kaye et al., 2020, Sewall et al., 2020). A key strength of the current study is that we used a behavioral measure of screen time. The fact that this measure shows no relationship to either cognitive performance or emotional well-being calls into question the ‘moral panic’ over social media screen time (Orben, 2020).
An alternative explanation that should be mentioned here is that, despite the randomization of participants, the control and experimental groups were not fully equivalent in terms of their smartphone behavior in the week prior to the experiment. The control group appeared to consist of heavier Instagram users, whereas the experimental group consisted of heavier WhatsApp users. It is conceivable that this non-equivalence influenced our findings. After all, for the light Instagram users in the experimental group, a 50% reduction in Instagram use may not have been very impactful, whereas for the heavy Instagram users in the control group, the actually enforced relative reduction of 35% may have had a more profound impact, thus leveling out any difference between the two groups. Future researchers thus need to carefully consider their experimental procedures to maximize the chances of equivalence between conditions.
While we believe that a strength of our current study is the use of actual smartphone data and performance-based measures of attention, the paucity of such measures in previous work prevented us from conducting an appropriate a priori power analysis, resulting in a sample size that may have been too small, as indeed indicated by, for example, the accidental but significant differences between conditions in terms of their baseline app use (see above). We hope that our study can serve that purpose for future research.
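As a rough illustration of the sample size issue, the sketch below shows an a priori power calculation for a simple two-group comparison; the effect sizes are assumed values chosen for illustration, not estimates taken from the study.

```python
# Rough illustration of an a priori power analysis for a two-group comparison,
# using assumed (not study-derived) effect sizes.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.2, 0.5, 0.8):                      # small, medium, large Cohen's d
    n_per_group = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80)
    print(f"d = {d}: ~{n_per_group:.0f} participants per group")
# With roughly 38 participants per group (N = 76), only medium-to-large effects
# (d of about 0.65 or larger) are detectable at 80% power in such a comparison.
```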
While the manipulation did not produce an effect, the findings of our study did show that, regardless of the condition they were in, people reported experiencing fewer cognitive errors and attentional lapses at the post-test. This is interesting, given that their actual attentional performance did not improve. Again, these findings are interesting in light of the recent debates over the use of self-report measures in research on the associations between screen time and psychological functioning. Recent studies show that the use of self-report measures leads to an artificial inflation of effect sizes of these associations (Sewall et al., 2020, Shaw et al., 2020), that self-reports of smartphone use in particular are inaccurate (Boase and Ling, 2013, Ellis et al., 2019, Vanden Abeele et al., 2013), and that the discrepancies between self-reported and behavioral measures of smartphone use are themselves correlated with psychosocial functioning (Sewall et al., 2020). The mixed findings in research on the effects of screen time have led to a call for greater conceptual and methodological thoroughness (e.g., Whitlock and Masur, 2019, Kaye et al., 2020, Sewall et al., 2020, Shaw et al., 2020), with a specific call to prioritize behavioral measures over self-report measures. The discrepancy between the behavioral and self-report attention measures may be an artifact of this shortcoming of self-report methodology.
The null results for FoMO, self-control and impulsivity as moderators should be elaborated on here. It was expected that a screen time intervention would negatively impact the emotional well-being of individuals, especially those high in FoMO, since reduced social media screen time also reduces the opportunity to stay up to date. However, our results could not corroborate this notion. Several authors have suggested that rather than being a predictor of social media use, FoMO may be a consequence of such online behavior (e.g., Alutaybi, Al-thani, McAlaney & Ali, 2020; Buglass, Binder, Betts, & Underwood, 2017; Hunt et al., 2018). In the three-week intervention study of Hunt et al. (2018), for example, reduced social media use actually reduced feelings of FoMO. With our data, we could test this possibility. Hence, we ran a repeated measures ANOVA with FoMO as within-subjects factor and condition as between-subjects factor. This analysis revealed that the intervention had no significant effect on experienced FoMO (i.e., the experimental group did not experience larger changes in FoMO than the control group: F(1, 74) = 0.09, p = .762). However, there was an effect of time on FoMO: at the post-test, FoMO was significantly lower than at the pre-test (Mdiff = 0.18, F(1, 74) = 6.65, p = .012). Perhaps this is indicative of an “intervention effect”, since our manipulation had failed and all participants significantly reduced their social media use during the intervention week.
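For readers who want to see how this type of analysis is set up, the sketch below runs a comparable mixed ANOVA (time as within-subjects factor, condition as between-subjects factor) on simulated data. The data frame and FoMO scores are hypothetical, and the pingouin package is assumed to be available; this is not the authors’ analysis code.

```python
# Hypothetical sketch of a mixed ANOVA with a pre/post within-subjects factor (time)
# and a between-subjects factor (condition), run on simulated FoMO scores.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
n = 76
df = pd.DataFrame({
    "id": np.repeat(np.arange(n), 2),
    "time": np.tile(["pre", "post"], n),
    "condition": np.repeat(rng.choice(["control", "experimental"], n), 2),
    "fomo": rng.normal(3.0, 0.6, 2 * n),       # simulated 5-point FoMO scale scores
})

aov = pg.mixed_anova(data=df, dv="fomo", within="time",
                     subject="id", between="condition")
print(aov)  # 'time' row: main effect of time; 'Interaction' row: time x condition
```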
Also, an overall finding of this study, which aligns with prior research, was that participants were not able to estimate their screen time accurately: while participants’ actual screen time decreased during the intervention week, their self-reported screen time did not differ over time. Interestingly, participants did report a decrease in habitual use and problematic use. This may suggest that people have a vague sense of their behavior (“I reduced my smartphone use”) but are unable to convert this adequately into numbers such as screen time in minutes. Alternatively, participants may have provided a socially desirable answer. In either case, our findings align with both recent and older studies showing that subjective screen time measures deviate from objective measures (e.g., Andrews et al., 2015, Boase and Ling, 2013, Vanden Abeele et al., 2013, Verbeij et al., 2021).
4.1. Limitations and Future Directions
This study is among the first to examine the effectiveness of a social media screen time reduction on sustained attention and emotional well-being. One of its strengths is the inclusion of behavioral measures, both for screen time and for sustained attention. The study is not without limitations, however. A number of methodological choices were made that significantly limit comparability with other findings in the field. The lack of a true control group (in which no intervention was implemented) and the limited sample size are major limitations of the current study. Future research should include more participants and should consider the use of a true control group in which no intervention is implemented. Moreover, future research might look at different degrees of screen time reduction, ranging from no reduction to complete abstinence, to better address to what extent the magnitude of the restriction matters. In addition, future work ought to consider how to account for individuals’ unique smartphone app repertoires. For instance, some individuals in our study were super users of mobile games rather than of social media. While this may lower generalizability, researchers might account for unique app repertoires by setting time restrictions on an individual’s top 5 apps, or on total screen time. Also, a one-week intervention is short. It is likely that a longer intervention is needed to produce an effect on the outcomes examined. Overall, a general observation that we make is that future research on screen time interventions needs to carefully question and compare (1) which types of interventions affect (2) which outcomes, (3) for whom, (4) under which conditions, and (5) because of which theoretical mechanisms.
An additional limitation is that, although they were kept blind to which condition they were in, participants were informed about what the experiment was about, because willingness to set a restriction on one’s screen time was an important eligibility criterion; installing such a timer without the participants’ informed consent was deemed unethical. Given that the timers were installed on participants’ personal phones, it was easy for participants to look up what restriction was enforced on them. Future research might explore whether participants can be kept blind. Perhaps this can be attained via the development of a screen time app tailored to this purpose. Notably, even though we found no increase in the use of social media on alternative devices, it should be acknowledged that social media can be accessed from devices other than smartphones alone, something that could be accounted for in future work. In this context, it is relevant to mention Meier and Reinecke’s (2020) taxonomy of computer-mediated communication. Meier and Reinecke advise researchers to carefully consider which level of analysis they are focusing on, most notably that of the device (i.e., a ‘channel-based’ approach) versus that of the functionality or interaction one has through the device. Decisions regarding the level of analysis are typically grounded in theoretical assumptions about the mechanisms explaining effects. We consider this observation relevant to researchers studying ‘digital detoxes’ or screen time interventions, as they similarly have to consider what exactly they want participants to ‘detox’ from: the device, a particular app or functionality, or a type of interaction. Careful consideration of this issue is important, as it may be key to understanding why the extant research shows mixed evidence. In the current study we attempted to address the type of interaction people have with social media, targeting especially ‘passive social media use’ by enforcing only a partial restriction, but we focused only on mobile social media. Future researchers may wish to consider their level of analysis more explicitly and how to operationalize it in an intervention.
Finally, as other research also shows (e.g., Ohme, Araujo, de Vreese, & Piotrowski, 2020), research designs that include behavioral measures of smartphone use are both ethically and methodologically challenging. In the current study, we only invited participants to the lab whose smartphones ran recent versions of iOS or Android. However, some participants showed up unaware of the operating system of their phone. Others used older versions, on which the screen time monitoring features did not function, or had forgotten to activate the screen time monitoring feature prior to the baseline measurement (which we had also specified as an eligibility criterion). This led to the exclusion of several participants. Additionally, in a pilot study of the experiment, we noticed that different phone brands and types use different interfaces to display screen time information. This led to confusion, for instance, over whether the displayed numbers were weekly or daily totals. Hence, to avoid errors, we chose not to let participants record their own screen time but instead explicitly asked participants to hand over their phone to a trained researcher, who copied the information into a spreadsheet and installed the timers. Participants who felt uncomfortable with this procedure were invited to closely monitor the researcher or, if desired, to navigate the interface themselves. Although only a handful of students chose this option, this shows that there are ethical implications to using data donation procedures that researchers have to consider.
To circumvent these issues in future studies, participants could be instructed to install the same app. However, this would increase the demands placed on participants. Participation in studies of this nature is already highly demanding and intensive, since participants have to undergo a multi-day intervention targeting behavior that is intrinsic to their daily lives and have to share personal information. Additionally, asking participants to install a specific app that potentially monitors their phone use remotely can raise ethical concerns, especially when using a commercial app that profits from monitoring (and selling) user data.
Overall, it became clear that it is difficult to achieve the required sample size to investigate complex designs of this nature. Nonetheless, the contrasting findings in extant research call for more research on causal relations between social media use on the one hand, and emotional well-being and cognitive functioning on the other. This can only be achieved through slow science and substantial resources.
Check also Reasons for Facebook Usage: Data From 46 Countries. Marta Kowal et al. Front. Psychol., April 30, 2020. https://doi.org/10.3389/fpsyg.2020.00711
Are there sex differences in Facebook usage? According to Clement (2019), 54% of Facebook users report being women. Research conducted by Lin and Lu (2011; Taiwan) showed that the key factors in men's Facebook usage are “usefulness” and “enjoyment.” Women, on the other hand, appear more susceptible to peer influence. This is consistent with the findings of Muise et al. (2009; Canada), in which longer times spent on Facebook correlated with more frequent episodes of jealousy-related behaviors and feelings of envy among women, but not men. Similarly, in Denti et al. (2012), Swedish women who spent more time on Facebook reported feeling less happy and less content with their life; this relationship was not observed among men.
In general, women tend to have larger Facebook networks (Stefanone et al., 2010; USA), and engage in more Facebook activities than men do (McAndrew and Jeong, 2012; USA; but see Smock et al., 2011; USA, who reported that women use Facebook chat less frequently than men). Another study (Makashvili et al., 2013; Georgia) provided evidence that women exceed men in Facebook usage due to their stronger desire to maintain contact with friends and share photographs, while men more frequently use Facebook to pass time and build new relationships.