Title: Negativity Drives Online News Consumption
Site: www.nature.com

Ethics information

The research complies with all relevant ethical regulations. Ethics approval (2020-N-151) for the main analysis was obtained from the Institutional Review Board (IRB) at ETH Zurich. For the user validation, ethics approval (IRB-FY2021-5555) was obtained from the IRB at New York University. Participants in the validation study were recruited from the subject pool of the Department of Psychology at New York University in exchange for 0.5 h of research credit for various psychology courses. Participants provided informed consent for the user validation studies. New York University did not require IRB approval for the main analysis, as it is not classified as human subjects research.

Large-scale field experiments

In this research, we build upon data from the Upworthy Research Archive [53]. The data have been made available through an agreement between Cornell University and Upworthy, and we have access to this dataset on the condition of following the procedure for a Registered Report. In Stage 1, we had access only to a subset of the dataset (the 'exploratory sample'), on the basis of which we conducted the preliminary analysis for pre-registering hypotheses. In Stage 2 (this paper), we had access to a separate subset of the data (the 'confirmatory sample'), on the basis of which we tested the pre-registered hypotheses.

Here, our analysis was based on data from N = 22,743 experiments (randomized controlled trials, RCTs) collected on Upworthy between 24 January 2013 and 14 April 2015. Each RCT corresponds to one news story, within which different headlines for the same story were compared. Formally, for each headline variation \(j\) in an RCT \(i\) (\(i = 1, \ldots, N\)), the following statistics were recorded: (1) the number of impressions, that is, the number of users to whom the headline variation was shown (\(\mathrm{impressions}_{ij}\)), and (2) the number of clicks the headline variation generated (\(\mathrm{clicks}_{ij}\)). The click-through rate (CTR) was then computed as

$$\mathrm{CTR}_{ij} = \frac{\mathrm{clicks}_{ij}}{\mathrm{impressions}_{ij}}.$$

The experiments were conducted separately (that is, only a single experiment was run on the entire website at any one time), so each test can be analysed as independent of all other tests [53]. Examples of news headlines from the experiments are presented in Table 2. The Upworthy Research Archive contains data aggregated at the headline level and thus does not provide individual-level data on users.

The data were subjected to the following filtering. First, all experiments consisting solely of a single headline variation were discarded. Single headline variations exist because Upworthy also conducted RCTs on features of their articles other than headlines, predominantly teaser images; in many RCTs in which teaser images were varied, headlines were not varied at all. (Image data were not made available to researchers by the Upworthy Research Archive, so we were unable to incorporate image RCTs into our analyses, although we validated our findings as part of the robustness checks.) Second, some experiments contained multiple treatment arms with identical headlines; these were merged into one representative treatment arm by summing their clicks and impressions.
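For illustration only (this is not the original analysis code), a minimal sketch of this filtering and of the CTR computation could look as follows; the data-frame and column names (upworthy, test_id, headline, impressions, clicks) are assumptions for this sketch and may differ from those in the released archive.

```r
library(dplyr)

# 'upworthy' is assumed to hold one row per headline variation per RCT, with
# illustrative column names: test_id, headline, impressions, clicks.
filtered <- upworthy %>%
  # Merge treatment arms with identical headlines within the same test by
  # summing their clicks and impressions
  group_by(test_id, headline) %>%
  summarise(impressions = sum(impressions),
            clicks      = sum(clicks),
            .groups     = "drop") %>%
  # Discard experiments that are left with only a single headline variation
  group_by(test_id) %>%
  filter(n() > 1) %>%
  ungroup() %>%
  # Click-through rate per headline variation
  mutate(ctr = clicks / impressions)
```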
These duplicate arms occurred when both images and headlines were varied in RCTs for the same story. This is relatively rare in the dataset; for robustness checks regarding image RCTs, see Supplementary Table 9.

The analysis in the current Registered Report Stage 2 is based on the confirmatory sample of the dataset [53], which was made available to us only after the pre-registration was conditionally accepted. In the previous pre-registration stage, we presented the results of a preliminary analysis based on a smaller, exploratory sample (see Registered Report Stage 1). Both samples were processed using identical methodology. The pilot sample for our preliminary analysis comprised 4,873 experiments, involving 22,666 different headlines before filtering and 11,109 headlines after filtering, which corresponds to 4.27 headlines per experiment on average. On average, there were approximately 16,670 participants in each RCT. Additional summary statistics are given in Supplementary Table 1.

Design

We present a design table summarizing our methods in Table 1.

Sampling plan

Given our opportunity to secure an extremely large sample in which N was predetermined, we chose to run a simulation before pre-registration to estimate the level of power we would achieve for observing an effect size represented by a regression coefficient of 0.01 (that is, a 1% effect on the odds of clicking per standard-deviation increase in negative words). This effect size is slightly more conservative than the effect-size estimates from the pilot studies (see Stage 1 of the Registered Report) and is derived from theory [76]. The confirmatory Upworthy data archive comprises N = 22,743 RCTs, with between 3 and 12 headlines per RCT, which corresponds to a total sample of between 68,229 and 227,430 headlines. Because we were not aware of the exact size during pilot testing, we generated datasets through a bootstrapping procedure that sampled N = 22,743 RCTs with replacement from our pilot sample of tests. We simulated 1,000 such datasets and, for each dataset, generated 'clicks' using the parameters estimated from the pilot data. Finally, each dataset was analysed using the model described below. This procedure was repeated for both models (varying intercepts, and a combination of varying intercepts and varying slopes). We found that, under the assumptions about effect size, covariance matrix and data-generating process taken from our pilot sample, we would have greater than 99% power to detect an effect size of 0.01 in the final sample for both models.

Analysis plan

Text mining framework

Text mining was used to extract emotional words from news headlines. To prepare the data for the text mining procedure, we applied standard preprocessing to the headlines: the running text was converted to lower case and tokenized, and special characters (that is, punctuation and hashtags) were removed. We then applied a dictionary-based approach analogous to those of earlier research [22,39,40,41]. Sentiment analysis was performed on the basis of the Linguistic Inquiry and Word Count (LIWC) [77]. The LIWC contains word lists classifying words as expressing positive sentiment (n = 620 words, for example 'love' and 'pretty') or negative sentiment (n = 744 words, for example 'wrong' and 'bad'). A list of the most frequent positive and negative words in our dataset is given in Supplementary Table 2. Sentiment analysis was based on single words (that is, unigrams) owing to the short length of the headlines (mean length: 14.965 words).
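As a rough, self-contained illustration of this preprocessing and unigram dictionary matching, the sketch below uses tiny stand-in word lists in place of the proprietary LIWC dictionaries and base R in place of the quanteda/sentimentr pipeline noted further below; the normalization and negation handling described next are added in a later sketch.

```r
# Tiny stand-in word lists for illustration only (the study used the
# proprietary LIWC dictionaries with 620 positive and 744 negative words).
positive_words <- c("love", "pretty", "happy", "great")
negative_words <- c("wrong", "bad", "sad", "terrible")

# Standard preprocessing: lower-case, strip punctuation and hashtags, tokenize
tokenize <- function(x) {
  x <- tolower(x)
  x <- gsub("[^a-z' ]", " ", x)        # remove punctuation, hashtags, digits
  strsplit(trimws(x), "\\s+")[[1]]
}

headlines <- c("You Won't Believe This Terrible Mistake",
               "A Pretty Great Story About Love")
tokens <- lapply(headlines, tokenize)

# Unigram dictionary matching: count words from each list per headline
n_positive <- sapply(tokens, function(w) sum(w %in% positive_words))
n_negative <- sapply(tokens, function(w) sum(w %in% negative_words))
n_total    <- sapply(tokens, length)

data.frame(headline = headlines, n_positive, n_negative, n_total)
```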
We counted the number of positive words (\(n_{\mathrm{positive}}\)) and the number of negative words (\(n_{\mathrm{negative}}\)) in each headline. A word was considered 'positive' if it appeared in the dictionary of positive words (and analogously for 'negative' words). We then normalized the frequencies by the length of the headline, that is, the total number of words in the headline (\(n_{\mathrm{total}}\)). This yielded the two separate scores

$$\mathrm{Positive}_{ij} = \frac{n_{\mathrm{positive}}}{n_{\mathrm{total}}} \quad \mathrm{and} \quad \mathrm{Negative}_{ij} = \frac{n_{\mathrm{negative}}}{n_{\mathrm{total}}}$$

for headline j in experiment i. As such, the corresponding scores for each headline represent percentages. For example, if a headline has 10 words, of which one is classified as 'positive' and none as 'negative', the scores are \(\mathrm{Positive}_{ij} = 10\%\) and \(\mathrm{Negative}_{ij} = 0\%\). If a headline has 10 words and contains one 'positive' and one 'negative' word, the scores are \(\mathrm{Positive}_{ij} = 10\%\) and \(\mathrm{Negative}_{ij} = 10\%\). A headline may contain both positive and negative words, so both variables were later included in the model.

Negation words (for example, 'not' and 'no') can invert the meaning of statements and thus the corresponding sentiment. We performed negation handling as follows. First, the text was scanned for negation terms using a predefined list; then, all positive (or negative) words in the neighbourhood were counted as belonging to the opposite word list, that is, they were counted as negative (or positive) words. In our analysis, the neighbourhood (the so-called negation scope) was set to 3 words after the negation. As a result, a phrase such as 'not happy' was coded as negative rather than positive. Here we used the implementation from the sentimentr package (details at https://cran.r-project.org/web/packages/sentimentr/readme/README.html).

Using the above dictionary approach, our objective was to quantify the presence of positive and negative words. We did not attempt to infer the internal state of a perceiver on the basis of the language they write, consume or share [73]. Specifically, readers' preference for headlines containing negative words does not imply that users 'felt' more negative while reading those headlines. Rather, we quantified how the presence of certain words is linked to concrete behaviour. Accordingly, our pre-registered hypotheses test whether negative words increase consumption rates (Table 1).

We validated the dictionary approach in the context of our corpus on the basis of a pilot study [78]. Here we used the positive and negative word lists from LIWC [77] and performed negation handling as described above. Perceived judgments of positivity and negativity in headlines correlate with the number of negative and/or positive words each headline contains. Specifically, we correlated the mean of the eight human judges' scores for a headline with the NRC sentiment rating for that headline and found a moderate but significant positive correlation (\(r_s = 0.303\), \(P < 0.001\)). These findings validate that our dictionary approach captures significant variation in how perceivers judge the emotions expressed in headlines. More details are available in Supplementary Tables 21 and 22.
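To make the normalized scores and the negation-scope rule concrete, here is a simplified, hand-rolled sketch that extends the previous one; it is only a rough analogue of the sentimentr implementation actually used, and the negation list shown is illustrative.

```r
# Illustrative negation handling: any dictionary word occurring within a
# negation scope of 3 words after a negation term has its polarity flipped.
negation_terms <- c("not", "no", "never")   # simplified predefined list

score_headline <- function(words) {
  polarity <- ifelse(words %in% positive_words,  1,
              ifelse(words %in% negative_words, -1, 0))
  for (i in which(words %in% negation_terms)) {
    if (i < length(words)) {
      scope <- (i + 1):min(i + 3, length(words))   # 3 words after the negation
      polarity[scope] <- -polarity[scope]          # flip positive <-> negative
    }
  }
  n_total <- length(words)
  c(Positive = sum(polarity ==  1) / n_total,      # share of positive words
    Negative = sum(polarity == -1) / n_total)      # share of negative words
}

# 'happy' falls inside the negation scope and is therefore counted as negative
score_headline(tokenize("This story is not happy at all"))
#> Positive = 0, Negative = 1/7
```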
Two additional text statistics were computed. First, we determined the length of the news headline, given by the number of words. Second, we calculated a text complexity score using the Gunning Fog index [79]. This index estimates the years of formal education necessary for a person to understand a text upon reading it for the first time: \(0.4 \times (\mathrm{ASL} + 100 \times n_{\mathrm{wsy}\ge 3} / n_{\mathrm{w}})\), where ASL is the average sentence length (in words), \(n_{\mathrm{w}}\) is the total number of words and \(n_{\mathrm{wsy}\ge 3}\) is the number of words with three or more syllables. A higher value thus indicates greater complexity. Both the headline length and the complexity score were used as control variables in the statistical models. Results based on alternative text complexity scores are reported as part of the robustness checks. The above text mining pipeline was implemented in R v4.0.2 using the packages quanteda (v2.0.1) and sentimentr (v2.7.1).

Empirical model

We estimated the effect of emotions on online news consumption using a multilevel binomial regression. Specifically, we expected that negative language in a headline affects the probability of users clicking on a news story to access its content. To test our hypothesis, we specified a series of regression models in which the dependent variable is given by the CTR. We modelled news consumption as follows: \(i = 1, \ldots, N\) refers to the different experiments in which different headline variations for news stories are compared through an RCT; \(\mathrm{clicks}_{ij}\) denotes the number of clicks on headline variation j belonging to news story i, and \(\mathrm{impressions}_{ij}\) refers to the corresponding number of impressions. Following previous approaches [80], we modelled the number of clicks with a binomial distribution,

$$\mathrm{clicks}_{ij} \sim \mathrm{Binomial}(\mathrm{impressions}_{ij}, \theta_{ij}),$$

where \(0 \le \theta_{ij} \le 1\) is the probability of a user clicking on the headline in a single Bernoulli trial.
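A minimal sketch of such varying-intercepts and varying-slopes binomial models, using lme4 and the illustrative variable names from the earlier sketches; the paper's exact model specification, controls and estimation details may differ.

```r
library(lme4)

# Varying-intercepts model: clicks out of impressions for each headline
# variation, with a random intercept per experiment (RCT). The predictors
# (negative, positive, length, complexity) and the grouping variable test_id
# are the illustrative names used in the earlier sketches.
m_intercepts <- glmer(
  cbind(clicks, impressions - clicks) ~
    negative + positive + length + complexity + (1 | test_id),
  family = binomial(link = "logit"),
  data   = filtered
)

# Variant that additionally lets the effect of negative words vary by experiment
m_slopes <- glmer(
  cbind(clicks, impressions - clicks) ~
    negative + positive + length + complexity + (1 + negative | test_id),
  family = binomial(link = "logit"),
  data   = filtered
)

summary(m_intercepts)
```

Specifying the response as successes and failures per headline is equivalent to modelling \(\theta_{ij}\) on the logit scale, in line with the binomial model above.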