
more largely than the truth.10 Robots accelerate the spread of true and false news at the same rate, which implies that false news spreads more than the truth because humans, not robots, are more likely to spread it.10 This is particularly important, as Pennycook et al. identified that previous contact with a piece of information (familiarity) increases the feeling that this information is true. Furthermore, they demonstrated that repetition amplifies this feeling of "illusory truth".11

How can we fight against this threat (Figure 1)? A promising approach is to rely on computational methods to detect fake news and misinformation. The majority of techniques to tackle this problem are developed in the area of Artificial Intelligence (AI), mainly using Natural Language Processing (NLP) and Machine Learning (ML) methods. To automatically classify a piece of text as fake news or not, several ML and NLP solutions are of aid, including feature extraction,12 social context modeling,13,14 knowledge-based systems,15 and sentiment analysis,16 among others.

Feature extraction is particularly important to provide useful information to ML methods. Features can be gathered either directly from the text or from external sources. Examples include 1) title representativeness, 2) quotes of external sources, 3) presence of citations of other organizations and studies, 5) use of logical fallacies, 6) emotional tone of the article, 7) inference consistency, e.g., wrongly treating an association as causation or generalizing a fact into an incorrect conclusion, 8) originality, 9) credibility of citations, 10) number of ads, 11) confidence degree in the authors, and 12) number of social calls, among others.

The ML algorithms can use some of these features to approximate a classifier model able to distinguish between fake and truthful content. The classifier learning process uses a previously annotated dataset as a training set, in which the examples are the articles and the annotation indicates whether each article is fake or not. In some cases, it is necessary to pre-process the data before extracting the features, using, for example, tokenization (dividing the text into smaller parts called tokens), lowercasing, removal of common words that lack a proper meaning (stop words), sentence segmentation, etc.12

Besides relying on feature engineering and extraction, recent methods based on Deep Learning take the content of the texts into account directly, in an end-to-end fashion. For example, Fang et al. developed a model that judges the authenticity of news with a precision rate of 95.5% based only on their content, using convolutional neural networks and a self multi-head attention mechanism.17

Other promising AI approaches consist of analyzing features of the social network in which the possibly fake information circulates. This is relevant because it is increasingly common to use non-human accounts, or bots, to create fake news and spread it across a social network.15 Thus, analyzing the profiles of these social network users, for example, can provide useful information for fake news detection. Furthermore, post-based features focus on analyzing how people

Figure 1 – Proposed strategies for combating fake medical news: creation of trustable content, governmental action, correction of misinformation, computational detection of misinformation, increased collaboration between science and media, and multiple checks of information.
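As an illustration of the pre-processing steps mentioned above (tokenization, lowercasing and stop-word removal), the minimal Python sketch below shows one possible implementation; the token pattern and the small stop-word list are assumptions made for the example, not part of the cited methods.

```python
import re

# Illustrative (incomplete) stop-word list; real systems use larger,
# language-specific lists.
STOP_WORDS = {"the", "a", "an", "of", "and", "or", "to", "in", "is", "are"}

def preprocess(text: str) -> list[str]:
    """Lowercase the text, split it into word tokens and drop stop words."""
    lowered = text.lower()                    # lower-casing transformation
    tokens = re.findall(r"[a-z']+", lowered)  # naive word tokenization
    return [t for t in tokens if t not in STOP_WORDS]

print(preprocess("Miracle cure REVERSES heart disease in a week, doctors say"))
# ['miracle', 'cure', 'reverses', 'heart', 'disease', 'week', 'doctors', 'say']
```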
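The classifier-learning step described above can be sketched, assuming a labelled corpus of articles is available, with a standard bag-of-words pipeline from scikit-learn; the tiny in-line dataset and the choice of logistic regression are illustrative only and do not reproduce any of the cited studies.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy annotated training set: each example is an article (here, a headline)
# and its label indicates whether it is fake (1) or truthful (0).
articles = [
    "Garlic shot cures cancer overnight, doctors stunned",
    "Randomized trial finds statins reduce cardiovascular events",
    "Vaccine implants microchips to track patients",
    "Meta-analysis supports exercise for blood pressure control",
]
labels = [1, 0, 1, 0]

# TF-IDF features are extracted from the raw text and fed to a linear classifier.
model = make_pipeline(TfidfVectorizer(lowercase=True, stop_words="english"),
                      LogisticRegression())
model.fit(articles, labels)

# Predict the label of a previously unseen article.
print(model.predict(["New miracle tea reverses heart disease in days"]))
```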
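Fang et al.'s exact architecture is not described here; the PyTorch sketch below only illustrates, in general terms, how a convolutional layer and a self multi-head attention mechanism can be combined for end-to-end text classification. The vocabulary size, layer dimensions and pooling choice are assumptions made for the example.

```python
import torch
import torch.nn as nn

class ConvAttentionClassifier(nn.Module):
    """Toy end-to-end classifier: embeddings -> 1D convolution ->
    self multi-head attention -> pooled representation -> binary logit."""

    def __init__(self, vocab_size=10000, embed_dim=128, num_heads=4, conv_channels=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.conv = nn.Conv1d(embed_dim, conv_channels, kernel_size=3, padding=1)
        self.attention = nn.MultiheadAttention(conv_channels, num_heads, batch_first=True)
        self.classifier = nn.Linear(conv_channels, 1)

    def forward(self, token_ids):              # token_ids: (batch, seq_len)
        x = self.embedding(token_ids)           # (batch, seq_len, embed_dim)
        x = self.conv(x.transpose(1, 2)).transpose(1, 2)  # local n-gram features
        x, _ = self.attention(x, x, x)          # self-attention over the sequence
        x = x.mean(dim=1)                       # average pooling over positions
        return self.classifier(x).squeeze(-1)   # one logit per article

# Forward pass on a random batch of 2 articles, each padded/truncated to 50 tokens.
model = ConvAttentionClassifier()
logits = model(torch.randint(1, 10000, (2, 50)))
probs = torch.sigmoid(logits)  # probability of each article being fake
print(probs.shape)             # torch.Size([2])
```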

RkJQdWJsaXNoZXIy MjM4Mjg=