Juliana Chueri
Petter Törnberg
Justus Uitermark
In: Party politics, 30 (2024) 3, p. 479-492
DOI: 10.1177/13540688231158486
Abstract: The question of the democratic character of the European Union (EU) has been a central point of decades of political research. An important critique suggests that the development of the European political arena remains incomplete, with European parliamentarians primarily orienting themselves toward national issues and politicians, implying a problematic mismatch between the political arena and their policy jurisdiction. Research has, however, been limited by the methodological difficulty of capturing the level of Europeanization of the political arena. This paper contributes a novel method for measuring Europeanization by studying Twitter interactions from European parliamentarians to national parliamentarians in 15 EU countries. Contrary to expectations in the literature, we find substantial Europeanization of the political arena. The level of Europeanization furthermore varies greatly across countries and political groups. This has important implications for the debate on the EU's democratic deficit, as communication across different levels of parliament indicates democratic debate.
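To make the measurement idea concrete, here is a minimal sketch (not the authors' pipeline) of how a per-country Europeanization score could be computed from a hypothetical edge list of Twitter interactions, assuming each record carries the level (EU or national) of both parliamentarians involved.

```python
from collections import defaultdict

# Hypothetical interaction records: (source_level, source_country, target_level).
# "EU" marks members of the European Parliament, "national" marks national MPs.
interactions = [
    ("EU", "DE", "national"),
    ("EU", "DE", "EU"),
    ("national", "FR", "EU"),
    ("national", "FR", "national"),
    ("EU", "FR", "national"),
]

totals = defaultdict(int)
cross_level = defaultdict(int)

for src_level, country, tgt_level in interactions:
    totals[country] += 1
    if src_level != tgt_level:          # interaction crosses the EU/national divide
        cross_level[country] += 1

# Europeanization score: share of a country's interactions that cross levels.
for country in sorted(totals):
    print(f"{country}: {cross_level[country] / totals[country]:.2f}")
```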
Armin Pournaki
Sven Banisch
Eckehard Olbrich
In: Journal of computational social science, 6 (2023), p. 707-739
DOI: 10.1007/s42001-023-00207-w ARXIV: https://arxiv.org/abs/2110.11772
Abstract: Force-directed layout algorithms are ubiquitously used tools for network visualization across a variety of scientific disciplines. However, they lack a theoretical grounding that would allow their outcomes to be interpreted rigorously. We propose an approach building on latent space network models, which assume that the probability of nodes forming a tie depends on their distance in an unobserved latent space. From such latent space models, we derive force equations for a force-directed layout algorithm. With this approach, force-directed layouts become interpretable, since the forces infer positions that maximize the likelihood of the given network under the latent space model. We implement these forces for directed and undirected, weighted and unweighted networks. We spatialise several real-world networks, find central network properties reflected in the layouts, and compare them to force-directed algorithms already in common use.
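The core idea, forces obtained as the gradient of a latent space model's log-likelihood, can be sketched in a few lines. This is an illustrative reimplementation under simple assumptions (a logistic tie probability decaying with Euclidean distance), not the authors' released code.

```python
import numpy as np

def latent_space_layout(A, dim=2, alpha=1.0, lr=0.05, steps=500, seed=0):
    """Gradient ascent on the log-likelihood of a logistic latent distance model:
    P(edge ij) = sigmoid(alpha - ||x_i - x_j||). Connected pairs attract,
    unconnected pairs repel, exactly as in a force-directed layout."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    X = rng.normal(scale=0.1, size=(n, dim))
    for _ in range(steps):
        diff = X[:, None, :] - X[None, :, :]          # pairwise displacement vectors
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, np.inf)                 # ignore self-pairs
        p = 1.0 / (1.0 + np.exp(dist - alpha))         # tie probabilities
        # dL/dx_i = sum_j (p_ij - a_ij) * (x_i - x_j) / dist_ij
        coeff = (p - A) / dist
        grad = (coeff[:, :, None] * diff).sum(axis=1)
        X += lr * grad                                 # ascend the likelihood
    return X

# Toy example: two triangles joined by a single edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
print(latent_space_layout(A).round(2))
```

The returned coordinates can then be handed to any plotting routine; the point of the sketch is only that attraction and repulsion fall out of the likelihood gradient.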
Christian M. Dahl
Thorbjoern Knudsen
Massimo Warglien
In: Organization science, 34 (2023) 6, p. 2163-2181
Abstract: This paper addresses a notable gap at the intersection of organizational economics and organization science: how does organizational context influence the aggregation of individual behavior into organizational decisions? Using basic centralized versus decentralized organizational structures as building blocks for our experimental design, we examine whether the assignment of organizational positions, incentive schemes, and structural configuration induce endogenous adaptation in the form of changed reservation levels (bias) or modified discrimination capability in subjects' behavior. We found that evaluators adapted their reservation and discrimination levels in centralized structures, whereas they generally did not when placed in decentralized structures. We identify mechanisms that explain these findings; explain how they influence aggregate, organizational behavior; and discuss implications for research and practice.
Hawal Shamon
In: Sociological methods and research, Vol. not yet known, pp. not yet known
DOI: 10.1177/00491241231186658 ARXIV: https://arxiv.org/abs/2212.10117
Abstract: We combine empirical experimental research on biased argument processing with a computational theory of group deliberation in order to clarify the role of biased processing in debates around energy. The experiment reveals a strong tendency to consider arguments aligned with the current attitude more persuasive and to downgrade those speaking against it. This is integrated into the framework of argument communication theory, in which agents exchange arguments about a certain topic and adapt opinions accordingly. We derive a mathematical model that allows us to relate the strength of biased processing to expected attitude changes given the specific experimental conditions, and find a clear signature of moderate biased processing. We further show that this model fits the experimentally observed attitude changes significantly better than the neutral argument-processing assumption made in previous models. Our approach provides new insight into the relationship between biased processing and opinion polarization. At the individual level, our analysis reveals a sharp qualitative transition from attitude moderation to polarization. At the collective level, we find (i) that weak biased processing significantly accelerates group decision processes, whereas (ii) strong biased processing leads to a persistent conflictual state of subgroup polarization. While this shows that biased processing alone is sufficient for polarization, we also demonstrate that homophily may lead to intra-group conflict at significantly lower rates of biased processing.
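A stylized rendering of the modelling idea, in which agents exchange pro and con arguments and accept them with a probability that grows with alignment to their current attitude, might look as follows; the functional form and parameter names are illustrative assumptions, not the paper's exact specification.

```python
import math, random

def simulate(n_agents=100, n_args=8, beta=1.0, steps=20000, seed=1):
    """Argument-communication dynamics with biased processing (stylized).
    Agents hold binary beliefs in pro- and con-arguments; their attitude is the
    number of accepted pro-arguments minus accepted con-arguments. beta is the
    strength of biased processing (beta = 0 recovers neutral processing)."""
    rng = random.Random(seed)
    beliefs = [[rng.randint(0, 1) for _ in range(2 * n_args)] for _ in range(n_agents)]
    attitude = lambda b: sum(b[:n_args]) - sum(b[n_args:])
    for _ in range(steps):
        speaker, listener = rng.sample(range(n_agents), 2)
        accepted = [k for k, v in enumerate(beliefs[speaker]) if v]
        if not accepted:
            continue
        k = rng.choice(accepted)                 # speaker voices an argument they accept
        sign = 1 if k < n_args else -1           # pro-arguments push attitude up, con down
        # Biased processing: arguments aligned with the listener's attitude are more persuasive.
        align = sign * attitude(beliefs[listener])
        if rng.random() < 1.0 / (1.0 + math.exp(-beta * align)):
            beliefs[listener][k] = 1
    return [attitude(b) for b in beliefs]

attitudes = simulate()
print("mean attitude:", sum(attitudes) / len(attitudes),
      "| spread:", max(attitudes) - min(attitudes))
```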
Sven Banisch
Paul Van Eecke
Katrien Beuls
In: Digital scholarship in the humanities, 37 (2022) 4, p. 1358-1375
DOI: 10.1093/llc/fqab107
Abstract: Online debates and debate spheres challenge our assumptions about democracy, politics, journalism, trust, and truth in ways that make them a necessary object of study. In the present article, we argue that the study of online arguments can benefit from an interdisciplinary approach that combines computational methods for text analysis with conceptual models of opinion dynamics. The article thereby seeks to make a conceptual and methodological contribution to the field by highlighting the role of domain-crossing causal statements in debates of societal interest, and by providing a method for automatically mining such statements from textual corpora on the web. The article illustrates the relevance of this approach for the study of online debates by means of a case study in which we analyse cross-cutting statements on climate change and energy technologies from the comment section of the online newspaper The Guardian. In support of this case study, we use data and methods that are made openly available through the Penelope ecosystem of tools and techniques for computational social science.
Justus Uitermark
In: City and community, 21 (2022) 4, p. 340-361
DOI: 10.1177/15356841211068521
Abstract: We are today increasingly experiencing the city through interfaces of platforms like Google Maps, Instagram, TripAdvisor, Airbnb, and Yelp. As our very sense of the city is shaped by these technological interfaces, the media are acquiring a constitutive role in reshaping contemporary urbanity. To conceptualize how media represent urban change, this paper draws on media studies and particularly the concept of "mediatization." The paper studies the changing media representations of the gentrification of Rio de Janeiro's favela Vidigal over fifteen years across different media. Using computational methods and interpretative analysis, we find that global media represented Vidigal as a site for adventure and investment. However, the media representations are far from monolithic. At one moment, they mobilize cosmopolitan fascination with the "other," promoting slum tourism gentrification. At the next, they amplify critiques of gentrification and local protests against displacement. We argue that media representations are driven by their own variegated forces and cultures, which are increasingly coming to shape the dynamics of urban imaginaries.
Justus Uitermark
In: Media culture and society, 44 (2022) 3, p. 574-590
Eckehard Olbrich
Justus Uitermark
In: Frontiers in big data, 5 (2022), 840584
Justus Uitermark
John D. Boy
In: New media and society, 24 (2022) 3, p. 557-579
Abstract: Based on interviews with feminist Instagram users, this article studies emergent feminist visibilities on Instagram through the concept of filtering. Filtering entails both enhancement and subtraction: some feminist sensibilities align with Instagram's interaction order, while others become subdued and remain at the margins of visibility. Taken together, users' filtering practices contribute to the confident and happy image, individualistic streak, and accommodationist cast of popular feminism, while also amplifying feminist politics that affirm the pleasures of visibility and desire. Instagram proves a more challenging environment for feminists seeking to criticize competitive individualism and aesthetic norms. The notion of filtering enriches existing research on how online environments reconfigure feminist politics and problematizes the avowal of feminism in media culture.
Phanish Puranam
Massimo Warglien
In: Organization science, 33 (2022) 2, p. 495-871
DOI: 10.1287/orsc.2021.1449 LINK: http://hdl.handle.net/10278/3736447
Abstract: Self-selection-based division of labor has gained visibility through its role in varied organizational contexts such as nonhierarchical firms, agile teams, and project-based organizations. Yet we know relatively little about the precise conditions under which it can outperform the traditional allocation of work to workers by managers. We develop a computational agent-based model that conceives of division of labor as a matching process between workers' skills and tasks. This allows us to examine in detail when and why different approaches to division of labor may enjoy a relative advantage. We find a specific confluence of conditions under which self-selection has an advantage over traditional staffing practices arising from matching: when employees are very skilled but only at a narrow range of tasks, the task structure is decomposable, and employee availability is unforeseeable. Absent these conditions, self-selection must rely on the benefits of enhanced motivation or better matching based on workers' private information about skills to dominate more traditional allocation processes. These boundary conditions are noteworthy both for those who study and for those who wish to implement forms of organizing based on self-selection.
Sven Banisch
In: Frontiers in big data, 4 (2022), 833037
DOI: 10.3389/fdata.2021.833037 ARXIV: https://arxiv.org/abs/2106.15717
Abstract: The paper explores the notion of a reconfiguration of political space in the context of the rise of populism and its effects on the political system. We focus on Germany and the appearance of the new right-wing party "Alternative for Germany" (AfD). Many scholars of politics discuss the rise of the new populism in Western Europe and the US with respect to a new political cleavage related to globalization, which is assumed to mainly affect the cultural dimension of the political space. As such, it might replace the older economic cleavage based on class divisions in defining the dominant dimension of political conflict. An explanation along these lines suggests a reconfiguration of the political space in the sense that (1) the main cleavage within the political space changes its direction from the economic axis towards the cultural axis, but (2) the semantics of the cultural axis itself is also changing towards globalization-related topics. Using the electoral manifestos from the Manifesto Project Database, we empirically address this reconfiguration of the political space by comparing political spaces for Germany built using topic modeling with the spaces based on the content analysis of the Manifesto Project and the corresponding categories of political goals. We find that both spaces have a similar structure and that the AfD appears on a new dimension. In order to characterize this new dimension, we employ a novel technique, inter-issue consistency networks (IICNs), which allow us to analyze the evolution of the correlations between political positions on different issues over several elections. We find that the new dimension introduced by the AfD can be related to the split-off of a new "cultural right" issue bundle from the previously existing center-right bundle.
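The IICN construction, correlating positions across issues over several party-election observations and reading strong correlations as network ties, can be illustrated with a small sketch; the position matrix below is invented, standing in for manifesto-based or topic-model-based positions.

```python
import numpy as np
import networkx as nx

# Hypothetical party-by-issue position matrix (rows: party-election observations,
# columns: issues). Real positions would come from manifesto coding or topic models.
issues = ["welfare", "markets", "immigration", "europe", "environment"]
positions = np.array([
    [ 0.8, -0.6,  0.1,  0.5,  0.7],
    [-0.4,  0.9, -0.2,  0.3, -0.1],
    [ 0.1,  0.4, -0.9, -0.8, -0.3],
    [ 0.7, -0.5,  0.2,  0.6,  0.8],
    [-0.2,  0.7, -0.8, -0.6, -0.4],
])

corr = np.corrcoef(positions, rowvar=False)   # issue-by-issue correlations

# Build the inter-issue consistency network: edges for strong correlations.
G = nx.Graph()
G.add_nodes_from(issues)
threshold = 0.5
for i in range(len(issues)):
    for j in range(i + 1, len(issues)):
        if abs(corr[i, j]) >= threshold:
            G.add_edge(issues[i], issues[j], weight=round(float(corr[i, j]), 2))

print(G.edges(data=True))
```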
Carlo Santagiustina
Massimo Warglien
In: PLOS ONE, 17 (2022) 6, e0270236
DOI: 10.1371/journal.pone.0270236
Abstract: We propose a framework to analyse partisan debates that involves extracting, classifying and exploring the latent argumentation structure and dynamics of online societal controversies. In this paper, the focus is placed on causal arguments, and the proposed framework is applied to the Twitter debate on the consequences of a hard Brexit scenario. Regular expressions based on causative verbs, structural topic modelling, and dynamic time warping techniques were used to identify partisan faction arguments, as well as their relations, and to infer agenda-setting dynamics. The results highlight that the arguments employed by partisan factions are mostly constructed around constellations of effect-classes based on polarised verb groups. These constellations show that the no-deal debate hinges on structurally balanced building blocks. Brexiteers focus more on arguments related to greenfield trading opportunities and increased autonomy, whereas Remainers argue more about what a no-deal Brexit could destroy, focusing on hard border issues, social tensions in Ireland and Scotland and other economy- and healthcare-related problems. More notably, inferred debate leadership dynamics show that, despite their different usage of terms and arguments, the two factions' argumentation dynamics are strongly intertwined. Moreover, the identified periods in which agenda-setting roles change are linked to major events, such as extensions, elections and the Yellowhammer plan leak, and to new issues that emerged in relation to these events.
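A toy version of the extraction step, regular expressions keyed to causative, creation and destruction verbs that pull cause-effect pairs out of tweets, is sketched below; the verb list and pattern are illustrative and far simpler than the paper's full set of expressions.

```python
import re

# Illustrative causative verb pattern (the study uses a richer set of
# Creation/Destruction/Causation verbs and more elaborate expressions).
CAUSAL_PATTERN = re.compile(
    r"(?P<cause>[\w\s'-]+?)\s+(?:will\s+)?"
    r"(?P<verb>causes?|leads? to|results? in|creates?|destroys?)\s+"
    r"(?P<effect>[\w\s'-]+)",
    flags=re.IGNORECASE,
)

tweets = [
    "A no-deal Brexit will destroy the car industry",
    "Leaving without a deal creates new trading opportunities",
    "The backstop leads to a hard border in Ireland",
]

for tweet in tweets:
    m = CAUSAL_PATTERN.search(tweet)
    if m:
        print(f"cause: {m.group('cause').strip():30s} "
              f"verb: {m.group('verb'):12s} effect: {m.group('effect').strip()}")
```

The extracted effect clauses would then feed the topic-modelling and time-dynamics steps described in the abstract.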
Roland Mühlenbernd
In: Experimental economics, 25 (2022) 3, p. 1052-1078
DOI: 10.1007/s10683-021-09742-7
Abstract: Experimental game theory studies the behavior of agents who face a stream of one-shot games as a form of learning. Most literature focuses on a single recurring identical game. This paper embeds single-game learning in a broader perspective, where learning can take place across similar games. We posit that agents categorize games into a few classes and tend to play the same action within a class. The agent's categories are generated by combining game features (payoffs) and individual motives. An individual categorization is experience-based and may change over time. We demonstrate our approach by testing a robust (parameter-free) model on a large body of independent experimental evidence on 2x2 symmetric games. The model provides a very good fit across games, performing remarkably better than standard learning models.
Marc Tuters
Cassian Osborne-Carey
Daniel Jurg
Ivan Kisjes
In: Digital scholarship in the humanities, 37 (2022) 4, p. 949-971
DOI: 10.1093/llc/fqab076
Abstract: Around 2018, YouTube became heavily criticized for its radicalizing function by allowing far-right actors to produce hateful videos that were in turn amplified through algorithmic recommendations. Against this 'algorithmic radicalization' hypothesis, Munger and Phillips (2019, A supply and demand framework for YouTube politics. Preprint. osf.io/73jys/download; Munger and Phillips, 2020, Right-wing YouTube: a supply and demand perspective. The International Journal of Press/Politics, 21(2). doi: 10.1177/1940161220964767) argued that far-right radical content on YouTube fed into audience demand, suggesting researchers adopt a 'supply and demand' framework. Navigating this debate, our article deploys novel methods for examining radicalization in the language of far-right pundits and their audiences within YouTube's so-called 'Alternative Influence Network' (Lewis, 2018, Alternative Influence. Data & Society Research Institute. datasociety.net/library/alternative-influence/ (accessed 9 December 2020)). To that end, we operationalize the concept of 'extreme speech', developed to account for 'the inherent ambiguity of speech contexts' online (Pohjonen and Udupa, 2017, Extreme speech online: an anthropological critique of hate speech debates. International Journal of Communication, 11: 1173-91), in an analysis of a right-wing 'Bloodsports' debate subculture that thrived on the platform at the time. Highlighting the topic of 'race realism', we develop a novel mixed-methods approach: repurposing the far-right website Metapedia as a corpus to detect unique terms related to the issue. We use this corpus to analyze the transcripts and comments from an archive of 950 right-wing channels, collected from 2008 until 2018. In line with Munger and Phillips' framework, our empirical study identifies a market for extreme speech on the platform, which came into public view in 2017.
Felix Gaisbauer
Eckehard Olbrich
In: Entropy, 24 (2022) 10, 1484
DOI: 10.3390/e24101484
Abstract: What are the mechanisms by which groups with certain opinions gain public voice and force others holding a different view into silence? Furthermore, how does social media play into this? Drawing on neuroscientific insights into the processing of social feedback, we develop a theoretical model that allows us to address these questions. In repeated interactions, individuals learn whether their opinion meets public approval and refrain from expressing their standpoint if it is socially sanctioned. In a social network sorted around opinions, an agent forms a distorted impression of public opinion enforced by the communicative activity of the different camps. Even strong majorities can be forced into silence if a minority acts as a cohesive whole. On the other hand, the strong social organisation around opinions enabled by digital platforms favours collective regimes in which opposing voices are expressed and compete for primacy in public. This paper highlights the role that the basic mechanisms of social information processing play in massive computer-mediated interactions on opinions.
Paul Van Eecke
Jeroen Van Soest
Katrien Beuls
In: Computational communication research, 3 (2021) 1, p. 117-132
DOI: 10.5117/CCR2021.1.005.WILL
Abstract: The data-driven study of cultural information diffusion in online (social) media is currently an active area of research. The availability of data from the web thereby generates new opportunities to examine how words propagate through online media and communities, as well as how these diffusion patterns are intertwined with the materiality and culture of social media platforms. In support of such efforts, this paper introduces an online tool for tracking the consecutive occurrences of words across subreddits on Reddit between 2005 and 2017. By processing the full Pushshift.io Reddit comment archive for this period (Baumgartner et al., 2020), we are able to track the first occurrences of 76 million words, allowing us to visualize which subreddits subsequently adopt any of those words over time. We illustrate this approach by addressing the spread of terms referring to famous internet controversies, and the percolation of alt-right terminology. By making our instrument and the processed data publicly available, we aim to facilitate a range of exploratory analyses in computational social science, the digital humanities, and related fields.
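The underlying bookkeeping, recording for each word the subreddit of first use and the subreddits that adopt it later, reduces to a single pass over time-sorted comments; a schematic version on made-up comments (not the Pushshift.io archive) is shown below.

```python
from collections import defaultdict

# Hypothetical time-sorted comments: (timestamp, subreddit, text).
comments = [
    (1, "r/gaming", "that was a yolo move"),
    (2, "r/funny", "yolo indeed"),
    (3, "r/news", "copypasta everywhere"),
    (4, "r/gaming", "classic copypasta"),
]

first_seen = {}                      # word -> (timestamp, subreddit of first occurrence)
adoption = defaultdict(list)         # word -> ordered list of adopting subreddits

for ts, subreddit, text in sorted(comments):
    for word in set(text.lower().split()):
        if word not in first_seen:
            first_seen[word] = (ts, subreddit)
            adoption[word].append(subreddit)
        elif subreddit not in adoption[word]:
            adoption[word].append(subreddit)     # subreddit adopts the word later

print(first_seen["yolo"], "->", adoption["yolo"])
print(first_seen["copypasta"], "->", adoption["copypasta"])
```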
Katrien Beuls
Paul Van Eecke
Jeroen Van Soest
In: Frontiers in big data, 4 (2021), 695667
DOI: 10.3389/fdata.2021.695667
Abstract: With more and more voices and opinions entering the public domain, a key challenge facing journalists and editors is maximizing the context of the information that is presented on news websites. In this paper, we argue that systems for exposing readers to the many aspects of societal debates should be grounded in methods and tools that can provide a fine-grained understanding of these debates. The present article thereby explores the conceptual transition from opinion observation to opinion facilitation by introducing and discussing the Penelope opinion facilitator: a proof-of-concept reading instrument for online news media that operationalizes emerging methods for the computational analysis of cultural conflict developed in the context of the H2020 ODYCCEUS project. It will be demonstrated how these methods can be combined into an instrument that complements the reading experience of the news website The Guardian by automatically interlinking news articles on the level of semantic frames. In linguistic theory, semantic frames are defined as coherent structures of related concepts. We thereby zoom in on instances of the 'causation' frame, such as 'climate change causes global warming,' and illustrate how a reading instrument that links articles based on such frames might reconfigure our readings of climate news coverage, with specific attention to the case of global warming controversies. Finally, we relate our findings to the context of the development of computational social science, and discuss pathways for the evaluation of the instrument, as well as for the future upscaling of qualitative analyses and close readings.
Petter Törnberg
Justus Uitermark
In: International journal of communication, 15 (2021), p. 2156-2176
LINK: https://ijoc.org/index.php/ijoc/article/view/15867
Abstract: Social scientists have long studied international differences in political culture and communication. An influential strand of theory within political science argues that different types of political systems generate different parliamentary cultures: Systems with proportional representation generate cross-party cohesion, whereas majoritarian systems generate division. To contribute to this long-standing discussion, we study parliamentarian retweets across party lines using a database of 2.3 million retweets by 4,018 incumbent parliamentarians across 19 countries during 2018. We find that there is at most a tenuous relationship between democratic systems and cross-party retweeting: Majoritarian systems are not unequivocally more divisive than proportional systems. Moreover, we find important qualitative differences: Countries are not only more or less divisive, but they are cohesive and divisive in different ways. To capture this complexity, we complement our quantitative analysis with Visual Network Analysis to identify four types of network structures: divided, bipolar, fringe party, and cohesive.
In: Frontiers in big data, 4 (2021), 699653
DOI: 10.3389/fdata.2021.699653
Abstract: Ideas about morality are deeply entrenched in political opinions. This article examines the online communication of British parliamentarians from May 2017 to December 2019, following the 2016 referendum that resulted in Britain's exit (Brexit) from the European Union. It aims to uncover how British parliamentarians use moral foundations to discuss the Brexit withdrawal agreement on Twitter, using Moral Foundations Theory as a classification basis for their tweets. It is found that the majority of Brexit-related tweets contain elements of moral reasoning, especially relating to the foundations of Authority and Loyalty. There are common underlying foundations between parties, but parties express opposing viewpoints within a single foundation. The study provides useful insights into Twitter's use as an arena for moral argumentation and uncovers politicians' use of moral arguments during Brexit agreement negotiations on Twitter. It contributes to the limited body of work focusing on the moral arguments made by politicians through Twitter.
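Dictionary-based moral foundations coding of tweets can be sketched as simple keyword matching; the tiny word lists below are placeholders for illustration only, not the actual Moral Foundations Dictionary, and real studies typically handle stemming, negation and context.

```python
from collections import Counter

# Placeholder cue words per foundation (a real analysis would use the full
# Moral Foundations Dictionary and more careful text preprocessing).
FOUNDATIONS = {
    "Care":      {"protect", "harm", "suffer", "safe"},
    "Fairness":  {"fair", "unfair", "justice", "rights"},
    "Loyalty":   {"betray", "loyal", "nation", "unity"},
    "Authority": {"law", "order", "obey", "sovereignty"},
    "Sanctity":  {"pure", "disgust", "sacred"},
}

def code_tweet(text):
    """Return the foundations whose cue words appear in the tweet."""
    words = set(text.lower().split())
    return [name for name, cues in FOUNDATIONS.items() if words & cues]

tweets = [
    "We must respect the law and restore order after this deal",
    "This agreement betrays our nation and its unity",
    "A fair deal that protects workers' rights",
]

counts = Counter(f for t in tweets for f in code_tweet(t))
print(counts.most_common())
```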
Justus Uitermark
In: Big data and society, 8 (2021) 2, p. 1-13
DOI: 10.1177/20539517211047725
Abstract: The proliferation of digital data has been the impetus for the emergence of a new discipline for the study of social life: 'computational social science'. Much research in this field is founded on the premise that society is a complex system with emergent structures that can be modeled or reconstructed through digital data. This paper suggests that computational social science serves practical and legitimizing functions for digital capitalism in much the same way that neoclassical economics does for neoliberalism. In recognition of this homology, this paper develops a critique of the complexity perspective of computational social science and argues for a heterodox computational social science founded on the meta-theory of critical realism that is critical, methodologically pluralist, interpretative, and explanatory. This implies diverting computational social science's computational methods and digital data so that they are not aimed at identifying invariant laws of social life or optimizing state and corporate practices, but are instead used as part of broader research strategies to identify contingent patterns, develop conjunctural explanations, and propose qualitatively different ways of organizing social life.
Claes Andersson
Kristian Lindgren
Sven Banisch
In: PLOS ONE, 16 (2021) 10, e0258259
DOI: 10.1371/journal.pone.0258259 CODE: https://zenodo.org/record/5481816
Abstract: Rising political polarization in recent decades has hampered and gridlocked policymaking, as well as weakened trust in democratic institutions. These developments have been linked to the idea that new media technology fosters extreme views and political conflict by facilitating self-segregation into 'echo chambers' where opinions are isolated and reinforced. This opinion-centered picture has recently been challenged by an emerging political science literature on 'affective polarization', which suggests that current polarization is better understood as driven by partisanship emerging as a strong social identity. Through this lens, politics has become a question of competing social groups rather than differences in policy position. Contrary to the opinion-centered view, this identity-centered perspective has not been subject to dynamical formal modeling, which generally permits hypotheses about micro-level explanations for macro-level phenomena to be systematically tested and explored. We here propose a formal model that links new information technology to affective polarization via social psychological mechanisms of social identity. Our results suggest that new information technology catalyzes affective polarization by lowering search and interaction costs, which shifts the balance between centrifugal and centripetal forces of social identity. We find that the macro-dynamics of social identity is characterized by two stable regimes on the societal level: one fluid regime, in which identities are weak and social connections heterogeneous, and one solid regime in which identities are strong and groups homogeneous. We also find evidence of hysteresis, meaning that a transition into a fragmented state is not readily reversed by again increasing those costs. This suggests that, due to systemic feedback effects, if polarization passes certain tipping points, we may experience run-away political polarization that is highly difficult to reverse.
Massimo Warglien
ARXIV: https://arxiv.org/abs/2103.16387
Abstract: In the last decade, political debates have progressively shifted to social media. Rhetorical devices employed by online actors and factions that operate in these debating arenas can be captured and analysed to conduct a statistical reading of societal controversies and their argumentation dynamics. In this paper, we propose a five-step methodology to extract, categorize and explore the latent argumentation structures of online debates. Using Twitter data about a "no-deal" Brexit, we focus on the effects expected should this event materialise. First, we extract cause-effect claims contained in tweets using regular expressions that exploit verbs related to Creation, Destruction and Causation. Second, we categorise extracted "no-deal" effects using a Structural Topic Model estimated on unigrams and bigrams. Third, we select controversial effect topics and explore within-topic argumentation differences between self-declared partisan user factions. We then type topics using estimated covariate effects on topic propensities and, using the topic correlation network, study the topological structure of the debate to identify coherent topical constellations. Finally, we analyse the debate's time dynamics and infer lead/follow relations among factions. Results show that the proposed methodology can be employed to perform a statistical rhetorical analysis of debates and map the architecture of controversies across time. In particular, the "no-deal" Brexit debate is shown to have an assortative argumentation structure heavily characterized by factional constellations of arguments, as well as by polarized narrative frames invoked through verbs related to Creation and Destruction. Our findings highlight the benefits of implementing a systemic approach to the analysis of debates, which allows the unveiling of topical and factional dependencies between arguments employed in online debates.
In: Frontiers in big data, 4 (2021), 689036
DOI: 10.3389/fdata.2021.689036
Abstract: The following reports on research concerning the 'misinformation problem' on social media during the run-up to the U.S. presidential elections in 2020. Employing techniques borrowed from data journalism, it develops a form of cross-platform analysis that is attuned to both commensurability and platform specificity. It analyses the most engaged-with or top-ranked political content on seven online platforms: TikTok, 4chan, Reddit, Twitter, Facebook, Instagram and Google Web Search. Discussing the extent to which social media platforms marginalize mainstream media and mainstream the fringe, the analyses found that TikTok parodies mainstream media, while 4chan and Reddit dismiss it and direct users to alternative influencer networks and extreme YouTube content. Twitter prefers the hyperpartisan over the mainstream. Facebook's 'fake news' problem also concerns declining amounts of mainstream media referenced. On Instagram, influencers (rather than, say, experts) dominate user engagement. By comparison, Google Web Search buoys the liberal mainstream (and sinks conservative sites), but generally gives special interest sources, as they were termed in the study, rather than official sources, the privilege of providing information. The piece concludes with a discussion of source and 'platform criticism', concerning how online platforms are seeking to filter the content that is posted or found there through increasing editorial intervention. These 'editorial epistemologies', applied especially around COVID-19 keywords, are part of an expansion of so-called content moderation to what I call 'serious queries', or keywords that return official information. Other epistemological strategies for editorially moderating the misinformation problem are also treated.
Felix Gaisbauer
Sven Banisch
Eckehard Olbrich
In: Journal of digital social research, 3 (2021) 1, p. 106-118
DOI: 10.33621/jdsr.v3i1.64 ARXIV: https://arxiv.org/abs/2003.03599
Abstract: We present an open-source interface for scientists to explore Twitter data through interactive network visualizations. Combining data collection, transformation and visualization in one easily accessible framework, the twitter explorer connects distant and close reading of Twitter data through the interactive exploration of interaction networks and semantic networks. By lowering the technological barriers of data-driven research, it aims to attract researchers from various disciplinary backgrounds and facilitates new perspectives in the thriving field of computational social science.
Sven Banisch
In: Frontiers in big data, 4 (2021), 731349
DOI: 10.3389/fdata.2021.731349 ARXIV: https://arxiv.org/abs/2106.15717
Abstract: The paper explores the notion of a reconfiguration of political space in the context of the rise of populism and its effects on the political system. We focus on Germany and the appearance of the new right-wing party "Alternative for Germany" (AfD). Many scholars of politics discuss the rise of the new populism in Western Europe and the US with respect to a new political cleavage related to globalization, which is assumed to mainly affect the cultural dimension of the political space. As such, it might replace the older economic cleavage based on class divisions in defining the dominant dimension of political conflict. An explanation along these lines suggests a reconfiguration of the political space in the sense that (1) the main cleavage within the political space changes its direction from the economic axis towards the cultural axis, but (2) the semantics of the cultural axis itself is also changing towards globalization-related topics. Using the electoral manifestos from the Manifesto Project Database, we empirically address this reconfiguration of the political space by comparing political spaces for Germany built using topic modeling with the spaces based on the content analysis of the Manifesto Project and the corresponding categories of political goals. We find that both spaces have a similar structure and that the AfD appears on a new dimension. In order to characterize this new dimension, we employ a novel technique, inter-issue consistency networks (IICNs), which allow us to analyze the evolution of the correlations between political positions on different issues over several elections. We find that the new dimension introduced by the AfD can be related to the split-off of a new "cultural right" issue bundle from the previously existing center-right bundle.
Robin Lamarche-Perrin
Raphael Fournier-S'niehotta
Rémy Poulain
Lionel Tabourier
Fabien Tarissan
In: Theoretical computer science, 859 (2021), p. 80-115
DOI: 10.1016/j.tcs.2021.01.013 ARXIV: https://arxiv.org/abs/2001.01296
Abstract: Diversity is a concept relevant to numerous domains of research, from ecology to information theory and economics, to name a few. It is a notion that is steadily gaining attention in the information retrieval, network analysis, and artificial neural networks communities. While the use of diversity measures in network-structured data has a growing number of applications, no clear and comprehensive description is available of the different ways in which diversities can be measured. In this article, we develop a formal framework for the application of a large family of diversity measures to heterogeneous information networks (HINs), a flexible, widely used network data formalism. This extends the application of diversity measures from systems of classifications and apportionments to more complex relations that can be better modeled by networks. In doing so, we not only provide an effective organization of multiple practices from different domains, but also unearth new observables in systems modeled by heterogeneous information networks. We illustrate the pertinence of our approach by developing different applications related to various domains concerned with both diversity and networks. In particular, we illustrate the usefulness of these new observables in the domains of recommender systems and social media studies, among other fields.
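One family of measures that such a framework organizes, the Hill numbers (true diversity of order q), can be applied to, for example, the distribution of categories a user reaches through a meta-path in a heterogeneous network; a minimal sketch under that assumption:

```python
import math

def true_diversity(proportions, q):
    """Hill number of order q for a probability distribution.
    q = 0: richness; q -> 1: exp(Shannon entropy); q = 2: inverse Simpson index."""
    p = [x for x in proportions if x > 0]
    if abs(q - 1.0) < 1e-9:
        return math.exp(-sum(x * math.log(x) for x in p))
    return sum(x ** q for x in p) ** (1.0 / (1.0 - q))

# Hypothetical HIN-derived distribution: shares of news categories reached by a
# user through a user -> tweet -> article -> category meta-path.
category_shares = [0.5, 0.3, 0.1, 0.05, 0.05]

for q in (0, 1, 2):
    print(f"order {q}: diversity = {true_diversity(category_shares, q):.2f}")
```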
Justus Uitermark
Petter Törnberg
In: PLOS ONE, 16 (2021) 8, e0256696
DOI: 10.1371/journal.pone.0256696
Abstract: Despite the prevalence of disagreement between users on social media platforms, studies of online debates typically only look at positive online interactions, represented as networks with positive ties. In this paper, we hypothesize that the systematic neglect of conflict that these network analyses induce leads to misleading results on polarized debates. We introduce an approach to bring in negative user-to-user interaction, by analyzing online debates using signed networks with positive and negative ties. We apply this approach to the Dutch Twitter debate on 'Black Pete', an annual Dutch celebration with racist characteristics. Using a dataset of 430,000 tweets, we apply natural language processing and machine learning to identify: (i) users' stance in the debate; and (ii) whether the interaction between users is positive (supportive) or negative (antagonistic). Comparing the resulting signed network with its unsigned counterpart, the retweet network, we find that traditional unsigned approaches distort debates by conflating conflict with indifference, and that the inclusion of negative ties changes and enriches our understanding of coalitions and division within the debate. Our analysis reveals that some groups are attacking each other, while others rather seem to be located in fragmented Twitter spaces. Our approach identifies new network positions of individuals that correspond to roles in the debate, such as leaders and scapegoats. These findings show that representing the polarity of user interactions as signs of ties in networks substantively changes the conclusions drawn from polarized social media activity, which has important implications for various fields studying online debates using network analysis.
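The construction described here, a signed interaction network whose edge signs come from classifying each user-to-user interaction as supportive or antagonistic, can be sketched with networkx; the stances and signs below are stand-ins for the output of the NLP classifiers.

```python
import networkx as nx

# Hypothetical classifier output: (source, target, sign), +1 supportive, -1 antagonistic.
interactions = [
    ("a", "b", +1), ("b", "a", +1),     # within-camp support
    ("a", "c", -1), ("c", "a", -1),     # cross-camp attack
    ("d", "c", +1), ("d", "a", -1),
]
stance = {"a": "pro", "b": "pro", "c": "contra", "d": "contra"}

G = nx.DiGraph()
for u, v, sign in interactions:
    G.add_edge(u, v, sign=sign)

# Share of negative ties within and across stance groups: the signed view
# separates genuine conflict from mere absence of interaction.
within = [d["sign"] for u, v, d in G.edges(data=True) if stance[u] == stance[v]]
across = [d["sign"] for u, v, d in G.edges(data=True) if stance[u] != stance[v]]
print("negative share within camps:", within.count(-1) / len(within))
print("negative share across camps:", across.count(-1) / len(across))
```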
Justus Uitermark
Petter Törnberg
In: Social networks, 66 (2021), p. 10-25
DOI: 10.1016/j.socnet.2021.01.001
Abstract: How do new scientific ideas diffuse? Computational studies reveal how network structures facilitate or obstruct diffusion; qualitative studies demonstrate that diffusion entails the continuous translation and transformation of ideas. This article bridges these computational and qualitative approaches to study diffusion as a complex process of continuous adaptation. As a case study, we analyze the spread of Granovetter's Strength of Weak Ties hypothesis, published in American Journal of Sociology in 1973. Through network analysis, topic modeling and a close reading of a diffusion network created using Web of Science data, we study how different communities in this network interpret and develop Granovetter's hypothesis in distinct ways. We further trace how these communities originate, merge and split, and examine how central scholars emerge as community leaders or brokers in the diffusion process.
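The computational backbone, a citation network among publications citing the focal paper, partitioned into interpretive communities with brokers in between, can be sketched with networkx; the citation records below are fabricated stand-ins for Web of Science data.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical citation records among papers that cite the focal hypothesis:
# (citing_paper, cited_paper).
citations = [
    ("p1", "p2"), ("p2", "p3"), ("p1", "p3"),        # one strand of literature
    ("p4", "p5"), ("p5", "p6"), ("p4", "p6"),        # another strand
    ("p3", "p4"),                                     # a broker tie between strands
]

G = nx.Graph()
G.add_edges_from(citations)

communities = greedy_modularity_communities(G)
for i, community in enumerate(communities):
    print(f"community {i}: {sorted(community)}")

# Brokers: nodes with high betweenness sit between interpretive communities.
betweenness = nx.betweenness_centrality(G)
print("most brokering paper:", max(betweenness, key=betweenness.get))
```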
Liza Mügge
In: The British journal of sociology, 72 (2021) 2, p. 360-378
Abstract: Kimberlé Crenshaw coined the term "intersectionality" in 1989 as a critique of feminist and critical race scholarship's neglect of, respectively, race and gender. Since then, the concept has been interpreted and reinterpreted to appeal to new disciplinary, geographical, and sociocultural audiences, generating heated debates over its appropriation and continued political significance. Drawing on all 3,807 publications in Scopus that contain the word "intersectionality" in the title, abstract, or keywords, we map the spread of intersectionality in academia through its citations. Network analysis reveals the contours of its diffusion among the 6,098 scholars in our data set, while automated text analysis, manual coding, and the close reading of publications reveal how the application and interpretation of intersectional thinking has evolved over time and space. We find that the diffusion network exhibits communities that are not well demarcated by either discipline or geography. Communities form around one or a few highly referenced scholars who introduce intersectionality to new audiences while reinterpreting it in a way that speaks to their research interests. By examining the microscopic interactions of publications and citations, our complex systems approach is able to identify the macroscopic patterns of a controversial concept's diffusion.
Carlo Santagiustina
In: Journal of the Royal Statistical Society / A, 184 (2021) 4, p. 1283-1302
DOI: 10.1111/rssa.12704 ARXIV: https://arxiv.org/abs/2012.13267
Abstract: Aggregating web activity records into counts yields time series with peculiar features, including the coexistence of smooth paths and sudden jumps, as well as cross-sectional and temporal dependence. Using Twitter posts about country risks for the United Kingdom and the United States, this paper proposes an innovative state space model for multivariate count data with jumps. We use the proposed model to assess the impact of public concerns in these countries on market systems. To do so, public concerns inferred from Twitter data are unpacked into country-specific persistent terms, risk social amplification events and co-movements of the country series. The identified components are then used to investigate the existence and magnitude of country-risk spillovers and social amplification effects on the volatility of financial markets.
Etienne Toureille
Romain Leconte
Marta Severo
In: Frontiers in big data, 4 (2021), 718809
DOI: 10.3389/fdata.2021.718809
Abstract: This study proposes a geopolitical analysis of opinion dynamics based on a statistical exploration of a press dataset covering 2014-2019. This exploration questions three case studies of geopolitical and international interest: international migration, political borders, and pandemics. Through the framework of the geopolitical agenda, the aim of this study is to question the 'crisis' status of changes in the media coverage of the three topics in a cross-analysis and multilingual analysis of 20 western European newspapers. It concludes that there is a prevalence of national agendas.
Marco LiCalzi
Massimo Warglien
In: Strategy science, 6 (2021) 2, p. 124-140
Abstract: We study agents who distill the complex world around them using cognitive frames. We assume that agents share the same frame and analyze how the frame affects their collective performance. In one-shot and repeated interactions, the frame causes agents to be either better or worse off than if they could perceive the environment in full detail: it creates a fog of cooperation or a fog of conflict. In repeated interactions, the frame is as important as agents' patience in determining the outcome: for a fixed discount factor, when all agents choose what they perceive as their best play, there remain significant performance differences induced by different frames. A low-performing team conducting a site visit to observe a high-performing team will be mystified, sometimes observing different actions than they expected or being given unexpected reasons for the actions they expected. Finally, we distinguish between incremental versus radical changes in frames, and we develop a model of category formation to analyze challenges faced by a leader who seeks to improve the agents' collective performance.
Armin Pournaki
Sven Banisch
Eckehard Olbrich
In: PLOS ONE, 16 (2021) 3, e0249241
DOI: 10.1371/journal.pone.0249241 ARXIV: https://arxiv.org/abs/2009.01666
Abstract: This article analyses public debate on Twitter via network representations of retweets and replies. We argue that tweets observable on Twitter have both a direct and a mediated effect on the perception of public opinion. Through the interplay of the two networks, it is possible to identify potentially misleading representations of public opinion on the platform. The method is employed to observe public debate about two events: the Saxon state elections and violent riots in the city of Leipzig in 2019. We show that in both cases, (i) different opinion groups exhibit different propensities to get involved in debate, and therefore have unequal impact on public opinion. Users retweeting far-right parties and politicians are significantly more active, hence their positions are disproportionately visible. (ii) These users also act significantly more confrontationally, in the sense that they reply mostly to users from different groups, while the contrary is not the case.
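The two diagnostics highlighted here, unequal activity across opinion groups and how confrontationally each group replies, are simple ratios once retweet and reply edges are labelled by group; a schematic computation on made-up data (not the Saxony dataset) follows.

```python
from collections import defaultdict

# Hypothetical labelled edges: retweets and replies (user -> user), plus each
# user's opinion group as inferred from the retweet network.
group = {"u1": "A", "u2": "A", "u3": "B", "u4": "B", "u5": "B"}
retweets = [("u1", "u2"), ("u1", "u2"), ("u2", "u1"), ("u3", "u4")]
replies = [("u1", "u3"), ("u1", "u4"), ("u2", "u3"), ("u3", "u4")]

# (i) Activity: retweets sent per user within each group.
sent = defaultdict(int)
for src, _ in retweets:
    sent[group[src]] += 1
members = defaultdict(int)
for g in group.values():
    members[g] += 1
for g in sorted(members):
    print(f"group {g}: {sent[g] / members[g]:.2f} retweets per user")

# (ii) Confrontation: share of a group's replies directed at the other group.
reply_out = defaultdict(int)
reply_all = defaultdict(int)
for src, tgt in replies:
    reply_all[group[src]] += 1
    if group[src] != group[tgt]:
        reply_out[group[src]] += 1
for g in sorted(reply_all):
    print(f"group {g}: {reply_out[g] / reply_all[g]:.2f} of replies cross groups")
```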
Robin Lamarche-Perrin
In: Computational social networks, 8 (2021), 15
DOI: 10.1186/s40649-020-00083-8 ARXIV: https://arxiv.org/abs/1906.11727
Abstract: Socio-technical systems usually consist of many intertwined networks, each connecting different types of objects (or actors) through a variety of means. As these networks are co-dependent, one can take advantage of this entangled structure to study interaction patterns in a particular network using the information provided by other related networks. A method is hence proposed and tested to recover the weights of missing or unobserved links in heterogeneous information networks (HINs) - abstract representations of systems composed of multiple types of entities and their relations. Given a pair of nodes in a HIN, this work aims at recovering the exact weight of the link incident to these two nodes, knowing some other links present in the HIN. To do so, probability distributions resulting from path-constrained random walks (random walks in which the walker is forced to follow a specific sequence of node and edge types, commonly called a meta-path, which captures specific semantics) are combined linearly in order to approximate the desired result. This method is general enough to compute the link weight between any types of nodes. Experiments on Twitter and bibliographic data show the applicability of the method.
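The estimation idea, linearly combining probability distributions from several path-constrained random walks to approximate a missing link weight, can be sketched with plain matrix algebra; the two toy meta-paths and the least-squares fit below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def row_normalize(M):
    """Turn an adjacency matrix into random-walk transition probabilities."""
    s = M.sum(axis=1, keepdims=True)
    return np.divide(M, s, out=np.zeros_like(M, dtype=float), where=s > 0)

# Toy HIN: users x hashtags and users x users (mentions). A meta-path such as
# user -> hashtag -> user corresponds to a product of transition matrices.
user_hashtag = np.array([[2., 0., 1.],
                         [1., 1., 0.],
                         [0., 2., 1.]])
user_user = np.array([[0., 3., 1.],
                      [2., 0., 0.],
                      [1., 1., 0.]])

P_uhu = row_normalize(user_hashtag) @ row_normalize(user_hashtag.T)   # user-hashtag-user walk
P_uuu = row_normalize(user_user) @ row_normalize(user_user)           # user-user-user walk

# Suppose we observe weights for some user pairs and want to recover the rest.
observed = {(0, 1): 0.6, (1, 0): 0.5, (2, 0): 0.3, (0, 2): 0.2}
X = np.array([[P_uhu[i, j], P_uuu[i, j]] for (i, j) in observed])
y = np.array(list(observed.values()))
coef, *_ = np.linalg.lstsq(X, y, rcond=None)          # linear combination of meta-path walks

missing = (1, 2)
estimate = coef @ np.array([P_uhu[missing], P_uuu[missing]])
print("estimated weight for", missing, "->", round(float(estimate), 3))
```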
Paul Van Eecke
Vanja Sophie Cangalovic
In: Linguistics vanguard : multimodal online journal, 7 (2021) 1, 20180015
DOI: 10.1515/lingvan-2018-0015 LINK: https://ehai.ai.vub.ac.be/assets/pdfs/beuls2021computational.pdf
Abstract: This paper introduces a novel methodology for extracting semantic frames from text corpora. Building on recent advances in computational construction grammar, the method captures expert knowledge of how semantic frames can be expressed in the form of conventionalised form-meaning pairings, called constructions. By combining these constructions in a semantic parsing process, the frame-semantic structure of a sentence is retrieved through the intermediary of its morpho-syntactic structure. The main advantage of this approach is that state-of-the-art results are achieved, without the need for annotated training data. We demonstrate the method in a case study where causation frames are extracted from English newspaper articles, and compare it to a commonly used approach based on Conditional Random Fields (CRFs). The computational construction grammar approach yields a word-level F1 score of 78.5%, outperforming the CRF approach by 4.5 percentage points.
Eckehard Olbrich
In: The journal of artificial societies and social simulation, 24 (2021) 1, 1
DOI: 10.18564/jasss.4434 ARXIV: https://arxiv.org/abs/1809.06134
Abstract: A multi-level model of opinion formation is presented which takes into account that attitudes on different issues are usually not independent. In the model, agents exchange beliefs regarding a series of facts. A cognitive structure of evaluative associations links different (partially overlapping) sets of facts to different political issues and determines an agent's attitudinal positions in a way borrowed from expectancy value theory. If agents preferentially interact with other agents that hold similar attitudes on one or several issues, this leads to biased argument pools and polarization in the sense that groups of agents selectively believe in distinct subsets of facts. Besides the emergence of a bi-modal distribution of opinions on single issues, which most previous opinion polarization models address, our model also accounts for the alignment of attitudes across several issues along ideological dimensions.
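The expectancy-value step, deriving an agent's attitude on each issue from its beliefs in facts weighted by evaluative associations, is easy to make explicit; the belief vector and association matrix below are invented for illustration.

```python
import numpy as np

# Beliefs in a series of facts (1 = believed, 0 = not believed).
beliefs = np.array([1, 0, 1, 1, 0, 1])

# Evaluative associations linking (partially overlapping) sets of facts to issues:
# rows are issues, columns are facts, entries encode how a fact bears on the issue.
associations = np.array([
    [+1, +1, -1,  0,  0,  0],   # issue 1 draws on facts 1-3
    [ 0,  0, +1, +1, -1, -1],   # issue 2 draws on facts 3-6 (overlap on fact 3)
])

# Expectancy-value style attitudes: believed facts weighted by their evaluative
# link to each issue.
attitudes = associations @ beliefs
print("attitude on issue 1:", attitudes[0])
print("attitude on issue 2:", attitudes[1])
```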
Paul Van Eecke
Katrien Beuls
Luc Steels
In: Social media and society, 6 (2020) 2, p. 1-12
Abstract: Social media house a trove of relevant information for the study of online opinion dynamics. However, harvesting and analyzing the sheer overload of data that is produced by these media poses immense challenges to journalists, researchers, activists, policy makers, and concerned citizens. To mitigate this situation, this article discusses the creation of (social) media observatories: platforms that enable users to capture the complexities of social behavior, in particular the alignment and misalignment of opinions, through computational analyses of digital media data. The article positions the concept of 'observatories' for social media monitoring among ongoing methodological developments in the computational social sciences and humanities and proceeds to discuss the technological innovations and design choices behind social media observatories currently under development for the study of opinions related to cultural and societal issues in European spaces. Notable attention is devoted to the construction of Penelope: an open, web-services-based infrastructure that allows different user groups to consult and contribute digital tools and observatories that suit their analytical needs. The potential and the limitations of this approach are discussed on the basis of a climate change opinion observatory that implements text analysis tools to study opinion dynamics concerning themes such as global warming. Throughout, the article explicitly acknowledges and addresses potential risks of the machine-guided and human-incentivized study of opinion dynamics. Concluding remarks are devoted to a synthesis of the ethical and epistemological implications of the exercise of positioning observatories in contemporary information spaces and to an examination of future pathways for the development of social media observatories.
Petter Törnberg
Justus Uitermark
In: PLOS ONE, 15 (2020) 9, e0237073
DOI: 10.1371/journal.pone.0237073
Abstract: This article introduces the Twitter Parliamentarian Database (TPD), a multi-source and manually validated database of parliamentarians on Twitter. The TPD includes parliamentarians from all European Free Trade Association countries where over 45% of parliamentarians are on Twitter as well as a selection of English-speaking countries. The database is designed to move beyond the one-off nature of most Twitter-based research and in the direction of systematic and rigorous comparative and transnational analysis. The TPD incorporates, in addition to data collected through Twitter's streaming API and governmental websites, data from the Manifesto Project Database; the Electoral System Design Database; the ParlGov database; and the Chapel Hill Expert Survey. By compiling these different data sources it becomes possible to compare different countries, political parties, political party families, and different kinds of democracies. To illustrate the opportunities for comparative and transnational analysis that the TPD opens up, we ask: What are the differences between countries in parliamentarian Twitter interactions? How do political parties differ in their use of hashtags and what is their common ground? What is the structure of interaction between parliamentarians in the transnational debate? Alongside some interesting similarities, we find striking cross-party and particularly cross-national differences in how parliamentarians engage in politics on the social media platform.
Justus Uitermark
In: Frontiers in sustainable cities, 2 (2020), 6
Abstract: Digital platforms are reshaping cities in the twenty-first century, providing not only new ways of seeing and navigating the world, but also new ways of organizing the economy, our cities and social lives. They bring great promises, claiming to facilitate a new "sharing" economy, outside of the exploitation of the market and the inefficiencies of the state. This paper reflects on this promise, and its associated notion of "self-organization", by situating digital platforms in a longer history of control, discipline and surveillance. Using Foucault, Deleuze, and Bauman, we scrutinize the theoretical and political notion of "self-organization" and unpack its idealistic connotations: To what extent does self-organization actually imply empowerment or freedom? Who is the "self" in "self-organization", and who is the user on urban digital platforms? Is self-organization necessarily an expression of the interests of the constituent participants? In this way, the paper broadens the analysis of neoliberal governmentalities to reveal the forms of power concealed under the narratives of "sharing" and "self-organization" of the platform era. We find that control is increasingly moving to lower-level strata, operating by setting the context and conditions for self-organization. Thus, the order of things emerges seemingly naturally from the rules of the game. This points to an emerging form of complex control, which has gone beyond the fast and flexible forms of digital control theorized by Deleuze.
Letizia Chiappini
In: Environment and planning / A, 52 (2020) 3, p. 553-572
Abstract: Airbnb has recently become a growing topic of both concern and interest for urban researchers, policymakers, and activists. Previous research has emphasized Airbnb's economic impact and its role as a driver of residential gentrification, but Airbnb also fosters place entrepreneurs, geared to extract value from a global symbolic economy by marketing the urban frontier to a transnational middle class. This emphasizes the cultural impact of Airbnb on cities, and its power of symbolizing and communicating who belongs in specific places, responding to questions of class, gender, and ethnicity - and thereby potentially driving cultural displacement. Coming from this perspective, this paper uses computational critical discourse analysis to study how white and black hosts market black-majority neighborhoods in New York City on Airbnb, and how guests describe their consumption experience. The analysis shows how white entrepreneurs attempt to attract guests through a form of colonial discourse: exoticizing difference, emphasizing foreignness, and treating communities as consumable experiences for an outside group. White visitors, in turn, consume these cultural symbols to decorate their own identities of touristic consumption, describing themselves in colonial tropes of brave white adventurers exploring uncharted territories: glorious conquests no longer over gold and ivory, but over sandwiches at a local bodega. This situates Airbnb's marketing at the urban frontier in a longer history of colonialism and racialized expropriation.
Sal Hagen
In: New media and society, 22 (2020) 12, p. 2218-2237
Abstract: Previously theorised as vehicles for expressing progressive dissent, this article considers how political memes have become entangled in the recent reactionary turn of web subcultures. Drawing on Chantal Mouffe's work on political affect, this article examines how online anonymous communities use memetic literacy, memetic abstraction, and memetic antagonism to constitute themselves as political collectives. Specifically, it focuses on how the subcultural and highly reactionary milieu of 4chan's /pol/ board does so through an anti-Semitic meme called triple parentheses. In aggregating the contents of this peculiar meme from a large dataset of /pol/ comments, the article finds that /pol/ users, or anons, tend to use the meme to formulate a nebulous out-group resonant with populist demagoguery.
Marta Severo
Paolo Furia
In: AI and society : the journal of human-centred systems and machine intelligence, 35 (2020) 1, p. 73-86
DOI: 10.1007/s00146-018-0856-2 LINK: https://hal.archives-ouvertes.fr/hal-01824173/
Abstract: Today, there is an emerging interest in the potential role of hermeneutics in reflecting on the practices related to digital technologies and their consequences. Nonetheless, such an interest has given rise neither to a unitary approach nor to a shared debate. The primary goal of this paper is to map and synthesize the different existing perspectives to pave the way for an open discussion on the topic. The article is developed in two steps. In the first section, the authors analyze digital hermeneutics 'in theory' by confronting and systematizing the existing literature. In particular, they stress three main distinctions among the approaches: (1) between 'methodological' and 'ontological' digital hermeneutics; (2) between data- and text-oriented digital hermeneutics; and (3) between 'quantitative' and 'qualitative' credos in digital hermeneutics. In the second section, they consider digital hermeneutics 'in action' by critically analyzing the uses of digital data (notably tweets) for studying a classical object such as political opinion. In the conclusion, we pave the way for an ontological turn in digital hermeneutics. Most of this article is devoted to the methodological issue of interpreting with digital machines. The main task of an ontological digital hermeneutics would instead consist in asking whether it is legitimate, and to what extent, to speak of digital technologies, or at least some of them, as interpretational machines.
In: European journal of communication, 35 (2020) 3, p. 213-229
Abstract: Extreme, anti-establishment actors are increasingly being characterized as 'dangerous individuals' by the social media platforms that once aided in making them 'Internet celebrities'. These individuals (and sometimes groups) are being 'deplatformed' by the leading social media companies such as Facebook, Instagram, Twitter and YouTube for offences such as 'organised hate'. Deplatforming has prompted debate about 'liberal big tech' silencing free speech and taking on the role of editors, but also about whether it is effective and for whom. The research reported here follows certain of these Internet celebrities to Telegram as well as to a larger alternative social media ecology. It enquires empirically into some of the arguments made concerning whether deplatforming 'works' and how the deplatformed use Telegram. It discusses the effects of deplatforming on extreme Internet celebrities, alternative and mainstream social media platforms, and the Internet at large. It also touches upon how social media companies' deplatforming is affecting critical social media research, into both the substance of extreme speech and its audiences on mainstream as well as alternative platforms.
Claes Andersson
In: Adaptive behavior, 28 (2020) 5, p. 329-358
DOI: 10.1177/1059712318822298 LINK: https://www.researchgate.net/publication/330519373
Abstract: We review issues stemming from current models regarding the drivers of cultural complexity and cultural evolution. We disagree with the implication of the treadmill model, based on dual-inheritance theory, that population size is the driver of cultural complexity. The treadmill model reduces the evolution of artifact complexity, measured by the number of parts, to the statistical fact that individuals with high skills are more likely to be found in a larger population than in a smaller population. However, for the treadmill model to operate as claimed, implausibly high skill levels must be assumed. Contrary to the treadmill model, the risk hypothesis for the complexity of artifacts relates the number of parts to increased functional efficiency of implements. Empirically, all data on hunter-gatherer artifact complexity support the risk hypothesis and reject the treadmill model. Still, there are conditions under which increased technological complexity relates to increased population size, but the dependency does not occur in the manner expressed in the treadmill model. Instead, it relates to population size when the support system for the technology requires a large population size. If anything, anthropology and ecology suggest that cultural complexity generates high population density rather than the other way around.
Ilya Obabkov
Eckehard Olbrich
Ivan P. Yamshchikov
In: Proceedings of the 5th international conference on complexity, future information systems and risk : COMPLEXIS 2020 ; May 8-9, 2020 ; Volume 1 / Reinhold Behringer... (eds.)
Setúbal (Portugal) : SCITEPRESS, 2020. - P. 149-154
DOI: 10.5220/0009792101490154 LINK: https://www.researchgate.net/publication/341506538
Abstract: In this position paper, we implement an automatic coding algorithm for electoral programs from the Manifesto Project Database. We propose a new approach that handles words outside the training vocabulary by replacing them with their closest neighbors from the training vocabulary in the space of word embeddings. A set of simulations demonstrates that the proposed algorithm shows classification accuracy comparable to the state-of-the-art benchmarks for monolingual multi-label classification. The agreement levels for the algorithm are comparable with manual labeling. Results are compared across a broad set of model hyperparameters.
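The out-of-vocabulary replacement step described above lends itself to a compact illustration. The Python sketch below shows one way of substituting an unseen word with its closest in-vocabulary neighbour in an embedding space; the data structures and the cosine-similarity choice are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def nearest_in_vocab(word_vec, train_vocab, train_matrix):
    """Return the training-vocabulary word whose embedding is closest
    (by cosine similarity) to the given out-of-vocabulary vector."""
    norms = np.linalg.norm(train_matrix, axis=1) * np.linalg.norm(word_vec)
    sims = train_matrix @ word_vec / np.clip(norms, 1e-12, None)
    return train_vocab[int(np.argmax(sims))]

def replace_oov(tokens, train_vocab, train_matrix, full_embeddings):
    """Map tokens outside the training vocabulary to their closest
    in-vocabulary neighbour, provided an embedding is available for them."""
    vocab_set = set(train_vocab)
    out = []
    for tok in tokens:
        if tok in vocab_set or tok not in full_embeddings:
            out.append(tok)
        else:
            out.append(nearest_in_vocab(full_embeddings[tok],
                                         train_vocab, train_matrix))
    return out
```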
Fabien Tarissan
In: Information processing and management, 57 (2020) 2, 102169
DOI: 10.1016/j.ipm.2019.102169 LINK: https://hal.archives-ouvertes.fr/hal-02415624/
Paul Van Eecke
Katrien Beuls
In: Frontiers in robotics and AI, 7 (2020), 84
Abstract: Autonomous agents perceive the world through streams of continuous sensorimotor data. Yet, in order to reason and communicate about their environment, agents need to be able to distill meaningful concepts from their raw observations. Most current approaches that bridge the continuous and symbolic domains use deep learning techniques. While these approaches often achieve high levels of accuracy, they rely on large amounts of training data, and the resulting models lack transparency, generality, and adaptivity. In this paper, we introduce a novel methodology for grounded concept learning. In a tutor-learner scenario, the method allows an agent to construct a conceptual system in which meaningful concepts are formed by discriminative combinations of prototypical values on human-interpretable feature channels. We evaluate our approach on the CLEVR dataset, using features that are either simulated or extracted using computer vision techniques. Through a range of experiments, we show that our method allows for incremental learning, needs few data points, and that the resulting concepts are general enough to be applied to previously unseen objects and can be combined compositionally. These properties make the approach well-suited to be used in robotic agents as the module that maps from continuous sensory input to grounded, symbolic concepts that can then be used for higher-level reasoning tasks.
In: Theoretical computer science, 806 (2020), p. 90-115
DOI: 10.1016/j.tcs.2018.12.009 ARXIV: https://arxiv.org/abs/1807.06874
Abstract: Graph compression is a data analysis technique that consists in the replacement of parts of a graph by more general structural patterns in order to reduce its description length. It notably provides interesting exploration tools for the study of real, large-scale, and complex graphs which cannot be grasped at first glance. This article proposes a framework for the compression of temporal graphs, that is for the compression of graphs that evolve with time. This framework first builds on a simple and limited scheme, exploiting structural equivalence for the lossless compression of static graphs, then generalises it to the lossy compression of link streams, a recent formalism for the study of temporal graphs. Such generalisation relies on the natural extension of (bidimensional) relational data by the addition of a third temporal dimension. Moreover, we introduce an information-theoretic measure to quantify and to control the information that is lost during compression, as well as an algebraic characterisation of the space of possible compression patterns to enhance the expressiveness of the initial compression scheme. These contributions lead to the definition of a combinatorial optimisation problem, that is the Lossy Multistream Compression Problem, for which we provide an exact algorithm.
Massimo Warglien
In: Journal of organization design, 9 (2020) 1, 18
DOI: 10.1186/s41469-020-00078-9
Abstract: When members of an organization share communication codes, coordination across subunits is easier. But if groups interact separately, they will each develop a specialized code. This paper asks: Can organizations shape how people interact in order to create shared communication codes? What kinds of design interventions in communication structures and systems are useful? In laboratory experiments on triads composed of dyads that solve distributed coordination problems, we examine the effect of three factors: transparency of communication (versus privacy), role differentiation, and the subjects' social history. We find that these factors impact the harmonization of dyadic codes into triadic codes, shaping the likelihood that groups develop group-level codes, converge on a single group-level code, and compress the group-level code into a single word. Groups with transparent communication develop more effective codes, while acyclic triads composed of strangers are more likely to use multiple dyadic codes, which are less efficient than group-level codes. Groups of strangers put into acyclic configurations appear to have more difficulty establishing 'ground rules' - that is, the 'behavioral common ground' necessary to navigate acyclic structures. These coordination problems are transient: groups with different structures end up with the same average communication performance if given sufficient time. However, lasting differences in the code that is generated remain.
In: The International communication gazette, 82 (2020) 3, p. 231-259
DOI: 10.1177/1748048518825091 ARXIV: https://arxiv.org/abs/1810.04912
Abstract: This article proposes a quantitative model of the circulation of foreign news based on a gravity-like model of spatial interaction disaggregated by time, media, and countries of interest. The analysis of international RSS news stories published by 31 daily newspapers in 2015 demonstrates, first, that many of the laws of circulation of international news predicted half a century ago by Galtung and Ruge and by Östgaard are still valid. The salience of countries in media remains strongly determined by size effects (area, population), with prominent coverage of rich countries (GDP/capita) with elite status (permanent members of the United Nations Security Council, the Holy See). Geographical distance and a common language remain major factors of media coverage in newsrooms. Contradicting the flat-world hypothesis, global journalism remains an exception, and provincialism is the rule. The disaggregation of the model by media demonstrates that newspapers do not all follow exactly the same rules and are more or less sensitive to distance, a common language or elite status. The disaggregation of the model by week suggests that the rules governing foreign news can be temporarily modified by exceptional events that eliminate the usual effects of salience and relatedness, producing short periods of 'global consensus' that can benefit small, poor, and remote countries. The residuals of the model help to identify countries that are characterized by a permanent excess of media coverage (like the US or Australia in our sample) or countries that received more coverage than usual for several months (Yemen, Ukraine) or years (Syria, Greece) because of a situation of long-term political or economic crisis.
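For readers unfamiliar with gravity-like specifications, the template below sketches the general form such a model of foreign-news salience can take; the variables and exponents are assumptions chosen for illustration and are not the article's exact specification.

```latex
% Hypothetical gravity-like model of the news flow F_{ij} from media country i
% to covered country j (illustrative template, not the article's exact model):
\[
F_{ij} \;\propto\;
\frac{P_j^{\alpha}\,\left(\mathrm{GDP}/\mathrm{cap}\right)_j^{\beta}\, E_j^{\gamma}}
     {d_{ij}^{\,\delta}}\; e^{\lambda L_{ij}}
\]
% with P_j the population of country j, E_j an elite-status indicator,
% d_{ij} the geographical distance between i and j, and L_{ij} a
% common-language dummy.
```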
Eckehard Olbrich
Sven Banisch
In: Physical review / E, 102 (2020) 4, 042303
MIS-Preprint: 24/2020 DOI: 10.1103/PhysRevE.102.042303 ARXIV: https://arxiv.org/abs/1912.12631
Abstract: We introduce a model of public opinion expression. Two groups of agents with different opinions on an issue interact with each other, changing their willingness to express their opinion according to whether they perceive themselves as part of the majority or minority opinion. We formulate the model as a multi-group majority game and investigate the Nash equilibria. We also provide a dynamical systems perspective: using the reinforcement learning algorithm of Q-learning, we reduce the N-agent system in a mean-field approach to two dimensions which represent the two opinion groups. This two-dimensional system is analyzed in a comprehensive bifurcation analysis of its parameters. The model identifies structural conditions for public opinion predominance of different groups. We also show under which circumstances a minority can dominate public discourse. This offers a direct connection to a central theory of opinion expression in the social sciences, the spiral of silence.
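To give a feel for the Q-learning component, here is a minimal agent-based sketch in Python, loosely inspired by the description above: two opinion groups whose members learn, via a simple Q-learning rule, whether to express their opinion or stay silent depending on whether their camp currently dominates the expressed opinions. Group sizes, rewards, and learning parameters are illustrative assumptions, and the mean-field reduction is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = [60, 40]                                     # assumed sizes of the two opinion groups
alpha, eps, reward, cost = 0.1, 0.05, 1.0, 1.0   # assumed learning parameters

# Q[g][i, a]: value of action a (0 = stay silent, 1 = express) for agent i of group g
Q = [np.zeros((n[g], 2)) for g in range(2)]

for step in range(5000):
    # epsilon-greedy choice between expressing and staying silent
    acts = [np.where(rng.random(n[g]) < eps,
                     rng.integers(0, 2, n[g]),
                     Q[g].argmax(axis=1)) for g in range(2)]
    expressed = [acts[g].sum() for g in range(2)]
    for g in range(2):
        # expressing pays off when one's own camp is at least as vocal as the other
        majority = expressed[g] >= expressed[1 - g]
        r = np.where(acts[g] == 1, reward if majority else -cost, 0.0)
        idx = np.arange(n[g])
        Q[g][idx, acts[g]] += alpha * (r - Q[g][idx, acts[g]])

print("share willing to express:",
      [float((Q[g].argmax(axis=1) == 1).mean()) for g in range(2)])
```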
Justus Uitermark
In: Social media and society, 6 (2020) 3, p. 1-10
Abstract: Commentators and scholars view both social media and cities as sites of fragmentation. Since both urban dwellers and social media users tend to form assortative social ties, so the reasoning goes, identity-based divisions are fortified and polarization is exacerbated in digital and urban spaces. Drawing on a dataset of 34.4 million interactions among Amsterdam Instagram users over half a year, this article seeks to gauge the level of fragmentation that occurs at the interface of digital and urban spaces. We find some evidence for fragmentation: users form clusters based on shared tastes and leisure activities, and these clusters are embedded in four distinct lifestyle zones at the interface of social media and the city. However, we also find connections that span divisions. Similarly, places that are tagged by Instagram users generally include a heterogeneity of clusters. While there is evidence that Instagram users sort into groups, there is no evidence that these groups are isolated from one another. In fact, our findings suggest that Instagram enables ties across different groups and mitigates against particularity and idiosyncrasy. These findings have important implications for how we should understand and study social media in the context of everyday life. Scholars should not only look for evidence of division through standard network analytic techniques like community detection, but also allow for countervailing tendencies.
Felix Gaisbauer
Eckehard Olbrich
ARXIV: https://arxiv.org/abs/2003.08154
Abstract: What are the mechanisms by which groups with certain opinions gain public voice and force others holding a different view into silence? And how does social media play into this? Drawing on recent neuroscientific insights into the processing of social feedback, we develop a theoretical model that allows us to address these questions. The model captures phenomena described by the spiral of silence theory of public opinion, provides a mechanism-based foundation for it, and in this way allows more general insight into how different group structures relate to different regimes of collective opinion expression. Even strong majorities can be forced into silence if a minority acts as a cohesive whole. The proposed framework of social feedback theory (SFT) highlights the need for sociological theorising to understand the societal-level implications of findings in social and cognitive neuroscience.
Tiphaine Viard
Matthieu Latapy
Robin Lamarche-Perrin
In: Computer networks, 161 (2019), p. 197-209
DOI: 10.1016/j.comnet.2019.07.002
Abstract: This paper aims at precisely detecting and identifying anomalous events in IP traffic. To this end, we adopt the link stream formalism, which properly captures temporal and structural features of the data. Within this framework, we focus on finding anomalous behaviours with respect to the degree of IP addresses over time, i.e. the number of distinct IP addresses with which they interact over time. Due to diversity in IP profiles, this feature is typically distributed heterogeneously, preventing us from directly finding anomalies. To deal with this challenge, we design a method to detect outliers as well as precisely identify their cause in a sequence of similar heterogeneous distributions. We apply it to several IP traffic captures and we show that it succeeds in detecting relevant patterns in terms of anomalous network activity.
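As an illustration of the kind of outlier detection involved (not the authors' actual method), the Python sketch below computes per-window degrees of IP addresses and flags address-window pairs that deviate strongly from each address's typical degree, using a robust median/MAD rule.

```python
import numpy as np
from collections import defaultdict

def degree_per_window(events, window):
    """events: iterable of (t, src_ip, dst_ip). Returns {window_id: {ip: degree}},
    where degree counts the distinct IPs an address interacts with in that window."""
    neigh = defaultdict(set)
    for t, u, v in events:
        w = int(t // window)
        neigh[(w, u)].add(v)
        neigh[(w, v)].add(u)
    deg = defaultdict(dict)
    for (w, ip), s in neigh.items():
        deg[w][ip] = len(s)
    return deg

def flag_outliers(deg, k=5.0):
    """Flag (window, ip, degree) triples whose degree deviates from that ip's
    typical degree across windows by more than k robust deviations."""
    history = defaultdict(list)
    for w, d in deg.items():
        for ip, x in d.items():
            history[ip].append(x)
    flags = []
    for w, d in deg.items():
        for ip, x in d.items():
            med = np.median(history[ip])
            mad = np.median(np.abs(np.array(history[ip]) - med)) or 1.0
            if abs(x - med) / mad > k:
                flags.append((w, ip, x))
    return flags
```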
Robin Lamarche-Perrin
ARXIV: https://arxiv.org/abs/1906.02541
Abstract: On the social network Twitter, users can interact with each other and spread information via retweets. These millions of interactions may result in media events whose influence goes beyond the Twitter platform. In this paper, we thoroughly explore interactions to provide a better understanding of the emergence of certain trends. First, we consider an interaction on Twitter to be a triplet (s,a,t) meaning that user s, called the spreader, has retweeted a tweet of user a, called the author, at time t. We model this set of interactions as a data cube with three dimensions: spreaders, authors and time. Then, we provide a method which builds different contexts, where a context is a set of features characterizing the circumstances of an event. Finally, these contexts allow us to find relevant unexpected behaviors, according to several dimensions and various perspectives: a user whose behavior during a given hour is abnormal compared to its usual behavior, a relationship between two users which is abnormal compared to all other relationships, etc. We apply our method to a set of retweets related to the 2017 French presidential election and show that one can build interesting insights regarding political organization on Twitter.
Sven Banisch
Paul Van Eecke
Katrien Beuls
ARXIV: https://arxiv.org/abs/1912.01252
Abstract: News website comment sections are spaces where potentially conflicting opinions and beliefs are voiced. Addressing questions of how to study such cultural and societal conflicts through technological means, the present article critically examines possibilities and limitations of machine-guided exploration and potential facilitation of on-line opinion dynamics. These investigations are guided by a discussion of an experimental observatory for mining and analyzing opinions from climate change-related user comments on news articles from the theguardian.com. This observatory combines causal mapping methods with computational text analysis in order to mine beliefs and visualize opinion landscapes based on expressions of causation. By (1) introducing digital methods and open infrastructures for data exploration and analysis and (2) engaging in debates about the implications of such methods and infrastructures, notably in terms of the leap from opinion observation to debate facilitation, the article aims to make a practical and theoretical contribution to the study of opinion dynamics and conflict in new media environments.
Massimo Warglien
Simon Levis Sullam
Deborah Paci
In: Proceedings of the 1st international workshop on computational approaches to historical language change : Florence, Italy ; August 2, 2019 / Nina Tahmasebi... (eds.)
Stroudsburg, PA : Association for Computational Linguistics, 2019. - P. 115-125
DOI: 10.18653/v1/W19-4715 ARXIV: https://arxiv.org/abs/1906.01440
Abstract: We investigate some aspects of the history of antisemitism in France, one of the cradles of modern antisemitism, using diachronic word embeddings. We constructed a large corpus of French books and periodical issues that contain a keyword related to Jews and performed a diachronic word embedding over the 1789-1914 period. We studied the changes over time in the semantic spaces of 4 target words and performed embedding projections over 6 streams of antisemitic discourse. This allowed us to track the evolution of antisemitic bias in the religious, economic, socio-political, racial, ethical and conspiratorial domains. Projections show a trend of growing antisemitism, especially in the years starting in the mid-1880s and culminating in the Dreyfus affair. Our analysis also allows us to highlight the peculiar adverse bias towards Judaism in the broader context of other religions.
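The embedding-projection step can be illustrated compactly: build a bias direction from seed terms and measure, per time slice, how strongly a target word aligns with it. The seed-term averaging and cosine projection below are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def direction(model, positive, negative):
    """Bias axis: mean vector of the 'positive' seed terms minus mean of the 'negative' ones."""
    pos = np.mean([model[w] for w in positive if w in model], axis=0)
    neg = np.mean([model[w] for w in negative if w in model], axis=0)
    d = pos - neg
    return d / np.linalg.norm(d)

def projection_over_time(models_by_period, target, positive, negative):
    """For each time-sliced embedding model (dict-like word -> vector),
    return the cosine of the target word with the bias axis of that period."""
    series = {}
    for period, model in models_by_period.items():
        if target not in model:
            continue
        v = model[target] / np.linalg.norm(model[target])
        series[period] = float(v @ direction(model, positive, negative))
    return series
```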
Roberto Navigli
In: Proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing (EMNLP-IJCNLP) : November 3-7, 2019 Hong Kong, China
Stroudsburg, PA, USA : Association for Computational Linguistics, 2019. - P. 88-99
DOI: 10.18653/v1/D19-1009 LINK: https://www.researchgate.net/publication/336995692
Abstract: Game-theoretic models, thanks to their intrinsic ability to exploit contextual information, have been shown to be particularly suited for the Word Sense Disambiguation task. They represent ambiguous words as the players of a non-cooperative game and their senses as the strategies that the players can select in order to play the games. The interaction among the players is modeled with a weighted graph and the payoff as an embedding similarity function that the players try to maximize. The impact of the word and sense embedding representations in the framework has been tested and analyzed extensively: experiments on standard benchmarks show state-of-the-art performance and different tests hint at the usefulness of using disambiguation to obtain contextualized word representations.
Robin Lamarche-Perrin
LINK: https://hal.archives-ouvertes.fr/hal-02187224
Abstract: Social research on public opinion has been affected by the recent deluge of new digital data on the Web, from blogs and forums to Facebook pages and Twitter accounts. This fresh type of information useful for mining opinions is emerging as an alternative to traditional techniques, such as opinion polls. Firstly, by surveying the state of the art of studies of political opinion based on Twitter data, this paper aims at identifying the relationship between the chosen data analysis method and the definition of political opinion implied in these studies. Secondly, it aims at investigating the feasibility of performing multiscale analysis in digital social research on political opinion by addressing the merits of several methodological techniques, from content-based to interaction-based methods, from statistical to semantic analysis, from supervised to unsupervised approaches. The end result of such an approach is to identify future trends in social science research on political opinion.
In: The SAGE handbook of web history / Niels Brügger... (eds.)
Los Angeles : SAGE, 2019. - P. 42-56
LINK: https://www.researchgate.net/publication/327403018
Abstract: Since the founding of the Internet Archive in the mid-1990s, approaches to Web archiving have evolved from striving to save all websites to focusing efforts on those dedicated to riveting events (elections and disasters), national heritage and, most recently, the self in social media. Each approach implies or affords a certain historiography: site-biographical, event-based, national and autobiographical (or selfie) history writing. Having proposed a periodization of the history of web archiving and the kinds of histories implied by each period's dominant approach, the article turns to the so-called 'crisis' in scholarly web archiving use, and proposes a methodological imagination to address it. Among the digital methods put forward to repurpose existing web archives, one may make screencast documentaries about the history of the web, create thematic collections and query them for social history purposes, conjure a past state of the web through historical hyperlink analysis and discover missing materials, and finally examine websites' underlying code, allowing for the study of tracking over time. In all, the piece calls for inventive methods to invite the further use of web archives.
Jens Nevens
Paul Van Eecke
Katrien Beuls
LINK: https://arxiv.org/abs/2004.09218
Abstract: The question of how an effective and efficient communication system can emerge in a population of agents that need to solve a particular task attracts more and more attention from researchers in many fields, including artificial intelligence, linguistics and statistical physics. A common methodology for studying this question consists of carrying out multi-agent experiments in which a population of agents takes part in a series of scripted and task-oriented communicative interactions, called 'language games'. While each individual language game is typically played by two agents in the population, a large series of games allows the population to converge on a shared communication system. Setting up an experiment in which a rich system for communicating about the real world emerges is a major enterprise, as it requires a variety of software components for running multi-agent experiments, for interacting with sensors and actuators, for conceptualising and interpreting semantic structures, and for mapping between these semantic structures and linguistic utterances. The aim of this paper is twofold. On the one hand, it introduces a high-level robot interface that extends the Babel software system, presenting for the first time a toolkit that provides flexible modules for dealing with each subtask involved in running advanced grounded language game experiments. On the other hand, it provides a practical guide to using the toolkit for implementing such experiments, taking a grounded colour naming game experiment as a didactic example.
Roland Mühlenbernd
In: Games, 10 (2019) 1, 5
DOI: 10.3390/g10010005
Abstract: We study a model where agents face a continuum of two-player games and categorize them into a finite number of situations to make sense of their complex environment. Agents need not share the same categorization. Each agent can cooperate or defect, conditional on the perceived category. The games are fully ordered by the strength of the temptation to defect and break joint cooperation. In equilibrium agents share the same categorization, but achieve less cooperation than if they could perfectly discriminate games. All the equilibria are evolutionarily stable, but stochastic stability selects against cooperation. We model agents' learning when they imitate successful players over similar games, but lack any information about the opponents' categorizations. We show that imitation conditional on reaching an intermediate aspiration level leads to a shared categorization that achieves higher cooperation than under perfect discrimination.
Etienne Toureille
Claude Grasland
In: Socio-anthropologie : revue interdisciplinaire de sciences sociales, 40 (2019), p. 181-199
DOI: 10.4000/socio-anthropologie.6235
Abstract: While the agenda-setting term in discussions of international mobility in 2015 was 'refugee crisis', the absence of any abrupt change in the migration system suggests that the 'crisis' stemmed from media representations. Taking a corpus of online press materials with a global and multilingual scope, our analysis practises reading at a distance and a geographical approach in order to broach the spatial and temporal dimensions of the media's highlighting of migration. The temporal variations in information flow concerning migration prove to be linked to quantitative changes in the spatial content of the representations of the migratory situation (the geographical coverage concentrated on the EU and the Eastern Mediterranean). Due to these links, examined here with attention to the production of geographical imaginaries and to differences of scale, a global geopolitical upheaval in migration was identified.
Clémence Magnien
Tiphaine Viard
In: Temporal network theory / Petter Holme... (eds.)
Cham : Springer, 2019. - P. 49-64
(Computational social sciences)
DOI: 10.1007/978-3-030-23495-9_3 ARXIV: https://arxiv.org/abs/1906.04840
Abstract: We recently introduced a formalism for the modeling of temporal networks, which we call stream graphs. It emphasizes the streaming nature of data and allows rigorous definitions of many important concepts generalizing classical graphs. This includes in particular size, density, clique, neighborhood, degree, clustering coefficient, and transitivity. In this contribution, we show that, like graphs, stream graphs may be extended to cope with bipartite structures, with node and link weights, or with link directions. We review the main bipartite, weighted or directed graph concepts proposed in the literature, we generalize them to the cases of bipartite, weighted, or directed stream graphs, and we show that the obtained concepts are consistent with graph and stream graph ones. This provides a formal ground for an accurate modeling of the many temporal networks that have one or several of these features.
Clémence Magnien
Fabien Tarissan
In: IEEE transactions on network science and engineering, 6 (2019) 4, p. 940-951
DOI: 10.1109/TNSE.2018.2880344 LINK: https://hal.archives-ouvertes.fr/hal-01925647
Abstract: For a long time, researchers have worked on defining different metrics able to characterize the importance of nodes in static networks. Recently, researchers have introduced extensions that consider the dynamics of networks. These extensions study the time-evolution of the importance of nodes, an important question that has so far received little attention in the context of temporal networks. They follow different approaches for evaluating a node's importance at a given time, and the value of each approach remains difficult to assess. In order to study this question in more depth, we compare in this paper a method we recently introduced to three other existing methods. We use several datasets of different nature, and show and explain how these methods capture different notions of importance. We also show that in some cases it might be meaningless to try to identify nodes that are globally important. Finally, we highlight the role of inactive nodes, which can still be important as relays for future communications.
Robin Lamarche-Perrin
In: Complex networks X : proceedings of the 10th conference on complex networks ; CompleNet 2019 / Sean Cornelius... (eds.)
Cham : Springer, 2019. - P. 97-109
(Springer proceedings in complexity)
DOI: 10.1007/978-3-030-14459-3_8 LINK: https://hal.archives-ouvertes.fr/hal-02085410/
Abstract: Heterogeneous information networks (HINs) are abstract representations of systems composed of multiple types of entities and their relations. Given a pair of nodes in a HIN, this work aims at recovering the exact weight of the link incident to these two nodes, knowing some other links present in the HIN. This weight is approximated by a linear combination of probabilities, the results of path-constrained random walks, i.e., random walks performed on the HIN in which the walker is forced to follow only a specific sequence of node types and edge types, commonly called a meta path. This method is general enough to compute the link weight between any types of nodes. Experiments on Twitter data show the applicability of the method.
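A path-constrained random walk of the kind mentioned above can be sketched in a few lines: the walker only moves to neighbours whose type matches the next entry of the meta path, and the resulting reachability probabilities can then feed a learned linear combination. The graph representation and the uniform transition choice below are simplifying assumptions.

```python
from collections import defaultdict

def meta_path_probabilities(adj, node_type, source, meta_path):
    """Probability of reaching each node from `source` by following `meta_path`,
    a sequence of node types, with uniform choices among type-matching neighbours.
    adj: {node: set(neighbours)}, node_type: {node: type}."""
    if node_type[source] != meta_path[0]:
        return {}
    probs = {source: 1.0}
    for next_type in meta_path[1:]:
        nxt = defaultdict(float)
        for node, p in probs.items():
            allowed = [v for v in adj[node] if node_type[v] == next_type]
            for v in allowed:
                nxt[v] += p / len(allowed)
        probs = dict(nxt)
    return probs
```

A predicted link weight would then be a weighted sum of such probabilities over several meta paths, with weights learned from known links.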
Marco LiCalzi
In: Econometrica : journal of the Econometric Society, 87 (2019) 3, p. 837-865
DOI: 10.3982/ECTA13673
Abstract: We revisit the Nash bargaining model and axiomatize a procedural solution that maximizes the probability of successful bargaining. This probability-based approach nests both the standard and the ordinal Nash solution, and yet need not assume that bargainers have preferences over lotteries or that choice sets are convex. We consider both mediator-assisted bargaining and standard unassisted bargaining. We solve a long-standing puzzle and offer a natural interpretation of the product operator underlying the Nash solution. We characterize other known solution concepts, including the egalitarian and the utilitarian solutions.
Eckehard Olbrich
In: The journal of mathematical sociology, 43 (2019) 2, p. 76-103
DOI: 10.1080/0022250X.2018.1517761 ARXIV: https://arxiv.org/abs/1704.02890
Abstract: We explore a new mechanism to explain polarization phenomena in opinion dynamics. The model is based on the idea that agents evaluate alternative views on the basis of the social feedback obtained on expressing them. High support for the favored, and therefore expressed, opinion in the social environment is treated as positive social feedback which reinforces the value associated with this opinion. In this paper we concentrate on the model with dyadic communication and encounter probabilities defined by an unweighted, time-homogeneous network. The model captures polarization dynamics more plausibly compared to bounded confidence opinion models and avoids the extensive opinion flipping usually present in binary opinion dynamics. We perform systematic simulation experiments to understand the role of network connectivity for the emergence of polarization.
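A minimal sketch of such a social-feedback update, assuming a simple two-opinion setting: an agent expresses its currently favoured opinion to a random neighbour and reinforces the value of that opinion depending on whether the neighbour agrees. The network, parameters, and exact update rule are illustrative assumptions rather than the paper's specification.

```python
import random

def simulate(adj, n_agents, steps=100000, alpha=0.05, seed=1):
    """adj: {agent: list of neighbours}. Each agent keeps a value for each of the
    two opinions (+1 / -1) and expresses the one it currently values more."""
    rng = random.Random(seed)
    value = {i: {+1: rng.uniform(-0.1, 0.1), -1: rng.uniform(-0.1, 0.1)}
             for i in range(n_agents)}
    favoured = lambda i: max(value[i], key=value[i].get)
    for _ in range(steps):
        i = rng.randrange(n_agents)
        if not adj[i]:
            continue
        j = rng.choice(adj[i])
        o = favoured(i)
        feedback = 1.0 if favoured(j) == o else -1.0     # approval or disapproval
        value[i][o] += alpha * (feedback - value[i][o])  # reinforcement update
    return [favoured(i) for i in range(n_agents)]
```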
Lionel Tabourier
Matthieu Latapy
In: Journal of interdisciplinary methodologies and issues in sciences, 5 (2019), p. 1-26
DOI: 10.18713/JIMIS-150719-5-3 ARXIV: https://arxiv.org/abs/1804.01465
Abstract: Capturing both the structural and temporal aspects of interactions is crucial for many real-world datasets, such as contacts between individuals. Using the link stream formalism to capture the dynamics of the system, we tackle the issue of activity prediction in link streams, that is to say, predicting the number of links occurring during a given period of time, and we present a protocol that takes advantage of the temporal and structural information contained in the link stream. Using a supervised learning method, we are able to model the dynamics of our system to improve the prediction. We investigate the behavior of our algorithm and the crucial elements affecting the prediction. By introducing different categories of pairs of nodes, we are able to improve the quality as well as increase the diversity of our prediction.
Petter Törnberg
In: Biological theory, 14 (2019) 2, p. 86-102
DOI: 10.1007/s13752-018-0313-y
Abstract: Despite remarkable empirical and methodological advances, our theoretical understanding of the evolutionary processes that made us human remains fragmented and contentious. Here, we make the radical proposition that the cultural communities within which Homo emerged may be understood as a novel exotic form of organism. The argument begins from a deep congruence between robust features of Pan community life cycles and protocell models of the origins of life. We argue that if a cultural tradition, meeting certain requirements, arises in the context of such a "social protocell," the outcome will be an evolutionary transition in individuality whereby traditions and hominins coalesce into a macroscopic bio-socio-technical system, with an organismal organization that is culturally inherited through irreversible fission events on the community level. We refer to the resulting hypothetical evolutionary individual as a "sociont." The social protocell provides a preadapted source of alignment of fitness interests that addresses a number of open questions about the origins of shared adaptive cultural organization, and the derived genetic (and highly unusual) adaptations that support them. Also, social cooperation between hominins is no longer in exclusive focus since cooperation among traditions becomes salient in this model. This provides novel avenues for explanation. We go on to hypothesize that the fate of the hominin in such a setting would be mutualistic coadaptation into a part-whole relation with the sociont, and we propose that the unusual suite of derived features in Homo is consistent with this hypothesis.
Sharwin Rezagholi
In: Proceedings of the 3rd international conference on complexity, future information systems and risk : COMPLEXIS 2018 ; March 20-21, 2018, in Funchal, Madeira, Portugal ; Volume 1
Setúbal (Portugal) : SCITEPRESS, 2018. - P. 113-119
DOI: 10.5220/0006761601130119 ARXIV: https://arxiv.org/abs/1805.12083
Abstract: This paper employs a novel method for the empirical analysis of political discourse and develops a theoretical model that demonstrates dynamics comparable with the empirical data. Applying a set of binary text classifiers based on convolutional neural networks, we label statements in the political programs of the Democratic and the Republican Party in the United States. Extending the framework of the Colonel Blotto game by a stochastic activation structure, we show that, under a simple learning rule, parties show temporal dynamics that resemble the empirical data.
Tiphaine Viard
Matthieu Latapy
Robin Lamarche-Perrin
In: TMA conference 2018 : proceedings of the 2nd network traffic measurement and analysis conference ; Vienna, Austria, June 26-29, 2018
Piscataway, NJ : IEEE, 2018. - P. 1-8
DOI: 10.23919/TMA.2018.8506575 ARXIV: https://arxiv.org/abs/1906.02524 LINK: http://tma.ifip.org/2018/wp-content/uploads/sites/3/2018/06/tma2018_paper44.pdf
Abstract: Precise detection and identification of anomalous events in IP traffic are crucial in many applications. This paper intends to address this task by adopting the link stream formalism which properly captures temporal and structural features of the data. Within this framework we focus on finding anomalous behaviours with respect to the degree of IP addresses over time. Due to diversity in IP profiles, this feature is typically distributed heterogeneously, preventing us from directly finding anomalies. To deal with this challenge, we design a method to detect outliers as well as precisely identify their cause in a sequence of similar heterogeneous distributions. We apply it to a MAWI capture of IP traffic and we show that it succeeds at detecting relevant patterns in terms of anomalous network activity.
Clémence Magnien
Matthieu Latapy
In: Information processing letters, 133 (2018), p. 44-48
DOI: 10.1016/j.ipl.2018.01.006 ARXIV: https://arxiv.org/abs/1712.06970
Abstract: Link streams model interactions over time, and a clique in a link stream is defined as a set of nodes and a time interval such that all pairs of nodes in this set interact permanently during this time interval. This notion was introduced recently in the case where interactions are instantaneous. We generalize it to the case of interactions with durations and show that the instantaneous case actually is a particular case of the case with durations. We propose an algorithm to detect maximal cliques that improves on our previous one for instantaneous link streams, and performs better than the state-of-the-art algorithms in several cases of interest.
Raphael Fournier-S'niehotta
Clémence Magnien
Matthieu Latapy
In: Complex networks IX : proceedings of the 9th conference on complex networks ; CompleNet 2018 / Sean Cornelius... (eds.)
Cham : Springer, 2018. - P. 233-241
(Springer proceedings in complexity)
DOI: 10.1007/978-3-319-73198-8_20 ARXIV: https://arxiv.org/abs/1710.07107
Abstract: Studying IP traffic is crucial for many applications. We focus here on the detection of (structurally and temporally) dense sequences of interactions that may indicate botnets or coordinated network scans. More precisely, we model a MAWI capture of IP traffic as a link stream, i.e., a sequence of interactions (t1,t2,u,v) meaning that devices u and v exchanged packets from time t1 to time t2. This traffic is captured on a single router and so has a bipartite structure: links occur only between nodes in two disjoint sets. We design a method for finding interesting bipartite cliques in such link streams, i.e., two sets of nodes and a time interval such that all nodes in the first set are linked to all nodes in the second set throughout the time interval. We then explore the bipartite cliques present in the considered trace. Comparison with the MAWILab classification of anomalous IP addresses shows that the found cliques succeed in detecting anomalous network activity.
Katrien Beuls
In: Zeitschrift für Anglistik und Amerikanistik, 66 (2018) 3, p. 341-355
Abstract: Computational construction grammar aims to provide concrete processing models that operationalise construction grammar accounts of the different aspects of language. This paper discusses the computational mechanisms that allow construction grammar models to exhibit, to a certain extent, the creativity and inventiveness that is observed in human language use. It addresses two main types of language-related creativity. The first type concerns the 'free combination of constructions,' which gives rise to the open-endedness of language. The second type concerns the 'appropriate violation of usual constraints' that permits language users to go beyond what is possible when adhering to the usual constraints of the language, and be truly creative by relaxing these constraints and by introducing novel constructions. All mechanisms and examples discussed in this paper are fully operationalised and implemented in Fluid Construction Grammar.
Anton Törnberg
In: Big data and society, 5 (2018) 2, p. 1-16
Abstract: This paper reviews the contemporary discussion on the epistemological and ontological effects of Big Data within social science, observing an increased focus on relationality and complexity, and a tendency to naturalize social phenomena. The epistemic limits of this emerging computational paradigm are outlined through a comparison with the discussions in the early days of digitalization, when digital technology was primarily seen through the lens of dematerialization, and as part of the larger processes of 'postmodernity'. Since then, the online landscape has become increasingly centralized, and the 'liquidity' of dematerialized technology has come to empower online platforms in shaping the conditions for human behavior. This contrast between the contemporary epistemological currents and the previous philosophical discussions brings to the fore contradictions within the study of digital social life: While qualitative change has become increasingly dominant, the focus has gone towards quantitative methods; while the platforms have become empowered to shape social behavior, the focus has gone from social context to naturalizing social patterns; while meaning is increasingly contested and fragmented, the role of hermeneutics has diminished; while platforms have become power hubs pursuing their interests through sophisticated data manipulation, the data they provide is increasingly trusted to hold the keys to understanding social life. These contradictions, we argue, are partially the result of a lack of philosophical discussion on the nature of social reality in the digital era; only from a firm metatheoretical perspective can we avoid forgetting the reality of the system under study as we are affected by the powerful social life of Big Data.
LINK: https://www.researchgate.net/publication/327792197
Abstract: This essay places contemporary platform capitalism in a larger historical trajectory, emphasizing the transformations of control and power with the progression of modernity. It argues that the liquidity of modernity means that control becomes increasingly organized in lower ontological strata: rather than top-down command-and-control, control paradoxically appears bottom-up, with the outcomes resulting from a set of underlying rules seemingly only accidentally playing into the hands of certain interests. Platform modernity is the result of digitalization coming into this history, enabling the rapid design of emergent control. This puts new epistemological demands on critique, as the study of control increasingly implies disentangling the complex causal pathways of emergent mechanisms. Complexity Science, hailed for its capacity precisely for such disentanglement, has so far proven an inadequate epistemological vehicle for critique, criticized for its tendency to naturalize social phenomena while disregarding conflict and power differentials. This essay suggests that a path to a much-needed critical complexity science, aimed at laying bare the hidden mechanisms of emergent power, requires repeating Cybernetics' move to a second order. For only by incorporating reflexivity can Complexity Science hope to grasp, critique, and change power relations.
In: Review of social economy, 76 (2018) 4, p. 509-534
DOI: 10.1080/00346764.2018.1480796
Abstract: Contemporary economic theory has entered into an era of unprecedented pluralism. Convincing arguments have been presented for the integration of this pluralism, the possibilities for which however rest on questions of ontology. This paper looks at two hubs of pluralist research, complexity economics and heterodox economics, to evaluate the possibilities for an integration. Complexity economics constitutes an ontological broadening of neoclassicism, but is based on an implicit and incomplete social ontology. Heterodox economics has been argued to be systematized by a critical realist ontology, but has been criticized for limits in the operationalization of this ontology. An ontological merge is sketched, resulting in Complex Realist economics, which is argued to be capable of resolving the 'confused state' of complexity economics, providing the heterodox tradition with the necessary methodologies to study the phenomena that it theorizes, and constituting a consistent ontological foundation for an 'interested pluralism'.
Petter Törnberg
In: PLOS ONE, 13 (2018) 9, e0203958
DOI: 10.1371/journal.pone.0203958
Abstract: The viral spread of digital misinformation has become so severe that the World Economic Forum considers it among the main threats to human society. This spread has been suggested to be related to the similarly problematized phenomenon of 'echo chambers', but the causal nature of this relationship has proven difficult to disentangle due to the connected nature of social media, whose causality is characterized by complexity, non-linearity and emergence. This paper uses a network simulation model to study a possible relationship between echo chambers and the viral spread of misinformation. It finds an 'echo chamber effect': the presence of an opinion- and network-polarized cluster of nodes in a network contributes to the diffusion of complex contagions, and there is a synergetic effect between opinion and network polarization on the virality of misinformation. The echo chamber effect likely arises because echo chambers form the initial bandwagon for diffusion. These findings have implications for the study of the media logic of new social media.
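The flavour of such a simulation can be conveyed with a toy threshold ('complex contagion') model on a network containing one densely connected cluster; everything below (network construction, threshold, seeding) is an assumption for illustration, not the paper's model.

```python
import random
import networkx as nx

def complex_contagion(G, seeds, threshold=0.35, rounds=50):
    """A node adopts once at least `threshold` of its neighbours have adopted."""
    adopted = set(seeds)
    for _ in range(rounds):
        new = {v for v in G if v not in adopted and G.degree(v) > 0
               and sum(u in adopted for u in G.neighbors(v)) / G.degree(v) >= threshold}
        if not new:
            break
        adopted |= new
    return adopted

random.seed(0)
# A crude 'echo chamber': one dense, tightly knit cluster inside a sparser random graph.
G = nx.gnp_random_graph(200, 0.015, seed=0)
chamber = list(range(30))
G.add_edges_from((u, v) for u in chamber for v in chamber
                 if u < v and random.random() < 0.9)

print("adopters when seeded inside the chamber :",
      len(complex_contagion(G, chamber[:12])))
print("adopters when seeded outside the chamber:",
      len(complex_contagion(G, list(range(150, 162)))))
```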
Justus Uitermark
Petter Törnberg
LINK: https://www.researchgate.net/publication/325812908
Abstract: This study examines differences in endorsement networks on Twitter amongst parliamentarians in 23 different countries. It draws upon a database that tracks all Twitter activity and Twitter interactions of members of parliament from 23 different countries. This article serves to introduce this dataset and provide a first look on the patterns that it reveals. We focus on the network patterns that emerge from the politicians' retweets, and find that generally speaking, politicians are fiercely loyal to their party: they mostly retweet fellow party members. As a consequence, clusters identified through community detection generally coincide with party membership. However, there are also important variations between the countries in terms of the degree of partisanship and patterns of conflict. We construct a typology to capture such differences in political coalitions and divisions.
Fabien Tarissan
Jean-Loup Guillaume
In: Complex networks and their applications VI : proceedings of complex networks 2017 ; the sixth international conference on complex networks and their applications / Chantal Cherifi... (eds.)
Cham : Springer, 2018. - P. 278-289
(Studies in computational intelligence ; 689)
DOI: 10.1007/978-3-319-72150-7_23 LINK: https://hal.archives-ouvertes.fr/hal-01657093/
Abstract: This study proposes ComSim, a new algorithm to detect communities in bipartite networks. This approach generates a partition of the nodes by relying on the similarity between nodes in terms of their links towards other nodes. In order to show the relevance of this approach, we implemented and tested the algorithm on 2 small datasets equipped with a ground-truth partition of the nodes. It turns out that, compared to 3 baseline algorithms used in the context of bipartite graphs, ComSim proposes the best communities. In addition, we tested the algorithm on a large-scale network. Results show that ComSim performs well, with running times close to those of Louvain. Besides, a qualitative investigation of the communities detected by ComSim reveals that it proposes more balanced communities.
Natalia Sánchez-Querubin
Richard Rogers
In: Social media and society, 4 (2018) 1, p. 1-13
Abstract: The article builds upon critical border studies for the study of the European migration crisis that take into account the digital, both in terms of telecommunications infrastructure and media platforms. In putting forward an approach to migration studies with digital devices, here the emphasis is shifted from 'bordering' to 'routing'. First, the current analytical situation is sketched as one where the 'connective' route is contrasted to the 'securitised' one, made by European policy and monitoring software. Subsequently, we ask, how are connective migrant routes being made into accounts and issues in social media? Two case studies are presented, each describing routing in terms of the distinctive accounts made of migrant journeying. In the first, routes are seen from the point of view of its curation in Getty Images, and in particular of the images privileged by its social layer. In the image collection, the 'sanitised route' (as we call it) gradually leads to a soft landing in Europe, cleansed of anti-refugee sentiment. In the second, we ask how camps and borders are problematized from the point of view of the traveler using TripAdvisor. In the 'interrupted tourist route,' would-be visitors are concerned with a Europe made unsafe, thereby rerouting their own journeys on the basis of social media commenting. We conclude with reflection about the advantages of employing social media in migration and border studies for the study of 'media journeys' as routes from multiple vantage points, developing the idea that route-work also can be understood as platform-work.
Robin Lamarche-Perrin
In: Revue Francaise de Sociologie, 59 (2018) 3, p. 507-532
DOI: 10.3917/rfs.593.0507 LINK: https://hal.archives-ouvertes.fr/hal-02188391
Abstract: Social research on public opinion has been affected by the recent deluge of new digital data on the Web, from blogs and forums to Facebook pages and Twitter accounts. This fresh type of information useful for mining opinions is emerging as an alternative to traditional techniques, such as opinion polls. Firstly, by surveying the state of the art of studies of political opinion based on Twitter data, this paper aims at identifying the relationship between the chosen data analysis method and the definition of political opinion implied in these studies. Secondly, it aims at investigating the feasibility of performing multiscale analysis in digital social research on political opinion by addressing the merits of several methodological techniques, from content-based to interaction-based methods, from statistical to semantic analysis, from supervised to unsupervised approaches. The end result of such an approach is to identify future trends in social science research on political opinion.
Laurent Beauguitte
In: Quantitative semiotic analysis / Dario Compagno (ed.)
Cham : Springer, 2018. - P. 171-189
(Lecture notes in morphogenesis)
DOI: 10.1007/978-3-319-61593-6_9
Abstract: Together with politics, international news is often considered to be one of the most prestigious fields of journalism. However, making international news attractive is increasingly difficult. Today, one of the main strategies employed by journalists consists in mentioning individuals in the news. The reader is supposed to identify with the mentioned individual(s), and the story is expected to be more successful as a consequence. This paper investigates the interest of using quali-quantitative content analysis to study the semiotics of international news. We analyse six daily newspapers from three developed countries and examine three complementary aspects of the relation between individuals and international news: the level of personification, the type of individual mentioned and the geographical scale to which individuals are connected.
In: International journal of communication, 12 (2018), p. 450-472
LINK: http://ijoc.org/index.php/ijoc/article/view/6407
Abstract: Klout scores and similar measures are often called 'vanity metrics' because they measure and display performance in (what is referred to as) the 'success theater' of social media. The notion of vanity metrics implies a critique of metrics concerning both the object of measurement and their capacity to measure unobtrusively or only to encourage performance. While discussing that critique, the article, however, focuses mainly on how one may consider reworking the metrics. In the research project I call 'critical analytics,' the proposal is to repurpose 'alt metrics' scores and other engagement measures for social research, and seek to measure the 'otherwise engaged,' or other modes of engagement (than vanity) in social media such as dominant voice, concern, commitment, positioning and alignment, thereby furnishing digital methods with a conceptual and applied research agenda concerning online metrics.
In: PArtecipazione e COnflitto : PACO = PArticipation and COnflict, 11 (2018) 2, p. 557-570
DOI: 10.1285/i20356609v11i2p557
Abstract: Social media data as source for empirical studies have recently come under renewed scrutiny, given the widespread deletion of Russian disinformation pages by Facebook as well as the suspension of Alt Right accounts by Twitter. Missing data is one issue, compounded by the fact that the 'archives' (CrowdTangle for Facebook and Gnip for Twitter) are also owned by the companies. Previously questions revolved around the extent to which corporate data collected for one purpose (e.g., advertising) could be employed by social science for another (e.g., political engagement). Social media data also could be said to be far from 'good data', since the platforms not only change and introduce new data fields ('reactions' on Facebook), but also increasingly narrow what is available to researchers for privacy reasons. Profound ethical issues were also put on display recently during the Cambridge Analytica scandal, as science became implicated in the subsequent 'locking down' of social media data by the corporations. How to approach social media data these days?
Fabien Tarissan
In: 2018 Fifth international conference on social networks analysis, management and security (SNAMS) : 15-18 October, 2018 ; Valencia, Spain
[S.l.] : IEEE, 2018. - P. 3-10
DOI: 10.1109/SNAMS.2018.8554895 LINK: https://hal.archives-ouvertes.fr/hal-01917792
Abstract: Whether through information ranking (e.g. search engines) or content recommendation (on social networks, for instance), algorithms are at the core of processes selecting which information is made visible. Those algorithmic choices have in turn a strong impact on users' activity and therefore on their access to information. This raises the question of measuring the quality of the choices made by algorithms and their impact on the users. As a first step in that direction, this paper presents a framework to analyze the diversity of the information accessed by the users. By depicting the activity of the users as a tripartite graph mapping users to products and products to categories, we analyze how categories catch users' attention and in particular how this attention is distributed. We then propose the (calibrated) Herfindahl diversity score as a metric quantifying the extent to which this distribution is diverse and representative of the existing categories. In order to validate this approach, we study a dataset recording the activity of users on online music platforms. We show that our score enables us to discriminate between very specific categories that capture dense and coherent sub-groups of listeners, and more generic categories that are distributed over a wider range of users. Besides, we highlight the effect of the volume of listening on users' attention and reveal a saturation effect above a certain threshold.
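The Herfindahl-based score admits a compact illustration: compute attention shares, sum their squares, and invert. The calibration mentioned in the abstract is not reproduced; the plain (uncalibrated) version below is an assumption about the general form.

```python
from collections import Counter

def herfindahl_diversity(category_counts):
    """category_counts: {category: attention volume} for one user.
    Returns the inverse Herfindahl index: 1 means all attention on one category,
    values close to the number of categories mean evenly spread attention."""
    total = sum(category_counts.values())
    if total == 0:
        return 0.0
    shares = [c / total for c in category_counts.values()]
    h = sum(s * s for s in shares)   # Herfindahl concentration index
    return 1.0 / h                   # effective number of categories

listening = Counter({"jazz": 120, "rock": 30, "classical": 10})
print(herfindahl_diversity(listening))   # about 1.66 'effective' categories
```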
Tiphaine Viard
Clémence Magnien
In: Social network analysis and mining, 8 (2018) 1, 61
DOI: 10.1007/s13278-018-0537-7 ARXIV: https://arxiv.org/abs/1710.04073 LINK: https://hal.archives-ouvertes.fr/hal-01665084
Abstract: Graph theory provides a language for studying the structure of relations, and it is often used to study interactions over time too. However, it poorly captures the combined temporal and structural nature of interactions, which calls for a dedicated formalism. In this paper, we generalize graph concepts in order to cope with both aspects in a consistent way. We start with elementary concepts like density, clusters, or paths, and derive from them more advanced concepts like cliques, degrees, clustering coefficients, or connected components. We obtain a language to directly deal with interactions over time, similar to the language provided by graphs to deal with relations. This formalism is self-consistent: usual relations between different concepts are preserved. It is also consistent with graph theory: graph concepts are special cases of the ones we introduce. This makes it easy to generalize higher-level objects such as quotient graphs, line graphs, k-cores, and centralities. This paper also considers discrete versus continuous time assumptions, instantaneous links, and extensions to more complex cases.
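To give a flavour of how graph concepts carry over, the generalization of density is often stated as follows in the stream-graph literature; the notation here is reconstructed rather than quoted from the paper.

```latex
% Density of a stream graph S = (T, V, W, E), with T_u the presence times of node u
% and T_{uv} the set of times at which u and v are linked (reconstructed notation):
\[
\delta(S) \;=\;
\frac{\sum_{uv \in V \otimes V} |T_{uv}|}{\sum_{uv \in V \otimes V} |T_u \cap T_v|}
\]
% i.e. the probability that two randomly chosen nodes are linked at a random time
% at which both are present.
```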
Justus Uitermark
Petter Törnberg
LINK: https://www.researchgate.net/publication/325721924
Abstract: This is a study on the diffusion of novel scientific ideas. We examine how scholarly communities mediate diffusion in the academic landscape. As a case study, we analyze the diffusion of a specific scientific idea, namely the 'Strength of Weak Ties' hypothesis, introduced by Granovetter in his 1973 paper. Using Web of Science data, we construct a network of scholars who referenced Granovetter's paper. By combining topic modeling, network analysis and close reading, we show that the diffusion network features communities of scholars who interpret and use Granovetter's hypothesis in distinct ways. Such communities collaboratively interpret Granovetter's hypothesis to amend it to their specific perspectives and interests. Our analysis further shows that communities are clustered around figureheads, i.e., scholars who are central within their communities and perform pivotal roles in translating the general hypothesis into their specific field. The larger implication of our study is that scientific ideas change as they spread. We argue that the methodology presented in this paper has potential beyond the scientific domain, particularly in the study of the diffusion of opinions, symbols, and ideas.
Clémence Magnien
Fabien Tarissan
LINK: https://hal.archives-ouvertes.fr/hal-01915209
Abstract: The ability to detect important nodes in temporal networks has been investigated lately. This has been a challenge on both the theoretical aspects as well as computational ones. In this study we propose and evaluate different strategies to detect nodes that have high temporal closeness.
Oana Balalau
Mauro Sozio
In: WWW '18 : proceedings of the 2018 World Wide Web conference ; Lyon, France ; April 23 - 27, 2018
[Geneva, Switzerland] : International World Wide Web Conference Committee, 2018. - P. 589-598
Abstract: Motivated by recent studies in the data mining community which require efficiently listing all k-cliques, we revisit the iconic algorithm of Chiba and Nishizeki and develop the most efficient parallel algorithm for this problem. Our theoretical analysis provides the best asymptotic upper bound on the running time of our algorithm for the case when the input graph is sparse. Our experimental evaluation on large real-world graphs shows that our parallel algorithm is faster than state-of-the-art algorithms, while boasting an excellent degree of parallelism. In particular, we are able to list all k-cliques (for any k) in graphs containing up to tens of millions of edges, as well as all 10-cliques in graphs containing billions of edges, within a few minutes and a few hours, respectively. Finally, we show how our algorithm can be employed as an effective subroutine for finding the k-clique core decomposition and approximate k-clique densest subgraphs in very large real-world graphs.
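A minimal sequential sketch in the spirit of Chiba and Nishizeki's recursion (node ordering, edge orientation, neighbourhood intersection); the paper's contribution is a heavily optimised parallel version, which this does not attempt to reproduce.

```python
# Sequential sketch of k-clique listing in the Chiba-Nishizeki spirit:
# rank nodes, orient edges towards higher-ranked nodes, then recursively
# intersect out-neighbourhoods. Not the paper's parallel implementation.
def list_k_cliques(adj, k):
    """adj: dict node -> set of neighbours (undirected). Yields k-cliques as tuples."""
    rank = {v: i for i, v in enumerate(sorted(adj, key=lambda v: len(adj[v])))}
    out = {v: {u for u in adj[v] if rank[u] > rank[v]} for v in adj}

    def extend(clique, candidates, depth):
        if depth == k:
            yield tuple(clique)
            return
        for v in list(candidates):
            yield from extend(clique + [v], candidates & out[v], depth + 1)

    for v in adj:
        yield from extend([v], out[v], 1)

adj = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3}}
print(list(list_k_cliques(adj, 3)))   # the four triangles of a 4-clique
```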
Rym Baccour
Matthieu Latapy
In: Complex networks and their applications VI : proceedings of complex networks 2017 ; the sixth international conference on complex networks and their applications / Chantal Cherifi... (eds.)
Cham : Springer, 2018. - P. 166-177
(Studies in computational intelligence ; 689)
DOI: 10.1007/978-3-319-72150-7_14 ARXIV: https://arxiv.org/abs/1710.08158 LINK: https://hal.archives-ouvertes.fr/hal-01617992
Abstract: Bitcoin is a cryptocurrency attracting a lot of interest from both the general public and researchers. There is an ongoing debate on the question of users' anonymity: while the Bitcoin protocol has been designed to ensure that the activity of individual users cannot be tracked, some methods have been proposed to partially bypass this limitation. In this article, we show how the Bitcoin transaction network can be studied using complex network analysis techniques, and in particular how community detection can be efficiently used to re-identify multiple addresses belonging to the same user.
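A rough sketch of the general idea of grouping addresses by community detection, using networkx on toy transaction data; the choice of algorithm and the data are illustrative only and do not reproduce the authors' pipeline.

```python
# Rough sketch (not the authors' pipeline): community detection as a heuristic
# for grouping Bitcoin addresses that interact densely with one another.
import networkx as nx
from networkx.algorithms import community

# Each edge is a transaction between two addresses (toy data): two dense
# groups of addresses joined by a single bridging transaction.
transactions = [("addr1", "addr2"), ("addr2", "addr3"), ("addr1", "addr3"),
                ("addr4", "addr5"), ("addr5", "addr6"), ("addr4", "addr6"),
                ("addr3", "addr4")]

G = nx.Graph(transactions)
clusters = community.greedy_modularity_communities(G)
for c in clusters:
    print(sorted(c))   # densely connected addresses end up grouped together
```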
Petter Törnberg
In: Futures : the journal of policy, planning and futures studies, 95 (2018), p. 118-138
DOI: 10.1016/j.futures.2017.11.001
Abstract: Traditional scientific policy approaches and tools are increasingly seen as inadequate, or even counter-productive, for many purposes. In response to these shortcomings, a new wave of approaches has emerged based on the idea that societal systems are irreducibly complex. The new categories that are thereby introduced - like 'complex' or 'wicked' - suffer, however, from a lack of shared understanding. We here aim to reduce this confusion by developing a meta-ontological map of types of systems that have the potential to 'overwhelm us': characteristic types of problems, attributions of function, manners of design and governance, and generating and maintaining processes and phenomena. This permits us, in a new way, to outline an inner anatomy of the motley collection of system types that we tend to call 'complex'. Wicked problems here emerge as the product of an ontologically distinct and describable type of system that blends dynamical and organizational complexity. The framework is intended to provide systematic meta-theoretical support for approaching complexity and wickedness in policy and design. We also point to a potential causal connection between innovation and wickedness as a basis for further theoretical improvement.
Gothenburg, Sweden : Chalmers University of Technology, 2017. - VI, 422 p.
(Doktorsavhandlingar vid Chalmers tekniska högskola ; 4215)
Also: Gothenburg, Chalmers University of Technology, Dissertation
ISBN 978-91-7597-534-4
LINK: https://research.chalmers.se/en/publication/247725
Abstract: This thesis engages with questions on the boundary between what has traditionally been understood as social and natural. The introductory essay contextualizes the specific contributions of the included papers by noting and exploring a reinvigoration of 'naturalism' (the notion of a continuity between the human realm and the rest of natural phenomena) under the banner of Complexity Science. This notion is examined explicitly by revisiting the age-old question of naturalism and connecting ideas in complexity science with the work of e.g. Roy Bhaskar, Mario Bunge, William Wimsatt, and David Lane. A philosophical foundation for a complexity science of societal systems is thereby sketched, taking the form of an integrative and methodologically pluralist 'complex realism'. The first two papers provide a theoretical perspective on the distinction between social and natural: Paper I notes that societal systems combine two qualities that are commonly referred to as complexity and complicatedness into an emergent quality that we refer to as 'wickedness', one that is fundamentally and irreducibly different from either quality in isolation. This explains the recalcitrance of societal systems to the powerful approaches that exist for dealing with both of these qualities in isolation, and implies that they indeed ought to be treated as a distinct class of systems. Paper II uses the plane spanned by complexity and complicatedness to categorize seven different system classes, providing a systematic perspective on the study of societal systems. The suggested approach to societal systems following from these conclusions is exemplified by three studies in different fields and empirical contexts. Paper III combines a number of theories that can be seen as responses to wickedness, in the form of evolutionary developmental theories and theories of societal change, to develop a synthetic theory for cultural evolution. Paper IV exemplifies how simulation can be integrated with social theory for the study of emergent effects in societal systems, contributing a network model to investigate how the structural properties of free social spaces impact the diffusion of collective mobilization. Paper V exemplifies how digital trace data analysis can be integrated with qualitative social science, by using topic modeling as a form of corpus map to aid critical discourse analysis, implying a view of formal methods as aids for qualitative exploration, rather than as part of a reductionist approach.
In: Internet histories : digital technology, culture and society, 1 (2017) 1/2, p. 160-172
DOI: 10.1080/24701475.2017.1307542
Abstract: Among the conceptual and methodological opportunities afforded by the Internet Archive, and more specifically the WayBack Machine, is the capacity to capture and 'play back' the history of a web page, most notably a website's homepage. These playbacks could be construed as 'website histories', distinct at least in principle from other uses of the Internet Archive such as 'digital history' and 'internet history'. In the following, common use cases for web archives are put forward in a discussion of digital source criticism. Thereafter, I situate website history within traditions in web historiography. The particular approach to website history introduced here is called 'screencast documentaries'. Building upon Jon Udell's pioneering screen-capturing work retelling the edit history of a Wikipedia page, I discuss overarching strategies for narrating screencast documentaries of websites, namely histories of the Web as seen through the changes to a single page, media histories as negotiations between new and old media, as well as digital histories made from scrutinising changes to the list of priorities at a tone-setting institution such as whitehouse.gov.
In: The datafied society : studying culture through data / Mirko Tobias Schäfer... (eds.)
Amsterdam : Amsterdam University Press, 2017. - P. 75-94
LINK: https://library.oapen.org/bitstream/handle/20.500.12657/31843/624771.pdf?sequence=1#page=76
Abstract: The chapter starts with a short summary of what we consider to be five central challenges concerning the recent move towards Digital Methods. We then interrogate David Berry's concept of 'digital Bildung' as a means of facing these challenges. Our goal in this discussion is, maybe paradoxically, to move the spotlight from 'the digital' and programming, to the plethora of concepts and knowledges mobilized in digital tools. To this end, we discuss three examples that allow us to both concretise and complicate the debate about what kind of skill set is needed by digital scholars.
Lionel Tabourier
Matthieu Latapy
In: Computational science and its applications ICCSA 2017 : 17th international conference, Trieste, Italy, July 3-6, 2017, proceedings, part II / Osvaldo Gervasi... (eds.)
Cham : Springer, 2017. - P. 84-97
(Lecture notes in computer science ; 10405)
DOI: 10.1007/978-3-319-62395-5_7 LINK: https://hal.archives-ouvertes.fr/hal-01550334
Abstract: Databases recording cattle exchanges offer unique opportunities to better understand and fight disease spreading. Most studies model contacts with (sequences of) networks, but this approach neglects important dynamical features of exchanges, which are known to play a key role in spreading. Here we use a fully dynamic modeling of contacts and empirically compare the spreading outbreaks it yields with those obtained with network approaches. We show that neglecting time information leads to significant overestimates of the actual sizes of spreading cascades, and that these sizes are much more heterogeneous than generally assumed. Our approach also makes it possible to study the speed of spreading, and we show that the observed speeds vary greatly, even for the same cascade size.
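A toy illustration (not the authors' model) of why discarding time information can overestimate spreading: a time-respecting SI process can only follow chronologically ordered contacts, whereas reachability on the aggregated network ignores their order.

```python
# Toy illustration (not the authors' model): time-respecting SI spreading
# versus reachability on the time-aggregated contact network.
def si_temporal(contacts, seed):
    """contacts: list of (t, u, v); processed in chronological order."""
    infected = {seed}
    for _, u, v in sorted(contacts):
        if u in infected or v in infected:
            infected.update((u, v))
    return infected

def si_static(contacts, seed):
    """Same contacts treated as a static graph: plain reachability from the seed."""
    adj = {}
    for _, u, v in contacts:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    seen, stack = {seed}, [seed]
    while stack:
        for w in adj.get(stack.pop(), ()):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

contacts = [(1, "b", "c"), (2, "a", "b")]   # b-c happens *before* a-b
print(si_temporal(contacts, "a"))           # {'a', 'b'}: c is not reachable in time
print(si_static(contacts, "a"))             # {'a', 'b', 'c'}: overestimate
```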
Maurice Tchuente
Matthieu Latapy
In: Proceedings of the 13th international conference on web information systems and technologies : Volume 1 WEBIST ; April 25-27, 2017, in Porto, Portugal / Tim A. Majchrzak... (eds.)
Setúbal, Portugal : SciTePress, 2017. - P. 268-275
DOI: 10.5220/0006288202680275 LINK: https://hal.archives-ouvertes.fr/hal-01500348
Abstract: Recommender systems are an answer to information overload on the web. They filter the available items and present to each customer a small subset that they are most likely to be interested in. Since users' interests may change over time, accurately capturing these dynamics is important, though challenging. The Session-based Temporal Graph (STG) was proposed by Xiang et al. to provide temporal recommendations by combining long- and short-term preferences. Later, Yu et al. introduced an extension called Topic-STG, which takes into account topics extracted from tweets' textual information. Recently, we pushed the idea further and proposed Content-based STG. However, in all these frameworks, the importance of links does not depend on their arrival time, which is a strong limitation: at any given time, purchases made last week should have a greater influence than purchases made a year ago. In this paper, we address this problem by proposing Time Weight Content-based STG, in which we assign a time-decreasing weight to edges. Using Time-Averaged Hit Ratio, we show that this approach outperforms all previous ones in real-world situations.
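A minimal sketch of the time-decreasing weight idea; the exponential form, the half-life parameter and the toy data are assumptions for illustration rather than the paper's exact choices.

```python
# Sketch of time-decreasing edge weights: older interactions count less than
# recent ones. The exponential decay and the half-life value are assumptions
# for illustration, not the weighting function used in the paper.
import math

def time_weight(t_event, t_now, half_life):
    """Weight that halves every `half_life` time units."""
    return math.exp(-math.log(2) * (t_now - t_event) / half_life)

# Edges of a user-item graph annotated with the time of the interaction.
edges = [("u1", "song_a", 100), ("u1", "song_b", 160), ("u1", "song_c", 166)]
now, half_life = 168, 24                       # e.g. hours, one-day half-life

weights = {(u, i): time_weight(t, now, half_life) for u, i, t in edges}
for edge, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(edge, round(w, 3))   # recent interactions dominate the ranking
```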
Marco LiCalzi
Massimo Warglien
LINK: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3034826
Abstract: We study strategic interaction between agents who distill the complex world around them into simpler situations. Assuming agents share the same cognitive frame, we show how the frame affects equilibrium outcomes. In one-shot and repeated interactions, the frame causes agents to be either better or worse off than if they could perceive the environment in full detail: it creates a fog of cooperation or a fog of conflict. In repeated interaction, the frame is as important as agents' patience in determining the set of equilibria: for a fixed discount factor, when all agents coordinate on what they perceive as the best equilibrium, there remain significant performance differences across dyads with different frames. Finally, we analyze some tensions between incremental and radical changes in the cognitive frame.
Florent Coriat
Lionel Tabourier
In: ASONAM 2017 : proceedings of the 2017 IEEE/ACM international conference on advances in social networks analysis and mining 2017 / Jana Diesner... (eds.)
New York : ACM, 2017. - P. 667-674
LINK: https://hal.archives-ouvertes.fr/hal-01550340
Abstract: The ability of a node to relay information in a network is often measured using betweenness centrality. To account for the fact that the role of nodes varies through time, several adaptations of this concept have been proposed for time-evolving networks. However, these definitions are demanding in terms of computational cost, as they call for the computation of time-ordered paths. We propose a definition of centrality in link streams which is node-centric, in the sense that we only take into account the direct neighbors of a node to compute its centrality. This restriction allows the computation to be carried out in a shorter time than when every pair of nodes in the network must be considered. Tests on empirical data show that this measure is relatively highly correlated with the number of times a node would relay information in a flooding process. We suggest that this is a good indication that this measure can be of use in practical contexts where a node has limited knowledge of its environment, such as routing protocols in delay-tolerant networks.
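One possible node-centric proxy, sketched under assumptions (this is not necessarily the paper's exact definition): count, for a node v, the pairs of contacts through v that are compatible in time and could therefore let v relay information between its direct neighbours.

```python
# Illustrative node-centric proxy (not necessarily the paper's definition):
# for a node v, count pairs of contacts (t1, u, v) and (t2, v, w) with
# t1 <= t2 and u != w, i.e. opportunities for v to relay information between
# its direct neighbours, without any global path computation.
def ego_relay_count(contacts, v):
    """contacts: list of (t, a, b) undirected interactions."""
    incoming = sorted((t, u if w == v else w)
                      for t, u, w in contacts if v in (u, w))
    count = 0
    for i, (t1, u) in enumerate(incoming):
        for t2, w in incoming[i:]:
            if w != u and t2 >= t1:
                count += 1
    return count

contacts = [(1, "a", "v"), (2, "v", "b"), (3, "c", "v")]
print(ego_relay_count(contacts, "v"))   # a->v->b, a->v->c, b->v->c = 3
```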
Remy Cazabet
In: Complex networks VIII : proceedings of the 8th Conference on Complex Networks ; CompleNet 2017 / Bruno Goncalves... (eds.)
Cham : Springer, 2017. - P. 81-92
(Springer proceedings in complexity)
DOI: 10.1007/978-3-319-54241-6_7 LINK: https://hal.archives-ouvertes.fr/hal-01500356
Abstract: The analysis of dynamic networks has received a lot of attention in recent years, thanks to the greater availability of suitable datasets. One way to analyse such datasets is to study temporal motifs in link streams, i.e. sequences of links for which we can assume causality. In this article, we study the relationship between temporal motifs and communities, another important topic in complex networks. Through experiments on several real-world networks, with synthetic and ground-truth community partitions, we identify motifs that are overrepresented at the frontier of communities or inside them.
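A toy counter for one very simple class of temporal motifs (two links sharing a node within a time window, suggesting possible causality); the motifs studied in the paper are richer, and the windowing and orientation choices here are assumptions.

```python
# Toy counter for a simple "relay" motif: a link u-v followed by a link v-w
# within delta time units. Links are treated as recorded (u, v) pairs; the
# paper's motifs and causality assumptions are richer than this sketch.
from collections import Counter

def count_relay_motifs(links, delta):
    """links: list of (t, u, v). Counts u-v followed by v-w with 0 < t2 - t1 <= delta."""
    counts = Counter()
    links = sorted(links)
    for i, (t1, u, v) in enumerate(links):
        for t2, a, b in links[i + 1:]:
            if t2 - t1 > delta:
                break
            if t2 > t1 and v == a and b != u:
                counts[(u, v, b)] += 1
    return counts

links = [(1, "a", "b"), (2, "b", "c"), (10, "b", "d")]
print(count_relay_motifs(links, delta=3))   # {('a', 'b', 'c'): 1}
```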
Pierre Borgnat
Pablo Jensen
LINK: https://hal.archives-ouvertes.fr/hal-01500352
Abstract: Bicycle Sharing Systems (BSS) are now ubiquitous in large cities around the world. In most of these systems, journey data can be extracted, providing rich information to better understand how they are used. Recent works have used network-based machine learning, and in particular space-corrected node clustering, to analyse such datasets. In this paper, we show that the spatial null models used in previous methods have a systematic bias, and we propose a degree-constrained null model to improve the results. We finally apply the proposed method to the BSS of a city.
Pierre Borgnat
Pablo Jensen
In: Complex networks VIII : proceedings of the 8th Conference on Complex Networks ; CompleNet 2017 / Bruno Goncalves... (eds.)
Cham : Springer, 2017. - P. 47-55
(Springer proceedings in complexity)
DOI: 10.1007/978-3-319-54241-6_4 LINK: https://hal.archives-ouvertes.fr/hal-01500354
Abstract: Null models have many applications in network analysis, from testing the significance of observations to the design of algorithms such as community detection. They usually preserve some network properties, such as the degree distribution. Recently, null models have been proposed for spatial networks and applied to the community detection problem. In this article, we propose a new null model adapted to spatial networks that, unlike previous ones, preserves both the spatial structure and the degrees of the nodes. We demonstrate the efficacy of this null model for community detection on synthetic networks.
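One common way to write down such a null model, sketched under assumptions (the paper's exact formulation may differ): the expected weight of a pair is proportional to the product of the degrees and a spatial deterrence function, and is rescaled so that the total expected weight matches the observed number of edges.

```python
# Sketch of a degree- and space-aware null model (the paper's exact
# formulation may differ): expected weight  P_uv ~ k_u * k_v * f(d_uv),
# rescaled to preserve the total number of edges.
import math

def spatial_degree_null(degrees, dist, deterrence, total_edges):
    """Expected weight for each node pair under the null model."""
    nodes = list(degrees)
    raw = {(u, v): degrees[u] * degrees[v] * deterrence(dist[u, v])
           for i, u in enumerate(nodes) for v in nodes[i + 1:]}
    norm = total_edges / sum(raw.values())
    return {pair: w * norm for pair, w in raw.items()}

degrees = {"a": 3, "b": 2, "c": 1}
dist = {("a", "b"): 1.0, ("a", "c"): 5.0, ("b", "c"): 2.0}
expected = spatial_degree_null(degrees, dist,
                               deterrence=lambda d: math.exp(-d), total_edges=3)
print({pair: round(w, 2) for pair, w in expected.items()})
```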
Lionel Tabourier
Matthieu Latapy
In: ASONAM 2017 : proceedings of the 2017 IEEE/ACM international conference on advances in social networks analysis and mining 2017 / Jana Diesner... (eds.)
New York : ACM, 2017. - P. 935-942
LINK: https://hal.archives-ouvertes.fr/hal-01550324
Abstract: A link stream is a sequence of triplets (t, u, v) meaning that nodes u and v have interacted at time t. Capturing both the structural and temporal aspects of interactions is crucial for many real-world datasets, such as contacts between individuals. We tackle the issue of activity prediction in link streams, that is to say, predicting the number of links occurring during a given period of time, and we present a protocol that takes advantage of the temporal and structural information contained in the link stream. We introduce a way to represent the captured information using different features and combine them in a prediction function used to evaluate the future activity of links.
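An illustrative sketch of the general protocol; the features (past activity and common partners) and their linear combination are placeholders rather than the paper's exact choices.

```python
# Illustrative sketch (placeholder features, not the paper's protocol):
# describe each node pair by simple temporal and structural features over an
# observation window, then combine them linearly to score future activity.
from collections import Counter

def pair_features(links, t_split):
    """links: list of (t, u, v). Returns {pair: (past_activity, common_partners)}."""
    past = [(t, frozenset((u, v))) for t, u, v in links if t < t_split]
    activity = Counter(pair for _, pair in past)
    neigh = {}
    for _, pair in past:
        u, v = tuple(pair)
        neigh.setdefault(u, set()).add(v)
        neigh.setdefault(v, set()).add(u)
    common = {p: len(neigh[min(p)] & neigh[max(p)]) for p in activity}
    return {p: (activity[p], common[p]) for p in activity}

def predict(features, w_activity=0.8, w_common=0.2):
    """Linear combination of the features; the weights are arbitrary here."""
    return {p: w_activity * a + w_common * c for p, (a, c) in features.items()}

links = [(1, "a", "b"), (2, "a", "b"), (3, "b", "c"), (4, "a", "c")]
print(predict(pair_features(links, t_split=5)))
```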