2023 IEEE 10th International Conference on Data Science and Advanced Analytics (DSAA), pp. 1-10, Oct 2023
Entity Resolution (ER) is the problem of semi-automatically determining when two entity references refer to the same underlying entity, with applications ranging from healthcare to e-commerce. Traditional ER solutions required considerable manual expertise, including feature engineering, as well as the identification and curation of training data. In many instances, such techniques are highly dependent on the domain. With the recent advent of large language models (LLMs), there is an opportunity to make ER much more seamless and domain-independent. However, it is also well known that LLMs can pose risks, and that the quality of their outputs can depend on so-called prompt engineering. Unfortunately, a systematic experimental study of the effects of different prompting methods on addressing ER using LLMs like ChatGPT has been lacking thus far. This paper aims to address this gap by conducting such a study. Although preliminary in nature, our results show that prompting can significantly affect the quality of ER, that it affects some metrics more than others, and that its effects can also be dataset-dependent.
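As a concrete illustration of LLM-based ER, the sketch below shows one plausible way to frame record matching as a yes/no prompt; the prompt wording and the `call_llm` wrapper are assumptions for illustration, not the prompt templates studied in the paper.

```python
# A minimal sketch of pairwise ER via LLM prompting. `call_llm` is a
# hypothetical function that sends a prompt to a chat model (e.g., ChatGPT
# through an API client) and returns its text response.

def build_er_prompt(record_a: dict, record_b: dict) -> str:
    """Serialize two records into a yes/no entity-matching question."""
    a = "; ".join(f"{k}: {v}" for k, v in record_a.items())
    b = "; ".join(f"{k}: {v}" for k, v in record_b.items())
    return (
        "Do the following two records refer to the same real-world entity?\n"
        f"Record 1: {a}\nRecord 2: {b}\n"
        "Answer with exactly one word: yes or no."
    )

def resolve_pair(record_a: dict, record_b: dict, call_llm) -> bool:
    """Return True if the model judges the two records to be a match."""
    response = call_llm(build_er_prompt(record_a, record_b))
    return response.strip().lower().startswith("yes")

# Example usage (with any call_llm implementation):
# resolve_pair({"name": "Jon Smith", "city": "LA"},
#              {"name": "John Smith", "city": "Los Angeles"}, call_llm)
```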
Computer Science - Artificial Intelligence, Computer Science - Databases, and Computer Science - Information Retrieval
Abstract
Efficiently finding doctors and locations is an important search problem for patients in the healthcare domain, for which traditional information retrieval methods tend not to work optimally. In the last ten years, knowledge graphs (KGs) have emerged as a powerful way to combine the benefits of gleaning insights from semi-structured data using semantic modeling, natural language processing techniques like information extraction, and robust querying using structured query languages like SPARQL and Cypher. In this short paper, we present a KG-based search engine architecture for robustly finding doctors and locations in the healthcare domain. Early results demonstrate that our approach can lead to significantly higher coverage for complex queries without degrading quality. Comment: Presented as an applied data science poster in KDD 2023
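To make the robust-querying component concrete, here is a minimal, self-contained sketch using rdflib and SPARQL; the schema (ex:Doctor, ex:specialty, ex:practicesAt) is invented for illustration and is not the paper's actual healthcare ontology.

```python
# Build a tiny toy healthcare KG and run the kind of structured query
# such a search engine might issue.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/health#")
g = Graph()
g.add((EX.drLee, RDF.type, EX.Doctor))
g.add((EX.drLee, EX.specialty, Literal("cardiology")))
g.add((EX.drLee, EX.practicesAt, EX.clinicWestside))
g.add((EX.clinicWestside, EX.city, Literal("Los Angeles")))

# Find cardiologists and the city of each location they practice at.
query = """
PREFIX ex: <http://example.org/health#>
SELECT ?doctor ?city WHERE {
    ?doctor a ex:Doctor ;
            ex:specialty "cardiology" ;
            ex:practicesAt ?loc .
    ?loc ex:city ?city .
}
"""
for doctor, city in g.query(query):
    print(doctor, city)
```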
Large Language Models (LLMs), such as ChatGPT, have achieved impressive milestones in natural language processing (NLP). Despite their impressive performance, these models are known to pose important risks. As they are deployed in real-world applications, a systematic understanding of the different risks they pose on tasks such as natural language inference (NLI) is much needed. In this paper, we define and formalize two distinct types of risk: decision risk and composite risk. We also propose a risk-centric evaluation framework, and four novel metrics, for assessing LLMs on these risks in both in-domain and out-of-domain settings. Finally, we propose a risk-adjusted calibration method called DwD for helping LLMs minimize these risks in an overall NLI architecture. Detailed experiments, using four NLI benchmarks, three baselines and two LLMs, including ChatGPT, show both the practical utility of the evaluation framework and the efficacy of DwD in reducing decision and composite risk. For instance, when using DwD, an underlying LLM is able to address an extra 20.1% of low-risk inference tasks (which it would erroneously deem high-risk without risk adjustment) and to skip a further 19.8% of high-risk tasks that it would otherwise have answered incorrectly.
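For intuition, the sketch below shows a generic risk-adjusted selective-inference loop in the spirit of (but not identical to) DwD, whose details go beyond this abstract: a calibrator maps the LLM's raw confidence to an error-risk estimate, and the system answers only when that risk falls below a threshold, skipping the rest.

```python
# A minimal sketch of risk-adjusted selective inference. `calibrate` is a
# hypothetical calibrator mapping raw model confidence to an estimated
# probability of error; DwD itself is more sophisticated.

def selective_answer(confidence: float, calibrate, risk_threshold: float = 0.3):
    """Return ('answer', risk) for low-risk instances, ('skip', risk) otherwise."""
    risk = calibrate(confidence)      # calibrated probability of error
    if risk < risk_threshold:
        return "answer", risk
    return "skip", risk

# Example with a trivial (hypothetical) calibrator:
decision, risk = selective_answer(0.85, calibrate=lambda c: 1.0 - c)
print(decision, round(risk, 2))       # -> answer 0.15
```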
Computer Science - Artificial Intelligence and Computer Science - Databases
Abstract
Entity Resolution (ER) is the problem of determining when two entity references refer to the same underlying entity. The problem has been studied for over 50 years and, most recently, has taken on new importance in an era of large, heterogeneous 'knowledge graphs' published on the Web and used widely in domains as wide-ranging as social media, e-commerce and search. This chapter discusses the specific problem of ER in the context of personal knowledge graphs (PKGs). We begin with a formal definition of the problem and the components necessary for doing high-quality and efficient ER. We also discuss some challenges that are expected to arise with Web-scale data. Next, we provide a brief literature review, with a special focus on how existing techniques can potentially apply to PKGs. We conclude the chapter by covering some applications, as well as promising directions for future research. Comment: To appear as a book chapter by the same name in an upcoming (Oct. 2023) book 'Personal Knowledge Graphs (PKGs): Methodology, tools and applications' edited by Tiwari et al.
Computer Science - Social and Information Networks
Abstract
Complex systems research and network science have recently been used to provide novel insights into economic phenomena such as patenting behavior and innovation in firms. Several studies have found that increased mobility of inventors, manifested through firm switching or transitioning, is associated with increased overall productivity. This paper proposes a novel structural study of such transitioning inventors, and the role they play in patent co-authorship networks, in a cohort of highly innovative and economically influential companies such as the five Big Tech firms (Apple, Microsoft, Google, Amazon and Meta) in the post-recession period (2010-2022). We formulate and empirically investigate three research questions using Big Tech patent data. Our results show that transitioning inventors tend to have higher degree centrality than the average Big Tech inventor, and that their removal can lead to greater network fragmentation than would be expected by chance. The rate of transition over the 12-year period of study was found to be highest between 2015 and 2017, suggesting that the Big Tech innovation ecosystem underwent non-trivial shifts during this time. Finally, transition was associated with a higher estimated impact of co-authored patents post-transition.
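Both network measures mentioned above are straightforward to compute with networkx, as the sketch below illustrates on a toy graph standing in for the patent co-authorship network; the graph and the set of "transitioning" inventors are placeholders, not the study's data.

```python
import networkx as nx

G = nx.karate_club_graph()                 # placeholder co-authorship network
transitioning = {0, 33}                    # hypothetical transitioning inventors

# Degree centrality of transitioning inventors vs. the network average.
dc = nx.degree_centrality(G)
avg_all = sum(dc.values()) / len(dc)
avg_trans = sum(dc[n] for n in transitioning) / len(transitioning)
print(f"average centrality: {avg_all:.3f}, transitioning: {avg_trans:.3f}")

# Fragmentation after removal: size of the largest connected component.
H = G.copy()
H.remove_nodes_from(transitioning)
largest = max(nx.connected_components(H), key=len)
print(f"largest component after removal: {len(largest)} of {H.number_of_nodes()}")
```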
Doctor, Katarina, Task, Christine, Kildebeck, Eric, Kejriwal, Mayank, Holder, Lawrence, and Leong, Russell
Subjects
Computer Science - Artificial Intelligence
Abstract
Artificial Intelligence (AI) systems planned for deployment in real-world applications are frequently researched and developed in closed simulation environments, where all variables are controlled and known to the simulator, or on labeled benchmark datasets. The transition from these simulators, testbeds, and benchmark datasets to more open-world domains poses significant challenges to AI systems, including significant increases in the complexity of the domain and the inclusion of real-world novelties; the open-world environment contains numerous out-of-distribution elements that are not part of the AI systems' training set. Here, we propose a path to a general, domain-independent measure of domain complexity level. We distinguish two aspects of domain complexity: intrinsic and extrinsic. Intrinsic domain complexity is the complexity that exists by itself, without any action or interaction from an AI agent performing a task on that domain; it is an agent-independent aspect of the domain complexity. Extrinsic domain complexity is agent- and task-dependent. Intrinsic and extrinsic elements combined capture the overall complexity of the domain. We frame the components that define and impact domain complexity levels in a domain-independent light. Domain-independent measures of complexity could enable quantitative predictions of the difficulty posed to AI systems when transitioning from one testbed or environment to another, when facing out-of-distribution data in open-world tasks, and when navigating the rapidly expanding solution and search spaces encountered in open-world domains.
Computer Science - Artificial Intelligence, Computer Science - Computation and Language, and Computer Science - Computer Vision and Pattern Recognition
Abstract
We conduct a pilot study selectively evaluating the cognitive abilities (decision making and spatial reasoning) of two recently released generative transformer models, ChatGPT and DALL-E 2. Input prompts were constructed following neutral a priori guidelines, rather than with adversarial intent. Post hoc qualitative analysis of the outputs shows that DALL-E 2 is able to generate at least one correct image for each spatial reasoning prompt, but that most of the images generated are incorrect (even though the model seems to have a clear understanding of the objects mentioned in the prompt). Similarly, in evaluating ChatGPT on the rationality axioms developed under the classical Von Neumann-Morgenstern utility theorem, we find that, although it demonstrates some level of rational decision-making, many of its decisions violate at least one of the axioms, even under reasonable constructions of preferences, bets, and decision-making prompts. ChatGPT's outputs on such problems generally tended to be unpredictable: even as it made irrational decisions (or employed an incorrect reasoning process) on some simpler decision-making problems, it was able to draw correct conclusions for more complex bet structures. We briefly comment on the nuances and challenges involved in scaling up such a 'cognitive' evaluation or conducting it with a closed set of answer keys ('ground truth'), given that these models are inherently generative and open-ended in responding to prompts.
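As one concrete example of such an axiom check, the sketch below tests a set of hypothetical pairwise preferences, as might be parsed from ChatGPT's responses, for violations of transitivity, one of the Von Neumann-Morgenstern axioms; the preference data here is invented for illustration.

```python
from itertools import permutations

# prefers[(a, b)] = True means the model chose bet a over bet b.
prefers = {("A", "B"): True, ("B", "C"): True, ("C", "A"): True}  # a cycle

def transitivity_violations(prefers):
    """Return all (a, b, c) triples where a > b and b > c but c > a."""
    items = {x for pair in prefers for x in pair}
    violations = []
    for a, b, c in permutations(items, 3):
        if prefers.get((a, b)) and prefers.get((b, c)) and prefers.get((c, a)):
            violations.append((a, b, c))
    return violations

print(transitivity_violations(prefers))   # -> cycles such as ('A', 'B', 'C')
```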
Economics - General Economics and Computer Science - Social and Information Networks
Abstract
In recent decades, trade between nations has constituted an important component of global Gross Domestic Product (GDP), with official estimates showing that it likely accounted for a quarter of total global production. While evidence of association already exists in macro-economic data between trade volume and GDP growth, there is considerably less work on whether, at the level of individual granular sectors (such as vehicles or minerals), associations exist between the complexity of trading networks and global GDP. In this paper, we explore this question by using publicly available data from the Atlas of Economic Complexity project to rigorously construct global trade networks between nations across multiple sectors, and studying the correlation between network-theoretic measures computed on these networks (such as average clustering coefficient and density) and global GDP. We find that there is indeed significant association between trade networks' complexity and global GDP across almost every sector, and that network metrics also correlate with business cycle phenomena such as the Great Recession of 2007-2008. Our results show that trade volume alone cannot explain global GDP growth, and that network science may prove to be a valuable empirical avenue for studying complexity in macro-economic phenomena such as trade. Comment: Peer-reviewed and presented at The 11th International Conference on Complex Networks and their Applications (2022)
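The sketch below illustrates the pipeline with toy data: build a per-year trade network, compute network metrics such as average clustering and density with networkx, and correlate each metric with GDP; the edge lists and GDP figures are placeholders, not values from the Atlas data.

```python
import networkx as nx
from scipy.stats import pearsonr

# Hypothetical per-year trade edges for one sector, and GDP in trillions USD.
yearly_edges = {
    2006: [("US", "CN"), ("US", "DE"), ("CN", "DE")],
    2007: [("US", "CN"), ("US", "DE")],
    2008: [("US", "CN")],
}
gdp = {2006: 51.4, 2007: 57.9, 2008: 63.6}

clustering, density, gdp_series = [], [], []
for year, edges in sorted(yearly_edges.items()):
    G = nx.Graph(edges)
    clustering.append(nx.average_clustering(G))
    density.append(nx.density(G))
    gdp_series.append(gdp[year])

# Correlate a network metric with GDP over the study period.
r, p = pearsonr(density, gdp_series)
print(f"density vs. GDP: r={r:.2f}, p={p:.2f}")
```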
Computer Science - Computation and Language and Computer Science - Artificial Intelligence
Abstract
In recent years, transformer-based language representation models (LRMs) have achieved state-of-the-art results on difficult natural language understanding problems, such as question answering and text summarization. As these models are integrated into real-world applications, evaluating their ability to make rational decisions is an important research agenda, with practical ramifications. This article investigates LRMs' rational decision-making ability through a carefully designed set of decision-making benchmarks and experiments. Inspired by classic work in cognitive science, we model the decision-making problem as a bet. We then investigate an LRM's ability to choose outcomes that have optimal, or at minimum, positive expected gain. Through a robust body of experiments on four established LRMs, we show that a model is only able to 'think in bets' if it is first fine-tuned on bet questions with an identical structure. Modifying the bet question's structure, while still retaining its fundamental characteristics, decreases an LRM's performance by more than 25%, on average, although absolute performance remains well above random. LRMs are also found to be more rational when selecting outcomes with non-negative expected gain, rather than optimal or strictly positive expected gain. Our results suggest that LRMs could potentially be applied to tasks that rely on cognitive decision-making skills, but that more research is necessary before they can robustly make rational decisions.
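For concreteness, the sketch below implements the expected-gain criterion against which a model's bet choices can be graded, under the three progressively weaker standards mentioned above (optimal, strictly positive, and non-negative expected gain); the bet structure itself is a hypothetical example.

```python
# Each outcome of a bet is a list of (probability, payoff) branches.

def expected_gain(outcome):
    return sum(p * payoff for p, payoff in outcome)

outcomes = {
    "take the bet": [(0.5, 10.0), (0.5, -4.0)],   # EV = +3.0
    "decline":      [(1.0, 0.0)],                  # EV =  0.0
}
evs = {name: expected_gain(o) for name, o in outcomes.items()}
best = max(evs.values())

# Grade each choice under the three standards of rationality.
for name, ev in evs.items():
    verdict = ("optimal" if ev == best else
               "positive" if ev > 0 else
               "non-negative" if ev >= 0 else "irrational")
    print(f"{name}: EV={ev:+.1f} ({verdict})")
```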
Acquiring commonsense knowledge and reasoning is an important goal in modern NLP research. Despite much progress, there is still a lack of understanding (especially at scale) of the nature of commonsense knowledge itself. A potential source of structured commonsense knowledge that could be used to derive insights is ConceptNet. In particular, ConceptNet contains several coarse-grained relations, including HasContext, FormOf and SymbolOf, which can prove invaluable in understanding broad, but critically important, commonsense notions such as 'context'. In this article, we present a methodology based on unsupervised knowledge graph representation learning and clustering to reveal and study substructures in three heavily used commonsense relations in ConceptNet. Our results show that, despite having an 'official' definition in ConceptNet, many of these commonsense relations exhibit considerable sub-structure. In the future, therefore, such relations could be sub-divided into other relations with more refined definitions. We also supplement our core study with visualizations and qualitative analyses. Comment: arXiv admin note: substantial text overlap with arXiv:2011.14084
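A minimal sketch of the unsupervised pipeline follows, with random vectors standing in for the learned KG embeddings of one relation's edges: cluster the embeddings (here with k-means) and inspect the clusters for sub-structure. The embeddings and cluster count are placeholders, not the study's actual representations.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
edge_embeddings = rng.normal(size=(500, 64))   # one vector per HasContext edge

kmeans = KMeans(n_clusters=5, n_init=10, random_state=42)
labels = kmeans.fit_predict(edge_embeddings)

# Inspect cluster sizes; distinct, well-separated clusters hint that the
# relation conflates several finer-grained senses.
print(np.bincount(labels))
```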
Computer Science - Computation and Language and Computer Science - Artificial Intelligence
Abstract
Recent work on transformer-based neural networks has led to impressive advances on multiple-choice natural language understanding (NLU) problems, such as Question Answering (QA) and abductive reasoning. Despite these advances, there is still limited work on understanding whether these models respond to perturbed multiple-choice instances in a sufficiently robust manner to be trusted in real-world situations. We present four confusion probes, inspired by similar phenomena first identified in the behavioral science community, to test for problems such as prior bias and choice paralysis. Experimentally, we probe a widely used transformer-based multiple-choice NLU system using four established benchmark datasets. Here we show that the model exhibits significant prior bias and, to a lesser but still highly significant degree, choice paralysis, in addition to other problems. Our results suggest that stronger testing protocols and additional benchmarks may be necessary before such language models are used in front-facing systems or in decision-making with real-world consequences.
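To illustrate the flavor of such probes (the paper's four probes may differ), here is a minimal sketch of a prior-bias check: the question text is blanked out, so any consistent preference among the choices must come from priors over the answers rather than from the question; `predict` is a hypothetical wrapper around a multiple-choice NLU model.

```python
def prior_bias_probe(instances, predict):
    """Fraction of no-question instances where the model still picks the
    originally correct answer; chance level would be 1/len(choices)."""
    hits = 0
    for inst in instances:
        blanked = {"question": "", "choices": inst["choices"]}
        hits += predict(blanked) == inst["answer_idx"]
    return hits / len(instances)

# Example usage with a dummy model that always picks the first choice:
# rate = prior_bias_probe(dev_set, predict=lambda inst: 0)
```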
Kejriwal, Mayank, Selvam, Ravi Kiran, Ni, Chien-Chun, and Torzec, Nicolas
2020 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), pp. 507-514, Dec 2020
Research has continued to shed light on the extent and significance of gender disparity in social, cultural and economic spheres. More recently, computational tools from the Natural Language Processing (NLP) literature have been proposed for measuring such disparity using relatively extensive datasets and empirically rigorous methodologies. In this paper, we contribute to this line of research by studying gender disparity, at scale, in copyright-expired literary texts published in the pre-modern period (defined in this work as the period ranging from the mid-nineteenth through the mid-twentieth century). One challenge in using such tools is ensuring quality control, and by extension, trustworthy statistical analysis. Another is using materials and methods that are publicly available and well established, both so that they can be used and vetted in the future, and to add confidence to the methodology itself. We present our solution to these challenges and, using multiple measures, demonstrate the significant discrepancy between the prevalence of female characters and male characters in pre-modern literature. The evidence suggests that the discrepancy declines when the author is female, and that it remains relatively stable when the data are plotted over the decades of this century-long period. Finally, we carefully describe both the limitations and the ethical caveats associated with this study, and others like it.
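The sketch below shows the kind of publicly available tooling such a measurement pipeline can build on: spaCy NER to extract PERSON mentions, plus a small honorific lexicon for gender attribution. The lexicon and attribution rule are illustrative; a real study would use more robust attribution and careful quality control.

```python
import spacy
from collections import Counter

nlp = spacy.load("en_core_web_sm")
MALE = {"mr.", "mr", "sir", "lord"}
FEMALE = {"mrs.", "mrs", "miss", "ms.", "lady"}

def character_gender_counts(text):
    """Tally PERSON mentions by gender, inferred from honorific cues."""
    counts = Counter()
    doc = nlp(text)
    for ent in doc.ents:
        if ent.label_ != "PERSON":
            continue
        prev = doc[ent.start - 1].lower_ if ent.start > 0 else ""
        first = ent[0].lower_
        if prev in MALE or first in MALE:
            counts["male"] += 1
        elif prev in FEMALE or first in FEMALE:
            counts["female"] += 1
        else:
            counts["unresolved"] += 1
    return counts

print(character_gender_counts(
    "Mr. Darcy spoke first, but Mrs. Bennet and Elizabeth replied at length."))
```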
Santos, Henrique, Shen, Ke, Mulvehill, Alice M., Razeghi, Yasaman, McGuinness, Deborah L., and Kejriwal, Mayank
Subjects
Computer Science - Computation and Language
Abstract
Programming machines with commonsense reasoning (CSR) abilities is a longstanding challenge in the Artificial Intelligence community. Current CSR benchmarks use multiple-choice (and in relatively fewer cases, generative) question-answering instances to evaluate machine commonsense. Recent results with transformer-based language representation models suggest that considerable progress has been made on existing benchmarks. However, although tens of CSR benchmarks currently exist, and the number is growing, it is not evident that the full suite of commonsense capabilities has been systematically evaluated. Furthermore, there are doubts about whether language models are 'fitting' to a benchmark dataset's training partition by picking up on subtle, but normatively irrelevant (at least for CSR), statistical features to achieve good performance on the testing partition. To address these challenges, we propose a benchmark called Theoretically-Grounded Commonsense Reasoning (TG-CSR) that is also based on discriminative question answering, but with questions designed to evaluate diverse aspects of commonsense, such as space, time, and world states. TG-CSR is based on a subset of commonsense categories first proposed as a viable theory of commonsense by Gordon and Hobbs. The benchmark is also designed to be few-shot (and in the future, zero-shot), with only a few training and validation examples provided. This report discusses the structure and construction of the benchmark. Preliminary results suggest that the benchmark is challenging even for advanced language representation models designed for discriminative CSR question answering tasks. Benchmark access and leaderboard: https://codalab.lisn.upsaclay.fr/competitions/3080 Benchmark website: https://usc-isi-i2.github.io/TGCSR/
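For concreteness, here is a sketch of how a few-shot discriminative CSR benchmark like TG-CSR is typically scored; the instance format and the `score_answer` model wrapper are assumptions, not the benchmark's actual API.

```python
def evaluate_few_shot(train_examples, test_instances, score_answer):
    """Accuracy of a model on few-shot multiple-choice instances.
    `score_answer(examples, question, candidate)` -> float; higher is better."""
    correct = 0
    for inst in test_instances:
        scores = [score_answer(train_examples, inst["question"], c)
                  for c in inst["candidates"]]
        pick = scores.index(max(scores))
        correct += pick == inst["answer_idx"]
    return correct / len(test_instances)
```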
Computer Science - Social and Information Networks and Physics - Physics and Society
Abstract
In recent years, there has been a growing recognition that higher-order structures are important features in real-world networks. A particular class of structures that has gained prominence is known as a simplicial complex. Despite their application to complex processes such as social contagion and novel measures of centrality, not much is currently understood about the distributional properties of these complexes in communication networks. Furthermore, it is also an open question as to whether an established growth model, such as scale-free network growth with triad formation, is sophisticated enough to capture the distributional properties of simplicial complexes. In this paper, we use empirical data on five real-world communication networks to propose a functional form for the distributions of two important simplicial complex structures. We also show that, while the scale-free network growth model with triad formation captures the form of these distributions in networks evolved using the model, the best-fit parameters are significantly different between the real network and its simulated equivalent. An auxiliary contribution is an empirical profile of the two simplicial complexes in these five real-world networks. Comment: 4 pages, 2 figures
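Conveniently, the referenced growth model (scale-free growth with triad formation, due to Holme and Kim) is implemented directly in networkx, so a simulated counterpart of each real network can be generated and its simplicial structure compared against the empirical one, as sketched below; the parameter values are placeholders.

```python
import networkx as nx

n, m, p = 1000, 3, 0.5          # nodes, edges per new node, triad probability
G_sim = nx.powerlaw_cluster_graph(n, m, p, seed=42)

# Triangles are the smallest non-trivial simplicial structure; comparing
# their counts is a first sanity check before fitting full distributions.
print(sum(nx.triangles(G_sim).values()) // 3, "triangles")
```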