Results for 'black box problem'

969 found
  1. Federated learning, ethics, and the double black box problem in medical AI.Joshua Hatherley, Anders Søgaard, Angela Ballantyne & Ruben Pauwels - manuscript
    Federated learning (FL) is a machine learning approach that allows multiple devices or institutions to collaboratively train a model without sharing their local data with a third party. FL is considered a promising way to address patient privacy concerns in medical artificial intelligence. The ethical risks of medical FL systems themselves, however, have thus far been underexamined. This paper aims to address this gap. We argue that medical FL presents a new variety of opacity -- federation opacity -- that, in turn, (...)
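The abstract's opening sentence describes the basic federated learning protocol; a minimal sketch of federated averaging, with made-up client datasets and a one-parameter linear model (not anything from the paper), shows how a server can combine locally trained weights without ever seeing the raw data:

```python
# Minimal federated averaging (FedAvg) sketch: each client fits a
# one-parameter linear model y = w * x on its own data; only the
# locally updated weight (never the raw data) reaches the server.
# The client datasets below are hypothetical illustrations.

def local_step(w, data, lr=0.01, epochs=20):
    """One client's local training: gradient descent on squared error."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def fedavg(clients, rounds=50):
    """Server loop: broadcast w, collect local updates, average them."""
    w = 0.0
    for _ in range(rounds):
        local_ws = [local_step(w, data) for data in clients]
        w = sum(local_ws) / len(local_ws)  # aggregation step
    return w

# Three institutions, each holding private samples of roughly y = 3x.
clients = [
    [(1.0, 3.1), (2.0, 5.9)],
    [(1.5, 4.6), (2.5, 7.4)],
    [(3.0, 9.2)],
]
w = fedavg(clients)
```

On this toy data the averaged weight converges to roughly 3, the slope shared by all three private datasets, even though no client's samples ever leave that client.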
  2. Does Black Box AI In Medicine Compromise Informed Consent?Samuel Director - 2025 - Philosophy and Technology 38 (2):1-24.
    Recently, there has been a large push for the use of artificial intelligence in medical settings. The promise of artificial intelligence (AI) in medicine is considerable, but its moral implications are insufficiently examined. If AI is used in medical diagnosis and treatment, it may pose a substantial problem for informed consent. The short version of the problem is this: medical AI will likely surpass human doctors in accuracy, meaning that patients have a prudential reason to prefer treatment from (...)
  3. In defence of post-hoc explanations in medical AI.Joshua Hatherley, Lauritz Munch & Jens Christian Bjerring - forthcoming - Hastings Center Report.
    Since the early days of the Explainable AI movement, post-hoc explanations have been praised for their potential to improve user understanding, promote trust, and reduce patient safety risks in black box medical AI systems. Recently, however, critics have argued that the benefits of post-hoc explanations are greatly exaggerated since they merely approximate, rather than replicate, the actual reasoning processes that black box systems take to arrive at their outputs. In this article, we aim to defend the value of (...)
  4. (1 other version) A moving target in AI-assisted decision-making: Dataset shift, model updating, and the problem of update opacity. Joshua Hatherley - 2025 - Ethics and Information Technology 27 (2):20.
    Machine learning (ML) systems are vulnerable to performance decline over time due to dataset shift. To address this problem, experts often suggest that ML systems should be regularly updated to ensure ongoing performance stability. Some scholarly literature has begun to address the epistemic and ethical challenges associated with different updating methodologies. Thus far, however, little attention has been paid to the impact of model updating on the ML-assisted decision-making process itself, particularly in the AI ethics and AI epistemology literatures. (...)
    1 citation
  5. Deep opacity and AI: A threat to XAI and to privacy protection mechanisms. Vincent C. Müller - 2025 - In Martin Hähnel & Regina Müller, A Companion to Applied Philosophy of AI. Wiley-Blackwell.
    It is known that big data analytics and AI pose a threat to privacy, and that some of this is due to some kind of “black box problem” in AI. I explain how this becomes a problem in the context of justification for judgments and actions. Furthermore, I suggest distinguishing three kinds of opacity: 1) the subjects do not know what the system does (“shallow opacity”), 2) the analysts do not know what the system does (“standard (...) box opacity”), or 3) the analysts cannot possibly know what the system might do (“deep opacity”). If the agents, data subjects as well as analytics experts, operate under opacity, then these agents cannot provide justifications for judgments that are necessary to protect privacy, e.g., they cannot give “informed consent”, or guarantee “anonymity.” It follows from these points that agents in big data analytics and AI often cannot make the judgments needed to protect privacy. So I conclude that big data analytics makes the privacy problems worse and the remedies less effective. As a positive note, I provide a brief outlook on technical ways to handle this situation.
  6. Justice and the Grey Box of Responsibility.Carl Knight - 2010 - Theoria: A Journal of Social and Political Theory 57 (124):86-112.
    Even where an act appears to be responsible, and satisfies all the conditions for responsibility laid down by society, the response to it may be unjust where that appearance is false, and where those conditions are insufficient. This paper argues that those who want to place considerations of responsibility at the centre of distributive and criminal justice ought to take this concern seriously. The common strategy of relying on what Susan Hurley describes as a 'black box of responsibility' has (...)
    1 citation
  7. The virtues of interpretable medical AI.Joshua Hatherley, Robert Sparrow & Mark Howard - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):323-332.
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are 'black boxes'. The initial response in the literature was a demand for 'explainable AI'. However, recently, several authors have suggested that making AI more explainable or 'interpretable' is likely to be at the cost of the accuracy of these systems and that prioritising interpretability in medical AI may constitute a 'lethal prejudice'. In this paper, we defend the value of (...)
    5 citations
  8. Enhancing Interpretability in Distributed Constraint Optimization Problems.M. Bhuvana Chandra C. Anand - 2025 - International Journal of Multidisciplinary Research in Science, Engineering and Technology 8 (1):361-364.
    Distributed Constraint Optimization Problems (DCOPs) provide a framework for solving multi-agent coordination tasks efficiently. However, their black-box nature often limits transparency and trust in decision-making processes. This paper explores methods to enhance interpretability in DCOPs, leveraging explainable AI (XAI) techniques. We introduce a novel approach incorporating heuristic explanations, constraint visualization, and model-agnostic methods to provide insights into DCOP solutions. Experimental results demonstrate that our method improves human understanding and debugging of DCOP solutions while maintaining solution quality.
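The coordination setup the abstract summarizes can be made concrete with a toy example; the agents, domains, and cost tables below are invented for illustration, and the per-constraint cost breakdown stands in for the kind of heuristic explanation the authors discuss:

```python
# Toy DCOP: three agents each pick a value from a small domain, and
# binary constraints assign costs to joint choices. Brute-force search
# finds the minimum-cost assignment, and the per-constraint cost
# breakdown serves as a simple "explanation" of why it was chosen.
# Agents, domains, and costs are hypothetical illustrations.
from itertools import product

domains = {"a1": [0, 1], "a2": [0, 1], "a3": [0, 1]}
constraints = {
    ("a1", "a2"): lambda x, y: 0 if x != y else 2,  # prefer disagreement
    ("a2", "a3"): lambda x, y: 0 if x == y else 1,  # prefer agreement
}

def solve(domains, constraints):
    """Exhaustively search joint assignments for the minimum total cost."""
    agents = list(domains)
    best, best_cost = None, float("inf")
    for values in product(*(domains[a] for a in agents)):
        assign = dict(zip(agents, values))
        cost = sum(c(assign[i], assign[j]) for (i, j), c in constraints.items())
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost

assignment, cost = solve(domains, constraints)
# Per-constraint breakdown: each line explains one constraint's contribution.
for (i, j), c in constraints.items():
    print(i, j, "cost:", c(assignment[i], assignment[j]))
```

Real DCOP solvers are distributed rather than exhaustive, but the cost breakdown shows the basic idea: the solution is justified by pointing at which constraints it satisfies and at what price.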
  9. Models, Algorithms, and the Subjects of Transparency.Hajo Greif - 2022 - In Vincent C. Müller, Philosophy and Theory of Artificial Intelligence 2021. Berlin: Springer. pp. 27-37.
    Concerns over epistemic opacity abound in contemporary debates on Artificial Intelligence (AI). However, it is not always clear to what extent these concerns refer to the same set of problems. We can observe, first, that the terms 'transparency' and 'opacity' are used either in reference to the computational elements of an AI model or to the models to which they pertain. Second, opacity and transparency might either be understood to refer to the properties of AI systems or to the epistemic (...)
  10. Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions.Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas Holzinger, Richard Jiang, Hassan Khosravi, Freddy Lecue, Gianclaudio Malgieri, Andrés Páez, Wojciech Samek, Johannes Schneider, Timo Speith & Simone Stumpf - 2024 - Information Fusion 106 (June 2024).
    As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper not only highlights the advancements in XAI and its application in real-world scenarios but also addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together experts from (...)
    5 citations
  11. Black-box assisted medical decisions: AI power vs. ethical physician care.Berman Chan - 2023 - Medicine, Health Care and Philosophy 26 (3):285-292.
    Without doctors being able to explain medical decisions to patients, I argue their use of black box AIs would erode the effective and respectful care they provide patients. In addition, I argue that physicians should use AI black boxes only for patients in dire straits, or when physicians use AI as a “co-pilot” (analogous to a spellchecker) but can independently confirm its accuracy. I respond to A.J. London’s objection that physicians already prescribe some drugs without knowing why they (...)
    7 citations
  12. The Black Box in Stoic Axiology.Michael Vazquez - 2023 - Pacific Philosophical Quarterly 104 (1):78–100.
    The ‘black box’ in Stoic axiology refers to the mysterious connection between the input of Stoic deliberation (reasons generated by the value of indifferents) and the output (appropriate actions). In this paper, I peer into the black box by drawing an analogy between Stoic and Kantian axiology. The value and disvalue of indifferents is intrinsic, but conditional. An extrinsic condition on the value of a token indifferent is that one's selection of that indifferent is sanctioned by context-relative ethical (...)
    1 citation
  13. Scaffolding Natural Selection.Walter Veit - 2022 - Biological Theory 17 (2):163-180.
    Darwin provided us with a powerful theoretical framework to explain the evolution of living systems. Natural selection alone, however, has sometimes been seen as insufficient to explain the emergence of new levels of selection. The problem is one of “circularity” for evolutionary explanations: how to explain the origins of Darwinian properties without already invoking their presence at the level they emerge. That is, how does evolution by natural selection commence in the first place? Recent results in experimental evolution suggest (...)
    9 citations
  14. The emergence of “truth machines”?: Artificial intelligence approaches to lie detection.Jo Ann Oravec - 2022 - Ethics and Information Technology 24 (1):1-10.
    This article analyzes emerging artificial intelligence (AI)-enhanced lie detection systems from ethical and human resource (HR) management perspectives. I show how these AI enhancements transform lie detection, followed with analyses as to how the changes can lead to moral problems. Specifically, I examine how these applications of AI introduce human rights issues of fairness, mental privacy, and bias and outline the implications of these changes for HR management. The changes that AI is making to lie detection are altering the roles (...)
    1 citation
  15. Group Field Theories: Decoupling Spacetime Emergence from the Ontology of non-Spatiotemporal Entities.Marco Forgione - 2024 - European Journal for Philosophy of Science 14 (22):1-23.
    With the present paper I maintain that the group field theory (GFT) approach to quantum gravity can help us clarify and distinguish the problems of spacetime emergence from the questions about the nature of the quanta of space. I will show that the mechanism of phase transition suggests a form of indifference between scales (or phases) and that such an indifference allows us to black-box questions about the nature of the ontology of the fundamental levels of the theory. I (...)
  16. The Pharmacological Significance of Mechanical Intelligence and Artificial Stupidity.Adrian Mróz - 2019 - Kultura I Historia 36 (2):17-40.
    By drawing on the philosophy of Bernard Stiegler, the phenomenon of mechanical (a.k.a. artificial, digital, or electronic) intelligence is explored in terms of its real significance as an ever-repeating threat of the reemergence of stupidity (as cowardice), which can be transformed into knowledge (pharmacological analysis of poisons and remedies) by practices of care, through the outlook of what researchers describe equivocally as “artificial stupidity”, which has been identified as a new direction in the future of computer science and machine (...) solving as well as a new difficulty to be overcome. I weave together a web of “artificial stupidity”, which denotes the mechanic (1), the human (2), or the global (3). With regards to machine intelligence, artificial stupidity refers to: 1a) Weak A.I. or a rhetorical inversion of designating contemporary practices of narrow task-based procedures by algorithms in opposition to “True A.I.”; 1b) the restriction or employment of constraints that weaken the effectiveness of A.I., which is to say a “dumbing-down” of A.I. by intentionally introducing mistakes by programmers for safety concerns and human interaction purposes; 1c) the failure of machines to perform designated tasks; 1d) a lack of a noetic capacity, which is a lack of moral and ethical discretion; 1e) a lack of causal reasoning (true intelligence) as opposed to statistical associative “curve fitting”; or 2) the phenomenon of increasing human “stupidity” or drive-based behaviors, which is considered as the degradation of human intelligence and/or “intelligent human behavior” through technics; and finally, 3) the global phenomenon of increasing entropy due to a black-box economy of closed systems and/or industry consolidation.
  17. Hempel’s Raven Revisited.Andrew Bollhagen - 2021 - Journal of Philosophy 118 (3):113-137.
    The paper takes a novel approach to a classic problem—Hempel’s Raven Paradox. A standard approach to it supposes the solution to consist in bringing our inductive logic into “reflective equilibrium” with our intuitive judgements about which inductive inferences we should license. This approach leaves the intuitions as a kind of black box and takes it on faith that, whatever the structure of the intuitions inside that box might be, it is one for which we can construct an isomorphic (...)
  18. THE SPECTACLE OF REFLECTION: ON DREAMS, NEURAL NETWORKS AND THE VISUAL NATURE OF THOUGHT.Magdalena Szalewicz - manuscript
    The article considers the problem of images and the role they play in our reflection, turning to evidence provided by two seemingly very distant theories of mind together with two sorts of corresponding visions: dreams as analyzed by Freud who claimed that they are pictures of our thoughts, and their mechanical counterparts produced by neural networks designed for object recognition and classification. Freud’s theory of dreams has largely been ignored by philosophers interested in cognition, most of whom focused solely (...)
  19. Bundle Theory’s Black Box: Gap Challenges for the Bundle Theory of Substance.Robert Garcia - 2014 - Philosophia 42 (1):115-126.
    My aim in this article is to contribute to the larger project of assessing the relative merits of different theories of substance. An important preliminary step in this project is assessing the explanatory resources of one main theory of substance, the so-called bundle theory. This article works towards such an assessment. I identify and explain three distinct explanatory challenges an adequate bundle theory must meet. Each points to a putative explanatory gap, so I call them the Gap Challenges. I consider (...)
    12 citations
  20. AI Ethics in Legal Decision-Making: Bias, Transparency, and Accountability. J. D. Jelena Vujicic - 2025 - International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering 14 (5).
    Artificial Intelligence (AI) systems used in legal decision-making processes have created significant ethical challenges through their integration, leading to problems with bias and necessitating better transparency and accountability measures. This paper investigates the discriminatory effects of algorithmic bias by analyzing AI technologies that learn from historical legal datasets containing potential institutional biases. The opacity of AI decision-making, referred to as "black-boxed" decisions, creates complex obstacles to achieving both explainable judgments and fair outcomes. The article examines the absent responsibility structure (...)
  21. The Relations Between Pedagogical and Scientific Explanations of Algorithms: Case Studies from the French Administration.Maël Pégny - manuscript
    The opacity of some recent Machine Learning (ML) techniques has raised fundamental questions about their explainability and created a whole domain dedicated to Explainable Artificial Intelligence (XAI). However, most of the literature has been dedicated to explainability as a scientific problem dealt with typical methods of computer science, from statistics to UX. In this paper, we focus on explainability as a pedagogical problem emerging from the interaction between lay users and complex technological systems. We defend an empirical methodology (...)
  22. Peeking Inside the Black Box: A New Kind of Scientific Visualization.Michael T. Stuart & Nancy J. Nersessian - 2018 - Minds and Machines 29 (1):87-107.
    Computational systems biologists create and manipulate computational models of biological systems, but they do not always have straightforward epistemic access to the content and behavioural profile of such models because of their length, coding idiosyncrasies, and formal complexity. This creates difficulties both for modellers in their research groups and for their bioscience collaborators who rely on these models. In this paper we introduce a new kind of visualization that was developed to address just this sort of epistemic opacity. The visualization (...)
    7 citations
  23. Opening the black box of commodification: A philosophical critique of actor-network theory as critique.Henrik Rude Hvid - manuscript
    This article argues that actor-network theory, as an alternative to critical theory, has lost its critical impetus when examining commodification in healthcare. The paper claims that the reason for this is the way in which actor-network theory’s anti-essentialist ontology seems to black box 'intentionality' and ethics of human agency as contingent interests. The purpose of this paper was to open the normative black box of commodification, and compare how Marxism, Habermas and ANT can deal with commodification and ethics (...)
  24. Glanville’s ‘Black Box’: what can an Observer know? Lance Nizami - 2020 - Rivista Italiana di Filosofia del Linguaggio 14 (2):47-62.
    A ‘Black Box’ cannot be opened to reveal its mechanism. Rather, its operations are inferred through input from (and output to) an ‘observer’. All of us are observers, who attempt to understand the Black Boxes that are Minds. The Black Box and its observer constitute a system, differing from either component alone: a ‘greater’ Black Box to any further-external-observer. To Glanville (1982), the further-external-observer probes the greater-Black-Box by interacting directly with its core Black Box, (...)
  25. Inferring causation in epidemiology: mechanisms, black boxes, and contrasts.Alex Broadbent - 2011 - In Phyllis McKay Illari Federica Russo, Causality in the Sciences. Oxford, GB: Oxford University Press. pp. 45--69.
    This chapter explores the idea that causal inference is warranted if and only if the mechanism underlying the inferred causal association is identified. This mechanistic stance is discernible in the epidemiological literature, and in the strategies adopted by epidemiologists seeking to establish causal hypotheses. But the exact opposite methodology is also discernible, the black box stance, which asserts that epidemiologists can and should make causal inferences on the basis of their evidence, without worrying about the mechanisms that might underlie (...)
    34 citations
  26. We might be afraid of black-box algorithms.Carissa Veliz, Milo Phillips-Brown, Carina Prunkl & Ted Lechterman - 2021 - Journal of Medical Ethics 47.
    Fears of black-box algorithms are multiplying. Black-box algorithms are said to prevent accountability, make it harder to detect bias and so on. Some fears concern the epistemology of black-box algorithms in medicine and the ethical implications of that epistemology. Durán and Jongsma (2021) have recently sought to allay such fears. While some of their arguments are compelling, we still see reasons for fear.
    4 citations
  27. Towards Knowledge-driven Distillation and Explanation of Black-box Models.Roberto Confalonieri, Guendalina Righetti, Pietro Galliani, Nicolas Toquard, Oliver Kutz & Daniele Porello - 2021 - In Roberto Confalonieri, Guendalina Righetti, Pietro Galliani, Nicolas Toquard, Oliver Kutz & Daniele Porello, Proceedings of the Workshop on Data meets Applied Ontologies in Explainable {AI} {(DAO-XAI} 2021) part of Bratislava Knowledge September {(BAKS} 2021), Bratislava, Slovakia, September 18th to 19th, 2021. CEUR 2998.
    We introduce and discuss a knowledge-driven distillation approach to explaining black-box models by means of two kinds of interpretable models. The first is perceptron (or threshold) connectives, which enrich knowledge representation languages such as Description Logics with linear operators that serve as a bridge between statistical learning and logical reasoning. The second is Trepan Reloaded, an approach that builds post-hoc explanations of black-box classifiers in the form of decision trees enhanced by domain knowledge. Our aim is, firstly, (...)
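The distillation idea sketched in the abstract, fitting a small interpretable model to a black box's outputs, can be illustrated in a few lines; the quadratic "black box" and one-threshold decision stump here are hypothetical stand-ins, not the perceptron connectives or Trepan Reloaded systems the paper actually studies:

```python
# Sketch of post-hoc distillation: query a black-box classifier on
# probe inputs, then fit a small interpretable surrogate (here a
# one-threshold decision stump) to its answers. The black box and
# probe points are invented for illustration.

def black_box(x):
    # Opaque model under study; we treat its internals as inaccessible.
    return 1 if 0.3 * x * x - x + 0.6 > 0 else 0

def fit_stump(xs, labels):
    """Pick threshold t so that 'predict 1 iff x < t' best matches labels."""
    best_t, best_acc = None, -1.0
    for t in xs:
        preds = [1 if x < t else 0 for x in xs]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(xs)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

xs = [i / 10 for i in range(31)]      # probe inputs in [0, 3]
labels = [black_box(x) for x in xs]   # black-box answers
t, acc = fit_stump(xs, labels)
```

The stump reproduces most but not all of the black box's behaviour (its accuracy stays below 1.0 because it misses the region where the quadratic turns positive again), which is exactly the critics' point that surrogates approximate rather than replicate the original model.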
  28. Mind and Machine: at the core of any Black Box there are two (or more) White Boxes required to stay in. Lance Nizami - 2020 - Cybernetics and Human Knowing 27 (3):9-32.
    This paper concerns the Black Box. It is not the engineer’s black box that can be opened to reveal its mechanism, but rather one whose operations are inferred through input from (and output to) a companion observer. We are observers ourselves, and we attempt to understand minds through interactions with their host organisms. To this end, Ranulph Glanville followed W. Ross Ashby in elaborating the Black Box. The Black Box and its observer together form a system (...)
  29. The Electric Mountain Bike as Pharmakon: Examining the Problems and Possibilities of an Emerging Technology.Jim Cherrington & Jack Black - 2023 - Mobilities 18 (6):1000-1015.
    In the last decade there has been an upsurge in the popularity of electric mountain bikes. However, opinion is divided regarding the implications of this emerging technology. Critics warn of the dangers they pose to landscapes, habitats, and ecological diversity, whilst advocates highlight their potential in increasing the accessibility of the outdoors for riders who would otherwise be socially and/or physically excluded. Drawing on interview data with 30 electric mountain bike users in England, this paper represents one of the first (...)
  30. Negligent Algorithmic Discrimination.Andrés Páez - 2021 - Law and Contemporary Problems 84 (3):19-33.
    The use of machine learning algorithms has become ubiquitous in hiring decisions. Recent studies have shown that many of these algorithms generate unlawful discriminatory effects in every step of the process. The training phase of the machine learning models used in these decisions has been identified as the main source of bias. For a long time, discrimination cases have been analyzed under the banner of disparate treatment and disparate impact, but these concepts have been shown to be ineffective in the (...)
    1 citation
  31. The Panda’s Black Box: Opening Up the Intelligent Design Controversy edited by Nathaniel C. Comfort. [REVIEW]W. Malcolm Byrnes - 2008 - The National Catholic Bioethics Quarterly 8 (2):385-387.
  32. Infinite Regresses of Justification.Oliver Black - 1988 - International Philosophical Quarterly 28 (4):421-437.
    This paper uses a schema for infinite regress arguments to provide a solution to the problem of the infinite regress of justification. The solution turns on the falsity of two claims: that a belief is justified only if some belief is a reason for it, and that the reason relation is transitive.
    14 citations
  33. Clinical applications of machine learning algorithms: beyond the black box.David S. Watson, Jenny Krutzinna, Ian N. Bruce, Christopher E. M. Griffiths, Iain B. McInnes, Michael R. Barnes & Luciano Floridi - 2019 - British Medical Journal 364:I886.
    Machine learning algorithms may radically improve our ability to diagnose and treat disease. For moral, legal, and scientific reasons, it is essential that doctors and patients be able to understand and explain the predictions of these models. Scalable, customisable, and ethical solutions can be achieved by working together with relevant stakeholders, including patients, data scientists, and policy makers.
    20 citations
  34. The global workspace theory, the phenomenal concept strategy, and the distribution of consciousness.Dylan Black - 2020 - Consciousness and Cognition 84 (C):102992.
    Peter Carruthers argues that the global workspace theory implies there are no facts of the matter about animal consciousness. The argument is easily extended to other cognitive theories of consciousness, posing a general problem for consciousness studies. But the argument proves too much, for it also implies that there are no facts of the matter about human consciousness. A key assumption of the argument is that scientific theories of consciousness must explain away the explanatory gap. I criticize this assumption (...)
    1 citation
  35. "Love Thy Social Media!": Hysteria and the Interpassive Subject.Jack Black - 2022 - CLCWeb: Comparative Literature and Culture 24 (4):1--10.
    According to the 2020 docudrama, The Social Dilemma, our very addiction to “social media” has, today, become encapsulated in the tensions between its facilitation as a mode of interpersonal communication and as an insidious conduit for machine learning, surveillance capitalism and manipulation. Amidst a variety of interviewees – many of whom are former employees of social media companies – the documentary finishes on a unanimous conclusion: something must change. By using the docudrama as a pertinent example of our “social media (...)
    1 citation
  36. Tragbare Kontrolle: Die Apple Watch als kybernetische Maschine und Black Box algorithmischer Gouvernementalität.Anna-Verena Nosthoff & Felix Maschewski - 2020 - In Anna-Verena Nosthoff & Felix Maschewski, Black Boxes - Versiegelungskontexte und Öffnungsversuche. pp. 115-138.
    This contribution reads the Apple Watch, against the background of its “aesthetics of existence”, as a biopolitical artifact and an apparatus of the society of control, but above all as a cybernetic black box. The essay aims to show that this feedback-driven apparatus not only condenses fundamental discourses of the digital age (prevention, health, bio- and psychopolitical forms of regulation, etc.), but that, by its very inherent logic, it generates transparency through opacity and simplicity (i.e., orientation) through complexity, and thereby, not least, a quite specific image of the human being (...)
    1 citation
  37. The subjective and objective violence of terrorism: analysing 'British values' in newspaper coverage of the 2017 London Bridge attack.Jack Black - 2019 - Critical Studies on Terrorism 12 (2):228-249.
    This article examines how Žižek’s analysis of “subjective” violence can be used to explore the ways in which media coverage of a terrorist attack is contoured and shaped by less noticeable forms of “objective” (symbolic and systemic) violence. Drawing upon newspaper coverage of the 2017 London Bridge attack, it is noted how examples of “subjective” violence were grounded in the externalization of a clearly identifiable “other”, which symbolically framed the terrorists and the attack as tied to and representative of the (...)
  38. Epistemic virtues of harnessing rigorous machine learning systems in ethically sensitive domains.Thomas F. Burns - 2023 - Journal of Medical Ethics 49 (8):547-548.
    Some physicians, in their care of patients at risk of misusing opioids, use machine learning (ML)-based prediction drug monitoring programmes (PDMPs) to guide their decision making in the prescription of opioids. This can cause a conflict: a PDMP Score can indicate a patient is at a high risk of opioid abuse while a patient expressly reports oppositely. The prescriber is then left to balance the credibility and trust of the patient with the PDMP Score. Pozzi1 argues that a prescriber who (...)
  39. Review of the book Algorithmic Desire: Toward a New Structuralist Theory of Social Media, by Matthew Flisfeder. [REVIEW]Jack Black - 2024 - Postdigital Science and Education 6 (2):691--704.
    It is this very contention that sits at the heart of Matthew Flisfeder’s, Algorithmic Desire: Towards a New Structuralist Theory of Social Media (2021). In spite of the accusation that, today, our social media is in fact hampering democracy and subjecting us to increasing forms of online and offline surveillance, for Flisfeder (2021: 3), ‘[s]ocial media remains the correct concept for reconciling ourselves with the structural contradictions of our media, our culture, and our society’. With almost every aspect of our (...)
  40. Creating specialized corpora from digitized historical newspaper archives: An iterative bootstrapping approach.Joshua Wilson Black - 2022 - Digital Scholarship in the Humanities:1-19.
    The availability of large digital archives of historical newspaper content has transformed the historical sciences. However, the scale of these archives can limit the direct application of advanced text processing methods. Even if it is computationally feasible to apply sophisticated language processing to an entire digital archive, if the material of interest is a small fraction of the archive, the results are unlikely to be useful. Methods for generating smaller specialized corpora from large archives are required to solve this (...). This article presents such a method for historical newspaper archives digitized using the METS/ALTO XML standard (Veridian Software, n.d.). The method is an ‘iterative bootstrapping’ approach in which candidate corpora are evaluated using text mining techniques, items are manually labelled, and Naïve Bayes text classifiers are trained and applied in order to produce new candidate corpora. The method is illustrated by a case study that investigates philosophical content, broadly construed, in pre-1900 English-language New Zealand newspapers. Extensive code is provided in Supplementary Materials. (shrink)
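    The bootstrapping loop described in this abstract (hand-label a few items, train a Naïve Bayes classifier, use it to propose a new candidate corpus, repeat) can be sketched in miniature. The following is a dependency-free Python illustration of that general idea, not the paper's actual pipeline: the toy snippets, seed labels, and round count are all invented for illustration.

    ```python
    # Minimal sketch of iterative bootstrapping for corpus building:
    # seed hand-labels -> train Naive Bayes -> classify the archive ->
    # accept proposed positives as the next candidate corpus.
    # All data and parameters here are illustrative assumptions.
    import math
    from collections import Counter

    def train_nb(docs, labels):
        """Multinomial Naive Bayes with add-one smoothing over word counts."""
        counts = {0: Counter(), 1: Counter()}
        class_totals = Counter(labels)
        for doc, y in zip(docs, labels):
            counts[y].update(doc.split())
        vocab = set(w for c in counts.values() for w in c)

        def predict(doc):
            scores = {}
            for y in (0, 1):
                total = sum(counts[y].values())
                score = math.log(class_totals[y] / len(labels))
                for w in doc.split():
                    score += math.log((counts[y][w] + 1) / (total + len(vocab)))
                scores[y] = score
            return max(scores, key=scores.get)

        return predict

    # Toy "archive" of newspaper snippets.
    archive = [
        "a lecture on kant and the limits of pure reason",
        "wool prices rose sharply at the saleyards",
        "debate on free will and moral responsibility",
        "shipping arrivals at the port of wellington",
        "kant on pure reason and the moral will",
        "wool and shipping news from the port",
    ]
    # Seed hand-labels standing in for the manual-labelling step
    # (1 = philosophical content, 0 = not).
    labelled = {0: 1, 1: 0, 2: 1, 3: 0}

    for _ in range(2):  # two bootstrapping rounds
        predict = train_nb([archive[i] for i in labelled],
                           [labelled[i] for i in labelled])
        # The classifier proposes labels for unlabelled items; in the
        # real workflow these candidates would be reviewed by hand
        # before being folded into the training set.
        for i, doc in enumerate(archive):
            if i not in labelled:
                labelled[i] = predict(doc)

    corpus = [archive[i] for i in range(len(archive)) if labelled[i] == 1]
    print(len(corpus))  # prints 3: the two seeds plus the new Kant snippet
    ```

    Each round grows the labelled set, so later classifiers see more training data; in practice the manual-review step is what keeps label noise from compounding across rounds.
    
    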
  41. Organic wastes, black-soldier flies, and environmental problems through the lens of the stock market.Quan-Hoang Vuong & Minh-Hoang Nguyen - manuscript
    As the world’s population grows and urbanization continues, the global waste crisis is becoming more severe, especially in developing countries. Without proper waste management, they may encounter various environmental and health risks. Biological technologies are regarded as promising waste management and recycling approaches in developing countries due to their cost-effectiveness and capability to handle diverse waste categories. One prominent technology in this aspect is the vermicomposting of organic waste utilizing the black soldier fly larvae. Nevertheless, significant financial resources are (...)
  42. Not the Measurement Problem's Problem: Black Hole Information Loss with Schrödinger's Cat.Saakshi Dulani - 2025 - Philosophy of Science.
    Recently, several philosophers and physicists have increasingly noticed the hegemony of unitarity in the black hole information loss discourse and are challenging its legitimacy in the face of the measurement problem. They proclaim that embracing non-unitarity solves two paradoxes for the price of one. Though I share their distaste over the philosophical bias, I disagree with their strategy of still privileging certain interpretations of quantum theory. I argue that information-restoring solutions can be interpretation-neutral because the manifestation of non-unitarity (...)
  43. Why You Should One-box in Newcomb's Problem.Howard J. Simmons - manuscript
    I consider a familiar argument for two-boxing in Newcomb's Problem and find it defective because it involves a type of divergence from standard Bayesian reasoning, which, though sometimes justified, conflicts with the stipulations of the Newcomb scenario. In an appendix, I also find fault with a different argument for two-boxing that has been presented by Graham Priest.
  44. Beyond black dots and nutritious things: A solution to the indeterminacy problem.Marc Artiga - 2021 - Mind and Language 36 (3):471-490.
    The indeterminacy problem is one of the most prominent objections against naturalistic theories of content. In this essay I present this difficulty and argue that extant accounts are unable to solve it. Then, I develop a particular version of teleosemantics, which I call ’explanation-based teleosemantics’, and show how this outstanding problem can be addressed within the framework of a powerful naturalistic theory.
  45. Artificial Intelligence and Patient-Centered Decision-Making.Jens Christian Bjerring & Jacob Busch - 2020 - Philosophy and Technology 34 (2):349-371.
    Advanced AI systems are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before they will surpass human practitioners in terms of accuracy, reliability, and knowledge. If this is true, practitioners will have a prima facie epistemic and professional obligation to align their medical verdicts with those of advanced AI systems. However, in light of their complexity, these AI systems will often function as black boxes: the details of their contents, (...)
  46. Using enactive robotics to think outside of the problem-solving box: How sensorimotor contingencies constrain the forms of emergent autonomous habits.Matthew Egbert & Xabier E. Barandiaran - 2022 - Frontiers in Neurorobotics 16:1-23.
    We suggest that the influence of biology in ‘biologically inspired robotics’ can be embraced at a deeper level than is typical, if we adopt an enactive approach that moves the focus of interest from how problems are solved to how problems emerge in the first place. In addition to being inspired by mechanisms found in natural systems or by evolutionary design principles directed at solving problems posited by the environment, we can take inspiration from the precarious, self-maintaining organization of living (...)
  47. Black Hole Philosophy.Gustavo E. Romero - 2021 - Crítica. Revista Hispanoamericana de Filosofía 53 (159):73–132.
    Black holes are arguably the most extraordinary physical objects we know in the universe. Despite our thorough knowledge of black hole dynamics and our ability to solve Einstein’s equations in situations of ever increasing complexity, the deeper implications of the very existence of black holes for our understanding of space, time, causality, information, and many other things remain poorly understood. In this paper I survey some of these problems. If something is going to be clear from my (...)
  48. Explicability of artificial intelligence in radiology: Is a fifth bioethical principle conceptually necessary?Frank Ursin, Cristian Timmermann & Florian Steger - 2022 - Bioethics 36 (2):143-153.
    Recent years have witnessed intensive efforts to specify which requirements ethical artificial intelligence (AI) must meet. General guidelines for ethical AI consider a varying number of principles important. A frequent novel element in these guidelines, that we have bundled together under the term explicability, aims to reduce the black-box character of machine learning algorithms. The centrality of this element invites reflection on the conceptual relation between explicability and the four bioethical principles. This is important because the application of general (...)
  49. Philosophical issues about Black holes.Gustavo E. Romero - 2014 - In Abraham Barton, Advances in Black Holes Research. New York: Nova Science Publishers. pp. 25-58.
    Black holes are extremely relativistic objects. Physical processes around them occur in a regime where the gravitational field is extremely intense. Under such conditions, our representations of space, time, gravity, and thermodynamics are pushed to their limits. In such a situation philosophical issues naturally arise. In this chapter I review some philosophical questions related to black holes. In particular, the relevance of black holes for the metaphysical dispute between presentists and eternalists, the origin of the second law (...)
  50. The virtues of interpretable medical AI.Joshua Hatherley, Robert Sparrow & Mark Howard - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):323-332.
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are “black boxes.” The initial response in the literature was a demand for “explainable AI.” However, recently, several authors have suggested that making AI more explainable or “interpretable” is likely to be at the cost of the accuracy of these systems and that prioritizing interpretability in medical AI may constitute a “lethal prejudice.” In this paper, we defend the value of (...)
1 — 50 / 969