New Technologies at Work

The AEFC sector is considered highly segmented, innovation-averse, and struggling with talent retention. Recent research initiatives such as the National Centre of Competence in Research (NCCR) foster interdisciplinary collaborations that bring together technological development and expertise from six different academic disciplines, striving for visionary architectural results with industry partners. In this project, we aim to understand these emerging interdisciplinary work processes within the research and development teams themselves. Applying different qualitative research methods, such as non-participatory field observations and interviews, in three selected case studies, we will compare the results to determine the factors that favor or hinder the integrative design and implementation of technology in interdisciplinary research.
This project is carried out within the Socio-Economic Working Group of the NCCR Digital Fabrication.
For more information, please contact Aniko Kahlert.
 

The opacity of machine learning (ML)-based systems is one of the key barriers to overcome in order to fully benefit from these technologies, especially in high-risk operations such as the railways, which require strict regulatory oversight. This challenge is exacerbated in collaborative work processes that involve several human operators, possibly from different occupational domains, and several ML technologies. The project addresses the challenge of designing explainable AI within an overarching design framework for socio-technical integration based on the recently proposed concept of networks of accountability. This concept outlines the interdependencies created between technology developers, organizational and individual users, and regulators through the continuous process of data production and data use in ML-supported decision-making. The project will develop methods for eliciting the different explainability requirements and for supporting their implementation, using a multi-method approach comprising expert interviews, participant observation, design workshops, and work process simulation. The project will be carried out with several partners at SBB and Siemens, focusing on solutions for application domains in traffic control and operations, inspections, and predictive maintenance. For more information, please contact Lena Schneider.
 

The increasing reliance on artificial intelligence (AI) for decision-making and task performance in organizational contexts has affected and will continue to affect creative processes in organizations. In this multi-faceted research project, we look at the effect of AI utilization on creativity from several angles. One area of focus relates to the increasing involvement of AI in the performance of generative tasks, such as the creation of organizational artifacts. As a result, human involvement in organizational creativity will shift from idea generation to idea evaluation. Given that humans are clouded by biases when they evaluate creativity, we examine whether the identity of the producer of a given artifact, AI or human, is a source of bias affecting people’s creativity evaluations of that artifact, what mechanisms drive this effect, and what boundary conditions affect it. Another area of focus in this project is the role of AI as an evaluator and how this affects decision-making, behaviors, and, in particular, creativity among human workers. We study not only how being evaluated by AI changes people’s cognitive, affective, and behavioral reactions, including their preferences for being evaluated by humans or AI in different contexts (algorithm aversion vs. algorithm appreciation), but also whether knowing that AI will evaluate their output affects their creativity, as well as why and when this happens. For more information, please contact Dr. Federico Magni.

After relying on web-based recruitment for more than two decades, organizations are increasingly utilizing artificial intelligence (AI) to identify, attract, profile, interview, and assess candidates. This trend has developed because of the advantages that AI provides in terms of efficiency, unbiasedness, and data-processing power. However, as AI takes on a more active role in decision-making compared to previous technologies, extensive reliance on it may have undesirable consequences due to its opaque decision-making process and lack of common sense. In line with these countervailing perspectives, AI recruitment can engender both positive and negative reactions among job applicants. It therefore remains unclear whether, how, and when having an AI (vs. human) recruiter influences relevant recruitment outcomes. Drawing upon theories of algorithm aversion and appreciation, we study how and under which conditions attraction to the organization, job choices, and other relevant recruitment outcomes are affected by interfacing with an AI (vs. human) recruiter. For more information, please contact Dr. Federico Magni.

Lack of explainability in machine learning (ML) is seen as one of the major hurdles to the successful adoption of ML in medicine. Current approaches to solving this issue by designing explainable AI (XAI) have shown limited success or could be misguided because clinicians may have requirements for the use of ML in clinical practice that differ from those assumed by developers. In this multi-method study, we explore these potential differences in a sample of clinicians and developers as they engage in the co-design of an ML system, and we develop a framework with specific design recommendations for XAI in medicine. For more information, please contact Dr. Nadine Bienefeld.

In this project, we investigate how team decision-making and coordination processes change when AI/ML-based technologies are introduced. In particular, we are interested in shared team mental models, transactive memory systems, and the related team process and outcome measures. Based on our findings, and in collaboration with various hospitals, we develop and evaluate training initiatives that help physicians and nurses to “team up” with AI/ML-based technologies. For more information, please contact Dr. Nadine Bienefeld.

As AI/ML-based technologies are increasingly used in various work domains, important questions arise regarding how to best align the technology with the needs of people, organizations, and society at large. Efforts towards socio-technical integration in system design, however, encounter major difficulties stemming from a lack of methods that support user involvement in early-phase technological development, as well as from a lack of awareness among organizational decision-makers of the relevance of socio-technical integration. We apply a set of methods to address these difficulties and develop recommendations for the design of both technological and work systems. We study these new socio-technical systems using the example of healthcare teams, where both the potential opportunities and the risks of AI/ML are especially high. To this end, we also assess the viewpoints of various stakeholders within the healthcare ecosystem (e.g., healthcare professionals, AI developers, patients, management, ethicists, and policymakers) and thus provide a comprehensive perspective. For more information, please contact Dr. Nadine Bienefeld.

AI-based technologies are often described as potentially transforming healthcare, ultimately contributing to a safer, more efficient medical future. However, the successful adoption of these technologies depends on multiple groups of stakeholders, one of which is the public. In this project, we investigate which risks and benefits members of the public perceive in the application of AI in healthcare, and how these perceptions relate to healthcare choices when people face the option of receiving different configurations of AI- or human-based care. Further, we identify cognitive-affective factors that contribute to these perceptions and choices. Based on our results, we develop recommendations for addressing and integrating the public's preferences regarding the use of AI in healthcare into AI design and healthcare-related political decision-making. For more information, please contact Sophie Kerstan.

Using AI to support the analysis of medical imaging data, such as X-rays or CT scans, ranks among the most prominently discussed use cases of AI in healthcare. Although these technologies can, for example, mark cancerous regions on mammograms with impressive accuracy, they are far from infallible. For the foreseeable future, such technologies will thus not be implemented as standalone systems that make diagnostic decisions autonomously but rather as tools to support physicians' decision-making. Decades of research in other domains where decision-support systems are in place, e.g., aviation, have revealed that people tend to rely on such systems to an extent where they do not notice when the technology makes mistakes. This means that erroneous AI suggestions regarding the classification of medical images might be blindly approved. Despite the severity of the potential consequences of this issue, ways to mitigate overreliance on technological support have yet to be identified. In this project, we thus investigate how physicians' attitudes towards AI might contribute to or protect against overreliance on AI-based decision support. Based on the results from our experiments, we develop recommendations for training healthcare professionals who engage with AI. For more information, please contact Sophie Kerstan.

The development of ever more sophisticated computational methods and continuous increases in technological capabilities have triggered a shift in how AI's role in organizations is portrayed. While AI has so far mostly been depicted as a tool that supports employees with very narrow aspects of a task, it is now being discussed to what extent it can also fill the role of a more flexible, adaptive teammate. Research shows that people have specific expectations about what AI should be able to do to qualify as a team member. In interactions with AI, however, these expectations may or may not be fulfilled. Based on broader research on expectation (dis)confirmation, it can be assumed that violating people's expectations about their AI team members has detrimental consequences for collaboration and task-related outcomes in human-AI teams. To shed light on the exact nature of these relationships, we investigate how expectations and their (dis)confirmation affect human-AI teamwork. Based on the results of our study, we will develop recommendations for designing AI team members and for training employees regarding their AI colleagues' capabilities. For more information, please contact Sophie Kerstan.
