Abstract

Advances in artificial intelligence (AI) offer new opportunities for human–AI cooperation in engineering design. Human trust in AI is a crucial factor in ensuring effective human–AI cooperation, and prior studies have explored several approaches to enhancing that trust. However, it remains an open question in engineering design whether human designers trust an AI more, and achieve better joint performance, when they are deceived into thinking they are working with another human designer. This research assesses the impact of design facilitator identity ("human" versus AI) on human designers through a human subjects study in which all participants work with the same AI design facilitator and can adopt their facilitator's design at any time during the study. Half of the participants are told that they are working with an AI, while the other half are told that they are working with another human participant when in fact they are working with the AI design facilitator. The results demonstrate that, in this study, human designers adopt their facilitator's design less often on average when they are deceived into believing the AI design facilitator is another human designer. However, design facilitator identity does not have a significant impact on human designers' average performance, perceived workload, or perceived competency and helpfulness of their design facilitator. These results caution against deceiving human designers about the identity of an AI design facilitator in engineering design.
