Abstract

Cyber–physical–social systems (CPSS) with highly integrated functions of sensing, actuation, computation, and communication are becoming mainstream consumer and commercial products. The performance of CPSS heavily relies on information sharing between devices. Given the extensive data collection and sharing, security and privacy are major concerns. Thus, one major challenge in designing CPSS is how to incorporate the perception of trust in product and systems design. Recently, a trust quantification method was proposed to measure the trustworthiness of CPSS with quantitative metrics of ability, benevolence, and integrity. The CPSS network architecture can be optimized by choosing a subnet that maximizes the trust metrics. This combinatorial network optimization problem, however, is computationally challenging. Most of the available global optimization algorithms for solving such problems are heuristic methods. In this paper, a surrogate-based discrete Bayesian optimization method is developed to perform network design, where the most trustworthy CPSS subnetwork with which a reference node collaborates and shares information is identified. The applications of the ability and benevolence metrics in design optimization of CPSS architecture are demonstrated.

1 Introduction

Cyber–physical systems (CPS) are physical devices that have highly integrated functions of sensing, actuation, computation, and communication. Currently, both consumer and commercial products are becoming more intelligent as they are implemented as CPS. These CPS devices have embedded sensors and can collect data about the surrounding environment. The data are shared between the devices, which helps human users as well as the intelligent devices make individual decisions. The decisions can be further executed with the actuation units of the devices. CPS devices are the essential elements for smart home, smart city, intelligent manufacturing, personalized medicine, autonomous and safe transportation, omnipresent energy supplies, and many other applications. When CPS interact with human users and are integrated with human society, they are also termed cyber–physical–social systems (CPSS), where the social dimension of the systems needs to be considered.

The design of CPSS is challenging because various factors and constraints in the cyber, physical, and social dimensions of design space need to be considered. There are unique challenges in CPSS design, such as sustainability, reliability, resilience, interoperability, adaptability, bio-compatibility, flexibility, and safety in the physical subspace. There are also principles of human-in-the-loop, data-driven design, co-design, scalability, usability, and security that need to be considered in the cyber subspace. In social subspace, the perceptions of risk, trust, and privacy, as well as memory capacity and emotion of users need to be incorporated.

The rapid growth of CPSS requires engineers to adopt a new design-for-connectivity principle. Different from traditional products, CPSS devices heavily rely on information sharing with each other to function. A standalone CPSS device that is disconnected from networks cannot perform the functions for which it is designed. Thus, network connectivity is essential for CPSS. These devices form the Internet of Things (IoT). How to consider connectivity-related issues in product design, therefore, is new to engineers. In particular, each CPSS device constantly collects data and shares them with other devices in the network. Information security and privacy become critical issues in designing such networked systems. At the high-level application layer, decisions of what data can be collected, where data are stored, who can access the data, which portion of data can be shared, etc. need to be made during software design. These design decisions simultaneously affect hardware and mechanism design. The effectiveness of CPSS functionalities critically depends on what information is shared between devices and how. Therefore, trust is an important design feature for these systems to work together. Designing the decision-making units on CPSS, or the decision support for human users, needs to incorporate the social dimension of trust.

Furthermore, how to design trustworthy CPSS that human users are willing to adopt and use is critical, as personal information is likely to be collected and shared by the devices. The users’ trust perceptions about a system may vary and can affect the effectiveness of human–device interactions. Thus, the social dimension of trust is an important factor for design engineers to consider.

Trust has been extensively studied in the domains of psychology, organizational behavior, marketing, and computer science. However, most studies remain conceptual and qualitative. Quantitative measurements of trustworthiness are needed when the concept is applied in engineering design and optimization. Some quantitative studies of trust have been conducted in computer science, where trustworthiness is mostly quantified by quality of service (QoS) in network communication, e.g., success rate and consistency in packet forwarding and other transactions. Reputations from online user ratings and recommendations have also been used. These metrics are quantities only in the cyber design space. There is still a lack of trustworthiness metrics that span both the cyber and social design spaces, which are important to guide the design of trustworthy CPSS at the levels of network architecture and devices.

In this work, the perception of trust is quantified and applied in CPSS architecture design, where a node’s collaboration network can be obtained by maximizing the level of trustworthiness. The quantitative trustworthiness metrics are based on the recently proposed ability-benevolence-integrity (A-B-I) model [13], where trustworthiness is quantified by the cyber–social metrics of ability, benevolence, and integrity. Ability shows how well a trustee party is capable of doing what it claims to perform. Benevolence indicates whether the trustee’s motivation is purely for its own benefit. Integrity measures whether the trustee does what it claims to. Based on a mesoscale probabilistic graph model [4,5] of CPSS, the perceptions of ability, benevolence, and integrity can be quantified with the probabilities of good judgements for the nodes as well as the information dependencies among nodes.

In this paper, we further demonstrate how to apply the quantitative trustworthy metrics as the design criteria in network architecture design and optimization. The metrics of ability and benevolence are used as the utilities to identify an optimal subset of nodes in the network that a node can trust and collaborate with. A new discrete Bayesian optimization (DBO) method is proposed to solve the combinatorial network optimization problem. Bayesian optimization is a surrogate-based global optimization scheme that incorporates uncertainty in the searching process. The proposed discrete optimization method employs Gaussian process surrogates with a new discrete kernel function in searching the best combinations of nodes. The new discrete kernel is developed to better measure the similarity between networks with respect to the objective function.

Different from other global optimization approaches such as the commonly used genetic algorithms, simulated annealing, and other “memoryless” heuristic algorithms, Bayesian optimization keeps the search history. In addition, an acquisition function is constructed and used to guide the searching or sequential sampling process. It is designed to strike a balance between exploration and exploitation. During sequential sampling, the surrogate of the objective function is continuously updated based on the Bayesian belief update when new samples are available. Therefore, the searching process in Bayesian optimization can be accelerated with the properly designed surrogate model and acquisition function. This provides unique advantages in discrete optimization over traditional heuristic algorithms, especially for complex combinatorial problems where exhaustive search in the discrete solution space is computationally prohibitive.

In the remainder of this paper, the existing work of system-level design of CPSS, discrete Bayesian optimization, and trust quantification approaches are reviewed in Sec. 2, where the probabilistic graph model of CPSS is also introduced. In Sec. 3, the metrics of ability and benevolence in the A-B-I trust model are introduced. The discrete Bayesian optimization method is described in Sec. 4. The application of Bayesian optimization to the CPSS network architecture design is demonstrated with the ability and benevolence metrics.

2 Background

Here, an overview of CPSS system-level design is given. The existing research on discrete Bayesian optimization and trust quantification is reviewed. The probabilistic graph model of CPSS, upon which the A-B-I model is based, is also introduced.

2.1 Systems-Level Design of CPSS.

Compared with traditional products, the design of CPSS requires engineers to have better understanding of the systems-level behaviors [6], from conceptual design to design optimization of multidisciplinary and hierarchical architecture [7]. Given the evolutionary nature of cyber and physical technologies, adaptability that enables self-learning, self-organization, and context awareness is important [8]. As the complexity of the CPSS networks grows, the emphasis of large networks should be more on resilience (the ability to recover) than reliability (the ability to stay functioning) [4,5].

Some systems modeling methods and tools have been applied for CPSS design and analysis, such as hybrid discrete-event and continuous simulations [9–11], inductive constraint logic programming [12], abductive reasoning [13], hybrid timed automaton [14], ontologies [15], information schema [16], UML [17], SysML [18], and information dynamics modeling [19]. The high-dimensional design space of CPSS includes not only the cyber and physical subspaces but also the social subspace. The modalities for human–system interaction [20], context awareness and personalized human–system communication [21], as well as trusted collaboration [13] have been studied.

To support systems design, developing optimization methods for the large-scale network at the metasystem level is necessary. Network optimization usually involves combinatorial problems. Here, we propose to use Bayesian optimization to solve these problems.

2.2 Bayesian Optimization for Discrete Problems.

Bayesian optimization is a class of surrogate-based methods that search for the global optimum under uncertainty with Bayesian sequential sampling strategies. The search or sampling process is based on an acquisition function that is defined in the same input space as the objective function. In parallel, a surrogate model of the objective is constructed and updated during the search. The most commonly used surrogate is the Gaussian process regression (GPR) model, which is updated based on the Bayesian principle. The surrogate keeps the search history since it is constructed from the samples. At the same time, it helps decide the next sample in the sequential sampling. Therefore, if the surrogate model is designed properly, surrogate-based optimization methods can be more efficient than other “memoryless” search methods. Bayesian optimization has been widely used in the continuous domain and only recently gained attention in discrete domains. Here, the review is focused on its use to solve discrete problems.
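The sequential-sampling loop described above can be illustrated with a minimal, self-contained sketch. This is not the implementation used in this paper: the squared-exponential kernel with unit length scale, the noise jitter, the UCB acquisition, and the one-dimensional candidate grid are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, length=1.0):
    # Squared-exponential kernel between two sets of 1-D points (assumed form).
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gpr_posterior(X, y, Xs, noise=1e-6):
    # Standard GPR posterior mean and standard deviation at the test points Xs.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    alpha = np.linalg.solve(K, y)
    mu = Ks.T @ alpha
    v = np.linalg.solve(K, Ks)
    var = np.clip(1.0 - np.sum(Ks * v, axis=0), 1e-12, None)  # k(x, x) = 1 for RBF
    return mu, np.sqrt(var)

def bayes_opt(f, grid, n_init=3, n_iter=10, kappa=1.5, seed=0):
    # Surrogate-based search: the sample history (X, y) is kept and reused.
    rng = np.random.default_rng(seed)
    X = list(rng.choice(grid, size=n_init, replace=False))
    y = [f(x) for x in X]
    for _ in range(n_iter):
        mu, sd = gpr_posterior(np.array(X), np.array(y), grid)
        ucb = mu + kappa * sd                 # exploitation + exploration
        x_next = grid[int(np.argmax(ucb))]    # next sample from the acquisition
        X.append(x_next)
        y.append(f(x_next))                   # belief update on the next surrogate fit
    i = int(np.argmax(y))
    return X[i], y[i]
```

With a toy objective such as f(x) = −(x − 2)², the loop concentrates its samples near the maximizer after a handful of iterations.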

For mixed-integer problems, Tran et al. [22] proposed a Gaussian mixture approach to combine a discrete number of design subspaces for continuous variables. Each subspace contains a GPR surrogate model, and the global one is the Gaussian mixture model. Iyer et al. [23] mapped the discrete variables to a continuous latent space so that the mixed-integer problem is converted to a continuous problem.

For discrete problems, the straightforward extension is simply treating discrete variables as continuous ones and rounding the variable values to the closest integers during the search. Baptista and Poloczek [24] proposed a quadratic acquisition function for combinatorial problems and converted the binary variables to high-dimensional vectors during the search. The solutions are then projected back to the binary space. However, this approach may fail to identify the true optimum and become trapped in a local region because of the mismatch between the true discontinuous objective function and the assumed continuous acquisition function. Zaefferer et al. [25] replaced the continuous distance with discrete distance measures and compared the performance using the expected improvement (EI) acquisition function. Garrido-Merchán and Hernández-Lobato [26] developed an input variable transformation to ensure that the distance between any two discrete variables remains unchanged in evaluating kernels when the variables are perturbed into the continuous space. Zhang et al. [27] proposed a new kernel function for permutation problems based on the Hamming distance and prior knowledge about similarity in the problems. A sparse Gaussian process model was used to reduce the computational cost of the kernel update. Oh et al. [28] represented the discrete solutions of combinatorial problems as combinatorial graphs, and the adjacency information is embedded in the kernel function.

The major research question for discrete Bayesian optimization is how to design discrete kernels so that the differences between samples in the discrete space, which are problem-specific, can be quantitatively reflected in the distance measure. A thorough understanding of this question is still lacking.

2.3 Trust Quantification for Cyber–Physical Systems.

Conceptually, trust is the willingness to be vulnerable to another party. It is a different concept from security. Security is critical for trust, but security alone cannot guarantee trustworthiness. For instance, although security protocols can ensure data are not intercepted during transmission, they provide no guarantee against misuse by the receiving party or fraud by the transmitting party. In recent studies in cyberspace, trust was quantified with reputation, ratings, and user recommendations in information systems and social networks [22,29]. It was also measured by QoS, routing and delivery success rates, and consistency of data forwarding in computer networks and sensor networks [30,31]. Approaches based on probability [32–34], imprecise probability [35,36], and fuzzy logic [37–39] have been developed to quantify the human perception of trust. It should be noted that trust in the social space and its dynamics also need to be taken into consideration [40,41].

To quantify the trustworthiness of CPS, Chen et al. [42] developed a fuzzy model of trust based on the reputation of communication efficiency. Huang et al. [43] represented trust as probabilistic measures of the trustor’s belief and the trustee’s performance. Al-Hamadi and Chen [44] calculated trust from user ratings aggregated over different time periods and different locations. Yu et al. [45] quantified trustworthiness as a weighted average of reliability, availability, and security. Xu et al. [46] used the weighted average of direct user experiences and others’ recommendations to evaluate the trust of edge computing devices. Tang et al. [47] measured sensor data trustworthiness in sensor networks based on sensor-object distances, whereas Tao et al. [48] used the consistency with reference datasets. Xu et al. [46] quantified the trustworthiness of CPS nodes by a combination of QoS and reputation, whereas Junejo et al. [49] used QoS measurements and Xia et al. [50] used reputation.

Different from the above, Wang [13] developed a quantitative A-B-I model with multi-faceted metrics of ability, benevolence, and integrity. The considerations of these three factors are broader than those in the above approaches. These factors have been qualitatively investigated in the studies of social organizations. As comprehensively studied by Mayer et al. [51], the common concepts and keywords used to describe trust in human society can be grouped into these three categories. For instance, the ability category includes expertise, competence, and similar qualities. The benevolence category includes loyalty, openness, receptivity, and availability. Integrity is associated with consistency, discreetness, fairness, promise fulfillment, and reliability. The three trust factors have also been adopted in designing trustworthy information systems such as e-commerce [52,53], e-banking [54], and mobile health [55]. In the quantitative A-B-I model [13] for CPS networks, metrics of ability, benevolence, and integrity are developed based on measurable quantities. Ability characterizes a node’s capabilities of sensing and reasoning and its influence on other nodes. Benevolence characterizes the motivation of a node for its information sharing. Integrity is related to traditional cyber and physical security and can be quantified from QoS. These A-B-I metrics can be quantitatively measured, calculated, and compared. For instance, Wang et al. [56] applied the quantitative A-B-I model to evaluate the trustworthiness of IoT nodes from their data collection and communication behaviors.

In order to build large-scale networks, trustworthiness should be treated as a transferable quantity so that it can be propagated in scalable systems. With quantitative measures of trustworthiness, the risk of deploying CPS can be quantified and assessed more thoroughly in highly complex networks where a global view of the networks is difficult to obtain. Trust quantification in this work is based on a probabilistic graph model of CPSS, as introduced in Sec. 2.4.

2.4 Probabilistic Graph Model of CPSS.

The probabilistic graph model [2,5] is an abstraction of CPSS networks at the mesoscale. It captures the sensing, computing, and communication capabilities of CPSS by the prediction probabilities for all nodes in a CPSS network and the pair-wise reliance probabilities between nodes as the extent of information dependency and mutual influences. The model is illustrated in Fig. 1. The prediction and reliance probabilities of nodes are defined as follows.

Fig. 1
Probabilistic graph model of CPSS networks
A probabilistic graph $G=(V,E,P,R)$ consists of a set of vertices $V=\{v_k\}$ and a set of directed edges $E=\{(v_i,v_j)\}$. Each node $v_k$ is associated with a prediction probability $p_k \in P$, and each directed edge $(v_i, v_j)$ is associated with a reliance probability $p_{ij} \in R$. The prediction probability that the $k$th node detects the true state of the world $\theta$ is
$$P(x_k=\theta)=p_k$$
(1)
where $x_k$ is the state variable. Without loss of generality, only binary-valued state variables ($=\theta$ or $\neq\theta$) are considered here. The extension to state variables with multiple discrete values is straightforward, and continuous variables are usually discretized in a digital computing environment.
With binary-valued state variables, we can define the P-reliance probability
$$P(x_j=\theta \mid x_i=\theta)=p_{ij}$$
(2)
as the probability that the $j$th node predicts the true state of the world given that the $i$th node predicts correctly. We also define the Q-reliance probability
$$P(x_j=\theta \mid x_i\neq\theta)=q_{ij}$$
(3)
as the probability that the $j$th node predicts the true state of the world given that the $i$th node does not.

The state variables contain the results from sensing. The values can be updated through computing or reasoning. Therefore, the prediction probabilities capture the sensing and computing functionalities, whereas the reliance probabilities indicate the functionality of communication. The binary-valued random state variables can be extended to multiple discrete values or to continuous ones. For instance, one sensor measures a value that follows some distribution, as in the prediction probability. If there is a finite set of possible values $\{\theta_1, \ldots, \theta_T\}$ for the state variables, the prediction probability $P(x_k = \theta_n)$ and the reliance probability $P(x_j = \theta_n \mid x_i = \theta_m)$, where $1 \le m, n \le T$, can be enumerated similarly.
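As a concrete illustration, a probabilistic graph can be encoded with a plain dictionary; the node names, probability values, and accessor functions below are hypothetical, not part of the model definition.

```python
# Minimal encoding of a probabilistic graph G = (V, E, P, R):
# a prediction probability p_k per node, and P-/Q-reliance
# probabilities (p_ij, q_ij) per directed edge (illustrative values).
graph = {
    "p": {"v1": 0.9, "v2": 0.8, "v3": 0.7},   # P(x_k = theta)
    "edges": {                                 # (source, dest): (p_ij, q_ij)
        ("v1", "v2"): (0.95, 0.40),
        ("v2", "v3"): (0.85, 0.30),
    },
}

def p_reliance(g, i, j):
    # P(x_j = theta | x_i = theta) on the directed edge (i, j), as in Eq. (2).
    return g["edges"][(i, j)][0]

def q_reliance(g, i, j):
    # P(x_j = theta | x_i != theta) on the directed edge (i, j), as in Eq. (3).
    return g["edges"][(i, j)][1]
```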

The edges in the probabilistic graph are directional. The neighbors of each node can be further differentiated as source nodes or destination nodes, as illustrated in Fig. 2. For one node, its source nodes are those sending information to this node, whereas the destination nodes are those receiving information from it. When receiving different cues from source nodes, a CPSS node can update its prediction probability to reflect its perception of the world. The aggregation of prediction probabilities sensitively depends on the rules of information fusion during the prediction update.

Fig. 2
Source and destination nodes with respect to node j are differentiated
If $P(x_k)$ and $P(x_k^C)$ denote the probabilities of a positive and a negative prediction from node $k$, respectively, a best-case fusion rule can be defined as
$$P'(x_k) = 1 - (1 - P(x_k)) \prod_{i=1}^{M_P} P(x_i)\left(1 - P(x_k \mid x_i)\right) \prod_{j=1}^{M_N} P(x_j^C)\left(1 - P(x_k \mid x_j^C)\right)$$
(4)
where node $k$ updates its prediction to $P'(x_k)$ based on its own current prediction and the cues from its $M_P + M_N$ source nodes, out of which $M_P$ source nodes provide positive predictions and $M_N$ provide negative predictions, $P(x_k \mid x_i)$ is the probability that a positive message from node $i$ leads to a positive prediction of node $k$, and $P(x_k \mid x_j^C)$ is the probability that a negative message from node $j$ leads to a positive prediction of node $k$. Therefore, if any of the cues from the source nodes is positive, the prediction of the node is positive. Some variations of this fusion rule exist. For instance, the previous prediction of the node itself can be either included or excluded during the update.
Similarly, a worst-case fusion rule can be defined as
$$P'(x_k) = P(x_k) \prod_{i=1}^{M_P} P(x_i)\,P(x_k \mid x_i) \prod_{j=1}^{M_N} P(x_j^C)\,P(x_k \mid x_j^C)$$
(5)
That is, if any of the cues from the source nodes is negative, the prediction of the node is negative. The Bayesian fusion rule is defined as
$$P'(x_k) = P(x_k)\,\frac{\max_{P}\left\{(P(x_k))^r (1-P(x_k))^{S-r}\right\}}{\int (P(x_k))^r (1-P(x_k))^{S-r}\,dP}$$
(6)
where the prediction of the node is updated to $P'$ from the prior prediction $P$ and, out of the $S$ cues that the neighboring nodes provide, $r$ are positive, if the maximum likelihood principle is taken.
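The best- and worst-case fusion rules of Eqs. (4) and (5) can be transcribed directly as a minimal sketch; the list-of-tuples interface for the cues is an illustrative assumption.

```python
def fuse_best_case(p_self, pos_cues, neg_cues):
    # Best-case fusion, Eq. (4): any positive cue yields a positive prediction.
    # pos_cues: (P(x_i), P(x_k|x_i)) pairs; neg_cues: (P(x_j^C), P(x_k|x_j^C)) pairs.
    prod = 1.0 - p_self
    for p_i, p_k_given_i in pos_cues:
        prod *= p_i * (1.0 - p_k_given_i)
    for p_jc, p_k_given_jc in neg_cues:
        prod *= p_jc * (1.0 - p_k_given_jc)
    return 1.0 - prod

def fuse_worst_case(p_self, pos_cues, neg_cues):
    # Worst-case fusion, Eq. (5): any negative cue yields a negative prediction.
    prod = p_self
    for p_i, p_k_given_i in pos_cues:
        prod *= p_i * p_k_given_i
    for p_jc, p_k_given_jc in neg_cues:
        prod *= p_jc * p_k_given_jc
    return prod
```

With no cues, both rules leave the node's own prediction unchanged, which is the degenerate case of a node with no source neighbors.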

The probabilistic graph model provides a mesoscale description of CPSS networks, where information exchange and aggregation are captured. Prediction and reliance probabilities can be easily obtained in a physical system from the collected historical data. The prediction probability of a node can be based on the data collected by its sensing and reasoning units. The probability can be estimated from the frequencies of observing correct state variable values under uncertainty or sharing correct observations. Similarly, the reliance probability associated with an edge can be estimated from the frequencies of positive or negative predictions by the destination node given the source node’s own prediction. For instance, in a sensor network or industrial ethernet, if the prediction probability of a sensor is used to quantify its sensitivity, the probability can be estimated as the ratio of the number of observations per time unit sent by this node to a baseline reference number that the best performer in the local network sends. The known best performer sets an upper limit. The reliance probability for each edge of the sensor network can be estimated as the ratio of the number of packets received by the destination to the number sent by the source, or the ratio of correct observations, as a measure of communication reliability [5].

If no experimental data are available to quantify the probabilities, subjective estimations from domain experts can be elicited. Probability elicitation is well known in both practice and literature. Standard procedures are usually taken to elicit probabilities associated with some events from domain experts as subjective estimates.

3 The Ability-Benevolence-Integrity Trust Model

Based on the probabilistic graph model, the trust metrics of ability and benevolence in the A-B-I model [13] can be calculated. The quantitative metrics in the A-B-I model are summarized in Fig. 3. The trust level is quantified by three orthogonal metrics of ability, benevolence, and integrity. The ability of a CPSS node is measured by its capability of performing correct predictions and its capability of information processing for decision making, from the perspectives of sensing and computation, as well as its influence on other nodes. The benevolence is measured by reciprocity, the willingness to share information reciprocally, and motive, the motivation of sharing, from the perspective of communication. The integrity of a CPSS node is closely related to cybersecurity and can be evaluated with consistency, frequency of compromises, QoS, and other security measurements.

Fig. 3
Metrics in the A-B-I trust model

Here, only the metrics of ability and benevolence are summarized. They will be used as the utilities to demonstrate the network optimization. Since integrity has been studied extensively in cybersecurity, ability and benevolence can show the uniqueness of our proposed trust measurements. The complete description of the A-B-I trust model as well as the illustrations of the metrics and their use for detecting malicious attacks can be found in Ref. [2].

3.1 Ability.

The ability of a CPSS node is evaluated by its capabilities of prediction and information processing as well as its influence on other nodes. The capability of prediction of a node is measured by its functionality of data collection. The capability of information processing is measured by its functionality of reasoning based on data obtained from its neighbors. The influence on others is quantified by how influential the information it shares is in their decision making. These quantities can be quantified by the prediction probability and the reliance probabilities perceived by others, as well as the precisions of those perceptions.

The perceived ability of node $j$ with the consideration of its prediction capability is $A_j(\theta) = \mathbb{P}(P(x_j = \theta))$, where $\mathbb{P}(\cdot)$ denotes perception. Suppose that all perceptions follow Gaussian distributions. The prediction capability can be quantified by its mean
$$E(A_j(\theta))=p_j$$
(7)
and its variance
$$V(A_j(\theta))=\tau_j^{-1}$$
(8)
That is, if a node has a higher prediction capability with less variability than others, it is more trustworthy.

Based on the directions of information sharing between nodes, the neighboring nodes of each node in the network are categorized as source nodes and destination nodes, as illustrated in Fig. 2. With respect to node $j$, the set of source nodes that share information with node $j$ is denoted as $S_j = \{v_i \mid (v_i, v_j) \in E\}$, and the set of destination nodes that receive information from node $j$ is denoted as $D_j = \{v_k \mid (v_j, v_k) \in E\}$.

The perceptions of the P- and Q-reliance probabilities for nodes $i$ and $j$ are related to the information processing capability of node $j$. A high P-reliance probability indicates that node $j$ can absorb knowledge quickly. A high Q-reliance probability shows that node $j$ can make good judgements even in a noisy and uncertain situation. We simplify the notations as $L_{ij} = \mathbb{P}(P(x_j = \theta \mid x_i = \theta))$ and $L_{ij}^c = \mathbb{P}(P(x_j = \theta \mid x_i \neq \theta))$, respectively. They are assumed to follow Gaussian distributions with means $E(L_{ij} \mid A_j) = p_{ij}$ and $E(L_{ij}^c \mid A_j) = q_{ij}$, and variances $V(L_{ij} \mid A_j) = \tau_{ij,p}^{-1}$ and $V(L_{ij}^c \mid A_j) = \tau_{ij,q}^{-1}$, respectively.

The perceived ability of node $j$ with the considerations of both capabilities of prediction and information processing is then quantified with mean
$$E(A_j(\theta \mid L^{(+j)})) = \frac{\tau_j p_j + \sum_{i \in S_j} \tau_{ij,p}\, p_{ij} + \sum_{i \in S_j} \tau_{ij,q}\, q_{ij}}{\tau_j + \sum_{i \in S_j} \tau_{ij,p} + \sum_{i \in S_j} \tau_{ij,q}}$$
(9)
and variance
$$V(A_j(\theta \mid L^{(+j)})) = \left(\tau_j + \sum_{i \in S_j} \tau_{ij,p} + \sum_{i \in S_j} \tau_{ij,q}\right)^{-1}$$
(10)
based on Bayes’ rule of belief update. Bayesian belief update is an intuitive way to combine multiple factors. The simple forms of the posterior mean in Eq. (9) and the posterior variance in Eq. (10) are due to the Gaussian distributions of the prior and the likelihood.
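The precision-weighted update of Eqs. (9) and (10) reduces to a few lines of arithmetic; the list-of-tuples interface for the source-node terms is an illustrative assumption.

```python
def ability_posterior(tau_j, p_j, sources):
    # Posterior mean and variance of perceived ability, Eqs. (9) and (10).
    # sources: one (tau_ij_p, p_ij, tau_ij_q, q_ij) tuple per source node i.
    num = tau_j * p_j
    den = tau_j
    for tau_p, p_ij, tau_q, q_ij in sources:
        num += tau_p * p_ij + tau_q * q_ij   # precision-weighted evidence
        den += tau_p + tau_q                 # total precision
    return num / den, 1.0 / den
```

With no source nodes, the posterior collapses to the prior (p_j, 1/τ_j), as expected for a node that receives no cues.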
Leadership should be regarded as part of one’s ability. Here, it is estimated as a node’s influence on others through information sharing. The perceived ability of node $j$ with the considerations of its prediction capability and influence is quantified with mean
$$E(A_j(\theta \mid L^{(-j)})) = \frac{\tau_j p_j + \sum_{k \in D_j} \tau_{jk,p}\, p_{jk} + \sum_{k \in D_j} \tau_{jk,q}\,(1 - q_{jk})}{\tau_j + \sum_{k \in D_j} \tau_{jk,p} + \sum_{k \in D_j} \tau_{jk,q}}$$
(11)
and variance
$$V(A_j(\theta \mid L^{(-j)})) = \left(\tau_j + \sum_{k \in D_j} \tau_{jk,p} + \sum_{k \in D_j} \tau_{jk,q}\right)^{-1}$$
(12)
where Bayes’ rule is similarly applied.
The overall ability perception, with simultaneous considerations of the capabilities of prediction and information processing as well as influence, is calculated as
$$E(A_j(\theta \mid L^{(+j)}, L^{(-j)})) = \frac{\tau_j p_j + \sum_{i \in S_j} \tau_{ij,p}\, p_{ij} + \sum_{i \in S_j} \tau_{ij,q}\, q_{ij} + \sum_{k \in D_j} \tau_{jk,p}\, p_{jk} + \sum_{k \in D_j} \tau_{jk,q}\,(1 - q_{jk})}{\tau_j + \sum_{i \in S_j} \tau_{ij,p} + \sum_{i \in S_j} \tau_{ij,q} + \sum_{k \in D_j} \tau_{jk,p} + \sum_{k \in D_j} \tau_{jk,q}}$$
(13)
$$V(A_j(\theta \mid L^{(+j)}, L^{(-j)})) = \left(\tau_j + \sum_{i \in S_j} \tau_{ij,p} + \sum_{i \in S_j} \tau_{ij,q} + \sum_{k \in D_j} \tau_{jk,p} + \sum_{k \in D_j} \tau_{jk,q}\right)^{-1}$$
(14)

Therefore, a node that gives accurate predictions, makes sound decisions, and brings positive influences to others is deemed to be trustworthy.

The perception of a node’s ability can also be dictated by the abilities of those closely associated with it. That is, if a neighbor or associate who is influenced by a node has high ability, the perceived ability of this node is also increased. Therefore, higher-order perceptions of ability can be defined. If the ability in Eqs. (13) and (14) is first-order, with mean $E(A_j(\theta \mid L^{(+j)}, L^{(-j)})) = E_j$ and variance $V(A_j(\theta \mid L^{(+j)}, L^{(-j)})) = V_j$, the second-order ability is defined as
$$E^{(2)}(A_j(\theta \mid L^{(+j)}, L^{(-j)})) = \frac{V_j^{-1} E_j + \sum_{k \in D_j} \tau_{jk,p}\, p_{jk}\,(V_k^{-1} E_k) + \sum_{k \in D_j} \tau_{jk,q}\,(1 - q_{jk})\,(V_k^{-1} E_k)}{\tau_j + \sum_{k \in D_j} \tau_{jk,p}\, p_{jk}\, V_k^{-1} + \sum_{k \in D_j} \tau_{jk,q}\,(1 - q_{jk})\, V_k^{-1}}$$
(15)
$$V^{(2)}(A_j(\theta \mid L^{(+j)}, L^{(-j)})) = \left(\tau_j + \sum_{k \in D_j} \tau_{jk,p}\, p_{jk}\, V_k^{-1} + \sum_{k \in D_j} \tau_{jk,q}\,(1 - q_{jk})\, V_k^{-1}\right)^{-1}$$
(16)

Higher-order perceptions of ability can be similarly defined.
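The second-order perception of Eqs. (15) and (16) can be sketched in the same style as the first-order update; the tuple interface per destination node is an illustrative assumption.

```python
def second_order_ability(tau_j, E_j, V_j, destinations):
    # Second-order perceived ability, Eqs. (15) and (16).
    # destinations: one (tau_jk_p, p_jk, tau_jk_q, q_jk, E_k, V_k) tuple
    # per destination node k, where (E_k, V_k) is its first-order ability.
    num = E_j / V_j
    den = tau_j
    for tau_p, p_jk, tau_q, q_jk, E_k, V_k in destinations:
        w = tau_p * p_jk + tau_q * (1.0 - q_jk)  # influence-weighted precision
        num += w * E_k / V_k
        den += w / V_k
    return num / den, 1.0 / den
```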

3.2 Benevolence.

The benevolence of a CPSS node is evaluated by reciprocity and motive. The perception of reciprocity is measured by the willingness to share information with others while receiving information simultaneously. The motive is quantified by the quality of the information shared with others and the frequency of sharing.

The expected reciprocity of node $j$ perceived by node $i$ is defined as
$$E(R_{i,j}) = D_{KL}(p_{i \to j} \,\|\, p_{j \to i}) - D_{KL}(p_{j \to i} \,\|\, p_{i \to j}) + b_0$$
(17)
where $p_{j \to i} = \prod_{k=j}^{i-1} p_{k,k+1}$ is the product of all P-reliance probabilities $p_{k,k+1}$ along the shortest path from node $j$ to node $i$, $D_{KL}(P \,\|\, Q) = \sum_i P_i \log(P_i / Q_i)$ is the Kullback–Leibler divergence from probability $Q$ to $P$, and $b_0$ is a reference value such that $E(R_{i,j}) > b_0$ when node $j$ has a larger reciprocity with respect to node $i$. Intuitively, if node $j$ is willing to share accurate information with node $i$ without necessarily expecting node $i$ to share information in return, node $j$ has a high reciprocity with respect to node $i$. In other words, node $i$ can trust node $j$. Here, $b_0 = 0.5$ such that reciprocity has a value between 0 and 1. A higher value of reciprocity indicates higher trustworthiness. Furthermore, $E(R_{i,i}) = b_0$. The variance associated with the perceived reciprocity is conservatively estimated as
$$V(R_{i,j}) = \min\left(\sum_{j \to i} \tau_{ab}^{-1} + \sum_{i \to j} \tau_{cd}^{-1},\; V_{\max}\right)$$
(18)
where $\tau_{ab}$ and $\tau_{cd}$ are the precisions associated with the P-reliance probabilities along the paths $j \to i$ and $i \to j$, respectively, and $V_{\max} = 1.0$ is the theoretical maximum value of the variance associated with probabilities. $V(R_{i,i}) = 0$.
Motive measures the intention of information sharing within a community. Sharing high-quality information with neighbors indicates the good purpose of improving the overall functionality of the community. Thus, the perceived motive of node $j$ is defined as
$$E(M_j) = p_j d_j$$
(19)
$$V(M_j) = \tau_j^{-1}$$
(20)
where $p_j$ is the prediction probability associated with node $j$ with precision $\tau_j$, and $d_j = |D_j|$ is the number of destination nodes of node $j$.
The overall benevolence of node j perceived by node i is
E(Bi,j)=V1(Ri,j)E(Ri,j)+V1(Mj)E(Mj)V1(Ri,j)+V1(Mj)
(21)
V(Bi,j)=(V1(Ri,j)+V1(Mj))1
(22)
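The fusion in Eqs. (21) and (22) is an inverse-variance (precision) weighted average; a minimal sketch, with hypothetical input values:

```python
def benevolence(e_r, v_r, e_m, v_m):
    """Eqs. (21)-(22): combine the reciprocity (R) and motive (M)
    estimates by weighting each with its precision (inverse variance),
    so the more certain estimate dominates the perceived benevolence."""
    w_r, w_m = 1.0 / v_r, 1.0 / v_m
    e_b = (w_r * e_r + w_m * e_m) / (w_r + w_m)   # Eq. (21)
    v_b = 1.0 / (w_r + w_m)                       # Eq. (22)
    return e_b, v_b

# Hypothetical estimates: motive is known more precisely than reciprocity,
# so the combined mean is pulled toward the motive value
e_b, v_b = benevolence(e_r=0.6, v_r=0.1, e_m=0.8, v_m=0.05)
```

Note that the combined variance in Eq. (22) is always smaller than either input variance, reflecting the information gained by fusing the two metrics.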

4 Discrete Bayesian Optimization

The trust-based network optimization is to identify a subset of nodes in the network that is the most trustworthy with respect to a reference node. The optimization problem involves choosing the best subset of nodes and is, therefore, combinatorially complex. The traditional approach to solving such problems is to use heuristic algorithms such as genetic algorithms and simulated annealing.

Here, a new discrete Bayesian optimization (dBO) method is developed to perform the CPSS network optimization. The design problem is to choose the optimum subgraph of a graph with respect to a reference node such that the trustworthiness level perceived by the reference node is maximized.

The sampling strategy for choosing the next sample is to maximize an acquisition function rather than the objective surrogate directly. One example of an acquisition function is the expected improvement (EI)
$$a_{EI}(\mathbf{x};\{\mathbf{x}_i,y_i\}_{i=1}^{D},\theta)=\sigma(\mathbf{x};\{\mathbf{x}_i,y_i\}_{i=1}^{D},\theta)\left(\gamma(\mathbf{x})\Phi(\gamma(\mathbf{x}))+\phi(\gamma(\mathbf{x}))\right)$$
(23)
where $\phi(\cdot)$ and $\Phi(\cdot)$ are the probability density function and cumulative distribution function of the standard normal distribution, and $\gamma(\mathbf{x})=\left(\mu(\mathbf{x};\{\mathbf{x}_i,y_i\}_{i=1}^{D},\theta)-y_{best}\right)/\sigma(\mathbf{x};\{\mathbf{x}_i,y_i\}_{i=1}^{D},\theta)$ is the deviation from the best solution $y_{best}$ found so far, with posterior mean $\mu(\mathbf{x};\{\mathbf{x}_i,y_i\}_{i=1}^{D},\theta)$ and posterior standard deviation $\sigma(\mathbf{x};\{\mathbf{x}_i,y_i\}_{i=1}^{D},\theta)$, given the existing D samples $\{\mathbf{x}_i,y_i\}_{i=1}^{D}$ and GPR hyper-parameter $\theta$.
Another example of an acquisition function is the upper confidence bound (UCB)
$$a_{UCB}(\mathbf{x};\{\mathbf{x}_i,y_i\}_{i=1}^{D},\theta)=\mu(\mathbf{x};\{\mathbf{x}_i,y_i\}_{i=1}^{D},\theta)+\kappa\,\sigma(\mathbf{x};\{\mathbf{x}_i,y_i\}_{i=1}^{D},\theta)$$
(24)
where $\kappa$ is a hyper-parameter for the exploitation–exploration balance. To simplify the optimization process, $\kappa=1.5$ is chosen as a constant in this work.
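Both acquisition functions reduce to a few lines once the posterior mean and standard deviation at a candidate are available; the following sketch uses only the Python standard library, with the function names chosen for illustration:

```python
import math

def norm_pdf(x):
    # Standard normal probability density function, phi(x)
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    # Standard normal cumulative distribution function, Phi(x)
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def expected_improvement(mu, sigma, y_best):
    """EI acquisition of Eq. (23) for maximization, from the GPR
    posterior mean mu and standard deviation sigma at a candidate."""
    if sigma <= 0.0:
        return 0.0
    gamma = (mu - y_best) / sigma
    return sigma * (gamma * norm_cdf(gamma) + norm_pdf(gamma))

def upper_confidence_bound(mu, sigma, kappa=1.5):
    """UCB acquisition of Eq. (24); kappa = 1.5 is fixed in this work."""
    return mu + kappa * sigma
```

EI is always nonnegative and grows with the posterior uncertainty, whereas UCB with a fixed $\kappa$ simply shifts the posterior mean upward by a multiple of the standard deviation.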
In the proposed dBO method for network design, the GPR surrogate of the objective function $f(\mathbf{z})\sim GP(m(\mathbf{z}),k(\mathbf{z},\mathbf{z}'))$ has mean function $m(\mathbf{z})$ and covariance kernel function $k(\mathbf{z},\mathbf{z}')$, where $\mathbf{z}=[z_1,\ldots,z_N]$ is an index vector of N binary values ($z_i\in\{0,1\}, i=1,\ldots,N$) for a graph with N nodes. A “1” indicates that the corresponding node is included in the subgraph as the solution, and a “0” indicates that it is not. The major construct of the GPR model is the kernel function, defined as
$$k(\mathbf{z},\mathbf{z}')=\exp\left(-\sum_{i=1}^{N}d(z_i,z_i')/\theta_i\right)$$
(25)
where $d(\cdot,\cdot)$ is a distance function defined in the discrete space, such as the Hamming distance, and the $\theta_i$’s are the scale hyper-parameters. The advantage of associating one independent scale parameter with each node comparison is that the different importance levels of nodes for trust quantification can be captured. In other words, not every node in a network is equally trustworthy with respect to a reference node. The scale parameters after training provide the weights of importance. The disadvantage of the kernel function in Eq. (25) is that the number of hyper-parameters grows quickly for large networks, which requires large training datasets; otherwise the prediction will not be accurate. One easy way to mitigate this risk and reduce the computational load is to assume that all hyper-parameters have the same value, as
$$k(\mathbf{z},\mathbf{z}')=\exp\left(-\sum_{i=1}^{N}d(z_i,z_i')/\theta\right)$$
(26)
That is, there is only one hyper-parameter $\theta$. This greatly simplifies the training process, albeit at the expense of model granularity.
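A minimal sketch of the two kernels, using the Hamming (0/1 mismatch) distance per node; the index vectors and scale values are hypothetical:

```python
import math

def discrete_kernel(z1, z2, thetas):
    """Kernel of Eq. (25): exp(-sum_i d(z_i, z'_i)/theta_i) with the
    Hamming (mismatch) distance per node; passing equal thetas recovers
    the single-parameter kernel of Eq. (26)."""
    return math.exp(-sum((a != b) / t for a, b, t in zip(z1, z2, thetas)))

z_a = [1, 0, 1, 1, 0]   # node subset encoded as a binary index vector
z_b = [1, 1, 1, 0, 0]
k_multi = discrete_kernel(z_a, z_b, [2.0, 0.5, 1.0, 1.5, 1.0])  # Eq. (25)
k_single = discrete_kernel(z_a, z_b, [1.0] * 5)                 # Eq. (26)
```

The kernel equals 1 for identical subgraphs and decays toward 0 as more node memberships differ, with the per-node scales controlling how strongly each mismatch is penalized.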

5 Trust-Based Strategic Network Design

A strategic network for a node is the most trustworthy network with which the node can form a strategic collaboration relation. The design of such a strategic network is to identify a subset of nodes within the complete network so that the node has the highest trustworthiness level. The trustworthiness metrics of ability and benevolence are used here to demonstrate the trust-based strategic network design. Network optimization based on other metrics, such as integrity, can be done similarly.

5.1 Ability as the Optimization Criterion.

Ability in Eq. (13) is first utilized as the metric to identify the most trustworthy network for a reference node. The strategic network of the reference node can be obtained by finding the network where the ability of the reference node is maximized. Three networks with 20, 40, and 60 nodes, shown in Fig. 4, are generated with random connections for the tests. The prediction and reliance probabilities are also randomly generated. Note that random networks are generated because they test the robustness and scalability of the design optimization method better than deterministic ones would.

Fig. 4
Three example networks for optimization tests, with (a) 20 nodes and 192 edges, (b) 40 nodes and 787 edges, and (c) 60 nodes and 1731 edges

The EI acquisition in Eq. (23) and the UCB acquisition in Eq. (24), along with the two kernel functions in Eqs. (25) and (26), are tested for the 20-node-192-edge example. The Hamming distance is used in the kernels. When searching for the optimum network to maximize the ability of node 0, they have different convergence rates, as compared in Fig. 5(a). The optimum solution, as shown in Fig. 5(b), is found with the EI acquisition in combination with the multi-parameter kernel. During the search, a simulated annealing algorithm is applied to maximize the acquisition to decide the next sample. It is seen that the search can be trapped at a local optimum when the single-parameter kernel function in Eq. (26) is used. The single-parameter kernel function does not provide as much granularity as the multi-parameter kernel and does not differentiate the contributions of different nodes to the ability of node 0 as well. Therefore, the trained parameters tend to be suboptimal. The UCB acquisition function places more emphasis on exploitation than the EI acquisition does. Thus, the search tends to get trapped in local optima.

Fig. 5
(a) Convergence speeds of four cases with EI and UCB acquisition functions, along with single-parameter and multiple-parameter kernel functions, are compared for the 20-node-192-edge example. (b) The optimum network with the ability of node 0 maximized is found with the EI acquisition and multiple-parameter kernel.

The convergence speeds for networks of different sizes are further tested. The results are shown in Fig. 6. It is seen that as the size of the network increases, more iterations are required to find the global optimum. The reason is two-fold. First, larger networks result in a higher-dimensional search space, and the search complexity over the possible solutions grows exponentially. Second, as the dimension of the search space increases, more samples are required to construct reliable surrogate models. Therefore, more iterations are necessary to ensure convergence to the global optimum.

Fig. 6
(a) Convergence speeds when searching in the 20-, 40-, and 60-node networks, with the EI acquisition and multi-parameter kernel functions, (b) the optimum in the 40-node network, and (c) the optimum in the 60-node network

To compare the performance of the dBO method with commonly used heuristic algorithms, simulated annealing is applied to the same network optimization problems. For each of the three examples with 20, 40, and 60 nodes, the simulated annealing algorithm to maximize the ability metric is run 5 times with different annealing steps ranging from 50 to 300. The means and standard deviations of the obtained optimal ability values for those test runs are listed in Tables 1–3, respectively. The means and standard deviations of the results for 5 runs of the dBO algorithm after 50 iterations are also listed in these tables, where the EI acquisition and multi-parameter kernel are used. The number of annealing steps indicates the computational cost, where each step involves one evaluation of the original objective function. In the dBO search, 50 initial samples with evaluations of the objective function were obtained to construct the initial GPR surrogate. One additional sample is added in each of the iterations in Figs. 5 and 6. Each iteration involves one evaluation of the objective function, whereas the evaluation of the acquisition function in Bayesian optimization is based on the surrogate and usually costs much less, especially when the original objective function requires heavy computation. Therefore, the cost of dBO for 50 iterations is approximately equivalent to the cost of simulated annealing for 100 steps in these examples. From the comparisons, it is seen that the dBO method can find better solutions than simulated annealing at a similar cost. Furthermore, the results of the dBO method have much less variability. In other words, the dBO algorithm is also more robust than the heuristic simulated annealing.
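The cost accounting above can be made concrete with a minimal sketch of the dBO loop. This is illustrative only: a kernel-weighted mean stands in for the full GPR posterior, random candidate sampling replaces the simulated annealing used to maximize the acquisition, and the objective function is a hypothetical toy, not a trust metric.

```python
import math
import random

def kernel(z1, z2, theta=1.0):
    # Single-parameter discrete kernel (cf. Eq. (26)) over binary vectors
    return math.exp(-sum(a != b for a, b in zip(z1, z2)) / theta)

def surrogate(z, samples):
    """Kernel-weighted mean with a crude uncertainty that shrinks to zero
    at sampled points; an illustrative stand-in for the GPR posterior."""
    w = [kernel(z, zi) for zi, _ in samples]
    mu = sum(wi * yi for wi, (_, yi) in zip(w, samples)) / sum(w)
    sigma = 1.0 - max(w)   # zero at sampled points, larger far away
    return mu, sigma

def dbo_maximize(objective, n_nodes, n_init=10, n_iter=30, kappa=1.5, seed=0):
    rng = random.Random(seed)
    rand_z = lambda: tuple(rng.randint(0, 1) for _ in range(n_nodes))
    # Initial design: evaluate the true objective on random subsets
    samples = [(z, objective(z)) for z in [rand_z() for _ in range(n_init)]]
    for _ in range(n_iter):
        # Maximize a UCB-type acquisition over random candidates;
        # the paper applies simulated annealing at this step instead
        def ucb(z):
            mu, sigma = surrogate(z, samples)
            return mu + kappa * sigma
        z_next = max([rand_z() for _ in range(200)], key=ucb)
        samples.append((z_next, objective(z_next)))  # one true evaluation
    return max(samples, key=lambda s: s[1])

# Hypothetical toy objective: reward including nodes {0, 2, 3}, penalize {1, 4}
best_z, best_y = dbo_maximize(
    lambda z: z[0] + z[2] + z[3] - 0.5 * (z[1] + z[4]), n_nodes=5)
```

Each loop iteration adds exactly one true objective evaluation, which is the cost accounting used in the comparison with simulated annealing above.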

Table 1

The means and standard deviations of the maximum ability for the 20-node network using simulated annealing with different annealing steps; the case of 100 annealing steps has a computational cost similar to that of the dBO with 50 iterations

Steps   Mean          Standard deviation
50      0.704128758   0.024803099
100     0.717732062   0.01618725
150     0.724677974   0.021446642
200     0.738149753   0.026914332
250     0.72842703    0.018894042
300     0.726842286   0.014625707
dBO     0.763904996   0.002614458
Table 2

The means and standard deviations of the maximum ability for the 40-node network using simulated annealing with different annealing steps; the case of 100 annealing steps has a computational cost similar to that of the dBO with 50 iterations

Steps   Mean          Standard deviation
50      0.638595221   0.060644109
100     0.684115767   0.035342407
150     0.696934409   0.028088683
200     0.68054112    0.023215712
250     0.709194429   0.031983543
300     0.70440341    0.023225232
dBO     0.746661792   0.00340882
Table 3

The means and standard deviations of the maximum ability for the 60-node network using simulated annealing with different annealing steps; the case of 100 annealing steps has a computational cost similar to that of the dBO with 50 iterations

Steps   Mean          Standard deviation
50      0.623391013   0.056150683
100     0.65012841    0.039877341
150     0.657217419   0.046396371
200     0.679789337   0.005860135
250     0.678678903   0.005974927
300     0.676195812   0.00793658
dBO     0.692554458   0.003021649

Besides the comprehensive ability metric, the capabilities in Eq. (9) and the influence in Eq. (11) can also be applied individually as criteria to perform design optimization based on specific interests. In addition, the second-order ability in Eq. (15) can be used as the optimization criterion. The respective optimum networks based on these three criteria for node 0 in the 20-node example are shown in Fig. 7. It is seen that different criteria lead to different optimum networks. The capabilities and influence criteria result in two different sets of optimal nodes, given that two different types of information (source nodes versus destination nodes) are applied in calculating the trustworthiness in Eqs. (9) and (11). When the ability metric in Eq. (13), which combines both types of information, is used, the assessment of trustworthiness is more comprehensive. The most trusted nodes, as seen in Fig. 5(b), are reduced to the ones that appear in both of the previous optimum networks. Some nodes become less trustworthy when more information is considered. The second-order ability is calculated with more information, where the abilities of the destination nodes are more influential. Therefore, the result of the second-order ability differs from that of the first-order one.

Fig. 7
Optimum networks with respect to node 0 in the 20-node-192-edge example by different ability metrics: (a) capabilities in Eq. (9) as criterion, (b) influence in Eq. (11) as criterion, and (c) second-order ability in Eq. (15) as criterion

5.2 Benevolence as the Optimization Criterion.

The design optimization procedure can be similarly applied with benevolence as the criterion. Because the reciprocity in Eq. (17) and benevolence in Eq. (21) are defined as pair-wise metrics, the optimization can be based on the weighted average benevolence perceived by node i as
$$U(i)=\sum_{j\in V(i)}w_j\bar{B}_j$$
(27)
for all neighboring nodes $V(i)$ of node i, where $\bar{B}_j=(1/n_j)\sum_{k\in V(i)}B_{j,k}$ is the average benevolence of node j among its $n_j$ neighbors, and the weights $w_j$ ($0\le w_j\le 1$) indicate the self-interest level. When $w_i=1$ and $w_j=0$ ($j\ne i$) with respect to node i, it is a “selfish” mode: only the benevolence of node i is considered as the criterion to find the optimum network for node i. On the other hand, when $w_i=0$ and $\sum_{j\ne i}w_j=1$, it is an “altruistic” mode. The weighted average reciprocity can be calculated similarly.
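The weighted average in Eq. (27) and the two weighting modes can be sketched as follows; the average-benevolence values and the three-node neighborhood are hypothetical:

```python
def network_utility(weights, avg_benevolence):
    """Eq. (27): weighted average of the mean benevolence B-bar_j over
    the neighboring nodes; the weights w_j encode the self-interest
    level of the reference node."""
    return sum(w * b for w, b in zip(weights, avg_benevolence))

# Hypothetical B-bar values for node 0 and its two neighbors
avg_b = [0.8, 0.5, 0.9]
selfish = network_utility([1.0, 0.0, 0.0], avg_b)      # w_0 = 1: "selfish"
altruistic = network_utility([0.0, 0.5, 0.5], avg_b)   # w_0 = 0: "altruistic"
```

In the selfish mode the utility reduces to node 0's own average benevolence, whereas in the altruistic mode it depends only on the neighbors.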

In the 20-node-192-edge example, the optimum networks for node 0 with the benevolence criteria are shown in Fig. 8. It is seen that when the self-interest weight w0 is lower, it is easier to build a larger trustworthy network. The most trusted networks obtained in Fig. 8 based on the benevolence criteria are different from the one in Fig. 5(b) based on the ability criteria. In the more “selfish” modes of benevolence, the only common trustworthy node between Figs. 5(b) and 8(a) is node 13, and between Figs. 5(b) and 8(b) it is node 15. For the more “altruistic” mode in Fig. 8(c), there is no node that is trustworthy as measured by both benevolence and ability. Therefore, competitions and conflicts exist when the different criteria of ability and benevolence are applied. If multiple criteria are considered simultaneously, multi-objective optimization methods are needed to identify the Pareto solutions and make tradeoffs.

Fig. 8
Optimum networks with respect to node 0 in the 20-node-192-edge example by different benevolence metrics: (a) weighted average benevolence as criterion with w0 = 1, (b) weighted average benevolence as criterion with w0 = 1/2 and all other weights are 1/38, and (c) weighted average reciprocity as criterion with w0 = 1/2 and all other weights are 1/38

6 Concluding Remarks

In this paper, quantitative trustworthiness metrics are used as the design criteria to perform optimization of cyber-physical-social system networks. Each node can choose its own most trusted strategic network so that it can collaborate and share information. The trustworthiness is quantified as multi-faceted quantities in both cyber and social spaces, including the dimensions of ability, benevolence, and integrity. In CPSS, the ability and benevolence can be calculated based on statistics from the working history of the nodes to measure the capacities of information gathering, reasoning, and information sharing. The most trusted strategic network for a node is the subnet that maximizes the ability of the node if ability is used as the criterion. A node that has high capacities of observing the state of the world accurately, making sound decisions based on available information, and bringing positive impacts to others is deemed to possess a high level of ability and is thus a trustworthy individual. Similarly, a node that is willing to share accurate information with others is also regarded as trustworthy. The strategic network is the one that leads to the maximum level of ability for the reference node, or consists of a group of collaborators that are the most willing to collaborate with the reference node.

Our previous study [2] showed that the new quantitative metrics of ability and benevolence are sensitive to trust attacks. It was seen that when a malicious node generates false predictions and sends them to other nodes, its perceived trustworthiness, as measured by ability and benevolence, drops quickly. When the attack stops, the perceived trustworthiness gradually increases and recovers. This matches human social behaviors well: it usually takes time to establish a trust relation, whereas the damage can be done much more quickly. When designing the trusted strategic network, the risks of attacks also need to be considered. Instead of targeting only the maximum trust level as shown in this paper, additional criteria for robustness need to be incorporated in future work.

The proposed discrete Bayesian optimization performs reasonably well for the combinatorial problem of network design, where search efficiency is improved and the variability of results is reduced. For the kernel function based on the Hamming distance, more hyper-parameters increase the flexibility of the kernel, whereas a small number of hyper-parameters is not robust enough for optimization. The limitation of using multiple hyper-parameters is training efficiency: more samples are required to train a larger number of hyper-parameters, which may not be worthwhile for small problems. Combinatorial problems, however, usually have very large search spaces, where introducing additional hyper-parameters can bring the benefit of faster convergence.

In this work, only single-objective optimization is applied. The multi-faceted trustworthiness metrics will eventually need a multi-objective optimization approach [57] for trust-based design, where multiple metrics are considered simultaneously and tradeoffs need to be made. The scalability of the discrete Bayesian optimization also requires further investigation, given that the Bayesian update procedure in GPR is computationally expensive when the number of samples is large. The proposed scheme will require further tests on large-scale networks. Enhancements such as sparse GPR are likely to bring better scalability.

Footnote

A shorter version of the paper was presented at ASME IDETC/CIE2020 as Paper No. IDETC2020-22661.

Acknowledgment

This work was supported in part by the National Science Foundation under grant CMMI-1663227.

Conflict of Interest

There are no conflicts of interest.

Data Availability Statement

The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request.

References

1. Wang, Y., 2018, "Trust Based Cyber-Physical Systems Network Design," Proceedings of the ASME 2018 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (IDETC/CIE2018), Quebec City, Canada, Aug. 26–29, p. V01AT02A037.
2. Wang, Y., 2018, "Trust Quantification for Networked Cyber-Physical Systems," IEEE Internet Things J., 5(3), pp. 2055–2070. 10.1109/JIOT.2018.2822677
3. Wang, Y., 2018, "Trustworthiness in Designing Cyber-Physical Systems," Proceedings of the 12th International Symposium on Tools and Methods of Competitive Engineering (TMCE2018), Las Palmas, Gran Canaria, Spain, May 7–11, pp. 27–40.
4. Wang, Y., 2016, "System Resilience Quantification for Probabilistic Design of Internet-of-Things Architecture," Proceedings of the 2016 ASME International Design Engineering Technical Conferences and the Computer and Information in Engineering Conference (IDETC/CIE2016), Charlotte, NC, Aug. 21–24, p. V01BT02A011.
5. Wang, Y., 2018, "Resilience Quantification for Probabilistic Design of Cyber-Physical System Networks," ASCE-ASME J. Risk Uncertain. Eng. Syst., B: Mech. Eng., 4(3), p. 031006. 10.1115/1.4039148
6. Tavčar, J., and Horváth, I., 2018, "A Review of the Principles of Designing Smart Cyber-Physical Systems for Run-Time Adaptation: Learned Lessons and Open Issues," IEEE Trans. Syst., Man, Cybern.: Syst., 49(1), pp. 145–158. 10.1109/TSMC.2018.2814539
7. Grimm, M., Anderl, R., and Wang, Y., 2014, "Cyber-Physical Augmentation: An Exploration," Proceedings of the 10th International Symposium on Tools and Methods of Competitive Engineering (TMCE2014), Budapest, Hungary, May 19–23, pp. 61–72.
8. Horváth, I., and Gerritsen, B. H., 2012, "Cyber-Physical Systems: Concepts, Technologies and Implementation Principles," Proceedings of the 9th International Symposium on Tools and Methods of Competitive Engineering (TMCE2012), Karlsruhe, Germany, May 7–11, pp. 19–36.
9. Jeon, J., Chun, I., and Kim, W., 2012, "Metamodel-Based CPS Modeling Tool," Embedded and Multimedia Computing Technology and Service, Lecture Notes in Electrical Engineering, Vol. 181, Springer, pp. 285–291.
10. Lee, K. H., Hong, J. H., and Kim, T. G., 2015, "System of Systems Approach to Formal Modeling of CPS for Simulation-Based Analysis," ETRI J., 37(1), pp. 175–185. 10.4218/etrij.15.0114.0863
11. Lee, E. A., Niknami, M., Nouidui, T. S., and Wetter, M., 2015, "Modeling and Simulating Cyber-Physical Systems Using CyPhySim," Proceedings of the 12th IEEE International Conference on Embedded Software, Amsterdam, The Netherlands, Oct. 4–9, pp. 115–124.
12. Saeedloei, N., and Gupta, G., 2011, "A Logic-Based Modeling and Verification of CPS," ACM SIGBED Rev., 8(2), pp. 31–34. 10.1145/2000367.2000374
13. Horváth, I., 2019, "A Computational Framework for Procedural Abduction Done by Smart Cyber-Physical Systems," Designs, 3(1), p. 1. 10.3390/designs3010001
14. Burmester, M., Magkos, E., and Chrissikopoulos, V., 2012, "Modeling Security in Cyber-Physical Systems," Int. J. Crit. Infrastruct. Prot., 5(3–4), pp. 118–126. 10.1016/j.ijcip.2012.08.002
15. Petnga, L., and Austin, M., 2016, "An Ontological Framework for Knowledge Modeling and Decision Support in Cyber-Physical Systems," Adv. Eng. Inform., 30(1), pp. 77–94. 10.1016/j.aei.2015.12.003
16. Pourtalebi, S., and Horváth, I., 2017, "Information Schema Constructs for Instantiation and Composition of System Manifestation Features," Front. Inform. Technol. Electron. Eng., 18(9), pp. 1396–1415. 10.1631/FITEE.1601235
17. Magureanu, G., Gavrilescu, M., Pescaru, D., and Doboli, A., 2010, "Towards UML Modeling of Cyber-Physical Systems: A Case Study for Gas Distribution," Proceedings of the IEEE 8th International Symposium on Intelligent Systems and Informatics, Subotica, Serbia, Sept. 10–11, pp. 471–476.
18. Palachi, E., Cohen, C., and Takashi, S., 2013, "Simulation of Cyber Physical Models Using SysML and Numerical Solvers," Proceedings of the 2013 IEEE International Systems Conference (SysCon), Orlando, FL, Apr. 15–18, pp. 671–675.
19. Wang, Y., 2020, "Information Dynamics in the Network of Cyber-Physical Systems," Proceedings of the 13th International Symposium on Tools and Methods of Competitive Engineering (TMCE2020), Dublin, Ireland, May 11–15, pp. 13–26.
20. Horváth, I., and Wang, J., 2015, "Towards a Comprehensive Theory of Multi-Aspect Interaction With Cyber Physical Systems," Proceedings of the ASME 2015 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (IDETC/CIE2015), Boston, MA, Aug. 2–5, p. V01BT02A009.
21. Li, Y., Horváth, I., and Rusák, Z., 2018, "Constructing Personalized Messages for Informing Cyber-Physical Systems Based on Dynamic Context Information Processing," Proceedings of the 12th International Symposium on Tools and Methods of Competitive Engineering (TMCE2018), Las Palmas, Gran Canaria, Spain, May 7–11, pp. 105–120.
22. Tran, A. V., Tran, M., and Wang, Y., 2019, "Constrained Mixed-Integer Gaussian Mixture Bayesian Optimization and Its Applications in Designing Fractal and Auxetic Metamaterials," Struct. Multidiscipl. Optim., 59(6), pp. 2131–2154. 10.1007/s00158-018-2182-1
23. Iyer, A., Zhang, Y., Prasad, A., Tao, S., Wang, Y., Schadler, L., Brinson, L. C., and Chen, W., 2019, "Data-Centric Mixed-Variable Bayesian Optimization for Materials Design," Proceedings of the ASME 2019 IDETC/CIE Conferences, Anaheim, CA, Aug. 18–21, p. V02AT03A066.
24. Baptista, R., and Poloczek, M., 2018, "Bayesian Optimization of Combinatorial Structures," Proceedings of the 35th International Conference on Machine Learning, PMLR 80, Stockholm, Sweden, pp. 462–471.
25. Zaefferer, M., Stork, J., Friese, M., Fischbach, A., Naujoks, B., and Bartz-Beielstein, T., 2014, "Efficient Global Optimization for Combinatorial Problems," Proceedings of the 2014 ACM Annual Conference on Genetic and Evolutionary Computation, Vancouver, Canada, July 12–16, pp. 871–878.
26. Garrido-Merchán, E. C., and Hernández-Lobato, D., 2020, "Dealing With Categorical and Integer-Valued Variables in Bayesian Optimization With Gaussian Processes," Neurocomputing, 380, pp. 20–35. 10.1016/j.neucom.2019.11.004
27. Zhang, J., Yao, X., Liu, M., and Wang, Y., 2019, "A Bayesian Discrete Optimization Algorithm for Permutation Problems," Proceedings of the 2019 IEEE Symposium Series on Computational Intelligence (SSCI 2019), Xiamen, China, Dec. 6–9, pp. 871–881.
28. Oh, C., Tomczak, J., Gavves, E., and Welling, M., 2019, "Combinatorial Bayesian Optimization Using the Graph Cartesian Product," Proceedings of the 2019 Advances in Neural Information Processing Systems (NIPS 2019), Vancouver, Canada.
29. Ruan, Y., and Durresi, A., 2016, "A Survey of Trust Management Systems for Online Social Communities—Trust Modeling, Trust Inference and Attacks," Knowl.-Based Syst., 106, pp. 150–163. 10.1016/j.knosys.2016.05.042
30. Li, X., Zhou, F., and Du, J., 2013, "LDTS: A Lightweight and Dependable Trust System for Clustered Wireless Sensor Networks," IEEE Trans. Inf. Forensics Secur., 8(6), pp. 924–935. 10.1109/TIFS.2013.2240299
31. Chen, Z., Tian, L., and Lin, C., 2017, "Trust Model of Wireless Sensor Networks and Its Application in Data Fusion," Sensors, 17(4), p. 703. 10.3390/s17040703
32. Barber, K. S., and Kim, J., 2001, "Belief Revision Process Based on Trust: Agents Evaluating Reputation of Information Sources," Trust in Cyber-Societies, Springer, pp. 73–82.
33. Kim, H., Lee, H., Kim, W., and Kim, Y., 2010, "A Trust Evaluation Model for QoS Guarantee in Cloud Systems," Int. J. Grid Distrib. Comput., 3(1), pp. 1–10.
34. Li, X., Ma, H., Zhou, F., and Gui, X., 2014, "Service Operator-Aware Trust Scheme for Resource Matchmaking Across Multiple Clouds," IEEE Trans. Parallel Distribut. Syst., 26(5), pp. 1419–1429. 10.1109/tpds.2014.2321750
35. Yu, B., and Singh, M. P., 2002, "Distributed Reputation Management for Electronic Commerce," Comput. Intell., 18(4), pp. 535–549. 10.1111/1467-8640.00202
36. Reddy, V. B., Venkataraman, S., and Negi, A., 2017, "Communication and Data Trust for Wireless Sensor Networks Using D–S Theory," IEEE Sens. J., 17(12), pp. 3921–3929. 10.1109/jsen.2017.2699561
37. Falcone, R., Pezzulo, G., and Castelfranchi, C., 2002, "A Fuzzy Approach to a Belief-Based Trust Computation," Proceedings of the Workshop on Deception, Fraud and Trust in Agent Societies, Bologna, Italy, July 15, Springer, pp. 73–86. 10.1007/3-540-36609-1_7
38. Alhamad, M., Dillon, T., and Chang, E., 2011, "A Trust-Evaluation Metric for Cloud Applications," Int. J. Mach. Learn. Comput., 1(4), p. 416.
39. Ashtiani, M., and Azgomi, M. A., 2016, "Trust Modeling Based on a Combination of Fuzzy Analytic Hierarchy Process and Fuzzy VIKOR," Soft Comput., 20(1), pp. 399–421. 10.1007/s00500-014-1516-1
40. Hoogendoorn, M., Jaffry, S. W., Van Maanen, P. P., and Treur, J., 2014, "Design and Validation of a Relative Trust Model," Knowl.-Based Syst., 57, pp. 81–94. 10.1016/j.knosys.2013.12.012
41. Hu, W. L., Akash, K., Reid, T., and Jain, N., 2018, "Computational Modeling of the Dynamics of Human Trust During Human–Machine Interactions," IEEE Trans. Human-Mach. Syst., 49(6), pp. 485–497. 10.1109/thms.2018.2874188
42. Chen, D., Chang, G., Sun, D., Li, J., Jia, J., and Wang, X., 2011, "TRM-IoT: A Trust Management Model Based on Fuzzy Reputation for Internet of Things," Comput. Sci. Inf. Syst., 8(4), pp. 1207–1228. 10.2298/csis110303056c
43. Huang, J., Seck, M. D., and Gheorghe, A., 2016, "Towards Trustworthy Smart Cyber-Physical-Social Systems in the Era of Internet of Things," Proceedings of the 2016 11th System of Systems Engineering Conference (SoSE), Kongsberg, Norway, June 12–16, pp. 1–6. 10.1109/sysose.2016.7542961
44. Al-Hamadi, H., and Chen, R., 2017, "Trust-Based Decision Making for Health IoT Systems," IEEE Internet of Things J., 4(5), pp. 1408–1419. 10.1109/jiot.2017.2736446
45. Yu, Z., Zhou, L., Ma, Z., and El-Meligy, M. A., 2017, "Trustworthiness Modeling and Analysis of Cyber-Physical Manufacturing Systems," IEEE Access, 5, pp. 26076–26085. 10.1109/access.2017.2777438
46. Xu, Q., Su, Z., Wang, Y., and Dai, M., 2018, "A Trustworthy Content Caching and Bandwidth Allocation Scheme With Edge Computing for Smart Campus," IEEE Access, 6, pp. 63868–63879. 10.1109/access.2018.2872740
47. Tang, L. A., Yu, X., Kim, S., Gu, Q., Han, J., Leung, A., and La Porta, T., 2013, "Trustworthiness Analysis of Sensor Data in Cyber-Physical Systems," J. Comput. Syst. Sci., 79(3), pp. 383–401. 10.1016/j.jcss.2012.09.012
48. Tao, H., Bhuiyan, M. Z. A., Rahman, M. A., Wang, T., Wu, J., Salih, S. Q., Li, Y., and Hayajneh, T., 2020, "TrustData: Trustworthy and Secured Data Collection for Event Detection in Industrial Cyber-Physical System," IEEE Trans. Ind. Inform., 16(5), pp. 3311–3321. 10.1109/tii.2019.2950192
49. Junejo, A. K., Komninos, N., Sathiyanarayanan, M., and Chowdhry, B. S., 2020, "Trustee: A Trust Management System for Fog-Enabled Cyber Physical Systems," IEEE Trans. Emerg. Top. Comput. (in press). 10.1109/tetc.2019.2957394
50. Xia, H., Xiao, F., Zhang, S.-S., Cheng, X.-G., and Pan, Z.-K., 2020, "A Reputation-Based Model for Trust Evaluation in Social Cyber-Physical Systems," IEEE Trans. Netw. Sci. Eng., 7(2), pp. 792–804. 10.1109/tnse.2018.2866783
51. Mayer, R. C., Davis, J. H., and Schoorman, F. D., 1995, "An Integrative Model of Organizational Trust," Acad. Manag. Rev., 20(3), pp. 709–734.
52. Lee, M. K., and Turban, E., 2001, "A Trust Model for Consumer Internet Shopping," Int. J. Electron. Commer., 6(1), pp. 75–91.
53. Chen, H., 2012, "The Influence of Perceived Value and Trust on Online Buying Intention," J. Comput., 7(7), pp. 1655–1662.
54. Yousafzai, S. Y., Pallister, J. G., and Foxall, G. R., 2005, "Strategies for Building and Communicating Trust in Electronic Banking: A Field Experiment," Psychol. Market., 22(2), pp. 181–201. 10.1002/mar.20054
55. Akter, S., D'Ambra, J., and Ray, P., 2011, "Trustworthiness in MHealth Information Services: An Assessment of a Hierarchical Model With Mediating and Moderating Effects Using Partial Least Squares (PLS)," J. Am. Soc. Inf. Sci. Technol., 62(1), pp. 100–116. 10.1002/asi.21442
56. Wang, T., Luo, H., Jia, W., Liu, A., and Xie, M., 2020, "MTES: An Intelligent Trust Evaluation Scheme in Sensor-Cloud-Enabled Industrial Internet of Things," IEEE Trans. Ind. Inform., 16(3), pp. 2054–2062. 10.1109/tii.2019.2930286
57. Shu, L., Jiang, P., Shao, X., and Wang, Y., 2020, "A New Multi-Objective Bayesian Optimization Formulation With the Acquisition Function for Convergence and Diversity," ASME J. Mech. Des., 142(9), p. 091703. 10.1115/1.4046508