## Abstract

Recent advances in design optimization have significant potential to improve the function of mechanical components and systems. Coupled with additive manufacturing, topology optimization is one category of numerical methods used to produce algorithmically generated optimized designs, and it is already making a difference in the mechanical design of hardware being introduced to the market. Unfortunately, many of these algorithms require extensive manual setup and control, particularly of tuning parameters that control algorithmic function and convergence. This paper introduces a framework, based on machine learning approaches, that recommends tuning parameters to a user in order to avoid the costly trial and error involved in manual tuning. The algorithm reads tuning parameters from a repository of prior, similar problems, judged similar by a dissimilarity metric computed on problem metadata, and refines them for the current problem using a Bayesian optimization approach. The approach is demonstrated for a simple topology optimization problem, first with the objective of achieving good topology optimization solution quality and then with the additional objective of finding an optimal "trade" between solution quality and required computational time. The goal is to reduce the total number of "wasted" tuning runs that would be required for purely manual tuning. With further development, the framework may ultimately be useful on an enterprise level for analysis and optimization problems (topology optimization is one example, but the framework is also applicable to other optimization problems, such as shape and sizing, and to high-fidelity physics-based analysis models) and may enable these types of advanced approaches to be used more efficiently.

## 1 Introduction

Recent advances in design optimization have significant potential to improve the function of mechanical products. Coupled with progress in additive and advanced manufacturing, there is the possibility to improve performance in many industries including applications in aerospace [1,2], thermal management [3,4], and medicine [5]. Topology optimization (TO) [6–9] is one category of numerical methods used to produce algorithmically generated optimized structures. The promise of TO is that the algorithm can create a design led by physics when supplied with basic information that defines the problem. Due to its effectiveness, there has been an expansion into disciplines beyond the traditional core of static structural mechanics such as crashworthiness [10], active composites [11], fluid flow [12–14], and heat transfer [15–17] as well as adoption in industry.

*ρ* represents a fictitious density serving as the local design variable,

**K** is the stiffness matrix,

**U** is the displacement matrix, and

*g* is a constraint. For minimizing the compliance with a volume constraint, the objective can be stated in an integral form as $l(\rho )=c(\rho )=\int_{\Omega} bu\,d\Omega +\int_{\Gamma} tu\,ds$, where *b* represents body forces, *u* displacements, and *t* surface tractions. The constraint is given by *g* = (*v*/*v*_{0}) − *v*_{f}, where *v* is the volume of the solid material, *v*_{0} is the volume of the design space, and *v*_{f} is the imposed volume fraction constraint.

The objective is augmented by weighting terms for compliance, *w*_{C}, and perimeter, *w*_{P} (Eq. (4)). Perimeter penalty [18,19] is selected here rather than another means of regularization, such as density or sensitivity filtering, in order to provide a simple and accessible example for use in defining the problem of TO parameter tuning.

Poorly tuned numerical parameters can result in inferior quality or nonsensical results. Take as an example the standard cantilevered beam problem (Fig. 1). Oversmoothing (Fig. 2(a)) occurs if the *w*_{P} term is set too high leading to a smeared result of intermediate density and an ill-defined structure. At a lower value of *w*_{P}, it is possible to find an acceptable balance (Fig. 2(b)). As *w*_{P} is further decreased, its contribution becomes negligible leading to an isoline that is stair-stepped and rough (Fig. 2(c)). Even further reduction gives rise to mesh-dependent results and numerical instabilities, often referred to as checker-boarding [20].

A list of other potential tuning parameters for the cantilevered beam problem is given in Table 1. Other types of topology optimization problems may have tuning parameters in addition to or instead of the ones shown, for example, level set TO [21] which may require choices regarding initial seeding [22].

The purpose of this contribution is to introduce an approach where problem definition and tuning parameter history can be captured and, possibly on an enterprise level, leveraged to mitigate the issue of tuning. The state-of-the-art process to establish appropriate values for tuning parameters is manual and tedious, requiring trial and error and multiple TO runs. Similar efforts may be repeated many times amongst different practitioners. Ideally, the solution would not need extensive data to begin but would reduce the number of required TO runs over time by leveraging built-up experience (Fig. 3). A two-stage framework, based on machine learning (ML) approaches, is presented in Sec. 2 and applied to simple example problems in Sec. 3. The limitations and outlook for the future are discussed in Sec. 4.

## 2 Approach

### 2.1 Framework.

The approach is divided into two stages due to the envisioned need to account for data-rich and data-poor scenarios. The first stage uses similarities between the current problem and previous problems to provide a recommendation for tuning parameters based on proximity to existing problems stored in a database (data-rich scenario). The second stage is intended to refine the tuning parameters for the specific problem and would dominate in the case where insufficient prior data are present, such as when a new TO feature is introduced (data-poor scenario). The balance between the two stages would be expected to shift over time as experience builds. The overall process is depicted in Fig. 4.

#### 2.1.1 Stage 1: Initiation of Tuning Parameters From Existing Designs Via Metalearning.

The search can be accelerated if the optimization is started with near-optimal sets of tuning parameters, **θ**_{i}. If a design problem can be described by a set of metafeatures (*m*) that define the problem, the value of **θ**_{i} for the initiation is chosen based on proximity as measured by a distance metric on the space of *m*. At a high level, problems with the same types of metafeatures are the closest to one another (e.g., variations of a structural problem), and problems with dissimilar categories of metafeatures are not close (e.g., structural versus fluid).

This paper focuses on the case where there are consistent but numerically different quantities, e.g., orientation of load or volume fraction, and uses a simple Euclidean distance (the *L*_{2} norm of the difference between metafeature vectors) between two design problems. A functional distance metric between two relatively similar design problems, e.g., one defined by the negative Spearman correlation coefficient [23], could be used in more complex situations but is reserved for future work.
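As a concrete illustration of this metafeature lookup, the sketch below (in Python, matching the implementation language noted in Sec. 2.3; the repository layout, function name, and example values are hypothetical, not the paper's implementation) returns the stored tuning parameters of the *k* nearest prior problems under the Euclidean metric:

```python
import numpy as np

def recommend_parameters(repository, m_new, k=3):
    """Return tuning parameters of the k repository entries whose
    metafeature vectors are closest (L2 norm) to the new problem.
    In practice the metafeatures would likely be normalized to
    comparable scales before computing distances."""
    ranked = sorted(
        repository,
        key=lambda entry: np.linalg.norm(np.asarray(entry[0]) - np.asarray(m_new)),
    )
    return [theta for _, theta in ranked[:k]]

# Hypothetical repository entries: (metafeatures [force angle (deg),
# volume fraction], best log10(w_P) found for that prior problem).
repo = [([90.0, 0.3], -3.8), ([45.0, 0.3], -4.2),
        ([90.0, 0.5], -5.1), ([0.0, 0.2], -2.9)]
recs = recommend_parameters(repo, [80.0, 0.35], k=3)  # initial set for Stage 2
```

The returned values would then seed the Stage 2 metamodel rather than being used directly.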

#### 2.1.2 Stage 2: Metamodel-Based Tuning Parameter Search.

The initial tuning parameters, **θ**_{i}, obtained based on distance are then provided to a metamodel-based Bayesian optimization [23]. A metric, *f*, is used to assess how **θ**_{i} performs in the TO algorithm. Figure 4 explains how a Gaussian process (GP)-based metamodel, $M$ [24], is constructed to establish the **θ**–*f* mapping. Bayesian optimization sequentially samples **θ** in an effort to minimize *f*, choosing points where the acquisition function is maximum. The acquisition function is defined as the expected positive improvement (EI) over prior known points [25], where *p*_{M} is the predictive distribution over the objective space (*y*) that is parameterized by **θ** for model $M$. For multiple objectives, the acquisition is the expected hypervolume improvement (EHI) of a candidate **θ** with respect to the nondominated set A (relative to a reference point *R*), provided its probability density function (PDF) over the objective space (Fig. 5) [26].

For a bi-objective case, an exact computation of EHI can be performed [26] with time complexity *O*(*n*), where *n* denotes the number of nondominated observations in the current set.
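When the GP posterior at a candidate point is Gaussian with mean μ and standard deviation σ, the EI acquisition has a well-known closed form. A minimal sketch for the minimization case (a generic illustration, not the paper's implementation):

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Closed-form EI of a candidate over the best (lowest) observed
    metric f_best, given GP posterior mean mu and std sigma."""
    if sigma <= 0.0:
        # No predictive uncertainty: improvement is deterministic.
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal PDF
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal CDF
    return (f_best - mu) * cdf + sigma * pdf
```

In use, the optimizer would evaluate this acquisition over candidate **θ** values predicted by the GP and run the TO only at the maximizer.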

### 2.2 Case Study.

The 2D cantilever beam of Fig. 1 is used as the primary TO example in this section. Two of the possible metafeatures, *m*, that define the problem are illustrated in Fig. 6: the angle at which a force is applied (Fig. 6(a)) and the volume fraction constraint (Fig. 6(b)).

The metric *f* used in the metamodeling step represents the quality of the TO result produced by a set of tuning parameters **θ**. It is intended to translate the qualitative human perception of solution quality into a quantitative value upon which ML-based optimization may be performed. In the present case, it quantifies a solution as being neither too diffuse nor too rough.

The first component, *q*_{1}, is defined as the deviation from the ideal 0–1 binary material density distribution in the TO solution, expressed mathematically in Eq. (7) in relation to a histogram of the density distribution (Fig. 7(a)). In the equation, *c*_{j} refers to the normalized (fractional) density count of the *j*th bucket and *e*_{j} refers to its expected value. *N* is the number of buckets of the histogram and *v*_{f} is the volume fraction constraint imposed by the TO problem. The minimal value of 0 for *q*_{1} is achieved by a perfectly binary TO result.

The second component, *q*_{2}, assesses smoothness and is defined in Eq. (8) using the ratio between the original isoline, *L*_{i}, and a smoothed version, *L*_{s} (Fig. 7(b)). The smoothing in this case is performed using a simple coarsening operation, similar to projecting the density distribution onto a coarser mesh. In the future, a more rigorous mathematical procedure guaranteeing smoothness, such as the use of a filter, would be preferred. The minimum value of 0 occurs when there is no difference between the initial and smoothed isolines, and large values result when the initial line contains large amounts of fine-scale roughness along the entire isoline.

The overall metric, *f*, is given in Eq. (9), where *a*_{1} and *a*_{2} are scaling constants simply defined as the inverses of the minimum occurring values of *q*_{1} and *q*_{2}, respectively.
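Since Eqs. (7)–(9) are not reproduced in this excerpt, the sketch below shows one plausible realization under stated assumptions: *q*_{1} as the summed absolute deviation of the density histogram from the ideal binary distribution, *q*_{2} as the isoline length ratio minus one, and *f* as a weighted sum. The exact functional forms in the paper may differ.

```python
import numpy as np

def q1_binary_deviation(rho, v_f, n_buckets=10):
    """Deviation from the ideal binary histogram: a fraction v_f of
    elements at density 1 and (1 - v_f) at density 0 (assumed form)."""
    counts, _ = np.histogram(rho, bins=n_buckets, range=(0.0, 1.0))
    c = counts / counts.sum()          # observed fractional counts c_j
    e = np.zeros(n_buckets)            # expected counts e_j for a binary result
    e[0], e[-1] = 1.0 - v_f, v_f
    return float(np.abs(c - e).sum())  # 0 for a perfectly binary solution

def q2_roughness(L_i, L_s):
    """Roughness from the ratio of the original isoline length L_i to the
    smoothed length L_s; 0 when smoothing changes nothing (assumed form)."""
    return L_i / L_s - 1.0

def quality(q1, q2, a1, a2):
    """Overall metric f as a weighted sum (assumed form of Eq. (9))."""
    return a1 * q1 + a2 * q2

# A perfectly binary density field at volume fraction 0.3 scores q1 = 0.
rho = np.concatenate([np.ones(30), np.zeros(70)])
```

A diffuse field (all intermediate densities) would score a large *q*_{1}, matching the oversmoothed case of Fig. 2(a).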

### 2.3 Implementation.

This two-stage ML approach was implemented in a modular style. The ML framework was created as a stand-alone module in Python. The commercial finite element software COMSOL was used for TO, controlled via a MATLAB API. A Python translator was created to convert the directives of the ML package into instructions for MATLAB/COMSOL.

## 3 Results

### 3.1 Single Tuning Parameter, Single Objective.

The framework was first applied to the 2D cantilevered beam problem (Sec. 2.1). Varying the metafeatures (force angle and volume fraction) resulted in dramatically different TO results (Fig. 8) and required different values of the tuning parameters to achieve the best TO results. The state of the art is for the appropriate value of *w*_{P} (Eq. (4)) to be established manually for each case in the figure.

In the envisioned use case, a repository would be populated over time by designers working on a variety of similar problems. The ML algorithm would refer to the repository to choose problems with small distance from the current problem, and the tuning parameters from those cases would be taken as an initial set **θ**_{i}. Because the algorithm is new, no such pre-existing repository was available; instead, a set of manual line searches of *w*_{P} was performed at different metafeature settings in order to populate a repository for demonstration.

The line search for a single combination of metafeatures is shown in Fig. 9. The quality metric has a minimum value at around log_{10}(*w*_{P}) = −3.8, indicating the best value to use for that particular combination of metafeatures. Other values of *w*_{P} produce higher *f*, indicating that the TO results are either more diffuse or more stair-stepped than the optimum.

Eight different combinations of the metafeatures were explored (Fig. 10(a)). The best values of *w*_{P} were then extracted from the individual line searches (thus forming the repository) and plotted in Fig. 10(b). The best value of *w*_{P} varies as a function of the metafeatures.

A new unique combination of the metafeatures (“New point” in Fig. 11(a)) was then specified with the intention of using the framework to obtain a recommendation of *w*_{P}. In the envisioned use case, this is analogous to a new mission profile. The metalearning step (M-L) produced *w*_{P} recommendations based on the distance (Euclidean norm) in the metafeature space, *m*, between the new point and the three closest points in the repository. These closest prior points are labeled M-L Rec1, 2, and 3 in Fig. 11(a) and listed in Table 2.

In order to understand the quality of these recommendations and to provide an illustration, a manual line search of *w*_{P} was performed at the new point (Fig. 11(b)); it indicated a minimum in the quality metric at approximately log_{10}(*w*_{P}) = −5. The recommended *w*_{P} values are superimposed on the plot as arrows pointing to the *w*_{P} axis. All three recommendations are located near, though not exactly at, the minimum. This indicates that the recommendations were indeed a good starting point for the metamodeling stage of the framework.

The metamodeling operation then received the three recommended *w*_{P} values as inputs for the Bayesian optimization, which served to refine the recommendations and seek the optimum *w*_{P} for this specific metafeature combination. The value of log_{10}(*w*_{P}) = −5 apparent from the line search was recovered after only two incremental metamodeling cycles. These values are also indicated by arrows in Fig. 11(b) (M-M1 and M-M2), and the final value is provided in Table 2. The Bayesian search required only five executions of the TO (three from the recommended points to initialize the GP model and two in the incremental Bayesian optimization) as opposed to the 12 required to establish the line search.

### 3.2 Dual Tuning Parameters, Dual Objectives.

The metamodeling framework was evaluated for the case of two tuning parameters and two objectives. The two tuning parameters were *w*_{P} and the finite element mesh size. The two objectives were the quality metric, used previously, and the elapsed time for the TO problem to run. The goal of the ML optimization, therefore, was to establish tuning parameters leading to the optimal trade between solution quality and the time required to obtain it, which enables effective use of an engineering or computational budget. The first stage, metalearning, was not performed in this example. Instead, the metafeatures of the cantilevered beam problem were fixed at *v*_{f} = 0.3 and a force angle of 90 deg. The Bayesian optimization was then seeded randomly with four initial sets of tuning parameters. This was a more difficult setup than if strong recommendations of tuning parameters had been provided by the metalearning step.

The results of the tuning parameter optimization are shown in Fig. 12. Circles indicate the quality and elapsed time of the four initial points. One of the initial points had a very high value of quality metric (∼27, indicating poor quality) and long elapsed time (∼450 s). The inset picture of the TO solution shows that the result was little more than a diffuse density field. Another initial point had better quality (∼12) and lower time (∼250 s), but the inset image also shows a qualitatively poor quality TO result.

The metamodeling algorithm created and advanced a Pareto front (see Fig. 5) using the initial points as a basis. The *x* markers in Fig. 12 indicate the final set of nondominated solutions after 15 ML iterations. TO solutions requiring long times to achieve high quality are located in the upper left corner. Moving to the right, the elapsed time decreases while the solution quality gets worse (metric increases) allowing a designer to make a time-quality tradeoff.
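The nondominated set marked by x in Fig. 12 can be maintained with a simple pairwise dominance check over (quality metric, elapsed time) pairs, both to be minimized. A minimal sketch (illustrative only; the paper's EHI-driven search is not reproduced here, and the sample values merely echo the magnitudes quoted above):

```python
def nondominated(points):
    """Return the points not dominated by any other point, where q
    dominates p if q is no worse in both objectives and differs
    from p (both objectives minimized)."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]

# (quality metric, elapsed time in s) observations, loosely echoing Fig. 12.
pts = [(27.0, 450.0), (12.0, 250.0), (5.0, 300.0), (8.0, 200.0)]
front = nondominated(pts)  # the time-quality tradeoff offered to a designer
```

A designer would then pick a point on this front according to the available time budget.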

### 3.3 Scalability and Adaptability.

The metamodeling framework was extended to a simply supported beam problem (Fig. 13) having a symmetric 3 × 1 aspect ratio, and TO was performed using software developed by the University of Colorado Boulder. Three tuning parameters governing a SIMP continuation scheme were used (Table 3), where the SIMP penalty parameter *P* is increased by Δ*P* prior to each continuation step. The quality metric in this case was tied to the histogram density distribution only (without the perimeter term). The initial sampling was populated with 7 random points followed by 30 iterations of metamodeling optimization for a total of 37 explored points. The resulting Pareto front (Fig. 13) represents an optimal trade between solution quality and time and was obtained in the same manner as for the simpler 2D problem in Sec. 3.2. The result in the upper left corner had a more binary density distribution and cleaner structural features, indicating higher quality, whereas the design in the lower right-hand corner had a more diffuse density and less-stiff structural features, indicating worse quality.
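The continuation scheme governed by these tuning parameters can be sketched as a driver loop, where the TO solve itself is abstracted as a hypothetical `run_to` callable and the SIMP penalty *P* is raised by Δ*P* before each successive step (default values are illustrative, not those of Table 3):

```python
def simp_continuation(run_to, p_start=1.0, p_max=4.0, delta_p=0.5):
    """Repeatedly run TO with an increasing SIMP penalty, warm-starting
    each continuation step from the previous density field."""
    rho = None          # no initial density field; run_to supplies a default
    p = p_start
    schedule = []       # record the penalty used at each step
    while p <= p_max:
        rho = run_to(rho, p)
        schedule.append(p)
        p += delta_p    # raise the penalty before the next continuation step
    return rho, schedule

# Stub solver that just echoes the penalty, to show the control flow.
rho, sched = simp_continuation(lambda rho, p: p)
```

The three tuning parameters of Table 3 would map onto choices such as `p_start`, `p_max`, and `delta_p` in a driver of this kind.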

## 4 Discussion

The problem of manually tuning TO algorithms prevents true design automation. In this contribution, we showed for a simple example that tuning parameters may need to be changed with the TO problem specification and may require manual rework when a problem is altered. We also introduced one possible algorithm, based on machine learning approaches, which could automate tuning and result in fewer TO runs. The proposed framework does not itself optimize mechanical part designs. Rather, it mitigates the large amount of human intervention required to run algorithms like TO and obtain meaningful results. The overall idea is not limited to TO, but rather it is applicable to a broad class of numerical analysis and optimization methods. We focused on TO due to its recent increase in popularity and the prevalence of tuning parameters that are difficult to adjust unless used by an expert.

The examples and algorithmic configurations selected for this paper, including the use of perimeter penalty, were simple by intention in order to clearly introduce and demonstrate the proposed algorithm. In order to be useful in a real-world scenario, the algorithm needs to be scaled up and demonstrated on different and more complicated problems. This can include TO for fluid flow and multiphysics phenomena as well as with a greater number of tuning parameters.

As the scope of problems expands, we anticipate that there will be a need to further differentiate problem types and their accompanying metafeatures. For instance, a simple Euclidean distance metric is not appropriate for assessing the distance between a cantilevered beam problem and a fluid flow problem. Thus, there is a need to develop more abstract distance metrics. We envision one possibility being to split Stage 1 (metalearning) into two substeps. The first, Step 1a, could be a classification-based metric, which would determine the class of a problem in terms of physics. Step 1b could be a further evaluation for relatively similar problems in terms of a Euclidean or functional distance metric, similar to the approach demonstrated above.

One eventual challenge will be the introduction of multiphysics into our framework. The presence of multiple physics may introduce interactions not present in single-physics problems, which would complicate the calculation of the distance metric. We reserve this challenge for future work, specifically the definition of a high-level classification-based metric assessment.

In the near term, improvements to the framework will aid functionality and efficiency. One example is preventing points from bunching on the Pareto front during the metamodeling optimization, which will ensure that the space is explored efficiently. In addition, multifidelity metamodeling would aid thorough and efficient exploration. A final example is a robust definition of metafeatures for structural mechanics problems, which would enable a wide variety of problems sharing a single physics to be stored and assessed.

One eventual future application of this work is to enable an enterprise-wide approach for capturing and using knowledge associated with automated design and numerical analysis. Besides TO, there are other fields, such as the general areas of computational fluid dynamics and finite element analysis, where tuning parameters that control solver settings are regularly used. The current state of the art in large organizations is for individual designers to manually tune these parameters.

## 5 Conclusion

Algorithmic design optimization is a promising means to generate effective mechanical components and systems. Topology optimization is one category of such methods but can require extensive manual setup and control, particularly of tuning parameters that control algorithmic function and convergence. This paper introduced a machine learning framework to recommend tuning parameters to a user in order to avoid costly trial and error involved in manual tuning. This framework consisted of two steps, a metalearning step where recommendation is drawn from similar problems and a metamodeling step where Bayesian optimization is used to efficiently optimize the parameters for the specific TO problem. A quality metric was developed to quantify a human's perception of solution quality. The framework was then demonstrated on relatively simple problems in cases with one to three tuning parameters using single (quality) and dual (quality and time) objectives. The approach was shown to be more effective than line search. Future work should center on handling of more complex TO problems, multiphysics, development of a classification-based metric for very different problem types, and scale-up.

## Acknowledgment

This work was funded by the DARPA TRAnsformative DESign (TRADES) program (Contract Grant No. HR0011-17-2-0022; Funder ID: 10.13039/100000185). The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing computing and collaboration resources that have contributed to the research results reported within this paper.