Creativity is often considered a critical aspect of engineering innovation and successful product design. Many methods have been proposed for enhancing creativity, originality, and innovation. When these methods are tested, the experiments often generate large numbers of concepts that must be evaluated by experts in a time-consuming process. Similarly, the increased use of crowdsourcing for generating concepts often leads to a plethora of alternatives that must be evaluated. Accordingly, engineering design practitioners and researchers alike often find themselves evaluating large sets of concepts. In this paper, the feasibility of using non-experts to evaluate engineering creativity is investigated. Dozens of students at two universities are asked to rate the originality of several different solutions to a design problem for which validated expert ratings are available. Results indicate that it is possible to extract expert-level ratings from the non-expert student raters by focusing on the student raters with excellent inter-rater agreement amongst themselves and by training the students with example problems prior to the rating exercise. These results suggest that it may be possible to evaluate originality reliably with a large set of novice raters, perhaps with a Mechanical Turk-style approach.
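The core idea of filtering novice raters by their mutual agreement before aggregating their scores can be illustrated with a minimal sketch. The following Python example is purely illustrative and is not the authors' actual procedure: the rating data, the agreement measure (mean pairwise Spearman correlation), and the 0.7 threshold are all hypothetical assumptions introduced here for clarity.

```python
"""Illustrative sketch: keep only novice raters who agree well with their
peers, then compare the filtered consensus against expert ratings.
All data, thresholds, and the choice of agreement metric are hypothetical."""
import numpy as np
from scipy.stats import spearmanr

# ratings[i, j] = originality score given by novice rater i to concept j
ratings = np.array([
    [3, 5, 2, 4, 1],
    [3, 4, 2, 5, 1],
    [1, 2, 5, 3, 4],   # a rater who disagrees with the others
    [3, 5, 1, 4, 2],
])
expert = np.array([3, 5, 2, 4, 1])   # hypothetical validated expert scores

# Mean pairwise Spearman correlation of each rater with all other raters
n_raters = ratings.shape[0]
agreement = np.zeros(n_raters)
for i in range(n_raters):
    rhos = [spearmanr(ratings[i], ratings[j])[0]
            for j in range(n_raters) if j != i]
    agreement[i] = np.mean(rhos)

# Keep only raters whose peer agreement exceeds a chosen threshold
keep = agreement > 0.7
consensus = ratings[keep].mean(axis=0)

# How well the filtered-novice consensus tracks the expert ratings
rho, _ = spearmanr(consensus, expert)
print(f"kept {keep.sum()} of {n_raters} raters; "
      f"correlation with experts: {rho:.2f}")
```

In this toy setting, dropping the low-agreement rater brings the novice consensus into close alignment with the expert scores, which mirrors the kind of effect the abstract describes; other agreement statistics (e.g., intraclass correlation or Krippendorff's alpha) could be substituted in the same role.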
