Abstract

Heating, ventilation, and air-conditioning (HVAC) systems consume over 5 quads of energy annually, representing 30% of energy consumption in U.S. commercial buildings. Additionally, commercial refrigeration (R) systems add about 1 quad to commercial building energy consumption. Most HVAC systems operate with one or more faults that increase energy consumption. Fault detection and diagnostics (FDD) tools have been developed to address this national issue, and many are commercially available. FDD tools have the potential to save considerable energy for existing commercial rooftop units (RTUs) and refrigeration systems. These devices can be used for retro commissioning and, when faults are addressed, continuous commissioning. However, there appear to be multiple market barriers to this technology. Although efforts to develop FDD tool standards are underway, there are currently no standards or methods to define the functions, capabilities, accuracy, and reliability of FDD tools in the field. Moreover, most commercial FDD tools have not been independently verified in the field. This paper presents a comprehensive approach for bringing HVAC FDD tools into the mainstream. The approach involves demonstrating ten commercially available FDD tools installed at ten different sites, independent testing and evaluation of the FDD tools, communication with various stakeholders, identification and assessment of market barriers, creation of a process evaluation methodology, and assistance to utility companies in developing incentive programs. The preliminary baseline results from the case study demonstrate how an independent monitoring system (IMS) can provide ground truth for evaluating FDD tools in the field.

1 Introduction

Heating, ventilation, and air-conditioning (HVAC) systems consume over 5 quads of energy annually, representing 30% of energy consumption in U.S. commercial buildings [1]. Packaged rooftop units (RTUs) provide heating and cooling for over 60 percent of the commercial building space (about 90 billion ft²) in the U.S., and they are a significant source of energy consumption and peak demand. An estimated 40,000 10-ton RTUs are sold each year in the U.S. There are over 486,000 RTUs in the Northeast region, and about 2700 units were sold in Connecticut in 2014. Another important market segment is commercial refrigeration, which contributes about 1 quad to commercial building energy consumption [2].

Most RTUs have one or more fault types such as low/high refrigerant charge, valve leakage, condenser/evaporator fouling, filter/dryer restriction, economizer faults, and control faults. These faults can increase RTU energy consumption. If they are detected, diagnosed, and addressed, significant energy can be saved. Automated Fault Detection and Diagnostics (AFDD) tools have the potential to save considerable energy for existing commercial RTUs. These devices, when installed on RTUs and other HVAC systems, can be used for retro commissioning; in addition, when faults are addressed, AFDD tools can be used for continuous commissioning. According to Navigant Consulting [2], national energy savings of 111 TBtu could be achieved by employing fault detection and diagnostics (FDD) tools for commercial RTUs. Faults in commercial refrigeration systems not only result in significant energy use but can also lead to equipment shutdowns that damage refrigerated products. Goetzler et al. [1] indicated that a popular supermarket employed a leakage detection and energy usage monitoring system that reduced electricity use by 23 million kWh per year.

To address this need, efforts to develop FDD tools have been ongoing for at least three decades [3–8], and several commercial FDD products are now on the market. These products include field-portable FDD tools, factory-installed on-board FDD tools, hardware-based retrofit on-board FDD tools, and software-as-a-service (SaaS) FDD tools. However, FDD systems have not yet achieved high HVAC market penetration. The online article "FDD Going Mainstream? Whose Fault is it?" lists several issues with FDD implementation, such as lack of data, rules specific to systems, how to handle the FDD information, using the diagnostic data, prognostics data, and alternative ways to deploy FDD [9]. A key market barrier for this technology appears to be the lack of awareness of FDD products among building owners, who make up the majority of potential customers. Most HVAC contractors are not familiar with the latest FDD technologies, and HVAC technicians lack knowledge and skills regarding them. Quantifying potential benefits to building owners is difficult, and many building owners look for a short-term return on investment (ROI). Hard-to-quantify FDD benefits, such as reduced HVAC downtime due to early warning and repairs, avoidance of catastrophic failures, and predictable maintenance, are not readily evident to building owners. FDD technology is something of an "unknown saint": it has promise but is not fully understood, evidenced, or exploited. To complicate matters further, many companies package FDD tools with energy-efficiency retrofit kits, and it is not clear to customers which ones to use. Even though standards efforts are underway, there are currently no standards or methods to define the functions, capabilities, accuracy, and reliability of FDD tools.
Many potential customers lack the tools to evaluate the cost of FDD tools, assess their real value, and understand the value proposition. In addition, it is not clear how the process of FDD installation and the communication of faults, fault severity, and corrective actions occur during use. Addressing these issues and overcoming these barriers is challenging but necessary. In that regard, there is a continued push to bring FDD products to the commercial market. The Western HVAC Performance Alliance has been working on an FDD Road Map and released a Master List of over 100 existing FDD products [10]. Efforts by utilities, federal agencies, and state agencies are underway to bring attention to FDD tools. For instance, California Energy Commission's Title 24, Part 6 requires that economizer FDD tools capable of detecting faults be installed on air-cooled unitary air-conditioning systems over 4.5 tons cooling capacity.

The uniqueness of this study is its comprehensive approach. It includes the identification of diverse FDD products, selection of different commercial building types for the field demonstrations, process evaluation, performance verification, determination of technical and economic viability, support for the development of utility incentives, and education and outreach. Identifying technical and market barriers, and developing strategies to address them by bringing together all stakeholders, are critical aspects of the study. Regarding the economic viability of AFDD tools, one aim of the study is to determine the energy savings due to AFDD implementation in the field under naturally occurring fault conditions rather than artificially introduced faults. To achieve this, AFDD tools are installed on RTUs in the field and energy use is monitored during a baseline period. At the end of the baseline period, faults identified by the AFDD tools are verified, and any needed repairs and adjustments are implemented as a retrofit. Energy use following the baseline period is then monitored (considered part of continuous commissioning), and the energy savings due to the AFDD tools are calculated with weather normalization. Because naturally occurring faults can vary significantly across sites, and the AFDD tools differ in features, no direct performance comparison is made among the tools. The AFDD tool capabilities and results are presented independently as individual case studies. This paper presents the first phase of the project, from AFDD tool selection through initial baseline testing results. The results of phase II (post-retrofit testing), which will conclude in August 2021, will be presented in a future paper.

2 Fault Detection and Diagnostics Tool and Site Selection

2.1 Fault Detection and Diagnostics Tool Selection.

The goal was to select ten different commercially available FDD tools for the field demonstration study; therefore, an FDD tool selection matrix was developed. The matrix included various performance parameters such as tool type (hardware-based or SaaS-based), type and range of faults detected, capability to detect heating and cooling faults, features to diagnose and verify faults, limitations on HVAC system size, fault communication and frequency, skill level needed to use the tool, and FDD tool pricing. FDD vendors received a Request for Information (RFI) that included the questions from the selection matrix. The study received responses from 12 vendors. Based upon the responses, each tool received a normalized score for each performance metric. Table 1 shows the scoring results. Of these twelve FDD tools, one tool is not market ready and one vendor dropped out of the study. Therefore, the study considered the remaining ten FDD tools for field-testing and evaluation.

Table 1

Overall normalized scoring of FDD tools based upon vendor RFI responses

FDD tool        Raw score  Percent  Tool type
Reference, max  17.89      100%     NA
Tool 1          12.92      72%      Hybrid
Tool 2          16.79      94%      Hardware
Tool 3          13.64      76%      Hardware
Tool 4          13.37      75%      Hardware
Tool 5          16.49      92%      Software
Tool 6          16.44      92%      Software
Tool 7          14.54      81%      Software
Tool 8          11.60      65%      Hardware
Tool 9          15.29      85%      Hybrid
Tool 10         14.65      82%      Software
Tool 11         15.74      88%      Hybrid
Tool 12         17.39      97%      Hardware

2.2 Site Selection.

The project's target was ten sites for the field demonstration of the FDD tools, with the goal of matching one tool per site. The project team developed the following FDD installation criteria: (1) commercial or industrial building type, (2) ease of access to rooftop units and corresponding ductwork, (3) RTU age, (4) a diverse portfolio, or building set, to demonstrate a variety of cases to potential customers, and (5) the ability to install a varied set of software versus hardware FDD tools across the ten sites. Table 2 shows all criteria details. Connecticut utility partners United Illuminating and Eversource sent a memorandum of understanding (MOU), along with the project scope, to several of their customers to develop candidate sites. The research team conducted several site visits after receiving MOUs. During these visits, the team validated responses submitted as part of the MOU process and collected additional information for site criteria, such as ease of access to equipment. Ten sites were selected from 15 candidate sites, and each site was paired with one of the ten FDD tools, as shown in Table 3.

Table 2

Site selection factors considered in the study

Site factor                      Factor metric
Building use                     Building type
                                 Year-round use
RTU characteristics              RTU size, age, type
RTU and ductwork access          RTU accessibility
                                 Ductwork accessibility for installing sensors for M&V
Building system characteristics  Building energy management system
                                 Network access capabilities and security
                                 Recent and planned energy projects
                                 Status of current building and operations
                                 Commercial refrigeration
Maintenance practice             Current RTU service status
                                 Preventative maintenance schedule
                                 Previous repairs and maintenance
Potential future saving          Nationwide chain or multiple sites
Table 3

List of sites for FDD tool installation

Site     Building type              FDD tool  Tool type
Site 1   Manufacturing facility     Tool 1    Hardware
Site 2   Restaurant                 Tool 2    Hardware
Site 3   University dormitory       Tool 3    Hardware
Site 4   Distribution center        Tool 4    Hardware
Site 5   Municipal office building  Tool 5    Software
Site 6   University services        Tool 6    Software
Site 7   K-12 school                Tool 7    Software
Site 8   Retailer                   Tool 8    Software
Site 9   Health & racquet club      Tool 9    Software
Site 10  Corporate research center  Tool 10   Software

3 Field-Testing Methodology

The purpose of the field demonstration is to study and verify the technical feasibility, economic feasibility, ease of installation, operational impacts, and environmental impacts of FDD through the field installation of FDD tools and independent monitoring. Section 3.1 describes the background and basis for the evaluation methodology. Sections 3.2 through 3.5 describe the field-testing methodology.

3.1 Background.

Braun and Yuill [11,12] developed methods to evaluate and assess the performance of FDD protocols for air-conditioning equipment (RTUs and split systems) using fault intensity (FI) and fault impact ratio (FIR). The study demonstrated that their FDD evaluator can be used to assess the strengths and weaknesses of an FDD protocol by implementing the Refrigerant Charge and Airflow (RCA) diagnostics protocol and a normal-model approach (developed from a non-faulted data set). Five test outcome scenarios are defined (no response, correct, false alarm, misdiagnosis, and missed detection) for six fault types (refrigerant charge, low-side and high-side heat transfer faults, liquid-line restriction, non-condensable gas in the refrigerant, and compressor valve leakage). The authors fed the FDD protocol different sets of experimental input data under different conditions and observed its responses. They used the normal-model data set and four versions of the RCA protocol as a case study to demonstrate their evaluation methodology. The RCA protocol was found to perform poorly, reporting faults in up to 51% of cases where no fault was present, misdiagnosing up to 26% of faults, and failing to detect up to 32% of faults that were present.
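The five outcome categories can be tallied with a short script. The sketch below is illustrative only, not the evaluator from Refs. [11,12]; the case tuples and fault names are hypothetical.

```python
# Illustrative tally of Braun and Yuill's five FDD test outcomes.
# A case is (ground_truth, reported), where each is a fault-name string,
# None (no fault / no alarm), or "no response" for a protocol that
# cannot produce a diagnosis.
from collections import Counter

def outcome(ground_truth, reported):
    """Map one test case to one of the five outcome categories."""
    if reported == "no response":
        return "no response"
    if ground_truth is None:
        return "correct" if reported is None else "false alarm"
    if reported is None:
        return "missed detection"
    return "correct" if reported == ground_truth else "misdiagnosis"

# Hypothetical test cases: (actual fault, fault reported by the protocol)
cases = [
    (None, "undercharge"),           # false alarm
    ("undercharge", "undercharge"),  # correct
    ("condenser fouling", None),     # missed detection
    ("undercharge", "valve leak"),   # misdiagnosis
]
print(Counter(outcome(g, r) for g, r in cases))
```

Aggregating these counts over many input samples gives the percentages (false alarm, misdiagnosis, missed detection) reported for the RCA protocol above.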

Wall and Guo [13] studied next-generation automated fault detection and diagnostics (AFDD) tools for commercial building energy efficiency, reporting the benefits, outcomes, and performance evaluation of AFDD. The authors studied six different AFDD tools at six buildings in Australia spanning different space types (office, airport, museum, hospital, and laboratory). They connected the AFDD tools to a building management system (BMS) to obtain real-time data, and the results for each building were reported in terms of annual energy savings, National Australian Built Environment Rating System (NABERS) ratings, improved occupant comfort, avoidable energy, maintenance issues, and site energy intensity. The study also reported the top issues identified in each building.

Dexter and Pakanen [14] reported on demonstrations of AFDD methods in operating buildings across twelve countries (Belgium, China, Canada, Finland, France, Germany, Japan, Netherlands, Sweden, Switzerland, United Kingdom, and the United States of America), developing computer-based demonstration systems that interface with HVAC systems. The authors developed 23 prototype performance-monitoring tools and three performance-validation tools based on artificial intelligence techniques (neural networks, fuzzy logic, expert and rule-based systems, case-based reasoning, bond graphs, and qualitative methods). They studied 26 different FDD tools and approaches for various HVAC systems in buildings with different space types (factory, hotel, laboratory, office, research and development center, college, school, and indoor swimming pool). The report defines three main groups of faults considered in the study: natural, artificial, and simulated.

Jagpal [15] built on the results of the earlier study [14] on demonstrating AFDD methods in operating buildings to illustrate the benefit of computer-aided FDD systems by working with FDD vendors, contractors, customers, and end users. The report had three main objectives: (1) assessing the cost-effectiveness of FDD methods, (2) building a prototype computer-aided performance evaluation system able to detect performance deficiencies and diagnose faults, and (3) investigating the feasibility of the developed performance evaluation systems by testing them in operating buildings. The author identified the limited availability of measured data, and faults with similar symptoms, as the main barriers to identifying HVAC faults.

Most of the approaches in the previous case studies and literature are based upon laboratory experiments in which the authors introduced faults into the system to study fault behavior and to see whether the FDD protocol responded. In this study, however, it is difficult to convince building managers to allow failures and faults to be introduced into systems currently serving operating buildings. Therefore, the study team developed a monitoring and validation system and implemented it in parallel with the FDD tool to obtain real data and to evaluate the faults indicated by the FDD products based on the FI and FIR of each fault. The study team developed a measurement and verification (M&V) plan based on the rules and techniques illustrated in previous research [11,12,16–18] to show the feasibility of market-available FDD tools by testing them in a set of operating commercial and industrial buildings.

One key objective of the study is a comprehensive evaluation of FDD tools in the field, a complex task that requires examining FDD tool performance, cost, ease of implementation, ease of use, data requirements, training requirements, and applicability to the needs of a particular site or customer [19]. A framework proposed by Frank et al. [20] for FDD performance evaluation was reviewed carefully. The framework was found to be useful for evaluating FDD protocols, but not for evaluating FDD tools in the field under naturally occurring faults, which can vary from no fault, to single faults of varying intensity and frequency, to multiple faults; there is thus no control over some aspects of the input samples seen by the FDD tools. However, the framework principles are used in the FDD tool performance verification. For instance, three categories of faults are considered: (1) condition-based, (2) outcome-based, and (3) behavior-based. Ground truth is also employed in the verification of faults reported by FDD. Ground truth is defined by a combination of independent monitoring of the RTU with essential sensors and instrumentation, and in-field checking and observation of the system, its operation, and its controls.

3.2 Independent Monitoring and Validation System.

The study team developed a measurement and validation plan based on methodologies developed by Braun and Yuill [11,12] and Mehrabi and Yuill [16–18], as well as the framework developed by Frank et al. [20]. The plan considers RTU faults and uses the FI and FIR measures to identify them and to calculate their energy impact. The identification and impact of faults associated with the following subsystems and flows (energy and refrigerant) are considered: (1) refrigerant charge, (2) refrigerant quality, (3) economizer, (4) controls, (5) condenser, (6) evaporator, (7) expansion device, (8) compressor, (9) refrigerant liquid lines, and (10) sensors, as well as (11) RTU system-level energy performance degradation. Based upon the possible faults in these subsystems and the fault impact calculations of Sec. 3.4, the team determined the components and sensors required for collecting input data. The Independent Monitoring System (IMS) evaluator tool uses the sensors and meters listed in Table 4 to collect real-time data; the table presents the types and planned locations of the monitoring system sensors and instruments. These sensors measure both refrigerant-side and air-side performance, and the monitoring system can calculate air-side and refrigerant-side performance indicators such as energy-efficiency ratio (EER) and cooling capacity. Complete specifications of all sensors in the FDD evaluator sensor network are given in the Appendix.

Table 4

Sensors and instrumentation used for IMS

Sensor type                        Location                                  Number
Temperature                        Suction line                              1
                                   Discharge line                            2
                                   Air side after condenser                  3
                                   Before the expansion device               4
                                   Air at evaporator outlet before the fan   5
Temperature and relative humidity  Supply duct                               6
                                   Return duct                               7
                                   Mixed air                                 8
                                   Outdoor                                   9
                                   Indoor                                    10
                                   Supply duct averaging                     11
                                   Return duct averaging                     12
Airflow                            Supply duct                               13
Power meter                        Compressor, blower, main                  14
Current transformers               Each leg of compressor, blower, and main  15
Pressure transducer                Suction line                              16
                                   Discharge line                            17

A comprehensive IMS evaluator tool, comprising a remotely accessible data acquisition system along with essential sensors and instrumentation, has been designed and used in the verification process. The IMS is, in effect, a laboratory installed in the field on an RTU, with over 20 sensors. One-minute interval data are stored and can be downloaded remotely through a cellular modem, so the project team does not have to return to the site to collect data. A schematic of this Intellilogger data acquisition system is shown in Fig. 1.
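As an illustration of the kind of air-side performance indicator the IMS computes from its monitored channels, the sketch below estimates cooling capacity and EER. The function name, the assumed specific volume of 13.5 ft³/lb, and the sample readings are assumptions for illustration, not values from the study.

```python
# Illustrative air-side EER calculation from IMS-type measurements
# (supply airflow, return/supply air enthalpies, total unit power).
def air_side_eer(supply_cfm, h_return, h_supply, total_power_w, v_air=13.5):
    """EER (Btu/h per W) from airflow (ft3/min), enthalpies (Btu/lb),
    and total unit power (W); v_air is the assumed specific volume
    of the air stream (ft3/lb)."""
    m_dot_air = supply_cfm * 60.0 / v_air          # air mass flow, lb/h
    capacity = m_dot_air * (h_return - h_supply)   # cooling capacity, Btu/h
    return capacity / total_power_w

# Hypothetical reading: 4000 cfm, 6 Btu/lb enthalpy drop, 10 kW total draw
print(round(air_side_eer(4000, 30.0, 24.0, 10000), 1))  # -> 10.7
```

Trending such an indicator over the one-minute IMS data is one way degradation faults become visible between site visits.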

Fig. 1
A schematic of Intellilogger, a data acquisition and monitoring system, used as a verification system (IMS evaluator tool)

The location of each sensor was determined by the input data required. Based on the initial site visit, the team customized the required sensors and instrumentation to the RTU characteristics and the locations of the building's supply and return ducts. A schematic of an RTU with the locations of all sensors is shown in Fig. 2.

Fig. 2
A schematic of a rooftop HVAC unit with labels indicating sensor locations for the IMS evaluator tool

3.3 Post-Processing and Fault Verification.

A majority of the FDD tools require a building management system (BMS), building automation system (BAS), or energy management system (EMS) to be installed to read the data points within an RTU. A BMS is an intelligent system that controls and monitors a building's main technical components, such as HVAC and lighting. Since BMS-based FDD tools can monitor all RTUs on a building without additional hardware expense, such FDD tools are set up to monitor all of a building's RTUs; however, the independent monitoring system (IMS) is installed on only one RTU for validation. For non-BMS, hardware-based FDD tools, one FDD tool is installed along with the IMS on one RTU per building. In general, all FDD tools share the same five general installation and use steps: (1) receive and install the tool, (2) verify functionality of the tool, (3) allow the tool to run over the baseline period, (4) review the faults after the baseline period, and (5) implement repairs or control changes to address faults. Many FDD tools require additional tuning to read data and detect faults more accurately.

The team developed an M&V plan using an independent monitoring tool. The plan outlines the steps for pre-installation, installation, data collection, analysis, and post-processing. For pre-installation, the team visited each study site to collect information and field measurements, including RTU make/model, duct dimensions, pressure transducer and temperature sensor locations, and RTU accessibility. The team then used this field information to customize each monitoring system to its site. After ordering and receiving the sensors and instrumentation, the team pre-wired and calibrated each monitoring system and verified its operation before field installation. A licensed HVAC technician installed all monitoring tools in the field with support from the project team, spending 1–2 days per installation depending upon unit accessibility and the technician's skill and experience. Once an FDD tool and the IMS were installed, the RTU was monitored for six to eight weeks and data were collected on naturally occurring faults. The research team reviewed and analyzed the real-time IMS data weekly.

3.4 Methodology for Evaluating Fault Detection and Diagnostics Tool Capabilities.

A complete set of fault algorithms has been implemented as part of the verification of FDD tools, based upon previous work by Yuill and Braun [12] and the Pacific Northwest National Laboratory (PNNL) [7]. Additional calculations from the ASHRAE Handbook of Fundamentals [21] were used during this development, as well as an online thermophysical property tool from the National Institute of Standards and Technology (NIST) [22]. This section presents the methodology for identifying RTU faults with the monitoring system by calculating the FI, and for examining the impact of these faults on energy efficiency and energy consumption through the FIR. The methods described were selected for practicality and to fit the time constraints of the field study. The FIR is defined as the ratio between the faulted and non-faulted values of the metric of interest; if the FIR passes a user-determined threshold, the system is considered "faulted." The FI is defined in terms of measurable numeric quantities related to the condition of the fault [12]. This approach can also be used to rank the prevalence of faults. Setting a higher threshold produces fewer alarms. All calculations in this study are for constant-speed rather than variable-speed equipment. The coefficient of performance (COP) is calculated from real-time monitoring system data for determining FDD tool performance and calculating energy savings.
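The FIR-threshold logic described above can be sketched in a few lines. The 5% default threshold and the sample COP values are assumed for illustration; they are not values from the study.

```python
# Minimal sketch of FIR-based fault flagging: the system is considered
# "faulted" when the fault impact ratio deviates from unity by more than a
# user-chosen threshold, so raising the threshold produces fewer alarms.
def is_faulted(fir, threshold=0.05):
    """fir: faulted/non-faulted ratio of the metric of interest (e.g., COP)."""
    return abs(1.0 - fir) > threshold

print(is_faulted(3.1 / 3.6))   # ~14% COP degradation -> True at a 5% threshold
print(is_faulted(3.55 / 3.6))  # ~1.4% degradation  -> False at a 5% threshold
```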

Refrigerant charge fault: The maximum performance of an HVAC-R system may occur at a charge level lower or higher than the nominal charge. Therefore, an accurate charge level is required for maximum performance, and the charging table from the manufacturer is not helpful. Braun and Yuill [11] use Eq. (1) to calculate the fault intensity for refrigerant charge (FI_ch), where m_act is the actual mass of charge and m_nom is the nominal (correct) mass of charge found in the manufacturer's specification

FI_ch = (m_act − m_nom) / m_nom
(1)
The research team used the virtual refrigerant charge sensor approach presented in Ref. [23], since it is easier to implement in a field study. This approach requires data from surface-mounted temperature sensors and uses Eq. (2) to calculate FI_ch, where T_sc is the measured system subcooling temperature, T_sh is the measured system superheat temperature, and T_sc,rated and T_sh,rated are the system subcooling and superheat at standard conditions, with values taken from Ref. [7]

FI_ch = (1/K_ch) [ (T_sc − T_sc,rated) − (K_sh/K_sc)(T_sh − T_sh,rated) ]
(2)
Although the superheat and subcooling can be estimated from temperature measurements alone, a more accurate method is to use the pressure transducer data to obtain the saturation temperatures at the measured pressures from refrigerant-specific saturation tables. The subcooling and superheat are then calculated with Eqs. (3) and (4), where T_liq and T_suc are the liquid-line and suction-line temperatures, respectively, T_cond is the condensing temperature at the liquid pressure, and T_evap is the evaporating temperature at the suction pressure [7]

T_sc = T_cond − T_liq
(3)
T_sh = T_suc − T_evap
(4)
The parameters K_ch, K_sh, and K_sc in Eq. (2) are calculated using Eqs. (5) and (6), presented in Ref. [23], at the given subcooling and superheat at rated and operating conditions, where α0 is the ratio of the refrigerant charge necessary to have saturated liquid at the condenser exit to the rated refrigerant charge, and X_hs,rated is the ratio of high-side charge to total refrigerant charge at the rated condition. Li and Braun [23] and Kim and Braun [24] use default values for α0 and X_hs,rated; accordingly, 0.75 is used for α0 and 0.73 for X_hs,rated. FI_ch is then compared to a threshold value to determine whether the system is overcharged or undercharged.

K_ch = T_sc,rated / [ (1 − α0) X_hs,rated ]
(5)
K_sh/K_sc = (T_sc − T_sc,rated) / (T_sh − T_sh,rated)
(6)
Mehrabi and Yuill [17] developed a general relation, Eq. (7), for the fault impact ratio of the refrigerant charge fault (FIR_ch) based on the COP of the RTU and analysis of experimental data from previous studies at the A and B standard cooling-mode test conditions of AHRI Standard 210/240 (2008) [25]. The relationship holds for single-speed fan and compressor systems, where a0, a1, and a2 are regression coefficients. These coefficients depend upon the operating conditions and whether the system has a fixed-orifice expansion device (FXO) or a thermostatic expansion valve (TXV). Here, the A test condition is assumed and the coefficients are adapted from Ref. [17] as follows: a0 = 0.97807, a1 = 0.32443, a2 = −1.47617 for FXO systems, and a0 = 1.00920, a1 = 0.17685, a2 = −1.60799 for TXV systems

FIR_ch,COP = a0 + a1 FI_ch + a2 FI_ch²
(7)
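The chain from Eq. (2) through Eq. (7) can be sketched as follows. This is an illustration, not the study's implementation: the subcooling/superheat readings are hypothetical, and K_sh/K_sc is passed in as a constant presumed to have been determined from reference data via Eq. (6) rather than from the current operating point.

```python
def charge_fault_intensity(t_sc, t_sh, t_sc_rated, t_sh_rated,
                           ksh_over_ksc, alpha0=0.75, x_hs_rated=0.73):
    """FI_ch via the virtual refrigerant charge sensor, Eq. (2),
    with K_ch from Eq. (5); ksh_over_ksc is a constant presumed
    fitted from reference data per Eq. (6)."""
    k_ch = t_sc_rated / ((1.0 - alpha0) * x_hs_rated)              # Eq. (5)
    return ((t_sc - t_sc_rated)
            - ksh_over_ksc * (t_sh - t_sh_rated)) / k_ch           # Eq. (2)

def fir_cop(fi, a0, a1, a2):
    """Quadratic fault impact ratio on COP, Eq. (7)."""
    return a0 + a1 * fi + a2 * fi ** 2

# Hypothetical TXV system: low subcooling and high superheat -> undercharge
fi = charge_fault_intensity(t_sc=6.0, t_sh=20.0, t_sc_rated=10.0,
                            t_sh_rated=12.0, ksh_over_ksc=0.5)
print(fi < 0)  # negative FI_ch indicates undercharge -> True
print(round(fir_cop(fi, 1.00920, 0.17685, -1.60799), 3))  # COP impact ratio
```

Comparing fi against the overcharge/undercharge thresholds then yields the fault decision described above.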
Condenser fouling: Condenser fouling reduces airflow, which reduces heat rejection from the condenser coils to the surroundings. The fault intensity of condenser fouling (FI_CA) is calculated with Eq. (8), where V̇_con,nom is the nominal condenser air flowrate specified in the manufacturer's literature and V̇_con,act is the actual condenser air flowrate [12]

FI_CA = (V̇_con,act − V̇_con,nom) / V̇_con,nom
(8)
The actual condenser air flowrate in ft3/s is calculated using Eq. (9) where m˙ref is the refrigerant mass flowrate, hdis is the enthalpy at the compressor discharge in Btu/lb at Tdis, and Pdis and hliq are the liquid line enthalpy in Btu/lb at Tliq, and TCAO and TCAI are air temperatures at the outlet and inlet of the condenser [7].
\dot{V}_{con,act} = \frac{\dot{m}_{ref}\,(h_{dis}-h_{liq})\,\dot{v}_{con,air}}{1.013\,(T_{CAO}-T_{CAI})}
(9)
m˙ref is calculated using Eq. (10) when the subcooling temperature is not zero by applying an energy balance on the compressor, where Wcomp is the compressor input power and hsuc is the compressor suction enthalpy in Btu/lb at Tsuc [7].
\dot{m}_{ref} = \frac{0.95\,W_{comp}}{h_{dis}-h_{suc}}
(10)
v˙con,air is the specific volume of air in ft3/lb and is calculated from the measured air temperatures at the condenser inlet and outlet with Eq. (11).
\dot{v}_{con,air} = 0.04512\left(\frac{T_{CAI}+T_{CAO}}{2} + 273.15\right)
(11)
FICA is compared with a preset threshold to identify a condenser fouling fault: if the magnitude of FICA exceeds the threshold, condenser fouling is present. The general relationship developed in Ref. [17] can then be used to calculate the fault impact ratio for the condenser fouling fault (FIRCA) with Eq. (12) based on the COP of the RTU, where a0, a1, and a2 are regression coefficients. Here, we assumed the A test condition and adapted the coefficients from Ref. [17]: a0 = 1.00, a1 = 0.31372, a2 = −0.3647 for systems with FXO and a0 = 1.0093, a1 = 0.79837, a2 = 0.54101 for systems with a TXV
\mathrm{FIR}_{CA,COP} = a_0 + a_1\,\mathrm{FI}_{CA} + a_2\,\mathrm{FI}_{CA}^{2}
(12)
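The condenser-fouling chain of Eqs. (8)–(12) can be sketched in Python as below. Function names are illustrative, the mixed IP/SI constants follow the text (enthalpies in Btu/lb, airflow in ft3/s, temperatures in °C for the specific-volume fit), and the regression coefficients are those quoted for the A test condition.

```python
# A-test regression coefficients for condenser fouling quoted in the text (Ref. [17])
CA_COEFFS = {"FXO": (1.00, 0.31372, -0.3647),
             "TXV": (1.0093, 0.79837, 0.54101)}

def refrigerant_mass_flow(w_comp, h_dis, h_suc):
    """Eq. (10): compressor energy balance with the 5% heat-loss factor from the text."""
    return 0.95 * w_comp / (h_dis - h_suc)

def air_specific_volume(t_cai, t_cao):
    """Eq. (11): specific volume of air (ft3/lb) at the mean condenser air temperature."""
    return 0.04512 * ((t_cai + t_cao) / 2.0 + 273.15)

def condenser_airflow(m_ref, h_dis, h_liq, t_cai, t_cao):
    """Eq. (9): actual condenser airflow from an air-side energy balance."""
    v_air = air_specific_volume(t_cai, t_cao)
    return m_ref * (h_dis - h_liq) * v_air / (1.013 * (t_cao - t_cai))

def condenser_fault_intensity(v_act, v_nom):
    """Eq. (8): negative values indicate reduced (fouled) airflow."""
    return (v_act - v_nom) / v_nom

def condenser_fault_impact_ratio(fi_ca, valve="TXV"):
    """Eq. (12): quadratic FIR regression on FI_CA."""
    a0, a1, a2 = CA_COEFFS[valve]
    return a0 + a1 * fi_ca + a2 * fi_ca ** 2
```

For example, a measured airflow 10% below nominal gives FI_CA = −0.1, which is then screened against the fouling threshold before the FIR is evaluated.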
Evaporator fouling: Evaporator fouling also reduces airflow and usually occurs because of filter fouling. The fault intensity of evaporator fouling (FIEV) is calculated with Eq. (13), where V˙sup,nom is the nominal evaporator air flowrate specified in the manufacturer's literature and V˙sup,act is the actual (measured) supply air flowrate over the evaporator coil in ft3/s [12]
\mathrm{FI}_{EV} = \frac{\dot{V}_{sup,act}-\dot{V}_{sup,nom}}{\dot{V}_{sup,nom}}
(13)
FIEV is compared with a preset threshold to identify an evaporator fouling fault: if the magnitude of FIEV exceeds the threshold, evaporator fouling is present. The general relationship developed in Ref. [17] is then used to calculate the fault impact ratio for the evaporator fouling fault (FIREV) with Eq. (14) based upon the COP of the RTU, where a0, a1, and a2 are regression coefficients. The A test condition is assumed, and the coefficients are taken from Ref. [17]: a0 = 0.99381, a1 = 0.13673, a2 = 0.09088 for systems with FXO and a0 = 0.99322, a1 = −0.02259, a2 = −0.34522 for systems with TXV
\mathrm{FIR}_{EV,COP} = a_0 + a_1\,\mathrm{FI}_{EV} + a_2\,\mathrm{FI}_{EV}^{2}
(14)
Energy Performance Degradation: Energy degradation gives an overall indication of how well the system is performing. It can be quantified by the COP using Eq. (15), the ratio of the desired heat removal from the air by the evaporator to the required power input to the compressor, and by the refrigerant-side EER [7,21], where Qref is the system capacity calculated with Eq. (16), Wmain is the total system power input, and Wfan is the power input to the evaporator fan. m˙ref is calculated from Eq. (10), which takes into consideration the heat loss through the compressor, and hevap,outlet and hevap,inlet are the enthalpies at the evaporator outlet and inlet, respectively
\mathrm{COP}_{ref} = \frac{Q_{ref}}{W_{main}-W_{fan}}
(15)
Q_{ref} = \dot{m}_{ref}\,(h_{evap,outlet}-h_{evap,inlet})
(16)
Airside EER is also calculated based on the ASHRAE Handbook of Fundamentals [21] (details of this calculation are not presented here). The estimated EER values for both the refrigerant side and the air side are calculated with Eq. (17) from the corresponding COP
\mathrm{EER} = 3.412\,\mathrm{COP}
(17)
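The refrigerant-side performance metrics of Eqs. (15)–(17) reduce to a few one-line calculations; a sketch follows, with illustrative function names and the same unit conventions as the text (capacity and power in consistent units, EER in Btu/Wh).

```python
def refrigerant_capacity(m_ref, h_evap_out, h_evap_in):
    """Eq. (16): cooling capacity from refrigerant mass flow and evaporator enthalpies."""
    return m_ref * (h_evap_out - h_evap_in)

def refrigerant_side_cop(q_ref, w_main, w_fan):
    """Eq. (15): capacity over compressor input (total power minus evaporator fan)."""
    return q_ref / (w_main - w_fan)

def eer_from_cop(cop):
    """Eq. (17): unit conversion from dimensionless COP to EER (Btu/Wh)."""
    return 3.412 * cop
```

As a sanity check against the case study, the rated EER of 11 for the Site 1 RTU corresponds to a COP of about 11/3.412 ≈ 3.2.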
Economizer Faults: It is difficult to identify economizer faults without knowing the status of the outdoor air damper. Economizer faults include air temperature sensor failures, not economizing when it should, economizing when it should not, a damper that does not modulate, and intake of excess outdoor air. However, the methodology developed in Ref. [7] relates the damper position to the outdoor air percentage (OA%) using Eq. (18): at 0%, the damper is fully closed; at 100%, it is fully open. This method uses the outdoor air temperature (TOA), mixed air temperature (TMA), and return air temperature (TRET) to find the portion of outdoor air entering the building
\mathrm{OA}\% = \frac{T_{RET}-T_{MA}}{T_{RET}-T_{OA}}
(18)
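A minimal sketch of the mixing-temperature estimate in Eq. (18) follows; the function name is illustrative, and the result is returned as a fraction (multiply by 100 for a percentage).

```python
def outdoor_air_fraction(t_ret, t_ma, t_oa):
    """Eq. (18): outdoor-air fraction from return, mixed, and outdoor temperatures.

    0.0 corresponds to a fully closed damper, 1.0 to a fully open damper."""
    return (t_ret - t_ma) / (t_ret - t_oa)

# Example: return 72 F, mixed 62 F, outdoor 52 F -> half the mixed air is outdoor air
oa = outdoor_air_fraction(72.0, 62.0, 52.0)   # 0.5
```

In practice this check is only meaningful when the outdoor and return temperatures are well separated; when TOA approaches TRET, the denominator vanishes and the estimate becomes unreliable.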

Control Faults: Control faults are generally difficult to measure directly and consist of incorrect set points, scheduling errors, or unusual control board malfunctions. Rather than causing wear and degradation to the equipment, these faults increase energy expenditure through a multitude of faulty or suboptimal operations. To determine inefficient and faulty operations, a power analysis is conducted by monitoring the energy use of the RTU through a main meter and submeters for the compressor and evaporator fan. Using these energy data, an operator can determine whether the unit is active during unoccupied building times, running for the entire day or for long durations when it should not be, or short cycling periodically. This power analysis method is demonstrated for Tool 1 in Sec. 6.1.
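The power-analysis checks described above can be sketched as two simple screens over metered power data. This is an illustrative sketch only: the sample format, occupancy window, and power thresholds are hypothetical, not values from the study.

```python
def after_hours_energy(samples, occupied=(7, 19), idle_kw=0.5):
    """Sum metered power outside the occupied window (hours), a symptom of
    scheduling faults. `samples` is a list of (hour_of_day, kW) readings;
    the occupancy window and idle threshold are hypothetical defaults."""
    return sum(kw for hour, kw in samples
               if (hour < occupied[0] or hour >= occupied[1]) and kw > idle_kw)

def count_cycles(kw_series, on_kw=1.0):
    """Count off-to-on transitions in a power trace; a high count per hour
    suggests short cycling of the compressor or fan."""
    cycles = 0
    was_on = False
    for kw in kw_series:
        on = kw > on_kw
        if on and not was_on:
            cycles += 1
        was_on = on
    return cycles
```

Applied to interval data from the main meter and submeters, these screens flag after-hours operation and cycling rates for an operator to review, as in the Tool 1 analysis of Sec. 6.1.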

3.5 Measurement and Verification of Energy Savings.

Energy savings due to fault correction will be calculated using Eq. (19)
Savings (kWh) = Faulted energy use (kWh) − Unfaulted (retrofit) energy use (kWh)
(19)

The annual energy cost savings will be calculated using Eq. (20) by multiplying the energy savings by the electricity rate (e.g., $0.12/kWh) and adding the monthly demand savings valued at the applicable rate

Annual cost savings = Energy savings × Rate + Σ_months (Monthly demand savings × Rate)
(20)

To calculate energy savings, the energy use data obtained by the IMS for the faulted and unfaulted (retrofit) scenarios were correlated with the outdoor ambient temperature or with cooling degree-days (CDD) or heating degree-days (HDD) to calculate weather-normalized annual energy savings for each FDD tool and site.
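The weather-normalization step above can be sketched as a simple degree-day regression: fit daily energy use against CDD separately for the faulted and retrofit periods, then evaluate both fits over the same typical-year CDD profile and apply Eq. (19). The linear model form and function names are illustrative, not the study's exact procedure.

```python
def fit_cdd_model(cdd, kwh):
    """Ordinary least-squares fit of daily kWh = b0 + b1 * CDD."""
    n = len(cdd)
    mx, my = sum(cdd) / n, sum(kwh) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(cdd, kwh))
          / sum((x - mx) ** 2 for x in cdd))
    return my - b1 * mx, b1          # (intercept b0, slope b1)

def normalized_annual_use(model, typical_cdd):
    """Predicted annual kWh over a typical-year daily CDD profile."""
    b0, b1 = model
    return sum(b0 + b1 * c for c in typical_cdd)

def annual_savings(faulted_model, retrofit_model, typical_cdd):
    """Eq. (19) applied to weather-normalized annual energy use."""
    return (normalized_annual_use(faulted_model, typical_cdd)
            - normalized_annual_use(retrofit_model, typical_cdd))
```

Evaluating both models on the same typical-year weather removes the effect of weather differences between the faulted and retrofit monitoring periods.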

4 Market Evaluation

The purpose of the market evaluation is to determine the current market conditions for streamlining adoption of FDD technologies and to analyze stakeholder feedback for the purposes of designing a utility marketing and incentive program that increases the adoption of FDD technologies. The goal is to propose a marketing and incentive program that the study partners or other utility companies can replicate in other states. To identify energy-efficiency technology adoption attitudes and FDD market barriers, two preliminary surveys were conducted at a utility customer outreach event in Connecticut in 2018. Thirty people responded to different aspects of the survey, and their roles are characterized in Fig. 3.

Fig. 3
Outreach event preliminary market survey participant characteristics

Table 5 shows the survey questions developed to understand perspectives on the value proposition for energy efficiency, along with participant responses. A five-point Likert scale was applied to understand relative value propositions. Each respondent was asked, “How important are the following value propositions to your organization, customers, or stakeholders when investing in building energy efficiency technologies?” Table 5 reports the responses from 28 people with the mean and rank order. The results indicate that (1) energy consumption, (2) energy cost, (3) energy demand, and (4) occupant comfort ranked as the highest value priorities, followed by (5) operation cost and (6) maintenance cost.

Table 5

A preliminary market survey on the value of energy efficiency objectives in buildings

How important are the following value propositions to your organization, customers, or stakeholders when investing in building energy efficiency technologies?

Value proposition | 5—Very important | 4—Important | 3—Fairly important | 2—Slightly important | 1—Not important | Do not know | Mean | Rank
Reduce energy consumption | 23 | 4 | 1 | 0 | 0 | 0 | 4.79 | 1
Reduce energy demand | 19 | 4 | 4 | 1 | 0 | 0 | 4.46 | 3
Reduce energy costs | 23 | 3 | 2 | 0 | 0 | 0 | 4.75 | 2
Reduce other utility cost | 12 | 8 | 5 | 2 | 1 | 0 | 4.00 | 8
Facility operation costs | 14 | 12 | 2 | 0 | 0 | 0 | 4.43 | 5
Facility maintenance costs | 16 | 9 | 2 | 0 | 0 | 1 | 4.36 | 6
Operational performance: customer comfort levels for HVAC-R or product quality for refrigeration | 16 | 9 | 3 | 0 | 0 | 0 | 4.46 | 4
Operational performance: cooling quality (product) | 13 | 11 | 3 | 1 | 0 | 0 | 4.29 | 7

Table 6 shows survey questions developed to understand the severity of market barriers to the adoption of FDD technologies. A five-point Likert scale was applied to understand relative market barriers. Table 6 also tabulates the 30 participant responses and calculates the mean and barrier rank order.

Table 6

A preliminary survey of FDD market barriers with results

Fault detection and diagnostics project stakeholder survey
How difficult do you perceive each barrier you or your organization might face in adopting FDD technologies at your facility or your customers’ facilities?

Category | Barrier | Very important | Important | Fairly important | Slightly important | Not important | Do not know | Mean | Rank
Informational | Interpreting and understanding the value proposition | 3 | 5 | 5 | 9 | 5 | 3 | 3.30 | 11
Informational | Lack of understanding of how technology works | 0 | 6 | 5 | 12 | 6 | 1 | 2.83 | 14
Informational | Determining the purchase model | 3 | 6 | 10 | 5 | 2 | 4 | 3.49 | 7
Informational | Determining how best to use contractors and service providers | 2 | 4 | 7 | 10 | 6 | 1 | 3.05 | 13
Informational | Determining how FDD technology fits with higher level energy management practices | 0 | 9 | 10 | 7 | 4 | 0 | 3.17 | 12
Organizational | Change from current decision making, practices, and technologies to select and use new technology | 4 | 11 | 4 | 7 | 3 | 1 | 3.69 | 4
Organizational | Understand how increased operation and maintenance expenses can offset value obtained in other areas | 6 | 6 | 9 | 4 | 3 | 2 | 3.76 | 3
Organizational | Accepting of some technical and cost risk | 3 | 9 | 7 | 7 | 3 | 1 | 3.52 | 6
Technical | IT integration and/or data integration | 4 | 8 | 8 | 2 | 5 | 3 | 3.68 | 5
Technical | Communication integration | 1 | 9 | 9 | 4 | 5 | 2 | 3.35 | 10
Technical | Physical integration with common building automation system | 3 | 7 | 8 | 7 | 4 | 1 | 3.42 | 8
Technical | Physical system integration with other building systems | 1 | 10 | 7 | 6 | 3 | 3 | 3.40 | 9
Technical | Lack of integration standards | 5 | 11 | 5 | 5 | 3 | 1 | 3.80 | 2
Technical | Expensive to implement in smaller buildings | 13 | 9 | 2 | 2 | 2 | 2 | 4.40 | 1

Figure 4 graphically depicts the survey results for information barriers, organizational barriers, and technical barriers.

Fig. 4
Analysis and results of outreach event preliminary market survey

Based on the preliminary survey at the outreach event and previous work [14,26–32], detailed market surveys targeted at different stakeholders (building owners/facility managers, HVAC contractors/energy consultants, utilities, and FDD vendors) have been developed and executed with stakeholders participating in the current study. The questions are organized into sections as shown in the  Appendix: (i) knowledge of FDD products, (ii) awareness of FDD products, (iii) attitudes toward FDD products, and (iv) respondent characteristics. The results of the market study will (1) help vendors develop more effective FDD products, (2) support state Conservation and Load Management (CLM) programs in determining proper incentives and rebates for FDD products, and (3) assist FDD vendors and CLM programs in developing effective FDD marketing campaigns.

5 Process Evaluation

The purpose of the process evaluation is to analyze the current processes of purchasing, installing, tuning, and using FDD tools for retro commissioning (RCM). Common process steps across all installations are shown in the  Appendix. The process for BMS-based FDD tools differs from that for non-BMS, hardware-based FDD tools. For example, Tool 1, a hardware-based tool, was installed at Site 1; a trained technician was able to install it in less than 4 h. Another hardware-based FDD tool, Tool 2, was installed at Site 2, and this installation proved difficult due to limited access to ductwork in the attic space and limited space within the RTU to place sensors, especially the refrigerant pressure sensors. The installation was completed over 1.5 days, including troubleshooting time. The detailed process evaluation of the FDD products in the ongoing study will be completed during fall 2020. The full process evaluation results and analysis will show how each step differs by FDD tool, will analyze variability by process step across tools, and will offer recommendations to improve these processes for future FDD product installations.

6 Results and Discussion

The IMS was installed at ten sites in Connecticut to analyze the performance of each FDD product in the field. Data were collected to verify that all of the sensors were connected and working properly. Additional site visits were made to rectify monitoring sensor errors when irregularities in the sensor data were identified. Preliminary faults detected by the FDD tools are categorized into sensor, control, hardware, and other fault groups and are summarized in Table 7 for each tool's baseline period. The IMS data were analyzed, and faults were identified using the FI- and FIR-based methods described in Sec. 3.4.

Table 7

Summary of faults identified by FDD tools for each site during first baseline period

Site | Tool | No. of fault categories flagged (sensor, control, hardware, other) | Comments
Site 1 | Tool 1 | 1 | Drive fault
Site 2 | Tool 2 | 2 | Low subcool; low evaporator efficiency; low compressor efficiency; economizer fault; low condenser efficiency; high superheat; high evaporation temperature
Site 3 | Tool 3 | 0 | None have been detected
Site 4 | Tool 4 | 0 | None have been detected
Site 5 | Tool 5 | 2 | Indoor air temp set point fault; discharge damper manual override fault
Site 6 | Tool 6 | 1 | Occupancy sensor fault
Site 7 | Tool 7 | 3 | Improper cooling staging; sensor mismatch; supply air too high
Site 8 | Tool 8 | 4 | Cooling short cycling; excessive cooling; fan short cycling; low discharge air temp; OA damper open overnight
Site 9 | Tool 9 | 0 | None have been detected
Site 10 | Tool 10 | — | Not yet started on FDD analytics

A summary of findings based upon the analysis of IMS data is presented in Table 8. The EER is calculated using three different methods: two using air-side performance and one using refrigerant-side performance. Redundant methods are applied because airflow measurement in the field may not be accurate in some instances, and inaccurate temperature measurements can likewise produce an inaccurate refrigerant-side EER. The main EER is calculated from the return air and mixed air temperatures and represents the cooling performance, i.e., the ability to cool the building. The refrigerant-side EER represents the performance of the vapor compression system and is calculated from the refrigerant line temperatures and refrigerant pressures.

Table 8

Summary of faults identified for each site by the IMS

Site | Fault(s) flagged (among EER degradation, refrigerant undercharge, refrigerant overcharge, economizer, condenser, evaporator) | Overall health
Site 1 | x(a,b) | Good
Site 2 | x(a); x; x | Average
Site 3 | x; x | Average
Site 4 | x; x(a); x | Poor
Site 5 | x(a); x | Average
Site 6 | x | Good
Site 7 | x | Good
Site 8 | x; x(a,b); x | Needs immediate attention
Site 9 | x; x | Average
Site 10 | x(a,b) | Good

a: Compressor A.
b: Compressor B.

This paper demonstrates the methods employed in the study and discusses only preliminary results for the FDD Tool 1 at Site 1 in detail in Secs. 6.1 and 6.2. Analysis and results for the other nine tool-site pairs are planned as part of future work. Figure 5 displays photos of the installation of the data logger and pressure/temperature sensors for the IMS, while Fig. 6 presents the photos of the FDD Tool 1, which is a hybrid, hardware-based tool.

Fig. 5
The installation of the IMS at Site 1
Fig. 6
Photos of the installed FDD Tool 1 at Site 1

6.1 Monitoring Tool Findings.

In this paper, Site 1 was selected for presenting detailed results from the independent monitoring tool. Figure 7 presents the calculated overall system EER (air side) for a hot summer day (July 21, 2019). The weather data corresponding to this day for this site are shown in Fig. 8. The rated EER for this RTU is 11 as denoted in Fig. 7.

Fig. 7
Calculated air side EER based upon the measurement data from the IMS evaluator tool for Site 1
Fig. 8
Outdoor air temperature and relative humidity profiles for Site 1 on July 21, 2019

As shown in Fig. 7, the overall EER is slightly over 11.0, indicating no reduction in performance relative to the rated EER.

Figure 9 shows the percentage of outdoor air entering the building as well as the return, outdoor, and mixed air temperatures. As seen in Fig. 9, when the outdoor temperature is low, the unit brings in 100% outdoor air; when the outdoor temperature begins to increase, the outdoor air is reduced to about 20%, indicating minimum ventilation air. This result shows that the economizer is working as desired. By examining the outdoor, return, and mixed air temperatures, one can estimate how much outdoor air is drawn through the economizer.

Fig. 9
Percentage of outdoor air entering the building; return, outdoor, and mixed air temperatures of the RTU at Site 1 from the IMS evaluator tool

Figure 10 presents the fault intensity (FI) of the refrigerant charge for both refrigerant circuits. The FI values indicate a slight overcharge for Circuit A (about +6%) and for Circuit B (about +15%). Refrigerant overcharge has little effect on EER; the main concern, generally, is that it may flood the compressor and damage it. Since the calculations show no significant reduction in EER, the overcharge is not a large concern. Because the RTU at this site has TXVs, subcooling is calculated and used to verify the refrigerant charge impact. Subcooling temperatures are illustrated in Fig. 11: for Circuit A, the subcooling is in the range of about 12–14 °C, and for Circuit B, it is in the range of 16–18 °C. Typical subcooling for TXV-based systems is about 8–10 °C, indicating that both circuits are slightly overcharged.

Fig. 10
Fault intensity of refrigerant charge for the RTU at Site 1 from the IMS evaluator tool
Fig. 11
Subcooling for refrigerant Circuits A and B of RTU at Site 1 from the IMS evaluator tool

The fault intensities (FI) for evaporator and condenser fouling faults were also calculated and found to be within acceptable limits, indicating no fouling faults.

Finally, the RTU runtime was examined to check for excessive cycling of the unit. Figure 12 presents the measured power for the main meter and the submetered components. As shown in Fig. 12, the unit cycles during nighttime but runs continuously (with at least one compressor stage) during the day; this July day, however, was a hot summer day. During nighttime, there were three cycles per hour, indicating that the system is sized properly and that the cycling losses are minimal.

Fig. 12
RTU runtime with measured powers for various components of the system

6.2 Analysis of Results and Discussion.

As presented previously, FDD Tool 1 is a hybrid tool. A sample screenshot of the FDD Tool 1 dashboard is shown in Fig. 13. It shows various monitored variables as well as any issues related to energy performance, health of the unit, and comfort. The Energy and Health indicators were green, indicating no issues. For the initial baseline monitoring period, the FDD tool did not indicate any performance, health, or comfort faults for the RTU at Site 1. The tool indicated only a few transient drive fault alarms, related to the variable frequency drive (VFD), and they resolved themselves, as shown in Fig. 14. These alarms did not have any impact on RTU performance, and the FDD tool indicated that the RTU at Site 1 performed well during the initial baseline period. The IMS tool did not identify any faults for this site, as discussed in Sec. 6.1. However, this limited verification of FDD Tool 1 does not indicate how effective the tool is in identifying faults; this is the reality of demonstrating such tools under field operating conditions.

Fig. 13
A sample screenshot of FDD Tool 1's dashboard
Fig. 14
A screenshot of Tool 1 platform showing the detected VFD drive faults

The capability of the IMS in identifying faults could also be verified by artificially introducing controlled faults. However, this approach would interfere with the study, which is based on naturally occurring faults. In addition, obtaining approval from building owners to introduce faults into RTUs is a barrier to recruiting sites for study participation.

7 Conclusions and Future Work

The importance of FDD tools and the need for a comprehensive approach to bring FDD tools into the mainstream are highlighted in this work. A methodology for evaluating technical performance was proposed for FDD tools installed on RTUs operating under real conditions and was demonstrated for one installed FDD tool. Baseline testing will continue through summer 2021, and recommendations for repairs and retro commissioning of RTUs will be identified through data analysis using the IMS and FDD tools. Once the retro commissioning is complete, the RTUs will be monitored through continuous commissioning, and the energy savings potential will be calculated. The capability of the IMS in identifying faults can also be verified by artificially introducing controlled faults; although this approach would interfere with a study based on naturally occurring faults, it could be attempted at the end of the project. The analysis, results, and recommendations for all sites will be provided in future work. This research also develops and defines an approach for conducting market and process evaluations for FDD products in the field. Detailed results and analysis of the market and process evaluations will also be provided in future work.

Acknowledgment

The authors would like to thank the United Illuminating Company and Eversource employees for helping the research team to recruit sites for the study and for review of proposed methods and results. We would also like to thank building owners, energy managers, mechanical contractors, and FDD vendors for participating in this study.

Funding Data

  • This study has been funded by the U.S. Department of Energy's Office of Energy Efficiency and Renewable Energy under contract no. EE0008189. Cost sharing is provided by the United Illuminating Company and Eversource through funds from Energize CT, the University of New Haven, and the University of Connecticut.

Data Availability Statement

The authors attest that all data for this study are included in the paper.

Nomenclature

     
  • h = enthalpy
  • m = mass
  • P = pressure
  • Q = system capacity
  • T = temperature
  • W = power
  • m˙ = mass flowrate
  • V˙ = air flowrate
  • Xhs,rated = ratio of high-side charge to the total refrigerant charge at the rated condition
  • α0 = ratio of refrigerant charge necessary to have saturated liquid at the exit of the condenser to the rated refrigerant charge
  • υ˙ = specific volume of air

Subscripts

     
  • act = actual
  • CA = condenser fouling
  • CAI = condenser air inlet
  • CAO = condenser air outlet
  • ch = refrigerant charge
  • comp = compressor
  • cond = condenser
  • dis = discharge
  • EV = evaporator fouling
  • evap = evaporator
  • liq = liquid line
  • MA = mixed air
  • nom = nominal
  • OA = outdoor air
  • ref = refrigerant
  • RET = return air
  • sc = subcooling
  • sh = superheat
  • suc = suction line
  • sup = supply

Appendix

Table 9 provides more details on the specifications of the FDD evaluator tool and the sensors used as part of its sensor network. The first column describes the sensor, the second column gives the model number, and the third column provides further details and specifications. All sensors were pretested in the laboratory before being installed in the field as part of the evaluator tool.

Table 9

Specifications for IMS sensors and instrumentation

Sensor/variableModelSpecification
Power meter (main, compressor, fans)Watt node WNB-3D-400-P0Hz = 300Wattnode pulse output energy meter; 3ph 3-wire 400 V (no neutral) or 3ph 4-wire 230 V/400 V. Custom 300 Hz full-scale output frequency (adds 10%)
Supply duct avg tempKele ST-FZR85-18-XN3 with T85U-3XW-M±0.36 °F (±0.2 °C) accuracy, (XN3 NIST certificate, three reference points 32 °F/77 °F/158 °F(0 °C/25 °C/70 °C)
Return duct average tempKele ST-FZR85-18-XN3 with T85U-3XW-M±0.36 °F (±0.2 °C) accuracy, (XN3 NIST certificate, three reference points 32 °F/77 °F/158 °F(0 °C/25 °C/70 °C)
Refrigerant temp (suction, discharge, condenser outlet)Logic Beach TC-TTU5-0.5-3UT-S-24AWG0.125“dia × 0.5” 316SS probe with potted ungrounded junction and 20 ft of 24AWG lead wire.
Air temp before fanLogic Beach TC-TVT6-00-20-S020 ft 22AWG Teflon stranded leads with Teflon fused tip for low mass thermal response and media protection. Part meets “1/2 Std. Limits” spec resulting in improved accuracy of +/−0.3 °C over 0–125 °C range.
Air temp after condenserLogic Beach TC-TVT6-00-20-S020 ft 22AWG Teflon stranded leads with Teflon fused tip for low mass thermal response and media protection. Part meets “1/2 Std. Limits” spec resulting in improved accuracy of +/−0.3 °C over 0–125 °C range.
Duct airflowFlow Measurement Technology NL 1 × 3±2% of reading, 3 nodes, single strut, NIST, Thermal Dispersion Airflow
Return temp/RHSetra SRH1-3P-D-11-T3-N0–99% RH, 3% humidity accuracy, −58 to 140 °F temp, 4-20 mA out, +/− 1.1 °F accuracy
Supply temp/RHSetra SRH1-3P-D-11-T3-N0–99% RH, 3% humidity accuracy, −58 to 140F temp, 4–20 mA out, +/− 1.1F accuracy
Mixed temp/RHSetra SRH1-3P-D-11-T3-N0–99% RH, 3% humidity accuracy, −58 to 140 °F temp, 4–20 mA out, +/− 1.1 °F accuracy
Outdoor temp/RHSetra SRH1-3P-O-11-T30–99% RH, 3% humidity accuracy, −58 to 140 °F temp, 4–20 mA out, +/−1.1F accuracy
Indoor temp and RHSetra SRH1-3P-W-11-T30–99% RH, 3% humidity accuracy, −58 to 140 °F temp, 4–20 mA out, +/−1.1 °F accuracy
Solar radiationLICOR L-Cor 200SZSolar Radiation Pyranometer; Includes mounting and leveling base and 50′ extension lead.
Refrigerant pressure transducerDwyer 628CR-75-GH-P2-E3-S1-NIST1% full scale, 4–20 mA, 1/4″ female National Pipe Thread (NPT), 0–10 bar
Refrigerant pressure transducerDwyer 628CR-81-GH-P2-E3-S1-NIST1% full scale, 4–20 mA, 1/4″ female NPT, 0–40 bar
Data loggerLogic Beach IL-80IntelliLogger IL-80, Data Acquisition and Reporting Instrument, network enabled with integral Web Page server. Includes Universal Serial Bus (USB) cable, Ethernet cable, 120Vac to 12Vdc power adapter
Sensor/variable | Model | Specification
Power meter (main, compressor, fans) | WattNode WNB-3D-400-P, Opt Hz=300 | WattNode pulse-output energy meter; 3-phase 3-wire 400 V (no neutral) or 3-phase 4-wire 230 V/400 V; custom 300 Hz full-scale output frequency (adds 10%)
Supply duct average temp | Kele ST-FZR85-18-XN3 with T85U-3XW-M | ±0.36 °F (±0.2 °C) accuracy; XN3 NIST certificate with three reference points: 32 °F/77 °F/158 °F (0 °C/25 °C/70 °C)
Return duct average temp | Kele ST-FZR85-18-XN3 with T85U-3XW-M | ±0.36 °F (±0.2 °C) accuracy; XN3 NIST certificate with three reference points: 32 °F/77 °F/158 °F (0 °C/25 °C/70 °C)
Refrigerant temp (suction, discharge, condenser outlet) | Logic Beach TC-TTU5-0.5-3UT-S-24AWG | 0.125 in. dia × 0.5 in. 316SS probe with potted ungrounded junction and 20 ft of 24 AWG lead wire
Air temp before fan | Logic Beach TC-TVT6-00-20-S0 | 20 ft of 22 AWG Teflon stranded leads with Teflon-fused tip for low-mass thermal response and media protection; meets "1/2 Std. Limits" spec for improved accuracy of ±0.3 °C over the 0–125 °C range
Air temp after condenser | Logic Beach TC-TVT6-00-20-S0 | 20 ft of 22 AWG Teflon stranded leads with Teflon-fused tip for low-mass thermal response and media protection; meets "1/2 Std. Limits" spec for improved accuracy of ±0.3 °C over the 0–125 °C range
Duct airflow | Flow Measurement Technology NL 1×3 | ±2% of reading; 3 nodes, single strut; NIST-traceable; thermal dispersion airflow
Return temp/RH | Setra SRH1-3P-D-11-T3-N | 0–99% RH, 3% RH accuracy; −58 to 140 °F, ±1.1 °F accuracy; 4–20 mA output
Supply temp/RH | Setra SRH1-3P-D-11-T3-N | 0–99% RH, 3% RH accuracy; −58 to 140 °F, ±1.1 °F accuracy; 4–20 mA output
Mixed temp/RH | Setra SRH1-3P-D-11-T3-N | 0–99% RH, 3% RH accuracy; −58 to 140 °F, ±1.1 °F accuracy; 4–20 mA output
Outdoor temp/RH | Setra SRH1-3P-O-11-T3 | 0–99% RH, 3% RH accuracy; −58 to 140 °F, ±1.1 °F accuracy; 4–20 mA output
Indoor temp and RH | Setra SRH1-3P-W-11-T3 | 0–99% RH, 3% RH accuracy; −58 to 140 °F, ±1.1 °F accuracy; 4–20 mA output
Solar radiation | LI-COR LI-200SZ | Solar radiation pyranometer; includes mounting and leveling base and 50 ft extension lead
Refrigerant pressure transducer | Dwyer 628CR-75-GH-P2-E3-S1-NIST | 1% full scale; 4–20 mA; 1/4 in. female National Pipe Thread (NPT); 0–10 bar
Refrigerant pressure transducer | Dwyer 628CR-81-GH-P2-E3-S1-NIST | 1% full scale; 4–20 mA; 1/4 in. female NPT; 0–40 bar
Data logger | Logic Beach IL-80 | IntelliLogger IL-80 data acquisition and reporting instrument; network-enabled with integral web page server; includes USB cable, Ethernet cable, and 120 Vac-to-12 Vdc power adapter
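Several of the instruments above (the Dwyer pressure transducers and the Setra temperature/RH probes) report over a 4–20 mA current loop, which the logger must scale to engineering units. A minimal sketch of that linear mapping, with ranges taken from the table (the function name is ours, not from any vendor library):

```python
def current_to_engineering(ma, lo, hi):
    """Linearly map a 4-20 mA loop signal onto the engineering range [lo, hi]."""
    if not 4.0 <= ma <= 20.0:
        raise ValueError(f"signal {ma} mA is outside the 4-20 mA loop range")
    return lo + (ma - 4.0) / 16.0 * (hi - lo)

# Dwyer 628CR-81 high-range transducer: 4-20 mA spans 0-40 bar
pressure_bar = current_to_engineering(12.0, 0.0, 40.0)   # mid-scale -> 20.0 bar

# Setra SRH1 temperature channel: 4-20 mA spans -58 to 140 F
temp_f = current_to_engineering(4.0, -58.0, 140.0)       # bottom of scale -> -58.0 F
```

An out-of-range reading (below 4 mA) usually indicates a broken loop or failed sensor, which is why the sketch rejects it rather than extrapolating.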

The goal of the market study is to determine the levels of knowledge, awareness, and attitudes among study participants that may influence adoption of FDD technology. The market study will be conducted with all study participants, including customers, HVAC and controls technicians, FDD vendors, and utility program managers and engineers. Four market studies were reviewed to identify best practices for conducting this market study and survey: (1) the 2012 Connecticut Efficient Lighting Saturation and Market Assessment [32], (2) the 2015 Barriers to Commercial and Industrial Program Participation with a Focus on Financing and Cancellations study [26], (3) the 2014 Connecticut Ground Source Heat Pump Impact Evaluation & Market Assessment [28], and (4) the 2015 Connecticut Commercial & Industrial (C&I) Market Research study [31]. Table 10 lists the survey questions, grouped into four sections: (1) customer knowledge of FDD products, (2) customer awareness of FDD products, (3) customer attitudes toward FDD products, and (4) respondent characteristics. Descriptive statistics will be calculated and reported for each measured level of knowledge, awareness, and attitude toward FDD adoption. Further analysis will determine whether knowledge, awareness, and attitudes correlate with a respondent's intention to adopt FDD technology. The results will be used to design marketing, outreach, and education programs that educate customers on the use of FDD technologies.
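The planned analysis (descriptive statistics per section, then a correlation check against adoption intention) can be sketched with the standard library alone. The Likert-scale responses below are hypothetical placeholders, not survey data:

```python
import statistics

# Hypothetical 5-point Likert responses (1 = low, 5 = high), for illustration only
knowledge = [2, 4, 3, 5, 1, 4, 2, 5]
intention = [1, 4, 3, 5, 2, 5, 2, 4]   # stated intention to adopt FDD

# Descriptive statistics, as would be reported per survey section
print(f"knowledge: mean={statistics.mean(knowledge):.2f}, "
      f"stdev={statistics.stdev(knowledge):.2f}")

def pearson(x, y):
    """Pearson correlation between two equal-length response vectors."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

r = pearson(knowledge, intention)
print(f"knowledge vs. adoption intention: r = {r:.2f}")
```

For ordinal Likert data a rank-based measure such as Spearman's rho is often preferred; the Pearson form above is the simpler illustration of the same idea.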

Table 10

Market survey questionnaire

Section 1: Customer knowledge of fault detection and diagnostics (FDD) products
1. How would you characterize your knowledge level or experience with fault detection and diagnostics (FDD) products for HVAC-R systems?
2. What capabilities and characteristics should a good FDD tool possess?
3. Rank the following HVAC-R faults that you would like an FDD tool to detect, from highest priority to lowest priority.
4. Who do you think has the most influence in promoting and explaining the benefits of FDD products, in terms of energy efficiency and savings, to you?
5. What information do you look for to help you make a decision when purchasing new products such as FDD tools?
6. How could mechanical and electrical contractors help you better understand FDD products or explain and promote these products to you?
Section 2: Customer awareness of fault detection and diagnostics (FDD) products
7. Is your company aware of the existence of fault detection and diagnostics (FDD) products for HVAC and refrigeration systems? How would you characterize your awareness?
8. Have you ever asked your mechanical contractor, HVAC controls company, or energy consultant about FDD products?
9. Has anyone approached you to purchase FDD products? Who are the parties that get involved in these activities?
10. Do you think utility outreach efforts could help to spread the adoption and use of FDD products?
Section 3: Customer attitudes toward fault detection and diagnostics (FDD) products
11. What are your key considerations for purchasing and using FDD products? What are you trying to improve or gain? Select all that apply.
12. Would any of the following motivate you to adopt FDD products, beyond the key considerations you identified in the last question?
13. Would including an installation and support option from an FDD vendor in the purchase price encourage you to adopt FDD products?
14. Do you own a building that has tenants that pay their own utility bills?
15. Do you think the long-term financing option for adopting FDD will help to increase adoption and use of FDD products?
16. In your opinion what is the most significant activity that utility energy efficiency programs could do to encourage businesses to purchase and install FDD products?
17. Who is the right person in your business to contact for doing an FDD project? Who is the final decision maker for investments in the building?
18. Rank the following information barriers to the adoption of FDD products, from most significant to least significant.
19. Rank the following value proposition barriers to the adoption of FDD products, from most significant to least significant.
20. Rank the following operational barriers to the adoption of FDD products, from most significant to least significant.
21. Rank the following technical barriers to the adoption of FDD products, from most significant to least significant.
22. How could these barriers or disincentives be removed or minimized?
23. Do you have a negative perception of FDD products? Are you unlikely to investigate or adopt FDD products?
Section 4: Respondent characteristics
24. Which of the following best describes your industry type? Select all that apply.
25. Which of the following categories best explains your role with respect to FDD products? Select all that apply.
26. What is your largest building size?
27. What is your job title?

The goal of the process study is to record the current process for installing FDD products in the field. The research team monitored the major installation steps and recorded what went well and where issues or problems arose during each FDD installation. Table 11 lists 23 major steps that occurred in some or all of the installations. Typical steps, times, and issues will be reported in the process study, along with the variance between installations.

Table 11

Common process steps for process evaluation methodology

1. Select the buildings and units for FDD to be installed
2. Determine FDD tool selection criteria: hardware-software, others
3. Find FDD vendors and tools that meet selection criteria
4. Contact FDD vendors and request quotation/proposal, request demos and case studies
5. Collect the building and unit information that vendors require and send
6. Receive quotation/proposal from FDD vendor
7. Evaluate for information security and obtain permission from IT and facilities to connect and install tool
8. Order FDD tool
9. Confirm delivery date
10. Receive and inspect hardware
11. Receive installation instructions
12. Coordinate contractor to install hardware and schedule installation date
13. Coordinate installation date with people internal to organization
14. Observe or oversee hardware installation, noting any issues or problems
15. Resolve any installation, connection, or integration problems
16. Return and exchange hardware items if needed
17. Receive notification that installation is complete and working
18. Request and receive FDD portal login information from vendor or technician
19. Obtain technical support and request and receive FDD portal use information
20. Schedule and attend a demonstration or training for portal use
21. Add, modify, or delete the original rules
22. Request and receive a report on baseline unit performance or self-create
23. Plan actions based upon baseline unit performance
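The per-step times recorded during the installations feed directly into the variance reporting described above. A minimal sketch of that tabulation, with step names drawn from Table 11 and purely hypothetical durations:

```python
import statistics

# Hypothetical durations (hours) for three of the 23 process steps,
# one value per monitored installation -- illustrative numbers only
step_hours = {
    "8  Order FDD tool":                [1.0, 2.0, 1.5],
    "14 Oversee hardware installation": [6.0, 9.0, 4.5],
    "15 Resolve connection problems":   [2.0, 8.0, 0.5],
}

# Report mean and sample variance of each step's duration across installations
for step, hours in step_hours.items():
    print(f"{step}: mean {statistics.mean(hours):.1f} h, "
          f"variance {statistics.variance(hours):.1f} h^2")
```

High-variance steps (here, resolving connection problems) are exactly the ones the process study flags as candidates for standardization or better vendor documentation.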

References

1. Goetzler, W., Shandross, R., Young, J., Petritchenko, O., Ringo, D., and McClive, S., 2017, "Energy Savings Potential and RD&D Opportunities for Commercial Building HVAC Systems," Report No. DOE/EE-1703, Navigant Consulting, Burlington, MA, Report Submitted to U.S. Department of Energy, Energy Efficiency and Renewable Energy Building Technologies Program.
2. Goetzler, W., Goffri, S., Jasinski, S., Legett, R., Lisle, H., Marantan, A., Millard, M., Pinault, D., Westphalen, D., and Zogg, R., 2009, "Energy Savings Potential and R&D Opportunities for Commercial Refrigeration," Navigant Consulting Inc., Chicago, IL, Report Submitted to U.S. Department of Energy, Energy Efficiency and Renewable Energy Building Technologies Program.
3. Chen, B., and Braun, J. E., 2000, "Simple Fault Detection and Diagnosis Methods for Packaged Air Conditioners," International Refrigeration and Air Conditioning Conference, Purdue University, West Lafayette, IN, July 25–28, Paper No. 498.
4. Kim, M., Payne, W. V., Domanski, P. A., Yoon, S. H., and Hermes, C. J. L., 2009, "Performance of a Residential Heat Pump Operating in the Cooling Mode With Single Faults Imposed," Appl. Therm. Eng., 29(4), pp. 770–778. 10.1016/j.applthermaleng.2008.04.009
5. Smith, V., 2006, "Advanced Automated HVAC Fault Detection and Diagnostics Commercialization Program Final Report: Project 3," Technical Report, Architectural Energy Corporation, Sacramento, CA.
6. Taasevigen, D., Brambley, M. R., Huang, Y., Lutes, R., and Gilbride, S., 2015, "Field Testing and Demonstration of the Smart Monitoring and Diagnostic System (SMDS) for Packaged Air Conditioners and Heat Pumps," Report No. PNNL-24000, Pacific Northwest National Laboratory (PNNL), Richland, WA.
7. Katipamula, S., Kim, W., Lutes, R., and Underhill, R., 2015, "Rooftop Unit Embedded Diagnostics: Automated Fault Detection and Diagnostics (AFDD) Development, Field Testing and Validation," Report No. PNNL-23790, Pacific Northwest National Laboratory (PNNL), Richland, WA.
8. Wang, J., Gorbounov, M., Yasar, M., Reeve, H., Hjortland, A. L., and Braun, J. E., 2016, "Lab and Field Evaluation of Fault Detection and Diagnostics for Advanced Roof Top Unit," International Refrigeration and Air Conditioning Conference, West Lafayette, IN, July 11–14, Paper No. 1590.
9. Sinopoli, J., "FDD Going Mainstream? Whose Fault Is It?," http://www.automatedbuildings.com/news/apr10/articles/sinopoli/100329091909sinopoli.htm
10. Western HVAC Performance Alliance, 2013, "Onboard and In-Field Fault Detection and Diagnostics—Industry Roadmap."
11. Braun, J., and Yuill, D., 2013, FDD Evaluator 0.1, Version 0.1, Ray W. Herrick Laboratories, Purdue University, West Lafayette, IN.
12. Yuill, D. P., and Braun, J. E., 2013, "Evaluating the Performance of Fault Detection and Diagnostics Protocols Applied to Air-Cooled Unitary Air-Conditioning Equipment," HVAC&R Res., 19(7), pp. 882–891. 10.1080/10789669.2013.808135
13. Wall, J., and Guo, Y., 2018, "Evaluation of Next-Generation Automated Fault Detection & Diagnostics (FDD) Tools for Commercial Building Energy Efficiency—Part I: FDD Case Studies in Australia."
14. Dexter, A., and Pakanen, J., 2001, "Demonstrating Automated Fault Detection and Diagnosis Methods in Real Buildings," IEA Annex 34 Final Report, Technical Research Centre of Finland.
15. Jagpal, R., 2006, "Technical Synthesis Report, IEA Annex 34: Computer Aided Evaluation of HVAC System Performance."
16. Mehrabi, M., and Yuill, D., 2018, "Development of a Method for Testing Air-Side Fouling Effects on Outdoor Heat Exchangers (RP-1705)," 2018 ASHRAE Winter Conference, Chicago, IL, Jan. 22, Paper No. CH-18-C033.
17. Mehrabi, M., and Yuill, D., 2017, "Generalized Effects of Refrigerant Charge on Normalized Performance Variables of Air Conditioners and Heat Pumps," Int. J. Refrig., 76, pp. 367–384. 10.1016/j.ijrefrig.2017.02.014
18. Mehrabi, M., and Yuill, D., 2018, "Generalized Effects of Faults on Normalized Performance Variables of Air Conditioners and Heat Pumps," Int. J. Refrig., 85, pp. 409–430. 10.1016/j.ijrefrig.2017.10.017
19. Reddy, T. A., 2007, "Formulation of a Generic Methodology for Assessing FDD Methods and Its Specific Adoption to Large Chillers," ASHRAE Trans., 113(2), pp. 334–342.
20. Frank, S., Lin, G., Jin, X., Singla, R., Farthing, A., and Granderson, J., 2019, "A Performance Evaluation Framework for Building Fault Detection and Diagnosis Algorithms," Energy Build., 192, pp. 84–92. 10.1016/j.enbuild.2019.03.024
21. ASHRAE, 2017, ASHRAE Handbook—Fundamentals, ASHRAE, Atlanta, GA.
22. "REFPROP | NIST," https://www.nist.gov/srd/refprop, Accessed February 2, 2020.
23. Li, H., and Braun, J., 2009, "Development, Evaluation, and Demonstration of a Virtual Refrigerant Charge Sensor," HVAC&R Res., 15(1), pp. 117–136. 10.1080/10789669.2009.10390828
24. Kim, W., and Braun, J. E., 2015, "Extension of a Virtual Refrigerant Charge Sensor," Int. J. Refrig., 55, pp. 224–235. 10.1016/j.ijrefrig.2014.09.015
25. AHRI, 2008, AHRI Standard 210/240, Performance Rating of Unitary Air-Conditioning & Air-Source Heat Pump Equipment, Air-Conditioning, Heating, and Refrigeration Institute.
26. Applied Public Policy Research Institute for Study and Evaluation, 2015, "Connecticut C11: Barriers to Commercial and Industrial Program Participation With a Focus on Financing and Cancellations," Connecticut.
27. Opinion Dynamics Corporation, 2010, "Market Awareness Report for the Connecticut Energy Efficiency Fund," Waltham, MA.
28. NMR Group Inc., 2014, "Connecticut Ground Source Heat Pump Impact Evaluation & Market Assessment," Report Submitted to Connecticut Energy Efficiency Board and Connecticut Clean Energy Finance and Investment Authority, Somerville, MA.
29. Granderson, J., Lin, G., Singla, R., Mayhorn, E., and Ehrlich, P., 2018, "Commercial Fault Detection and Diagnostics Tools: What They Offer, How They Differ, and What's Still Needed," Lawrence Berkeley National Laboratory and Pacific Northwest National Laboratory, August 2018.
30. Granderson, J., Singla, R., Mayhorn, E., Ehrlich, P., Vrabie, D., and Frank, S., 2017, "Characterization and Survey of Automated Fault Detection and Diagnostic Tools," Lawrence Berkeley National Laboratory, November 2017, Report No. LBNL-2001075.
31. EMI Consulting, 2015, "C17: Connecticut Commercial & Industrial (C&I) Market Research," Report Submitted to CT Energy Efficiency Board, Evaluation Committee, Seattle, WA.
32. NMR Group Inc., 2012, "Connecticut Efficient Lighting Saturation and Market Assessment," Report Submitted to the Connecticut Energy Efficiency Fund, Connecticut Light and Power, and the United Illuminating Company, Somerville, MA.