Annex F Availability and Reliability for Critical Operations Power Systems; and Development and Implementation of Functional Performance Tests (FPTs) for Critical Operations Power Systems

This informative annex is not a part of the requirements of this NFPA document but is included for informational purposes only.

  1. Availability and Reliability for Critical Operations Power Systems. Critical operations power systems may support facilities with a variety of objectives that are vital to public safety. Often these objectives are of such critical importance that system downtime is costly in terms of economic losses, loss of security, or loss of mission. For those reasons, the availability of the critical operations power system, the percentage of time that the system is in service, is important to those facilities. Given a specified level of availability, the reliability and maintainability requirements are then derived based on that availability requirement.
    Availability. Availability is defined as the percentage of time that a system is available to perform its function(s). Availability is measured in a variety of ways, including the following:

    Availability = MTBF / (MTBF + MTTR)

    where:
    MTBF = mean time between failures
    MTTF = mean time to failure
    MTTR = mean time to repair

    See the following table for an example of how to establish required availability for critical operations power systems:

    Availability    Hours of Downtime*
    0.9             876
    0.99            87.6
    0.999           8.76
    0.9999          0.876
    0.99999         0.0876
    0.999999        0.00876
    0.9999999       0.000876
    *Based on a year of 8760 hours.
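    The downtime figures in the table follow directly from the availability formula and the 8760-hour year. The following is a minimal sketch of that arithmetic in Python; the MTBF and MTTR values are hypothetical and chosen only for illustration.

```python
# Availability from MTBF and MTTR, and the annual downtime it implies.
# The MTBF/MTTR figures below are hypothetical illustration values.

HOURS_PER_YEAR = 8760  # the table above assumes a year of 8760 hours

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Availability = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def annual_downtime_hours(avail: float) -> float:
    """Expected hours per year the system is not in service."""
    return (1.0 - avail) * HOURS_PER_YEAR

if __name__ == "__main__":
    mtbf = 4000.0   # hypothetical mean time between failures, hours
    mttr = 4.0      # hypothetical mean time to repair, hours
    a = availability(mtbf, mttr)
    print(f"Availability: {a:.5f}")                               # ~0.99900
    print(f"Downtime:     {annual_downtime_hours(a):.2f} h/yr")   # ~8.75 h/yr

    # Reproducing rows of the table: each availability level and its downtime
    for level in (0.9, 0.99, 0.999, 0.9999, 0.99999):
        print(f"{level:<10} {annual_downtime_hours(level):.4f} h/yr")
```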
    Availability of a system in actual operations is determined by the following:
    1. The frequency of occurrence of failures. Failures may prevent the system from performing its function or may cause a degraded effect on system operation. Frequency of failures is directly related to the system's level of reliability.
    2. The time required to restore operations following a system failure or the time required to perform maintenance to prevent a failure. These times are determined in part by the system's level of maintainability.
    3. The logistics provided to support maintenance of the system. The number and availability of spares, maintenance personnel, and other logistics resources (refueling, etc.), combined with the system's level of maintainability, determine the total downtime following a system failure.
    Reliability. Reliability is concerned with the probability and frequency of failures (or lack of failures). A commonly used measure of reliability for repairable systems is MTBF. The equivalent measure for nonrepairable items is MTTF. Reliability is more accurately expressed as a probability over a given duration of time, cycles, or other parameter. For example, the reliability of a power plant might be stated as a 95 percent probability of no failure over a 1000-hour operating period while generating a certain level of power. Reliability is usually defined in two ways (the electrical power industry has historically not used these definitions):
    1. The duration or probability of failure-free performance under stated conditions
    2. The probability that an item can perform its intended function for a specified interval under stated conditions [For nonredundant items, this is equivalent to the preceding definition (1). For redundant items, this is equivalent to the definition of mission reliability.]
    Maintainability. Maintainability is a measure of how quickly and economically failures can be prevented through preventive maintenance, or system operation can be restored following failure through corrective maintenance. A commonly used measure of maintainability in terms of corrective maintenance is the mean time to repair (MTTR). Maintainability is not the same thing as maintenance. It is a design parameter, while maintenance consists of actions to correct or prevent a failure event.
    Improving Availability. The appropriate methods to use for improving availability depend on whether the facility is being designed or is already in use. For both cases, a reliability/availability analysis should be performed to determine the availability of the old system or proposed new system in order to ascertain the hours of downtime (see the preceding table). The AHJ or government agency should dictate how much downtime is acceptable.
    Existing facilities: For a facility that is being operated, two basic methods are available for improving availability when the current level of availability is unacceptable: (1) selectively adding redundant units (e.g., generators, chillers, fuel supply) to eliminate sources of single-point failure, and (2) optimizing maintenance using a reliability-centered maintenance (RCM) approach to minimize downtime. [Refer to NFPA 70B-2010, Recommended Practice for Electrical Equipment Maintenance.] A combination of the previous two methods can also be implemented. A third, very expensive, method is to redesign subsystems or to replace components and subsystems with higher reliability items. [Refer to NFPA 70B.]
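    As a rough illustration of the first method, the sketch below estimates how availability improves when redundant units are placed in parallel. It assumes independent, identical units, any one of which can carry the load; those assumptions are not stated in this annex, and real installations (shared fuel supplies, common switchgear, correlated failures) rarely satisfy them exactly.

```python
# Rough estimate of availability gained by adding redundant parallel units.
# Assumes independent, identical units, any one of which can carry the load.
# These are simplifying assumptions for illustration, not annex requirements.

HOURS_PER_YEAR = 8760

def parallel_availability(unit_availability: float, n_units: int) -> float:
    """The system is down only if every redundant unit is down at once."""
    return 1.0 - (1.0 - unit_availability) ** n_units

if __name__ == "__main__":
    unit_a = 0.99  # hypothetical availability of a single generator
    for n in (1, 2, 3):
        a = parallel_availability(unit_a, n)
        downtime = (1.0 - a) * HOURS_PER_YEAR
        print(f"{n} unit(s): availability {a:.6f}, downtime {downtime:.5f} h/yr")
    # 1 unit : 0.990000 availability, about 87.6 h/yr downtime
    # 2 units: 0.999900 availability, about 0.876 h/yr downtime
    # 3 units: 0.999999 availability, about 0.00876 h/yr downtime
```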
    New facilities: The opportunity for high availability and reliability is greatest when designing a new facility. By applying an effective reliability strategy, designing for maintainability, and ensuring that manufacturing and commissioning do not negatively affect the inherent levels of reliability and maintainability, a highly available facility will result. The approach should be as follows:
    1. Develop and determine a reliability strategy (establish goals, develop a system model, design for reliability, conduct reliability development testing, conduct reliability acceptance testing, design system delivery, maintain design reliability, maintain design reliability in operation).
    2. Develop a reliability program. This is the application of the reliability strategy to a specific system, process, or function. Each step in the preceding strategy requires the selection and use of specific methods and tools. For example, various tools can be used to develop requirements or to evaluate potential failures. To derive requirements, analytical models can be used, for example, quality function deployment (a technique for deriving more detailed, lower-level requirements from one level to another, beginning with mission requirements, i.e., customer needs). This model was developed as part of the total quality management movement. Parametric models can also be used to derive design values of reliability from operational values and vice versa. Analytical methods include, but are not limited to, thermal analysis, durability analysis, and predictions. Finally, one should evaluate possible failures. A failure modes, effects, and criticality analysis (FMECA) and a fault tree analysis (FTA) are two methods for evaluating possible failures. The mission facility engineer should determine which method to use or whether to use both.
    3. Identify Reliability Requirements. The entire effort of designing for reliability begins with identifying the mission-critical facility's reliability requirements. These requirements are stated in a variety of ways, depending on the customer and the specific system. For a mission-critical facility, it would be the mission success probability.
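    For a rough sense of how a mission success probability relates to the MTBF figures used earlier, the sketch below evaluates reliability over a mission duration assuming a constant failure rate (an exponential model). That model is an illustrative assumption, not something this annex prescribes; the 1000-hour case mirrors the power plant example given above.

```python
# Reliability over a mission duration, assuming a constant failure rate
# (exponential model). The model choice is an illustrative assumption only.
import math

def mission_reliability(mtbf_hours: float, mission_hours: float) -> float:
    """Probability of no failure during the mission: R(t) = exp(-t / MTBF)."""
    return math.exp(-mission_hours / mtbf_hours)

def required_mtbf(target_reliability: float, mission_hours: float) -> float:
    """MTBF needed to meet a target mission success probability."""
    return -mission_hours / math.log(target_reliability)

if __name__ == "__main__":
    # What MTBF gives a 95 percent chance of no failure over 1000 hours?
    print(f"Required MTBF: {required_mtbf(0.95, 1000):.0f} h")       # ~19496 h
    # Conversely, the mission reliability achieved by a 20,000-hour MTBF
    print(f"R(1000 h):     {mission_reliability(20000, 1000):.4f}")  # ~0.9512
```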
  2. Development and Implementation of Functional Performance Tests (FPTs) for Critical Operations Power Systems
    Development of FPTs
    1. Submit Functional Performance Tests (FPTs). System/component tests, or FPTs, are developed from submitted drawings, systems operating documents (SODs), and systems operation and maintenance manuals (SOMMs), including large-component testing (e.g., transformers, cable, generators, UPS) and how the components operate as part of the total system. The commissioning authority develops the tests and cannot be the installation contractor (or subcontractor).
    As the equipment, components, and systems are installed, quality assurance procedures are administered to verify that components are installed in accordance with minimum manufacturers' recommendations, safety codes, and acceptable installation practices. Quality assurance discrepancies are then identified and added to a "commissioning action list" that must be rectified as part of the commissioning program. These items would usually be discussed during commissioning meetings. Discrepancies are usually identified initially by visual inspection.
    2. Review FPTs. The tests must be reviewed by the customer, electrical contractors, quality assurance personnel, maintenance personnel, and other key personnel (the commissioning team). Areas of concern include, among others, whether all functions of the system are being tested, whether all major components are included, whether the tests reflect the system operating documents, and verification that the tests make sense.
    3. Make Changes to FPTs as Required. The commissioning authority then implements the corrections, answers the questions raised, and incorporates the additions.
    4. FPTs Approval. After the changes are made to the FPTs, they are resubmitted to the commissioning team. When they are acceptable, the customer or the designated approval authority approves the FPTs. It should be noted that even though the FPTs are approved, problems that arise during the tests (or areas not covered) must still be addressed.
    Testing Implementation for FPTs. The final step in a successful commissioning plan is testing and the proper execution of system-integrated tests.
    1. Systems Ready to Operate. The FPTs can be implemented as various systems become operative (e.g., a test for the generator system) or when the entire system is installed. However, the final "pull the plug" test is performed only after all systems are completely installed. If the electrical contractor (or subcontractor) implements the FPTs, a witness must initial each step of the test. The electrical contractor cannot employ the witness directly or indirectly.
    2. Perform Tests (FPTs). If the system fails a test, the problem must be resolved and the equipment or system retested, or the testing requirements re-analyzed, until successful tests are witnessed. Once the system or equipment passes testing, it is verified by the designated commissioning official.
    3. Customer Receives System. After all tests are completed (including the “pull the plug” test), the system is turned over to the customer.
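    The witnessing and retest requirements above lend themselves to a simple structured test log. The sketch below is one hypothetical way to record FPT steps, results, and witness initials; nothing in this annex prescribes a particular record format, and the system and step names shown are invented for illustration.

```python
# Hypothetical record-keeping for FPT execution: each step carries a result
# and the initials of an independent witness. Not a format prescribed by
# this annex; shown only to illustrate the workflow described above.
from dataclasses import dataclass, field

@dataclass
class FPTStep:
    description: str
    passed: bool = False
    witness_initials: str = ""   # witness must be independent of the contractor
    attempts: int = 0

@dataclass
class FunctionalPerformanceTest:
    system: str
    steps: list[FPTStep] = field(default_factory=list)

    def record(self, index: int, passed: bool, witness_initials: str) -> None:
        step = self.steps[index]
        step.attempts += 1
        step.passed = passed
        step.witness_initials = witness_initials

    def ready_for_turnover(self) -> bool:
        """All steps passed and every step was initialed by a witness."""
        return all(s.passed and s.witness_initials for s in self.steps)

if __name__ == "__main__":
    fpt = FunctionalPerformanceTest(
        system="Generator system",
        steps=[FPTStep("Start on loss of normal power"),
               FPTStep("Carry critical load for the specified duration")],
    )
    fpt.record(0, passed=True, witness_initials="JQ")
    fpt.record(1, passed=False, witness_initials="JQ")  # failed: resolve, retest
    fpt.record(1, passed=True, witness_initials="JQ")
    print("Ready for turnover:", fpt.ready_for_turnover())  # True
```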