<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE rfc [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">
]>
<?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
<!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.38 (Ruby 3.0.2) -->
<rfc xmlns:xi="http://www.w3.org/2001/XInclude" ipr="trust200902" docName="draft-ietf-cats-metric-definition-08" category="std" submissionType="IETF" tocInclude="true" sortRefs="true" symRefs="true" version="3">
  <!-- xml2rfc v2v3 conversion 3.33.0 -->
  <front>
    <title abbrev="CATS Metrics">CATS Metrics Definition</title>
    <seriesInfo name="Internet-Draft" value="draft-ietf-cats-metric-definition-08"/>
    <author initials="K." surname="Yao" fullname="Kehan Yao">
      <organization>China Mobile</organization>
      <address>
        <postal>
          <country>China</country>
        </postal>
        <email>yaokehan@chinamobile.com</email>
      </address>
    </author>
    <author initials="C." surname="Li" fullname="Cheng Li">
      <organization>Huawei Technologies</organization>
      <address>
        <postal>
          <country>China</country>
        </postal>
        <email>c.l@huawei.com</email>
      </address>
    </author>
    <author initials="L. M." surname="Contreras" fullname="Luis M. Contreras">
      <organization>Telefonica</organization>
      <address>
        <email>luismiguel.contrerasmurillo@telefonica.com</email>
      </address>
    </author>
    <author initials="J." surname="Ros-Giralt" fullname="Jordi Ros-Giralt">
      <organization>Qualcomm Europe, Inc.</organization>
      <address>
        <email>jros@qti.qualcomm.com</email>
      </address>
    </author>
    <author initials="G." surname="Zeng" fullname="Guanming Zeng">
      <organization>Huawei Technologies</organization>
      <address>
        <postal>
          <country>China</country>
        </postal>
        <email>zengguanming@huawei.com</email>
      </address>
    </author>
    <date year="2026" month="May" day="15"/>
    <area>Routing</area>
    <workgroup>Computing-Aware Traffic Steering</workgroup>
    <keyword>CATS, metrics</keyword>
    <abstract>
      <?line 99?>

<t>Computing-Aware Traffic Steering (CATS) is a traffic engineering approach that optimizes the steering of traffic to a service instance by considering the dynamic state of computing and network resources. To
enable such decisions, CATS components exchange metrics that describe resource conditions affecting service instance selection. This document focuses on compute and communication metrics for CATS and defines a
hierarchical abstraction of these metrics to improve interoperability, scalability, and operational simplicity. It does not aim to standardize raw infrastructure (Level 0) metrics; instead, it specifies higher-level representations that can be derived from raw measurements using aggregation and normalization functions.</t>
    </abstract>
    <note removeInRFC="true">
      <name>Discussion Venues</name>
      <t>Discussion of this document takes place on the
    Computing-Aware Traffic Steering Working Group mailing list (cats@ietf.org),
    which is archived at <eref target="https://mailarchive.ietf.org/arch/browse/cats/"/>.</t>
      <t>Source for this draft and an issue tracker can be found at
    <eref target="https://github.com/VMatrix1900/draft-cats-metric-definition"/>.</t>
    </note>
  </front>
  <middle>
    <?line 105?>

<section anchor="introduction">
      <name>Introduction</name>
      <t>Service providers are deploying computing capabilities across the network for hosting applications such as distributed AI workloads, AR/VR and driverless vehicles, among others. In these deployments, multiple service instances are replicated across various sites to ensure sufficient capacity for maintaining the required Quality of Experience (QoE) expected by the application. To support the selection of these instances, a framework called Computing-Aware Traffic Steering (CATS) is introduced in <xref target="I-D.ietf-cats-framework"/>.</t>
      <t>CATS is a traffic engineering approach that optimizes the steering of traffic to a given service instance by considering the dynamic nature of computing and network resources. To achieve this, CATS components require performance metrics for both communication and compute resources. Since these resources are deployed by multiple providers, standardized metrics are essential to ensure interoperability and enable precise traffic steering decisions, thereby optimizing resource utilization and enhancing overall system performance.</t>
      <t>There are already well-defined network metrics for traffic steering, such as Traffic Engineering (TE) metrics and IGP metrics (e.g., link delay, link delay variation) <xref target="RFC7471"/>, which have long been in use in network systems. In the context of CATS, computing metrics need to be introduced to enable joint TE decisions. <xref target="DMTF"/> defines some fine-grained computing metrics, such as CPU utilization, but using such fine-grained metrics directly does not scale.</t>
      <t>This document does not attempt to standardize low-level fine-grained performance metrics. Instead, it organizes computing and communication metrics into three abstraction levels and defines a metric framework based on aggregation and normalization functions. The framework specifies four categories of Level 1 metrics and a normalized Level 2 metric, balancing metric expressiveness with scalability and ease of use.</t>
    </section>
    <section anchor="conventions-and-definitions">
      <name>Conventions and Definitions</name>
      <t>This document uses the following terms defined in <xref target="I-D.ietf-cats-framework"/>:</t>
      <ul spacing="normal">
        <li>
          <t>Computing-Aware Traffic Steering (CATS)</t>
        </li>
        <li>
          <t>Service</t>
        </li>
        <li>
          <t>Service site</t>
        </li>
        <li>
          <t>Service contact instance</t>
        </li>
        <li>
          <t>CATS Service Contact Instance ID (CSCI-ID)</t>
        </li>
        <li>
          <t>CATS Service Metric Agent (C-SMA)</t>
        </li>
        <li>
          <t>CATS Network Metric Agent (C-NMA)</t>
        </li>
        <li>
          <t>CATS Path Selector (C-PS)</t>
        </li>
      </ul>
      <t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they appear in all capitals, as shown here.</t>
    </section>
    <section anchor="design-principles">
      <name>Design Principles</name>
      <section anchor="three-level-metrics">
        <name>Three-Level Metrics</name>
        <t>As outlined in <xref target="I-D.ietf-cats-usecases-requirements"/>, the resource model that defines CATS metrics MUST be scalable, ensuring that its implementation remains within a reasonable and sustainable cost. To that end, a CATS system should select the most appropriate metrics for instance selection, recognizing that different metrics may influence outcomes in distinct ways depending on the specific use case.</t>
        <t>Defining metrics requires carefully balancing multiple considerations, including metric diversity, granularity, and rate of change (e.g., update frequency or advertisement churn). An excessive number of
metrics, overly fine granularity, or high update frequency can lead to significant signaling overhead, reducing scalability of the metric distribution protocol. In contrast, metrics that are too few, too
coarse-grained, or updated too infrequently may fail to provide sufficient information to support effective operational decisions.</t>
        <t>Conceptually, it is necessary to define at least two fundamental levels of metrics: one comprising all raw metrics, and the other representing a simplified form---consisting of a single value that encapsulates the overall capability of a service instance.</t>
        <t>However, such a definition may reduce implementation flexibility across diverse CATS use cases. Implementers typically seek balanced approaches that carefully manage trade-offs among encoding complexity, accuracy, scalability, and extensibility.</t>
        <t>To ensure scalability while providing sufficient detail for effective decision-making, this document provides a definition of metrics that incorporates three levels of abstraction:</t>
        <ul spacing="normal">
          <li>
            <t><strong>Level 0: Raw metrics.</strong> These metrics are presented without abstraction, with each metric using its own unit and format as defined by the underlying resource.</t>
          </li>
          <li>
            <t><strong>Level 1: Metrics combined into categories.</strong> These metrics are derived from Level 0 metrics by applying aggregation functions and, optionally, normalization functions to form category-specific metrics, such as computing and communication.</t>
          </li>
          <li>
            <t><strong>Level 2: A single normalized metric.</strong> This metric is computed by aggregating lower-level metrics (Level 0
or Level 1) and applying normalization to produce a single, unitless Level 2 score within a defined range.</t>
          </li>
        </ul>
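As a concrete illustration of how the three levels relate, the following sketch derives a per-category Level 1 value and a single Level 2 score from hypothetical Level 0 readings. All metric names, bounds, and the choice of functions are assumptions for illustration; aggregation and normalization functions remain implementation-specific, as described later in this document.

```python
# Illustrative sketch only: the input readings, bounds, and the 0-10
# scale follow the conventions of this document, but the specific
# functions are assumptions, not a normative algorithm.

def aggregate_mean(values):
    """Level 0 -> Level 1: combine raw metrics of one category."""
    return sum(values) / len(values)

def normalize_min_max(value, lo, hi, scale=10):
    """Map an aggregated value onto a unitless 0..scale score."""
    value = min(max(value, lo), hi)        # clamp into [lo, hi]
    return round((value - lo) / (hi - lo) * scale)

# Level 0: raw, unit-bearing metrics (hypothetical readings)
cpu_util = [0.42, 0.55, 0.61]        # per-core utilization (ratio)
link_delay_ms = [12.0, 15.5]         # per-link one-way delay

# Level 1: one value per category, still unit-bearing here
l1_computing = aggregate_mean(cpu_util)            # ratio
l1_communication = aggregate_mean(link_delay_ms)   # milliseconds

# Level 2: one unitless score aggregated across categories.
# Lower utilization and lower delay are "better", so invert the scale.
score_comp = 10 - normalize_min_max(l1_computing, 0.0, 1.0)
score_comm = 10 - normalize_min_max(l1_communication, 0.0, 50.0)
l2_score = round((score_comp + score_comm) / 2)
```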
      </section>
      <section anchor="level-0-raw-metrics">
        <name>Level 0: Raw Metrics</name>
        <t>Level 0 metrics represent detailed, raw measurements collected from
underlying resources. These metrics are typically service-specific and
are not abstracted.</t>
        <t>Examples of Level 0 metrics include, but are not limited to:</t>
        <ul spacing="normal">
          <li>
            <t><strong>CPU:</strong> Base frequency, boosted frequency, number of cores, core
utilization, memory bandwidth, memory capacity, memory utilization,
and power consumption.</t>
          </li>
          <li>
            <t><strong>GPU:</strong> Frequency, number of processing units, memory bandwidth,
memory capacity, memory utilization, core utilization, and power
consumption.</t>
          </li>
          <li>
            <t><strong>NPU:</strong> Computational capacity, utilization, and power consumption.</t>
          </li>
          <li>
            <t><strong>Communication:</strong> Throughput, bandwidth, link utilization, packet
loss, delay, jitter, traffic counters (bytes and packets), and other
network performance indicators.</t>
          </li>
          <li>
            <t><strong>Storage:</strong> Available capacity, read throughput, and write throughput.</t>
          </li>
          <li>
            <t><strong>Service-specific metrics:</strong> Request rate (e.g., requests per second),
output rate (e.g., tokens per second), and other application-level
performance indicators.</t>
          </li>
        </ul>
        <t>Level 0 metrics serve as the foundational inputs for the metric
hierarchy. Some metrics are derived from monitoring systems (e.g.,
telemetry or counters), others reflect dynamic runtime state, and
others may correspond to relatively static properties of the underlying
infrastructure. These metrics provide the basic information required to
derive higher-level metrics, as described in the following sections.</t>
        <t>Level 0 metrics can be encoded and exposed using an Application Programming Interface (API), such as a RESTful API, and can be technology- and implementation-specific. Different resources can have their own metrics, each conveying unique information about their status. These metrics can generally have units, such as bits per second (bps) or floating-point operations per second (FLOPS), or be unitless, such as CPU utilization.</t>
        <t>As examples, <xref target="RFC8911"/> and <xref target="RFC8912"/> define various network performance
metrics and their associated registries, while <xref target="DMTF"/> defines a
set of computing metrics. These Level 0 metrics are not standardized in
this document; rather, they serve as foundational inputs that can be used
within CATS to derive higher-level metrics.</t>
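A RESTful exposure of Level 0 metrics might look like the following sketch. Every field name, grouping, and unit in this payload is an assumption for illustration only; this document does not define any Level 0 encoding.

```python
import json

# Hypothetical Level 0 snapshot as a RESTful endpoint might expose it.
# All names and units below are illustrative assumptions.
level0_snapshot = {
    "cpu": {"cores": 16, "core_utilization_pct": 37.5,
            "base_frequency_mhz": 2400},
    "gpu": {"memory_capacity_gb": 24, "memory_utilization_pct": 81.0},
    "network": {"link_utilization_pct": 12.0, "delay_ms": 4.2},
    "service": {"requests_per_second": 950},
}

# Serialize for transport; a CATS agent would consume this as input
# to the aggregation and normalization functions described later.
payload = json.dumps(level0_snapshot, indent=2)
```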
      </section>
      <section anchor="level-1-metrics-combined-in-categories">
        <name>Level 1: Metrics Combined in Categories</name>
        <t>Level 1 metrics are grouped into four categories: computing, communication, service, and composed, with the possibility of additional categories being defined in future specifications. For each category, a single Level 1 metric is derived through an aggregation function and, when appropriate, further normalized to
yield a unitless score reflecting the performance of the underlying resources. The Level 1 categories are described as follows:</t>
        <ul spacing="normal">
          <li>
            <t><strong>Computing:</strong> A value derived from aggregating one or more computing-related Level 0 metrics, such as CPU, GPU, and NPU utilization.</t>
          </li>
          <li>
            <t><strong>Communication:</strong> A value derived from aggregating one or more communication-related Level 0 metrics, such as communication throughput.</t>
          </li>
          <li>
            <t><strong>Service:</strong> A value derived from aggregating one or more service-related Level 0 metrics, such as tokens per second and service availability.</t>
          </li>
          <li>
            <t><strong>Composed:</strong> A value derived from aggregating a combination of computing, communication, and service metrics.</t>
          </li>
        </ul>
        <t>Refer to <xref target="aggregation-function"/> and <xref target="normalization-function"/> for the definitions and examples of aggregation functions and normalization functions, respectively. Refer to <xref target="score-meaning"/> for the default policies and guidance provided to implementations.</t>
        <t>Level 1 metrics allow CATS components to focus solely on the metric categories and their simple values, thereby avoiding the need to process solution-specific Level 0 metrics.</t>
      </section>
      <section anchor="level-2-a-single-normalized-metric">
        <name>Level 2: A Single Normalized Metric</name>
        <t>The Level 2 metric is a single, normalized score derived from lower-level metrics (Level 0 and/or Level 1) through the application of aggregation and normalization functions. Different implementations
may apply different functions to characterize the overall performance of the underlying computing and communication resources. By consolidating multiple lower-level metrics into a single score, the Level 2 metric significantly reduces the complexity associated with metric collection and distribution. <xref target="score-meaning"/> further describes default policies for implementations.</t>
        <t><xref target="fig-metric-levels"/> provides a summary of the logical relationships between metrics across the three levels of abstraction.</t>
        <figure anchor="fig-metric-levels">
          <name>Logical relationship of CATS metrics across levels</name>
          <artwork><![CDATA[
                                   +--------+
              Level 2 Metric:      |   M2   |
                                   +---^----+
                                       |
                         +-------------+-----------+------------+
                         |             |           |            |
                     +---+----+        |       +---+----+   +---+----+
 Level 1 Metrics:    |  M1-1  |        |       |  M1-2  |   |  M1-3  | (...)
                     +---^----+        |       +---^----+   +----^---+
                         |             |           |             |
                    +----+---+         |       +---+----+        |
                    |        |         |       |        |        |
                 +--+---+ +--+---+ +---+--+ +--+---+ +--+---+ +--+---+
 Level 0 Metrics:| M0-1 | | M0-2 | | M0-3 | | M0-4 | | M0-5 | | M0-6 | (...)
                 +------+ +------+ +------+ +------+ +------+ +------+

]]></artwork>
        </figure>
      </section>
    </section>
    <section anchor="cats-metrics-framework-and-specification">
      <name>CATS Metrics Framework and Specification</name>
      <t>The CATS metrics framework defines how metrics are encoded and transmitted over the network. The representation should be flexible enough to accommodate various types of metrics along with their respective units and precision levels, yet simple enough to enable easy implementation and deployment across heterogeneous edge environments.</t>
      <t>The design of the CATS metrics framework is guided by the following
principles:</t>
      <ul spacing="normal">
        <li>
          <t><strong>Semantic granularity and extensibility:</strong> The framework adopts a
layered abstraction of metrics to balance expressiveness and
scalability. By organizing metrics into multiple levels of increasing
abstraction (e.g., raw, aggregated, and normalized), it enables
implementations to select the appropriate level of detail for their
use case. This approach allows fine-grained metrics to be preserved at
lower levels while exposing more compact and semantically meaningful
representations at higher levels. In addition, the layered design
supports extensibility by allowing new metrics and categories to be
introduced without disrupting existing deployments.</t>
        </li>
        <li>
          <t><strong>Interoperability and flexibility:</strong> The framework allows
implementation-specific aggregation and normalization functions to
accommodate diverse deployment scenarios and operational objectives.
At the same time, it defines common metric structures and introduces
default policies to guide interpretation, ensuring a consistent
understanding of metrics across vendors and domains. This combination
of flexibility and guidance enables interoperability while preserving
innovation and adaptability in metric computation and usage.</t>
        </li>
        <li>
          <t><strong>Metric provenance and transparency:</strong> The framework explicitly captures the
origin and context of metrics by introducing a "Source" field, following the
model defined in <xref target="RFC9439"/>. This field distinguishes whether a metric
value is derived from direct measurement, estimation, aggregation, or
normalization. By identifying the source of each metric, the framework
improves transparency and enables implementations to better assess the
reliability, accuracy, and semantics of the reported values.</t>
        </li>
      </ul>
      <section anchor="cats-metric-fields">
        <name>CATS Metric Fields</name>
        <t>Each CATS metric is expressed as a structured set of fields, with each field describing a specific property of the metric. The following definition introduces the fields used in the CATS metric representations.</t>
        <ul spacing="normal">
          <li>
            <t><strong>Metric_Type</strong>: This field specifies the category or kind of CATS metric being reported, such as computational resources, storage capacity, or network bandwidth. It acts as a label that enables network devices to identify the purpose of the metric.</t>
          </li>
          <li>
            <t><strong>Level</strong>: This field specifies the level at which the metric is measured. It is used to categorize the metric based on its granularity and scope. Although three metric levels are defined in <xref target="three-level-metrics"/>, only Level 1 and Level 2 metrics are conveyed using this representation; the field therefore takes two values: 1 for Level 1 and 2 for Level 2.</t>
          </li>
          <li>
            <t><strong>Format</strong>: This field indicates the data encoding format of the metric, such as unsigned integer (uint) or IEEE 754 floating point (ieee_754_float).</t>
          </li>
          <li>
            <t><strong>Length</strong>: This field indicates the size of the value field measured in octets (bytes). It specifies how many bytes are used to store the value of the metric. The length field is important for memory allocation and data handling, ensuring that the value is stored and retrieved correctly.</t>
          </li>
          <li>
            <t><strong>Unit</strong>: This field defines the measurement units for the metric, such as hertz (Hz) for frequency, bytes (B) for data size, or bits per seconds (bps) for data transfer rate. It is usually associated with the metric to provide context for the value.</t>
          </li>
          <li>
            <t><strong>Source</strong>: This field describes the origin of the information used to obtain the metric. This field is optional. It may include one or more of the following non-mutually exclusive values:  </t>
            <ul spacing="normal">
              <li>
                <t>'nominal'. Similar to <xref target="RFC9439"/>, a 'nominal' metric indicates that the metric value is statically configured by the underlying devices. For example, bandwidth can indicate the maximum transmission rate of the involved device.</t>
              </li>
              <li>
                <t>'estimation'. The 'estimation' source indicates that the metric value is computed through an estimation process.</t>
              </li>
              <li>
                <t>'directly measured'. This source indicates that the metric is obtained directly from the underlying device and it is not estimated.</t>
              </li>
              <li>
                <t>'normalization'. The 'normalization' source indicates that the metric value is normalized. Metrics of this type do not have units. This document specifies that the normalized value range for each metric is 0 to 10, where 0 indicates the poorest capability of the underlying resource, and 10 indicates the optimal capability.</t>
              </li>
              <li>
                <t>'aggregation'. This source indicates that the metric value is obtained by using an aggregation function.</t>
              </li>
            </ul>
            <t>
Nominal metrics have inherent physical meanings and specific units without any additional processing. Aggregated metrics may or may not have physical meanings, but they retain their significance relative to the directly measured metrics. Normalized metrics, on the other hand, might have physical meanings but lack units.</t>
          </li>
          <li>
            <t><strong>Statistics</strong>: This field provides additional details about the metrics, particularly if there is any pre-computation performed on the metrics before they are collected. This field is optional. It is useful for services that require specific statistics for service instance selection. The 'Statistics' field must be used together with the 'Measurement_Window' parameter to indicate the sampling time interval. There are four kinds of statistics:  </t>
            <ul spacing="normal">
              <li>
                <t>'max'. The maximum value of the data collected over the intervals.</t>
              </li>
              <li>
                <t>'min'. The minimum value of the data collected over the intervals.</t>
              </li>
              <li>
                <t>'mean'. The average value of the data collected over the intervals.</t>
              </li>
              <li>
                <t>'cur'. The current value of the data collected.</t>
              </li>
            </ul>
          </li>
          <li>
            <t><strong>Value</strong>: This field represents the actual numerical value of the metric being measured. It provides the specific data point for the metric in question.</t>
          </li>
        </ul>
        <t>The value assignment and encoding rules for these fields are specified in <xref target="level-metric-representations"/>.</t>
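For illustration only, the field set above could be modeled as the following record. The type names and enumerated literals here are assumptions for the sketch, not an encoding defined by this document.

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of the CATS metric field set as a record type; field names
# mirror the document's definitions, while the literal values used in
# the example instance are illustrative assumptions.

@dataclass
class CatsMetric:
    metric_type: str          # e.g., "level1_computing"
    level: int                # 1 or 2 (Level 0 metrics are not conveyed)
    fmt: str                  # e.g., "uint", "ieee_754_float"
    length: int               # size of the value field, in octets
    value: float              # the actual numerical value
    unit: Optional[str] = None        # None for unitless (normalized) metrics
    source: tuple = ()                # e.g., ("aggregation", "normalization")
    statistics: Optional[str] = None  # "max" | "min" | "mean" | "cur"

# Example instance mirroring an aggregation-derived Level 1 metric
m = CatsMetric(metric_type="level1_computing", level=1, fmt="uint",
               length=2, value=2400, unit="mhz", source=("aggregation",))
```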
      </section>
      <section anchor="aggregation-and-normalization-functions">
        <name>Aggregation and Normalization Functions</name>
        <t>In the context of CATS metric processing, aggregation and normalization are two fundamental operations that transform raw and derived metrics into forms suitable for decision-making and comparison across heterogeneous systems.</t>
        <section anchor="aggregation-function">
          <name>Aggregation</name>
          <t>Aggregation functions combine multiple values into a single representative value and can be applied at all metric levels. This document supports the spatial and temporal aggregation defined in <xref target="RFC5835"/>, and further defines cross-category aggregation, which combines metrics of different types into a single value. The following are aggregation examples supported by CATS:</t>
          <ul spacing="normal">
            <li>
              <t>Spatial or temporal aggregation of multiple metrics of the same type to produce a derived metric. In this case, because the input metrics are homogeneous, the resulting metric may retain the same units as the inputs. For example, CPU utilization measurements (expressed in percentage) collected from multiple service instances (spatial aggregation) or averaged over consecutive time intervals (temporal aggregation) can be aggregated to produce a representative CPU utilization metric. Such aggregation concepts are consistent with those described in <xref target="RFC5835"/>.</t>
            </li>
            <li>
              <t>Aggregation of multiple metrics of different types to produce a higher-level metric that captures combined behavior across resource dimensions. In this case, because the input metrics use different units, the resulting metric cannot retain physical units and must be expressed as a unitless value. For example, CPU capacity (expressed in Hz) and available memory (expressed in bytes) can be combined through aggregation to generate a single computing metric that characterizes overall processing capability.</t>
            </li>
          </ul>
          <t>Some common aggregation functions include:</t>
          <ul spacing="normal">
            <li>
              <t>Mean: Computes the arithmetic mean of a set of input values.</t>
            </li>
            <li>
              <t>Minimum / Maximum: Selects the lowest or highest value from a set of input values.</t>
            </li>
            <li>
              <t>Weighted average: Computes an average by applying weights to individual values according to their relative importance or priority.</t>
            </li>
          </ul>
          <t>Aggregation functions are not standardized in this document. They are implementation-specific and controlled by operator policies.</t>
          <figure anchor="fig-agg-funct">
            <name>Aggregation function</name>
            <artwork><![CDATA[
    +-----------+     +-------------------+
    | Metric 1  |---->|                   |
    +-----------+     |    Aggregation    |     +------------+
           ...        |     Function      |---->| Metric n+1 |
    +-----------+     |                   |     +------------+
    | Metric n  |---->|                   |
    +-----------+     +-------------------+

    Input: Multiple values              Output: A single value

]]></artwork>
          </figure>
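The mean and weighted-average functions listed above can be sketched as follows. The sample values and weights are illustrative assumptions; as noted, the actual functions are implementation-specific and policy-controlled.

```python
# Sketch of two of the example aggregation functions; inputs and
# weights are illustrative assumptions only.

def weighted_average(values, weights):
    """Weight each input by its relative importance, then average."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Temporal aggregation: same type, same unit (CPU utilization in %),
# so the result retains the unit of its inputs.
cpu_samples = [40.0, 60.0, 50.0]
cpu_mean = sum(cpu_samples) / len(cpu_samples)

# The same samples with more recent readings weighted more heavily;
# the unit is still preserved because the inputs are homogeneous.
cpu_weighted = weighted_average(cpu_samples, [1, 2, 3])
```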
        </section>
        <section anchor="normalization-function">
          <name>Normalization</name>
          <t>Normalization functions convert a metric value (with or without units) into a unitless normalized score. Normalized metrics facilitate composite scoring and ranking, and can be used to produce Level 1 and Level 2 metrics. The following are normalization examples supported by CATS:</t>
          <ul spacing="normal">
            <li>
              <t>Normalizing a single Level 0 metric to generate a Level 1 or Level 2 normalized metric;</t>
            </li>
            <li>
              <t>Normalizing the output of aggregating multiple Level 0 metrics, to generate a Level 1 normalized metric.</t>
            </li>
          </ul>
          <t>Normalization functions are commonly used to transform metric values into a bounded range (e.g., an integer scale from 0 to 10) using techniques such as sigmoid function and min-max scaling <xref target="Min-max-sigmoid"/>:</t>
          <ul spacing="normal">
            <li>
              <t>Sigmoid function: Smoothly maps input values to a bounded range.</t>
            </li>
            <li>
              <t>Min-max scaling: Rescales values based on known minimum and maximum bounds.</t>
            </li>
          </ul>
          <t>These normalization functions are also not standardized in this document. They are implementation-specific and controlled by operator policies.</t>
          <figure anchor="fig-norm-funct">
            <name>Normalization function</name>
            <artwork><![CDATA[
  +----------+     +------------------------+     +----------+
  | Metric 1 |---->| Normalization Function |---->| Metric 2 |
  +----------+     +------------------------+     +----------+

  Input:  Value with or without units         Output: Unitless value
]]></artwork>
          </figure>
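The min-max scaling and sigmoid examples above might be realized as in the following sketch, mapping inputs onto the 0-10 range used in this document. The parameter choices (bounds, midpoint, steepness) are illustrative assumptions and, as noted, remain implementation-specific.

```python
import math

# Sketch of the two example normalization functions; the 0-10 output
# range follows this document, the parameters are assumptions.

def min_max_scale(value, lo, hi, scale=10):
    """Rescale value from [lo, hi] onto [0, scale]; clamp out-of-range input."""
    value = min(max(value, lo), hi)
    return (value - lo) / (hi - lo) * scale

def sigmoid_scale(value, midpoint, steepness=1.0, scale=10):
    """Smoothly map any real-valued input onto (0, scale)."""
    return scale / (1.0 + math.exp(-steepness * (value - midpoint)))

# Hypothetical input: available bandwidth of 800 Mbps, with assumed
# operator-configured bounds of 0..1000 Mbps.
bw_minmax = min_max_scale(800.0, 0.0, 1000.0)
bw_sigmoid = sigmoid_scale(800.0, midpoint=500.0, steepness=0.01)
```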
        </section>
      </section>
      <section anchor="score-meaning">
        <name>On the Meaning of Scores in Heterogeneous Metrics Systems</name>
        <t>In a system like CATS, where metrics originate from heterogeneous resources---such as compute, communication, and storage---the interpretation of scores requires careful consideration. While normalization functions can convert raw metrics into unitless scores to enable comparison, these scores may not be directly comparable across different implementations. For example, a score of 7 on a scale from 0 to 10 may represent a high-quality resource in one implementation, but only an average one in another.</t>
        <t>To achieve consistent cross-vendor behavior, the default normalization policies defined in this document should be followed by all implementations:</t>
        <ul spacing="normal">
          <li>
            <t>Score directions and semantic mapping:
A common 0-10 numeric range should be used for all normalized scores. Unless otherwise specified by the implementation in accompanying documentation, scores in the range 0-3 indicate low capability (not recommended for steering), 4-7 indicate medium capability (steering optional), and 8-10 indicate high capability (priority for steering). This mapping is normative for all CATS Level 1 and Level 2 metrics defined in this document.</t>
          </li>
          <li>
            <t>Normalization function baseline:
Unless documented otherwise, implementations should use min-max scaling to map the aggregated raw value into the 0-10 range, based on implementation-specific minimum and maximum expected values. Other functions (e.g., sigmoid) are permitted but their parameters must be documented.</t>
          </li>
          <li>
            <t>Measurement window: There is no fixed default measurement window; a window of 10 seconds is given purely as an illustrative example. Implementations can use a window length of their choosing, but they must indicate that length as a parameter (i.e., via the Measurement_Window field defined in the registry entries).</t>
          </li>
        </ul>
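The default semantic mapping above can be sketched as a simple classification. The band boundaries and labels are taken from this document; the function itself is hypothetical.

```python
# Sketch of the default score-to-semantics mapping; the band
# boundaries (0-3, 4-7, 8-10) follow this document's default policy.

def score_band(score):
    """Map a normalized 0-10 score onto the default CATS semantics."""
    if not 0 <= score <= 10:
        raise ValueError("normalized scores must lie in [0, 10]")
    if score <= 3:
        return "low"       # not recommended for steering
    if score <= 7:
        return "medium"    # steering optional
    return "high"          # priority for steering
```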
      </section>
      <section anchor="level-metric-representations">
        <name>Level Metric Representations</name>
        <t>This section specifies the representation format and constraints for
Level 1 and Level 2 metrics, ensuring consistent encoding and
interoperability across implementations.</t>
        <section anchor="level-0-metrics">
          <name>Level 0 Metrics</name>
          <t>Level 0 metrics are raw metrics that are not standardized in this
document. See <xref target="appendix-level-0"/> for examples of Level 0 metrics
defined in the compute and communication industries and by other
standardization organizations such as the <xref target="DMTF"/>.</t>
        </section>
        <section anchor="level-1-metrics">
          <name>Level 1 Metrics</name>
          <t>Level 1 metrics are derived from Level 0 metrics through the application
of aggregation functions and, when appropriate, normalization functions.
Depending on how they are formed, Level 1 metrics MAY retain physical
units inherited from their inputs or MAY be expressed as unitless values.</t>
          <t>Level 1 metrics are organized into semantic categories such as computing,
communication, service, and composed metrics. This categorization
provides context and meaning to the resulting metrics and enables
consistent interpretation across implementations.</t>
          <t>The <tt>Source</tt> field indicates how the metric value is derived. For Level 1
metrics, typical values include:</t>
          <ul spacing="normal">
            <li>
              <t><tt>aggregation</tt>: The value is obtained by combining Level 0 metrics
without normalization and MAY retain a physical unit.</t>
            </li>
            <li>
              <t><tt>normalization</tt>: The value is mapped into a unitless score.</t>
            </li>
          </ul>
          <section anchor="level-1-computing-metrics">
            <name>Level 1 Computing Metrics</name>
            <t>The Metric Type for Level 1 computing metrics is <tt>level1_computing</tt>.</t>
            <t><strong>Example A: Aggregation-derived (with units)</strong></t>
            <artwork><![CDATA[
Fields:
      Metric_type: level1_computing
      Level: Level 1
      Format: unsigned integer
      Length: two octets
      Unit: mhz
      Source: aggregation
      Value: 2400
]]></artwork>
            <t><strong>Example B: Normalized (unitless)</strong></t>
            <figure anchor="fig-level1-compute-metric">
              <name>Examples of Level 1 computing metrics</name>
              <artwork><![CDATA[
Fields:
      Metric_type: level1_computing
      Level: Level 1
      Format: unsigned integer
      Length: one octet
      Source: normalization
      Value: 5
]]></artwork>
            </figure>
          </section>
          <section anchor="level-1-communication-metrics">
            <name>Level 1 Communication Metrics</name>
            <t>The Metric Type for Level 1 communication metrics is <tt>level1_communication</tt>.</t>
            <t><strong>Example A: Aggregation-derived (with units)</strong></t>
            <artwork><![CDATA[
Fields:
      Metric_type: level1_communication
      Level: Level 1
      Format: unsigned integer
      Length: two octets
      Unit: mbps
      Source: aggregation
      Value: 800
]]></artwork>
            <t><strong>Example B: Normalized (unitless)</strong></t>
            <figure anchor="fig-level1-communication-metric">
              <name>Examples of Level 1 communication metrics</name>
              <artwork><![CDATA[
Fields:
      Metric_type: level1_communication
      Level: Level 1
      Format: unsigned integer
      Length: one octet
      Source: normalization
      Value: 1
]]></artwork>
            </figure>
          </section>
          <section anchor="level-1-service-metrics">
            <name>Level 1 Service Metrics</name>
            <t>The Metric Type for Level 1 service metrics is <tt>level1_service</tt>.</t>
            <t><strong>Example A: Aggregation-derived (with units)</strong></t>
            <artwork><![CDATA[
Fields:
      Metric_type: level1_service
      Level: Level 1
      Format: unsigned integer
      Length: two octets
      Unit: rps
      Source: aggregation
      Value: 45
]]></artwork>
            <t><strong>Example B: Normalized (unitless)</strong></t>
            <figure anchor="fig-level1-service-metric">
              <name>Examples of Level 1 service metrics</name>
              <artwork><![CDATA[
Fields:
      Metric_type: level1_service
      Level: Level 1
      Format: unsigned integer
      Length: one octet
      Source: normalization
      Value: 7
]]></artwork>
            </figure>
          </section>
          <section anchor="level-1-composed-metrics">
            <name>Level 1 Composed Metrics</name>
            <t>The Metric Type for Level 1 composed metrics is <tt>level1_composed</tt>.</t>
            <t><strong>Example A: Aggregation-derived (with units)</strong></t>
            <artwork><![CDATA[
Fields:
      Metric_type: level1_composed
      Level: Level 1
      Format: unsigned integer
      Length: two octets
      Unit: ms
      Source: aggregation
      Value: 20
]]></artwork>
            <t><strong>Example B: Normalized (unitless)</strong></t>
            <figure anchor="fig-level1-composed-metric">
              <name>Examples of Level 1 composed metrics</name>
              <artwork><![CDATA[
Fields:
      Metric_type: level1_composed
      Level: Level 1
      Format: unsigned integer
      Length: one octet
      Source: normalization
      Value: 8
]]></artwork>
            </figure>
          </section>
        </section>
        <section anchor="level-2-global-metric">
          <name>Level 2 Global Metric</name>
          <t>A Level 2 metric is a single-value, normalized metric that does not
carry any inherent physical unit. While each provider may employ its own
internal methods to compute this value, all providers MUST adhere to the
representation defined in this section to ensure consistent encoding and
interoperable interpretation of the normalized output.</t>
          <t>The Metric Type is <tt>level2_global</tt> and the Source MUST be <tt>normalization</tt>.</t>
          <figure anchor="fig-level-2-metric">
            <name>Example of a normalized Level 2 metric</name>
            <artwork><![CDATA[
Fields:
      Metric_type: level2_global
      Level: Level 2
      Format: unsigned integer
      Length: one octet
      Source: normalization
      Value: 1
]]></artwork>
          </figure>
        </section>
      </section>
    </section>
    <section anchor="comparison-among-metric-levels">
      <name>Comparison among Metric Levels</name>
      <t>Metrics are progressively consolidated from Level 0 to Level 1 and then to Level 2, with each level offering an increasing degree of abstraction to address the diverse requirements of different services. <xref target="comparison"/> provides a comparative overview of the defined metric levels.</t>
      <table anchor="comparison">
        <name>Comparison among Metrics Levels</name>
        <thead>
          <tr>
            <th align="center">Level</th>
            <th align="left">Encoding Complexity</th>
            <th align="left">Extensibility</th>
            <th align="left">Stability</th>
            <th align="left">Accuracy</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td align="center">Level 0</td>
            <td align="left">High</td>
            <td align="left">Low</td>
            <td align="left">Low</td>
            <td align="left">High</td>
          </tr>
          <tr>
            <td align="center">Level 1</td>
            <td align="left">Medium</td>
            <td align="left">Medium</td>
            <td align="left">Medium</td>
            <td align="left">Medium</td>
          </tr>
          <tr>
            <td align="center">Level 2</td>
            <td align="left">Low</td>
            <td align="left">High</td>
            <td align="left">High</td>
            <td align="left">Low</td>
          </tr>
        </tbody>
      </table>
      <t>Since Level 0 metrics are raw and service-specific, individual services may define their own metric sets, potentially resulting in hundreds or even thousands of distinct metrics across deployments. This diversity introduces significant complexity in protocol encoding and standardization. Consequently, Level 0 metrics are confined to bespoke implementations tailored to specific service needs, rather than being standardized for broad protocol use. In contrast, Level 1 metrics organize raw data into standardized categories, each consolidated into a single value. This structure makes them more suitable for protocol encoding and standardization. The Level 2 metric takes simplification a step further by consolidating all relevant information into a single normalized value, making it the easiest to encode, transmit, and standardize.</t>
      <t>Therefore, from the perspective of encoding complexity, Level 1 and Level 2 metrics are recommended.</t>
      <t>When considering extensibility, Level 0 metrics allow new services to define their own custom metrics. However, this flexibility requires corresponding protocol extensions, and the proliferation of metric types can introduce significant overhead, ultimately reducing the protocol's extensibility. In contrast, Level 1 metrics introduce only a limited set of standardized categories, making protocol extensions more manageable. Level 2 metrics go even further by consolidating all information into a single normalized value, placing the least burden on the protocol.</t>
      <t>Therefore, from an extensibility standpoint, Level 1 and Level 2 metrics are recommended.</t>
      <t>Regarding stability, Level 0 raw metrics would require frequent protocol extensions as new metrics are introduced, leading to an unstable and evolving protocol format. For this reason, standardizing Level 0 metrics within the protocol is not recommended. In contrast, Level 1 metrics involve only a limited set of predefined categories, and Level 2 metrics rely on a single consolidated value, both of which contribute to a more stable and maintainable protocol design.</t>
      <t>Therefore, from a stability standpoint, Level 1 and Level 2 metrics are preferred.</t>
      <t>In conclusion, for CATS, Level 2 metrics are recommended due to their simplicity and minimal protocol overhead. If more advanced scheduling capabilities are required, Level 1 metrics offer a balanced approach with manageable complexity. While Level 0 metrics are the most detailed and dynamic, their high overhead makes them unsuitable for direct transmission to network devices and thus not recommended for standard protocol integration.</t>
    </section>
    <section anchor="cats-metrics-registry">
      <name>CATS Metric Registry Entries</name>
      <t>This section defines the formal registry entries for one CATS Level 2 metric and four Level 1 metrics, intended for registration with IANA. By providing a common template that specifies the metric's summary, definition, method of measurement, output, and administrative items, this section ensures interoperability among different implementations.</t>
      <section anchor="cats-level-2-metric-registry">
        <name>CATS Level 2 Metric Registry Entry</name>
        <t>This section gives an initial Registry Entry for the CATS Level 2 metric.</t>
        <section anchor="summary">
          <name>Summary</name>
          <t>This category includes multiple indexes to the Registry Entry: the element ID, Metric Name, URI, Metric Description, Metric Controller, and Metric Version.</t>
          <section anchor="id-identifier">
            <name>ID (Identifier)</name>
            <t>IANA has allocated the Identifier XXX for the Named Metric Entry in this section. See the next Section for mapping to Names.</t>
          </section>
          <section anchor="name">
            <name>Name</name>
            <t>Norm_Passive_CATS-Level 2_RFCXXXXsecY_Unitless_Singleton</t>
            <t>Naming Rule Explanation</t>
            <ul spacing="normal">
              <li>
                <t>Norm: Metric type (Normalized Score)</t>
              </li>
              <li>
                <t>Passive: Measurement method</t>
              </li>
              <li>
                <t>CATS-Level 2: Metric level (CATS Metric Framework Level 2)</t>
              </li>
              <li>
                <t>RFCXXXXsecY: Specification reference (To-be-assigned RFC number and section number)</t>
              </li>
              <li>
                <t>Unitless: Metric has no units</t>
              </li>
              <li>
                <t>Singleton: Metric is a single value</t>
              </li>
            </ul>
          </section>
          <section anchor="uri">
            <name>URI</name>
            <t>To-be-assigned.</t>
          </section>
          <section anchor="description">
            <name>Description</name>
            <t>This metric represents a single normalized score used within CATS (Level 2). It is derived by aggregating one or more CATS Level 0 and/or Level 1 metrics, followed by a normalization process that produces a unitless value. The resulting score provides a concise assessment of the overall capability of a service instance, enabling rapid comparison across instances and supporting efficient traffic steering decisions.</t>
          </section>
          <section anchor="change-controller">
            <name>Change Controller</name>
            <t>IETF</t>
          </section>
          <section anchor="version">
            <name>Version</name>
            <t>1.0</t>
          </section>
        </section>
        <section anchor="metric-definition">
          <name>Metric Definition</name>
          <section anchor="reference-definition">
            <name>Reference Definition</name>
            <t><xref target="I-D.ietf-cats-metric-definition"/>
Core referenced sections: Section 3.4 (Level 2 Level Metric Definition), Section 4.2 (Aggregation and Normalization Functions)</t>
          </section>
          <section anchor="fixed-parameters">
            <name>Fixed Parameters</name>
            <ul spacing="normal">
              <li>
                <t>Normalization score range: 0-10 (0 indicates the poorest capability, 10 indicates the optimal capability)</t>
              </li>
              <li>
                <t>Data precision: non-negative integer</t>
              </li>
            </ul>
          </section>
        </section>
        <section anchor="method-of-measurement">
          <name>Method of Measurement</name>
          <t>This category includes columns for references to relevant sections of the RFC(s) and any supplemental information needed to ensure an unambiguous method for implementations.</t>
          <section anchor="reference-methods">
            <name>Reference Methods</name>
            <t>Raw Metrics collection: Collect Level 0 service and compute raw metrics using platform-specific management protocols or tools (e.g., Prometheus <xref target="Prometheus"/> in Kubernetes). Collect Level 0 network performance raw metrics using existing standardized protocols (e.g., NETCONF <xref target="RFC6241"/>, IPFIX <xref target="RFC7011"/>).</t>
            <t>Aggregation logic: Refer to <xref target="aggregation-function"/>.</t>
            <t>Normalization logic: Refer to <xref target="normalization-function"/>.</t>
            <t>The reference method aggregates and normalizes Level 0 metrics to generate Level 1 metrics in different categories, and then consolidates these into a single normalized Level 2 singleton score.</t>
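            <t>The consolidation step described above can be sketched as follows; the Level 1 categories and the weights shown are hypothetical, implementation-specific assumptions:</t>
            <sourcecode type="python"><![CDATA[
# Illustrative sketch (not normative): consolidate Level 1 category
# scores (each already normalized to 0..10) into a single Level 2
# score via a weighted average. Categories and weights are hypothetical.

def level2_score(level1: dict, weights: dict) -> int:
    """Weighted average of Level 1 scores, rounded to an integer 0..10."""
    total = sum(weights[c] for c in level1)
    if total <= 0:
        raise ValueError("weights must sum to a positive value")
    return round(sum(level1[c] * weights[c] for c in level1) / total)

scores = {"computing": 5, "communication": 1, "service": 7}
weights = {"computing": 0.5, "communication": 0.3, "service": 0.2}
# (5*0.5 + 1*0.3 + 7*0.2) / 1.0 = 4.2 -> Level 2 score 4
]]></sourcecode>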
          </section>
          <section anchor="packet-stream-generation">
            <name>Packet Stream Generation</name>
            <t>N/A</t>
          </section>
          <section anchor="traffic-filtering-observation-details">
            <name>Traffic Filtering (Observation) Details</name>
            <t>N/A</t>
          </section>
          <section anchor="sampling-distribution">
            <name>Sampling Distribution</name>
            <t>Sampling method: Continuous sampling (e.g., collect Level 0 metrics every 10 seconds)</t>
          </section>
          <section anchor="runtime-parameters-and-data-format">
            <name>Runtime Parameters and Data Format</name>
            <t>CATS Service Contact Instance ID (CSCI-ID): an identifier of a CATS service contact instance. According to <xref target="I-D.ietf-cats-framework"/>, a unicast IP address is one example of such an identifier. (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC9911"/>)</t>
            <t>Service_Instance_IP: Service instance IP address (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC9911"/>)</t>
            <t>Measurement_Window: Metric measurement time window (Units: seconds, milliseconds; Format: uint64; Default: 10 seconds)</t>
          </section>
          <section anchor="roles">
            <name>Roles</name>
            <t>C-SMA: Collects Level 0 service and compute raw metrics, and optionally calculates Level 1 metrics according to service-specific strategies.</t>
            <t>C-NMA: Collects Level 0 network performance raw metrics, and optionally calculates Level 1 metrics according to service-specific strategies.</t>
            <t>C-PS: Aggregates all Level 1 metrics collected from the C-SMA and C-NMA to calculate the Level 2 metric.</t>
          </section>
        </section>
        <section anchor="output-level-2">
          <name>Output</name>
          <t>This category specifies all details of the output of measurements using the metric.</t>
          <section anchor="type">
            <name>Type</name>
            <t>Singleton value</t>
          </section>
          <section anchor="reference-definition-1">
            <name>Reference Definition</name>
            <t>Output format: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.4.3</t>
            <t>Score semantics: 0-3 (Low capability, not recommended for steering), 4-7 (Medium capability, optional for steering), 8-10 (High capability, priority for steering)</t>
          </section>
          <section anchor="metric-units">
            <name>Metric Units</name>
            <t>Unitless</t>
          </section>
          <section anchor="calibration">
            <name>Calibration</name>
            <t>Calibration method: Conduct benchmark calibration based on standard test sets (fixed workload) to ensure the output score deviation of C-SMA and C-NMA is lower than 0.1 (one abnormal score in every ten test rounds).</t>
          </section>
        </section>
        <section anchor="administrative-items">
          <name>Administrative Items</name>
          <section anchor="status">
            <name>Status</name>
            <t>Current</t>
          </section>
          <section anchor="requester">
            <name>Requester</name>
            <t>To-be-assigned</t>
          </section>
          <section anchor="revision">
            <name>Revision</name>
            <t>1.0</t>
          </section>
          <section anchor="revision-date">
            <name>Revision Date</name>
            <t>2026-01-20</t>
          </section>
          <section anchor="comments-and-remarks">
            <name>Comments and Remarks</name>
            <t>None</t>
          </section>
        </section>
      </section>
      <section anchor="cats-level-1-computing-metric">
        <name>CATS Level 1 Metric Registry Entry: Computing</name>
        <t>This section gives an initial Registry Entry for the CATS Level 1 metric in the <em>computing</em> category.</t>
        <section anchor="summary-1">
          <name>Summary</name>
          <t>This category includes multiple indexes to the Registry Entry: the element ID, Metric Name, URI, Metric Description, Metric Controller, and Metric Version.</t>
          <section anchor="id-identifier-1">
            <name>ID (Identifier)</name>
            <t>IANA has allocated the Identifier XXX for the Named Metric Entry in this section. See the next Section for mapping to Names.</t>
          </section>
          <section anchor="name-1">
            <name>Name</name>
            <t>Comb_Passive_CATS-Level 1_Computing_RFCXXXXsecY_Unitless_Singleton</t>
            <t>Naming Rule Explanation</t>
            <ul spacing="normal">
              <li>
                <t>Comb: Metric type (Combined Score)</t>
              </li>
              <li>
                <t>Passive: Measurement method</t>
              </li>
              <li>
                <t>CATS-Level 1: Metric level (CATS Metric Framework Level 1)</t>
              </li>
              <li>
                <t>Computing: Metric category (Computing)</t>
              </li>
              <li>
                <t>RFCXXXXsecY: Specification reference (To-be-assigned RFC number and section number)</t>
              </li>
              <li>
                <t>Unitless: Metric has no units</t>
              </li>
              <li>
                <t>Singleton: Metric is a single value for the computing category</t>
              </li>
            </ul>
          </section>
          <section anchor="uri-1">
            <name>URI</name>
            <t>To-be-assigned.</t>
          </section>
          <section anchor="description-1">
            <name>Description</name>
            <t>This metric represents a single normalized score for the <em>computing</em> category within CATS (Level 1). It is derived from one or more computing-related Level 0 metrics (e.g., CPU/GPU/NPU utilization, CPU frequency, memory utilization, or other computing resource indicators) by applying an implementation-specific aggregation function over the selected Level 0 computing metrics and then applying a normalization function to produce a unitless score.</t>
            <t>The resulting score provides a concise indication of the relative computing capability (or headroom) of a service contact instance for the purpose of instance selection and traffic steering. Higher values indicate better computing capability according to the provider's normalization strategy.</t>
          </section>
          <section anchor="change-controller-1">
            <name>Change Controller</name>
            <t>IETF</t>
          </section>
          <section anchor="version-1">
            <name>Version</name>
            <t>1.0</t>
          </section>
        </section>
        <section anchor="metric-definition-1">
          <name>Metric Definition</name>
          <section anchor="reference-definition-2">
            <name>Reference Definition</name>
            <t><xref target="I-D.ietf-cats-metric-definition"/></t>
            <t>Core referenced sections: Section 3.3 (Level 1 Level Metric Definition), Section 4.2 (Aggregation and Normalization Functions), Section 4.4.2 (Level 1 Metric Representations)</t>
          </section>
          <section anchor="fixed-parameters-1">
            <name>Fixed Parameters</name>
            <ul spacing="normal">
              <li>
                <t>Normalization score range: 0-10 (0 indicates the poorest computing capability, 10 indicates the optimal computing capability)</t>
              </li>
              <li>
                <t>Data precision: non-negative integer</t>
              </li>
              <li>
                <t>Metric type: "level1_computing"</t>
              </li>
              <li>
                <t>Level: Level 1</t>
              </li>
              <li>
                <t>Metric units: Unitless</t>
              </li>
            </ul>
          </section>
        </section>
        <section anchor="method-of-measurement-1">
          <name>Method of Measurement</name>
          <t>This category includes columns for references to relevant sections of the RFC(s) and any supplemental information needed to ensure an unambiguous method for implementations.</t>
          <section anchor="reference-methods-1">
            <name>Reference Methods</name>
            <t>Raw Metrics collection: Collect computing-related Level 0 raw metrics (e.g., CPU/GPU/NPU, memory, and relevant platform counters) using platform-specific management protocols or tools (e.g., Prometheus <xref target="Prometheus"/> in Kubernetes or equivalent telemetry systems).</t>
            <t>Aggregation logic (within computing category): Refer to <xref target="aggregation-function"/> to combine selected Level 0 computing metrics into a single intermediate value prior to normalization. The selection of Level 0 computing metrics and any weights used are implementation-specific.</t>
            <t>Normalization logic: Refer to <xref target="normalization-function"/> to map the aggregated (or directly selected) computing value into the fixed score range.</t>
            <t>The reference method aggregates and normalizes Level 0 computing metrics to generate a single Level 1 computing score ("level1_computing").</t>
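            <t>The computing-category pipeline described above can be sketched as follows; the selected Level 0 metrics, the weights, and the headroom-based normalization are hypothetical, implementation-specific assumptions:</t>
            <sourcecode type="python"><![CDATA[
# Illustrative sketch (not normative): derive a "level1_computing"
# score from hypothetical Level 0 utilization metrics (0..100 %).
# Metric selection, weights, and bounds are implementation-specific.

def level1_computing(cpu_util: float, gpu_util: float, mem_util: float) -> int:
    """Aggregate utilizations and normalize remaining headroom to 0..10."""
    # Weighted-average aggregation over the selected Level 0 metrics.
    agg_util = 0.5 * cpu_util + 0.3 * gpu_util + 0.2 * mem_util
    # Lower utilization means more headroom, hence better capability,
    # so invert before min-max scaling into the 0..10 range.
    return round(10 * (100.0 - agg_util) / 100.0)

# Example: 40% CPU, 80% GPU, 50% memory
# agg = 0.5*40 + 0.3*80 + 0.2*50 = 54 -> headroom 46% -> score 5
]]></sourcecode>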
          </section>
          <section anchor="packet-stream-generation-1">
            <name>Packet Stream Generation</name>
            <t>N/A</t>
          </section>
          <section anchor="traffic-filtering-observation-details-1">
            <name>Traffic Filtering (Observation) Details</name>
            <t>N/A</t>
          </section>
          <section anchor="sampling-distribution-1">
            <name>Sampling Distribution</name>
            <t>Sampling method: Continuous sampling (e.g., collect underlying Level 0 computing metrics every 10 seconds)</t>
          </section>
          <section anchor="runtime-parameters-and-data-format-1">
            <name>Runtime Parameters and Data Format</name>
            <t>CATS Service Contact Instance ID (CSCI-ID): an identifier of a CATS service contact instance. According to <xref target="I-D.ietf-cats-framework"/>, a unicast IP address is one example of such an identifier. (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC9911"/>)</t>
            <t>Service_Instance_IP: Service instance IP address (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC9911"/>)</t>
            <t>Measurement_Window: Metric measurement time window (Units: seconds, milliseconds; Format: uint64; Default: 10 seconds)</t>
          </section>
          <section anchor="roles-1">
            <name>Roles</name>
            <t>C-SMA: Collects Level 0 compute raw metrics and calculates the Level 1 compute normalized score ("level1_computing") according to service/provider-specific aggregation and normalization strategies.</t>
            <t>C-NMA: Not required for this metric.</t>
          </section>
        </section>
        <section anchor="output">
          <name>Output</name>
          <t>This category specifies all details of the output of measurements using the metric.</t>
          <section anchor="type-1">
            <name>Type</name>
            <t>Singleton value</t>
          </section>
          <section anchor="reference-definition-3">
            <name>Reference Definition</name>
            <t>Output format: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.4.2</t>
            <t>Score semantics: 0-3 (Low compute capability, not recommended for steering), 4-7 (Medium compute capability, optional for steering), 8-10 (High compute capability, priority for steering)</t>
          </section>
          <section anchor="metric-units-1">
            <name>Metric Units</name>
            <t>Unitless</t>
          </section>
          <section anchor="calibration-1">
            <name>Calibration</name>
            <t>Calibration method: Conduct benchmark calibration based on representative compute workloads (fixed test workload profiles) to align the mapping from Level 0 computing metrics to the Level 1 score, such that score deviation across measurement agents within the same administrative domain is minimized (e.g., less than 0.1 over repeated test rounds).</t>
          </section>
        </section>
        <section anchor="administrative-items-1">
          <name>Administrative Items</name>
          <section anchor="status-1">
            <name>Status</name>
            <t>Current</t>
          </section>
          <section anchor="requester-1">
            <name>Requester</name>
            <t>To-be-assigned</t>
          </section>
          <section anchor="revision-1">
            <name>Revision</name>
            <t>1.0</t>
          </section>
          <section anchor="revision-date-1">
            <name>Revision Date</name>
            <t>2026-01-20</t>
          </section>
          <section anchor="comments-and-remarks-1">
            <name>Comments and Remarks</name>
            <t>None</t>
          </section>
        </section>
      </section>
      <section anchor="cats-level-1-communication-metric">
        <name>CATS Level 1 Metric Registry Entry: Communication</name>
        <t>This section gives an initial Registry Entry for the CATS Level 1 metric in the <em>communication</em> category.</t>
        <section anchor="summary-2">
          <name>Summary</name>
          <t>This category includes multiple indexes to the Registry Entry: the element ID, Metric Name, URI, Metric Description, Metric Controller, and Metric Version.</t>
          <section anchor="id-identifier-2">
            <name>ID (Identifier)</name>
            <t>IANA has allocated the Identifier XXX for the Named Metric Entry in this section. See the next Section for mapping to Names.</t>
          </section>
          <section anchor="name-2">
            <name>Name</name>
            <t>Comb_Passive_CATS-Level 1_Communication_RFCXXXXsecY_Unitless_Singleton</t>
            <t>Naming Rule Explanation</t>
            <ul spacing="normal">
              <li>
                <t>Comb: Metric type (Combined Score)</t>
              </li>
              <li>
                <t>Passive: Measurement method</t>
              </li>
              <li>
                <t>CATS-Level 1: Metric level (CATS Metric Framework Level 1)</t>
              </li>
              <li>
                <t>Communication: Metric category (Communication)</t>
              </li>
              <li>
                <t>RFCXXXXsecY: Specification reference (To-be-assigned RFC number and section number)</t>
              </li>
              <li>
                <t>Unitless: Metric has no units</t>
              </li>
              <li>
                <t>Singleton: Metric is a single value for the communication category</t>
              </li>
            </ul>
          </section>
          <section anchor="uri-2">
            <name>URI</name>
            <t>To-be-assigned.</t>
          </section>
          <section anchor="description-2">
            <name>Description</name>
            <t>This metric represents a single normalized score for the <em>communication</em> category within CATS (Level 1). It is derived from one or more communication-related Level 0 metrics (e.g., throughput, bandwidth, link utilization, loss, delay, jitter, bytes/packets counters, and other network performance indicators) by applying an implementation-specific aggregation function over the selected Level 0 communication metrics and then applying a normalization function to produce a unitless score.</t>
            <t>The resulting score provides a concise indication of the relative communication capability (or headroom) associated with reaching a service contact instance for the purpose of instance selection and traffic steering. Higher values indicate better communication capability according to the provider's normalization strategy.</t>
          </section>
          <section anchor="change-controller-2">
            <name>Change Controller</name>
            <t>IETF</t>
          </section>
          <section anchor="version-2">
            <name>Version</name>
            <t>1.0</t>
          </section>
        </section>
        <section anchor="metric-definition-2">
          <name>Metric Definition</name>
          <section anchor="reference-definition-4">
            <name>Reference Definition</name>
            <t><xref target="I-D.ietf-cats-metric-definition"/></t>
            <t>Core referenced sections: Section 3.3 (Level 1 Level Metric Definition), Section 4.2 (Aggregation and Normalization Functions), Section 4.4.2 (Level 1 Metric Representations)</t>
          </section>
          <section anchor="fixed-parameters-2">
            <name>Fixed Parameters</name>
            <ul spacing="normal">
              <li>
                <t>Normalization score range: 0-10 (0 indicates the poorest communication capability, 10 indicates the optimal communication capability)</t>
              </li>
              <li>
                <t>Data precision: non-negative integer</t>
              </li>
              <li>
                <t>Metric type: "level1_communication"</t>
              </li>
              <li>
                <t>Level: Level 1</t>
              </li>
              <li>
                <t>Metric units: Unitless</t>
              </li>
            </ul>
          </section>
        </section>
        <section anchor="method-of-measurement-2">
          <name>Method of Measurement</name>
          <t>This category includes columns for references to relevant sections of the RFC(s) and any supplemental information needed to ensure an unambiguous method for implementations.</t>
          <section anchor="reference-methods-2">
            <name>Reference Methods</name>
            <t>Raw Metrics collection: Collect communication-related Level 0 raw metrics using existing standardized protocols and telemetry systems (e.g., NETCONF <xref target="RFC6241"/>, IPFIX <xref target="RFC7011"/>), and/or using network performance metric definitions and registries such as <xref target="RFC8911"/>, <xref target="RFC8912"/>, and <xref target="RFC9439"/> where applicable.</t>
            <t>Aggregation logic (within communication category): Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.2.1 (e.g., Weighted Average Aggregation) to combine selected Level 0 communication metrics into a single intermediate value prior to normalization. The selection of Level 0 communication metrics and any weights used are implementation-specific.</t>
            <t>Normalization logic: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.2.2 (e.g., Sigmoid Normalization or Min-max scaling) to map the aggregated (or directly selected) communication value into the fixed score range.</t>
            <t>The reference method aggregates and normalizes Level 0 communication metrics to generate a single Level 1 communication score ("level1_communication"). No cross-category aggregation is performed for this metric (i.e., it does not incorporate compute or service metrics).</t>
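            <t>As an illustration only (not part of the registry entry), the aggregation-then-normalization pipeline described above can be sketched in Python. The selected Level 0 metrics, their weights, and the min/max bounds below are hypothetical, since this entry leaves those choices implementation-specific:</t>
            <sourcecode type="python"><![CDATA[
```python
# Sketch of the Level 1 communication score pipeline (illustrative only).
# Metric names, weights, and bounds are hypothetical; the registry entry
# leaves the choice of Level 0 metrics and parameters implementation-specific.

def weighted_average(metrics, weights):
    """Weighted Average Aggregation over selected Level 0 metrics."""
    total_weight = sum(weights[name] for name in metrics)
    return sum(metrics[name] * weights[name] for name in metrics) / total_weight

def min_max_score(value, lo, hi, score_max=10):
    """Min-max scaling into the fixed 0..score_max non-negative integer range."""
    if hi == lo:
        return 0
    scaled = (value - lo) / (hi - lo) * score_max
    return max(0, min(score_max, round(scaled)))

# Example: normalized "goodness" values derived from raw Level 0 metrics,
# where 1.0 is best (e.g., available bandwidth ratio, 1 - loss rate).
level0 = {"bandwidth_headroom": 0.8, "one_minus_loss": 0.99, "delay_goodness": 0.6}
weights = {"bandwidth_headroom": 0.5, "one_minus_loss": 0.2, "delay_goodness": 0.3}

aggregated = weighted_average(level0, weights)
level1_communication = min_max_score(aggregated, lo=0.0, hi=1.0)
```
]]></sourcecode>
            <t>With these hypothetical inputs, the aggregated value (0.778) maps to a score of 8, i.e., "High communication capability" under the score semantics below; clamping keeps out-of-range inputs within 0-10.</t>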
          </section>
          <section anchor="packet-stream-generation-2">
            <name>Packet Stream Generation</name>
            <t>N/A</t>
          </section>
          <section anchor="traffic-filtering-observation-details-2">
            <name>Traffic Filtering (Observation) Details</name>
            <t>N/A</t>
          </section>
          <section anchor="sampling-distribution-2">
            <name>Sampling Distribution</name>
            <t>Sampling method: Continuous sampling (e.g., collect underlying Level 0 communication metrics every 10 seconds)</t>
          </section>
          <section anchor="runtime-parameters-and-data-format-2">
            <name>Runtime Parameters and Data Format</name>
            <t>CATS Service Contact Instance ID (CSCI-ID): an identifier of a CATS service contact instance. According to <xref target="I-D.ietf-cats-framework"/>, a unicast IP address is one example of such an identifier. (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC9911"/>)</t>
            <t>Service_Instance_IP: Service instance IP address (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC9911"/>)</t>
            <t>Measurement_Window: Metric measurement time window (Units: seconds, milliseconds; Format: uint64; Default: 10 seconds)</t>
          </section>
          <section anchor="roles-2">
            <name>Roles</name>
            <t>C-NMA: Collects Level 0 communication raw metrics and calculates the Level 1 communication normalized score ("level1_communication") according to provider-specific aggregation and normalization strategies.</t>
            <t>C-SMA: Not required for this metric.</t>
          </section>
        </section>
        <section anchor="output-1">
          <name>Output</name>
          <t>This category specifies all details of the output of measurements using the metric.</t>
          <section anchor="type-2">
            <name>Type</name>
            <t>Singleton value</t>
          </section>
          <section anchor="reference-definition-5">
            <name>Reference Definition</name>
            <t>Output format: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.4.2</t>
            <t>Score semantics: 0-3 (Low communication capability, not recommended for steering), 4-7 (Medium communication capability, optional for steering), 8-10 (High communication capability, priority for steering)</t>
          </section>
          <section anchor="metric-units-2">
            <name>Metric Units</name>
            <t>Unitless</t>
          </section>
          <section anchor="calibration-2">
            <name>Calibration</name>
            <t>Calibration method: Conduct benchmark calibration based on representative network test profiles (e.g., fixed traffic mixes and path conditions) to align the mapping from Level 0 communication metrics to the Level 1 score, such that score deviation across measurement agents within the same administrative domain is minimized (e.g., less than 0.1 over repeated test rounds).</t>
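            <t>The deviation criterion in this calibration method can be checked mechanically. A minimal sketch, assuming each test round yields one score per measurement agent; the agent scores and the 0.1 bound below follow the example in the text and are otherwise hypothetical:</t>
            <sourcecode type="python"><![CDATA[
```python
# Sketch of the calibration deviation check (illustrative only): scores
# produced by different measurement agents for the same test profile should
# deviate by less than a small bound (0.1 here, per the example in the text).

def max_deviation(scores):
    """Largest pairwise gap among agents' scores for one test round."""
    return max(scores) - min(scores)

def calibrated(rounds, bound=0.1):
    """True if every repeated test round stays within the deviation bound."""
    return all(max_deviation(scores) < bound for scores in rounds)

# Hypothetical repeated test rounds, each listing per-agent scores.
rounds = [[8.02, 8.05, 7.98], [7.95, 8.0, 8.01]]
ok = calibrated(rounds)
```
]]></sourcecode>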
          </section>
        </section>
        <section anchor="administrative-items-2">
          <name>Administrative Items</name>
          <section anchor="status-2">
            <name>Status</name>
            <t>Current</t>
          </section>
          <section anchor="requester-2">
            <name>Requester</name>
            <t>To-be-assigned</t>
          </section>
          <section anchor="revision-2">
            <name>Revision</name>
            <t>1.0</t>
          </section>
          <section anchor="revision-date-2">
            <name>Revision Date</name>
            <t>2026-01-20</t>
          </section>
          <section anchor="comments-and-remarks-2">
            <name>Comments and Remarks</name>
            <t>None</t>
          </section>
        </section>
      </section>
      <section anchor="cats-level-1-service-metric">
        <name>CATS Level 1 Metric Registry Entry: Service</name>
        <t>This section gives an initial Registry Entry for the CATS Level 1 metric in the <em>service</em> category.</t>
        <section anchor="summary-3">
          <name>Summary</name>
          <t>This category includes multiple indexes to the Registry Entry: the element ID, Metric Name, URI, Metric Description, Change Controller, and Metric Version.</t>
          <section anchor="id-identifier-3">
            <name>ID (Identifier)</name>
            <t>IANA has allocated the Identifier XXX for the Named Metric Entry in this section. See the next Section for mapping to Names.</t>
          </section>
          <section anchor="name-3">
            <name>Name</name>
            <t>Comb_Passive_CATS-Level 1_Service_RFCXXXXsecY_Unitless_Singleton</t>
            <t>Naming Rule Explanation</t>
            <ul spacing="normal">
              <li>
                <t>Comb: Metric type (Combined Score)</t>
              </li>
              <li>
                <t>Passive: Measurement method</t>
              </li>
              <li>
                <t>CATS-Level 1: Metric level (CATS Metric Framework Level 1)</t>
              </li>
              <li>
                <t>Service: Metric category (Service)</t>
              </li>
              <li>
                <t>RFCXXXXsecY: Specification reference (To-be-assigned RFC number and section number)</t>
              </li>
              <li>
                <t>Unitless: Metric has no units</t>
              </li>
              <li>
                <t>Singleton: Metric is a single value for the service category</t>
              </li>
            </ul>
          </section>
          <section anchor="uri-3">
            <name>URI</name>
            <t>To-be-assigned.</t>
          </section>
          <section anchor="description-3">
            <name>Description</name>
            <t>This metric represents a single normalized score for the <em>service</em> category within CATS (Level 1). It is derived from one or more service-related Level 0 metrics that characterize the health and performance of the service instance itself (e.g., service availability, request success rate, admission/overload indicators, tokens per second and/or requests per second, application-level queue depth, and other service KPIs) by applying an implementation-specific aggregation function over the selected Level 0 service metrics and then applying a normalization function to produce a unitless score.</t>
            <t>The resulting score provides a concise indication of the relative service capability (or headroom) of a service contact instance for the purpose of instance selection and traffic steering. Higher values indicate better service capability according to the provider's normalization strategy.</t>
          </section>
          <section anchor="change-controller-3">
            <name>Change Controller</name>
            <t>IETF</t>
          </section>
          <section anchor="version-3">
            <name>Version</name>
            <t>1.0</t>
          </section>
        </section>
        <section anchor="metric-definition-3">
          <name>Metric Definition</name>
          <section anchor="reference-definition-6">
            <name>Reference Definition</name>
            <t><xref target="I-D.ietf-cats-metric-definition"/></t>
            <t>Core referenced sections: Section 3.3 (Level 1 Metric Definition), Section 4.2 (Aggregation and Normalization Functions), Section 4.4.2 (Level 1 Metric Representations)</t>
          </section>
          <section anchor="fixed-parameters-3">
            <name>Fixed Parameters</name>
            <ul spacing="normal">
              <li>
                <t>Normalization score range: 0-10 (0 indicates the poorest service capability, 10 indicates the optimal service capability)</t>
              </li>
              <li>
                <t>Data precision: non-negative integer</t>
              </li>
              <li>
                <t>Metric type: "level1_service"</t>
              </li>
              <li>
                <t>Level: Level 1</t>
              </li>
              <li>
                <t>Metric units: Unitless</t>
              </li>
            </ul>
          </section>
        </section>
        <section anchor="method-of-measurement-3">
          <name>Method of Measurement</name>
          <t>This category includes columns for references to relevant sections of the RFC(s) and any supplemental information needed to ensure an unambiguous method for implementations.</t>
          <section anchor="reference-methods-3">
            <name>Reference Methods</name>
            <t>Raw Metrics collection: Collect service-related Level 0 raw metrics from the service runtime and service management plane using platform-specific telemetry systems (e.g., Prometheus <xref target="Prometheus"/> in Kubernetes or equivalent monitoring/observability tools). These metrics are service-dependent and may include availability/health status, success/error rates, overload or admission control signals, and throughput indicators (e.g., tokens per second for AI inference services), among others.</t>
            <t>Aggregation logic (within service category): Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.2.1 (e.g., Weighted Average Aggregation) to combine selected Level 0 service metrics into a single intermediate value prior to normalization. The selection of Level 0 service metrics, any weights used, and any gating logic (e.g., forcing the score to a low value when the instance is unhealthy) are implementation-specific.</t>
            <t>Normalization logic: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.2.2 (e.g., Sigmoid Normalization or Min-max scaling) to map the aggregated (or directly selected) service value into the fixed score range.</t>
            <t>The reference method aggregates and normalizes Level 0 service metrics to generate a single Level 1 service score ("level1_service"). No cross-category aggregation is performed for this metric (i.e., it does not incorporate compute or communication metrics).</t>
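            <t>As an illustration only (not part of the registry entry), the aggregation with health gating described above can be sketched in Python. The metric names, weights, and gating rule are hypothetical, since this entry leaves selection, weights, and gating logic implementation-specific:</t>
            <sourcecode type="python"><![CDATA[
```python
# Sketch of the Level 1 service score with health gating (illustrative only).
# Metric names, weights, and the gating rule are hypothetical; the registry
# entry leaves selection, weights, and gating logic implementation-specific.

def level1_service_score(healthy, success_rate, headroom,
                         w_success=0.6, w_headroom=0.4, score_max=10):
    """Weighted average over normalized Level 0 service metrics, gated to 0
    when the instance reports itself unhealthy."""
    if not healthy:
        return 0  # gating: an unhealthy instance is forced to the lowest score
    aggregated = w_success * success_rate + w_headroom * headroom
    return max(0, min(score_max, round(aggregated * score_max)))

# Example inputs: 99.5% request success rate, 70% capacity headroom.
score = level1_service_score(healthy=True, success_rate=0.995, headroom=0.7)
gated = level1_service_score(healthy=False, success_rate=0.995, headroom=0.7)
```
]]></sourcecode>
            <t>Here the healthy instance scores 9 ("High service capability") while the same inputs with the health gate tripped score 0, keeping the score behavior consistent with the gating intent described above.</t>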
          </section>
          <section anchor="packet-stream-generation-3">
            <name>Packet Stream Generation</name>
            <t>N/A</t>
          </section>
          <section anchor="traffic-filtering-observation-details-3">
            <name>Traffic Filtering (Observation) Details</name>
            <t>N/A</t>
          </section>
          <section anchor="sampling-distribution-3">
            <name>Sampling Distribution</name>
            <t>Sampling method: Continuous sampling (e.g., collect underlying Level 0 service metrics every 10 seconds)</t>
          </section>
          <section anchor="runtime-parameters-and-data-format-3">
            <name>Runtime Parameters and Data Format</name>
            <t>CATS Service Contact Instance ID (CSCI-ID): an identifier of a CATS service contact instance. According to <xref target="I-D.ietf-cats-framework"/>, a unicast IP address is one example of such an identifier. (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC9911"/>)</t>
            <t>Service_Instance_IP: Service instance IP address (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC9911"/>)</t>
            <t>Measurement_Window: Metric measurement time window (Units: seconds, milliseconds; Format: uint64; Default: 10 seconds)</t>
          </section>
          <section anchor="roles-3">
            <name>Roles</name>
            <t>Service contact instance: Collects Level 0 service raw metrics and calculates the Level 1 service normalized score ("level1_service") according to service/provider-specific aggregation and normalization strategies.</t>
            <t>C-NMA: Not required for this metric.</t>
          </section>
        </section>
        <section anchor="output-2">
          <name>Output</name>
          <t>This category specifies all details of the output of measurements using the metric.</t>
          <section anchor="type-3">
            <name>Type</name>
            <t>Singleton value</t>
          </section>
          <section anchor="reference-definition-7">
            <name>Reference Definition</name>
            <t>Output format: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.4.2</t>
            <t>Score semantics: 0-3 (Low service capability, not recommended for steering), 4-7 (Medium service capability, optional for steering), 8-10 (High service capability, priority for steering)</t>
          </section>
          <section anchor="metric-units-3">
            <name>Metric Units</name>
            <t>Unitless</t>
          </section>
          <section anchor="calibration-3">
            <name>Calibration</name>
            <t>Calibration method: Conduct benchmark calibration based on representative service workload profiles (fixed request mixes and known-good baselines) to align the mapping from Level 0 service metrics to the Level 1 score, such that score deviation across measurement agents within the same administrative domain is minimized (e.g., less than 0.1 over repeated test rounds). Calibration MAY include failure/overload scenarios (e.g., simulated dependency failures or saturation) to ensure score behavior is consistent with operational intent.</t>
          </section>
        </section>
        <section anchor="administrative-items-3">
          <name>Administrative Items</name>
          <section anchor="status-3">
            <name>Status</name>
            <t>Current</t>
          </section>
          <section anchor="requester-3">
            <name>Requester</name>
            <t>To-be-assigned</t>
          </section>
          <section anchor="revision-3">
            <name>Revision</name>
            <t>1.0</t>
          </section>
          <section anchor="revision-date-3">
            <name>Revision Date</name>
            <t>2026-01-20</t>
          </section>
          <section anchor="comments-and-remarks-3">
            <name>Comments and Remarks</name>
            <t>None</t>
          </section>
        </section>
      </section>
      <section anchor="cats-level-1-composed-metric">
        <name>CATS Level 1 Metric Registry Entry: Composed</name>
        <t>This section gives an initial Registry Entry for the CATS Level 1 metric in the <em>composed</em> category.</t>
        <section anchor="summary-4">
          <name>Summary</name>
          <t>This category includes multiple indexes to the Registry Entry: the element ID, Metric Name, URI, Metric Description, Change Controller, and Metric Version.</t>
          <section anchor="id-identifier-4">
            <name>ID (Identifier)</name>
            <t>IANA has allocated the Identifier XXX for the Named Metric Entry in this section. See the next Section for mapping to Names.</t>
          </section>
          <section anchor="name-4">
            <name>Name</name>
            <t>Comb_Passive_CATS-Level 1_Composed_RFCXXXXsecY_Unitless_Singleton</t>
            <t>Naming Rule Explanation</t>
            <ul spacing="normal">
              <li>
                <t>Comb: Metric type (Combined Score)</t>
              </li>
              <li>
                <t>Passive: Measurement method</t>
              </li>
              <li>
                <t>CATS-Level 1: Metric level (CATS Metric Framework Level 1)</t>
              </li>
              <li>
                <t>Composed: Metric category (Composed)</t>
              </li>
              <li>
                <t>RFCXXXXsecY: Specification reference (To-be-assigned RFC number and section number)</t>
              </li>
              <li>
                <t>Unitless: Metric has no units</t>
              </li>
              <li>
                <t>Singleton: Metric is a single value for the composed category</t>
              </li>
            </ul>
          </section>
          <section anchor="uri-4">
            <name>URI</name>
            <t>To-be-assigned.</t>
          </section>
          <section anchor="description-4">
            <name>Description</name>
            <t>This metric represents a single normalized score for the <em>composed</em> category within CATS (Level 1). A composed metric is derived by combining multiple lower-level metrics that may span different categories (e.g., compute, communication, and service) and/or multiple components along the request path.</t>
            <t>Typical examples of composed metrics include (but are not limited to) end-to-end delay, application-level response time, or other synthesized indicators that are computed as a function of multiple contributing factors (e.g., the sum of compute processing delay and network transmission delay along the selected path).</t>
            <t>The composed Level 1 score is obtained by applying an implementation-specific aggregation function over the selected contributing Level 0 metrics (and/or previously computed Level 1 category metrics), followed by a normalization function that yields a unitless score. Higher values indicate better composed capability according to the provider's normalization strategy.</t>
          </section>
          <section anchor="change-controller-4">
            <name>Change Controller</name>
            <t>IETF</t>
          </section>
          <section anchor="version-4">
            <name>Version</name>
            <t>1.0</t>
          </section>
        </section>
        <section anchor="metric-definition-4">
          <name>Metric Definition</name>
          <section anchor="reference-definition-8">
            <name>Reference Definition</name>
            <t><xref target="I-D.ietf-cats-metric-definition"/></t>
            <t>Core referenced sections: Section 3.3 (Level 1 Metric Definition), Section 4.2 (Aggregation and Normalization Functions), Section 4.4.2 (Level 1 Metric Representations)</t>
          </section>
          <section anchor="fixed-parameters-4">
            <name>Fixed Parameters</name>
            <ul spacing="normal">
              <li>
                <t>Normalization score range: 0-10 (0 indicates the poorest composed capability, 10 indicates the optimal composed capability)</t>
              </li>
              <li>
                <t>Data precision: non-negative integer</t>
              </li>
              <li>
                <t>Metric type: "level1_composed"</t>
              </li>
              <li>
                <t>Level: Level 1</t>
              </li>
              <li>
                <t>Metric units: Unitless</t>
              </li>
            </ul>
          </section>
        </section>
        <section anchor="method-of-measurement-4">
          <name>Method of Measurement</name>
          <t>This category includes columns for references to relevant sections of the RFC(s) and any supplemental information needed to ensure an unambiguous method for implementations.</t>
          <section anchor="reference-methods-4">
            <name>Reference Methods</name>
            <t>Raw Metrics collection: Collect contributing Level 0 raw metrics from the relevant sources across categories. For example, compute- and service-related Level 0 metrics may be collected by a C-SMA using platform-specific telemetry systems (e.g., Prometheus <xref target="Prometheus"/>), while communication-related Level 0 metrics may be collected by a C-NMA using network telemetry and protocols (e.g., NETCONF <xref target="RFC6241"/>, IPFIX <xref target="RFC7011"/>), and/or using network performance metric definitions and registries such as <xref target="RFC8911"/>, <xref target="RFC8912"/>, and <xref target="RFC9439"/> where applicable.</t>
            <t>Aggregation logic (within composed category): Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.2.1 (e.g., Weighted Average Aggregation) to combine selected contributing metrics into a single intermediate value prior to normalization. The aggregation function MAY combine Level 0 metrics directly, and/or MAY take as input one or more Level 1 category metrics (e.g., "level1_computing" and "level1_communication"). The selection of contributing metrics, any weights used, and the composition model (e.g., sum of delays, bottleneck/maximum, or weighted utility) are implementation-specific.</t>
            <t>Normalization logic: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.2.2 (e.g., Sigmoid Normalization or Min-max scaling) to map the aggregated composed value into the fixed score range.</t>
            <t>The reference method aggregates and normalizes the selected contributing metrics to generate a single Level 1 composed score ("level1_composed").</t>
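            <t>As an illustration only (not part of the registry entry), the "sum of delays" composition model with sigmoid normalization mentioned above can be sketched in Python. The delay components, midpoint, and steepness are hypothetical, since this entry leaves the composition model and parameters implementation-specific:</t>
            <sourcecode type="python"><![CDATA[
```python
import math

# Sketch of a Level 1 composed score built from a "sum of delays" composition
# followed by sigmoid normalization (illustrative only). The delay components,
# midpoint, and steepness are hypothetical; the registry entry leaves the
# composition model and its parameters implementation-specific.

def composed_delay_ms(compute_delay_ms, network_delay_ms):
    """Composition model: end-to-end delay as the sum of compute processing
    delay and network transmission delay along the selected path."""
    return compute_delay_ms + network_delay_ms

def sigmoid_score(delay_ms, midpoint_ms=50.0, steepness=0.1, score_max=10):
    """Sigmoid normalization: lower delay maps to a higher 0..score_max score."""
    s = 1.0 / (1.0 + math.exp(steepness * (delay_ms - midpoint_ms)))
    return round(score_max * s)

total = composed_delay_ms(compute_delay_ms=12.0, network_delay_ms=18.0)
level1_composed = sigmoid_score(total)
```
]]></sourcecode>
            <t>With these hypothetical inputs, a 30 ms composed delay maps to a score of 9 ("High composed capability"), while very large delays approach 0; the sigmoid midpoint and steepness control where the score transitions between the low, medium, and high bands.</t>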
          </section>
          <section anchor="packet-stream-generation-4">
            <name>Packet Stream Generation</name>
            <t>N/A</t>
          </section>
          <section anchor="traffic-filtering-observation-details-4">
            <name>Traffic Filtering (Observation) Details</name>
            <t>N/A</t>
          </section>
          <section anchor="sampling-distribution-4">
            <name>Sampling Distribution</name>
            <t>Sampling method: Continuous sampling (e.g., collect underlying contributing metrics every 10 seconds)</t>
          </section>
          <section anchor="runtime-parameters-and-data-format-4">
            <name>Runtime Parameters and Data Format</name>
            <t>CATS Service Contact Instance ID (CSCI-ID): an identifier of a CATS service contact instance. According to <xref target="I-D.ietf-cats-framework"/>, a unicast IP address is one example of such an identifier. (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC9911"/>)</t>
            <t>Service_Instance_IP: Service instance IP address (format: ipv4-address-no-zone or ipv6-address-no-zone, complying with <xref target="RFC9911"/>)</t>
            <t>Measurement_Window: Metric measurement time window (Units: seconds, milliseconds; Format: uint64; Default: 10 seconds)</t>
          </section>
          <section anchor="roles-4">
            <name>Roles</name>
            <t>C-SMA: Collects Level 0 service and compute raw metrics that may contribute to the composed score, and MAY calculate the Level 1 composed score ("level1_composed") when it has access to the required inputs.</t>
            <t>C-NMA: Collects Level 0 communication raw metrics that may contribute to the composed score, and MAY calculate the Level 1 composed score ("level1_composed") when it has access to the required inputs.</t>
            <t>CATS Controller (or other CATS component): MAY compute the Level 1 composed score when the contributing metrics originate from multiple agents and are combined at a common computation point.</t>
          </section>
        </section>
        <section anchor="output-3">
          <name>Output</name>
          <t>This category specifies all details of the output of measurements using the metric.</t>
          <section anchor="type-4">
            <name>Type</name>
            <t>Singleton value</t>
          </section>
          <section anchor="reference-definition-9">
            <name>Reference Definition</name>
            <t>Output format: Refer to <xref target="I-D.ietf-cats-metric-definition"/> Section 4.4.2</t>
            <t>Score semantics: 0-3 (Low composed capability, not recommended for steering), 4-7 (Medium composed capability, optional for steering), 8-10 (High composed capability, priority for steering)</t>
          </section>
          <section anchor="metric-units-4">
            <name>Metric Units</name>
            <t>Unitless</t>
          </section>
          <section anchor="calibration-4">
            <name>Calibration</name>
            <t>Calibration method: Conduct benchmark calibration based on representative end-to-end test profiles (fixed request mixes and controlled network/compute conditions) to align the mapping from contributing metrics to the Level 1 composed score. The calibration goal is to minimize score deviation across measurement agents and computation points within the same administrative domain (e.g., less than 0.1 over repeated test rounds). Calibration MAY include failure and saturation scenarios (e.g., compute overload, network congestion, and dependency failures) to ensure the composed score behavior is consistent with operational intent.</t>
          </section>
        </section>
        <section anchor="administrative-items-4">
          <name>Administrative Items</name>
          <section anchor="status-4">
            <name>Status</name>
            <t>Current</t>
          </section>
          <section anchor="requester-4">
            <name>Requester</name>
            <t>To-be-assigned</t>
          </section>
          <section anchor="revision-4">
            <name>Revision</name>
            <t>1.0</t>
          </section>
          <section anchor="revision-date-4">
            <name>Revision Date</name>
            <t>2026-01-20</t>
          </section>
          <section anchor="comments-and-remarks-4">
            <name>Comments and Remarks</name>
            <t>None</t>
          </section>
        </section>
      </section>
    </section>
    <section anchor="security-considerations">
      <name>Security Considerations</name>
      <t>The CATS metrics defined in this document are dynamic and potentially sensitive. To prevent stability attacks (e.g., rapid metric churn), implementations MUST support aggregation, dampening, and threshold-triggered updates. To protect against disclosure or tampering, metric collection and distribution MUST use encryption, integrity protection, and authentication among C-SMA, C-NMA, and receivers. C-SMAs MUST authenticate the service instances they report on. False reporting SHOULD be mitigated via secondary validation.</t>
    </section>
    <section anchor="iana-considerations">
      <name>IANA Considerations</name>
      <t>This document defines several CATS metric registry entries. IANA is requested to create a new registry titled "CATS Metrics" under a new "Computing-Aware Traffic Steering (CATS)" heading.</t>
      <t>The initial entries for this registry are defined in <xref target="cats-metrics-registry"/> as follows:</t>
      <t><xref target="cats-level-2-metric-registry"/>: CATS L2 Metric Registry Entry</t>
      <t><xref target="cats-level-1-computing-metric"/>: CATS L1 Metric Registry Entry: Computing</t>
      <t><xref target="cats-level-1-communication-metric"/>: CATS L1 Metric Registry Entry: Communication</t>
      <t><xref target="cats-level-1-service-metric"/>: CATS L1 Metric Registry Entry: Service</t>
      <t><xref target="cats-level-1-composed-metric"/>: CATS L1 Metric Registry Entry: Composed</t>
      <t>For each entry, IANA is requested to assign a unique Identifier (defined in each subsection) from the registry's assignment pool.</t>
      <t>All metric entries have the following common attributes: Name, URI, Description, Change Controller (IETF), and Version. The naming convention and structure follow the definitions in each respective subsection of <xref target="cats-metrics-registry"/>.</t>
    </section>
  </middle>
  <back>
    <references anchor="sec-combined-references">
      <name>References</name>
      <references anchor="sec-normative-references">
        <name>Normative References</name>
        <reference anchor="RFC2119" xml:base="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.2119.xml">
          <front>
            <title>Key words for use in RFCs to Indicate Requirement Levels</title>
            <author fullname="S. Bradner" initials="S." surname="Bradner"/>
            <date month="March" year="1997"/>
            <abstract>
              <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="2119"/>
          <seriesInfo name="DOI" value="10.17487/RFC2119"/>
        </reference>
        <reference anchor="RFC5835">
          <front>
            <title>Framework for Metric Composition</title>
            <author fullname="A. Morton" initials="A." role="editor" surname="Morton"/>
            <author fullname="S. Van den Berghe" initials="S." role="editor" surname="Van den Berghe"/>
            <date month="April" year="2010"/>
            <abstract>
              <t>This memo describes a detailed framework for composing and aggregating metrics (both in time and in space) originally defined by the IP Performance Metrics (IPPM), RFC 2330, and developed by the IETF. This new framework memo describes the generic composition and aggregation mechanisms. The memo provides a basis for additional documents that implement the framework to define detailed compositions and aggregations of metrics that are useful in practice. This document is not an Internet Standards Track specification; it is published for informational purposes.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="5835"/>
          <seriesInfo name="DOI" value="10.17487/RFC5835"/>
        </reference>
        <reference anchor="RFC6241">
          <front>
            <title>Network Configuration Protocol (NETCONF)</title>
            <author fullname="R. Enns" initials="R." role="editor" surname="Enns"/>
            <author fullname="M. Bjorklund" initials="M." role="editor" surname="Bjorklund"/>
            <author fullname="J. Schoenwaelder" initials="J." role="editor" surname="Schoenwaelder"/>
            <author fullname="A. Bierman" initials="A." role="editor" surname="Bierman"/>
            <date month="June" year="2011"/>
            <abstract>
              <t>The Network Configuration Protocol (NETCONF) defined in this document provides mechanisms to install, manipulate, and delete the configuration of network devices. It uses an Extensible Markup Language (XML)-based data encoding for the configuration data as well as the protocol messages. The NETCONF protocol operations are realized as remote procedure calls (RPCs). This document obsoletes RFC 4741. [STANDARDS-TRACK]</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="6241"/>
          <seriesInfo name="DOI" value="10.17487/RFC6241"/>
        </reference>
        <reference anchor="RFC7011">
          <front>
            <title>Specification of the IP Flow Information Export (IPFIX) Protocol for the Exchange of Flow Information</title>
            <author fullname="B. Claise" initials="B." role="editor" surname="Claise"/>
            <author fullname="B. Trammell" initials="B." role="editor" surname="Trammell"/>
            <author fullname="P. Aitken" initials="P." surname="Aitken"/>
            <date month="September" year="2013"/>
            <abstract>
              <t>This document specifies the IP Flow Information Export (IPFIX) protocol, which serves as a means for transmitting Traffic Flow information over the network. In order to transmit Traffic Flow information from an Exporting Process to a Collecting Process, a common representation of flow data and a standard means of communicating them are required. This document describes how the IPFIX Data and Template Records are carried over a number of transport protocols from an IPFIX Exporting Process to an IPFIX Collecting Process. This document obsoletes RFC 5101.</t>
            </abstract>
          </front>
          <seriesInfo name="STD" value="77"/>
          <seriesInfo name="RFC" value="7011"/>
          <seriesInfo name="DOI" value="10.17487/RFC7011"/>
        </reference>
        <reference anchor="RFC7471">
          <front>
            <title>OSPF Traffic Engineering (TE) Metric Extensions</title>
            <author fullname="S. Giacalone" initials="S." surname="Giacalone"/>
            <author fullname="D. Ward" initials="D." surname="Ward"/>
            <author fullname="J. Drake" initials="J." surname="Drake"/>
            <author fullname="A. Atlas" initials="A." surname="Atlas"/>
            <author fullname="S. Previdi" initials="S." surname="Previdi"/>
            <date month="March" year="2015"/>
            <abstract>
              <t>In certain networks, such as, but not limited to, financial information networks (e.g., stock market data providers), network performance information (e.g., link propagation delay) is becoming critical to data path selection.</t>
              <t>This document describes common extensions to RFC 3630 "Traffic Engineering (TE) Extensions to OSPF Version 2" and RFC 5329 "Traffic Engineering Extensions to OSPF Version 3" to enable network performance information to be distributed in a scalable fashion. The information distributed using OSPF TE Metric Extensions can then be used to make path selection decisions based on network performance.</t>
              <t>Note that this document only covers the mechanisms by which network performance information is distributed. The mechanisms for measuring network performance information or using that information, once distributed, are outside the scope of this document.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="7471"/>
          <seriesInfo name="DOI" value="10.17487/RFC7471"/>
        </reference>
        <reference anchor="RFC8174" xml:base="https://bib.ietf.org/public/rfc/bibxml/reference.RFC.8174.xml">
          <front>
            <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
            <author fullname="B. Leiba" initials="B." surname="Leiba"/>
            <date month="May" year="2017"/>
            <abstract>
              <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
            </abstract>
          </front>
          <seriesInfo name="BCP" value="14"/>
          <seriesInfo name="RFC" value="8174"/>
          <seriesInfo name="DOI" value="10.17487/RFC8174"/>
        </reference>
        <reference anchor="RFC8911">
          <front>
            <title>Registry for Performance Metrics</title>
            <author fullname="M. Bagnulo" initials="M." surname="Bagnulo"/>
            <author fullname="B. Claise" initials="B." surname="Claise"/>
            <author fullname="P. Eardley" initials="P." surname="Eardley"/>
            <author fullname="A. Morton" initials="A." surname="Morton"/>
            <author fullname="A. Akhter" initials="A." surname="Akhter"/>
            <date month="November" year="2021"/>
            <abstract>
              <t>This document defines the format for the IANA Registry of Performance
Metrics. This document also gives a set of guidelines for Registered
Performance Metric requesters and reviewers.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8911"/>
          <seriesInfo name="DOI" value="10.17487/RFC8911"/>
        </reference>
        <reference anchor="RFC8912">
          <front>
            <title>Initial Performance Metrics Registry Entries</title>
            <author fullname="A. Morton" initials="A." surname="Morton"/>
            <author fullname="M. Bagnulo" initials="M." surname="Bagnulo"/>
            <author fullname="P. Eardley" initials="P." surname="Eardley"/>
            <author fullname="K. D'Souza" initials="K." surname="D'Souza"/>
            <date month="November" year="2021"/>
            <abstract>
              <t>This memo defines the set of initial entries for the IANA Registry of
Performance Metrics. The set includes UDP Round-Trip Latency and
Loss, Packet Delay Variation, DNS Response Latency and Loss, UDP
Poisson One-Way Delay and Loss, UDP Periodic One-Way Delay and Loss,
ICMP Round-Trip Latency and Loss, and TCP Round-Trip Delay and Loss.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="8912"/>
          <seriesInfo name="DOI" value="10.17487/RFC8912"/>
        </reference>
        <reference anchor="RFC9439">
          <front>
            <title>Application-Layer Traffic Optimization (ALTO) Performance Cost Metrics</title>
            <author fullname="Q. Wu" initials="Q." surname="Wu"/>
            <author fullname="Y. Yang" initials="Y." surname="Yang"/>
            <author fullname="Y. Lee" initials="Y." surname="Lee"/>
            <author fullname="D. Dhody" initials="D." surname="Dhody"/>
            <author fullname="S. Randriamasy" initials="S." surname="Randriamasy"/>
            <author fullname="L. Contreras" initials="L." surname="Contreras"/>
            <date month="August" year="2023"/>
            <abstract>
              <t>The cost metric is a basic concept in Application-Layer Traffic
Optimization (ALTO), and different applications may use different
types of cost metrics. Since the ALTO base protocol (RFC 7285)
defines only a single cost metric (namely, the generic "routingcost"
metric), if an application wants to issue a cost map or an endpoint
cost request in order to identify a resource provider that offers
better performance metrics (e.g., lower delay or loss rate), the base
protocol does not define the cost metric to be used.</t>
              <t>This document addresses this issue by extending the specification to
provide a variety of network performance metrics, including network
delay, delay variation (a.k.a. jitter), packet loss rate, hop count,
and bandwidth.</t>
              <t>There are multiple sources (e.g., estimations based on measurements
or a Service Level Agreement) available for deriving a performance
metric. This document introduces an additional "cost-context" field
to the ALTO "cost-type" field to convey the source of a performance
metric.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="9439"/>
          <seriesInfo name="DOI" value="10.17487/RFC9439"/>
        </reference>
        <reference anchor="RFC9911">
          <front>
            <title>Common YANG Data Types</title>
            <author fullname="J. Schönwälder" initials="J." role="editor" surname="Schönwälder"/>
            <date month="December" year="2025"/>
            <abstract>
              <t>This document defines a collection of common data types to be used with the YANG data modeling language. It includes several new type definitions and obsoletes RFC 6991.</t>
            </abstract>
          </front>
          <seriesInfo name="RFC" value="9911"/>
          <seriesInfo name="DOI" value="10.17487/RFC9911"/>
        </reference>
        <reference anchor="I-D.ietf-cats-framework">
          <front>
            <title>A Framework for Computing-Aware Traffic Steering (CATS)</title>
            <author fullname="Cheng Li" initials="C." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Zongpeng Du" initials="Z." surname="Du">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Mohamed Boucadair" initials="M." surname="Boucadair">
              <organization>Orange</organization>
            </author>
            <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
              <organization>Telefonica</organization>
            </author>
            <author fullname="John Drake" initials="J." surname="Drake">
              <organization>Independent</organization>
            </author>
            <date day="2" month="April" year="2026"/>
            <abstract>
              <t>   This document describes a framework for Computing-Aware Traffic
   Steering (CATS).  Specifically, the document identifies a set of CATS
   functional components, describes their interactions, and provides
   illustrative workflows of the control and data planes.  The framework
   covers only the case of a single service provider.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-cats-framework-24"/>
        </reference>
        <reference anchor="I-D.ietf-cats-metric-definition">
          <front>
            <title>CATS Metrics Definition</title>
            <author fullname="Kehan Yao" initials="K." surname="Yao">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Cheng Li" initials="C." surname="Li">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
              <organization>Telefonica</organization>
            </author>
            <author fullname="Jordi Ros-Giralt" initials="J." surname="Ros-Giralt">
              <organization>Qualcomm Europe, Inc.</organization>
            </author>
            <author fullname="Guanming Zeng" initials="G." surname="Zeng">
              <organization>Huawei Technologies</organization>
            </author>
            <date day="8" month="May" year="2026"/>
            <abstract>
              <t>   Computing-Aware Traffic Steering (CATS) is a traffic engineering
   approach that optimizes the steering of traffic to a service instance
   by considering the dynamic state of computing and network resources.
   To enable such decisions, CATS components exchange metrics that
   describe resource conditions affecting service instance selection.
   This document focuses on compute and communication metrics for CATS
   and defines a hierarchical abstraction of these metrics to improve
   interoperability, scalability, and operational simplicity.  It does
   not aim to standardize raw infrastructure (Level 0) metrics; instead,
   it specifies higher-level representations that can be derived from
   raw measurements using aggregation and normalization functions.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-cats-metric-definition-07"/>
        </reference>
      </references>
      <references anchor="sec-informative-references">
        <name>Informative References</name>
        <reference anchor="I-D.ietf-cats-usecases-requirements">
          <front>
            <title>Computing-Aware Traffic Steering (CATS) Problem Statement, Use Cases, and Requirements</title>
            <author fullname="Kehan Yao" initials="K." surname="Yao">
              <organization>China Mobile</organization>
            </author>
            <author fullname="Luis M. Contreras" initials="L. M." surname="Contreras">
              <organization>Telefonica</organization>
            </author>
            <author fullname="Hang Shi" initials="H." surname="Shi">
              <organization>Huawei Technologies</organization>
            </author>
            <author fullname="Shuai Zhang" initials="S." surname="Zhang">
              <organization>China Unicom</organization>
            </author>
            <author fullname="Qing An" initials="Q." surname="An">
              <organization>Alibaba Group</organization>
            </author>
            <date day="2" month="February" year="2026"/>
            <abstract>
              <t>   Distributed computing enhances service response time and energy
   efficiency by utilizing diverse computing facilities for compute-
   intensive and delay-sensitive services.  To optimize throughput and
   response time, "Computing-Aware Traffic Steering" (CATS) selects
   servers and directs traffic based on compute capabilities and
   resources, rather than static dispatch or connectivity metrics alone.
   This document outlines the problem statement and scenarios for CATS
   within a single domain, and drives requirements for the CATS
   framework.

              </t>
            </abstract>
          </front>
          <seriesInfo name="Internet-Draft" value="draft-ietf-cats-usecases-requirements-14"/>
        </reference>
        <reference anchor="performance-metrics" target="https://www.iana.org/assignments/performance-metrics/performance-metrics.xhtml">
          <front>
            <title>performance-metrics</title>
            <author>
              <organization/>
            </author>
            <date year="2020" month="March" day="19"/>
          </front>
        </reference>
        <reference anchor="DMTF" target="https://www.dmtf.org/">
          <front>
            <title>DMTF</title>
            <author>
              <organization/>
            </author>
            <date year="1998"/>
          </front>
        </reference>
        <reference anchor="Prometheus" target="https://prometheus.io/">
          <front>
            <title>Prometheus</title>
            <author>
              <organization/>
            </author>
            <date year="2012"/>
          </front>
        </reference>
        <reference anchor="Min-max-sigmoid" target="https://doi.org/10.1016/C2013-0-18660-6">
          <front>
            <title>Data Mining: Concepts and Techniques (Fourth Edition)</title>
            <author>
              <organization/>
            </author>
            <date year="2023"/>
          </front>
        </reference>
      </references>
    </references>
    <?line 1360?>

<section anchor="appendix-level-0">
      <name>Level 0 Metric Examples</name>
      <t>Several metric definitions that can serve as Level 0 metrics have been developed within the compute and communication industries, as well as through standardization efforts such as those of the <xref target="DMTF"/>. This section provides illustrative examples.</t>
      <section anchor="compute-raw-metrics">
        <name>Compute Raw Metrics</name>
        <t>This section uses CPU frequency as an example to illustrate the representation of raw computing metrics. The metric type is labeled compute_CPU_frequency, its unit is GHz, and its format is a four-octet floating-point value. The corresponding metric fields are defined as follows:</t>
        <figure anchor="fig-compute-raw-metric">
          <name>An Example for Compute Raw Metrics</name>
          <artwork><![CDATA[
Fields:
      Metric_Type: compute_CPU_frequency
      Level: Level 0
      Format: floating point
      Length: four octets
      Unit: GHz
      Source: nominal
      Value: 2.2
]]></artwork>
        </figure>
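        <t>The field layout above can be sketched in code. The fragment below is a minimal, non-normative Python illustration: the class and field names are invented for readability, and the four-octet floating-point format is assumed to be IEEE 754 single precision in network byte order.</t>
        <sourcecode type="python"><![CDATA[
import struct
from dataclasses import dataclass

@dataclass
class Level0Metric:
    """Illustrative container for the metric fields above (not normative)."""
    metric_type: str
    level: int
    fmt: str          # "floating point" or "unsigned integer"
    unit: str
    source: str
    value: float

    def encode_value(self) -> bytes:
        """Encode Value into four octets, big-endian (assumed)."""
        if self.fmt == "floating point":
            return struct.pack(">f", self.value)   # IEEE 754 single precision
        return struct.pack(">I", int(self.value))  # 32-bit unsigned integer

cpu = Level0Metric("compute_CPU_frequency", 0, "floating point",
                   "GHz", "nominal", 2.2)
wire = cpu.encode_value()
assert len(wire) == 4    # matches "Length: four octets"
]]></sourcecode>
        <t>Decoding reverses the operation with struct.unpack(">f", wire)[0]; because 2.2 is not exactly representable in single precision, a round trip returns a value that is only approximately 2.2.</t>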
      </section>
      <section anchor="communication-raw-metrics">
        <name>Communication Raw Metrics</name>
        <t>This section takes the total transmitted bytes (TxBytes) as an example to illustrate the representation of communication raw metrics. The metric is named "communication type_TxBytes", its unit is megabytes (MB), its format is an unsigned integer, and its value occupies four octets. The source of the metric is "Directly measured" and the statistic is "mean". Example:</t>
        <figure anchor="fig-network-raw-metric">
          <name>An Example for Communication Raw Metrics</name>
          <artwork><![CDATA[
Fields:
      Metric_Type: "communication type_TxBytes"
      Level: Level 0
      Format: unsigned integer
      Length: four octets
      Unit: MB
      Source: Directly measured
      Statistics: mean
      Value: 100
]]></artwork>
        </figure>
      </section>
      <section anchor="delay-raw-metrics">
        <name>Delay Raw Metrics</name>
        <t>Delay is a synthesized metric influenced by computing, storage access, and network transmission. It usually refers to the overall processing duration between the arrival time of a specific service request and the departure time of the corresponding service response. The metric is named "delay_raw", its format is floating point, its unit is microseconds, and its value occupies four octets. For example:</t>
        <figure anchor="fig-delay-raw-metric">
          <name>An Example for Delay Raw Metrics</name>
          <artwork><![CDATA[
Fields:
      Metric_Type: "delay_raw"
      Level: Level 0
      Format: floating point
      Length: four octets
      Unit: microsecond
      Source: aggregation
      Statistics: max
      Value: 231.5
]]></artwork>
        </figure>
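        <t>Because this example uses Source "aggregation" and Statistics "max", the reported value can be derived from per-request measurements. The following non-normative Python sketch assumes each delay sample is the departure time of a response minus the arrival time of the corresponding request, in microseconds; the timestamps are invented for illustration.</t>
        <sourcecode type="python"><![CDATA[
import struct

# Hypothetical (arrival, departure) timestamps per request, in microseconds.
samples = [(1000.0, 1180.3), (2000.0, 2231.5), (3000.0, 3150.9)]

# Level 0 delay per request: response departure minus request arrival.
delays = [departure - arrival for arrival, departure in samples]

# Statistics "max": report the worst-case delay over the measurement window.
delay_raw = max(delays)   # 231.5 in this example

# Encode as a four-octet floating-point value (assumed IEEE 754, big-endian).
wire = struct.pack(">f", delay_raw)
assert len(wire) == 4
]]></sourcecode>
        <t>Reporting the maximum rather than the mean makes the metric conservative: steering decisions based on it do not underestimate the worst-case service latency observed in the window.</t>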
      </section>
    </section>
    <section anchor="contributors" numbered="false" toc="include" removeInRFC="false">
      <name>Contributors</name>
      <contact initials="M." surname="Boucadair" fullname="Mohamed Boucadair">
        <organization>Orange</organization>
        <address>
          <email>mohamed.boucadair@orange.com</email>
        </address>
      </contact>
      <contact initials="Z." surname="Du" fullname="Zongpeng Du">
        <organization>China Mobile</organization>
        <address>
          <email>duzongpeng@chinamobile.com</email>
        </address>
      </contact>
      <contact initials="H." surname="Shi" fullname="Hang Shi">
        <organization>Huawei</organization>
        <address>
          <email>shihang9@huawei.com</email>
        </address>
      </contact>
    </section>
  </back>
  <!-- ##markdown-source:
/fDDD5YGOBTbNJOnAd9w+gIhMBhH1jOZfF0xZz/B7LGdUseEf3DW8qvnMWEG
r5DWck/f/qsXTw5hDD9AB/98pbmer7jKcIVlF+F9bPZFDRR+/BpYW4p1SVaU
lo7nM23bHg5J+XA78Jh0OwrygJjZ4Vd/MLYxxiO2gxJMtsyWPIxNe4MfhfUi
I9ISfDPzaT44MwM+AAsDg5f06hf2RpmI/BW2qmSww8EVzDgBs8RMP6WOfcBD
EPWwAREf2ArzFf3udV18NhPeFrHkHRzustU4B5MSBP1y/9tKFj2hroh347ok
v3iJt+maRZ6dLAxyH5s5lFLHmmTdUj3k9nmu0yDDgWcQoD8Z3XHMBbyIPwT7
2fR6tj6nStCB5niZdJ3O9e4Hp4sc6YQCOTz25rL1dyzrqh3yHYVOHMAef3z6
RH4VcdDr7Q13WXhZkaJiXp58YbnT/6l5DabIVqck3r7tHcrdAfy6Zd9yZKXB
g+FDyw1h6pjra6dvH3843I+2NzyvvSPDf0IZe89t5p9/BINfkTsOkFojTlbc
XlskxKsJsr4eiH2IrnJ9ROfftXLqiKruZDSBC2NBXl0B0aqe+FmrTMBUqRdZ
KXpfiFzKLTeMACjBlUVBnGyXcgwxWxFjiV4NnS/EZMzEA+DJ04gXZ8msxoR0
Uf/dpbkbTMOTArJ714x5ZcLxSB59tpvbXjng33PuuVJ8qgQNF7pK0WWOkim6
8N0qQtWqHD9IiuhzMOexJgVM4urK/fH2Laqvv9YgVcF4pCJczWF1XRbVHpYt
5Bl40W48Moynj08Pnz19wsduv9h/SJeCHz1/cvQDf/Xl7h58tdM4PkgVzUfR
jXchtI7+tF9cd0+CBEecRpKlthm+4X0IpmwZ+f7xpLZL6Bl1TbdPoYFxnI7l
1svYmlml6jHZrch6ios0KjoKBz6nK7+ikwpc3kX0LQ+JRNfT+wfyjN4X/SRJ
K7kw+tkZ8p8czn7ENV38V060JMkjr7R9r2e/ZnqNSOwmGe0WW8VEFn/c4Cyl
DoJOKy9DWCXYC7ngyskwohfJFY4a9Xqb31M9IgvXWXVaxkK3XfMW7GF04B97
veZK7j5r0zGCMUfPbchFT7jb9GY6qmsHMIy2pxL6SpYXDwfy2iDLBz+LAQDf
f9H8ni+8l0O76OhwQbKvad/AevBsXikJXh09H1n62Ho23ig/xCDaqdTWCvMz
x2lxJUV7Gw06UJDCAli1KE0T+etPLkgIeuOLh39CLYmJ6KMursnpsmu6d9wK
2XJTKSv3x9nrNv1d2cpX9fmjdZMjeYDg35CZTxecd4zmBuH6wUbz/MSlXhjC
IJvNNUpE0ARoNERYLp4pYyEV2/QLcSn4fFpTkTt/G/vV8lFqTNpDoUEpCzlR
6ZcFZUEGHg2Fs0RI+oZ9t/3GY4qU6z3VcKNd55ljD4cPoNsxXzQklWpHdBpn
+/vgFE5/DfYSnMLZPm6evnE3vjafp9M329+Fp276UfepG6GE7D3aYr2euk5q
LoMGOVMV4f3hS3TwGTALIBvPwc0/x5W3T9mDLRZPqtBgLKmwJ58aQQ5P83iy
4xlW3lrrTTcXiQ0VMI8xtyHfAftwYXQKk+0O96JtlE7xGWtAaQK0LKuSCsOJ
dK0kHVXd0fo+ITJzhMiMaje6Ww/mz+WkLAdRmSY0U62DOEPo1v5+kYTOhPsO
1RRw4v7u/heD3b3Bvv5+SIwgpTZeGKQn6lmYThPc2esGd0ZeCnSA82jOK9bA
YO59D0CPu3mNYfG7to+7dkd/AoHeHwiEl/t1gUB7r+yq/xo4CJtvwEH2OsHb
g0F7twGD9nZ6dx3r2jctn2zb3/7QqJFlApc2rlO4GVD6dXiS9ty1B7twpr0W
zkSq/FYXB6rtfvj85f1v4X+NCwO58o9XkbjrumWE99nDsSTzDhnrHbw74V3o
609Kdt72aIsFcqlGbyLtEyA20cr1tubUVFjbqXUwZUPITObo5ebZ6jk+E7kj
tFjux8STIs8XOyGW1nRSLE94xdPbVSv1XoQAOBtSThFQzR4VkoOTUqm/c2jN
SkA23/GzskFCMThXfxxUbiNY7oHdPO8blusH5uO+68dq+eBQ6IeB8TrW9DpA
r+PxW0B7A1/RjKKt5mmhLXykkcfsXqrZIQys1f/jSOF6ae0Dcm2JrWK5LxXr
Ze4KJbq7xn8TjJEy/n6qE5A7hALYW8+lGmcn+seHApKsQ+3ubIAMSso21ejc
QEOESTgUpsUCCXyTGBoB5G5RyD28VOV07otd74RytxZCFtPKaxQtuqYGzq8A
N9eULNi2mQR4LbxQZccbaqOeAXtznrh5d9C0TY7Oon7tI3rc+3Zblux8PPin
V5p+PT0+gaKfQNHfDBTtCjdxAToLNzqMb88+3vJQuvZlJzB5X+3GTS9n60JT
n+a2lPtEDGHrUQkm8Z+MQO5fi0DKEr0rEtnx+iaIZMdrvz8y2aglrINUWNLi
lAQZ6rdo50yTFGvSoi2Q4g2ctPoC3wSneDrVmb9j5NJpKl3BSWcN0FMSIHxJ
AAZXVgU5qVQUupFfxlf1URUFzInk44Wsb1JJ+2DElDxkIIVhdOs/Gh71TlC3
IdLWcewPBJO6fj5Bpb85VOqI/5HDpW4i3ZCp+/2jgU293fl7QKdd+/Ld4VNP
mtwAoUqBJ0oUtld4gZROsvMQL01BE2DacRqD+vw31nIr5Kq3+0tybkrrsUuA
mNDVrljybwKvdhQ6+WNBrAHDrYFZmzfeFXgojgf++4Cv3YP+BMB+7ABs57pe
D8J2vvJegFjX8icw9vZg7DWy//YZknw5TwMLvWXeZF8zxLnLLo0gWtNtzFJA
YbIs/fp61PBXBF/07R/7ehuQdw+mlHSWooV4DPIGALfDANh5Z0d4H7NBmEr2
2osDqUvsjWLnJgi4q1zXh4CB12jL9wwF346C+0pBrTEfNo8VHcMiuDu3hpS9
aX9QWLmDvDdBy94rbRjLk5A7eKXDdddeJaV3i2EDkNJys4mrUIOyMS/wbqTK
oRLe9YMy/o8e2e5Ykk/o9id0+wOg291JtiELbo5xey9di3T7IiI00H8lyn3y
CeW+EeVeY07fDute08hmiPeal/9wuLdagwT9KrqtElwQcNEei+S1KNllXFFV
Ab7ud1MwvFsJfwLEfx9AXHVBAwoP60R+CBBcevgEf/+28LcaBB8x8C1T6IC8
5ZePA+y2huZvDXO3dt47AtwqI9ZB2637Nan7uYlTvG8D9YcHPej1uk3TFOhq
0qm9zEQPaPF9oaJNCxatqDDoNHtBdx2g6KfqJ/dRmFPY1iHeeGPcucnIJxOL
UeERac3/qe9fuiD3t8JDNeqkJeL0DmjXEf71+dEHg9WbNZv/QIC6Y+o/WMZy
x8A+weUfNVzeXtFrgPL2w78eIpc2P4HjtwTH1+kN3/m2BRN14QoBf7z6o0Hi
MRgoZm2G8lro/N2ykhc5rCZdy3o/Z/xMJAplPNN1ZniHnxXPnq6c0D075LBQ
xTW7zIFKuy9KsiRPoq+K7b4pCuQB5O5+ZLUa3pCmuk4vwqSqiXFqSy5qdNlT
gTb03FKEuNwHR8gosrRaKRJjCFSFinRdeS2U3zRvfmcQv3XPwXuH7xs99Fuo
fd/uRynlIwQTJzsvbO1GloI0QCzWyQOi25zwV2cb4cVJzCqrnf+omIDS8sNF
A5r8cG0cQB9uwHuqAH4r7L8TPPm4IwDNZfiE/X/C/t879n/StawIIKwtALJh
HMCW614bAbAi4lOm++8RA+jyEW6B/ne9vgHu3/XaHw7x10G2cto1411RFYf3
07Xvg1kOClZvzt0I9O/QtR8P3O/Tny4yVIN9CtsFhuPQpXJsshiW2bt6d1Gz
i6Nm/3ilr5FPUYJ5XzjjVXwwnr5egY2j96564Uvfl6LbYy7MS7ccv7eoBGGO
v3uaPt8I1FHExLtU6EPVMMEuPgUmfvsSJkj3jzkyoXNYU8AEf/o4YhP2Rq7f
pXxJuP3WRScOmveGNYrmju1VtXZnUp0owe+DKAUCMeUy7q7E6NwY8sT6oR/W
9yGpHQ0i2C5pjBnPPs3FdlLVigF0dGHlXl7/9urG3OxtvdE2Xmmud2Xr/QFV
vgPaYzKo8oHByvF8PKAdtOD7OfDO9ASljS2+Uq4y+G8pdytbhMjeyy1T5yvZ
vRDF1J+o3C5A6h/MbB9iQv0M9pTMC91ZqTycUJFeGC0bu5qF4Neul58t8Sy0
g9TbEQTAkiswKpq3F7/HWEww3dahDuECYHxQ4XVJt3wJBW0OkbK3+vHXV2p2
ERxckxXdrdYO4WxQtkW29acoyH9GFKRjSW8o2tJ4+v3UbMFGPwVCbn1KoEOG
dEZB3PyoMpa98swpKb7ORjSI1VWD4L62dYF61H5nxqvqSfKH6yy+v5AKbJdL
ugFls4Nx6wb11A7K5azpUCip4B3LSn8kxyNCs+x3CqoEjPteIiqdWhe9be27
yR4aNLDLhg/j9XBI/iQj7MrLVVmndZUG7doUtFJrc/1bUaAukqwLATkLO2Eg
J5+geyGgAdtJZPWUdIUTyMvMjM/vL+LXyaJekNl2qWtGh0OrjzX8Yxn6/Qd6
1ltrmx794JF1FC8hZfdxxl06CfEp6PIp6PJHqbHu/PHwuroAlRDEmAAw1BEd
xcY32cEcUk8qRsY4d1C6sgEVUiXXVWlff4DkDzsT3HXOa6MYPMMA9ItFLGDH
igaW28zXjsimJnSKF7BPZ0mGkyJ71iIGguGTbc4YAyN1iDkQWXOt6Me0pfsF
/69HtDr9vVsWb2q9v2H1ptZ7f7igloeBNU6yrItpjXUfWPTpvi1UtdHJlnWm
xfrtwtajP51ZHtNdomgkSaDqFkEwJ0m9fbJpbOx9h8PY4bRxrXZQzOazSNis
b10toOQMurKYakfQrHlLQEMO/WfGzHp3UDzUtNEO5aJnxh7YKCapbf0iuT9W
AzaTfFwzsyA38Y2i7Cd7l6+XeM0uzh5YMyfUEt9w17vGFVhm53YJ+YY28YbH
87pABK15v/nxy5NTvaXNd+/60QSMMYPAvE1ONOU8T2HngqKYGdRV9RLvsS1l
ODDSMbYRo/2E17+P05xYAN1JbKygxnRAFmlhLvLMZx4UOGJ48XaxktAaX22K
85SuLAfGNabVV/aiccp9JMOmz0iEFs0dG7ppfsg/yuS9t03n4QZyUvD+aqIR
usJP4rQ08gXKk5Pvnr38/hFasQtYH/aZQCCIyRWDygO1RLdFy7WsFOhrM4nP
CHrramnobj6ffVrXrw65QbojmZmeMLJxYdhpwjuZ7TsVSnjwmb0wWbnFzoc8
unVoixQfXCJDql90ohf2UYxtZ4sODGBWP3O4hnONdyms3NwsfRN3O9a/uhp3
XkD7FtEBhtnLEWLM4+uuQn07kkjxmqtUGw103LFhW7j5vo6uxtrV6DZq0L3W
arRxru/m5sR16ZyqH4nfbKb4Rq9HSCXelozrCWZEJ4+xUGXPDr72Y9Xb3kpT
O2V9JsDvjo+Zcv+fldIWJ4rndP35QaoRQMtUoDp4mzJ/sLdMRiiIPzbfwRDz
wvRBfL4VDIm2MRjCsKKNz5PmzziADTsYxazKKRgq2Dt1of3TUHyoUSeLMTyc
LObw2HmjWbuW6WG+vcFgAFbT+JzUifouGuLXuOPVHbBsQOcmr2Wdd9+iR8ty
wh8L0erMGIzOwXOgWye+vaE6XiwTzz8CZ7NmrLSPW/HSwDrEpSaogzQrEkIy
bBkcfs1MYcvD3AYDRVjBSgRJebai/q6uHh2fPnn7Fn7nQ2cx558TGNhADnEN
vJQRe7wJ3N3aGgEaiR1yxopMx8PzG4knoFLK8LYLipQ65AEY2nZghD394BCu
H/qOrWKhzDILL6UB7zuKz0zK0Vgh9Svo/JV31QaZPNgNhl2sX0Q75tvvfuZG
2VFRHQ1yEcwxEl5kQEoMUYzVvODQ8cQNLZpKENITvYF0/eWXX3pP6JlRL6J/
TLxXpxQ+6hy5PBhEkXblS8UgdJxs6No3slk1H/F94Pm4MuB08C/oeYxw1vL3
CYVQMMYF2zBO5du/42xH0T54Xzjuq1F0Z5rMBhpHgbUZ6CKgmvuvrYNM9w1f
W9/mka23PWKfRtHR9VxU6Z3uwC4YoJIweMXRD8Q4t09ff4MfdtrsBSbU5RrG
WgtRwOJye5xQQGk/0PBW+AJy3St5cIv5gbgKRn4MRl3ELWwff7MzlDXikwmS
uCLhQzpOegmbAFZnXC9X0UNZJgHT+coX8dVdQsfWIz0dIJ7PZMti6Hg8Butm
YfABnoQHMhifrMrNDNg5zR94mpvwYXOGG3Li8TcNRmzNUH+30xvhT1nIqXu7
uwGnihe1Ead2cyPy6x262BoTLgI25a8oX+g8yShg66eMSH+X8wSEc4KqaprW
HJE/WzmZ1ocVyynCxBBVf23SxzB6WdbkmnD2B+H+1q3mS6WDFBJ1N8+gMSNg
VFwUeGKLYU4+d2pvPNQ8b8EDlKPA5YwLUsL6UtWSfu5dzqXRc9Ju+9CYX8FC
bF0vaAXRgvdLu6PAPQM/X0FYHBd8TzsGbRS3Z7xo7wac7kb0wQSsN/AGf3vO
Xxdnx68bIvjB3vDzgLVp9DczdotvgaH/F3Z6YbbCBAEA

-->

</rfc>
