Internet Engineering Task Force (IETF) D. Malas
Request for Comments: 6076 CableLabs
Category: Standards Track A. Morton
ISSN: 2070-1721 AT&T Labs
January 2011
Basic Telephony SIP End-to-End Performance Metrics
Abstract
This document defines a set of metrics and their usage to evaluate
the performance of end-to-end Session Initiation Protocol (SIP) for
telephony services in both production and testing environments. The
purpose of this document is to combine a standard set of common
metrics, allowing interoperable performance measurements, easing the
comparison of industry implementations.
Status of This Memo
This is an Internet Standards Track document.
This document is a product of the Internet Engineering Task Force
(IETF). It represents the consensus of the IETF community. It has
received public review and has been approved for publication by the
Internet Engineering Steering Group (IESG). Further information on
Internet Standards is available in Section 2 of RFC 5741.
Information about the current status of this document, any errata,
and how to provide feedback on it may be obtained at
http://www.rfc-editor.org/info/rfc6076.
Malas & Morton Standards Track [Page 1]
RFC 6076 SIP End-to-End Performance Metrics January 2011
Copyright Notice
Copyright (c) 2011 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
This document may contain material from IETF Documents or IETF
Contributions published or made publicly available before November
10, 2008. The person(s) controlling the copyright in some of this
material may not have granted the IETF Trust the right to allow
modifications of such material outside the IETF Standards Process.
Without obtaining an adequate license from the person(s) controlling
the copyright in such materials, this document may not be modified
outside the IETF Standards Process, and derivative works of it may
not be created outside the IETF Standards Process, except to format
it for publication as an RFC or to translate it into languages other
than English.
Table of Contents
1. Introduction and Scope ..........................................3
2. Terminology .....................................................4
3. Time Interval Measurement and Reporting .........................5
4. SIP Performance Metrics .........................................7
4.1. Registration Request Delay (RRD) ...........................8
4.2. Ineffective Registration Attempts (IRAs) ...................9
4.3. Session Request Delay (SRD) ...............................10
4.3.1. Successful Session Setup SRD .......................11
4.3.2. Failed Session Setup SRD ...........................12
4.4. Session Disconnect Delay (SDD) ............................13
4.5. Session Duration Time (SDT) ...............................15
4.5.1. Successful Session Duration SDT ....................15
4.5.2. Failed Session Completion SDT ......................17
4.6. Session Establishment Ratio (SER) .........................18
4.7. Session Establishment Effectiveness Ratio (SEER) ..........19
4.8. Ineffective Session Attempts (ISAs) .......................20
4.9. Session Completion Ratio (SCR) ............................21
5. Additional Considerations ......................................23
5.1. Metric Correlations .......................................23
5.2. Back-to-Back User Agent (B2BUA) ...........................23
5.3. Authorization and Authentication ..........................23
5.4. Forking ...................................................24
5.5. Data Collection ...........................................24
5.6. Testing Documentation .....................................25
6. Conclusions ....................................................25
7. Security Considerations ........................................25
8. Contributors ...................................................26
9. Acknowledgements ...............................................26
10. References ....................................................26
10.1. Normative References .....................................26
10.2. Informative References ...................................27
1. Introduction and Scope
SIP has become a widely used standard among many service providers,
vendors, and end users in the telecommunications industry. Although
there are many different standards for measuring the performance of
telephony signaling protocols, such as Signaling System 7 (SS7), none
of the metrics specifically address SIP.
The scope of this document is limited to the definitions of a
standard set of metrics for measuring and reporting SIP performance
from an end-to-end perspective in a telephony environment. The
metrics introduce a common foundation for understanding and
quantifying performance expectations between service providers,
vendors, and the users of services based on SIP. The intended
audience for this document can be found among network operators, who
often collect information on the responsiveness of the network to
customer requests for services.
Measurements of the metrics described in this document are affected
by variables external to SIP. The following is a non-exhaustive list
of examples:
o Network connectivity
o Switch and router performance
o Server processes and hardware performance
This document defines a list of pertinent metrics for varying aspects
of a telephony environment. They may be used individually or as a
set based on the usage of SIP within the context of a given
telecommunications service.
The metrics defined in this document DO NOT take into consideration
the impairment or failure of actual application processing of a
request or response. The metrics do not distinguish application
processing time from other sources of delay, such as packet transfer
delay.
Metrics designed to quantify single device application processing
performance are beyond the scope of this document.
This document does not provide any numerical objectives or acceptance
threshold values for the SIP performance metrics defined below, as
these items are beyond the scope of IETF activities, in general.
The metrics defined in this document are applicable in scenarios
where the SIP messages launched (into a network under test) are
dedicated messages for testing purposes, or where the messages are
user-initiated and a portion of the live traffic present. These
two scenarios are sometimes referred to as active and passive
measurement, respectively.
2. Terminology
The following terms and conventions will be used throughout this
document:
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119 [RFC2119].
End-to-End - This is described as two or more elements utilized for
initiating a request, receiving the request, and responding to the
request. It encompasses elements as necessary to be involved in a
session dialog between the originating user agent client (UAC),
destination user agent server (UAS), and any interim proxies (may
also include back-to-back user agents (B2BUAs)). This may be
relative to a single operator's set of elements or may extend to
encompass all elements (if beyond a single operator's network)
associated with a session.
Session - As described in RFC 3261 [RFC3261], SIP is used primarily
to request, create, and conclude sessions. "These sessions include
Internet telephone calls, multimedia distribution, and multimedia
conferences". The metrics within this document measure the
performance associated with the SIP dialogs necessary to establish
these sessions; therefore, they are titled as Session Request Delay,
Session Disconnect Delay, etc. Although the titles of many of the
metrics include this term, they are specifically measuring the
signaling aspects only. Each session is identified by a unique
"Call-ID", "To", and "From" header field tag.
Session Establishment - Session establishment occurs when a 200 OK
response from the target UA has been received, in response to the
originating UA's INVITE setup request, indicating the session setup
request was successful.
Session Setup - As referenced within the sub-sections of Section 4.3
in this document, session setup is the set of messages and included
parameters directly related to the process of a UA requesting to
establish a session with a corresponding UA. This is also described
as a set of steps in order to establish "ringing" [RFC3261].
3. Time Interval Measurement and Reporting
Many of the metrics defined in this memo utilize a clock to assess
the time interval between two events. This section defines time-
related terms and reporting requirements.
t1 - start time
This is the time instant (when a request is sent) that begins a
continuous time interval. t1 occurs when the designated request has
been processed by the SIP application and the first bit of the
request packet has been sent from the UA or proxy (and is externally
observable at some logical or physical interface).
t1 represents the time at which each request-response test begins,
and SHALL be used to designate the time of day when a particular
measurement was conducted (e.g., the Session Request Delay at "t1"
(at some specific UA interface) was measured to be X ms).
t4 - end time
This is the time instant that concludes the continuous time interval
begun when the related request is sent. t4 occurs when the last bit
of the designated response is received by the SIP application at the
requesting device (and is externally observable at some logical or
physical interface).
Note: The designations t2 and t3 are reserved for future use at
another interface involved in satisfying a request.
Section 10.1 of [RFC2330] describes time-related issues in
measurements, and defines the errors that can be attributed to the
clocks themselves. These definitions are used in the material below.
Time-of-Day Accuracy
As defined above, t1 is associated with the start of a request and
also serves as the time-of-day stamp associated with a single
specific measurement. The clock offset [RFC2330] is the difference
between t1 and a recognized primary source of time, such as UTC
(offset = t1 - UTC).
When measurement results will be correlated with other results or
information using time-of-day stamps, then the time clock that
supplies t1 SHOULD be synchronized to a primary time source, to
minimize the clock's offset. The clocks used at the different
measurement points SHOULD be synchronized to each other, to minimize
the relative offset (as defined in [RFC2330]). The clock's offset and
the relative offset MUST be reported with each measurement.
Time Interval Accuracy
The accuracy of the t4-t1 interval is also critical to maintain and
report. The difference between a clock's offsets at t1 and t4 is one
source of error for the measurement and is associated with the
clock's skew [RFC2330].
A stable and reasonably accurate clock is needed to make the time
interval measurements required by this memo. This source of error
SHOULD be constrained to less than +/- 1 ms, implying 1-part-per-1000
frequency accuracy for a 1-second interval. This implies that
greater stability is required as the length of the t4-t1 interval
increases, in order to constrain the error to be less than +/- 1 ms.
There are other important aspects of clock operation:
1. Synchronization protocols require some ability to make
adjustments to the local clock. However, these adjustments
(clock steps or slewing) can cause large errors if they occur
during the t1 to t4 measurement interval. Clock correction
SHOULD be suspended during a t1 to t4 measurement interval,
unless the time interval accuracy requirement above will be met.
Alternatively, a measurement SHOULD NOT be performed during clock
correction, unless the time interval accuracy requirement above
will be met.
2. If a free-running clock is used to make the time interval
measurement, then the time of day reported with the measurement
(which is normally timestamp t1) SHOULD be derived from a
different clock that meets the time-of-day accuracy requirements
described above.
The physical operation of reading time from a clock may be
constrained by the delay to service the interrupt. Therefore, if the
accuracy of the time stamp read at t1 or t4 includes the interrupt
delay, this source of error SHOULD be known and included in the error
assessment.
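The +/- 1 ms budget above ties required clock frequency accuracy
directly to the length of the measured interval. The following is a
minimal sketch of that relationship; the function name and units are
ours, not part of this specification:

```python
def max_skew_error_ms(interval_s: float, freq_accuracy_ppm: float) -> float:
    """Worst-case time-interval error, in milliseconds, contributed by
    a clock whose frequency is off by freq_accuracy_ppm parts per
    million over an interval of interval_s seconds."""
    return interval_s * freq_accuracy_ppm * 1e-6 * 1000.0

# 1-part-per-1000 (1000 ppm) accuracy over a 1-second interval keeps
# the skew contribution within the +/- 1 ms budget.
```

By the same arithmetic, a 10-second t4-t1 interval would need roughly
100 ppm (1 part per 10,000) frequency accuracy to stay within the same
budget, illustrating why longer intervals demand greater stability.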
4. SIP Performance Metrics
In regard to all of the following metrics, t1 begins with the first
associated SIP message sent by either UA, and is not reset if the UA
must retransmit the same message, within the same transaction,
multiple times. The first associated SIP message indicates the t1
associated with the user or application expectation relative to the
request.
Some metrics are calculated using messages from different
transactions in order to measure across actions such as redirection
and failure recovery. The end time is typically based on a
successful end-to-end provisional response, a successful final
response, or a failure final response for which there is no recovery.
The individual metrics detail which message to base the end time on.
The authentication method used to establish the SIP dialog will
change the message exchanges. The example message exchanges used do
not attempt to describe all of the various authentication types.
Since authentication is frequently used, SIP Digest authentication
was used for example purposes.
In regard to all of the metrics, the accuracy and granularity of the
output values are related to the accuracy and granularity of the
input values. Some of the metrics below are defined by a ratio.
When the denominator of this ratio is 0, the metric is undefined.
While these metrics do not specify the sample size, this should be
taken into consideration. These metrics will provide a better
indication of performance with larger sample sets. For example, some
SIP Service Providers (SSPs) [RFC5486] may choose to collect input
over an hourly, daily, weekly, or monthly timeframe, while another
SSP may choose to perform metric calculations over a varying set of
SIP dialogs.
4.1. Registration Request Delay (RRD)
Registration Request Delay (RRD) is a measurement of the delay in
responding to a UA REGISTER request. RRD SHALL be measured and
reported only for successful REGISTER requests, while Ineffective
Registration Attempts (Section 4.2) SHALL be reported for failures.
This metric is measured at the originating UA. The output value of
this metric is numerical and SHOULD be stated in units of
milliseconds. The RRD is calculated using the following formula:
RRD = Time of Final Response - Time of REGISTER Request
In a successful registration attempt, RRD is defined as the time
interval from when the first bit of the initial REGISTER message
containing the necessary information is passed by the originating UA
to the intended registrar, until the last bit of the 200 OK is
received indicating the registration attempt has completed
successfully. This dialog includes an expected authentication
challenge prior to receiving the 200 OK as described in the following
registration flow examples.
The following message exchange provides an example of identifiable
events necessary for inputs in calculating RRD during a successful
registration completion:
UA1 Registrar
| |
|REGISTER |
t1---->|--------------------->|
/\ | 401|
|| |<---------------------|
RRD |REGISTER |
|| |--------------------->|
\/ | 200|
t4---->|<---------------------|
| |
Note: Networks with elements using primarily Digest authentication
will exhibit different RRD characteristics than networks with
elements primarily using other authentication mechanisms (such as
Identity). Operators monitoring RRD in networks with a mixture of
authentication schemes should take note that the RRD measurements
will likely have a multimodal distribution.
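The RRD formula and the Digest-challenge flow above can be sketched as
follows. This is a minimal illustration, not part of the
specification; the `SipEvent` record and its `kind` and `timestamp_ms`
fields are a hypothetical input format:

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class SipEvent:
    kind: str            # hypothetical: "REGISTER", "401", "200", ...
    timestamp_ms: float  # first bit sent / last bit received

def registration_request_delay(events: list[SipEvent]) -> float:
    """RRD = time of final 200 OK - time of the *initial* REGISTER.
    The 401 challenge and the re-sent REGISTER fall inside the
    interval, matching the flow in Section 4.1."""
    t1 = next(e.timestamp_ms for e in events if e.kind == "REGISTER")
    t4 = next(e.timestamp_ms for e in reversed(events) if e.kind == "200")
    return t4 - t1
```

Note that t1 is taken from the first REGISTER, so the authentication
round trip is counted as part of the delay, as the flow diagram shows.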
4.2. Ineffective Registration Attempts (IRAs)
Ineffective registration attempts are utilized to detect failures or
impairments causing the inability of a registrar to receive a UA
REGISTER request. This metric is measured at the originating UA.
The output value of this metric is numerical and SHOULD be reported
as a percentage of registration attempts.
This metric is calculated as a percentage of total REGISTER requests.
The IRA percentage is calculated using the following formula:
# of IRAs
IRA % = ----------------------------- x 100
Total # of REGISTER Requests
A failed registration attempt is defined as a final failure response
to the initial REGISTER request. It usually indicates a failure
received from the destination registrar or interim proxies, or
failure due to a timeout of the REGISTER request at the originating
UA. A failure response is described as a 4XX (excluding 401, 402,
and 407 non-failure challenge response codes), 5XX, or possible 6XX
message. A timeout failure is identified by the Timer F expiring.
IRAs may be used to detect problems in downstream signaling
functions, which may be impairing the REGISTER message from reaching
the intended registrar; or, it may indicate a registrar has become
overloaded and is unable to respond to the request.
The following message exchange provides a timeout example of an
identifiable event necessary for input as a failed registration
attempt:
UA1 Registrar
| |
|REGISTER |
|--------------------->|
|REGISTER |
|--------------------->|
|REGISTER |
|--------------------->|
| |
Failure ---->|***Timer F Expires |
| |
In the previous message exchange, UA1 retries a REGISTER request
multiple times before the timer expires, indicating the failure.
Only the first REGISTER request MUST be used as input to the IRA
calculation.  Subsequent REGISTER retries are identified
by the same transaction identifier (the same topmost Via header field
branch parameter value) and MUST be ignored for purposes of metric
calculation. This ensures an accurate representation of the metric
output.
The following message exchange provides a registrar servicing failure
example of an identifiable event necessary for input as a failed
registration attempt:
UA1 Registrar
| |
|REGISTER |
|--------------------->|
| |
| |
| |
| |
| 503|
Failure ---->|<---------------------|
| |
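The failure classification and percentage above can be sketched as
follows. This is a minimal illustration under the assumption that each
sent REGISTER is recorded as a (topmost Via branch, final status)
pair, with `None` standing in for a Timer F expiry; the function names
are ours:

```python
from __future__ import annotations

def is_failure_response(status: int | None) -> bool:
    """A failed registration attempt: 4XX (excluding the 401, 402, and
    407 challenge codes), 5XX, 6XX, or no response at all (Timer F
    expiry, modeled here as status None)."""
    if status is None:               # Timer F expired
        return True
    if status in (401, 402, 407):    # non-failure challenge responses
        return False
    return 400 <= status <= 699

def ira_percent(attempts: list[tuple[str, int | None]]) -> float:
    """IRA % = failed attempts / total REGISTER requests x 100.
    Retransmissions share the same topmost Via branch parameter and
    are collapsed into a single attempt, per Section 4.2."""
    first_by_branch: dict[str, int | None] = {}
    for branch, status in attempts:
        first_by_branch.setdefault(branch, status)
    failures = sum(1 for s in first_by_branch.values()
                   if is_failure_response(s))
    return 100.0 * failures / len(first_by_branch)
```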
4.3. Session Request Delay (SRD)
Session Request Delay (SRD) is utilized to detect failures or
impairments causing delays in responding to a UA session request.
SRD is measured for both successful and failed session setup requests
as this metric usually relates to a user experience; however, SRD for
session requests ending in a failure MUST NOT be combined in the same
result with successful requests. The duration associated with
success and failure responses will likely vary substantially, and the
desired output time associated with each will be significantly
different in many cases. This metric is similar to Post-Selection
Delay defined in [E.721], and it is measured at the originating UA
only. The output value of this metric MUST indicate whether the
output is for successful or failed session requests and SHOULD be
stated in units of seconds. The SRD is calculated using the
following formula:
SRD = Time of Status Indicative Response - Time of INVITE
4.3.1. Successful Session Setup SRD
In a successful request attempt, SRD is defined as the time interval
from when the first bit of the initial INVITE message containing the
necessary information is sent by the originating user agent to the
intended mediation or destination agent, until the last bit of the
first provisional response is received indicating an audible or
visual status of the initial session setup request. (Note: In some
cases, the initial INVITE may be forked. Section 5.4 provides
information for consideration on forking.) In SIP, the message
indicating status would be a non-100 Trying provisional message
received in response to an INVITE request. In some cases, a non-100
Trying provisional message is not received, but rather a 200 message
is received as the first status message instead. In these
situations, the 200 message would be used to calculate the interval.
In most circumstances, this metric relies on receiving a non-100
Trying message. The use of the Provisional Response ACKnowledgement
(PRACK) method [RFC3262] MAY improve the quality and consistency of
the results.
The following message exchange provides an example of identifiable
events necessary for inputs in calculating SRD during a successful
session setup request without a redirect (i.e., 3XX message):
UA1 UA2
| |
|INVITE |
t1---->|--------------------->|
/\ | |
|| | |
SRD | |
|| | |
\/ | 180|
t4---->|<---------------------|
| |
The following message exchange provides an example of identifiable
events necessary for inputs in calculating SRD during a successful
session setup with a redirect (e.g., 302 Moved Temporarily):
UA1 Redirect Server UA2
| | |
|INVITE | |
t1---->|--------------------->| |
/\ | 302| |
|| |<---------------------| |
|| |ACK | |
SRD |--------------------->| |
|| |INVITE |
|| |------------------------------------------->|
\/ | 180|
t4---->|<-------------------------------------------|
4.3.2. Failed Session Setup SRD
In a failed request attempt, SRD is defined as the time interval from
when the first bit of the initial INVITE message containing the
necessary information is sent by the originating agent or user to the
intended mediation or destination agent, until the last bit of the
first provisional response or a failure indication response is
received.  A
failure response is described as a 4XX (excluding 401, 402, and 407
non-failure challenge response codes), 5XX, or possible 6XX message.
A change in the metric output might indicate problems in downstream
signaling functions, which may be impairing the INVITE message from
reaching the intended UA or may indicate changes in end-point
behavior. While this metric calculates the delay associated with a
failed session request, the metric Ineffective Session Attempts
(Section 4.8) is used for calculating a ratio of session attempt
failures.
The following message exchange provides an example of identifiable
events necessary for inputs in calculating SRD during a failed
session setup attempt without a redirect (i.e., 3XX message):
UA1 UA2
| |
|INVITE |
t1---->|--------------------->|
/\ | |
|| | |
SRD | |
|| | |
\/ | 480|
t4---->|<---------------------|
| |
The following message exchange provides an example of identifiable
events necessary for inputs in calculating SRD during a failed
session setup attempt with a redirect (e.g., 302 Moved Temporarily):
UA1 Redirect Server UA2
| | |
|INVITE | |
t1---->|--------------------->| |
/\ | 302| |
|| |<---------------------| |
|| |ACK | |
SRD |--------------------->| |
|| |INVITE |
|| |------------------------------------------->|
\/ | 480|
t4---->|<-------------------------------------------|
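The end-event selection used in Sections 4.3.1 and 4.3.2 can be
sketched as follows. This is a minimal illustration (the function and
parameter names are ours): 100 Trying is skipped as non-indicative,
and a 3XX is allowed to pass through so that, as in the redirect flows
above, the interval continues into the follow-on INVITE transaction:

```python
from __future__ import annotations

def srd(t1_ms: float, responses: list[tuple[float, int]]) -> float:
    """SRD = t4 - t1, where t4 is the first status-indicative
    response: a non-100 provisional (e.g., 180 or 183), or the first
    final response when no provisional arrives (including failure
    finals such as 480, per Section 4.3.2).
    responses: (receive timestamp in ms, status code), arrival order."""
    for ts, status in responses:
        if status == 100:
            continue   # 100 Trying is not status-indicative
        if 300 <= status <= 399:
            continue   # redirect: measurement continues on the new leg
        return ts - t1_ms
    raise ValueError("no status-indicative response received")
```

Successful and failed results computed this way MUST still be reported
separately, as the section above requires.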
4.4. Session Disconnect Delay (SDD)
This metric is utilized to detect failures or impairments delaying
the time necessary to end a session. SDD is measured for both
successful and failed session disconnects; however, SDD for session
disconnects ending in a failure MUST NOT be combined in the same
result with successful disconnects. The duration associated with
success and failure results will likely vary substantially, and the
desired output time associated with each will be significantly
different in many cases. It can be measured from either end-point UA
involved in the SIP dialog. The output value of this metric is
numerical and SHOULD be stated in units of milliseconds. The SDD is
calculated using the following formula:
SDD = Time of 2XX or Timeout - Time of Completion Message (BYE)
SDD is defined as the interval between the first bit of the sent
session completion message, such as a BYE, and the last bit of the
subsequently received 2XX response. In some cases, a recoverable
error response, such as a 503 Retry-After, may be received. In such
situations, these responses should not be used as the end time for
this metric calculation. Instead, the successful (2XX) response
related to the recovery message is used. The following message
exchanges provide an example of identifiable events necessary for
inputs in calculating SDD during a successful session completion:
Measuring SDD at the originating UA (UA1) -
UA1 UA2
| |
|INVITE |
|--------------------->|
| 180|
|<---------------------|
| 200|
|<---------------------|
|ACK |
|--------------------->|
|BYE |
t1---->|--------------------->|
/\ | |
|| | |
SDD | |
|| | |
\/ | 200|
t4---->|<---------------------|
Measuring SDD at the target UA (UA2) -
UA1 UA2
| |
|INVITE |
|--------------------->|
| 180|
|<---------------------|
| 200|
|<---------------------|
|ACK |
|--------------------->|
| BYE|
|<---------------------|<----t1
| | /\
| | ||
| | SDD
| | ||
|200 | \/
|--------------------->|<----t4
In some cases, no response is received after a session completion
message is sent and potentially retried. In this case, the
completion message, such as a BYE, results in a Timer F expiration.
Sessions ending in this manner SHOULD be excluded from the metric
calculation.
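   As an illustration only (not part of this specification), the SDD
   bookkeeping described above can be sketched as follows, keeping
   success and failure results in separate buckets and excluding
   Timer F expirations:

```python
def sdd_ms(bye_sent_ms: float, final_response_ms: float) -> float:
    """SDD = Time of final response (2XX) - Time of BYE, in ms."""
    return final_response_ms - bye_sent_ms

# Success and failure results MUST NOT be combined; a BYE that ended
# only in a Timer F expiration contributes to neither bucket.
successful_sdd = [sdd_ms(5000.0, 5040.0)]   # BYE answered by a 200
failed_sdd = [sdd_ms(7000.0, 7120.0)]       # BYE answered by a failure
print(successful_sdd, failed_sdd)  # [40.0] [120.0]
```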
4.5. Session Duration Time (SDT)
This metric is used to detect problems (e.g., poor audio quality)
causing short session durations. SDT is measured for both successful
and failed session completions. It can be measured from either end-
point UA involved in the SIP dialog. This metric is similar to Call
Hold Time, and it is traditionally calculated as Average Call Hold
Time (ACHT) in telephony applications of SIP. The output value of
this metric is numerical and SHOULD be stated in units of seconds.
The SDT is calculated using the following formula:
SDT = Time of BYE or Timeout - Time of 200 OK response to INVITE
This metric does not calculate the duration of sessions leveraging
early media. For example, some automated response systems only use
early media by responding with a SIP 183 Session Progress message
with the Session Description Protocol (SDP) connecting the
originating UA with the automated message. Usually, in these
sessions the originating UA never receives a 200 OK, and the message
exchange ends with the originating UA sending a CANCEL.
4.5.1. Successful Session Duration SDT
In a successful session completion, SDT is calculated as an average
and is defined as the duration of a dialog defined by the interval
between receipt of the first bit of a 200 OK response to an INVITE,
and receipt of the last bit of an associated BYE message indicating
dialog completion. Retransmissions of the 200 OK and ACK messages
due to network impairments do not reset the metric timers.
The following message exchanges provide an example of identifiable
events necessary for inputs in calculating SDT during a successful
session completion. (The message exchanges are changed between the
originating and target UAs to provide varying examples.):
Measuring SDT at the originating UA (UA1) -
UA1 UA2
| |
|INVITE |
|--------------------->|
| 180|
|<---------------------|
| 200|
t1---->|<---------------------|
/\ |ACK |
|| |--------------------->|
|| | |
SDT | |
|| | |
|| | |
\/ | BYE|
t4---->|<---------------------|
| |
When measuring SDT at the target UA (UA2), it is defined by the
interval between sending the first bit of a 200 OK response to an
INVITE, and receipt of the last bit of an associated BYE message
indicating dialog completion. If UA2 initiates the BYE, then it is
defined by the interval between sending the first bit of a 200 OK
response to an INVITE, and sending the first bit of an associated BYE
message indicating dialog completion. This is illustrated in the
following example message exchange:
UA1 UA2
| |
|INVITE |
|--------------------->|
| 180|
|<---------------------|
| 200|
|<---------------------|<----t1
|ACK | /\
|--------------------->| ||
| | ||
| | SDT
| | ||
| | ||
| BYE| \/
|<---------------------|<----t4
| |
(In these two examples, t1 is the same even if either UA receives the
BYE instead of sending it.)
4.5.2. Failed Session Completion SDT
In some cases, no response is received after a session completion
message is sent and potentially retried. In this case, SDT is
defined as the interval between receiving the first bit of a 200 OK
response to an INVITE, and the resulting Timer F expiration. The
following message exchanges provide an example of identifiable events
necessary for inputs in calculating SDT during a failed session
completion attempt:
Measuring SDT at the originating UA (UA1) -
UA1 UA2
| |
|INVITE |
|--------------------->|
| 180|
|<---------------------|
| 200|
t1---->|<---------------------|
/\ |ACK |
|| |--------------------->|
|| |BYE |
SDT |--------------------->|
|| |BYE |
|| |--------------------->|
\/ | |
t4---->|***Timer F Expires |
When measuring SDT at UA2, SDT is defined as the interval between
sending the first bit of a 200 OK response to an INVITE, and the
resulting Timer F expiration. This is illustrated in the following
example message exchange:
UA1 UA2
| |
|INVITE |
|--------------------->|
| 180|
|<---------------------|
| 200|
|<---------------------|<----t1
| ACK| /\
|--------------------->| ||
| BYE| ||
|<---------------------| SDT
| BYE| ||
|<---------------------| ||
| | \/
| Timer F Expires***|<----t4
Note that in the presence of message loss and retransmission, the
value of this metric measured at UA1 may differ from the value
measured at UA2 up to the value of Timer F.
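   A minimal computation sketch (illustrative only, not part of this
   specification) for the SDT interval and its traditional average:

```python
def sdt_seconds(ok_time_s: float, end_time_s: float) -> float:
    """SDT = Time of BYE (or Timer F expiration, in the failed
    case) - Time of 200 OK response to INVITE, in seconds."""
    return end_time_s - ok_time_s

# Averaging successful completions gives the ACHT-style figure
# mentioned in Section 4.5.
durations = [sdt_seconds(10.0, 130.0), sdt_seconds(50.0, 410.0)]
average_sdt = sum(durations) / len(durations)
print(average_sdt)  # 240.0
```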
4.6. Session Establishment Ratio (SER)
This metric is used to detect the ability of a terminating UA or
downstream proxy to successfully establish sessions per new session
INVITE requests. SER is defined as the ratio of the number of new
session INVITE requests resulting in a 200 OK response, to the total
number of attempted INVITE requests less INVITE requests resulting in
a 3XX response. This metric is similar to the Answer Seizure Ratio
(ASR) defined in [E.411]. It is measured at the originating UA only.
The output value of this metric is numerical and SHOULD be adjusted
to indicate a percentage of successfully established sessions. The
SER is calculated using the following formula:
# of INVITE Requests w/ associated 200 OK
SER = --------------------------------------------------------- x 100
(Total # of INVITE Requests) -
(# of INVITE Requests w/ 3XX Response)
The following message exchange provides an example of identifiable
events necessary for inputs in determining session establishment as
described above:
UA1 UA2
| |
|INVITE |
+----------->|------------------>|
| | 180|
| |<------------------|
Session Established | |
| | |
| | 200|
+----------->|<------------------|
| |
The following is an example message exchange including a SIP 302
Redirect response.
UA1 UA2 UA3
| | |
|INVITE | |
+----------->|------------------>| |
| | | |
INVITE w/ 3XX Response | | |
| | 302| |
+----------->|<------------------| |
| | |
|INVITE |
+----------->|-------------------------------------->|
| | |
| | 180|
Session Established |<--------------------------------------|
| | |
| | 200|
+----------->|<--------------------------------------|
| |
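   The SER formula can be computed as follows (an illustrative sketch,
   not part of this specification):

```python
def ser_percent(invites_with_200: int, total_invites: int,
                invites_with_3xx: int) -> float:
    """SER = 200-answered INVITEs / (total INVITEs - 3XX-answered
    INVITEs) x 100.  The 3XX-answered INVITE is removed from the
    denominator because the redirected re-INVITE, as in the 302
    flow above, is counted in its place."""
    return 100.0 * invites_with_200 / (total_invites - invites_with_3xx)

# The 302 flow above: 2 INVITEs total, 1 answered 3XX, 1 answered 200.
print(ser_percent(1, 2, 1))  # 100.0
```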
4.7. Session Establishment Effectiveness Ratio (SEER)
This metric is complementary to SER, but is intended to exclude the
potential effects of an individual user of the target UA from the
metric. SEER is defined as the ratio of the number of INVITE
requests resulting in a 200 OK response and INVITE requests resulting
in a 480, 486, 600, or 603; to the total number of attempted INVITE
requests less INVITE requests resulting in a 3XX response. The
response codes 480, 486, 600, and 603 were chosen because they
clearly indicate the effect of an individual user of the UA. It is
possible that an individual user could cause a negative effect on
the UA.  For example, the user may have misconfigured the UA,
causing a response code not directly related to an SSP; this cannot
be easily determined from an intermediary B2BUA somewhere between the
originating and terminating UAs. With this in consideration,
response codes such as 401, 407, and 420 (not an exhaustive list)
were not included in the numerator of the metric. This metric is
similar to the Network Effectiveness Ratio (NER) defined in [E.411].
It is measured at the originating UA only. The output value of this
metric is numerical and SHOULD be adjusted to indicate a percentage
of successfully established sessions less common UAS failures.
The SEER is calculated using the following formula:
SEER =
# of INVITE Requests w/ associated 200, 480, 486, 600, or 603
------------------------------------------------------------- x 100
(Total # of INVITE Requests) -
(# of INVITE Requests w/ 3XX Response)
Reference the example flows in Section 4.6.
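   The SEER formula can be computed over a list of final response
   codes, one per INVITE (an illustrative sketch, not part of this
   specification):

```python
# Codes counted as "effective" in the numerator: 200 plus the
# user-attributable failures named above.
SEER_NUMERATOR_CODES = {200, 480, 486, 600, 603}

def seer_percent(final_codes: list) -> float:
    """SEER over final response codes, one per INVITE.  3XX-answered
    INVITEs are excluded from the denominator, as in SER."""
    num = sum(1 for c in final_codes if c in SEER_NUMERATOR_CODES)
    denom = sum(1 for c in final_codes if not 300 <= c <= 399)
    return 100.0 * num / denom

# 200 and 486 count as effective; 503 does not; 302 drops out.
print(round(seer_percent([200, 486, 503, 302]), 1))  # 66.7
```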
4.8. Ineffective Session Attempts (ISAs)
Ineffective session attempts occur when a proxy or agent internally
releases a setup request with a failed or overloaded condition. This
metric is similar to Ineffective Machine Attempts (IMAs) in telephony
applications of SIP, and was adopted from Telcordia GR-512-CORE
[GR-512]. The output value of this metric is numerical and SHOULD be
adjusted to indicate a percentage of ineffective session attempts.
The following failure responses provide a guideline for this
criterion:
o 408 Request Timeout
o 500 Server Internal Error
o 503 Service Unavailable
o 504 Server Time-out
This set was derived in a manner similar to that described in
Section 4.7.
In addition, 408 failure responses may indicate an overloaded state
with a downstream element; however, there are situations other than
overload that may cause an increase in 408 responses.
This metric is calculated as a percentage of total session setup
requests. The ISA percentage is calculated using the following
formula:
# of ISAs
ISA % = ----------------------------- x 100
Total # of Session Requests
The following dialog [RFC3665] provides an example describing message
exchanges of an ineffective session attempt:
UA1 Proxy 1 Proxy 2 UA2
| | | |
|INVITE | | |
|--------------->| | |
| 407| | |
|<---------------| | |
|ACK | | |
|--------------->| | |
|INVITE | | |
|--------------->|INVITE | |
| 100|--------------->|INVITE |
|<---------------| 100|--------------->|
| |<---------------| |
| | |INVITE |
| | |--------------->|
| | | |
| | |INVITE |
| | |--------------->|
| | | |
| | 408| |
| 408|<---------------| |
|<---------------|ACK | |
| |--------------->| |
|ACK | | |
|--------------->| | |
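   The ISA percentage can be computed by classifying each session
   request's final response against the guideline set above (an
   illustrative sketch, not part of this specification):

```python
# Failure responses treated as ineffective session attempts, per the
# guideline list above.
ISA_CODES = {408, 500, 503, 504}

def isa_percent(final_codes: list) -> float:
    """ISA % = ineffective session attempts / total session
    requests x 100, over one final response code per request."""
    isas = sum(1 for c in final_codes if c in ISA_CODES)
    return 100.0 * isas / len(final_codes)

# One 408, as in the flow above, among four setup attempts.
print(isa_percent([200, 200, 408, 200]))  # 25.0
```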
4.9. Session Completion Ratio (SCR)
A session completion is defined as a SIP dialog that completes
without failing due to a lack of response from an intended proxy or
UA. This metric is similar to the Call Completion Ratio (CCR) in
telephony applications of SIP. The output value of this metric is
numerical and SHOULD be adjusted to indicate a percentage of
successfully completed sessions.
This metric is calculated as a percentage of total sessions completed
successfully. The SCR percentage is calculated using the following
formula:
# of Successfully Completed Sessions
SCR % = --------------------------------------- x 100
Total # of Session Requests
The following dialog [RFC3665] provides an example describing the
necessary message exchanges of a successful session completion:
UA1 Proxy 1 Proxy 2 UA2
| | | |
|INVITE | | |
|--------------->| | |
| 407| | |
|<---------------| | |
|ACK | | |
|--------------->| | |
|INVITE | | |
|--------------->|INVITE | |
| 100|--------------->|INVITE |
|<---------------| 100|--------------->|
| |<---------------| |
| | | 180|
| | 180 |<---------------|
| 180|<---------------| |
|<---------------| | 200|
| | 200|<---------------|
| 200|<---------------| |
|<---------------| | |
|ACK | | |
|--------------->|ACK | |
| |--------------->|ACK |
| | |--------------->|
| Both Way RTP Media |
|<================================================>|
| | | BYE|
| | BYE|<---------------|
| BYE|<---------------| |
|<---------------| | |
|200 | | |
|--------------->|200 | |
| |--------------->|200 |
| | |--------------->|
| | | |
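   The SCR percentage is a direct ratio (an illustrative sketch, not
   part of this specification):

```python
def scr_percent(completed_sessions: int,
                total_session_requests: int) -> float:
    """SCR % = successfully completed sessions / total session
    requests x 100.  A session counts as completed when its dialog
    ends without failing due to a lack of response from an intended
    proxy or UA, as in the flow above."""
    return 100.0 * completed_sessions / total_session_requests

print(scr_percent(97, 100))  # 97.0
```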
5. Additional Considerations
5.1. Metric Correlations
These metrics may be used to determine the performance of a domain
and/or user. The following is an example subset of dimensions for
providing further granularity per metric:
o To "user"
o From "user"
o Bi-directional "user"
o To "domain"
o From "domain"
o Bi-directional "domain"
5.2. Back-to-Back User Agent (B2BUA)
A B2BUA may impact the ability to collect these metrics with an end-
to-end perspective. It is necessary to realize that a B2BUA may act
as an originating UAC and terminating UAS, or it may act as a proxy.
In some cases, it may be necessary to consider information collected
from both sides of the B2BUA in order to determine the end-to-end
perspective. In other cases, the B2BUA may act simply as a proxy
allowing data to be derived as necessary for the input into any of
the listed calculations.
5.3. Authorization and Authentication
During the process of setting up a SIP dialog, various authentication
methods may be utilized. These authentication methods will add to
the duration as measured by the metrics, and the length of time will
vary based on those methods. The failures of these authentication
methods will also be captured by these metrics, since SIP is
ultimately used to indicate the success or failure of the
authorization and/or authentication attempt. The metrics in
Section 3 are inclusive of the duration associated with this process,
even if the method is external to SIP. This was included
purposefully, due to its inherent impact on the protocol and the
subsequent SIP dialogs.
5.4. Forking
Forking SHOULD be considered when determining the messages associated
with the input values for the described metrics. If all of the
forked dialogs were used in the metric calculations, the numbers
would skew dramatically. There are two different points of forking,
and each MUST be considered. First, forking may occur at a proxy
downstream from the UA that is being used for metric input values.
The downstream proxy is responsible for forking a message. Then,
this proxy will send provisional (e.g., 180) messages received from
the requests and send the accepted (e.g., 200) response to the UA.
Second, in the cases where the originating UA or proxy is forking the
messages, then it MUST parse the message exchanges necessary for
input into the metrics. For example, it MAY utilize the first INVITE
or set of INVITE messages sent and the first accepted 200 OK.  Tags
will identify this dialog as distinct from the other 200 OK
responses, which are acknowledged and immediately followed by a BYE.  The
application responsible for capturing and/or understanding the input
values MUST utilize these tags to distinguish between dialog
requests.
Note that if an INVITE is forked before reaching its destination,
multiple early dialogs are likely, and multiple confirmed dialogs are
possible (though unlikely). When this occurs, an SRD measurement
should be taken for each dialog that is created (early or confirmed).
5.5. Data Collection
The input necessary for these calculations may be collected in a
number of different manners. It may be collected or retrieved from
call detail records (CDRs) or raw signaling information generated by
a proxy or UA. When using records, time synchronization MUST be
considered between applicable elements.
If these metrics are calculated at individual elements (such as
proxies or endpoints) instead of by a centralized management system,
and the individual elements use different measurement sample sizes,
then the metrics reported for the same event at those elements may
differ significantly.
The information may also be transmitted through the use of network
management protocols like the Simple Network Management Protocol
(SNMP) and via future extensions to the SIP Management Information
Base (MIB) modules [RFC4780], or through a potential new (as yet
undefined) performance metric event package [RFC3265] retrieved via
SUBSCRIBE requests.
Data may be collected for a sample of calls or all calls, and may
also be derived from test call scenarios. These metrics are flexible
based on the needs of the application.
For consistency in calculation of the metrics, elements should expect
to reveal event inputs for use by a centralized management system,
which would calculate the metrics based on a varying set sample size
of inputs received from elements compliant with this specification.
5.6. Testing Documentation
In some cases, these metrics will be used to provide output values to
signify the performance level of a specific SIP-based element. When
using these metrics in a test environment, the environment MUST be
accurately documented for the purposes of replicating any output
values in future testing and/or validation.
6. Conclusions
This document provides a description of common performance metrics
and their defined use with SIP. The use of these metrics will
provide a common viewpoint across all vendors, service providers, and
users. These metrics will likely be utilized in production telephony
SIP environments for providing input regarding Key Performance
Indicators (KPI) and Service Level Agreement (SLA) indications;
however, they may also be used for testing end-to-end SIP-based
service environments.
7. Security Considerations
Security should be considered with respect to protecting the data
used as input to the above calculations.  All
other aspects of security should be considered as described in
RFC 3261 [RFC3261].
Implementers of these metrics MUST realize that these metrics could
be used to describe characteristics of customer and user usage
patterns, and privacy should be considered when collecting,
transporting, and storing them.
8. Contributors
The following people made substantial contributions to this work:
Carol Davids Illinois Institute of Technology
Marian Delkinov Ericsson
Adam Uzelac Global Crossing
Jean-Francois Mule CableLabs
Rich Terpstra Level 3 Communications
9. Acknowledgements
We would like to thank Robert Sparks, John Hearty, and Dean Bayless
for their efforts in reviewing the document and providing insight
regarding clarification of certain aspects described throughout the
document. We also thank Dan Romascanu for his insightful comments
and Vijay Gurbani for agreeing to perform the role of document
shepherd.
10. References
10.1. Normative References
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
Requirement Levels", BCP 14, RFC 2119, March 1997.
[RFC3261] Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston,
A., Peterson, J., Sparks, R., Handley, M., and E.
Schooler, "SIP: Session Initiation Protocol", RFC 3261,
June 2002.
[RFC3262] Rosenberg, J. and H. Schulzrinne, "Reliability of
Provisional Responses in Session Initiation Protocol
(SIP)", RFC 3262, June 2002.
[RFC3265] Roach, A., "Session Initiation Protocol (SIP)-Specific
Event Notification", RFC 3265, June 2002.
[RFC3665] Johnston, A., Donovan, S., Sparks, R., Cunningham, C.,
and K. Summers, "Session Initiation Protocol (SIP) Basic
Call Flow Examples", BCP 75, RFC 3665, December 2003.
[RFC4780] Lingle, K., Mule, J-F., Maeng, J., and D. Walker,
"Management Information Base for the Session Initiation
Protocol (SIP)", RFC 4780, April 2007.
10.2. Informative References
[E.411] ITU-T, "Series E: Overall Network Operation, Telephone
Service, Service Operation and Human Factors", E.411,
March 2000.
[E.721] ITU-T, "Series E: Overall Network Operation, Telephone
Service, Service Operation and Human Factors", E.721,
May 1999.
[GR-512] Telcordia, "LSSGR: Reliability, Section 12", GR-512-
CORE Issue 2, January 1998.
[RFC2330] Paxson, V., Almes, G., Mahdavi, J., and M. Mathis,
"Framework for IP Performance Metrics", RFC 2330,
May 1998.
[RFC5486] Malas, D. and D. Meyer, "Session Peering for Multimedia
Interconnect (SPEERMINT) Terminology", RFC 5486,
March 2009.
Authors' Addresses
Daryl Malas
CableLabs
858 Coal Creek Circle
Louisville, CO 80027
US
Phone: +1 303 661 3302
EMail: d.malas@cablelabs.com
Al Morton
AT&T Labs
200 Laurel Avenue South
Middletown, NJ 07748
US
Phone: +1 732 420 1571
EMail: acmorton@att.com