Internet Engineering Task Force (IETF) O. Bonaventure
Request for Comments: 8041 UCLouvain
Category: Informational C. Paasch
ISSN: 2070-1721 Apple, Inc.
G. Detal
Tessares
January 2017
Use Cases and Operational Experience with Multipath TCP
Abstract
This document discusses both use cases and operational experience
with Multipath TCP (MPTCP) in real networks. It lists several
prominent use cases where Multipath TCP has been considered and is
being used. It also gives insight to some heuristics and decisions
that have helped to realize these use cases and suggests possible
improvements.
Status of This Memo
This document is not an Internet Standards Track specification; it is
published for informational purposes.
This document is a product of the Internet Engineering Task Force
(IETF). It represents the consensus of the IETF community. It has
received public review and has been approved for publication by the
Internet Engineering Steering Group (IESG). Not all documents
approved by the IESG are a candidate for any level of Internet
Standard; see Section 2 of RFC 7841.
Information about the current status of this document, any errata,
and how to provide feedback on it may be obtained at
http://www.rfc-editor.org/info/rfc8041.
Copyright Notice
Copyright (c) 2017 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents
1. Introduction ....................................................3
2. Use Cases .......................................................4
2.1. Datacenters ................................................4
2.2. Cellular/WiFi Offload ......................................5
2.3. Multipath TCP Proxies ......................................8
3. Operational Experience ..........................................9
3.1. Middlebox Interference .....................................9
3.2. Congestion Control ........................................11
3.3. Subflow Management ........................................12
3.4. Implemented Subflow Managers ..............................13
3.5. Subflow Destination Port ..................................15
3.6. Closing Subflows ..........................................16
3.7. Packet Schedulers .........................................17
3.8. Segment Size Selection ....................................18
3.9. Interactions with the Domain Name System ..................19
3.10. Captive Portals ..........................................20
3.11. Stateless Webservers .....................................20
3.12. Load-Balanced Server Farms ...............................21
4. Security Considerations ........................................21
5. References .....................................................23
5.1. Normative References ......................................23
5.2. Informative References ....................................23
Acknowledgements ..................................................30
Authors' Addresses ................................................30
1. Introduction
Multipath TCP was specified in [RFC6824] and five independent
implementations have been developed. As of November 2016, Multipath
TCP has been or is being implemented on the following platforms:
o Linux kernel [MultipathTCP-Linux]
o Apple iOS and macOS
o Citrix load balancers
o FreeBSD [FreeBSD-MPTCP]
o Oracle Solaris
The first three implementations are known to interoperate. Three of
these implementations are open source (Linux kernel, FreeBSD and
Apple's iOS and macOS). Apple's implementation is widely deployed.
Since the publication of [RFC6824] as an Experimental RFC, experience
has been gathered by various network researchers and users about the
operational issues that arise when Multipath TCP is used in today's
Internet.
When the MPTCP working group was created, several use cases for
Multipath TCP were identified [RFC6182]. Since then, other use cases
have been proposed and some have been tested and even deployed. We
describe these use cases in Section 2.
Section 3 focuses on the operational experience with Multipath TCP.
Most of this experience comes from the utilization of the Multipath
TCP implementation in the Linux kernel [MultipathTCP-Linux]. This
open-source implementation has been downloaded and is used by
thousands of users all over the world. Many of these users have
provided direct or indirect feedback by writing documents (scientific
articles or blog messages) or posting to the mptcp-dev mailing list
(see https://listes-2.sipr.ucl.ac.be/sympa/arc/mptcp-dev). This
Multipath TCP implementation is actively maintained and continuously
improved. It is used on various types of hosts, ranging from
smartphones or embedded routers to high-end servers.
The Multipath TCP implementation in the Linux kernel is not, by far,
the most widespread deployment of Multipath TCP. Since September
2013, Multipath TCP has also been supported on smartphones and tablets,
beginning with iOS 7 [IETFJ]. There are likely hundreds of millions
of MPTCP-enabled devices. This Multipath TCP implementation is
currently only used to support the Siri voice recognition/control
application. Some lessons learned from this deployment are described
in [IETFJ].
Section 3 is organized as follows. Supporting the middleboxes was
one of the difficult issues in designing the Multipath TCP protocol.
We explain in Section 3.1 which types of middleboxes the Linux Kernel
implementation of Multipath TCP supports and how it reacts upon
encountering these. Section 3.2 summarizes the MPTCP-specific
congestion controls that have been implemented. Sections 3.3 to 3.7
discuss heuristics and issues with respect to subflow management as
well as the scheduling across the subflows. Section 3.8 explains
some problems that occurred with subflows having different Maximum
Segment Size (MSS) values. Section 3.9 presents issues with respect
to content delivery networks and suggests a solution to this issue.
Finally, Section 3.10 documents an issue with captive portals where
MPTCP will behave suboptimally.
2. Use Cases
Multipath TCP has been tested in several use cases. There is already
an abundant amount of scientific literature on Multipath TCP
[MPTCPBIB]. Several of the papers published in the scientific
literature have identified possible improvements that are worth being
discussed here.
2.1. Datacenters
A first, although initially unexpected, documented use case for
Multipath TCP has been in datacenters [HotNets][SIGCOMM11]. Today's
datacenters are designed to provide several paths between single-
homed servers. The multiplicity of these paths comes from the
utilization of Equal-Cost Multipath (ECMP) and other load-balancing
techniques inside the datacenter. Most of the deployed load-
balancing techniques in datacenters rely on hashes computed over the
five-tuple. Thus, all packets from the same TCP connection follow
the same path and are not reordered. The results in [HotNets]
demonstrate by simulations that Multipath TCP can achieve a better
utilization of the available network by using multiple subflows for
each Multipath TCP session. Although [RFC6182] assumes that at least
one of the communicating hosts has several IP addresses, [HotNets]
demonstrates that Multipath TCP is beneficial when both hosts are
single-homed. This idea is analyzed in more detail in [SIGCOMM11],
where the Multipath TCP implementation in the Linux kernel is
modified to be able to use several subflows from the same IP address.
Measurements in a public datacenter show the quantitative benefits of
Multipath TCP [SIGCOMM11] in this environment.
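To illustrate why varying only the source port is sufficient, the toy
five-tuple hash below mimics what an ECMP load balancer does; the hash
function itself is purely illustrative and is not the one used by any
real router. All packets of a regular TCP connection share a single
five-tuple and therefore a single path, whereas MPTCP subflows that
differ only in their source port may be spread over several
equal-cost paths.

   /*
    * Toy five-tuple hash, for illustration only: real routers use
    * their own (often vendor-specific) hash functions.  A regular TCP
    * connection always hashes to the same path; subflows that differ
    * only in their source port may be mapped to different paths.
    */
   #include <stdint.h>
   #include <stdio.h>

   static unsigned ecmp_path(uint32_t saddr, uint32_t daddr,
                             uint16_t sport, uint16_t dport,
                             uint8_t proto, unsigned nb_paths)
   {
       uint32_t h = 2166136261u;        /* FNV-style mix, illustrative */
       uint32_t fields[4] = { saddr, daddr,
                              ((uint32_t)sport << 16) | dport, proto };

       for (int i = 0; i < 4; i++) {
           h ^= fields[i];
           h *= 16777619u;
       }
       return h % nb_paths;
   }

   int main(void)
   {
       /* same addresses and destination port, four different source
        * ports: the four subflows may follow different paths */
       for (uint16_t sport = 40000; sport < 40004; sport++)
           printf("sport %u -> path %u\n", sport,
                  ecmp_path(0x0a000001, 0x0a000002, sport, 80, 6, 4));
       return 0;
   }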
Although ECMP is widely used inside datacenters, this is not the only
environment where there are different paths between a pair of hosts.
ECMP and other load-balancing techniques such as Link Aggregation
Groups (LAGs) are widely used in today's networks; having multiple
paths between a pair of single-homed hosts is becoming the norm
instead of the exception. Although these multiple paths often have
the same cost (from an IGP metrics viewpoint), they do not
necessarily have the same performance. For example, [IMC13c] reports
the results of a long measurement study showing that load-balanced
Internet paths between the same pair of hosts can have huge delay
differences.
2.2. Cellular/WiFi Offload
A second use case that has been explored by several network
researchers is the cellular/WiFi offload use case. Smartphones or
other mobile devices equipped with two wireless interfaces are a very
common use case for Multipath TCP. As of September 2015, this is also
the largest deployment of MPTCP-enabled devices [IETFJ]. It has been
briefly discussed during IETF 88 [IETF88], but there is no published
paper or report that analyzes this deployment. For this reason, we
only discuss published papers that have mainly used the Multipath TCP
implementation in the Linux kernel for their experiments.
The performance of Multipath TCP in wireless networks was briefly
evaluated in [NSDI12]. One experiment analyzes the performance of
Multipath TCP on a client with two wireless interfaces. This
evaluation shows that when the receive window is large, Multipath TCP
can efficiently use the two available links. However, if the window
becomes smaller, then packets sent on a slow path can block the
transmission of packets on a faster path. In some cases, the
performance of Multipath TCP over two paths can become lower than the
performance of regular TCP over the best performing path. Two
heuristics, reinjection and penalization, are proposed in [NSDI12] to
solve this identified performance problem. These two heuristics have
since been used in the Multipath TCP implementation in the Linux
kernel. [CONEXT13] explored the problem in more detail and revealed
some other scenarios where Multipath TCP can have difficulties in
efficiently pooling the available paths. Improvements to the
Multipath TCP implementation in the Linux kernel are proposed in
[CONEXT13] to cope with some of these problems.
The first experimental analysis of Multipath TCP in a public wireless
environment was presented in [Cellnet12]. These measurements explore
the ability of Multipath TCP to use two wireless networks (real WiFi
and 3G networks). Three modes of operation are compared. The first
mode of operation is the simultaneous use of the two wireless
networks. In this mode, Multipath TCP pools the available resources
and uses both wireless interfaces. This mode provides fast handover
from WiFi to cellular or the opposite when the user moves.
Measurements presented in [CACM14] show that the handover from one
wireless network to another is not an abrupt process. When a host
moves, there are regions where the quality of one of the wireless
networks is weaker than the other, but the host considers this
wireless network to still be up. When a mobile host enters such
regions, its ability to send packets over another wireless network is
important to ensure a smooth handover. This is clearly illustrated
from the packet trace discussed in [CACM14].
Many cellular networks use volume-based pricing; users often prefer
to use unmetered WiFi networks when available instead of metered
cellular networks. [Cellnet12] implements support for the MP_PRIO
option to explore two other modes of operation.
In the backup mode, Multipath TCP opens a TCP subflow over each
interface, but the cellular interface is configured in backup mode.
This implies that data flows only over the WiFi interface when both
interfaces are considered to be active. If the WiFi interface fails,
then the traffic switches quickly to the cellular interface, ensuring
a smooth handover from the user's viewpoint [Cellnet12]. The cost of
this approach is that the WiFi and cellular interfaces are likely to
remain active all the time since all subflows are established over
the two interfaces.
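The sketch below summarizes the preference rule that backup mode
implies. It uses hypothetical data structures rather than the actual
Linux kernel code: a subflow flagged as backup (the cellular subflow
in this use case) is only selected when no regular subflow is usable.

   #include <stdbool.h>
   #include <stddef.h>

   struct subflow {
       bool backup;   /* set on the cellular subflow in this use case */
       bool usable;   /* established and not currently failing        */
   };

   /* Return the subflow to send on: a regular (WiFi) subflow if any is
    * usable, otherwise fall back to a backup (cellular) subflow. */
   static struct subflow *select_subflow(struct subflow *sf, size_t n)
   {
       struct subflow *fallback = NULL;

       for (size_t i = 0; i < n; i++) {
           if (!sf[i].usable)
               continue;
           if (!sf[i].backup)
               return &sf[i];
           if (fallback == NULL)
               fallback = &sf[i];
       }
       return fallback;   /* NULL if no subflow is usable at all */
   }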
The single-path mode is slightly different. This mode benefits from
the break-before-make capability of Multipath TCP. When an MPTCP
session is established, a subflow is created over the WiFi interface.
No packet is sent over the cellular interface as long as the WiFi
interface remains up [Cellnet12]. This implies that the cellular
interface can remain idle and battery capacity is preserved. When
the WiFi interface fails, a new subflow is established over the
cellular interface in order to preserve the established Multipath TCP
sessions. Compared to the backup mode described earlier,
measurements reported in [Cellnet12] indicate that this mode of
operation is characterized by a throughput drop while the cellular
interface is brought up and the subflows are reestablished.
From a protocol viewpoint, [Cellnet12] discusses the problem posed by
the unreliability of the REMOVE_ADDR option and proposes a small
protocol extension to allow hosts to reliably exchange this option.
It would be useful to analyze packet traces to understand whether the
unreliability of the REMOVE_ADDR option poses an operational problem
in real deployments.
Another study of the performance of Multipath TCP in wireless
networks was reported in [IMC13b]. This study uses laptops connected
to various cellular ISPs and WiFi hotspots. It compares various file
transfer scenarios. [IMC13b] observes that 4-path MPTCP outperforms
2-path MPTCP, especially for larger files. However, for three
congestion-control algorithms (LIA, OLIA, and Reno -- see
Section 3.2), there is no significant performance difference for file
sizes smaller than 4 MB.
A different study of the performance of Multipath TCP with two
wireless networks is presented in [INFOCOM14]. In this study the two
networks had different qualities: a good network and a lossy network.
When using two paths with different packet-loss ratios, the Multipath
TCP congestion-control scheme moves traffic away from the lossy link
that is considered to be congested. However, [INFOCOM14] documents
an interesting scenario that is summarized hereafter.
client ----------- path1 -------- server
| |
+--------------- path2 ------------+
Figure 1: Simple Network Topology
Initially, the two paths in Figure 1 have the same quality and
Multipath TCP distributes the load over both of them. During the
transfer, path2 becomes lossy, e.g., because the client moves.
Multipath TCP detects the packet losses and they are retransmitted
over path1. This enables the data transfer to continue over this
path. However, the subflow over path2 is still up and transmits one
packet from time to time. Although the packets lost on path2 have
been retransmitted and acknowledged over the first subflow (at the
MPTCP level), they have not been acknowledged at the TCP level over
the second subflow. To
preserve the continuity of the sequence numbers over the second
subflow, TCP will continue to retransmit these segments until either
they are acknowledged or the maximum number of retransmissions is
reached. This behavior is clearly inefficient and may lead to
blocking since the second subflow will consume window space to be
able to retransmit these packets. [INFOCOM14] proposes a new
Multipath TCP option to solve this problem. In practice, a new TCP
option is probably not required. When the client detects that the
data transmitted over the second subflow has been acknowledged over
the first subflow, it could decide to terminate the second subflow by
sending a RST segment. If the interface associated with this subflow
is still up, a new subflow could be immediately reestablished. It
would then be immediately usable to send new data and would not be
forced to first retransmit the previously transmitted data. As of
this writing, this dynamic management of the subflows is not yet
implemented in the Multipath TCP implementation in the Linux kernel.
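A rough sketch of such a heuristic is shown below, written against
purely hypothetical state variables: if every byte ever mapped onto
the second subflow is already covered by the connection-level
DATA_ACK and the corresponding interface is still up, the subflow
could be reset and immediately re-established.

   #include <stdbool.h>
   #include <stdint.h>

   struct subflow_state {
       uint64_t highest_dss_mapped;  /* highest data sequence number
                                        ever mapped on this subflow  */
       bool     interface_up;
   };

   struct connection_state {
       uint64_t data_acked;          /* cumulative DATA_ACK received */
   };

   /* All data sent on this subflow was acknowledged (possibly via the
    * other subflow): resetting it avoids useless TCP retransmissions. */
   static bool should_reset_subflow(const struct connection_state *conn,
                                    const struct subflow_state *sf)
   {
       return sf->interface_up &&
              conn->data_acked >= sf->highest_dss_mapped;
   }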
Some studies have started to analyze the performance of Multipath TCP
on smartphones with real applications. In contrast with the bulk
transfers that are used by many publications, many deployed
applications do not exchange huge amounts of data and mainly use
small connections. [COMMAG2016] proposes a software testing
framework that automates the testing of Android applications to study
their interactions with Multipath TCP. [PAM2016] analyzes a one-month
packet trace of all the packets exchanged by a dozen smartphones
utilized by regular users. This analysis reveals that short
connections are important on smartphones and that the main benefit of
using Multipath TCP on smartphones is the ability to perform seamless
handovers between different wireless networks. Long connections
benefit from these handovers.
2.3. Multipath TCP Proxies
As Multipath TCP is not yet widely deployed on both clients and
servers, several deployments have used various forms of proxies. Two
families of solutions are currently being used or tested.
A first use case is when an MPTCP-enabled client wants to use several
interfaces to reach a regular TCP server. A typical use case is a
smartphone that needs to use both its WiFi and its cellular interface
to transfer data. Several types of proxies are possible for this use
case. An HTTP proxy deployed on an MPTCP-capable server would enable
the smartphone to use Multipath TCP to access regular web servers.
Obviously, this solution only works for applications that rely on
HTTP. Another possibility is to use a proxy that can convert any
Multipath TCP connection into a regular TCP connection. MPTCP-
specific proxies have been proposed [HotMiddlebox13b] [HAMPEL].
Another possibility leverages the SOCKS protocol [RFC1928]. SOCKS is
often used in enterprise networks to allow clients to reach external
servers. For this, the client opens a TCP connection to the SOCKS
server that relays it to the final destination. If both the client
and the SOCKS server use Multipath TCP, but not the final
destination, then Multipath TCP can still be used on the path between
the clients and the SOCKS server. At IETF 93, Korea Telecom
announced that they have deployed (in June 2015) a commercial service
that uses Multipath TCP on smartphones. These smartphones access
regular TCP servers through a SOCKS proxy. This enables them to
achieve throughputs of up to 850 Mbps [KT].
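For reference, the sketch below shows the SOCKS5 CONNECT exchange
defined in [RFC1928] as an application would perform it towards such a
proxy. Whether Multipath TCP is used between the smartphone and the
SOCKS server is decided by their TCP stacks, not by the application.
Error handling and partial reads/writes are omitted for brevity.

   #include <arpa/inet.h>
   #include <stdint.h>
   #include <string.h>
   #include <sys/socket.h>
   #include <unistd.h>

   /* Ask the SOCKS server (already connected on 'fd') to relay a TCP
    * connection towards host:port.  Returns 0 on success, -1 on error. */
   static int socks5_connect(int fd, const char *host, uint16_t port)
   {
       unsigned char buf[262];
       size_t hlen = strlen(host);

       /* greeting: version 5, one method, "no authentication" (0x00) */
       unsigned char hello[3] = { 0x05, 0x01, 0x00 };
       if (write(fd, hello, 3) != 3)
           return -1;
       if (read(fd, buf, 2) != 2 || buf[0] != 0x05 || buf[1] != 0x00)
           return -1;

       /* request: version 5, CONNECT, reserved, ATYP=3 (domain name) */
       size_t n = 0;
       buf[n++] = 0x05; buf[n++] = 0x01; buf[n++] = 0x00; buf[n++] = 0x03;
       buf[n++] = (unsigned char)hlen;
       memcpy(buf + n, host, hlen); n += hlen;
       uint16_t nport = htons(port);
       memcpy(buf + n, &nport, 2); n += 2;
       if (write(fd, buf, n) != (ssize_t)n)
           return -1;

       /* reply: the REP field (buf[1]) must be 0x00 on success */
       if (read(fd, buf, 10) < 4 || buf[1] != 0x00)
           return -1;
       return 0;   /* 'fd' now relays data towards host:port */
   }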
Measurements performed with Android smartphones [Mobicom15] show that
popular applications work correctly through a SOCKS proxy and MPTCP-
enabled smartphones. Thanks to Multipath TCP, long-lived connections
can be spread over the two available interfaces. However, for short-
lived connections, most of the data is sent over the initial subflow
that is created over the interface corresponding to the default route
and the second subflow is barely used [PAM2016].
A second use case is when Multipath TCP is used by middleboxes,
typically inside access networks. Various network operators are
discussing and evaluating solutions for hybrid access networks
[TR-348]. Such networks arise when a network operator controls two
different access network technologies, e.g., wired and cellular, and
wants to combine them to improve the bandwidth offered to the end
users [HYA-ARCH]. Several solutions are currently being investigated for
such networks [TR-348]. Figure 2 shows the organization of such a
network. When a client creates a normal TCP connection, it is
intercepted by the Hybrid CPE (HCPE), which converts it into a Multipath
TCP connection so that it can use the available access networks (DSL
and LTE in the example). The Hybrid Access Gateway (HAG) does the
opposite to ensure that the regular server sees a normal TCP
connection. Some of the solutions currently discussed for hybrid
networks use Multipath TCP on the HCPE and the HAG. Other solutions
rely on tunnels between the HCPE and the HAG [GRE-NOTIFY].
client --- HCPE ------ DSL ------- HAG --- internet --- server
| |
+------- LTE -----------+
Figure 2: Hybrid Access Network
3. Operational Experience
3.1. Middlebox Interference
The interference caused by various types of middleboxes has been an
important concern during the design of the Multipath TCP protocol.
Three studies on the interactions between Multipath TCP and
middleboxes are worth discussing.
The first analysis appears in [IMC11]. This paper was the main
motivation for Multipath TCP incorporating various techniques to cope
with middlebox interference. More specifically, Multipath TCP has
been designed to cope with middleboxes that:
o change source or destination addresses
o change source or destination port numbers
o change TCP sequence numbers
o split or coalesce segments
o remove TCP options
o modify the payload of TCP segments
These middlebox interferences have all been included in the MBtest
suite [MBTest]. This test suite is used in [HotMiddlebox13] to
verify the reaction of the Multipath TCP implementation in the Linux
kernel [MultipathTCP-Linux] when faced with middlebox interference.
The test environment used for this evaluation is a dual-homed client
connected to a single-homed server. The middlebox behavior can be
activated on any of the paths. The main results of this analysis
are:
o the Multipath TCP implementation in the Linux kernel is not
affected by a middlebox that performs NAT or modifies TCP sequence
numbers
o when a middlebox removes the MP_CAPABLE option from the initial
SYN segment, the Multipath TCP implementation in the Linux kernel
falls back correctly to regular TCP
o when a middlebox removes the DSS option from all data segments,
the Multipath TCP implementation in the Linux kernel falls back
correctly to regular TCP
o when a middlebox performs segment coalescing, the Multipath TCP
implementation in the Linux kernel is still able to accurately
extract the data corresponding to the indicated mapping
o when a middlebox performs segment splitting, the Multipath TCP
implementation in the Linux kernel correctly reassembles the data
corresponding to the indicated mapping. [HotMiddlebox13] shows,
in Figure 4 in Section 3.3, a corner case with segment splitting
that may lead to a desynchronization between the two hosts.
The interactions between Multipath TCP and real deployed middleboxes
are also analyzed in [HotMiddlebox13]; a particular scenario with the
FTP Application Level Gateway running on a NAT is described.
Middlebox interference can also be detected by analyzing packet
traces on MPTCP-enabled servers. A closer look at the packets
received on the multipath-tcp.org server [TMA2015] shows that among
the 184,000 Multipath TCP connections, only 125 of them fell
back to regular TCP. These connections originated from 28 different
client IP addresses. These include 91 HTTP connections and 34 FTP
connections. The FTP interference is expected since Application
Level Gateways used for FTP modify the TCP payload and the DSS
Checksum detects these modifications. The HTTP interference appeared
only in the direction from the server to the client and could have been
caused by transparent proxies deployed in cellular or enterprise
networks. A longer trace is discussed in [COMCOM2016] and similar
conclusions about the middlebox interference are provided.
From an operational viewpoint, knowing that Multipath TCP can cope
with various types of middlebox interference is important. However,
there are situations where the network operators need to gather
information about where a particular middlebox interference occurs.
The tracebox software [tracebox] described in [IMC13a] is an
extension of the popular traceroute software that enables network
operators to check at which hop a particular field of the TCP header
(including options) is modified. It has been used by several network
operators to debug various middlebox interference problems.
Experience with tracebox indicates that supporting the ICMP extension
defined in [RFC1812] makes it easier to debug middlebox problems in
IPv4 networks.
Users of the Multipath TCP implementation have reported some
experience with middlebox interference. The strangest scenario has
been a middlebox that accepts the Multipath TCP options in the SYN
segment but later replaces Multipath TCP options with a TCP EOL
option [StrangeMbox]. This causes Multipath TCP to perform a
fallback to regular TCP without any impact on the application.
3.2. Congestion Control
Congestion control has been an important challenge for Multipath TCP.
The coupled congestion-control scheme defined in [RFC6356] is an
adaptation of the NewReno algorithm. A detailed description of this
coupled algorithm is provided in [NSDI11]. It is the default scheme
in the Linux implementation of Multipath TCP, but Linux supports
other schemes.
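As a reminder of how this coupling works, the sketch below reproduces
the increase rule of [RFC6356] in plain floating-point C; the kernel
uses scaled integer arithmetic instead. The aggressiveness factor
alpha couples the congestion windows of all subflows, and the increase
applied on a subflow is never larger than what regular TCP would apply
on that path.

   #include <stddef.h>

   struct path {
       double cwnd;   /* congestion window, in bytes          */
       double rtt;    /* round-trip time, any consistent unit */
       double mss;    /* maximum segment size, in bytes       */
   };

   /* alpha = cwnd_total * max_i(cwnd_i / rtt_i^2) /
    *         (sum_i(cwnd_i / rtt_i))^2                 (RFC 6356) */
   static double lia_alpha(const struct path *p, size_t n)
   {
       double cwnd_total = 0.0, best = 0.0, denom = 0.0;

       for (size_t i = 0; i < n; i++) {
           double r = p[i].cwnd / (p[i].rtt * p[i].rtt);
           if (r > best)
               best = r;
           denom += p[i].cwnd / p[i].rtt;
           cwnd_total += p[i].cwnd;
       }
       return cwnd_total * best / (denom * denom);
   }

   /* Congestion-avoidance increase of subflow i when 'bytes_acked'
    * bytes are acknowledged on it: the minimum of the coupled increase
    * and the increase regular TCP would apply on that path. */
   static double lia_increase(const struct path *p, size_t n, size_t i,
                              double bytes_acked)
   {
       double cwnd_total = 0.0;

       for (size_t k = 0; k < n; k++)
           cwnd_total += p[k].cwnd;

       double coupled = lia_alpha(p, n) * bytes_acked * p[i].mss / cwnd_total;
       double regular = bytes_acked * p[i].mss / p[i].cwnd;
       return coupled < regular ? coupled : regular;
   }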
The second congestion-control scheme is OLIA [CONEXT12]. It is also
an adaptation of the NewReno single path congestion-control scheme to
support multiple paths. Simulations [CONEXT12] and measurements
[CONEXT13] have shown that it provides some performance benefits
compared to the default coupled congestion-control scheme.
The delay-based scheme proposed in [ICNP12] has also been ported to
the Multipath TCP implementation in the Linux kernel. It has been
evaluated by using simulations [ICNP12] and measurements [PaaschPhD].
BALIA, defined in [BALIA], provides a better balance between TCP
friendliness, responsiveness, and window oscillation.
These different congestion-control schemes have been compared in
several articles. [CONEXT13] and [PaaschPhD] compare these
algorithms in an emulated environment. The evaluation showed that
the delay-based congestion-control scheme is less able to efficiently
use the available links than the three other schemes.
3.3. Subflow Management
The multipath capability of Multipath TCP comes from the utilization
of one subflow per path. The Multipath TCP architecture [RFC6182]
and the protocol specification [RFC6824] define the basic usage of
the subflows and the protocol mechanisms that are required to create
and terminate them. However, there are no guidelines on how subflows
are used during the lifetime of a Multipath TCP session. Most of the
published experiments with Multipath TCP have been performed in
controlled environments. Still, based on the experience running them
and discussions on the mptcp-dev mailing list, interesting lessons
have been learned about the management of these subflows.
From a subflow viewpoint, the Multipath TCP protocol is completely
symmetrical. Both the client and the server have the capability to
create subflows. However, in practice, the existing Multipath TCP
implementations have opted for a strategy where only the client
creates new subflows. The main motivation for this strategy is that
often the client resides behind a NAT or a firewall, preventing
passive subflow openings on the client. Although there are
environments such as datacenters where this problem does not occur,
as of this writing, no precise requirement has emerged for allowing
the server to create new subflows.
3.4. Implemented Subflow Managers
The Multipath TCP implementation in the Linux kernel includes several
strategies to manage the subflows that compose a Multipath TCP
session. The basic subflow manager is the full-mesh. As the name
implies, it creates a full-mesh of subflows between the communicating
hosts.
The most frequent use case for this subflow manager is a multihomed
client connected to a single-homed server. In this case, one subflow
is created for each interface on the client. The current
implementation of the full-mesh subflow manager is static. The
subflows are created immediately after the creation of the initial
subflow. If one subflow fails during the lifetime of the Multipath
TCP session (e.g., due to excessive retransmissions or the loss of
the corresponding interface), it is not always reestablished. There
is ongoing work to enhance the full-mesh path manager to deal with
such events.
When the server is multihomed, using the full-mesh subflow manager
may lead to a large number of subflows being established. For
example, consider a dual-homed client connected to a server with
three interfaces. In this case, even if the subflows are only
created by the client, six subflows will be established. This may be
excessive in some environments, in particular when the client and/or
the server have a large number of interfaces. Implementations should
limit the number of subflows that are used.
Creating subflows between multihomed clients and servers may
sometimes lead to operational issues as observed by discussions on
the mptcp-dev mailing list. In some cases, the network operators
would like to have a better control on how the subflows are created
by Multipath TCP [MPTCP-MAX-SUB]. This might require the definition
of policy rules to control the operation of the subflow manager. The
two scenarios below illustrate some of these requirements.
host1 ---------- switch1 ----- host2
| | |
+-------------- switch2 --------+
Figure 3: Simple Switched Network Topology
Consider the simple network topology shown in Figure 3. From an
operational viewpoint, a network operator could want to create two
subflows between the communicating hosts. From a bandwidth
utilization viewpoint, the most natural paths are host1-switch1-host2
and host1-switch2-host2. However, a Multipath TCP implementation
running on these two hosts may sometimes have difficulty obtaining
this result.
To understand the difficulty, let us consider different allocation
strategies for the IP addresses. A first strategy is to assign two
subnets: subnetA (resp. subnetB) contains the IP addresses of host1's
interface to switch1 (resp. switch2) and host2's interface to switch1
(resp. switch2). In this case, a Multipath TCP subflow manager
should only create one subflow per subnet. To enforce the
utilization of these paths, the network operator would have to
specify a policy that prefers the subflows in the same subnet over
subflows between addresses in different subnets. It should be noted
that the policy should probably also specify how the subflow manager
should react when an interface or subflow fails.
A second strategy is to use a single subnet for all IP addresses. In
this case, it becomes more difficult to specify a policy that
indicates which subflows should be established.
The second subflow manager that is currently supported by the
Multipath TCP implementation in the Linux kernel is the ndiffport
subflow manager. This manager was initially created to exploit the
path diversity that exists between single-homed hosts due to the
utilization of flow-based load-balancing techniques [SIGCOMM11].
This subflow manager creates N subflows between the same pair of IP
addresses. The N subflows are created by the client and differ only
in the source port selected by the client. It was not designed to be
used on multihomed hosts.
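The user-space sketch below only illustrates the idea behind ndiffport
(the real path manager creates the subflows inside the kernel):
several TCP connections towards the same destination that differ only
in their source port, so that flow-based load balancing may hash them
onto different paths.

   #include <arpa/inet.h>
   #include <netinet/in.h>
   #include <stdint.h>
   #include <string.h>
   #include <sys/socket.h>
   #include <unistd.h>

   #define N_FLOWS 4

   /* Open N_FLOWS parallel TCP connections to dst_ip:dst_port.  Only
    * the (ephemeral) source port differs between them, which is enough
    * for ECMP or LAG hashing to possibly place them on different
    * paths. */
   static int open_flows(const char *dst_ip, uint16_t dst_port,
                         int fds[N_FLOWS])
   {
       struct sockaddr_in dst;

       memset(&dst, 0, sizeof(dst));
       dst.sin_family = AF_INET;
       dst.sin_port = htons(dst_port);
       if (inet_pton(AF_INET, dst_ip, &dst.sin_addr) != 1)
           return -1;

       for (int i = 0; i < N_FLOWS; i++) {
           fds[i] = socket(AF_INET, SOCK_STREAM, 0);
           if (fds[i] < 0 ||
               connect(fds[i], (struct sockaddr *)&dst, sizeof(dst)) < 0)
               return -1;
       }
       return 0;
   }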
A more flexible subflow manager has been proposed, implemented and
evaluated in [CONEXT15]. This subflow manager exposes various kernel
events to a user space daemon that decides when subflows need to be
created and terminated based on various policies.
3.5. Subflow Destination Port
The Multipath TCP protocol relies on the token contained in the
MP_JOIN option to associate a subflow to an existing Multipath TCP
session. This implies that there is no restriction on the source
address, destination address and source or destination ports used for
the new subflow. The ability to use different source and destination
addresses is key to support multihomed servers and clients. The
ability to use different destination port numbers is worth discussing
because it has operational implications.
For illustration, consider a dual-homed client that creates a second
subflow to reach a single-homed server as illustrated in Figure 4.
client ------- r1 --- internet --- server
| |
+----------r2-------+
Figure 4: Multihomed-Client Connected to Single-Homed Server
When the Multipath TCP implementation in the Linux kernel creates the
second subflow, it uses the same destination port as the initial
subflow. This choice is motivated by the fact that the server might
be protected by a firewall and only accept TCP connections (including
subflows) on the official port number. Using the same destination
port for all subflows is also useful for operators that rely on the
port numbers to track application usage in their network.
There have been suggestions from Multipath TCP users to modify the
implementation to allow the client to use different destination ports
to reach the server. This suggestion seems mainly motivated by
traffic-shaping middleboxes that are used in some wireless networks.
In networks where different shaping rates are associated with
different destination port numbers, this could allow Multipath TCP to
reach a higher performance. This behavior is valid according to the
Multipath TCP specification [RFC6824]. An application could use an
enhanced socket API [SOCKET] to behave in this way.
However, from an implementation point of view, supporting different
destination ports for the same Multipath TCP connection can cause
some issues. A legacy implementation of a TCP stack creates a
listening socket to react upon incoming SYN segments. The listening
socket handles the SYN segments that are sent to a specific port
number. Demultiplexing incoming segments can thus be done solely by
looking at the IP addresses and the port numbers. With Multipath TCP
however, an incoming SYN segment carrying an MP_JOIN option may use
a different destination port. This means that all incoming segments
that match neither an existing listening socket nor an already
established socket must be parsed for a possible MP_JOIN option.
This imposes on servers an additional cost that did not exist with
legacy TCP implementations.
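The sketch below illustrates this extra demultiplexing step in plain
C; it is not the kernel code. The TCP options of an otherwise
unmatched SYN are scanned for an MPTCP option (option kind 30)
carrying the MP_JOIN subtype, and the 32-bit token identifying the
Multipath TCP session is extracted.

   #include <stddef.h>
   #include <stdint.h>
   #include <string.h>

   #define TCPOPT_MPTCP   30
   #define MPTCP_SUB_JOIN  1

   /* Scan the TCP option area of a SYN ('opts', 'len' bytes) for an
    * MP_JOIN option (RFC 6824) and return its 32-bit token (in network
    * byte order) through '*token'.  Returns 1 if found, 0 otherwise. */
   static int find_mp_join_token(const uint8_t *opts, size_t len,
                                 uint32_t *token)
   {
       size_t i = 0;

       while (i < len) {
           uint8_t kind = opts[i];

           if (kind == 0)                  /* end of option list */
               break;
           if (kind == 1) {                /* NOP padding */
               i++;
               continue;
           }
           if (i + 1 >= len)
               break;
           uint8_t olen = opts[i + 1];
           if (olen < 2 || i + olen > len) /* malformed option */
               break;
           if (kind == TCPOPT_MPTCP && olen >= 12 &&
               (opts[i + 2] >> 4) == MPTCP_SUB_JOIN) {
               memcpy(token, &opts[i + 4], 4);  /* receiver's token */
               return 1;
           }
           i += olen;
       }
       return 0;
   }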
3.6. Closing Subflows
client server
| |
MPTCP: ESTABLISHED | | MPTCP: ESTABLISHED
Sub: ESTABLISHED | | Sub: ESTABLISHED
| |
| DATA_FIN |
MPTCP: CLOSE-WAIT | <------------------------ | close() (step 1)
Sub: ESTABLISHED | DATA_ACK |
| ------------------------> | MPTCP: FIN-WAIT-2
| | Sub: ESTABLISHED
| |
| DATA_FIN + subflow-FIN |
close()/shutdown() | ------------------------> | MPTCP: TIME-WAIT
(step 2) | DATA_ACK | Sub: CLOSE-WAIT
MPTCP: CLOSED | <------------------------ |
Sub: FIN-WAIT-2 | |
| |
| subflow-FIN |
MPTCP: CLOSED | <------------------------ | subflow-close()
Sub: TIME-WAIT | subflow-ACK |
(step 3) | ------------------------> | MPTCP: TIME-WAIT
| | Sub: CLOSED
| |
Figure 5: Multipath TCP may not be able to avoid the TIME-WAIT state
on the subflow (indicated as Sub in the drawing), even if enforced
by the application on the client side.
Figure 5 shows a very particular issue within Multipath TCP. Many
high-performance applications try to avoid TIME-WAIT state by
deferring the closure of the connection until the peer has sent a
FIN. That way, the client on the left of Figure 5 does a passive
closure of the connection, transitioning from CLOSE-WAIT to LAST-ACK
and finally freeing the resources after reception of the ACK of the
FIN. An application running on top of an MPTCP-enabled Linux kernel
might also use this approach. The difference here is that the
close() of the connection (step 1 in Figure 5) only triggers the
sending of a DATA_FIN. Nothing guarantees that the kernel is ready
to combine the DATA_FIN with a subflow-FIN. The reception of the
DATA_FIN will make the application trigger the closure of the
connection (step 2), trying to avoid TIME-WAIT state with this late
closure. This time, the kernel might decide to combine the DATA_FIN
with a subflow-FIN. This decision will be fatal, as the subflow's
state machine will not transition from CLOSE-WAIT to LAST-ACK, but
rather go through FIN-WAIT-2 into the TIME-WAIT state. The TIME-WAIT
state will consume resources on the host for at least 2 MSL (Maximum
Segment Lifetime). Thus, a smart application that tries to avoid
TIME-WAIT state by doing late closure of the connection actually ends
up with one of its subflows in TIME-WAIT state. A high-performance
Multipath TCP kernel implementation should honor the desire of the
application to do passive closure of the connection and successfully
avoid TIME-WAIT state -- even on the subflows.
The solution to this problem lies in an optimistic assumption that a
host doing an active closure of a Multipath TCP connection by sending a
DATA_FIN will soon also send a FIN on all its subflows. Thus, the
passive closer of the connection can simply wait for the peer to send
exactly this FIN -- enforcing passive closure even on the subflows.
Of course, to avoid consuming resources indefinitely, a timer must
limit the time the implementation waits for the FIN.
3.7. Packet Schedulers
In a Multipath TCP implementation, the packet scheduler is the
algorithm that selects, for each packet to be transmitted, the
subflow over which it will be sent. The packet scheduler
itself does not have any impact on the interoperability of Multipath
TCP implementations. However, it may clearly impact the performance
of Multipath TCP sessions. The Multipath TCP implementation in the
Linux kernel supports a pluggable architecture for the packet
scheduler [PaaschPhD]. As of this writing, two schedulers have been
implemented: round-robin and lowest-rtt-first. The second scheduler
relies on the round-trip time (rtt) measured on each TCP subflow and
sends segments preferentially over the subflow having the lowest
round-trip time. The two schedulers are compared in [CSWS14]. The
experiments and
measurements described in [CSWS14] show that the lowest-rtt-first
scheduler appears to be the best compromise from a performance
viewpoint. Another study of the packet schedulers is presented in
[PAMS2014]. This study relies on simulations with the Multipath TCP
implementation in the Linux kernel. It compares the lowest-rtt-first
scheduler with the round-robin and a random scheduler and shows some
situations where the lowest-rtt-first scheduler does not perform as
well as the other schedulers, but there are many scenarios where the
opposite is true. [PAMS2014] notes that "it is highly likely that
the optimal scheduling strategy depends on the characteristics of the
paths being used."
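The principle of the lowest-rtt-first scheduler can be summarized by
the short sketch below, written against hypothetical subflow state
rather than the actual Linux data structures: among the subflows that
still have room in their congestion window, the one with the smallest
smoothed round-trip time is selected.

   #include <stddef.h>
   #include <stdint.h>

   struct sched_subflow {
       uint32_t srtt_us;            /* smoothed round-trip time      */
       uint32_t cwnd;               /* congestion window, in packets */
       uint32_t packets_in_flight;
   };

   /* Pick the subflow with the lowest smoothed RTT among those that
    * can still send; NULL means every subflow is currently blocked. */
   static struct sched_subflow *
   lowest_rtt_first(struct sched_subflow *sf, size_t n)
   {
       struct sched_subflow *best = NULL;

       for (size_t i = 0; i < n; i++) {
           if (sf[i].packets_in_flight >= sf[i].cwnd)
               continue;
           if (best == NULL || sf[i].srtt_us < best->srtt_us)
               best = &sf[i];
       }
       return best;
   }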
3.8. Segment Size Selection
When an application performs a write/send system call, the kernel
allocates a packet buffer (sk_buff in Linux) to store the data the
application wants to send. The kernel will store at most one MSS
(Maximum Segment Size) of data per buffer. As the MSS can differ
amongst subflows, an MPTCP implementation must carefully select the
MSS used to generate application data. The Linux kernel
implementation had various ways of selecting the MSS: minimum or
maximum amongst the different subflows. However, these heuristics of
MSS selection can cause significant performance issues in some
environments. Consider the following example. An MPTCP connection
has two established subflows that respectively use an MSS of 1420 and
1428 bytes. If MPTCP selects the maximum, then the application will
generate segments of 1428 bytes of data. An MPTCP implementation
will have to split the segment into two (1420-byte and 8-byte) segments
when pushing on the subflow with the smallest MSS. The latter
segment will introduce a large overhead as this single data segment
will use 2 slots in the congestion window (in packets), therefore
roughly halving the potential throughput (in bytes/s) of
this subflow. Taking the smallest MSS does not solve the issue as
there might be a case where the subflow with the smallest MSS only
sends a few packets, therefore reducing the potential throughput of
the other subflows.
The Linux implementation recently took another approach [DetalMSS].
Instead of selecting the minimum or maximum value, it now
dynamically adapts the MSS based on the contribution of all the
subflows to the connection's throughput. For each subflow, it
computes the potential throughput achieved by selecting each MSS
value and by taking into account the lost space in the congestion
window. It then selects the MSS that achieves the highest
potential throughput.
Given the prevalence of middleboxes that clamp the MSS, Multipath TCP
implementations must be able to efficiently support subflows with
different MSS values. The strategy described above is a possible
solution to this problem.
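One possible way to implement such a selection is sketched below; the
exact computation used in [DetalMSS] may differ. For each candidate
MSS, the achievable rate of every subflow is estimated by counting how
many congestion-window slots a candidate-sized segment would occupy on
that subflow, and the candidate with the highest aggregate estimate is
kept.

   #include <math.h>
   #include <stddef.h>

   struct mss_subflow {
       double mss;        /* bytes                         */
       double cwnd_pkts;  /* congestion window, in packets */
       double rtt;        /* seconds                       */
   };

   /* Aggregate rate estimate (bytes/s) if every generated segment
    * carried 'candidate' bytes of application data. */
   static double estimated_rate(const struct mss_subflow *sf, size_t n,
                                double candidate)
   {
       double total = 0.0;

       for (size_t i = 0; i < n; i++) {
           /* window slots used by one candidate-sized segment on this
            * subflow: a 1428-byte segment uses 2 slots when MSS=1420 */
           double slots = ceil(candidate / sf[i].mss);
           double segs_per_rtt = floor(sf[i].cwnd_pkts / slots);

           total += segs_per_rtt * candidate / sf[i].rtt;
       }
       return total;
   }

   /* Among the subflows' own MSS values, pick the one that maximizes
    * the estimated aggregate rate. */
   static double select_mss(const struct mss_subflow *sf, size_t n)
   {
       double best_mss = sf[0].mss, best_rate = -1.0;

       for (size_t i = 0; i < n; i++) {
           double rate = estimated_rate(sf, n, sf[i].mss);

           if (rate > best_rate) {
               best_rate = rate;
               best_mss = sf[i].mss;
           }
       }
       return best_mss;
   }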
3.9. Interactions with the Domain Name System
Multihomed clients such as smartphones can send DNS queries over any
of their interfaces. When a single-homed client performs a DNS
query, it receives from its local resolver the best answer for its
request. If the client is multihomed, the answer in response to the
DNS query may vary with the interface over which it has been sent.
cdn1
|
client -- cellular -- internet -- cdn3
| |
+----- wifi --------+
|
cdn2
Figure 6: Simple Network Topology
If the client sends a DNS query over the WiFi interface, the answer
will point to the cdn2 server while the same request sent over the
cellular interface will point to the cdn1 server. This might cause
problems for CDN providers that locate their servers inside ISP
networks and have contracts that specify that the CDN server will
only be accessed from within this particular ISP. Assume now that
both the client and the CDN servers support Multipath TCP. In this
case, a Multipath TCP session from cdn1 or cdn2 would potentially use
both the cellular network and the WiFi network. Serving the client
from cdn2 over the cellular interface could violate the contract
between the CDN provider and the network operators. A similar
problem occurs with regular TCP if the client caches DNS replies.
For example, the client obtains a DNS answer over the cellular
interface and then stops this interface and starts to use its WiFi
interface. If the client retrieves data from cdn1 over its WiFi
interface, this may also violate the contract between the CDN and the
network operators.
A possible solution to prevent this problem would be to modify the
DNS resolution on the client. The EDNS Client Subnet option defined
in [RFC7871] could be used for this purpose.
When the client sends a DNS query from its WiFi interface, it should
also send the client subnet corresponding to the cellular interface
in this request. This would indicate to the resolver that the answer
should be valid for both the WiFi and the cellular interfaces (e.g.,
the cdn3 server).
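A sketch of the corresponding EDNS Client Subnet option, as defined in
[RFC7871], is shown below; assembling the complete DNS message and
modifying the stub resolver are left out. The client would attach this
option, carrying a prefix of its cellular address, to the queries it
sends over its WiFi interface.

   #include <arpa/inet.h>
   #include <stddef.h>
   #include <stdint.h>
   #include <string.h>

   /* Write the EDNS Client Subnet option (RFC 7871) into 'buf' and
    * return its length, or -1 on error.  'addr' is the IPv4 address of
    * the other interface (network byte order) and 'prefix_len' the
    * number of its leading bits that the client discloses. */
   static int build_ecs_option(uint8_t *buf, size_t buflen,
                               uint32_t addr, uint8_t prefix_len)
   {
       size_t addr_bytes = (prefix_len + 7) / 8;
       size_t opt_len = 4 + addr_bytes;   /* family + prefixes + address */
       uint16_t code = htons(8);          /* OPTION-CODE: client subnet  */
       uint16_t len = htons((uint16_t)opt_len);
       uint16_t family = htons(1);        /* FAMILY: IPv4 */

       if (prefix_len > 32 || buflen < 4 + opt_len)
           return -1;

       memcpy(buf + 0, &code, 2);
       memcpy(buf + 2, &len, 2);
       memcpy(buf + 4, &family, 2);
       buf[6] = prefix_len;               /* SOURCE PREFIX-LENGTH        */
       buf[7] = 0;                        /* SCOPE PREFIX-LENGTH (query) */
       memcpy(buf + 8, &addr, addr_bytes);
       if (prefix_len % 8)                /* zero bits beyond the prefix */
           buf[8 + addr_bytes - 1] &=
               (uint8_t)(0xff << (8 - prefix_len % 8));
       return (int)(4 + opt_len);
   }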
3.10. Captive Portals
Multipath TCP enables a host to use different interfaces to reach a
server. In theory, this should ensure connectivity when at least one
of the interfaces is active. However, in practice, there are some
particular scenarios with captive portals that may cause operational
problems. The reference environment is shown in Figure 7.
client ----- network1
|
+------- internet ------------- server
Figure 7: Issue with Captive Portal
The client is attached to two networks: network1, which provides only
limited connectivity, and a second network that provides access to
the entire Internet. In practice, this scenario corresponds to an open
WiFi network with a captive portal for network1 and a cellular
service for the second interface. On many smartphones, the WiFi
interface is preferred over the cellular interface. If the
smartphone learns a default route via both interfaces, it will
typically prefer to use the WiFi interface to send its DNS request
and create the first subflow. This is not optimal with Multipath
TCP. A better approach would probably be to make a few attempts over
the WiFi interface and then, if these attempts fail, try to
use the second interface for the initial subflow as well.
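The sketch below illustrates such an approach in user space, assuming
the application knows one local address per interface (the addresses
and timeouts are hypothetical): the initial connection is first
attempted from the WiFi address with a short timeout and, if it does
not complete, it is retried from the cellular address.

   #include <errno.h>
   #include <fcntl.h>
   #include <netinet/in.h>
   #include <string.h>
   #include <sys/select.h>
   #include <sys/socket.h>
   #include <unistd.h>

   /* Try to connect to 'server' from the local address 'local' within
    * 'timeout_sec' seconds.  Returns a connected socket or -1. */
   static int try_connect(const struct sockaddr_in *local,
                          const struct sockaddr_in *server, int timeout_sec)
   {
       int fd = socket(AF_INET, SOCK_STREAM, 0);
       if (fd < 0)
           return -1;
       if (bind(fd, (const struct sockaddr *)local, sizeof(*local)) < 0)
           goto fail;

       fcntl(fd, F_SETFL, O_NONBLOCK);
       if (connect(fd, (const struct sockaddr *)server, sizeof(*server)) < 0
           && errno != EINPROGRESS)
           goto fail;

       fd_set wfds;
       FD_ZERO(&wfds);
       FD_SET(fd, &wfds);
       struct timeval tv = { .tv_sec = timeout_sec, .tv_usec = 0 };
       int err = 0;
       socklen_t elen = sizeof(err);
       if (select(fd + 1, NULL, &wfds, NULL, &tv) != 1 ||
           getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &elen) < 0 || err)
           goto fail;                 /* timed out or failed to connect */
       return fd;
   fail:
       close(fd);
       return -1;
   }

   /* Prefer the WiFi interface but do not get stuck behind a captive
    * portal: fall back to the cellular interface after a short wait. */
   static int connect_initial_subflow(const struct sockaddr_in *wifi_addr,
                                      const struct sockaddr_in *cell_addr,
                                      const struct sockaddr_in *server)
   {
       int fd = try_connect(wifi_addr, server, 3);

       return fd >= 0 ? fd : try_connect(cell_addr, server, 10);
   }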
3.11. Stateless Webservers
MPTCP has been designed to interoperate with webservers that benefit
from SYN-cookies to protect against SYN-flooding attacks [RFC4987].
MPTCP achieves this by echoing the keys negotiated during the
MP_CAPABLE handshake in the third ACK of the three-way handshake.
Reception of this third ACK then allows the server to reconstruct the
state specific to MPTCP.
However, one caveat to this mechanism is the unreliable nature of the
third ACK. Indeed, when the third ACK gets lost, the server will not
be able to reconstruct the MPTCP state. MPTCP will fall back to
regular TCP in this case. This is in contrast to regular TCP. When
the client starts sending data, the first data segment also includes
the SYN-cookie, which allows the server to reconstruct the TCP-state.
Further, this data segment will be retransmitted by the client in
case it gets lost and thus is resilient against loss. MPTCP does not
include the keys in this data segment and thus the server cannot
reconstruct the MPTCP state.
This issue might be considered a minor one for MPTCP. Losing the
third ACK should only happen when packet loss is high; in this case,
MPTCP provides a lot of benefits as it can move traffic away from the
lossy link. It is undesirable that MPTCP has a higher chance of falling
back to regular TCP in those lossy environments.
[MPTCP-DEPLOY] discusses this issue and suggests a modified handshake
mechanism that ensures reliable delivery of the MP_CAPABLE, following
the three-way handshake. This modification will make MPTCP reliable,
even in lossy environments when servers need to use SYN-cookies to
protect against SYN-flooding attacks.
3.12. Load-Balanced Server Farms
Large-scale server farms typically deploy thousands of servers behind
a single virtual IP (VIP). Steering traffic to these servers is done
through Layer 4 load-balancers that ensure that a TCP-flow will
always be routed to the same server [Presto08].
As Multipath TCP uses multiple different TCP subflows to steer the
traffic across the different paths, load-balancers need to ensure
that all these subflows are routed to the same server. This implies
that the load-balancers need to track the MPTCP-related state,
allowing them to parse the token in the MP_JOIN and assign those
subflows to the appropriate server. However, server farms typically
deploy several load-balancers for reliability and capacity reasons.
As a TCP subflow might get routed to any of these load-balancers,
they would need to synchronize the MPTCP-related state -- a solution
that is not feasible on a large scale.
The token (carried in the MP_JOIN) contains the information
indicating to which MPTCP-session the subflow belongs. As the token
is a hash of the key, servers are not able to generate the token in
such a way that the token can provide the necessary information to
the load-balancers, which would allow them to route TCP subflows to
the appropriate server. [MPTCP-LOAD] discusses this issue in detail
and suggests two alternative MP_CAPABLE handshakes to overcome these.
4. Security Considerations
This informational document discusses use cases and operational
experience with Multipath TCP. An extensive analysis of the
remaining security issues in the Multipath TCP specification has been
published in [RFC7430], together with suggestions for possible
solutions.
From a security viewpoint, it is important to note that Multipath
TCP, like other multipath solutions such as SCTP, has the ability to
send packets belonging to a single connection over different paths.
This design feature of Multipath TCP implies that middleboxes that
have been deployed on-path assuming that they would observe all the
packets exchanged for a given connection in both directions may not
function correctly anymore. Typical examples are firewalls,
Intrusion Detection Systems (IDSs), or Deep Packet Inspection (DPI)
devices deployed in enterprise networks. Those devices expect to
observe all the packets from all TCP connections. With Multipath TCP,
those middleboxes may no longer observe all packets since some of them
may follow a different path. The two examples below illustrate
typical deployments of such middleboxes. The first example,
Figure 8, shows an MPTCP-enabled smartphone attached to both an
enterprise and a cellular network. If a Multipath TCP connection is
established by the smartphone towards a server, some of the packets
sent by the smartphone or the server may be transmitted over the
cellular network and thus be invisible to the enterprise middlebox.
smartphone +----- enterprise net --- MBox----+------ server
           |                                 |
           +----- cellular net --------------+
Figure 8: Enterprise Middlebox May Not Observe
All Packets from Multihomed Host
The second example, Figure 9, shows a possible issue when multiple
middleboxes are deployed inside a network. For simplicity, we assume
that network1 is the default IPv4 path while network2 is the default
IPv6 path. A similar issue could occur with per-flow load-balancing
such as ECMP [RFC2992]. With regular TCP, all packets from each
connection would either pass through Mbox1 or Mbox2. With Multipath
TCP, the client can easily establish a subflow over network1 and
another over network2 and each middlebox would only observe a part of
the traffic of the end-to-end Multipath TCP connection.
client ----R-- network1 --- MBox1 -----R------------- server
           |                           |
           +-- network2 --- MBox2 -----+
Figure 9: Interactions between
Load-Balancing and Security Middleboxes
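A small Python sketch of per-flow ECMP hashing (in the spirit of
[RFC2992]; the hash function, addresses, and path names are
illustrative only) shows why the subflows of one Multipath TCP
connection may traverse different middleboxes: each subflow has its
own 5-tuple, so it can hash to a different path.
   import zlib

   PATHS = ["network1 via MBox1", "network2 via MBox2"]

   def ecmp_path(src: str, sport: int, dst: str, dport: int) -> str:
       # Per-flow hash over the 5-tuple (illustrative only).
       five_tuple = f"tcp|{src}:{sport}|{dst}:{dport}".encode()
       return PATHS[zlib.crc32(five_tuple) % len(PATHS)]

   # Initial subflow and additional subflows of the same MPTCP
   # connection: they differ only in the source port, yet they may
   # hash to different paths and thus different middleboxes.
   for sport in (40001, 40002, 40003, 40004):
       print(sport, "->", ecmp_path("192.0.2.1", sport,
                                    "198.51.100.7", 443))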
In these two cases, an attacker can evade security measures that
operate on the TCP byte stream in the middleboxes by controlling
which bytes are actually sent over each subflow; there are tools
that ease those kinds of evasion [PZ15] [PT14]. This is not a
security issue for Multipath TCP itself
since Multipath TCP behaves correctly. However, this demonstrates
the difficulty of enforcing security policies by relying only on
on-path middleboxes instead of enforcing them directly on the
endpoints.
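The toy Python model below (loosely inspired by the evasions
described in [PT14] and [PZ15], but not taken from those tools)
illustrates the point: when the sender spreads consecutive chunks of
the byte stream over two subflows, a per-path pattern match on either
middlebox misses a signature that the receiver nevertheless
reassembles; the signature and request string are invented for the
example.
   SIGNATURE = b"EVIL-PAYLOAD"
   stream = b"GET /EVIL-PAYLOAD HTTP/1.1\r\n"

   # The sender maps alternating 4-byte chunks of the connection-level
   # byte stream onto two subflows as (data-sequence offset, bytes).
   subflows = {1: [], 2: []}
   for off in range(0, len(stream), 4):
       subflows[1 + (off // 4) % 2].append((off, stream[off:off + 4]))

   def bytes_seen_on(subflow: int) -> bytes:
       # What a middlebox on that path can reassemble by itself.
       return b"".join(chunk for _, chunk in subflows[subflow])

   print(SIGNATURE in bytes_seen_on(1), SIGNATURE in bytes_seen_on(2))
   # -> False False: neither path carries the whole signature.

   # The receiver reorders all chunks by data sequence number and gets
   # the complete byte stream back.
   received = b"".join(c for _, c in sorted(subflows[1] + subflows[2]))
   print(SIGNATURE in received)   # -> True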
5. References
5.1. Normative References
[RFC6182] Ford, A., Raiciu, C., Handley, M., Barre, S., and J.
Iyengar, "Architectural Guidelines for Multipath TCP
Development", RFC 6182, DOI 10.17487/RFC6182, March 2011,
<http://www.rfc-editor.org/info/rfc6182>.
[RFC6824] Ford, A., Raiciu, C., Handley, M., and O. Bonaventure,
"TCP Extensions for Multipath Operation with Multiple
Addresses", RFC 6824, DOI 10.17487/RFC6824, January 2013,
<http://www.rfc-editor.org/info/rfc6824>.
5.2. Informative References
[BALIA] Peng, Q., Walid, A., Hwang, J., and S. Low, "Multipath
TCP: analysis, design, and implementation", IEEE/ACM
Trans. on Networking (TON), Volume 24, Issue 1, February
2016.
[CACM14] Paasch, C. and O. Bonaventure, "Multipath TCP",
Communications of the ACM, 57(4):51-57, April 2014,
<http://inl.info.ucl.ac.be/publications/multipath-tcp>.
[Cellnet12]
Paasch, C., Detal, G., Duchene, F., Raiciu, C., and O.
Bonaventure, "Exploring Mobile/WiFi Handover with
Multipath TCP", ACM SIGCOMM workshop on Cellular
Networks (Cellnet12), August 2012,
<http://inl.info.ucl.ac.be/publications/
exploring-mobilewifi-handover-multipath-tcp>.
[COMCOM2016]
Tran, V., De Coninck, Q., Hesmans, B., Sadre, R., and O.
Bonaventure, "Observing real Multipath TCP traffic",
Computer Communications, DOI 10.1016/j.comcom.2016.01.014,
April 2016, <http://inl.info.ucl.ac.be/publications/
observing-real-multipath-tcp-traffic>.
[COMMAG2016]
De Coninck, Q., Baerts, M., Hesmans, B., and O.
Bonaventure, "Observing Real Smartphone Applications over
Multipath TCP", IEEE Communications Magazine Network
Testing Series, 54(3), March 2016,
<http://inl.info.ucl.ac.be/publications/observing-real-
smartphone-applications-over-multipath-tcp>.
[CONEXT12] Khalili, R., Gast, N., Popovic, M., Upadhyay, U., and J.
Leboudec, "MPTCP is not Pareto-Optimal: Performance Issues
and a Possible Solution", CoNEXT '12: Proceedings of the
8th international conference on Emerging networking
experiments and technologies, DOI 10.1145/2413176.2413178,
December 2012.
[CONEXT13] Paasch, C., Khalili, R., and O. Bonaventure, "On the
Benefits of Applying Experimental Design to Improve
Multipath TCP", Conference on emerging Networking
EXperiments and Technologies (CoNEXT),
DOI 10.1145/2535372.2535403, December 2013,
<http://inl.info.ucl.ac.be/publications/benefits-applying-
experimental-design-improve-multipath-tcp>.
[CONEXT15] Hesmans, B., Detal, G., Barre, S., Bauduin, R., and O.
Bonaventure, "SMAPP: Towards Smart Multipath TCP-enabled
APPlications", Proc. Conext 2015, Heidelberg, Germany,
December 2015, <http://inl.info.ucl.ac.be/publications/
smapp-towards-smart-multipath-tcp-enabled-applications>.
[CSWS14] Paasch, C., Ferlin, S., Alay, O., and O. Bonaventure,
"Experimental evaluation of multipath TCP schedulers",
CSWS '14: Proceedings of the 2014 ACM SIGCOMM workshop on
Capacity sharing workshop, DOI 10.1145/2630088.2631977,
August 2014.
[DetalMSS] Detal, G., "dynamically adapt mss value", Post on the
mptcp-dev mailing list, September 2014,
<https://listes-2.sipr.ucl.ac.be/sympa/arc/mptcp-dev/
2014-09/msg00130.html>.
[FreeBSD-MPTCP]
Williams, N., "Multipath TCP For FreeBSD Kernel Patch
v0.5", <http://caia.swin.edu.au/urp/newtcp/mptcp>.
[GRE-NOTIFY]
Leymann, N., Heidemann, C., Wasserman, M., Xue, L., and M.
Zhang, "GRE Notifications for Hybrid Access", Work in
Progress, draft-lhwxz-gre-notifications-hybrid-access-01,
January 2015.
[HAMPEL] Hampel, G., Rana, A., and T. Klein, "Seamless TCP mobility
using lightweight MPTCP proxy", MobiWac '13: Proceedings
of the 11th ACM international symposium on Mobility
management and wireless access,
DOI 10.1145/2508222.2508226, November 2013.
[HotMiddlebox13]
Hesmans, B., Duchene, F., Paasch, C., Detal, G., and O.
Bonaventure, "Are TCP Extensions Middlebox-proof?", CoNEXT
workshop Hot Middlebox, December 2013,
<http://inl.info.ucl.ac.be/publications/
are-tcp-extensions-middlebox-proof>.
[HotMiddlebox13b]
Detal, G., Paasch, C., and O. Bonaventure, "Multipath in
the Middle(Box)", HotMiddlebox '13, December 2013,
<http://inl.info.ucl.ac.be/publications/
multipath-middlebox>.
[HotNets] Raiciu, C., Pluntke, C., Barre, S., Greenhalgh, A.,
Wischik, D., and M. Handley, "Data center networking with
multipath TCP", Hotnetx-IX: Proceedings of the 9th ACM
SIGCOMM Workshop on Hot Topics in Networks Article No. 10,
DOI 10.1145/1868447.1868457, October 2010,
<http://doi.acm.org/10.1145/1868447.1868457>.
[HYA-ARCH] Leymann, N., Heidemann, C., Wasserman, M., Xue, L., and M.
Zhang, "Hybrid Access Network Architecture", Work in
Progress, draft-lhwxz-hybrid-access-network-
architecture-02, January 2015.
[ICNP12] Cao, Y., Xu, M., and X. Fu, "Delay-based congestion
control for multipath TCP", 20th IEEE International
Conference on Network Protocols (ICNP),
DOI 10.1109/ICNP.2012.6459978, October 2012.
[IETF88] Stewart, L., "IETF 88 Meeting minutes of the MPTCP working
group", November 2013, <https://www.ietf.org/proceedings/
88/minutes/minutes-88-mptcp>.
[IETFJ] Bonaventure, O. and S. Seo, "Multipath TCP Deployments",
IETF Journal, Vol. 12, Issue 2, November 2016.
[IMC11] Honda, M., Nishida, Y., Raiciu, C., Greenhalgh, A.,
Handley, M., and H. Tokuda, "Is it still possible to
extend TCP?", IMC '11: Proceedings of the 2011 ACM SIGCOMM
conference on Internet measurement conference,
DOI 10.1145/2068816.2068834, November 2011,
<http://doi.acm.org/10.1145/2068816.2068834>.
[IMC13a] Detal, G., Hesmans, B., Bonaventure, O., Vanaubel, Y., and
B. Donnet, "Revealing Middlebox Interference with
Tracebox", Proceedings of the 2013 ACM SIGCOMM conference
on Internet measurement conference,
DOI 10.1145/2504730.2504757, October 2013,
<http://inl.info.ucl.ac.be/publications/
revealing-middlebox-interference-tracebox>.
[IMC13b] Chen, Y., Lim, Y., Gibbens, R., Nahum, E., Khalili, R.,
and D. Towsley, "A measurement-based study of MultiPath
TCP performance over wireless network", IMC '13:
Proceedings of the 2013 conference on Internet
measurement conference, DOI 10.1145/2504730.2504751,
October 2013,
<http://doi.acm.org/10.1145/2504730.2504751>.
[IMC13c] Pelsser, C., Cittadini, L., Vissicchio, S., and R. Bush,
"From Paris to Tokyo: on the suitability of ping to
measure latency", IMC '13: Proceedings of the 2013
conference on Internet measurement Conference,
DOI 10.1145/2504730.2504765, October 2013,
<http://doi.acm.org/10.1145/2504730.2504765>.
[INFOCOM14]
Lim, Y., Chen, Y., Nahum, E., Towsley, D., and K. Lee,
"Cross-layer path management in multi-path transport
protocol for mobile devices", IEEE INFOCOM'14,
DOI 10.1109/INFOCOM.2014.6848120, April 2014.
[KT] Seo, S., "KT's GiGA LTE", July 2015,
<https://www.ietf.org/proceedings/93/slides/
slides-93-mptcp-3.pdf>.
[MBTest] Hesmans, B., "MBTest", October 2013,
<https://bitbucket.org/bhesmans/mbtest>.
[Mobicom15]
De Coninck, Q., Baerts, M., Hesmans, B., and O.
Bonaventure, "Poster - Evaluating Android Applications
with Multipath TCP", Mobicom 2015 (Poster),
DOI 10.1145/2789168.2795165, September 2015.
[MPTCP-DEPLOY]
Paasch, C., Biswas, A., and D. Haas, "Making Multipath TCP
robust for stateless webservers", Work in Progress,
draft-paasch-mptcp-syncookies-02, October 2015.
[MPTCP-LOAD]
Paasch, C., Greenway, G., and A. Ford, "Multipath TCP
behind Layer-4 loadbalancers", Work in Progress,
draft-paasch-mptcp-loadbalancer-00, September 2015.
[MPTCP-MAX-SUB]
Boucadair, M. and C. Jacquenet, "Negotiating the Maximum
Number of Multipath TCP (MPTCP) Subflows", Work in
Progress, draft-boucadair-mptcp-max-subflow-02, May 2016.
[MPTCPBIB] Bonaventure, O., "Multipath TCP - Annotated bibliography",
Technical report, April 2015,
<https://github.com/obonaventure/mptcp-bib>.
[MultipathTCP-Linux]
Paasch, C., Barre, S., et al., "Multipath TCP - Linux
Kernel implementation", <http://www.multipath-tcp.org>.
[NSDI11] Wischik, D., Raiciu, C., Greenhalgh, A., and M. Handley,
"Design, implementation and evaluation of congestion
control for multipath TCP", NSDI11: In Proceedings of the
8th USENIX conference on Networked systems design
and implementation, 2011.
[NSDI12] Raiciu, C., Paasch, C., Barre, S., Ford, A., Honda, M.,
Duchene, F., Bonaventure, O., and M. Handley, "How Hard
Can It Be? Designing and Implementing a Deployable
Multipath TCP", NSDI '12: USENIX Symposium of Networked
Systems Design and implementation, April 2012,
<http://inl.info.ucl.ac.be/publications/how-hard-can-it-
be-designing-and-implementing-deployable-multipath-tcp>.
[PaaschPhD]
Paasch, C., "Improving Multipath TCP", Ph.D. Thesis ,
November 2014, <http://inl.info.ucl.ac.be/publications/
improving-multipath-tcp>.
[PAM2016] De Coninck, Q., Baerts, M., Hesmans, B., and O.
Bonaventure, "A First Analysis of Multipath TCP on
Smartphones", 17th International Passive and Active
Measurements Conference (PAM2016) volume 17, March 2016,
<http://inl.info.ucl.ac.be/publications/
first-analysis-multipath-tcp-smartphones>.
[PAMS2014] Arzani, B., Gurney, A., Cheng, S., Guerin, R., and B. Loo,
"Impact of Path Selection and Scheduling Policies on MPTCP
Performance", PAMS2014, DOI 10.1109/WAINA.2014.121, May
2014.
[Presto08] Greenberg, A., Lahiri, P., Maltz, D., Patel, P., and S.
Sengupta, "Towards a next generation data center
architecture: scalability and commoditization", ACM
PRESTO 2008, DOI 10.1145/1397718.1397732, August 2008,
<http://dl.acm.org/citation.cfm?id=1397732>.
[PT14] Pearce, C. and P. Thomas, "Multipath TCP Breaking Today's
Networks with Tomorrow's Protocols", Proc.
Blackhat Briefings, 2014, <http://www.blackhat.com/docs/
us-14/materials/us-14-Pearce-Multipath-TCP-Breaking-
Todays-Networks-With-Tomorrows-Protocols-WP.pdf>.
[PZ15] Pearce, C. and S. Zeadally, "Ancillary Impacts of
Multipath TCP on Current and Future Network Security",
IEEE Internet Computing, vol. 19, no. 5, pp. 58-65,
DOI 10.1109/MIC.2015.70, September 2015.
[RFC1812] Baker, F., Ed., "Requirements for IP Version 4 Routers",
RFC 1812, DOI 10.17487/RFC1812, June 1995,
<http://www.rfc-editor.org/info/rfc1812>.
[RFC1928] Leech, M., Ganis, M., Lee, Y., Kuris, R., Koblas, D., and
L. Jones, "SOCKS Protocol Version 5", RFC 1928,
DOI 10.17487/RFC1928, March 1996,
<http://www.rfc-editor.org/info/rfc1928>.
[RFC2992] Hopps, C., "Analysis of an Equal-Cost Multi-Path
Algorithm", RFC 2992, DOI 10.17487/RFC2992, November 2000,
<http://www.rfc-editor.org/info/rfc2992>.
[RFC4987] Eddy, W., "TCP SYN Flooding Attacks and Common
Mitigations", RFC 4987, DOI 10.17487/RFC4987, August 2007,
<http://www.rfc-editor.org/info/rfc4987>.
[RFC6356] Raiciu, C., Handley, M., and D. Wischik, "Coupled
Congestion Control for Multipath Transport Protocols",
RFC 6356, DOI 10.17487/RFC6356, October 2011,
<http://www.rfc-editor.org/info/rfc6356>.
[RFC7430] Bagnulo, M., Paasch, C., Gont, F., Bonaventure, O., and C.
Raiciu, "Analysis of Residual Threats and Possible Fixes
for Multipath TCP (MPTCP)", RFC 7430,
DOI 10.17487/RFC7430, July 2015,
<http://www.rfc-editor.org/info/rfc7430>.
[RFC7871] Contavalli, C., van der Gaast, W., Lawrence, D., and W.
Kumari, "Client Subnet in DNS Queries", RFC 7871,
DOI 10.17487/RFC7871, May 2016,
<http://www.rfc-editor.org/info/rfc7871>.
[SIGCOMM11]
Raiciu, C., Barre, S., Pluntke, C., Greenhalgh, A.,
Wischik, D., and M. Handley, "Improving datacenter
performance and robustness with multipath TCP", SIGCOMM
'11: Proceedings of the ACM SIGCOMM 2011 conference,
DOI 10.1145/2018436.2018467, August 2011,
<http://doi.acm.org/10.1145/2018436.2018467>.
[SOCKET] Hesmans, B. and O. Bonaventure, "An enhanced socket API
for Multipath TCP", Proceedings of the 2016 Applied
Networking Research Workshop, DOI 10.1145/2959424.2959433,
July 2016, <http://doi.acm.org/10.1145/2959424.2959433>.
[StrangeMbox]
Bonaventure, O., "Multipath TCP through a strange
middlebox", Blog post, January 2015,
<http://blog.multipath-tcp.org/blog/html/2015/01/30/
multipath_tcp_through_a_strange_middlebox.html>.
[TMA2015] Hesmans, B., Tran Viet, H., Sadre, R., and O. Bonaventure,
"A First Look at Real Multipath TCP Traffic", Traffic
Monitoring and Analysis, 2015,
<http://inl.info.ucl.ac.be/publications/
first-look-real-multipath-tcp-traffic>.
[TR-348] Broadband Forum, "TR 348 - Hybrid Access Broadband
Network Architecture", Issue: 1, July 2016,
<https://www.broadband-forum.org/technical/download/
TR-348.pdf>.
[tracebox] Detal, G. and O. Tilmans, "Tracebox: A Middlebox Detection
Tool", 2013, <http://www.tracebox.org>.
Acknowledgements
This work was partially supported by the FP7-Trilogy2 project. We
would like to thank all the implementers and users of the Multipath
TCP implementation in the Linux kernel. This document has benefited
from the comments of John Ronan, Yoshifumi Nishida, Phil Eardley,
Jaehyun Hwang, Mirja Kuehlewind, Benoit Claise, Jari Arkko, Qin Wu,
Spencer Dawkins, and Ben Campbell.
Authors' Addresses
Olivier Bonaventure
UCLouvain
Email: Olivier.Bonaventure@uclouvain.be
Christoph Paasch
Apple, Inc.
Email: cpaasch@apple.com
Gregory Detal
Tessares
Email: Gregory.Detal@tessares.net